Documentation
Complete guide to tracking your brand's visibility in AI-powered search results.
What is Prompt Clarity?
Prompt Clarity is a comprehensive brand visibility monitoring platform that tracks how often your business appears in AI-powered search results across ChatGPT, Claude, Gemini, Perplexity, and Grok. Understand your visibility in the AI ecosystem and optimize your presence.
Bring Your Own API Keys
Prompt Clarity puts you in full control of your AI visibility monitoring:
Connect your own accounts. Pay only for what you use. You control the costs directly.
Track as many prompts as you want. Scale your monitoring to match your needs.
Run on your own infrastructure. Customize models & templates via simple config files.
Why AI Visibility Matters
As AI assistants become primary discovery channels for products and services, appearing in their recommendations directly impacts your business. Prompt Clarity helps you:
- Track Brand Mentions - Monitor how often your brand appears in AI responses
- Benchmark Competitors - Compare your visibility against competitors
- Analyze Sentiment - Understand how AI describes your brand
- Track Sources - See which websites AI cites when mentioning your brand
Improve Your Visibility Score
Beyond tracking, Prompt Clarity analyzes your data to provide actionable recommendations for improving your AI visibility:
- Website Audits - Optimize your site for AI crawlers with structured data, schema markup, and content accessibility improvements
- Content Recommendations - Discover topics and content types that AI models frequently reference in your industry
- Partnership Opportunities - Identify high-authority sites and potential partners that AI models trust and cite regularly
Quick Start
Run your own instance with a single command:
Option 1: Docker (Recommended)
```bash
docker run -d \
  --name prompt-clarity \
  -p 3000:3000 \
  -v prompt-clarity-data:/app/data \
  promptclarity/promptclarity:latest
```
Open http://localhost:3000 and follow the setup wizard.
Custom domain? Add `-e NEXTAUTH_URL=https://clarity.yourdomain.com` to the command.
Option 2: From Source
```bash
# Clone the repository
git clone https://github.com/promptclarity/promptclarity.git
cd promptclarity

# Install dependencies
npm install

# Create environment file
cp .env.example .env.local

# Generate auth secret
openssl rand -base64 32
# Add the output to NEXTAUTH_SECRET in .env.local

# Start development server
npm run dev
```
Open http://localhost:3000 and follow the onboarding wizard.
Onboarding Steps
Business Info
Enter your company name and website URL
Platform Configuration
Select AI platforms and enter your API keys
Topic Generation
Prompt Clarity suggests 5-7 industry-relevant topics (customizable)
Prompt Generation
Generates 3-5 search prompts per topic
Competitor Identification
Identifies 3-10 competitors to track alongside your brand
Installation
Prerequisites
- Node.js 18.17 or later
- npm or yarn
- API key from at least one AI platform
Clone & Install
```bash
git clone https://github.com/promptclarity/promptclarity.git
cd promptclarity
npm install
```
Environment Setup
Copy the example environment file and configure your settings:
```bash
cp .env.example .env.local
```
Generate Auth Secret
```bash
openssl rand -base64 32
```
Add the output to NEXTAUTH_SECRET in your .env.local file.
Start Development Server
```bash
npm run dev
```
Configuration
Required Environment Variables
| Variable | Description | Example |
|---|---|---|
| NEXTAUTH_SECRET | Session encryption key | Output of openssl rand |
| NEXTAUTH_URL | Auth callback URL | http://localhost:3000 |
Optional Environment Variables
| Variable | Description |
|---|---|
| GOOGLE_CLIENT_ID | Google OAuth client ID |
| GOOGLE_CLIENT_SECRET | Google OAuth client secret |
| CRON_SECRET | Secret for cron job authentication |
| RESEND_API_KEY | Email service for team invites |
Note: AI platform API keys are configured through the UI during onboarding, not via environment variables.
Data Collection
Prompt Clarity collects data by executing real queries against multiple AI platforms and analyzing their responses. Here's how the process works:
The Query Pipeline
1. SEND PROMPT TO AI MODEL
├─ Send industry-relevant query to configured AI model
├─ Call each configured model's API; the response model answers the prompt
└─ Returns full response text + cited sources/URLs
2. EXTRACT URLS FROM RESPONSE
├─ Parse any URLs the AI cited in its response
└─ Fetch page metadata (title, description) for each URL
3. ANALYZE RESPONSE FOR MENTIONS
├─ Calls GPT-4o-mini via the mention-analysis prompt to analyze the response
└─ This detects:
├─ Was your brand mentioned?
├─ What position in rankings? (1st, 2nd, 3rd...)
├─ Which competitors were mentioned?
├─ Sentiment (positive/neutral/negative)
└─ Source types (editorial, UGC, reference, etc.)
4. CALCULATE & STORE
├─ Calculate visibility score and share of voice
├─ Save results to database
└─ Update dashboard in real-time

The mention analysis prompt is fully configurable. See mention-analysis.yaml on GitHub.
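For reference, the stored analysis for one execution can be pictured as the TypeScript shape below. This is an illustrative sketch: brandPosition, brandSentimentScore, the 0-1 confidence value, and the source types all appear elsewhere in these docs, while the remaining field names are assumptions.

```typescript
// Illustrative sketch of one execution's mention-analysis result.
// brandPosition, brandSentimentScore, confidence, and the source types
// are documented below; other names are assumptions.
interface MentionAnalysis {
  brandMentioned: boolean;
  brandPosition: number | null;                 // 1 = listed first; null if absent
  brandSentiment: "positive" | "neutral" | "negative";
  brandSentimentScore: number;                  // 0-100
  competitorsMentioned: string[];
  sourceTypes: ("editorial" | "ugc" | "reference" | "competitor")[];
  confidence: number;                           // 0-1, certainty of the analysis
}
```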
What We Extract From Each Response
Brand Mentions
Every instance where your brand or competitors appear, including variations and misspellings.
Position Data
When AI lists multiple options, we track where each brand appears (1st, 2nd, 3rd, etc.).
Source Citations
URLs and domains the AI references as sources for its recommendations.
Context & Sentiment
The surrounding text to understand if mentions are positive, neutral, or negative.
Data Freshness: Prompts can be executed on-demand or scheduled daily. Historical data is preserved to show visibility trends over time.
Visibility Scoring
Understanding how we calculate your visibility metrics and what they mean for your brand.
How Visibility is Calculated
All visibility metrics are calculated from stored execution data—no additional API calls.
1. QUERY EXECUTION DATA
├─ Get all prompt executions from database for date range
└─ Each execution has: business_visibility, competitor_visibilities,
mention_analysis (position, sentiment)
2. CALCULATE VISIBILITY SCORES
├─ Business visibility = (executions with brand mentioned / total) × 100
├─ Competitor visibility = (executions with competitor mentioned / total) × 100
└─ Store per-platform breakdown for model comparison
3. EXTRACT POSITION & SENTIMENT
├─ Parse mention_analysis JSON from each execution
├─ Position = brandPosition field (1st, 2nd, 3rd...)
├─ Sentiment = brandSentiment or brandSentimentScore (0-100)
└─ Average across all executions where brand appeared
4. CALCULATE PERIOD-OVER-PERIOD CHANGES
├─ Query previous period (same duration, before start date)
├─ Compare visibility, sentiment, position
└─ Return change values (e.g., +5.2%, -1 position)

Core Metrics
Visibility Score (0-100%)
The percentage of prompts where your brand was mentioned across all platforms.
Visibility Score = (Prompts with mention / Total prompts) × 100

Example:
- 50 total prompts executed
- Your brand mentioned in 35 responses
- Visibility Score = (35/50) × 100 = 70%
Average Position
When AI lists recommendations, where does your brand typically appear?
Average Position = Sum of positions / Number of appearances

Example:
- Prompt 1: Listed 2nd
- Prompt 2: Listed 1st
- Prompt 3: Listed 3rd
- Prompt 4: Not listed (excluded)
- Average Position = (2+1+3) / 3 = 2.0
Sentiment Score (-100 to +100)
How positively or negatively AI describes your brand when mentioned.
Sentiment analysis examines the context around mentions:
- Positive indicators (+): "highly recommended", "industry leader", "best choice"
- Neutral indicators (0): "one option is", "alternatives include", "you could use"
- Negative indicators (-): "drawbacks include", "users complain about", "limited by"
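Taken together, the visibility and position formulas above reduce to a few lines of code. A minimal sketch, assuming a simplified execution record (the real schema stores these fields as JSON):

```typescript
// Minimal sketch of the documented metric formulas.
interface Execution {
  brandMentioned: boolean;
  brandPosition: number | null; // null when the brand is not listed
}

function visibilityScore(executions: Execution[]): number {
  const mentioned = executions.filter((e) => e.brandMentioned).length;
  return (mentioned / executions.length) * 100;
}

function averagePosition(executions: Execution[]): number | null {
  const positions = executions
    .map((e) => e.brandPosition)
    .filter((p): p is number => p !== null); // "not listed" is excluded
  return positions.length
    ? positions.reduce((a, b) => a + b, 0) / positions.length
    : null;
}

// 35 mentions out of 50 prompts gives 70% visibility; positions 2, 1, 3
// give an average position of 2.0, matching the examples above.
```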
Platform-Specific Scoring
Each AI platform may respond differently to the same prompt. We track metrics per-platform so you can identify where you're strong or weak:
| Platform | Visibility | Avg Position | Sentiment |
|---|---|---|---|
| ChatGPT | 72% | 2.1 | +45 |
| Claude | 68% | 1.8 | +52 |
| Perplexity | 81% | 1.5 | +61 |
Content Recommendations
Prompt Clarity analyzes AI responses to identify content gaps and opportunities that can improve your visibility.
How We Generate Recommendations
Content recommendations are generated entirely from your existing execution data.
1. QUERY EXECUTION DATA
├─ Get all prompts and executions from database
├─ Each execution has: brand_mentions (0 or 1),
│ competitors_mentioned (JSON array), sources (JSON array)
└─ Group executions by prompt_id
2. IDENTIFY CONTENT GAPS
├─ For each prompt, count: brandWins (brand_mentions > 0)
├─ Count: competitorWins (competitors_mentioned.length > 0)
├─ If competitorWins > brandWins → create content gap
├─ yourVisibility = (brandWins / totalExecutions) × 100
├─ competitorVisibility = (competitorWins / totalExecutions) × 100
└─ Track which competitors are winning and sources they cite
3. DETECT SEGMENTS (regex pattern matching)
├─ Industry: /law firm|legal/ → "Law Firms"
│ /healthcare|medical/ → "Healthcare"
│ /e-commerce|retail/ → "E-commerce"
│ /saas|software|cloud/ → "SaaS"
├─ Use-case: /remote team|work from home/ → "Remote Teams"
│ /small business|startup/ → "Small Business"
│ /enterprise|corporation/ → "Enterprise"
└─ Persona: /developer|engineer/ → "Developers"
/marketer|marketing/ → "Marketers"
/sales|salesperson/ → "Sales Teams"
4. EXTRACT TARGET KEYWORDS
├─ Split prompt text into words
├─ Filter stop words: the, a, what, how, best, for, to, in, of, and, or
├─ Keep words longer than 3 characters
├─ Capitalize first letter, return up to 5 keywords
└─ Example: "best CRM tools for startups" → ["Crm", "Tools", "Startups"]
5. SUGGEST CONTENT TYPE (keyword matching)
├─ "compare" or "vs" → Comparison guide
├─ "how to" → Step-by-step tutorial
├─ "best" or "top" → Listicle or roundup
├─ "review" → In-depth review
└─ "price" or "cost" → Pricing guide
6. CALCULATE & PRIORITIZE
├─ Gap score = (competitorVisibility - yourVisibility) × 0.5
│ + competitorsWinning.length × 10 + totalExecutions × 2
├─ Estimated impact = gap score (capped at 100)
└─ Sort all recommendations by impact (highest first)

Types of Content Recommendations
Documentation & Guides
AI models heavily cite official documentation. Comprehensive, well-structured docs increase your chances of being recommended.
Comparison Content
Pages comparing your product to competitors help AI understand your positioning and differentiators.
Use Case Studies
Specific examples of how your product solves problems help AI match you to relevant queries.
Technical Tutorials
Step-by-step guides that get indexed and referenced when users ask "how to" questions.
Example Recommendation
Gap Detected: Your brand is mentioned 0% of the time for "enterprise deployment" queries, while competitors average 45%.
Recommendation: Create an "Enterprise Deployment Guide" covering security, scalability, and compliance topics.
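As a rough sketch of how steps 4 and 6 of the pipeline above might look in code (note: the pipeline's own example keeps the three-letter word "CRM", so a minimum length of 3 characters is assumed here):

```typescript
// Sketch of keyword extraction (step 4) and gap scoring (step 6).
const STOP_WORDS = new Set([
  "the", "a", "what", "how", "best", "for", "to", "in", "of", "and", "or",
]);

function extractKeywords(prompt: string): string[] {
  return prompt
    .split(/\s+/)
    .filter((w) => !STOP_WORDS.has(w.toLowerCase()) && w.length >= 3)
    .map((w) => w[0].toUpperCase() + w.slice(1).toLowerCase())
    .slice(0, 5); // up to 5 keywords
}

function estimatedImpact(
  yourVisibility: number,
  competitorVisibility: number,
  competitorsWinning: number,
  totalExecutions: number,
): number {
  const gapScore =
    (competitorVisibility - yourVisibility) * 0.5 +
    competitorsWinning * 10 +
    totalExecutions * 2;
  return Math.min(gapScore, 100); // estimated impact is capped at 100
}

// extractKeywords("best CRM tools for startups") → ["Crm", "Tools", "Startups"]
```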
PR & Partnership Strategy
AI models rely on external sources to form recommendations. We analyze which sites get cited to identify strategic partnership and PR opportunities.
How Opportunities are Generated
Off-page opportunities are generated from your execution data.
1. EXTRACT SOURCES FROM EXECUTIONS
├─ Query all executions from database
├─ Each execution has sources JSON: [{domain, url, type}, ...]
├─ For each source, track per domain:
│ frequency: how many times this domain appears
│ brandPresent: count of executions where brand was also mentioned
│ competitorPresent: count where competitors were mentioned
│ promptsAppearing: which prompts cited this source
│ urls: list of specific URLs cited
└─ Skip your own domain (business.website)
2. CATEGORIZE SOURCES BY TYPE (domain pattern matching)
├─ UGC patterns: reddit.com, quora.com, facebook.com, twitter.com,
│ x.com, linkedin.com, medium.com, dev.to,
│ stackoverflow.com, *discourse*, *forum*,
│ *community*, *groups*, *discuss*
├─ Reference patterns: wikipedia.org, britannica.com, *.gov,
│ *.edu, *.org, statista.com, *pew*,
│ *research*, *institute*, *foundation*
├─ Competitor: domain contains competitor name or website
│ (matched against competitor list from database)
└─ Editorial: everything else (news, blogs, publications)
3. CALCULATE PRIORITY SCORES
├─ Higher score = higher priority opportunity
├─ Editorial score = frequency × 5
│ + (100 - brandPresenceRate) × 0.3
│ + competitorPresenceRate × 0.5
├─ UGC score = frequency × 4
│ + (100 - brandPresenceRate) × 0.4
│ + competitorPresenceRate × 0.6
├─ Reference score = frequency × 6
│ + (100 - brandPresenceRate) × 0.2
│ + competitorPresenceRate × 0.3
└─ brandPresenceRate = (brandPresent / frequency) × 100
4. GENERATE OUTREACH RECOMMENDATIONS
├─ Editorial actions based on brand presence:
│ 0% + competitor > 50% → "High-priority PR outreach"
│ 0% → "Pitch for inclusion in articles"
│ < competitor → "Increase presence via guest posts"
├─ UGC engagement strategies by platform:
│ Reddit → "Answer questions authentically, avoid self-promotion"
│ LinkedIn → "Share thought leadership, engage discussions"
│ Quora → "Answer thoroughly with data and examples"
│ Stack Overflow → "Provide detailed technical answers"
└─ Pitch types based on domain:
techcrunch/venturebeat/wired → "Tech industry story"
forbes/inc/entrepreneur → "Business/leadership angle"
*review*/pcmag/cnet → "Product review or comparison"
5. PRIORITIZE & SORT
├─ Estimated impact = priority score × multiplier (capped at 100)
│ Editorial: × 1.5, UGC: × 1.2, Reference: × 2.0
└─ Sort all targets by estimated impact (highest first)

Partnership Opportunities
High-Authority Sites
Sites AI trusts and cites frequently:
- Industry publications (TechCrunch, Wired)
- Developer platforms (GitHub, Stack Overflow)
- Review sites (G2, Capterra)
- Community forums (Reddit, Hacker News)
Recommended Actions
How to leverage these insights:
- Guest posts on high-authority blogs
- Contribute to open source projects
- Engage in community discussions
- Seek product reviews and mentions
PR Move Recommendations
Examples of insights surfaced:
- High citation rate for product comparisons
- Referenced in 34% of industry queries
- Open source projects get 2.3x more mentions
Competitor Source Analysis
We also track which sources cite your competitors but not you. These represent immediate opportunities to close visibility gaps through targeted outreach.
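A compact sketch of the step 3 priority scoring above (DomainStats is a simplified stand-in for the per-domain tracking data, and competitorPresenceRate is assumed to be computed the same way as brandPresenceRate):

```typescript
// Sketch of per-domain priority scoring using the documented weights.
type SourceType = "editorial" | "ugc" | "reference";

interface DomainStats {
  frequency: number;         // how many times this domain was cited
  brandPresent: number;      // citations where your brand also appeared
  competitorPresent: number; // citations where a competitor appeared
}

// [frequency weight, brand-absence weight, competitor-presence weight]
const WEIGHTS: Record<SourceType, [number, number, number]> = {
  editorial: [5, 0.3, 0.5],
  ugc: [4, 0.4, 0.6],
  reference: [6, 0.2, 0.3],
};

function priorityScore(stats: DomainStats, type: SourceType): number {
  const [freqW, absenceW, compW] = WEIGHTS[type];
  const brandPresenceRate = (stats.brandPresent / stats.frequency) * 100;
  const competitorPresenceRate = (stats.competitorPresent / stats.frequency) * 100;
  return (
    stats.frequency * freqW +
    (100 - brandPresenceRate) * absenceW + // reward domains where you're absent
    competitorPresenceRate * compW         // reward domains competitors own
  );
}
```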
Website Audits
Optimize your website to be better understood and cited by AI models. Our audits analyze how AI-friendly your site structure and content are.
How the Audit Works
Site audits fetch and parse your actual pages to analyze their AI-readiness.
1. DISCOVER URLS
├─ Try sitemap.xml, sitemap_index.xml, sitemap/sitemap.xml
├─ If no sitemap: crawl homepage and extract internal links
└─ Limit to 50 pages per audit

2. FETCH EACH PAGE (HTTP request)
├─ GET request with PromptClarity User-Agent
├─ 30 second timeout per page
└─ Record load time in milliseconds

3. PARSE HTML (using node-html-parser)
├─ Extract: title, meta description, headings (H1-H6)
├─ Find schema markup: JSON-LD scripts → extract @type
├─ Count: words, lists, tables, images, links
└─ Check: Q&A format, canonical URL, robots meta

4. CALCULATE SCORES (0-100)
├─ Structure: title + meta + H1 count + heading hierarchy
├─ Content: word count + Q&A format + lists + images with alt
├─ Technical: load time + schema count + canonical + robots
└─ Overall: average of structure, content, technical

5. GENERATE ISSUES & RECOMMENDATIONS
├─ Issues: missing title, no H1, thin content, slow load
└─ Recommendations: add FAQ schema, improve headings, add lists
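The docs name node-html-parser for step 3; a minimal sketch of the extraction under that assumption (the selectors and field names are illustrative):

```typescript
import { parse } from "node-html-parser";

// Illustrative sketch of step 3: parse a fetched page into the fields
// the scoring step consumes. Exact selectors and names are assumptions.
function parsePage(html: string) {
  const root = parse(html);
  return {
    title: root.querySelector("title")?.text ?? null,
    metaDescription:
      root.querySelector('meta[name="description"]')?.getAttribute("content") ?? null,
    h1Count: root.querySelectorAll("h1").length,
    hasH2: root.querySelectorAll("h2").length > 0,
    hasH3: root.querySelectorAll("h3").length > 0,
    wordCount: root.text.split(/\s+/).filter(Boolean).length,
    // JSON-LD schema types, e.g. ["Organization", "FAQPage"]
    schemaTypes: root
      .querySelectorAll('script[type="application/ld+json"]')
      .map((s) => {
        try { return JSON.parse(s.text)["@type"] as string; } catch { return null; }
      })
      .filter((t): t is string => t !== null),
  };
}
```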
Score Calculation
Structure Score (100 points):
- Title present (15) + optimal length 30-60 chars (+10)
- Meta description present (15) + optimal length 70-160 chars (+10)
- Exactly one H1 (20) + H2 headings present (+10) + H3 present (+5)
- Proper heading hierarchy H1>H2>H3 (+15)

Content Score (100 points):
- Word count: 1500+ (30), 1000+ (25), 500+ (20), 300+ (10)
- Q&A format detected (+25)
- Has lists (+10) + has tables (+5)
- Internal links: 5+ (15), 2+ (10), 1+ (5)
- Images with alt text (up to 15 based on ratio)

Technical Score (100 points):
- Load time: <1s (35), <2s (30), <3s (20), <5s (10)
- Schema types: 3+ (35), 2+ (30), 1+ (20)
- Has canonical URL (+15)
- Not blocking with noindex (+15)
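As a sketch, the structure portion translates directly into point checks. A hypothetical PageData mirrors the parsed fields above:

```typescript
// Sketch of the Structure Score using the point values listed above.
interface PageData {
  title: string | null;
  metaDescription: string | null;
  h1Count: number;
  hasH2: boolean;
  hasH3: boolean;
  properHierarchy: boolean; // headings follow H1 > H2 > H3 order
}

function structureScore(p: PageData): number {
  let score = 0;
  if (p.title) {
    score += 15;
    if (p.title.length >= 30 && p.title.length <= 60) score += 10; // optimal length
  }
  if (p.metaDescription) {
    score += 15;
    if (p.metaDescription.length >= 70 && p.metaDescription.length <= 160) score += 10;
  }
  if (p.h1Count === 1) score += 20; // exactly one H1
  if (p.hasH2) score += 10;
  if (p.hasH3) score += 5;
  if (p.properHierarchy) score += 15;
  return score; // maximum 100
}
```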
What We Analyze
Structured Data & Schema Markup
AI models parse structured data to understand your content. We check for:
- Organization schema (company info, logo, social profiles)
- Product schema (features, pricing, reviews)
- FAQ schema (common questions and answers)
- Article schema (author, publish date, topics)
- HowTo schema (tutorials and guides)
Content Accessibility
Can AI crawlers easily access and parse your content?
- Robots.txt configuration (not blocking AI crawlers)
- JavaScript rendering requirements
- Content behind authentication walls
- Mobile-friendliness and page speed
- Sitemap completeness and accuracy
Content Quality Signals
Factors that influence whether AI considers your content authoritative:
- Clear headings and content hierarchy
- Comprehensive topic coverage
- Updated timestamps and freshness signals
- Author attribution and expertise indicators
- Internal linking structure
AI Crawler Compatibility
Known AI crawlers to allow in robots.txt:

```
User-agent: GPTBot          # OpenAI
User-agent: ChatGPT-User    # ChatGPT browse mode
User-agent: Google-Extended # Gemini training
User-agent: anthropic-ai    # Claude
User-agent: PerplexityBot   # Perplexity
User-agent: Bytespider      # TikTok/ByteDance AI
```

Recommended robots.txt:

```
Allow: /
Allow: /docs/
Allow: /blog/
Allow: /products/
```
Audit Report Example
| Check | Status | Recommendation |
|---|---|---|
| Organization Schema | ✓ Present | Add social profile links |
| Product Schema | ✗ Missing | Add to all product pages |
| FAQ Schema | ✗ Missing | Create FAQ section with schema |
| GPTBot Access | ⚠ Blocked | Update robots.txt to allow |
| Content Freshness | ✓ Good | Maintain update schedule |
Impact of Optimization
Sites that implement our audit recommendations see an average 23% increase in AI visibility within 30 days, primarily from improved crawlability and structured data implementation.
Dashboard Overview
The dashboard provides a comprehensive view of your brand's AI visibility across multiple dimensions.
Overview
Visibility trends, brand rankings, recent AI responses with highlights, and date range filtering.
Prompts
Manage prompts by topic, view execution history, run bulk or individual executions.
Visibility
Time-series analysis of visibility trends with competitor comparisons.
Competitors
Comparative analysis of all tracked brands with visibility benchmarking.
Sources
See which websites AI models cite when discussing your industry.
Sentiment
Analyze how AI describes your brand - positive, neutral, or negative.
Visibility Tracking
Visibility Score
A binary metric per prompt/platform combination indicating whether your brand was mentioned.
Confidence Score
AI-assigned confidence (0-1 scale) indicating certainty of analysis:
- 0.95 - Very confident
- 0.70 - Moderate confidence
- 0.40 - Low confidence (edge cases)
Competitor Analysis
During onboarding, AI identifies 3-10 competitors based on your industry. You can add, remove, or modify competitors at any time.
Tracked Metrics
- Visibility - How often each brand is mentioned
- Position - Ranking when brands appear in lists
- Sentiment - How positively each brand is portrayed
- Trend - Visibility changes over time
Prompt Management
How Prompts Work
Prompts simulate questions potential customers might ask AI. They're organized by topics and can be executed individually or in bulk.
Prompt Types
- Recommendation - "What's the best [product] for [use case]?"
- Comparison - "Compare [brand] vs [competitor]"
- Research - "What are the top [industry] companies?"
- Problem-solving - "How do I solve [problem]?"
Execution Options
- Execute All - Run all prompts across all platforms
- Execute by Topic - Run prompts for a specific topic
- Execute Single - Run one prompt on selected platforms
Source Attribution
Track which websites and sources AI models cite when discussing your industry. This helps you understand where to focus content and link-building efforts.
Use Cases
- Identify high-authority sites AI frequently cites
- Find content gaps where competitors appear but you don't
- Guide PR and link-building strategy
- Understand AI's information sources
Sentiment Analysis
Understand the context and tone of your brand mentions in AI responses.
- Positive - AI recommends or praises your brand
- Neutral - AI lists your brand among options
- Negative - AI mentions criticism or issues
Growth Dashboard
Track visibility trends over time with comprehensive growth analytics.
- Visibility over time - Line charts showing your visibility score trends
- Competitor comparison - See how your growth compares to competitors
- Platform breakdown - Growth trends per AI platform
- Period comparison - Compare this week/month to previous periods
Content Roadmap
AI-powered content recommendations based on your visibility gaps.
Content Gap Analysis
Identify topics where competitors appear but you don't.
Topic Recommendations
AI-suggested topics based on industry trends.
Keyword Suggestions
Terms that frequently appear in high-visibility responses.
Content Type Suggestions
Documentation, tutorials, comparisons based on what AI cites.
Off-Page Roadmap
PR and distribution opportunities to boost your AI visibility through external sources.
Editorial Opportunities
News sites, publications, and tech blogs that AI models frequently cite.
UGC Opportunities
Reddit, Quora, LinkedIn, and community platforms where you should have presence.
Reference Sites
Wikipedia, .edu domains, and authoritative sources that boost credibility.
Outreach Strategies
AI-generated pitch ideas and outreach templates for each opportunity.
Site Audit Dashboard
Technical SEO analysis to ensure your site is AI-crawler friendly.
Structure Scoring
Overall site health score based on AI-crawlability factors.
Schema Detection
Checks for FAQ, HowTo, Product, Organization schema markup.
Content Analysis
Word count, lists, Q&A sections, and content structure.
Technical Factors
Load time, mobile friendliness, canonical URLs, robots.txt.
Benchmarking
Compare your visibility metrics against industry standards and competitors.
- Industry comparisons - See how you rank in your vertical
- Competitor benchmarks - Track relative position changes
- Platform performance - Compare visibility across AI models
- Historical benchmarks - Track improvement over time
Team Management
Invite team members and manage access to your Prompt Clarity instance.
Email Invitations
Send invite links to team members via email. Requires RESEND_API_KEY configuration.
Role-Based Access
Assign roles to control what team members can view and edit.
Member Management
View active members, pending invites, and remove access as needed.
Model Configuration
Configure AI platforms and manage API keys from the Models dashboard.
- Per-platform API keys - Add or update keys for each AI service
- Enable/disable platforms - Toggle which platforms to include in executions
- Budget tracking - Monitor API usage and costs
- Usage statistics - View execution counts and token usage per platform
Position Tracking
Track where your brand appears when AI lists multiple recommendations.
- 1st position - AI recommends you first (highest impact)
- 2nd-3rd position - Still prominent placement
- 4th+ position - Mentioned but lower visibility
- Not mentioned - Opportunity for improvement
Historical Trends
All prompt executions are stored with timestamps to build historical visibility data.
- Compare visibility week-over-week or month-over-month
- Identify trending topics where your visibility is changing
- Track the impact of content and PR efforts over time
- Export historical data for external analysis
Scheduled Execution
Automate prompt execution on a schedule to build continuous visibility data.
Frequency options range from running all prompts every day, to balanced intermediate schedules, to infrequent runs that keep API usage low.
Executions run with up to 5 parallel requests to balance speed and API rate limits. Real-time SSE updates show progress in the UI.
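A minimal sketch of that concurrency cap, assuming executions are queued as async tasks (the helper name is illustrative, not the actual code):

```typescript
// Run tasks with at most `limit` in flight at once (default 5,
// matching the documented cap).
async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit = 5,
): Promise<T[]> {
  const results: T[] = [];
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // Start up to `limit` workers that drain the shared queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```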
Budget Tracking
Monitor API costs and usage across all configured platforms.
- Cost estimation - Projected costs based on prompt count and frequency
- Usage statistics - Token counts per platform per execution
- Platform breakdown - See which AI services cost the most
- Disable platforms - Turn off expensive platforms to reduce costs
Cloud Platforms
Deploy Prompt Clarity to your preferred cloud provider.
Render
Simple deployment with automatic builds from GitHub.
Fly.io
Edge deployment with global distribution.
Vercel
Optimized for Next.js with built-in cron jobs.
Supported AI Models
Prompt Clarity supports 5 major AI platforms out of the box. All platforms are fully configurable via YAML configuration files.
| Platform | Provider | Default Model |
|---|---|---|
| ChatGPT | OpenAI | gpt-5.1 |
| Claude | Anthropic | claude-sonnet-4-20250514 |
| Gemini | Google | gemini-2.0-flash |
| Perplexity | Perplexity AI | sonar-pro |
| Grok | xAI | grok-2-1212 |

Platform Configuration
Platforms are configured via config/platforms/platforms.yaml. You can customize models, add new platforms, or disable existing ones:
```yaml
# config/platforms/platforms.yaml
platforms:
  chatgpt:
    name: ChatGPT
    provider: openai
    model: gpt-5.1
  claude:
    name: Anthropic Claude
    provider: anthropic
    model: claude-sonnet-4-20250514
  gemini:
    name: Google Gemini
    provider: google
    model: gemini-2.0-flash
  perplexity:
    name: Perplexity
    provider: perplexity
    model: sonar-pro
  grok:
    name: Grok
    provider: xai
    model: grok-2-1212
```

Customization Options
- Change model versions by updating the model field
- Disable a platform by removing or commenting out its section
- Add new platforms by following the same YAML structure
- Restart the server after making configuration changes
API Key Setup
Get an API key from each AI platform you want to enable for tracking.
Prompt Templates
Customize how Prompt Clarity generates topics, prompts, competitors, and analyzes responses via YAML configuration files in config/prompts/.
| File | Purpose | Variables |
|---|---|---|
| onboarding-topics.yaml | Generate business topics during onboarding | businessName, website |
| onboarding-prompts.yaml | Generate search prompts for tracking | businessName, website, topics |
| onboarding-competitors.yaml | Identify competitors automatically | businessName, website, topics |
| mention-analysis.yaml | Analyze AI responses for brand mentions | brandName, competitors, response |
Template Format
```yaml
# config/prompts/onboarding-topics.yaml
systemPrompt: Optional system prompt for the AI
userPromptTemplate: |
  For the business {{businessName}} ({{website}}),
  generate 5-7 relevant topic categories...
temperature: 0.7
maxOutputTokens: 15000
```

Template Variables
Use {{variableName}} syntax in templates. Variables are automatically replaced at runtime with actual values from your business configuration.
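A minimal sketch of that substitution (the function name and the handling of unknown variables are assumptions):

```typescript
// Replace {{variableName}} placeholders with runtime values;
// unknown variables are left untouched in this sketch.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match,
  );
}

// renderTemplate("For the business {{businessName}} ({{website}})...",
//   { businessName: "Acme", website: "acme.com" })
// → "For the business Acme (acme.com)..."
```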
Daily Tracking
Prompt Clarity can automatically execute prompts on a schedule to build historical visibility data.
How It Works
Daily at 2 AM UTC
↓
Trigger /api/cron/daily-executions
↓
For each business:
├─ Get all prompts
├─ Get all configured platforms
├─ Execute prompts (max 5 concurrent)
└─ Store results with today's date
↓
Dashboard updates with new data points

Cron Configuration
Option 1: Vercel (Built-in)
Configured automatically in vercel.json:
```json
{
  "crons": [{
    "path": "/api/cron/daily-executions",
    "schedule": "0 2 * * *"
  }]
}
```

Option 2: External Cron Service
Use services like cron-job.org or EasyCron:
```
POST https://your-domain.com/api/cron/daily-executions
Authorization: Bearer YOUR_CRON_SECRET
```
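Equivalently, a scheduler that can run Node could trigger it with a sketch like this (the URL and environment variable name mirror the request above):

```typescript
// Hypothetical trigger script for an external scheduler.
const res = await fetch("https://your-domain.com/api/cron/daily-executions", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.CRON_SECRET}` },
});
if (!res.ok) throw new Error(`Cron trigger failed: ${res.status}`);
```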
API Endpoints
/api/prompts/executions - Execute all prompts for a business

```
// Request
{ "businessId": 1 }

// Response
{ "success": true, "message": "Started execution for all prompts" }
```

/api/dashboard/overview - Fetch visibility overview data
```
// Query params
?businessId=1&startDate=2024-12-01&endDate=2024-12-08

// Response
{
  "visibilityScore": 72,
  "mentionCount": 45,
  "platforms": { ... }
}
```

/api/prompts/executions/stream - Server-Sent Events for real-time execution updates
Real-Time Updates
The application uses Server-Sent Events (SSE) to provide live execution progress updates.
```javascript
// Frontend usage
const eventSource = new EventSource('/api/prompts/executions/stream');
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  // Updates: {status: 'started'|'completed', promptId, ...}
  updateUI(data);
};
```

Self-Hosting
Prompt Clarity can be self-hosted on any platform that supports Node.js.
Build for Production
```bash
npm run build
npm start
```
Database
Uses SQLite by default - no external database required. Data is stored in data/store.db.
Docker Deployment
Quick Start
```bash
docker run -d \
  --name prompt-clarity \
  -p 3000:3000 \
  -v prompt-clarity-data:/app/data \
  promptclarity/promptclarity:latest
```
Docker Configuration
| Variable | Description | Default |
|---|---|---|
| NEXTAUTH_URL | Your app's public URL | http://localhost:3000 |
| NEXTAUTH_SECRET | Auth encryption secret | Auto-generated |
Updating
```bash
docker pull promptclarity/promptclarity:latest
docker stop prompt-clarity && docker rm prompt-clarity
# Re-run the docker run command above
```
Build Image Locally
```bash
docker build -t prompt-clarity .
docker run -d -p 3000:3000 -v prompt-clarity-data:/app/data prompt-clarity
```
Included Services
- Multi-stage build (Node 20 Alpine)
- Automatic database migrations on startup
- SQLite database persistence via volume mount
Vercel Deployment
Connect Repository
Link your GitHub repository to Vercel
Configure Environment
Set environment variables in the Vercel dashboard
Deploy
Automatic deployments on push to main branch
Deploy to Render
Create Web Service
Create a new Web Service on Render
Connect Repository
Connect your GitHub repository
Configure Build
Build Command: `npm install && npm run build`
Start Command: `npm start`
Add Environment Variables
Add variables in Render dashboard and deploy
Deploy to Fly.io
```bash
# Install flyctl
curl -L https://fly.io/install.sh | sh

# Launch (creates fly.toml)
fly launch

# Set secrets
fly secrets set NEXTAUTH_SECRET="your-secret" NEXTAUTH_URL="https://your-app.fly.dev"

# Deploy
fly deploy
```
Database Schema
Prompt Clarity uses SQLite with the following core tables:
businesses
Company information (id, business_name, website)
business_platforms
AI platform configurations with API keys
topics
Industry categories for organizing prompts
prompts
Search queries to test against AI platforms
competitors
Tracked competitor brands
prompt_executions
Stores every execution result including mentions, sentiment, and visibility
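Pulling together the column names mentioned throughout this guide, one prompt_executions row can be pictured roughly as follows (an illustrative sketch; the exact schema may differ):

```typescript
// Illustrative shape of a prompt_executions row; JSON columns are
// stored as strings in SQLite. Names come from the pipeline sections.
interface PromptExecutionRow {
  id: number;
  prompt_id: number;
  brand_mentions: number;          // 0 or 1: was the brand mentioned
  competitors_mentioned: string;   // JSON array of competitor names
  business_visibility: number;     // visibility flag used for scoring
  competitor_visibilities: string; // JSON: per-competitor visibility
  mention_analysis: string;        // JSON: position, sentiment, confidence
  sources: string;                 // JSON: [{ domain, url, type }, ...]
  created_at: string;              // timestamp powering historical trends
}
```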
Troubleshooting
Port 3000 already in use
```bash
PORT=3001 npm run dev
```

Authentication errors
Ensure NEXTAUTH_SECRET is set in your .env.local file
Database locked
Only one app instance should access the SQLite database. Restart the server if needed.
No dashboard data
Wait for prompt execution to complete. Check browser console for errors.
API key errors
Verify API key validity and remaining quota in the provider's dashboard.
Team invites not sending
Configure RESEND_API_KEY environment variable for email functionality.
Need Help?
Check out the GitHub repository for issues, discussions, and contributions.