Make your website better, on a loop. Real-user feedback from your live pages, paired with rubric-scored AI persona critique on the same pages, wired into your engineering team via MCP and API so comments turn into shipped fixes — not screenshots in a Slack thread.
Private beta · invite only · enterprise-tier access on admission.
Anonymous · 1 / day · Public URLs only · Request beta access for the full report.
Most feedback tools stop at "we collected the comment." We close the loop: collect on the live page, score the same page with calibrated AI, then hand resolutions to the engineering team through the same surface their AI agents already use.
Drop one script tag and your visitors can click anything on the page, leave a comment, attach a screenshot or a 30-second video reaction. Every submission lands in a single inbox tied to that website — anonymous or by named stakeholder.
60 built-in personas across 10 lenses judge the same website pages your visitors are reacting to. Weighted rubrics, dimension scores, inline annotations, JSON-schema output — comparable across drafts and over time, not loose prose.
MCP server for Claude Code and Cursor. REST API + gfat_* tokens for in-house agents. Webhooks for your build pipeline. Resolutions, before/after evidence, status taxonomy — so a comment becomes a PR without a human in the relay.
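A sketch of what that surface can look like from a build script, assuming Bearer auth with a gfat_* token; the endpoints and field names below are illustrative stand-ins, not the published contract (that lives in /llms-full.txt):

# Illustrative only: paths, auth scheme and fields are assumptions.
# List open feedback for the workspace behind the token.
curl -s "https://api.greatfeedback.ai/api/v1/feedback?status=open" \
  -H "Authorization: Bearer gfat_xxx"

# Post a resolution with before/after evidence.
curl -s -X POST https://api.greatfeedback.ai/api/v1/feedback/fb_123/resolution \
  -H "Authorization: Bearer gfat_xxx" \
  -H "Content-Type: application/json" \
  -d '{ "status": "resolved", "before_url": "https://example.com/before.png", "after_url": "https://example.com/after.png" }'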
We are the only product collecting both halves on the same artifact: an AI persona's calibrated critique of a website page, AND the same week's real-user reactions to that page through the on-site widget. That paired dataset compounds with every install. Models commoditise; pair-data does not.
Real-user feedback and AI critique land in the same per-website inbox. From there, every comment can flow into your engineering team's tools — the same MCP and API surface their AI agents already use.
One script tag on your live website. Click anything to comment, attach a screenshot, record a 30-second video reaction.
Hand a stakeholder a private link. They review without an account; their feedback is tagged by name, not anonymous.
Run the 60 built-in personas (or your own) on the same pages the widget watches. Compare drafts, track scores over time.
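The agent tier drives these same reviews over the API; the endpoint name, persona slugs, and body shape here are guesses for illustration, not the documented schema:

# Hypothetical sketch: score a live page with two personas.
curl -s -X POST https://api.greatfeedback.ai/api/v1/persona-runs \
  -H "Authorization: Bearer gfat_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-website.example/pricing",
    "personas": ["first-time-visitor", "skeptical-cfo"]
  }'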
Forward a draft to feedback@your-ws.greatfeedback.ai and a persona replies on the same thread.
Claude Code, Cursor, any MCP client reads feedback and posts resolutions as native tools — auto-fix without a human relay.
Workspace-scoped, role-clamped, rate-limited, audit-logged. The same surface in-house engineering agents drive against.
Push every new comment, status change, or resolution to your build pipeline, ticket tracker, or Slack channel.
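What lands on your side is an HTTPS POST to the receiver you register; expressed as the equivalent curl, with event and field names as placeholders rather than the documented payload:

# Illustrative only: the shape of a delivery for a status change.
curl -s -X POST https://ci.example.com/hooks/greatfeedback \
  -H "Content-Type: application/json" \
  -d '{
    "event": "feedback.status_changed",
    "website": "your-website.example",
    "feedback_id": "fb_123",
    "old_status": "open",
    "new_status": "resolved"
  }'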
Per-website feedback bundles for design reviews, compliance, or the weekly stakeholder summary.
Widget on every tier, no caps — every install grows the pair-data lake. AI persona runs scale with usage. Top 1% pair-data depositors get Pro free. AI agents pay per run against a prepaid wallet.
Get the website-feedback loop running on day one.
For solo operators iterating on a website weekly.
When the loop becomes the engineering team's default.
Top pair-data depositors get Pro free.
When compliance asks the questions.
Prepaid wallet, no subscription. Built for AI orchestrators that run hundreds of persona reviews per session. Agents sign themselves up via POST /api/v1/agents/register — no human in the loop.
See the agent surface →
Three pointers any LLM-driven agent can crawl in seconds (curl sketch below the list). MCP for Claude Code and Cursor; gfat_* tokens + REST for in-house build pipelines; webhooks for downstream notifications. Discovery and MCP are live; agent self-signup opens after the private beta. Until then, a human operator requests access at /signup and we mint the token by hand.
/llms.txt · llmstxt.org-style summary with auth, base URL, and entry points. Cache-friendly.
/mcp.json · Tool list + auth scheme. Pipe into your MCP client config.
/llms-full.txt · Curl examples, schemas, error codes, status taxonomy. Paste into context.
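An agent's first pass over those pointers is three GETs; the host and the /llms.txt path are assumptions (the llmstxt.org convention), the other two paths are named above:

# Host assumed to be the public site; adjust to the canonical base if it differs.
curl -s https://greatfeedback.ai/llms.txt        # summary: auth, base URL, entry points
curl -s https://greatfeedback.ai/mcp.json        # tool list + auth scheme for MCP clients
curl -s https://greatfeedback.ai/llms-full.txt   # schemas, error codes, status taxonomy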
POST /api/v1/agents/register currently returns 403 private_beta. The wire shape below is the future-state contract; the agent tier opens once admission is automated. Operators provision agents today via a CLI grant.
curl -X POST https://api.greatfeedback.ai/api/v1/agents/register \
-H "Content-Type: application/json" \
-d '{
"agent_name": "my-agent",
"contact_email": "owner@example.com"
}'
# Today → 403 { "reason": "private_beta", "message": "..." }
# Future → 200 { workspace_id, token: "gfat_...", wallet_topup_url, ... }