Calibrated
feedback.
Not vibes.

Make your website better, on a loop. Real-user feedback from your live pages, paired with rubric-scored AI persona critique on the same pages, wired into your engineering team via MCP and API so comments turn into shipped fixes — not screenshots in a Slack thread.

  1. Capture real-user feedback on your live website with a one-tag widget.
  2. Score the same pages with calibrated AI personas — weighted rubrics, comparable across drafts.
  3. Auto-fix through MCP, API, and webhooks so engineering acts on feedback without a human relay.

Private beta · invite only · enterprise-tier access on admission.

Live demo
  1. Paste a website
  2. Pick a persona
  3. Get feedback

Anonymous · 1 / day · Public URLs only · Request beta access for the full report.

60 built-in personas
10 calibrated lenses
Surfaces in: Widget · Share link · Email
Surfaces out: MCP · API · Webhooks
Three rails, one loop

From a real comment on your site to a shipped fix.

Most feedback tools stop at "we collected the comment." We close the loop: collect on the live page, score the same page with calibrated AI, then hand resolutions to the engineering team through the same surface their AI agents already use.

1 · Real users

Capture feedback on your live website

Drop one script tag and your visitors can click anything on the page, leave a comment, attach a screenshot or a 30-second video reaction. Every submission lands in a single inbox tied to that website — anonymous or by named stakeholder.

2 · Calibrated AI

Score the same pages with expert AI personas

60 built-in personas across 10 lenses judge the same website pages your visitors are reacting to. Weighted rubrics, dimension scores, inline annotations, JSON-schema output — comparable across drafts and over time, not loose prose.

3 · Auto-fix

Wire the loop into your engineering team

MCP server for Claude Code and Cursor. REST API + gfat_* tokens for in-house agents. Webhooks for your build pipeline. Resolutions, before/after evidence, status taxonomy — so a comment becomes a PR without a human in the relay.

The moat

Human feedback × AI feedback, on the same page

We are the only product collecting both halves on the same artifact: an AI persona's calibrated critique of a website page, AND the same week's real-user reactions to that page through the on-site widget. That paired dataset compounds with every install. Models commoditise; pair-data does not.

  • Same site, same week — AI critique + real-user comments side by side
  • Every Free user is a data depositor; the cheapest tier is the most data-productive.
  • Fine-tuned personas trained on the pair lake outperform generic models on rubric tasks
  • Agent-native surface (MCP, API, /llms.txt) makes us the default endpoint for AI-driven feedback
Surfaces

Feedback in. Resolutions out. One inbox.

Real-user feedback and AI critique land in the same per-website inbox. From there, every comment can flow into your engineering team's tools — the same MCP and API surface their AI agents already use.

Feedback in: from people on your website
Live

On-site widget

One script tag on your live website. Click anything to comment, leave a screenshot, record a 30-second video reaction.

Live

Tokenised share links

Hand a stakeholder a private link. They review without an account; their feedback is tagged by name, not anonymous.

Live

AI persona runs

Run the 60 built-in personas (or your own) on the same pages the widget watches. Compare drafts, score over time.

Soon

Email-in alias

Forward a draft to feedback@your-ws.greatfeedback.ai and a persona replies on the same thread.

Resolutions out: into your engineering loop
Live

MCP server

Claude Code, Cursor, any MCP client reads feedback and posts resolutions as native tools — auto-fix without a human relay.

Live

REST API + gfat_* tokens

Workspace-scoped, role-clamped, rate-limited, audit-logged. The same surface in-house engineering agents drive against.

Live

Webhooks (HMAC-signed)

Push every new comment, status change, or resolution to your build pipeline, ticket tracker, or Slack channel.

Live

PDF + markdown export

Per-website feedback bundles for design reviews, compliance, or the weekly stakeholder summary.
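The HMAC-signed webhooks above can be verified receiver-side in a few lines of shell. This is a sketch under stated assumptions: the secret format, payload shape, and hex HMAC-SHA256-over-raw-body scheme shown here are illustrative, not the documented contract.

```shell
# Recompute the HMAC-SHA256 of the raw webhook body and compare it to the
# signature the sender attached. Secret and payload are made-up examples.
SECRET="whsec_example_secret"
PAYLOAD='{"event":"comment.created","comment_id":"c_123","status":"open"}'

# Hex digest of the body under the shared secret (openssl ships everywhere).
EXPECTED=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -r | cut -d' ' -f1)

# Simulate a genuine delivery: the sender computed the same digest.
RECEIVED="$EXPECTED"

if [ "$RECEIVED" = "$EXPECTED" ]; then
  echo "signature ok"
else
  echo "signature mismatch" >&2
fi
```

A real receiver should read the signature from whatever header the webhook settings document and compare with a constant-time check rather than `[ = ]`.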

Pricing · post-beta

The widget is always free. AI persona runs scale by tier.

Widget on every tier, no caps — every install grows the pair-data lake. AI persona runs scale with usage. Top 1% pair-data depositors get Pro free. AI agents pay per run against a prepaid wallet.

We're in private beta. Admitted accounts run on the enterprise tier free while we tune the AI side. Tiers below describe the post-beta shape.

Free

$0 / forever

Get the website-feedback loop running on day one.

  • Widget on unlimited websites
  • Unlimited real-user submissions
  • 10 AI persona runs / mo
  • 6 built-in personas
  • MCP read-only
  • Tokenised share links
  • Powered-by badge on shares
Upgrade for
  • — Custom personas
  • — MCP write tools
  • — Programmatic API tokens
Request access

Pro

Most popular
$29 / mo

For solo operators iterating on a website weekly.

  • Everything in Free, plus
  • 300 AI persona runs / mo
  • 20 personas (5 custom)
  • MCP read + write — Claude Code, Cursor
  • Custom rubrics
  • Two-draft compare mode
  • Email support
Upgrade for
  • — Programmatic API (gfat_*)
  • — Webhooks
  • — Persona marketplace publishing
Request access

Team

$129 / mo · 5 seats

When the loop becomes the engineering team's default.

  • Everything in Pro, plus
  • Unlimited AI persona runs
  • Unlimited personas
  • Programmatic API tokens (gfat_*)
  • Webhooks (HMAC-signed)
  • Persona marketplace publishing
  • Remove powered-by badge
  • Priority support
Upgrade for
  • — SSO / SAML
  • — SOC-2 audit log export
  • — On-prem / VPC
Request access

Top 1%

Earned, not bought
$0 · by usage

Top pair-data depositors get Pro free.

  • Everything in Pro
  • Powered-by badge removed
  • Public leaderboard at /top
  • Score = widget submissions × ratings of AI feedback (90d)
  • Re-evaluated weekly; the bar moves with the cohort
See leaderboard
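The leaderboard score above is simple arithmetic over the trailing 90-day window. This sketch uses invented counts for one hypothetical workspace:

```shell
# Illustrative only: compute the leaderboard score from the two inputs named
# in the Top 1% card. Both counts are made up.
SUBMISSIONS=120   # widget submissions in the trailing 90 days
RATINGS=35        # ratings of AI feedback in the trailing 90 days

SCORE=$((SUBMISSIONS * RATINGS))
echo "score=$SCORE"   # 120 x 35 = 4200
```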

Enterprise

Custom / yr

When compliance asks the questions.

  • Everything in Team, plus
  • SSO / SAML
  • SOC-2 audit log export
  • On-prem / VPC / BYO-cloud
  • Designated support + SLA
  • Custom rubrics + reviews
Talk to us
For AI agents

Agent

$0.05 / run

Prepaid wallet, no subscription. Built for AI orchestrators that run hundreds of persona reviews per session. Agents sign themselves up via POST /api/v1/agents/register — no human in the loop.

See the agent surface →
  • Workspace + gfat_* token by API call
  • Wallet topped up on Stripe checkout
  • Per-run pricing — pay only for what you trigger
  • Same MCP + API surface as Team
  • 402 + topup URL when wallet is empty
For LLMs & engineering agents

The same surface your engineering agents use to auto-fix.

Three pointers any LLM-driven agent can crawl in seconds. MCP for Claude Code and Cursor; gfat_* tokens + REST for in-house build pipelines; webhooks for downstream notifications. Discovery and MCP are live; agent self-signup opens after the private beta — until then, a human operator requests access at /signup and we mint the token by hand.

Self-signup · post-beta

POST /api/v1/agents/register

Currently returns 403 private_beta. The wire shape below is the future-state contract; the agent tier opens once admission is automated. Operators provision agents today via a CLI grant.

curl -X POST https://api.greatfeedback.ai/api/v1/agents/register \
  -H "Content-Type: application/json" \
  -d '{
    "agent_name": "my-agent",
    "contact_email": "owner@example.com"
  }'

# Today  → 403 { "reason": "private_beta", "message": "..." }
# Future → 200 { workspace_id, token: "gfat_...", wallet_topup_url, ... }
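Until self-signup opens, a caller can branch on the `reason` field of the 403 body. A minimal sketch, using a canned response string instead of a live call and `grep` instead of a JSON parser; the exact `message` text is an assumption:

```shell
# The response shape mirrors the "Today" comment above; the message text
# is hypothetical.
RESPONSE='{"reason":"private_beta","message":"Agent self-signup opens after the beta."}'

if printf '%s' "$RESPONSE" | grep -q '"reason":"private_beta"'; then
  echo "gated: have a human operator request access at /signup"
fi
```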