Integrations
Where greatfeedback.ai plugs into the rest of your stack. Each integration is a deposit channel for the same pair-data lake — pick the ones that match where you already work.
Widget on any HTML page
Drop one script tag into any page you want feedback on. The bundle ships from cdn.greatfeedback.ai; the loader reads its data-* attributes, picks up the latest content-hashed bundle from a manifest, and injects it. No re-deploy on your side when the bundle updates.
<script
async
src="https://cdn.greatfeedback.ai/loader.js"
data-site-token="ws_XXXX"
data-api-base="https://api.greatfeedback.ai"
></script>
Cmd/Ctrl + Shift + A toggles the toolbar. The visitor can pin an element, annotate text, draw an area, leave a 1-5 rating, record a voice memo or short video reaction, and attach a screenshot. Submissions land in your /feedback inbox.
Operational defaults you should know:
- Allowed origins are enforced server-side per site. An empty allowlist is treated as wildcard for local dev only.
- PII redaction (redact_pii) is on by default. Comments and captured text are scrubbed for emails, phone numbers, SSNs, and card-like numbers before storage.
- Rate limits for abuse control: 30/min/IP and 600/min/site, plus an AWS WAF rule at the edge. The caps exist for COGS protection, not tier separation; the widget itself stays free on every plan.
Full reference: docs/widget.md (architecture, loader/manifest layout, deprecation policy).
MCP for AI agents
Configure once, then drive the inbox from Claude Code, Cursor, OpenDevin, or any MCP client:
claude mcp add greatfeedback https://api.greatfeedback.ai/mcp/<mcp_token>
Tools the agent will see (full schema in /docs/agent):
list_annotations, read_annotation, acknowledge, reply, resolve, patch_annotation, get_resolution_upload_url, attach_resolution, list_resolutions, run_personas
GET /mcp/<mcp_token> opens an SSE channel that emits annotation.created events as new submissions land. Pair list_annotations(auto_fix=true) with attach_resolution for the auto-fix loop — the submitter is notified automatically when a resolution row lands.
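A minimal consumer sketch of that event channel, assuming standard SSE framing (event:/data: lines separated by blank lines). The payload fields shown are illustrative, not the documented schema; see /docs/agent for the real shape.

```python
import json

def parse_sse(stream_text):
    """Parse a raw SSE buffer into (event, payload) pairs.

    Standard SSE framing is assumed; real clients should read
    incrementally from the socket rather than from one string.
    """
    events = []
    for frame in stream_text.split("\n\n"):
        event, data_lines = None, []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if event and data_lines:
            events.append((event, json.loads("\n".join(data_lines))))
    return events

raw = 'event: annotation.created\ndata: {"id": "ann_123", "auto_fix": true}\n\n'
for event, payload in parse_sse(raw):
    if event == "annotation.created":
        print(payload["id"])  # hand the annotation off to the agent loop here
```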
REST API and gfat_* tokens
The /api/v1/* surface is the stable programmatic API. Mint a token in dashboard → workspace settings → API tokens (or POST /api/workspaces/{ws}/api-tokens). The plaintext is shown once at creation; only a SHA-256 hash + last-4 persists.
Authorization: Bearer gfat_<token>
Tokens carry their workspace internally — no X-GF-Workspace header needed. Tokens are clamped at MEMBER or GUEST at issuance; cost-bearing routes (run-personas) require MEMBER.
Base URL: https://api.greatfeedback.ai. Rate limits: 600/min/token and 30/min/IP. Idempotency on POST writes via Idempotency-Key.
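Two client-side details worth sketching: what the server keeps of your token (SHA-256 hash plus last-4, per the creation flow above), and the headers an idempotent POST should carry. The helper names below are hypothetical, and a UUID is just one reasonable choice of Idempotency-Key; any string unique per logical operation works.

```python
import hashlib
import uuid

def token_fingerprint(plaintext):
    """Mirror the documented storage scheme: only a SHA-256 hash
    and the last four characters of the token persist server-side."""
    return hashlib.sha256(plaintext.encode()).hexdigest(), plaintext[-4:]

def post_headers(token):
    """Headers for an idempotent POST write against /api/v1/*."""
    return {
        "Authorization": f"Bearer {token}",
        "Idempotency-Key": str(uuid.uuid4()),
        "Content-Type": "application/json",
    }

digest, last4 = token_fingerprint("gfat_example123")
print(last4)  # the only plaintext fragment you can recover later
```

If a POST is retried with the same Idempotency-Key, the server can deduplicate it, so persist the key alongside any retry queue you build.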
Full reference, status taxonomy, error codes, pagination: /docs/api.
Stakeholder share links
Send a critique to a designer, lawyer, or VC without provisioning a seat. Generate from any review:
POST /api/reviews/{id}/share
Authorization: Bearer <token>
→ { "share_token": "Vb2fJ7w4..." }
The token is the credential — anyone with the link can read at https://greatfeedback.ai/reviews/shared/<token>. The public view shows the review, the persona feedback, and the rubric scores; it hides owner identity, rating controls, and project metadata.
Stakeholder write access — named reviewers leaving widget feedback without account creation — uses a separate stk_* token roster managed under POST /api/widget/sites/{id}/stakeholders. The widget loader resolves them from the embed attribute or a ?gf_st= URL parameter, so an emailed share link can pre-identify the reviewer.
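A sketch of the resolution the loader performs, assuming the embed attribute takes precedence over the ?gf_st= URL parameter (that ordering is an assumption, not documented behavior; resolve_stakeholder is a hypothetical name):

```python
from urllib.parse import urlparse, parse_qs

def resolve_stakeholder(page_url, embed_attr=None):
    """Pick the stk_* token identifying the reviewer: an explicit
    embed attribute wins, else fall back to ?gf_st= on the page URL."""
    if embed_attr:
        return embed_attr
    qs = parse_qs(urlparse(page_url).query)
    return qs.get("gf_st", [None])[0]

print(resolve_stakeholder("https://app.example.com/page?gf_st=stk_abc"))  # stk_abc
```

This is what lets an emailed link like https://app.example.com/page?gf_st=stk_abc attribute widget feedback to a named reviewer with no login.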
Persona marketplace
Domain experts publish personas (FDA copy, fintech compliance, B2B PMM, DTC retention) under /marketplace. Workspaces clone a published persona into their own catalog with POST /api/personas/clone; the row records clonedFrom so the royalty rail can attribute usage when it lands.
Each workspace ships with 60 default personas across 10 orthogonal lenses (consumer, domain expert, enterprise buyer, investor, communicator, designer, engineer, skeptic, compliance, educator). Defaults are editable and resettable; customs are workspace-scoped on Solo and Pro, shareable org-wide on Team+.
The wedge: persona authors get paid by run, top authors get free Pro. Quality compounds because the people who calibrate the rubrics are the people who collect the royalty.
Webhooks
Coming. review.completed, review.failed, feedback.rated_negative, and widget.comment.received are on the next-month roadmap with HMAC-signed payloads. Until then, the same events are observable via:
- The MCP SSE channel (annotation.created for widget submissions).
- Polling /api/v1/sites/{id}/annotations?since=<epoch_ms> from a token-authenticated CI script.
We will not break the polling contract when the webhooks ship — both stay supported.
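A sketch of that polling contract, assuming annotations carry an epoch-ms created_at field (the field name is an assumption; check /docs/api for the real schema, plus pagination). poll_url and poll_once are hypothetical helpers, with the HTTP call injected so the cursor logic stays testable:

```python
API = "https://api.greatfeedback.ai"

def poll_url(site_id, since_ms):
    # since=<epoch_ms> as documented above.
    return f"{API}/api/v1/sites/{site_id}/annotations?since={since_ms}"

def poll_once(fetch, site_id, since_ms):
    """One poll step: fetch annotations newer than since_ms and advance
    the cursor just past the newest one seen, so nothing is re-delivered.
    `fetch(url)` is your HTTP layer (requests, CI runner, etc.)."""
    anns = fetch(poll_url(site_id, since_ms))
    for ann in anns:
        since_ms = max(since_ms, ann["created_at"] + 1)
    return since_ms, anns
```

Run poll_once on a schedule (e.g. every 30s in CI), persisting the returned cursor between runs; when the HMAC-signed webhooks ship, the same handler body can consume those payloads instead.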