Welcome to GoQA
GoQA turns plain-language requirements, live URLs, screenshots, and OpenAPI specs into comprehensive BDD test suites — and crawls any web application to surface accessibility, performance, SEO, and security defects in minutes. This documentation covers every feature available in v0.4.0.
Quick start (3 steps)
Get your first test cases in under two minutes — no setup, no CLI, no configuration file required.
1. Open the Test Cases workbench at /test-cases. Paste acceptance criteria, a URL, a screenshot, or an OpenAPI spec into the input panel.
2. Click a test-type tile (e.g. Functional, E2E, Security) to generate BDD scenarios for that category. Results stream back in 2–4 seconds.
3. Mark Pass / Fail on each case, then export as JSON, Excel, or Gherkin — or move the whole set to the Test Repository for ongoing tracking.
Account creation
Visit /signup to create a free account with email and password, or sign in with Google OAuth. Email confirmation is required before accessing the dashboard. Once confirmed, you are placed on the Starter plan automatically.
New users see the Onboarding Tour — a guided 5-step walkthrough covering core features. The tour can be dismissed at any point and restarted later from Settings → Onboarding. See Onboarding tour for details.
Plan overview
Full pricing at /pricing. Summary:
- Starter (free) — 5 AI generations/month, 50 cases/request, JSON export, single user.
- Pro (CA$39/mo or CA$31/mo yearly) — unlimited generations, 200+ cases/request, all export formats, scheduled audits, team workspaces, Jira/Zephyr/Azure DevOps integrations, flakiness detection, and priority support.
- Enterprise — custom SLA, SSO, on-premise option. Contact sales@goqa.ai.
Your current plan, monthly usage, and upgrade CTA are shown at the top of the Dashboard. The usage bar turns amber at 80% of your monthly generation limit and red at 95%.
Test Case Generation — overview
The Test Cases workbench is the primary surface for converting any specification into runnable BDD test scenarios. It supports four input methods, 18 test-type categories, AI-powered refinement, and a real-time coverage score.
Input methods
Switch between tabs in the input panel to choose your source:
- Acceptance criteria — paste plain text or Gherkin (Given / When / Then). The model reads it directly and maps each criterion to scenarios.
- From URL — give a public URL. Playwright visits it server-side, extracts headings, lists, and form fields, then feeds them to the generator.
- Screenshot upload — drop a PNG / JPG / WebP. OpenAI Vision extracts on-screen criteria; you can append free-text notes to guide generation.
- OpenAPI spec — new in v0.4.0. Paste a JSON OpenAPI spec directly or supply a URL to a hosted spec. The generator extracts every endpoint and produces API contract tests covering status codes, schema validation, auth flows, and error cases — one test per endpoint. See OpenAPI / Swagger integration for details.
Test types & output formats
Click any of the 18 category tiles to generate cases for that type. Each tile triggers a separate AI call, so you can run multiple types in parallel. Results stream progressively — the first case appears in 2–4 seconds.
- Categories: Functional, E2E, Smoke, Regression, Accessibility, Visual, Performance, Security, API, Cross-Browser, Mobile, Forms, Network, Negative, Unit, Integration, Localization, Database.
- Output structure: each case includes a title, preconditions, numbered steps, and expected outcome — formatted as a BDD scenario.
- Filter panel — hide test types you do not need using the “Filter test types” button to focus results.
- Case limits: anonymous: 50/request; Starter: 50/request; Pro/Enterprise: 200+ per request.
AI Test Coverage Score
After generating test cases, a circular 0–100 score appears alongside your results. The score reflects how many of the 18 test types are represented in the current generation. Hover the score ring to see a tooltip listing the missing test types, so you know exactly which categories to generate next to reach 100% coverage.
The score is computed locally in the browser — no extra API call — and updates in real time as new categories stream in.
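As a rough mental model, the proportional scoring described above can be sketched in TypeScript (the function name and exact rounding are illustrative, not the shipped implementation):

```typescript
const ALL_TYPES = [
  "Functional", "E2E", "Smoke", "Regression", "Accessibility", "Visual",
  "Performance", "Security", "API", "Cross-Browser", "Mobile", "Forms",
  "Network", "Negative", "Unit", "Integration", "Localization", "Database",
];

// Proportional coverage: categories generated so far / 18, as a 0-100 integer,
// plus the missing categories for the tooltip.
function coverageScore(generated: string[]): { score: number; missing: string[] } {
  const present = new Set(generated);
  const missing = ALL_TYPES.filter((t) => !present.has(t));
  const score = Math.round(((ALL_TYPES.length - missing.length) / ALL_TYPES.length) * 100);
  return { score, missing };
}
```

Because the inputs are just the category names already in the browser, this kind of computation needs no extra API call.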
Refine with AI
Once test cases are generated, click “Refine with AI” on any individual test case to open an inline editing prompt. Type natural-language instructions such as:
- “make this stricter” — tightens preconditions and assertions.
- “add mobile viewport” — prepends a viewport-resize step.
- “add negative test” — generates a companion failure scenario.
- “convert to Playwright code” — emits TypeScript ready for playwright-bdd.
The refined version streams back instantly, replacing the original in-place. Undo is always one click away. Refine is available on saved test cases in the Dashboard as well.
Exporting results
Three export formats, no proprietary runtime required:
- JSON — full row including category, steps, expected outcome, execution status, and notes.
- Excel (.xlsx) — one sheet with columns ready for Jira / TestRail import. Status and notes round-trip on re-import.
- .feature (Cucumber / Gherkin) — one Scenario block per case, ready for playwright-bdd / cucumber-js / any BDD runner.
Mark Pass / Fail / Blocked / Skipped on each case before exporting to include execution status in the output. Failed and Blocked cases expose a notes textarea for defect links or root-cause comments.
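For orientation, a single exported JSON row might look like the following (field names are illustrative of the shape described above, not a guaranteed schema):

```json
{
  "title": "Login rejects invalid password",
  "category": "Security",
  "preconditions": ["User account exists"],
  "steps": [
    "Given the user is on the login page",
    "When they submit a valid email with an incorrect password",
    "Then an error message is shown and no session is created"
  ],
  "expected_outcome": "Authentication fails with a generic error message",
  "status": "failed",
  "notes": "See DEF-1234"
}
```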
AI Website Auditor — run an audit
Open /ai-testing, paste a URL, tick the authorization checkbox confirming you own or are authorized to test the site, and click Run AI Test. The crawler streams page discoveries live; heavier panes (screenshots, security, a11y, performance, SEO, forms, network) render once the crawl completes.
- Live crawl streaming — Playwright visits every linked page and streams discoveries in real time.
- Screenshots — full-page captures of each discovered URL.
- Accessibility (WCAG 2.2) — axe-core checks for missing alt text, low contrast, missing landmarks, keyboard traps, and more.
- Performance — page-load timing, LCP, CLS, TBT estimates.
- SEO — missing meta tags, duplicate titles, missing canonical, robots directives.
- Broken links — every internal and external href is checked; 4xx/5xx responses are flagged.
- Security scan — OWASP-aligned non-destructive probes: security headers, weak cookies, reflected-XSS markers, SQL-injection smells, information-disclosure paths. Each finding includes severity, evidence, recommendation, and a curl -i reproduction command.
- Forms analysis — fields, validation, and submission behavior.
- Network log — captured requests and responses for all pages crawled.
Multi-environment compare mode
In AI Testing, enable Compare Mode to audit two URLs side-by-side — for example, staging.example.com versus example.com. The diff view categorizes every finding as:
- New — issues present on the second URL but not the first (regressions).
- Fixed — issues on the first URL that no longer appear on the second (resolved).
- Unchanged — issues present on both (pre-existing).
Compare Mode runs accessibility, security, and broken-link checks on both targets. Performance and screenshot checks are run independently per URL.
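The three buckets amount to simple set differences over finding identifiers. A minimal sketch (the key format and function name are illustrative):

```typescript
// Classify findings into New / Fixed / Unchanged by comparing two audits.
// Findings are assumed to be keyed by a stable identifier (e.g. rule + page).
function diffFindings(first: string[], second: string[]) {
  const a = new Set(first);
  const b = new Set(second);
  return {
    newIssues: [...b].filter((k) => !a.has(k)), // regressions on the second URL
    fixed: [...a].filter((k) => !b.has(k)),     // resolved on the second URL
    unchanged: [...a].filter((k) => b.has(k)),  // pre-existing on both
  };
}
```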
Scheduled audits
Pro users can schedule audits to run automatically on a cadence. Configure schedules at Settings → Scheduled Audits.
- Frequency options: daily, weekly, or monthly.
- Notifications: results are pushed to your configured Slack and/or Discord webhook on completion.
- History: every scheduled run is saved to the Dashboard under AI Sessions, just like a manual audit.
Schedules are tied to a single URL and inherit the authorization flag set when the schedule was created. You can pause, edit, or delete any schedule from the Settings → Scheduled Audits list.
Shareable report URLs
Any audit saved to your Dashboard can be shared publicly via a unique URL. In Dashboard → AI Sessions, click the Share button on any audit row. This generates a secret token and a public link in the format:
https://goqa.ai/report/[token]
Public viewers see the full audit results — all pages, findings, screenshots, and scores — without needing to log in. Toggle sharing off at any time to revoke access; the token is invalidated immediately.
Embeddable audit badge
After sharing an audit, a Get Badge button appears alongside the share link. The badge is an SVG served at:
https://goqa.ai/badge/[token]
Embed it on your site, README, or documentation:
<img src="https://goqa.ai/badge/TOKEN" alt="QA Audit badge" />
Badge color reflects the most severe finding category:
- Green — no critical or high severity findings.
- Amber — medium-severity findings present.
- Red — high or critical findings (security, broken pages, critical a11y).
The badge is regenerated on every subsequent audit run against the same shared token, so it always reflects the latest state.
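The color mapping above reduces to a small function over the severities of the latest findings. This sketch assumes a four-level severity scale, which is an illustrative simplification:

```typescript
type Severity = "low" | "medium" | "high" | "critical";

// Map the most severe finding in the latest audit to a badge color.
function badgeColor(findings: Severity[]): "green" | "amber" | "red" {
  if (findings.some((s) => s === "high" || s === "critical")) return "red";
  if (findings.includes("medium")) return "amber";
  return "green"; // no medium, high, or critical findings
}
```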
Visual regression baseline
In the audit screenshot view, click “Set as Baseline” on any screenshot to lock it in as the reference state for that URL. On every subsequent audit, screenshots are compared pixel-by-pixel against the baseline. Pages with visual changes display a “Diff from baseline” badge showing the percentage of changed pixels.
Baselines are stored per URL per account. You can update the baseline at any time by clicking “Set as Baseline” on a newer screenshot. Delete a baseline from the screenshot panel to disable comparison for that URL.
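Pixel-by-pixel comparison boils down to counting differing pixels over the total. A simplified sketch, assuming both screenshots share dimensions and each pixel is compared as a single packed value:

```typescript
// Percentage of changed pixels between a baseline and a new screenshot,
// given equal-length arrays of packed RGBA pixel values.
function diffPercent(baseline: number[], current: number[]): number {
  if (baseline.length !== current.length) {
    throw new Error("screenshots must share dimensions");
  }
  let changed = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== current[i]) changed++;
  }
  return (changed / baseline.length) * 100; // the "Diff from baseline" percentage
}
```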
Test Repository — managing suites
/test-repository is a full test-management workspace. Organize cases into named suites (Login Flow, Checkout, API Contract, etc.). Import directly from the Test Cases workbench via “Move to Repository” on any saved test-cases entry.
- Create, rename, and delete suites from the sidebar.
- Drag cases between suites to reorganize.
- Filter by status, category, or keyword across all suites.
- Bulk-select and move or delete cases.
Execution cycles
Create named cycles — Sprint 42, Regression Q2, Smoke — and assign cases to runs. Each cycle tracks pass/fail status independently, so the same test case can be Passing in Sprint 42 but Failing in Regression Q2.
- Pass / Fail / Blocked / Skipped per case per cycle.
- Trend graphs show pass-rate over time across cycles.
- Cycle summary shows total, passed, failed, blocked, skipped counts and a pass-rate %.
Flakiness detection
The Test Repository tracks pass/fail history per test case across all execution cycles. Each case displays a flakiness percentage — the ratio of failures to total runs over its lifetime.
- Green (0–5%) — stable test, passes consistently.
- Amber (6–20%) — occasionally flaky, worth investigating.
- Red (>20%) — unreliable, should be fixed or quarantined before relying on it in CI.
Use the “Flaky Tests” filter to isolate unreliable cases across all suites. Clicking a flaky test shows its run history with timestamps so you can identify patterns (time-of-day failures, environment correlation, etc.).
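Putting the formula and bands together (per the FAQ, failed and blocked runs both count, and at least 3 recorded runs are needed before an indicator appears), a sketch might be:

```typescript
type RunStatus = "passed" | "failed" | "blocked" | "skipped";

// Flakiness % = (failed + blocked runs) / total runs * 100,
// banded green (0-5%), amber (6-20%), red (>20%) as described above.
function flakiness(runs: RunStatus[]): { percent: number; band: string } | null {
  if (runs.length < 3) return null; // no indicator until 3 recorded runs
  const bad = runs.filter((s) => s === "failed" || s === "blocked").length;
  const percent = (bad / runs.length) * 100;
  const band = percent <= 5 ? "green" : percent <= 20 ? "amber" : "red";
  return { percent, band };
}
```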
CI/CD integration
Push run results from your pipeline using the Test Execution Runs API (see API endpoints). Results recorded via the API appear in the repository alongside manually-entered runs and count toward flakiness percentages.
# Record a test run from GitHub Actions
curl -X POST https://goqa.ai/api/test-runs \
  -H "Authorization: Bearer $QALABS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "test_case_id": "tc_abc123",
    "status": "passed",
    "duration_ms": 1240,
    "environment": "staging",
    "ci_run_url": "'"$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"'"
  }'
Team Workspaces — inviting teammates
Pro plan users can create a shared workspace and invite teammates by email. Navigate to Settings → Team and enter the email address of each person you want to invite. They receive an email with a join link; clicking it adds them to your workspace automatically.
Pending invites are listed in the Team settings panel with an option to resend or revoke. Accepted members appear in the Members list where you can adjust their role or remove them.
Roles
Three roles are available per workspace member:
- Admin — full access including billing, member management, workspace deletion, and all feature access.
- Editor — can generate test cases, run audits, manage suites and cycles, and configure integrations. Cannot manage members or billing.
- Viewer — read-only access to all workspace content. Cannot generate, edit, or delete anything.
Shared dashboard & workspace isolation
All workspace members share the same dashboard: test cases, audit results, test repository suites, execution cycles, and scheduled audits. Changes made by one member are visible to all other members in real time.
Workspaces are fully isolated from each other. A user who belongs to multiple workspaces can switch between them using the workspace selector in the account menu. Data from one workspace never appears in another.
Integrations — CI/CD pipelines
Record test execution results directly from your CI pipeline using the /api/test-runs endpoint. Below are sample configurations for the three most common CI platforms.
GitHub Actions
- name: Report test results to GoQA
  run: |
    curl -X POST https://goqa.ai/api/test-runs \
      -H "Authorization: Bearer $QALABS_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"test_case_id":"'"$TEST_CASE_ID"'","status":"'"$STATUS"'","environment":"ci"}'
GitLab CI
report_results:
  script:
    - |
      curl -X POST https://goqa.ai/api/test-runs \
        -H "Authorization: Bearer $QALABS_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{"test_case_id":"'"$TEST_CASE_ID"'","status":"passed"}'
CircleCI
- run:
    name: Report to GoQA
    command: |
      curl -X POST https://goqa.ai/api/test-runs \
        -H "Authorization: Bearer $QALABS_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{"test_case_id":"'"$TEST_CASE_ID"'","status":"passed"}'
OpenAPI / Swagger integration
In the Test Cases workbench, select the OpenAPI Spec tab. You can either paste a raw JSON OpenAPI spec (OpenAPI 3.x or Swagger 2.0 format) or provide a URL to a hosted spec (e.g. https://api.example.com/openapi.json).
The generator reads each path, method, parameter, and response definition and produces API contract tests covering:
- Expected HTTP status codes (200, 201, 400, 401, 403, 404, 422, 500).
- Response schema validation against the OpenAPI schema definition.
- Authentication flows (Bearer, API key, OAuth2 scopes).
- Error cases and boundary inputs (empty body, missing required fields, invalid types).
One test scenario is generated per endpoint, giving you a full contract test suite you can export to Gherkin and run with cucumber-js or drop into a Postman collection.
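To make the shape concrete, here is a minimal sketch of walking an OpenAPI paths object and emitting one contract test per endpoint. The type and output strings are illustrative; real specs carry far more detail (parameters, schemas, security):

```typescript
// Minimal slice of an OpenAPI 3.x document: paths -> methods.
type Spec = { paths: Record<string, Record<string, { summary?: string }>> };

// Emit one contract-test title per endpoint (path + method pair).
function contractTests(spec: Spec): string[] {
  const tests: string[] = [];
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const method of Object.keys(methods)) {
      tests.push(
        `${method.toUpperCase()} ${path} — verify status codes, schema, auth, and error cases`
      );
    }
  }
  return tests;
}
```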
Slack & Discord webhooks
Configure webhook notifications in /settings:
- Slack webhook — paste an Incoming Webhook URL. Use “Test Slack” to verify the connection before saving.
- Discord webhook — paste a Discord channel webhook URL. Same “Test Discord” button.
- Notify on fail — fires when any audit finds security or critical accessibility issues.
- Notify on complete — fires when any audit finishes, regardless of findings.
- Scheduled audit notifications — Scheduled audits always send a completion notification via whichever webhooks are configured.
Jira integration (coming soon)
Two-way Jira sync is on the roadmap. When shipped, it will allow you to push failing test cases and audit findings directly to a Jira project as bug tickets, and pull Jira issue status back into the GoQA dashboard. Pro and Enterprise plans will include this integration.
Zephyr Scale and Azure DevOps integrations are already available on Pro — see the Settings → Integrations panel.
Settings — scheduled audits
Settings → Scheduled Audits lists all active schedules. Each schedule shows the target URL, frequency, last run time, next run time, and status. You can:
- Create a new schedule by clicking + New Schedule.
- Pause a schedule without deleting it (preserves history).
- Edit the URL or frequency of an existing schedule.
- Delete a schedule — this does not remove previously saved audit results.
Scheduled audits respect your webhook settings and will ping Slack or Discord on each completion. Missed runs (e.g. if the service was briefly unavailable) are skipped and not retried — the next scheduled time applies.
Notification webhooks
Webhook configuration lives at /settings. Two notification triggers are available — Notify on fail and Notify on complete — for both Slack and Discord. Settings are stored server-side and synced across all devices on your account.
Webhook payloads include the audit URL, grade, page count, and counts of security, accessibility, and broken-link findings — enough to triage in a Slack message without opening the dashboard.
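A payload with those fields could be assembled along these lines. The exact field names and message format GoQA sends are not specified here, so treat this as an illustrative shape (Slack's Incoming Webhooks accept a top-level text field):

```typescript
// Build an illustrative Slack-style notification body from the documented
// fields: audit URL, grade, page count, and finding counts.
function buildWebhookPayload(audit: {
  url: string; grade: string; pages: number;
  security: number; a11y: number; brokenLinks: number;
}): string {
  return JSON.stringify({
    text:
      `GoQA audit of ${audit.url} finished: grade ${audit.grade}, ` +
      `${audit.pages} pages, ${audit.security} security, ` +
      `${audit.a11y} a11y, ${audit.brokenLinks} broken-link findings.`,
  });
}
```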
Onboarding tour
New users see a guided 5-step tour on first login. Steps in order:
- Welcome — introduction to GoQA and what it does.
- Generate test — runs a sample test case generation from a pre-filled prompt.
- Save to repo — shows how to move generated cases to the Test Repository.
- Export — demonstrates the JSON / Excel / Gherkin export options.
- Done — links to this documentation and the pricing page.
The tour can be dismissed at any step by clicking Skip tour. To restart from the beginning, go to Settings → Onboarding and click Restart tour. Restarting replays all five steps from step 1.
Billing & plan
Manage your subscription at /settings → Billing, or visit /pricing to compare plans. Payments are processed by Stripe; card details are never stored by GoQA.
- Upgrade or downgrade at any time — upgrades apply immediately; downgrades take effect at the next billing cycle.
- Annual billing saves ~20% (CA$31/mo instead of CA$39/mo for Pro).
- Refunds: full refund within 30 days of purchase, no questions asked. Email support@goqa.ai.
Usage dashboard
The top of the Dashboard shows:
- Plan name — Starter, Pro, or Enterprise.
- Generations used / limit this month — e.g. “3 / 5” on Starter.
- Progress bar — turns amber at 80% of your monthly limit, red at 95%.
- Upgrade CTA — shown when you are on Starter or nearing your limit.
The usage counter resets on your monthly billing anniversary. Scheduled audits and manual audits both count toward the generation limit.
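The progress-bar thresholds are simple to state precisely (values from the description above; the function name is illustrative):

```typescript
// Amber at 80% of the monthly generation limit, red at 95%.
function usageBarColor(used: number, limit: number): "green" | "amber" | "red" {
  const pct = (used / limit) * 100;
  if (pct >= 95) return "red";
  if (pct >= 80) return "amber";
  return "green";
}
```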
API Reference — overview & authentication
The GoQA REST API lets you integrate test generation, audit results, and test run reporting into your own tooling and CI pipelines. All endpoints are under https://goqa.ai/api/.
Authentication: pass your API key in the Authorization header as a Bearer token. Obtain your API key from Settings → API Keys (Pro and Enterprise plans). Unauthenticated requests to protected endpoints return 401 Unauthorized.
Authorization: Bearer qlab_live_xxxxxxxxxxxx
All request bodies use JSON (Content-Type: application/json). All responses are JSON. Dates are ISO 8601 strings in UTC.
All endpoints
| Method | Path | Auth | Description |
|---|---|---|---|
| POST | /api/generate | Required | Generate test cases from acceptance criteria, URL, screenshot, or OpenAPI spec. |
| GET | /api/generate | Required | List all previously generated test-case sets for the authenticated user. |
| POST | /api/crawl | Required | Start a new AI website audit crawl. |
| GET | /api/crawl | Required | List all saved audit results for the authenticated user. |
| GET | /api/crawl/[id] | Required | Retrieve a single audit result by ID. |
| POST | /api/test-runs | Required | Record a test execution result from CI (status, duration, environment). |
| GET | /api/test-runs | Required | List execution run history. Filter by test_case_id query param. |
| GET | /api/test-runs/[id] | Required | Retrieve a single execution run record. |
| POST | /api/share | Required | Generate or revoke a public share token for an audit result. |
| GET | /api/share/[token] | None | Retrieve a publicly shared audit result by token. |
| GET | /badge/[token] | None | Serve an SVG audit badge for the given share token. |
| GET | /api/settings | Required | Retrieve the authenticated user's settings and webhook configuration. |
| POST | /api/settings | Required | Update webhook URLs and notification preferences. |
| GET | /api/discover | None | List recent public audit results (community feed). |
| GET | /api/status | None | Platform operational status for all services. |
Requests that exceed your plan's rate limit return 429 Too Many Requests.
FAQ
1. Can I test a site I don't own?
No. The crawler requires you to check an authorization box confirming ownership or written permission. Unauthorized testing of third-party infrastructure may be illegal under the Computer Fraud and Abuse Act (USA), Computer Misuse Act (UK), and similar laws in other jurisdictions.
2. Does the security scan modify the target site?
No. All probes are non-destructive read-only HTTP requests. The scanner does not attempt to exploit found vulnerabilities, write data, or modify state.
3. How do I add my API key to GitHub Actions?
Go to your GitHub repository → Settings → Secrets and variables → Actions. Add a secret named QALABS_API_KEY and paste in your key from GoQA Settings → API Keys. Reference it in your workflow as secrets.QALABS_API_KEY.
4. What happens to my data when I downgrade from Pro to Starter?
Your data is preserved. You will lose access to Pro-only features (team workspace, scheduled audits, advanced export formats), but all previously saved test cases, audit results, and repository data remain accessible. Upgrades restore full access immediately.
5. How does flakiness percentage work?
Flakiness % = (number of fail/blocked runs) ÷ (total runs for that test case) × 100. A test case must have at least 3 recorded runs before a flakiness indicator appears. Runs recorded via the CI API count the same as manually-entered runs.
6. Can I share a report without sharing the full audit data?
Not currently. Sharing a report exposes the complete audit data for that run, including all pages, findings, screenshots, and scores. If you need to share a subset, export to Excel or JSON and share that file instead.
7. Are scheduled audit results sent to Discover (/discover)?
Only if you have opted in to public sharing for that audit. By default, scheduled audit results are private. You can opt individual audits into the community feed via the Share toggle in the Dashboard.
8. How do I restart the onboarding tour?
Go to /settings → Onboarding and click Restart tour. The tour will appear the next time you open the Dashboard.
9. Does the AI Test Coverage Score count test types from the repository?
No — the score is computed from the current generation session only. It shows which of the 18 test types have been generated in the current workbench session, not across your entire repository. Refresh the workbench to reset it.
10. How does the embeddable badge stay current after new audits?
The badge SVG is generated dynamically on each request from the latest audit results associated with the share token. If you run a new audit and the token is still active, the badge reflects the new results within minutes.
Design patterns (POM + Factory)
The GoQA codebase deliberately uses two simple, beginner-friendly design patterns so a developer joining the team can read the project on day one. Both patterns live under tests/ and are documented in tests/README.md.
- Page Object Model (POM) — one class per public route, wrapping Playwright locators and user actions.
- Factory pattern — one builder per domain object (User, TestCase, AuditResult, UserSettings) returning fresh, deterministic data.
We resist clever abstractions on purpose. No DI containers, no deep inheritance trees, no decorator magic. If a new contributor cannot understand a page object or factory in under two minutes, we simplify it.
Page Object Model
Page objects live in tests/e2e/pages/. Each route gets one class extending a tiny BasePage shared parent. The class exposes verbs (loginAs, submit) and locator getters (emailInput, errorBanner). Specs do the asserting; pages never call expect().
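A minimal page object in that style might look like this. The Page interface here is a stub standing in for Playwright's, so the sketch stays self-contained; a real page object would import from @playwright/test, and the selectors are illustrative:

```typescript
// Stub of the Playwright Page surface used by this sketch.
interface Page {
  goto(url: string): void;
  fill(selector: string, value: string): void;
  click(selector: string): void;
}

// Tiny shared parent: every page object wraps a Page instance.
class BasePage {
  constructor(protected page: Page) {}
}

class LoginPage extends BasePage {
  // Locator getters; no assertions live here.
  readonly emailInput = "#email";
  readonly passwordInput = "#password";

  // A verb the specs call; the spec does the asserting afterwards.
  loginAs(email: string, password: string): void {
    this.page.goto("/login");
    this.page.fill(this.emailInput, email);
    this.page.fill(this.passwordInput, password);
    this.page.click("button[type=submit]");
  }
}
```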
Currently shipped page objects:
- HomePage, LoginPage, SignupPage
- TestCasesPage, AiTestingPage, TestRepositoryPage
- DashboardPage, SettingsPage
- DocsPage, PricingPage
See tests/e2e/login.pom.spec.ts for a worked example combining POM + factory patterns in a single test.
Factories
Factories live in tests/factories/. Each exposes build(overrides?) returning a deterministic object — same input, same output, every run. No randomness, no Faker.
- userFactory.build() / buildPro() / buildEnterprise() — User fixtures by plan tier.
- testCaseFactory.build() / buildMany(n) — single or batched BDD test cases.
- auditResultFactory.build() / buildClean() / buildFailing() — crawl results in three canonical states.
- settingsFactory.build() — UserSettings with webhook + notification defaults.
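A factory in that style, sketched with an illustrative TestCase shape:

```typescript
// The field set here is illustrative, not the project's actual schema.
interface TestCase {
  id: string;
  title: string;
  category: string;
  status: "untested" | "passed" | "failed";
}

const testCaseFactory = {
  // Deterministic: same input, same output, every run.
  build(overrides: Partial<TestCase> = {}): TestCase {
    return {
      id: "tc_0001",
      title: "Login succeeds with valid credentials",
      category: "Functional",
      status: "untested",
      ...overrides, // callers override only what the test cares about
    };
  },
  buildMany(n: number): TestCase[] {
    // Deterministic ids: tc_0001, tc_0002, ...; no randomness, no Faker.
    return Array.from({ length: n }, (_, i) =>
      this.build({ id: `tc_${String(i + 1).padStart(4, "0")}` })
    );
  },
};
```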
Prefer factory.build({...overrides}) over constructing objects inline. This keeps test intent legible and absorbs schema changes in a single file.
Test suite layout
tests/
├── e2e/              Playwright end-to-end tests
│   ├── pages/        Page Object Model — one class per route
│   └── *.spec.ts     Specs that use page objects + factories
├── factories/        Factory pattern — deterministic test data
├── integration/      API integration tests (Jest + supertest)
└── unit/             Pure-function unit tests (Jest)
Run all three layers from the project root:
- npm run test:unit — Jest unit suite.
- npm run test:integration — API integration suite.
- npm run test:e2e — Playwright end-to-end suite.
- npm run test:all — runs unit, e2e, and accessibility together.
Automation agent roster
GoQA ships with seven specialized automation agents under .github/agents/. Each agent definition is a Markdown spec a contributor (or another LLM) can read and execute. The roster is tuned for the QA / AI tooling market — every agent has a single, narrow responsibility:
- implementation-agent — ships industry-standard improvements (security, performance, UX).
- testing-agent — verifies the app after each implementation lands.
- fixing-agent — repairs anything the testing-agent flags.
- design-patterns-agent — keeps POM + Factory conventions clean and readable.
- seo-agent — owns technical, on-page, and AI-crawler SEO.
- app-analysis-agent — produces a single GREEN / YELLOW / RED verdict on overall app health.
- feature-suggestion-agent — proposes high-leverage, unshipped features ranked by competitive impact.
Design patterns agent
.github/agents/design-patterns-agent.agent.md enforces the POM + Factory conventions described above. Its job is to:
- Add a page object whenever a new route ships.
- Add a factory whenever a new domain object enters the codebase.
- Refactor any test that constructs objects inline or pokes at locators directly.
- Keep tests/README.md current.
The agent explicitly refuses to introduce DI containers, base-class hierarchies, or decorator magic — readability for new developers is the win condition.
SEO agent
.github/agents/seo-agent.agent.md audits and improves how GoQA is discovered by both classical search engines and AI crawlers. It covers:
- Per-route titles, descriptions, canonicals.
- Structured data: Organization, SoftwareApplication, FAQPage, BreadcrumbList.
- OpenGraph + Twitter card validation.
- Core Web Vitals targets (LCP < 2.5s, CLS < 0.1, INP < 200ms).
- LLM discoverability: explicit allow-rules for GPTBot, ClaudeBot, and PerplexityBot plus an /llms.txt summary.
- Keyword-to-page mapping with no duplicate intent.
Success criterion: every primary keyword has a page on SERP page 1 within three months and the home page Lighthouse SEO score is ≥ 95.
App analysis agent
.github/agents/app-analysis-agent.agent.md runs a holistic, end-to-end health check on demand and returns a single verdict — GREEN, YELLOW, or RED — with evidence. It exercises:
- Every public route (200, chrome, console clean).
- Auth flows: signup, login, logout, OAuth.
- Core features: test case generation, AI website crawl, repository workflow, exports, settings, webhooks.
- Every /api/* endpoint: auth, rate limits, schema.
- Integrations: Supabase (with RLS), OpenAI, Stripe, Slack, Discord.
- Operational signals: /status page, queue health, migrations, advisor warnings, 5xx counts.
Feature suggestion agent
.github/agents/feature-suggestion-agent.agent.md takes a fresh look at the live app and proposes the top ten unshipped features that would make GoQA the obvious leader in its category. Suggestions are ranked across five vectors:
- AI-native moves — self-healing tests, semantic visual diffs, test-impact analysis, conversational authoring, bug-report-to-Playwright reproduction.
- Workflow integrations — GitHub Action, VS Code extension, Linear / Jira two-way sync, CLI, CI plug-ins.
- Coverage upgrades — mobile (Appium) audits, authenticated crawl, API contract drift, localization audit.
- Collaboration & insight — audit history diffs, public trust badges, team workspaces, bounty mode.
- Monetization expansions — usage-based add-ons, white-label, marketplace test packs, paid certification.
Output is structured as one-pagers with effort, impact, why-us, risks, and the smallest shippable v0 — enough for a PM to triage and a senior engineer to scope.
Where to go next
Generate test cases
Turn requirements into BDD scenarios across 18 test types.
Run an AI website audit
Streaming Playwright crawl with a11y, perf, SEO, and security.
Test Repository
Organize suites, run cycles, track pass/fail, view flakiness trends.
Discover
Browse recent public audits from the GoQA community.
Changelog
Full release history — see every feature added in v0.4.0 and prior.
Pricing
Starter free forever · Pro CA$39/mo · Enterprise custom.