Docs / Getting started

Welcome to GoQA

GoQA turns plain-language requirements, live URLs, screenshots, and OpenAPI specs into comprehensive BDD test suites — and crawls any web application to surface accessibility, performance, SEO, and security defects in minutes. This documentation covers every feature available in v0.4.0.

Quick start (3 steps)

Get your first test cases in under two minutes — no setup, no CLI, no configuration file required.

  1. Open the Test Cases workbench at /test-cases. Paste acceptance criteria, a URL, a screenshot, or an OpenAPI spec into the input panel.
  2. Click a test-type tile (e.g. Functional, E2E, Security) to generate BDD scenarios for that category. Results stream back in 2–4 seconds.
  3. Mark Pass / Fail on each case, then export as JSON, Excel, or Gherkin — or move the whole set to the Test Repository for ongoing tracking.
Tip: sign up free to persist your test repository across sessions. Anonymous sessions are capped at 50 cases per request.

Account creation

Visit /signup to create a free account with email and password, or sign in with Google OAuth. Email confirmation is required before accessing the dashboard. Once confirmed, you are placed on the Starter plan automatically.

New users see the Onboarding Tour — a guided 5-step walkthrough covering core features. The tour can be dismissed at any point and restarted later from Settings → Onboarding. See Onboarding tour for details.

Plan overview

Full pricing at /pricing. Summary:

Your current plan, monthly usage, and upgrade CTA are shown at the top of the Dashboard. The usage bar turns amber at 80% of your monthly generation limit and red at 95%.

Test Case Generation — overview

The Test Cases workbench is the primary surface for converting any specification into runnable BDD test scenarios. It supports four input methods, 18 test-type categories, AI-powered refinement, and a real-time coverage score.

Input methods

Switch between tabs in the input panel to choose your source:

Test types & output formats

Click any of the 18 category tiles to generate cases for that type. Each tile triggers a separate AI call, so you can run multiple types in parallel. Results stream progressively — the first case appears in 2–4 seconds.

AI Test Coverage Score

After generating test cases, a circular 0–100 score appears alongside your results. The score reflects how many of the 18 test types are represented in the current generation. Hover the score ring to see a tooltip listing the missing test types, so you know exactly which categories to generate next to reach 100% coverage.

The score is computed locally in the browser — no extra API call — and updates in real time as new categories stream in.
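As a sketch of the idea: with 18 categories, the score is simply the fraction of categories represented so far, scaled to 0–100. The code below is illustrative only — the function and category names are assumptions, not GoQA's actual implementation.

```typescript
// Illustrative sketch of a locally computed coverage score.
const ALL_TEST_TYPES = 18;

function coverageScore(generatedTypes: Set<string>): number {
  // Score = fraction of the 18 categories present, scaled to 0–100.
  return Math.round((generatedTypes.size / ALL_TEST_TYPES) * 100);
}

// Example: 9 of 18 categories generated so far.
const generated = new Set([
  "Functional", "E2E", "Security", "Performance", "Accessibility",
  "Usability", "Regression", "Smoke", "API",
]);
console.log(coverageScore(generated)); // 50
```

Because the computation is a pure function of the categories already streamed in, it can re-run on every incoming result with no extra API call, which matches the real-time behavior described above.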

Refine with AI

Once test cases are generated, click “Refine with AI” on any individual test case to open an inline editing prompt. Type natural-language instructions such as:

The refined version streams back instantly, replacing the original in-place. Undo is always one click away. Refine is available on saved test cases in the Dashboard as well.

Exporting results

Three export formats, no proprietary runtime required:

Mark Pass / Fail / Blocked / Skipped on each case before exporting to include execution status in the output. Failed and Blocked cases expose a notes textarea for defect links or root-cause comments.

AI Website Auditor — run an audit

Open /ai-testing, paste a URL, tick the authorization checkbox confirming you own or are authorized to test the site, and click Run AI Test. The crawler streams page discoveries live; heavier panes (screenshots, security, a11y, performance, SEO, forms, network) render once the crawl completes.

Note: you must own the site or have written authorization to test it. Unauthorized testing can be illegal in your jurisdiction.

Multi-environment compare mode

In AI Testing, enable Compare Mode to audit two URLs side-by-side — for example, staging.example.com versus example.com. The diff view categorizes every finding as:

Compare Mode runs accessibility, security, and broken-link checks on both targets. Performance and screenshot checks are run independently per URL.

Scheduled audits

Pro users can schedule audits to run automatically on a cadence. Configure schedules at Settings → Scheduled Audits.

Schedules are tied to a single URL and inherit the authorization flag set when the schedule was created. You can pause, edit, or delete any schedule from the Settings → Scheduled Audits list.

Shareable report URLs

Any audit saved to your Dashboard can be shared publicly via a unique URL. In Dashboard → AI Sessions, click the Share button on any audit row. This generates a secret token and a public link in the format:

https://goqa.ai/report/[token]

Public viewers see the full audit results — all pages, findings, screenshots, and scores — without needing to log in. Toggle sharing off at any time to revoke access; the token is invalidated immediately.

Privacy: only audits you explicitly share are public. All other dashboard data remains private.

Embeddable audit badge

After sharing an audit, a Get Badge button appears alongside the share link. The badge is an SVG served at:

https://goqa.ai/badge/[token]

Embed it on your site, README, or documentation:

<img src="https://goqa.ai/badge/TOKEN" alt="QA Audit badge" />

Badge color reflects the most severe finding category:

The badge is regenerated on every subsequent audit run against the same shared token, so it always reflects the latest state.

Visual regression baseline

In the audit screenshot view, click “Set as Baseline” on any screenshot to lock it in as the reference state for that URL. On every subsequent audit, screenshots are compared pixel-by-pixel against the baseline. Pages with visual changes display a “Diff from baseline” badge showing the percentage of changed pixels.

Baselines are stored per URL per account. You can update the baseline at any time by clicking “Set as Baseline” on a newer screenshot. Delete a baseline from the screenshot panel to disable comparison for that URL.
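Conceptually, the pixel-by-pixel comparison reduces to counting pixels whose RGBA channels differ between two same-size buffers. The sketch below is illustrative only — GoQA's actual diffing (tolerance thresholds, anti-aliasing handling) may differ.

```typescript
// Illustrative sketch: percentage of changed pixels between two same-size
// RGBA screenshot buffers, as in a "Diff from baseline" badge.
function diffPercent(
  baseline: Uint8ClampedArray,
  current: Uint8ClampedArray,
): number {
  if (baseline.length !== current.length) {
    throw new Error("Screenshots must share dimensions to be compared");
  }
  const pixels = baseline.length / 4; // 4 channels (RGBA) per pixel
  let changed = 0;
  for (let p = 0; p < pixels; p++) {
    const i = p * 4;
    // A pixel counts as changed if any of its four channels differs.
    if (
      baseline[i] !== current[i] ||
      baseline[i + 1] !== current[i + 1] ||
      baseline[i + 2] !== current[i + 2] ||
      baseline[i + 3] !== current[i + 3]
    ) {
      changed++;
    }
  }
  return (changed / pixels) * 100;
}
```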

Test Repository — managing suites

/test-repository is a full test-management workspace. Organize cases into named suites (Login Flow, Checkout, API Contract, etc.). Import directly from the Test Cases workbench via “Move to Repository” on any saved test-cases entry.

Execution cycles

Create named cycles — Sprint 42, Regression Q2, Smoke — and assign cases to runs. Each cycle tracks pass/fail status independently, so the same test case can be Passing in Sprint 42 but Failing in Regression Q2.

Flakiness detection

The Test Repository tracks pass/fail history per test case across all execution cycles. Each case displays a flakiness percentage — the ratio of failed and blocked runs to total runs over its lifetime.

Use the “Flaky Tests” filter to isolate unreliable cases across all suites. Clicking a flaky test shows its run history with timestamps so you can identify patterns (time-of-day failures, environment correlation, etc.).
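The metric itself is simple (FAQ 5 states it precisely: failed and blocked runs divided by total runs, shown only once a case has at least 3 recorded runs). A sketch of that formula, with illustrative names:

```typescript
// Illustrative sketch of the documented flakiness formula:
// flakiness % = (failed + blocked runs) / total runs × 100,
// with the indicator hidden below 3 recorded runs.
type RunStatus = "passed" | "failed" | "blocked" | "skipped";

function flakinessPercent(runs: RunStatus[]): number | null {
  if (runs.length < 3) return null; // too few runs — no indicator yet
  const bad = runs.filter((s) => s === "failed" || s === "blocked").length;
  return (bad / runs.length) * 100;
}

console.log(flakinessPercent(["passed", "failed", "passed", "blocked"])); // 50
console.log(flakinessPercent(["passed", "failed"])); // null
```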

CI/CD integration

Push run results from your pipeline using the Test Execution Runs API (see API endpoints). Results recorded via the API appear in the repository alongside manually entered runs and count toward flakiness percentages.

# Record a test run from GitHub Actions
curl -X POST https://goqa.ai/api/test-runs \
  -H "Authorization: Bearer $QALABS_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"test_case_id\": \"tc_abc123\",
    \"status\": \"passed\",
    \"duration_ms\": 1240,
    \"environment\": \"staging\",
    \"ci_run_url\": \"$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID\"
  }"

Team Workspaces — inviting teammates

Pro plan users can create a shared workspace and invite teammates by email. Navigate to Settings → Team and enter the email address of each person you want to invite. They receive an email with a join link; clicking it adds them to your workspace automatically.

Pending invites are listed in the Team settings panel with an option to resend or revoke. Accepted members appear in the Members list where you can adjust their role or remove them.

Roles

Three roles are available per workspace member:

Shared dashboard & workspace isolation

All workspace members share the same dashboard: test cases, audit results, test repository suites, execution cycles, and scheduled audits. Changes made by one member are visible to all other members in real time.

Workspaces are fully isolated from each other. A user who belongs to multiple workspaces can switch between them using the workspace selector in the account menu. Data from one workspace never appears in another.

Integrations — CI/CD pipelines

Record test execution results directly from your CI pipeline using the /api/test-runs endpoint. Below are sample configurations for the three most common CI platforms.

GitHub Actions

- name: Report test results to GoQA
  run: |
    curl -X POST https://goqa.ai/api/test-runs \
      -H "Authorization: Bearer $QALABS_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"test_case_id":"'"$TEST_CASE_ID"'","status":"'"$STATUS"'","environment":"ci"}'

GitLab CI

report_results:
  script:
    - |
      curl -X POST https://goqa.ai/api/test-runs \
        -H "Authorization: Bearer $QALABS_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{"test_case_id":"'"$TEST_CASE_ID"'","status":"passed"}'

CircleCI

- run:
    name: Report to GoQA
    command: |
      curl -X POST https://goqa.ai/api/test-runs \
        -H "Authorization: Bearer $QALABS_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{"test_case_id":"'$TEST_CASE_ID'","status":"passed"}'

OpenAPI / Swagger integration

In the Test Cases workbench, select the OpenAPI Spec tab. You can either paste a raw JSON OpenAPI spec (OpenAPI 3.x or Swagger 2.0 format) or provide a URL to a hosted spec (e.g. https://api.example.com/openapi.json).

The generator reads each path, method, parameter, and response definition and produces API contract tests covering:

One test scenario is generated per endpoint, giving you a full contract test suite you can export to Gherkin and run with cucumber-js or drop into a Postman collection.
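To make "one scenario per endpoint" concrete, the sketch below enumerates path + method operations from a parsed OpenAPI 3.x document. The types and sample spec are deliberately minimal and illustrative — a real spec carries parameters, request bodies, and response definitions the generator also reads.

```typescript
// Illustrative sketch: enumerate the endpoint operations an OpenAPI-driven
// generator would produce one contract-test scenario for.
type OpenApiDoc = {
  paths: Record<string, Record<string, { summary?: string }>>;
};

const HTTP_METHODS = new Set([
  "get", "post", "put", "patch", "delete", "head", "options",
]);

function listOperations(spec: OpenApiDoc): string[] {
  const ops: string[] = [];
  for (const [path, item] of Object.entries(spec.paths)) {
    for (const method of Object.keys(item)) {
      // Skip path-level keys like "parameters" that are not HTTP methods.
      if (HTTP_METHODS.has(method)) ops.push(`${method.toUpperCase()} ${path}`);
    }
  }
  return ops; // one test scenario per entry
}

const spec: OpenApiDoc = {
  paths: {
    "/users": { get: { summary: "List users" }, post: { summary: "Create user" } },
    "/users/{id}": { get: { summary: "Get user" } },
  },
};
console.log(listOperations(spec)); // ["GET /users", "POST /users", "GET /users/{id}"]
```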

Slack & Discord webhooks

Configure webhook notifications in /settings:

Jira integration (coming soon)

Two-way Jira sync is on the roadmap. When shipped, it will allow you to push failing test cases and audit findings directly to a Jira project as bug tickets, and pull Jira issue status back into the GoQA dashboard. Pro and Enterprise plans will include this integration.

Zephyr Scale and Azure DevOps integrations are already available on Pro — see the Settings → Integrations panel.

Settings — scheduled audits

Settings → Scheduled Audits lists all active schedules. Each schedule shows the target URL, frequency, last run time, next run time, and status. You can:

Scheduled audits respect your webhook settings and will ping Slack or Discord on each completion. Missed runs (e.g. if the service was briefly unavailable) are skipped and not retried — the next scheduled time applies.

Notification webhooks

Webhook configuration lives at /settings. Two notification triggers are available — Notify on fail and Notify on complete — for both Slack and Discord. Settings are stored server-side and synced across all devices on your account.

Webhook payloads include the audit URL, grade, page count, and counts of security, accessibility, and broken-link findings — enough to triage in a Slack message without opening the dashboard.
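As a rough sketch of what such a payload might look like — only the fields listed above are documented; the exact property names here are assumptions, so verify them against a real delivery:

```typescript
// Assumed payload shape for audit webhooks. Property names are illustrative,
// not confirmed — only the documented fields (URL, grade, page count, and
// security / accessibility / broken-link counts) are guaranteed to exist.
interface AuditWebhookPayload {
  url: string;          // audited URL
  grade: string;        // overall audit grade
  pageCount: number;    // pages crawled
  findings: {
    security: number;
    accessibility: number;
    brokenLinks: number;
  };
}

const example: AuditWebhookPayload = {
  url: "https://staging.example.com",
  grade: "B",
  pageCount: 42,
  findings: { security: 1, accessibility: 7, brokenLinks: 3 },
};
```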

Onboarding tour

New users see a guided 5-step tour on first login. Steps in order:

  1. Welcome — introduction to GoQA and what it does.
  2. Generate test — runs a sample test case generation from a pre-filled prompt.
  3. Save to repo — shows how to move generated cases to the Test Repository.
  4. Export — demonstrates the JSON / Excel / Gherkin export options.
  5. Done — links to this documentation and the pricing page.

The tour can be dismissed at any step by clicking Skip tour. To restart from the beginning, go to Settings → Onboarding and click Restart tour. Restarting replays all five steps from step 1.

Billing & plan

Manage your subscription at /settings → Billing, or visit /pricing to compare plans. Payments are processed by Stripe; card details are never stored by GoQA.

Usage dashboard

The top of the Dashboard shows:

The usage counter resets on your monthly billing anniversary. Scheduled audits and manual audits both count toward the generation limit.

API Reference — overview & authentication

The GoQA REST API lets you integrate test generation, audit results, and test run reporting into your own tooling and CI pipelines. All endpoints are under https://goqa.ai/api/.

Authentication: pass your API key in the Authorization header as a Bearer token. Obtain your API key from Settings → API Keys (Pro and Enterprise plans). Unauthenticated requests to protected endpoints return 401 Unauthorized.

Authorization: Bearer qlab_live_xxxxxxxxxxxx

All request bodies use JSON (Content-Type: application/json). All responses are JSON. Dates are ISO 8601 strings in UTC.

All endpoints

| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| POST | /api/generate | Required | Generate test cases from acceptance criteria, URL, screenshot, or OpenAPI spec. |
| GET | /api/generate | Required | List all previously generated test-case sets for the authenticated user. |
| POST | /api/crawl | Required | Start a new AI website audit crawl. |
| GET | /api/crawl | Required | List all saved audit results for the authenticated user. |
| GET | /api/crawl/[id] | Required | Retrieve a single audit result by ID. |
| POST | /api/test-runs | Required | Record a test execution result from CI (status, duration, environment). |
| GET | /api/test-runs | Required | List execution run history. Filter by test_case_id query param. |
| GET | /api/test-runs/[id] | Required | Retrieve a single execution run record. |
| POST | /api/share | Required | Generate or revoke a public share token for an audit result. |
| GET | /api/share/[token] | None | Retrieve a publicly shared audit result by token. |
| GET | /badge/[token] | None | Serve an SVG audit badge for the given share token. |
| GET | /api/settings | Required | Retrieve the authenticated user's settings and webhook configuration. |
| POST | /api/settings | Required | Update webhook URLs and notification preferences. |
| GET | /api/discover | None | List recent public audit results (community feed). |
| GET | /api/status | None | Platform operational status for all services. |
Rate limits: Starter plan: 10 requests/minute. Pro: 60 requests/minute. Enterprise: negotiated. Requests exceeding the limit receive 429 Too Many Requests.
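A minimal TypeScript sketch of calling the test-runs endpoint from your own tooling. The helper just assembles the fetch options, so the request shape is visible (and testable) without a live API key; the endpoint and body fields come from the table and examples above.

```typescript
// Sketch: build fetch options for POST /api/test-runs.
// Kept as a pure helper so the request shape can be inspected offline.
function buildRunRequest(apiKey: string, run: Record<string, unknown>) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(run),
  };
}

// Usage (Node 18+ or browser). Remember the 429 response when you exceed
// your plan's per-minute rate limit:
//
//   const res = await fetch("https://goqa.ai/api/test-runs",
//     buildRunRequest(process.env.QALABS_API_KEY!, {
//       test_case_id: "tc_abc123",
//       status: "passed",
//       environment: "staging",
//     }));
//   if (res.status === 429) { /* back off and retry */ }
```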

FAQ

1. Can I test a site I don't own?

No. The crawler requires you to check an authorization box confirming ownership or written permission. Unauthorized testing of third-party infrastructure may be illegal under the Computer Fraud and Abuse Act (USA), Computer Misuse Act (UK), and similar laws in other jurisdictions.

2. Does the security scan modify the target site?

No. All probes are non-destructive read-only HTTP requests. The scanner does not attempt to exploit found vulnerabilities, write data, or modify state.

3. How do I add my API key to GitHub Actions?

Go to your GitHub repository → Settings → Secrets and variables → Actions. Add a secret named QALABS_API_KEY and paste in your key from GoQA Settings → API Keys. Reference it in your workflow as secrets.QALABS_API_KEY.

4. What happens to my data when I downgrade from Pro to Starter?

Your data is preserved. You will lose access to Pro-only features (team workspace, scheduled audits, advanced export formats), but all previously saved test cases, audit results, and repository data remain accessible. Upgrades restore full access immediately.

5. How does flakiness percentage work?

Flakiness % = (number of fail/blocked runs) ÷ (total runs for that test case) × 100. A test case must have at least 3 recorded runs before a flakiness indicator appears. Runs recorded via the CI API count the same as manually entered runs.

6. Can I share a report without sharing the full audit data?

Not currently. Sharing a report exposes the complete audit data for that run, including all pages, findings, screenshots, and scores. If you need to share a subset, export to Excel or JSON and share that file instead.

7. Are scheduled audit results sent to Discover (/discover)?

Only if you have opted in to public sharing for that audit. By default, scheduled audit results are private. You can opt individual audits into the community feed via the Share toggle in the Dashboard.

8. How do I restart the onboarding tour?

Go to /settings → Onboarding and click Restart tour. The tour will appear the next time you open the Dashboard.

9. Does the AI Test Coverage Score count test types from the repository?

No — the score is computed from the current generation session only. It shows which of the 18 test types have been generated in the current workbench session, not across your entire repository. Refresh the workbench to reset it.

10. How does the embeddable badge stay current after new audits?

The badge SVG is generated dynamically on each request from the latest audit results associated with the share token. If you run a new audit and the token is still active, the badge reflects the new results within minutes.

Design patterns (POM + Factory)

The GoQA codebase deliberately uses two simple, beginner-friendly design patterns so a developer joining the team can read the project on day one. Both patterns live under tests/ and are documented in tests/README.md.

We resist clever abstractions on purpose. No DI containers, no deep inheritance trees, no decorator magic. If a new contributor cannot understand a page object or factory in under two minutes, we simplify it.

Page Object Model

Page objects live in tests/e2e/pages/. Each route gets one class extending a tiny BasePage shared parent. The class exposes verbs (loginAs, submit) and locator getters (emailInput, errorBanner). Specs do the asserting; pages never call expect().

Currently shipped page objects:

See tests/e2e/login.pom.spec.ts for a worked example combining POM + factory patterns in a single test.
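A minimal sketch of the convention described above. Playwright's Page is replaced here by a tiny structural stub so the example is self-contained; the real page objects in tests/e2e/pages/ use Playwright's actual types, and the selectors shown are invented for illustration.

```typescript
// Minimal structural stand-in for Playwright's Page, so this sketch
// compiles without the @playwright/test dependency.
interface PageLike {
  goto(url: string): Promise<unknown>;
  fill(selector: string, value: string): Promise<unknown>;
  click(selector: string): Promise<unknown>;
}

// Tiny shared parent — one class per route extends this.
class BasePage {
  constructor(protected readonly page: PageLike) {}
}

class LoginPage extends BasePage {
  // Locator getters — specs use these for their own assertions.
  get emailInput() { return "#email"; }
  get errorBanner() { return ".error-banner"; }

  // Verbs — high-level actions. Note: no expect() calls in here;
  // asserting is the spec's job, per the convention above.
  async loginAs(email: string, password: string): Promise<void> {
    await this.page.goto("/login");
    await this.page.fill(this.emailInput, email);
    await this.page.fill("#password", password);
    await this.page.click("button[type=submit]");
  }
}
```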

Factories

Factories live in tests/factories/. Each exposes build(overrides?) returning a deterministic object — same input, same output, every run. No randomness, no Faker.

Convention: every test calls factory.build({...overrides}) rather than constructing objects inline. This keeps test intent legible and absorbs schema changes in a single file.
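A sketch of that convention with an invented TestUser shape — the real factories and their fields live in tests/factories/:

```typescript
// Illustrative deterministic factory: same input, same output, every run.
// No randomness, no Faker, per the convention above.
interface TestUser {
  email: string;
  password: string;
  role: "admin" | "editor" | "viewer";
}

const userFactory = {
  build(overrides: Partial<TestUser> = {}): TestUser {
    return {
      email: "qa.user@example.com",
      password: "correct-horse-battery",
      role: "viewer",
      ...overrides, // schema changes are absorbed here, in one file
    };
  },
};

// Tests state only what they care about; everything else stays stable:
const admin = userFactory.build({ role: "admin" });
```

Because build() is pure, two calls with the same overrides always produce identical objects, which is what makes repository-wide flakiness data trustworthy.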

Test suite layout

tests/
├── e2e/                   Playwright end-to-end tests
│   ├── pages/             Page Object Model — one class per route
│   └── *.spec.ts          Specs that use page objects + factories
├── factories/             Factory pattern — deterministic test data
├── integration/           API integration tests (Jest + supertest)
└── unit/                  Pure-function unit tests (Jest)

Run all three layers from the project root:

Automation agent roster

GoQA ships with seven specialized automation agents under .github/agents/. Each agent definition is a Markdown spec a contributor (or another LLM) can read and execute. The roster is tuned for the QA / AI tooling market — every agent has a single, narrow responsibility:

Design patterns agent

.github/agents/design-patterns-agent.agent.md enforces the POM + Factory conventions described above. Its job is to:

The agent explicitly refuses to introduce DI containers, base-class hierarchies, or decorator magic — readability for new developers is the win condition.

SEO agent

.github/agents/seo-agent.agent.md audits and improves how GoQA is discovered by both classical search engines and AI crawlers. It covers:

Success criterion: every primary keyword has a page on SERP page 1 within three months and the home page Lighthouse SEO score is ≥ 95.

App analysis agent

.github/agents/app-analysis-agent.agent.md runs a holistic, end-to-end health check on demand and returns a single verdict — GREEN, YELLOW, or RED — with evidence. It exercises:

Rule: the agent refuses to issue a GREEN verdict if it skipped any section. Skipped sections must be listed explicitly with a reason.

Feature suggestion agent

.github/agents/feature-suggestion-agent.agent.md takes a fresh look at the live app and proposes the top ten unshipped features that would make GoQA the obvious leader in its category. Suggestions are ranked across five vectors:

Output is structured as one-pagers with effort, impact, why-us, risks, and the smallest shippable v0 — enough for a PM to triage and a senior engineer to scope.

Where to go next