Gherkin Generator: From Acceptance Criteria to .feature
Acceptance criteria sit in Jira tickets. Tests sit in a different repo, in a different language, written days later by someone who was not in the refinement meeting. The translation loses detail every step.
A Gherkin generator collapses that gap. You paste the acceptance criteria, you get a .feature file. This post walks through the full path: a sample user story, the criteria, the generated Gherkin, and the conventions that keep the output runnable.
QA Lab AI generates Gherkin from pasted acceptance criteria. We will use it as the working example, but the conventions apply to any BDD workflow.
Step 1: The user story
Plain text, written by a product manager. No special format required.
As a returning shopper I want to apply a saved payment method at checkout So that I can complete a purchase without retyping card details
Step 2: The acceptance criteria
Pasted directly into the generator. Bullet form is fine. Numbered is fine. Long prose works too — the model parses intent, not structure.
- A logged-in shopper with at least one saved card sees a "Use saved card" option on the checkout payment step.
- Selecting a saved card pre-fills the payment form and disables manual entry.
- The CVV field remains editable and required.
- If the saved card is expired, it appears greyed out and cannot be selected.
- A shopper without saved cards does not see the "Use saved card" section at all.
- The "Pay" button is disabled until a payment method is selected and CVV is entered.
Six criteria. In a manual workflow, this becomes maybe two test cases because writing more is tedious. A generator has no such bias.
Step 3: The generated .feature file
Here is what QA Lab AI produces for the criteria above. Notice the Background, the Scenario Outline, and the tags.
@checkout @payments @regression
Feature: Saved payment method at checkout
  As a returning shopper
  I want to apply a saved payment method at checkout
  So that I can complete a purchase without retyping card details

  Background:
    Given a logged-in shopper "alice@example.com"
    And a cart containing one item priced 49.99

  @smoke
  Scenario: Saved card pre-fills the payment form
    Given the shopper has a saved card ending in "4242" expiring "12/2030"
    When she opens the checkout payment step
    And she selects "Use saved card" for "Visa ending 4242"
    Then the card number field is populated and read-only
    And the CVV field is empty and editable
    And the "Pay" button is disabled

  Scenario: CVV unlocks the Pay button
    Given the shopper has selected a saved card ending in "4242"
    When she enters CVV "123"
    Then the "Pay" button is enabled

  Scenario: Expired saved cards are not selectable
    Given the shopper has a saved card ending in "1111" expiring "01/2024"
    When she opens the checkout payment step
    Then the saved card "Visa ending 1111" is shown as "Expired"
    And the saved card "Visa ending 1111" cannot be selected

  Scenario: Shoppers without saved cards see no saved-card section
    Given the shopper has no saved cards
    When she opens the checkout payment step
    Then the "Use saved card" section is not visible
    And the manual card entry fields are visible

  @data-driven
  Scenario Outline: CVV validation
    Given the shopper has selected a saved card ending in "4242"
    When she enters CVV "<cvv>"
    Then the "Pay" button is "<state>"
    And the CVV field shows error "<error>"

    Examples:
      | cvv  | state    | error                  |
      | 1    | disabled | "CVV must be 3 digits" |
      | 12   | disabled | "CVV must be 3 digits" |
      | 123  | enabled  | ""                     |
      | abcd | disabled | "CVV must be numeric"  |
      | 1234 | disabled | "CVV must be 3 digits" |

Six criteria became five scenarios plus an outline with five rows. Coverage that would take 30+ minutes to write by hand, generated in seconds.
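The Examples table encodes a small validation rule. As a check on what the generated rows actually assert, here is a minimal Python sketch of that rule. The function name and behavior are assumptions read off the table, not part of the generated output:

```python
# Hypothetical CVV rule implied by the Examples table:
# exactly 3 digits enables the Pay button; anything else yields an error.
def validate_cvv(cvv: str) -> tuple[str, str]:
    """Return (pay_button_state, error_message) for a CVV entry."""
    if not cvv.isdigit():
        return "disabled", "CVV must be numeric"
    if len(cvv) != 3:
        return "disabled", "CVV must be 3 digits"
    return "enabled", ""

# Each Examples row maps directly onto one call:
rows = [
    ("1",    "disabled", "CVV must be 3 digits"),
    ("12",   "disabled", "CVV must be 3 digits"),
    ("123",  "enabled",  ""),
    ("abcd", "disabled", "CVV must be numeric"),
    ("1234", "disabled", "CVV must be 3 digits"),
]
for cvv, state, error in rows:
    assert validate_cvv(cvv) == (state, error)
```

If the table and the rule ever disagree, one of the five rows fails, which is exactly the signal a data-driven outline is supposed to give you.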
Conventions worth knowing
The output above follows a small set of rules. They matter because tooling — Cucumber, SpecFlow, Behave, Playwright with a Gherkin adapter — depends on them.
Given / When / Then
- Given sets state. No actions, no assertions. Past tense or stative ("is logged in", "has a saved card").
- When performs exactly one action. If you have two, you usually want two scenarios.
- Then asserts. Multiple Then lines, and And lines continuing them, are fine when they describe one outcome.
A common mistake is putting actions in Given. "Given the shopper logs in" is wrong. "Given the shopper is logged in" is right. The first forces every scenario to re-test login. The second lets your step definition seed the session directly.
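The difference is visible in the step definitions. This is a framework-agnostic sketch, not behave or Cucumber API; the function names and the `world` dict are illustrative:

```python
# A Given seeds state directly; a When performs the one action under test.
def given_shopper_is_logged_in(world: dict) -> None:
    # Seed the session directly (e.g. via an API token or test fixture).
    # No UI login flow runs here, so every scenario stays fast.
    world["session"] = {"user": "alice@example.com", "authenticated": True}

def when_shopper_opens_payment_step(world: dict) -> None:
    # The single action the scenario is actually about.
    world["page"] = "checkout/payment"

world: dict = {}
given_shopper_is_logged_in(world)
when_shopper_opens_payment_step(world)
assert world["session"]["authenticated"]
assert world["page"] == "checkout/payment"
```

"Given the shopper logs in" would force the first function to drive the login UI; "is logged in" lets it write the session directly.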
Background
Background runs before each scenario in the file. Use it for the state every scenario shares — logged-in user, populated cart, feature flag on. Do not put step-specific setup there.
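Mechanically, a runner treats Background as steps prepended to every scenario in the file. A toy sketch, using step names from the feature above (the scenario bodies are truncated for brevity):

```python
# Background steps are prepended to each scenario before it runs.
background = [
    'Given a logged-in shopper "alice@example.com"',
    "And a cart containing one item priced 49.99",
]
scenarios = {
    "Saved card pre-fills the payment form": ["Given the shopper has a saved card ..."],
    "CVV unlocks the Pay button": ["Given the shopper has selected a saved card ..."],
}

executed = {name: background + steps for name, steps in scenarios.items()}

# Every scenario runs the shared setup first, then its own steps.
assert all(steps[:2] == background for steps in executed.values())
```

This is why step-specific setup does not belong there: anything in Background is paid for by every scenario, relevant or not.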
Scenario Outline and Examples
Outlines turn five near-identical scenarios into one. The Examples table is the actual test data. Generators are good at producing realistic tables because they have read your field constraints.
Avoid outlines with one row. That is a regular Scenario wearing a costume.
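Under the hood, an outline is template substitution: each Examples row is spliced into the `<placeholder>` slots to produce one concrete scenario. A minimal sketch of that expansion (the `expand` helper is hypothetical, not a runner API):

```python
# Expand a Scenario Outline: one concrete scenario per Examples row.
template = [
    'When she enters CVV "<cvv>"',
    'Then the "Pay" button is "<state>"',
]
examples = [
    {"cvv": "123", "state": "enabled"},
    {"cvv": "12", "state": "disabled"},
]

def expand(template: list[str], row: dict[str, str]) -> list[str]:
    steps = []
    for step in template:
        for key, value in row.items():
            step = step.replace(f"<{key}>", value)
        steps.append(step)
    return steps

expanded = [expand(template, row) for row in examples]
assert expanded[0][0] == 'When she enters CVV "123"'
assert expanded[1][1] == 'Then the "Pay" button is "disabled"'
```

Seen this way, a one-row outline is obviously pointless: the expansion produces exactly the scenario you could have written directly.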
Tags
Tags drive CI. The generated file uses three layers:
- File-level: @checkout @payments @regression applies to every scenario.
- Scenario-level: @smoke on the critical path.
- Type-level: @data-driven on the outline.
Your runner picks up tags. In Cucumber: cucumber --tags "@smoke". In Playwright with a BDD adapter: same idea via project config. Tag taxonomies that work in practice: @smoke, @regression, @a11y, @security, @flaky, plus a feature-area tag.
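The selection logic behind a command like cucumber --tags "@smoke" is simple: a scenario inherits its file-level tags and adds its own, and the runner keeps whatever matches. A sketch of that mechanism (the `select` helper is illustrative, not any runner's API):

```python
# Scenarios inherit file-level tags, then add their own.
file_tags = {"@checkout", "@payments", "@regression"}
scenarios = {
    "Saved card pre-fills the payment form": {"@smoke"},
    "CVV validation": {"@data-driven"},
    "Expired saved cards are not selectable": set(),
}

def select(scenarios: dict, wanted: str) -> list[str]:
    return [name for name, tags in scenarios.items() if wanted in tags | file_tags]

assert select(scenarios, "@smoke") == ["Saved card pre-fills the payment form"]
assert len(select(scenarios, "@regression")) == 3  # file-level tag hits all
```

Real runners also support boolean tag expressions ("@smoke and not @flaky"), but inheritance plus matching is the core of it.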
One behavior per scenario
If you find yourself writing "And When" or "And the user also...", split the scenario. BDD scenarios document behavior. Each one should fit on a slide.
What the generator handles that humans skip
Three categories tend to disappear when criteria are translated by hand:
- Negative paths. Criterion 4 ("expired cards cannot be selected") is easy to skip. The generator wrote a scenario for it.
- Empty states. Criterion 5 (shoppers with no saved cards) becomes a real scenario instead of a comment.
- Boundary data. The CVV outline tests 1, 12, 1234, and abcd, not just 123.
The full catalog of generated test types is on /test-cases. Generation also feeds into /ai-testing, which covers how the same engine handles URL and OpenAPI inputs.
Try it
Drop a user story and its acceptance criteria into /test-cases. The Starter plan is free forever — 200 cases per run, 5 AI generations a month — which is enough to put a real ticket through the pipeline and review the .feature file.
If you would rather start from a live URL than from text, /free-audit runs an accessibility, performance, and SEO audit without a signup. The findings drop straight into a generation run when you are ready.