Stop Conditions
Configure a test to auto-complete when it hits a visitor cap, a conversion cap, or when a variant reaches the decision threshold. Lets you walk away from a test and trust the platform to call it.
Stop conditions are the hands-off way to manage a running test. Set the rules upfront, launch, and Otter A/B auto-completes the test when any rule trips. The test transitions to the same completed state you'd get from clicking Complete manually — including the same ended-at timestamp and the same frozen-results behavior.
You don't have to use stop conditions. A test without any configured stop conditions runs until you complete it manually or hit a scheduled end date. Most teams set at least one — usually a visitor cap as a sample-size floor — to keep tests from running longer than they need to.
The three conditions
Visitor limit
visitor_limit: Stop when total unique human visitors assigned to the test reaches this number.
Counts unique humans only; bots, impersonation sessions, and excluded-IP traffic don't count. The total is across all variants, not per-variant.
Conversion limit
conversion_limit: Stop when total unique converters on the primary goal reaches this number.
Counts visitors who converted at least once on the primary goal. Multiple conversions from the same visitor count as one. Secondary goals don't affect this counter.
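The dedupe rule above can be sketched in a few lines. This is an illustrative sketch, not Otter A/B's implementation; the event shape and function name are assumptions:

```python
# Hypothetical conversion events: (visitor_id, goal). Only the primary
# goal counts, and repeat conversions by the same visitor count once.
def unique_converters(events, primary_goal):
    return len({visitor for visitor, goal in events if goal == primary_goal})

events = [
    ("v1", "signup"),      # counts
    ("v1", "signup"),      # repeat conversion by v1: ignored
    ("v2", "signup"),      # counts
    ("v2", "newsletter"),  # secondary goal: ignored
]
print(unique_converters(events, "signup"))  # 2
```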
Auto-stop on winner
auto_stop_on_winner: Stop when the primary goal has a variant in 'winner' status, judged against the test's effective confidence threshold.
Uses the same decision logic as the results page — Bayesian or frequentist with the configured confidence level, Bonferroni-adjusted for multivariate frequentist tests.
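Taken together, the three conditions amount to a small settings object. A minimal sketch, assuming a hypothetical class shape; only the field names visitor_limit, conversion_limit, and auto_stop_on_winner come from the documentation, and the blank-or-zero-disables behavior is described under Evaluation rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StopConditions:
    visitor_limit: Optional[int] = None     # blank (None) or 0 disables
    conversion_limit: Optional[int] = None  # same rule
    auto_stop_on_winner: bool = False

    @staticmethod
    def is_active(limit: Optional[int]) -> bool:
        # Only positive integers count as a configured limit.
        return limit is not None and limit > 0

conds = StopConditions(visitor_limit=0, conversion_limit=5000)
print(StopConditions.is_active(conds.visitor_limit))     # False: disabled
print(StopConditions.is_active(conds.conversion_limit))  # True
```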
Evaluation rules
- Checked every 5 minutes via a background job. Expect a small lag — up to one check interval of traffic — between hitting a limit and the test transitioning to completed.
- First-to-trip wins. Internally the checker evaluates in this order: visitor limit, then conversion limit, then decision threshold. As soon as one is satisfied, the test stops; it doesn't wait for the others.
- Zero or blank disables a condition. Only positive integers count. Leave a field blank to skip the condition entirely.
- The reason is recorded. The test's activity log captures whether it stopped on visitor limit, conversion limit, or decision threshold so future-you knows why it ended when it did.
- Conditions are wizard-locked after launch. Like other test settings, stop conditions can't be edited mid-flight. Complete and duplicate to change them, or contact support if you need a manual override.
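The evaluation rules above can be sketched as a small checker. This is illustrative only; the function name, parameters, and counters are assumptions, but the ordering and the recorded reason strings follow the rules described above:

```python
def check_stop(visitors, converters, has_winner, visitor_limit=None,
               conversion_limit=None, auto_stop_on_winner=False):
    """Return the recorded stop reason, or None to keep running.

    Mirrors the first-to-trip order: visitor limit, then conversion
    limit, then decision threshold. Zero or blank limits are skipped.
    """
    if visitor_limit and visitor_limit > 0 and visitors >= visitor_limit:
        return "visitor limit"
    if conversion_limit and conversion_limit > 0 and converters >= conversion_limit:
        return "conversion limit"
    if auto_stop_on_winner and has_winner:
        return "decision threshold"
    return None  # nothing tripped; check again next interval

# Both the visitor cap and a winner are satisfied at the same check;
# the visitor limit is evaluated first, so it is the recorded reason.
print(check_stop(10_000, 800, True, visitor_limit=10_000,
                 auto_stop_on_winner=True))  # visitor limit
```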
How to pick good stop conditions
Set a visitor floor alongside auto-stop-on-winner. Auto-stopping the moment confidence is reached can inflate false positives in frequentist mode. Combine it with a visitor cap (say, the sample size the wizard estimated) so the test can't end too early on a thin sample.
Don't make limits too tight on low-traffic tests. A 1000-visitor cap on a page that gets 30 visitors per day means a long-running test. Use the wizard's sample-size advisor to ground your limits in something concrete.
Re-think conditions before resuming. If a test auto-stopped and you resume it, the same conditions will be re-evaluated against the same traffic that tripped them. Usually you want to raise the cap or remove the condition before resuming, otherwise you'll just re-trigger it.
Bayesian tests handle peeking better. If the auto-stop-on-winner risk in frequentist mode makes you uncomfortable, run the test in Bayesian mode; its posterior-probability interpretation makes stopping at the threshold statistically safer.
Frequently asked questions
Quick answers to the questions teams ask most about this part of Otter A/B.