Your Best Survey Question for Customer Satisfaction in 2026

Your team probably has a test like this running right now. A new checkout layout, a tighter PDP, a shorter lead form, a bolder CTA. The experiment dashboard says the variant is winning, so everyone moves on to rollout. Then the after-effects show up somewhere else. Support tickets get sharper. Repeat purchase softens. New customers convert, but they do not stay as happily as the old journey suggested they would.
That gap matters because conversion tells you what users did, not how the experience felt when they did it. A click can come from confidence, or from confusion. A completed purchase can mean the journey was smooth, or that the customer pushed through irritation because they needed the product anyway. If you only optimise for the immediate action, you can ship friction at scale.
This is why a good survey question for customer satisfaction belongs inside your experimentation workflow, not beside it. It gives your team a way to validate whether a variant improves the experience as well as the metric. That matters even more if you run frequent tests across landing pages, support journeys, pricing pages, onboarding, or checkout. The faster your programme moves, the easier it is to declare a narrow winner and miss what changed emotionally.
The strongest teams I have worked with do not treat satisfaction surveys as a CX side project. They use them as a second lens on experiment quality. They ask different questions at different moments. A loyalty question after repeated use. An effort question after a task. A simple satisfaction question after a transaction. A usability battery after a larger workflow change.
Used properly, these questions stop you from celebrating the wrong winner. They also help you find better winners. If you are using a platform such as Otter A/B to test pages and flows, the practical move is simple. Tie survey responses to variants, then review them alongside conversion, revenue, and user behaviour. That is how you build an optimisation programme that improves both short-term performance and long-term customer health.
1. Net Promoter Score (NPS) Question
The NPS question is still one of the most useful broad signals you can collect:
“How likely are you to recommend us to a friend or colleague?”
It works because it is simple, familiar, and easy to trend over time. Bain & Company's Fred Reichheld introduced the NPS question in 2003. Early UK retail benchmarks put the average score at 32 in 2005, according to Snap Surveys on customer satisfaction survey questions.
That does not make NPS your only metric. It makes it your loyalty pulse.

Where NPS works best
Use NPS when you want to know whether customers would advocate for your brand or product after enough exposure to form a real opinion. That means quarterly relationship surveys, post-onboarding check-ins, or after several successful purchases.
It is less useful immediately after a tiny interaction. Asking for NPS right after someone clicks one button is usually too early. They can rate the brand, but not the specific moment you just changed.
For experimentation teams, NPS becomes valuable when the tested experience is broad enough to shape trust. Think account dashboards, subscription management, onboarding flows, or repeat-purchase journeys.
How to use it inside an A/B programme
The practical move is to pair the score with one open-text follow-up:
Ask “What is the primary reason for your score?” every time. The number tells you direction. The text tells you what to fix.
If Variant B lifts conversion but promoters repeatedly mention clarity while detractors complain about pressure, you have a cleaner read on what changed.
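Mechanically, NPS is the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (0 to 6). Computing that per variant is a small job once responses carry a variant tag. A minimal sketch, with an illustrative data shape rather than any particular tool's API:

```typescript
// Minimal sketch: NPS per experiment variant.
// Assumes each response is already tagged with the variant the respondent saw;
// the SurveyResponse shape and field names are illustrative, not a specific tool's API.
interface SurveyResponse {
  variant: string;   // e.g. "control" or "variant-b"
  score: number;     // 0-10 likelihood-to-recommend rating
  comment?: string;  // optional open-text follow-up
}

function npsByVariant(responses: SurveyResponse[]): Map<string, number> {
  const groups = new Map<string, number[]>();
  for (const r of responses) {
    const scores = groups.get(r.variant) ?? [];
    scores.push(r.score);
    groups.set(r.variant, scores);
  }

  const result = new Map<string, number>();
  for (const [variant, scores] of groups) {
    const promoters = scores.filter((s) => s >= 9).length;
    const detractors = scores.filter((s) => s <= 6).length;
    // Standard NPS: % promoters minus % detractors, giving a -100 to +100 range.
    result.set(variant, Math.round(((promoters - detractors) / scores.length) * 100));
  }
  return result;
}
```

Small cohorts swing NPS dramatically, so read these per-variant numbers next to the open-text comments rather than on their own.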
A few rules help:
- Segment responses: Split by customer type, such as first-time buyer versus repeat customer, or merchant versus agency client.
- Watch trend lines: NPS is most useful over repeated waves, not one isolated blast.
- Test survey timing: In-product, post-email, and post-purchase placements often produce very different response quality.
- Do not over-trigger: If every interaction asks for recommendation intent, users stop taking the question seriously.
A useful benchmark point exists for the UK as well. In 2022, UKCSI data put the national average NPS at 45, with Amazon UK at 62 and banks at 38, as noted by Snap Surveys’ write-up on customer satisfaction questions. That kind of context can help teams sense whether they are broadly competitive, but internal movement matters more than copying another sector’s benchmark.
If you want a deeper explainer on interpretation, What Is a Good Net Promoter Score is a useful companion read.
2. Customer Effort Score (CES) Question
Some journeys do not need a loyalty question. They need an effort question.
If the user just tried to resolve an issue, complete setup, find a report, or finish a checkout, ask about ease. A clean version is:
“How easy was it to complete this task today?”
Or, for support:
“How easy was it to resolve your query today?”
Why effort often beats satisfaction
Customers can be “satisfied” and still feel the process was annoying. That is the blind spot. CES catches friction more directly.
A strong real-world example comes from UK financial services. In a 2024 Qualtrics case study, Barclays piloted a post-support CES question on a 1 to 5 scale (1 meaning very easy) across 12,000 interactions per month. After rollout, CES averaged 1.8, with 82% selecting “very easy”, abandonment fell from 28% to 11%, and NPS rose from 42 to 68, according to Qualtrics on measuring customer satisfaction.
That is why I trust CES most for operational journeys. It tells you whether the path itself got lighter.
How to apply CES to testing work
If you test onboarding steps, support flows, help-centre experiences, or form UX, CES should be close to your main KPI stack. It is especially useful when the conversion event alone can mislead you.
For example, a shorter form may increase completions but leave users more uncertain. A redesigned support flow may reduce handle time but make customers work harder to find the right option. CES brings that hidden cost into view.
Use it like this:
- Trigger immediately after the task: Delay weakens recall.
- Tie responses to task success: Someone may report low effort but still fail. That is not a successful outcome.
- Segment by user type: Agencies, first-time users, and internal teams often experience the same interface very differently.
- Review alongside behaviour: Compare CES with completion, abandonment, and follow-on engagement.
If your experiment removes friction, CES should reflect that. If it does not, re-check what your “win” means.
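If you want a feel for what “trigger immediately and tie to task success” looks like in practice, here is a rough client-side sketch. The `getActiveVariant`, `showCesPrompt`, and `recordSurveyEvent` helpers are hypothetical placeholders for whatever experimentation and survey tooling you already run; the point is the timing and the tags, not the API.

```typescript
// Sketch of an in-product CES trigger, fired as soon as the tracked task ends.
type TaskOutcome = { taskId: string; succeeded: boolean };

async function onTaskFinished(outcome: TaskOutcome): Promise<void> {
  const variant = getActiveVariant("checkout-redesign"); // hypothetical experiment lookup

  // Ask immediately: delay weakens recall and drops response quality.
  const score = await showCesPrompt({
    question: "How easy was it to complete this task today?",
    scale: { min: 1, max: 5 }, // 1 = very difficult, 5 = very easy
  });

  // Store effort alongside success so a "low effort but still failed" case stays visible.
  recordSurveyEvent({
    metric: "ces",
    score,
    variant,
    taskId: outcome.taskId,
    taskSucceeded: outcome.succeeded,
  });
}

// Placeholder declarations so the sketch type-checks; swap in your own implementations.
declare function getActiveVariant(experimentId: string): string;
declare function showCesPrompt(opts: { question: string; scale: { min: number; max: number } }): Promise<number>;
declare function recordSurveyEvent(event: Record<string, unknown>): void;
```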
For teams building repeatable test processes, a strong companion read is conversion rate optimization best practices. It helps frame where effort metrics fit inside broader decision-making.
3. System Usability Scale (SUS) Questions
Sometimes one question is too small. If you changed a whole workflow, a navigation model, or a product surface with many moving parts, use SUS instead.
The System Usability Scale is a ten-item questionnaire built to measure perceived usability across the wider experience. It is not just about whether the user liked something. It checks whether the system feels coherent, learnable, and usable.
When SUS earns its place
SUS is worth the extra survey weight when you have changed something structural. Good examples include:
- a redesigned reporting dashboard
- a rebuilt onboarding wizard
- a new account area
- a major navigation overhaul
- a new experiment creation flow
This is not the survey I would deploy after a single CTA colour test. It is what I would use after changing the environment around that CTA.
Questions typically probe statements such as whether people would use the system frequently, whether functions feel well integrated, and whether the experience seems unnecessarily complex.
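Scoring follows the standard SUS method: odd-numbered (positively worded) items contribute their response minus one, even-numbered (negatively worded) items contribute five minus their response, and the total is multiplied by 2.5 to land on a 0 to 100 scale. A minimal sketch:

```typescript
// Minimal sketch of standard SUS scoring for one respondent.
// Input: ten answers on a 1-5 agreement scale, in questionnaire order
// (odd-numbered items are positively worded, even-numbered items negatively worded).
function susScore(answers: number[]): number {
  if (answers.length !== 10) {
    throw new Error("SUS needs exactly ten item responses");
  }
  let sum = 0;
  answers.forEach((answer, index) => {
    const itemNumber = index + 1;
    // Odd items: contribution is (response - 1); even items: (5 - response).
    sum += itemNumber % 2 === 1 ? answer - 1 : 5 - answer;
  });
  // The summed contributions range 0-40; multiplying by 2.5 maps them to 0-100.
  return sum * 2.5;
}

// Example: strong agreement with every positive item (5) and strong
// disagreement with every negative item (1) scores the maximum of 100.
console.log(susScore([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])); // 100
```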
How experimentation teams should use it
SUS helps answer a harder question than “did the variant convert?” It asks whether the tested version made the product easier to live with.
That matters because users can tolerate poor usability for a short time if they are motivated enough. The damage shows up later in reduced adoption, confused support requests, weak return usage, or lower trust in your product team’s changes.
A practical setup looks like this:
- Run SUS after meaningful exposure: Let users complete the core workflow before asking.
- Compare by variant cohort: If one redesign consistently earns stronger usability sentiment, you have evidence beyond click behaviour.
- Read comments next to scores: SUS gives a useful quantitative view, but diagnosis still needs language from users.
- Keep the audience tight: Survey people who used the changed system, not the entire customer base.
The main trade-off is completion. SUS is heavier than CSAT or CES, so response volume usually drops. That is acceptable when the redesign is substantial and important. If you are choosing between a single survey question for customer satisfaction and SUS, use the simpler question for transactional moments and reserve SUS for broader UX decisions. It is a strategic instrument, not a default pop-up.
4. Customer Satisfaction (CSAT) Single-Item Question
If you need one question, fast, use CSAT.
“How satisfied were you with your experience?”
That question remains the cleanest way to capture immediate sentiment about a specific interaction. It is also one of the easiest metrics to thread into an experimentation programme because it is short, familiar, and practical.
In the UK, customer satisfaction surveys have been in formal use for years. The Office for National Statistics launched its first formal Customer Satisfaction Survey in 2007, and by 2008 overall satisfaction with central government services stood at 68%, based on responses gathered quarterly from over 20,000 citizens on a 0 to 10 scale, according to Typeform’s overview of customer satisfaction survey questions.
That history matters because CSAT is not a trendy metric. It is an established one.
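Scoring conventions vary slightly, but a common approach reports CSAT as the share of respondents choosing the top two boxes on a 5-point scale. If you adopt that convention, comparing variants is a few lines of work; the sketch below assumes responses already carry a variant tag and the field names are illustrative:

```typescript
// Sketch of a common CSAT convention: the share of respondents who picked
// the top two boxes (4 or 5) on a 5-point satisfaction scale, split by variant.
// Conventions vary; pick one and keep it stable so trend lines stay comparable.
interface CsatResponse {
  variant: string;
  rating: 1 | 2 | 3 | 4 | 5;
}

function csatByVariant(responses: CsatResponse[]): Record<string, number> {
  const totals: Record<string, { satisfied: number; all: number }> = {};
  for (const { variant, rating } of responses) {
    totals[variant] ??= { satisfied: 0, all: 0 };
    totals[variant].all += 1;
    if (rating >= 4) totals[variant].satisfied += 1;
  }
  const result: Record<string, number> = {};
  for (const [variant, { satisfied, all }] of Object.entries(totals)) {
    result[variant] = Math.round((satisfied / all) * 100); // e.g. 87 means 87% satisfied
  }
  return result;
}
```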

The best use cases for single-item CSAT
CSAT works best when the interaction is specific and recent:
- post-purchase
- after a support interaction
- after viewing test results
- after completing onboarding
- after using a new feature
Do not ask broad, fuzzy versions such as “How satisfied are you with our brand?” unless that is the level you want to measure. Specificity makes the answer more actionable.
Where teams get CSAT wrong
The most common mistake is treating one score as a summary of the whole customer relationship. It is not. CSAT is transactional.
The second mistake is asking too late. If the action happened days ago, memory edits the experience.
The third mistake is collecting the score with no follow-up path. If a user gives a poor rating and nobody investigates, the survey becomes theatre.
A useful benchmark from UK retail shows what a good implementation can look like. In a 2023 SmartSurvey case study, a major e-commerce chain used a post-purchase CSAT question on a 5-point scale with 5,000 customers quarterly. CSAT rose from 72% to 87% within six months after refining timing and deployment, while churn dropped and repeat purchase increased, according to SmartSurvey’s guide to customer satisfaction metrics.
For teams trying to connect satisfaction to business reporting, how KPIs are measured is a useful operational reference.
Keep your CSAT question tied to a real moment. “How satisfied were you with checkout today?” is stronger than “How satisfied are you with us?”
5. Feature-Specific Satisfaction Questions
Overall satisfaction can hide the underlying issue.
A customer may like your product but hate one core feature. Or they may dislike the overall experience because one critical step keeps breaking trust. That is why feature-specific questions matter.
Examples include:
- “How satisfied are you with the speed of setting up a test?”
- “How satisfied are you with the clarity of experiment reporting?”
- “How satisfied are you with variant preview accuracy?”
- “How satisfied are you with integration setup?”
Why this question type is powerful
Feature-level questions make prioritisation easier. They tell product, growth, and engineering teams where satisfaction is being won or lost.
This is particularly useful in experimentation platforms and e-commerce stacks because many frustrations are local, not global. Reporting might be strong while setup is awkward. Checkout customisation might be easy while coupon application is confusing. You need granularity to see that.
An emerging UK angle is especially relevant for teams running experiments. A 2025 Office for National Statistics report on digital business practices found that 62% of UK e-commerce firms conduct A/B tests, but only 18% measure post-test customer satisfaction on variant performance. The same research noted that 27% of respondents reported friction from testing tools harming Core Web Vitals compliance, according to Success Coaching’s article on customer satisfaction survey questions. Treat those figures as one reported 2025 finding rather than an established baseline, and weigh them against your own programme’s data.
How to use feature questions without bloating the survey
Do not ask about every feature on every survey. That creates noise and fatigue.
A better approach is to rotate modules or trigger them contextually. If someone just built a test, ask about setup. If they just reviewed results, ask about reporting. If they contacted support about integrations, ask about documentation and implementation.
A practical framework:
- Pair satisfaction with importance: A low score on a low-value feature matters less than a mediocre score on a mission-critical one.
- Ask after use, not in the abstract: Users answer more accurately when the feature is fresh.
- Map by segment: Shopify merchants, agencies, and internal product teams often value different features.
- Use open text sparingly but consistently: One “what frustrated you most?” prompt often explains the score.
This question type is one of the best ways to stop broad brand sentiment from hiding specific product debt.
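One lightweight way to implement that contextual rotation is a simple mapping from in-product events to a single feature question. The event names and questions below are illustrative, not tied to any specific platform:

```typescript
// Sketch of contextual module routing: each in-product event maps to one
// feature-specific question, so no single survey asks about everything.
const featureSurveyModules: Record<string, string> = {
  "test.created": "How satisfied are you with the speed of setting up a test?",
  "results.viewed": "How satisfied are you with the clarity of experiment reporting?",
  "integration.ticket.closed": "How satisfied are you with integration setup?",
};

function pickFeatureQuestion(event: string): string | null {
  // No relevant event fired recently? Then no survey. Silence beats noise.
  return featureSurveyModules[event] ?? null;
}
```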

6. Expectation vs Reality Gap Question
A lot of poor satisfaction scores are not really about quality. They are about mismatch.
The customer expected one thing, then got another.
That is why expectation-gap questions are useful. A direct version is:
“How did the experience compare with your expectations?”
The response scale can run from “much worse than expected” to “much better than expected”.
What this question uncovers
This question is excellent for diagnosing onboarding, sales messaging, and first-use friction. If your landing page promises speed and simplicity, but the first session feels technical and heavy, expectation-gap data will show that faster than loyalty surveys usually do.
It is also one of the best checks after experimentation-driven content changes. If your winning variant makes stronger promises and improves conversion, but users later report disappointment, you have learned something important. The test increased acquisition efficiency by shifting expectations in the wrong direction.
Best moments to ask it
I like this question at two points:
- right after onboarding or initial setup
- after the first successful outcome, such as a first purchase, first experiment launch, or first report review
Those two moments tell you whether the promise matched the first impression, and whether the value matched the eventual result.
A useful UK benchmark on response format comes from 2021 ONS survey data showing that 72% of UK consumers completed digital satisfaction surveys and preferred single-question formats. The same dataset reported response rates 18% higher for mobile-optimised CSAT than for multi-question forms, as cited in Snap Surveys’ discussion of customer satisfaction questions. That is a strong reminder to keep expectation-gap surveys lightweight, especially on mobile.
If your message overpromises, no amount of UI polishing will fully repair the disappointment. Fix the promise as well as the product.
One practical tip. Share expectation-gap feedback with marketing and sales, not just product. This survey often reveals positioning problems before it reveals design problems.
7. Problem Resolution and Support Quality Questions
Support surveys deserve more precision than many teams give them.
If you ask only “How satisfied were you with support?”, you collapse several different issues into one number. Was the answer slow? Was the agent polite but ineffective? Was the documentation clear but incomplete? You need sharper questions.
Try a small cluster instead:
- “How satisfied were you with the resolution?”
- “How satisfied were you with the speed of support?”
- “How helpful was the documentation or guidance?”
- “Did you leave with confidence that the issue is solved?”
Why support surveys need separation
A customer can like the support agent and still dislike the outcome. Or they can get the right answer and still resent how long it took.
Separating those dimensions changes what your team does next. If speed is weak, fix routing and staffing. If clarity is weak, fix training and docs. If resolution is weak, fix process and product defects.
This matters inside experimentation programmes too. Test velocity often depends on docs, implementation support, tag setup guidance, and reporting explanations. If those support experiences are frustrating, the platform can still look fine in a product demo while adoption stalls.
A practical support-survey model
Use event-based triggers. Send the survey right after a ticket closes, after a chat ends, or after someone exits a help-centre flow.
Keep it short. Two or three scored questions plus one open field is usually enough.
Then review patterns by source:
- Docs-led journeys: Are people satisfied after self-service?
- Agent-led journeys: Are support reps solving the right problem?
- Technical implementation: Are setup instructions causing friction?
- Experiment interpretation: Do customers understand what the results mean?
A useful UK public-sector example shows how sharply support and service friction can shape sentiment. After 2018 surveys showed only 55% satisfaction with NHS waiting times, the government later invested additional funding and scores rose to 78% by 2023, according to Typeform’s discussion of customer satisfaction survey questions. Different sector, same lesson. Operational bottlenecks show up in satisfaction long before dashboards alone explain them.
The trade-off is survey fatigue. If you survey after every support contact, keep the questions narrow and rotate non-essential prompts. Support-quality questions reveal whether operational fixes are helping.
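To make that separation actionable, keep each dimension’s average distinct and flag whichever one slips, rather than blending everything into a single support score. A rough sketch, with an assumed 3.5-out-of-5 threshold:

```typescript
// Sketch: keep support dimensions separate so a weak score routes to the right owner.
// The 3.5 threshold and the dimension names are illustrative assumptions.
interface SupportResponse {
  resolution: number; // 1-5: was the issue actually solved?
  speed: number;      // 1-5: was support fast enough?
  clarity: number;    // 1-5: was the guidance or documentation clear?
}

function weakSupportDimensions(responses: SupportResponse[], threshold = 3.5): string[] {
  const dimensions = ["resolution", "speed", "clarity"] as const;
  const flagged: string[] = [];
  for (const dim of dimensions) {
    const average = responses.reduce((sum, r) => sum + r[dim], 0) / responses.length;
    // Weak speed points at routing and staffing, weak clarity at training and docs,
    // weak resolution at process and product defects.
    if (average < threshold) flagged.push(`${dim}: averaging ${average.toFixed(1)}`);
  }
  return flagged;
}
```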
8. Likelihood to Repurchase or Expand Question
This is not a pure satisfaction question, but it is one of the most commercially useful questions you can ask:
- “How likely are you to purchase from us again?”
- “How likely are you to renew?”
- “How likely are you to upgrade?”
- “How likely are you to run more experiments next month?”
Why this belongs in the stack
Satisfaction without continuation is weak comfort. If you want to know whether a better experience is translating into future value, intent-to-repurchase or intent-to-expand gives you a clear directional signal.
It is especially useful after a meaningful value moment. Post-delivery for retail. Post-onboarding for SaaS. After a successful first test for experimentation products. After a support recovery event for at-risk accounts.
This question helps bridge experience measurement and commercial planning. It also forces teams to ask whether the journey they optimised strengthened future demand.
How to use it well
Context matters more here than with most survey types. Do not ask “How likely are you to buy again?” in a vacuum. Tie it to the recent event.
Better versions look like this:
- “Based on your recent purchase experience, how likely are you to buy from us again?”
- “Based on the results from your recent test, how likely are you to run another experiment this month?”
- “Based on your onboarding experience, how likely are you to expand usage to another team?”
For UK practitioners, there is a useful retention warning attached to low satisfaction. According to 2022 ONS data cited by Snap Surveys’ article on customer satisfaction questions, 82% of dissatisfied UK customers (those scoring CSAT below 5 out of 10) churn within a month, while proactive NPS follow-ups reduced this by 15% in retail. That is a reminder not to isolate intent questions from satisfaction signals. They belong together.
Use this question to shape follow-up actions:
- High intent, high satisfaction: invite reviews, referrals, or expansion.
- Low intent, high satisfaction: probe value, budget, or timing.
- Low intent, low satisfaction: prioritise recovery and direct outreach.
For agencies and in-house teams alike, client engagement strategies can help turn these signals into a more structured follow-up process.
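The routing above is simple enough to encode so nobody has to remember it during follow-up. A rough sketch, using hypothetical 4-out-of-5 thresholds you would calibrate against your own score distributions:

```typescript
// Sketch of the intent-versus-satisfaction follow-up matrix described above.
// Thresholds (4+ on a 5-point scale) are illustrative assumptions.
type FollowUp = "invite_advocacy" | "probe_value_or_timing" | "recovery_outreach" | "monitor";

function routeFollowUp(satisfaction: number, repurchaseIntent: number): FollowUp {
  const satisfied = satisfaction >= 4;
  const intends = repurchaseIntent >= 4;
  if (intends && satisfied) return "invite_advocacy";        // reviews, referrals, expansion
  if (!intends && satisfied) return "probe_value_or_timing"; // value, budget, or timing questions
  if (!intends && !satisfied) return "recovery_outreach";    // prioritise direct recovery
  return "monitor"; // high intent, low satisfaction: watch closely and fix the experience
}
```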
8-Point Customer Satisfaction Question Comparison
| Metric / Question | Implementation Complexity 🔄 | Resource Requirements ⚡ | Expected Outcomes 📊 | Ideal Use Cases 💡 | Key Advantages ⭐ |
|---|---|---|---|---|---|
| Net Promoter Score (NPS) Question | Low - single 0–10 item, easy deploy | Low - minimal tooling; moderate sample for stable benchmark | High-level loyalty and retention signal; comparable to industry norms | Quarterly loyalty tracking; A/B tests for overall satisfaction | Standardised benchmarkability; predictive of growth; low respondent burden ⭐⭐⭐⭐ |
| Customer Effort Score (CES) Question | Low–Medium - needs task-specific context and timely trigger | Low - short survey, requires instrumentation to time prompts | Identifies friction in workflows; strong predictor of repeat behaviour | Onboarding flows; measuring task friction in UX A/B tests | Actionable for reducing friction; predicts retention in service contexts ⭐⭐⭐⭐ |
| System Usability Scale (SUS) Questions | Medium - 10 items plus scoring calculation | Medium - longer survey, modest sample size for reliable comparison | Detailed usability score (0–100) for benchmarking | Major UI redesigns; dashboard or workflow usability A/B tests | Widely validated; multidimensional usability insight; benchmarkable ⭐⭐⭐⭐ |
| CSAT Single-Item Question | Low - single direct satisfaction item; trivial to deploy | Very low - high completion rates; minimal analysis overhead | Immediate, in-the-moment satisfaction snapshot | Post-event feedback (reports, exports, declared winners) | Fast, high response rate; actionable for quick fixes ⭐⭐⭐ |
| Feature-Specific Satisfaction Questions | Medium - multiple targeted items per feature | Medium - longer surveys, segmentation and importance ratings | Granular, diagnostic feedback to prioritise feature work | Prioritising roadmap; A/B testing alternative feature implementations | Highly actionable and focused; supports priority decisions ⭐⭐⭐⭐ |
| Expectation vs. Reality Gap Question | Low–Medium - best used at staged touchpoints | Low - needs segmentation by acquisition/onboarding source | Reveals misalignment predictive of churn; informs messaging | Onboarding optimisation; testing marketing and trial messaging | Strong churn predictor; guides marketing and onboarding alignment ⭐⭐⭐ |
| Problem Resolution & Support Quality Questions | Low - triggered after support interactions | Low–Medium - integrates with support systems and ticketing | Measures resolution satisfaction and support channel effectiveness | Improving docs, support flows, and post-ticket experience | Directly impacts retention; identifies weak support channels ⭐⭐⭐ |
| Likelihood to Repurchase / Expand Question | Low - simple intent question, easy to include | Low - needs segmentation and linkage to usage/revenue data | Forward-looking signal of renewal and expansion potential | Revenue expansion campaigns; identifying high-potential accounts | Predictive of revenue outcomes; aids sales/prioritisation ⭐⭐⭐⭐ |
From Data Points to Data-Driven Decisions
A strong survey question for customer satisfaction does not fix anything on its own. It only becomes valuable when it changes decisions.
That is the shift many teams need to make. Stop treating surveys as a reporting layer that sits outside experimentation. Treat them as part of experiment evaluation itself. If a variant improves conversion but worsens effort, trust, or perceived usability, you have not found a better experience. You have found a trade-off. Sometimes that trade-off is acceptable. Often it is not. The point is to see it clearly before rollout, not after revenue quality drops or churn rises.
Different question types serve different jobs. NPS is useful for loyalty and advocacy. CES is stronger for friction-heavy tasks. CSAT is ideal for transactional moments. SUS helps when you redesign a broader workflow. Feature-specific questions surface localised pain. Expectation-gap questions catch overpromising. Support-quality questions reveal whether operational fixes are helping. Repurchase and expansion intent tie the experience back to commercial outcomes.
The biggest practical mistake is asking the wrong question at the wrong moment. Teams ask NPS too early, CSAT too broadly, and feature questions without enough context. The result is data that looks neat in a dashboard but does not help anyone choose what to ship next. Tight timing solves much of that. Ask after a clear event. Keep the wording neutral. Tie the response to a variant, user segment, and downstream behaviour. Then review the results alongside the metric you originally cared about.
The second mistake is letting qualitative feedback drift away from quantitative analysis. When survey comments live in one tool and experiment results live in another, nobody sees the full picture. You need the score, the comment, the variant, and the business outcome in the same decision conversation. That is where tools for automated customer feedback analysis can help, especially once response volume rises beyond what one person can read manually every week.
There is also a discipline issue here. A good experimentation programme should define in advance what counts as an acceptable experience trade-off. If a checkout variant lifts immediate conversion but introduces visible hesitation in post-purchase satisfaction, will you still ship it? If a support flow reduces abandonment but harms trust in the outcome, what threshold triggers a rollback? Those rules are easier to set before the result appears than after stakeholders fall in love with the conversion lift.
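One way to hold that line is to write the rule down as code before the test launches. The thresholds in the sketch below are placeholders, not recommendations; the value is that they were agreed in advance:

```typescript
// Sketch of a pre-registered rollout rule: decide the acceptable trade-off
// before the result exists. All thresholds here are illustrative placeholders.
interface VariantReadout {
  conversionLiftPct: number; // relative lift vs control, e.g. +4.2
  csatDeltaPts: number;      // percentage-point change in post-purchase CSAT
  effortDelta: number;       // change in mean CES (positive = easier)
}

function rolloutDecision(r: VariantReadout): "ship" | "hold_for_review" | "do_not_ship" {
  // Rule agreed before the test: never ship a conversion win that costs
  // more than 3 CSAT points, and escalate anything that makes tasks harder.
  if (r.csatDeltaPts < -3) return "do_not_ship";
  if (r.effortDelta < 0) return "hold_for_review";
  return r.conversionLiftPct > 0 ? "ship" : "hold_for_review";
}
```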
If you use Otter A/B, the practical opportunity is straightforward. Run your page or flow test, define your primary business metric, and add a satisfaction layer that matches the moment. Review winners not just at statistical significance, but at experiential quality too. That habit produces fewer false wins and better long-term learning.
Optimisation work is full of seductive local maxima. Satisfaction surveys help you avoid locking them in. They give your team a way to ask not only whether users converted, but whether they felt good about doing it. That is the difference between a variant that extracts action and one that earns loyalty.
If you want to tie experiment results to both conversion and customer experience, Otter A/B gives teams a practical way to test headlines, CTAs, and layouts while tracking business outcomes per variant. Add satisfaction surveys around the moments you test, compare the experience between variants, and make rollout decisions with a fuller view of what “winning” means.
Ready to start testing?
Set up your first A/B test in under 5 minutes. No credit card required.