Metrics

NPS vs other customer satisfaction metrics: which one to use when

2024-08-12 · 6 min read

A founder asked us last month why his NPS was a 60 while his CSAT was an 89. He thought one of them was wrong. Neither was. They measure different things, and using them interchangeably is one of the most common mistakes in customer feedback.

This is the short tour. Three popular metrics. What each one is good at. When to reach for which.

Net Promoter Score

NPS asks one question: "How likely are you to recommend us?" Scale of 0 to 10. You group the answers into promoters (9, 10), passives (7, 8), and detractors (0 through 6), then subtract the percentage of detractors from the percentage of promoters.
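The arithmetic is simple enough to sketch. Here is a minimal illustration (the helper name `nps` and the sample responses are hypothetical, not from any particular survey tool):

```python
# Hypothetical helper: compute NPS from a list of 0-10 responses.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
responses = [10, 9, 9, 10, 9, 7, 8, 7, 5, 3]
print(nps(responses))  # 50% promoters - 20% detractors = 30
```

Note that passives count toward the denominator but neither add to nor subtract from the score, which is why a wall of 7s and 8s produces an NPS of zero.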

NPS is a relationship metric. It captures how a customer feels about you in general, not how they felt about a specific interaction yesterday. The score moves slowly because the underlying feeling moves slowly. That is a feature, not a bug.

NPS works best for tracking the trajectory of customer sentiment over time and for benchmarking against your peers. It is excellent for board reporting, segmentation analysis, and identifying which customers are likely to refer you.

It is weaker as a diagnostic tool. A 6 tells you a customer is unhappy. It does not tell you why. You need the open-ended follow-up question, and you need someone to read the responses, before NPS becomes actionable.

Customer Satisfaction Score

CSAT asks how satisfied a customer was with a specific thing. A purchase. A support ticket. A delivery. A feature. The scale is usually 1 to 5, sometimes 1 to 7. The score is the percentage of respondents who answered with the top one or two options, depending on how strict you want to be.
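The top-box choice is the only real decision in the calculation. A minimal sketch, assuming a 1 to 5 scale (the helper name and sample ratings are illustrative):

```python
# Hypothetical helper: CSAT as the share of top-box responses.
# top_n=2 counts 4s and 5s on a 1-5 scale; top_n=1 is the strict version.
def csat(scores, scale_max=5, top_n=2):
    threshold = scale_max - top_n + 1
    satisfied = sum(1 for s in scores if s >= threshold)
    return round(100 * satisfied / len(scores))

ratings = [5, 4, 5, 3, 5, 4, 2, 5]   # 1-5 post-support survey
print(csat(ratings))                  # top-two-box: 6 of 8 -> 75
print(csat(ratings, top_n=1))         # strict top-box: 4 of 8 -> 50
```

Whichever version you pick, pick it once and keep it; switching between top-box and top-two-box mid-year makes the trend line meaningless.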

CSAT is a transactional metric. It tells you whether a particular interaction met expectations. The score moves quickly because each survey is anchored to one event.

CSAT is the right metric when you want to know whether a specific touchpoint is working. After a support chat. After a return. After an onboarding session. Field it within minutes of the event, while the experience is fresh, and you get clean signal.

It is weaker as a long-term loyalty indicator. A customer can have a great support experience and still cancel their subscription, because the issue was elsewhere. A high CSAT does not mean a healthy relationship.

Customer Effort Score

CES asks how easy it was for the customer to do something. "How easy was it to resolve your issue today?" or "How easy was it to find what you were looking for?" The scale varies, but a common version runs from "very difficult" to "very easy" on a 1 to 7 scale.
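Unlike NPS, CES is usually reported as a plain average of the responses (an assumption here; some tools report a top-box percentage instead). A minimal sketch on the 1 to 7 scale:

```python
# CES reported as the mean response on a 1-7 scale
# (assumption: your survey tool may aggregate differently).
def ces(scores):
    return round(sum(scores) / len(scores), 1)

# "How easy was it to resolve your issue today?"
# 1 = very difficult, 7 = very easy
effort = [6, 7, 5, 6, 7, 4, 6]
print(ces(effort))  # -> 5.9
```

Because the score is an average, a handful of "very difficult" responses drags it down fast, which is exactly the sensitivity you want from a friction detector.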

CES is the most predictive of the three for repeat purchase and retention, especially in support contexts. The reasoning is simple. Customers do not remember most of the times you delighted them, but they remember every time you made them work. Friction is what gets people to leave.

CES is the right metric when you are trying to identify points of friction in a customer journey. Returns, refunds, password resets, finding documentation, getting to a human. If those are hard, customers churn. CES will catch it before NPS or CSAT do.

It is weaker for measuring overall happiness or advocacy. A frictionless experience is necessary but not sufficient.

When to use which

Here is a clean way to think about it.

Use NPS for the quarterly relationship check-in, board reporting, segment analysis, and identifying advocates worth turning into a referral pipeline. Run it once a quarter or twice a year.

Use CSAT after specific events you want to evaluate. Post-purchase, post-support, post-onboarding. Field it immediately, keep the survey to one or two questions, and use the data to find which events are dragging the experience down.

Use CES when you are looking for friction. Particularly in support, returns, account management, and any other process where customers have to put in effort to get value. CES is your early warning system for churn.

Most mature programs use all three. They are not redundant.

A worked example

A subscription box business we worked with had this profile.

NPS was 38. Steady. Not great, not bad.

CSAT after support tickets was 92. The team was excellent.

CES after the cancellation flow was a 3 out of 7. Customers were saying it was hard to cancel.

The interesting part is what that combination tells you. Support is great. Customers like the brand fine. But the cancellation flow is hostile, and that hostility was leaking into the NPS responses, suppressing the relationship score even though no individual interaction looked broken.

We rebuilt the cancellation flow to be one click. CES went to 6. NPS climbed 9 points over the next two quarters. CSAT did not move because it was already strong. Three metrics, three roles.

Don't average them. Don't pick one.

The most common bad practice is rolling NPS, CSAT, and CES into a single "customer health" composite. Do not do that. You lose the diagnostic power of each metric. They are designed to answer different questions. Keep them separate, look at all three, and make decisions about each.

The second most common bad practice is picking only one. Some teams only run NPS. Others only run CSAT. They miss the things their chosen metric was not designed to catch.

Who actually runs the program

Running three customer feedback streams in parallel is not hard, but it is steady work. Sending the right survey at the right moment. Reading responses. Tagging themes. Closing the loop with anyone who left a low score. Reporting monthly to the people who can act on it.

Most operators we talk to want this work happening. They do not want to do it themselves. A senior CS agent can run NPS, CSAT, and CES end to end, including the diagnostic write-ups that turn raw scores into a list of things to fix. That is the work our team handles for clients every day, across software, ecommerce, subscription, and service businesses.

Ready to talk?

If you are running, or trying to run, a customer feedback program and want a senior agent to take it from here, we should chat.

Book a Discovery Call

30 minutes. No commitment. No credit card. You'll talk directly with our founding team.