NPS, CSAT or CES: how to track what really matters in CX
CSAT, NPS and CES are the go-to metrics for measuring customer experience – but most brands are using them wrong. What do these scores really mean? Where do they fall short? And how do you use them together with intent?
Customer experience metrics are broken – and this is why that matters
CSAT, NPS and CES have become the dominant language of customer experience – but they rarely tell the full story. They appear in quarterly reviews, executive dashboards and agency reports. They offer a seemingly objective way to track how customers feel, and a convenient method for comparing performance across teams, regions or time periods. But this reliance on numerical scores has led to a deeper problem – one few brands are willing to confront.
Customer experience is not and never has been a fixed variable. It is situational, emotional and context dependent. Yet – the tools used to measure it are often blunt, oversimplified and misapplied. Metrics designed to signal areas of focus have become mistaken for truth. A positive NPS score is taken as proof of advocacy, even when repeat purchase behaviour tells a different story. A high CSAT is presented as evidence of customer satisfaction – without understanding what, precisely, the customer was satisfied with – or why.
This matters, because these scores shape decision making. They inform hiring, training, channel investment, even brand strategy. When the underlying measurement is shallow or misinterpreted, the consequences ripple outward. Brands find themselves optimising for the score rather than the experience, unable to explain performance trends or act meaningfully on customer feedback. And slowly, a critical disconnect emerges between what the data says and what the customer feels.
The right CX strategy uses all three metrics
Each of the big three customer experience metrics – Customer Satisfaction (CSAT), Net Promoter Score (NPS) and Customer Effort Score (CES) – serves a specific purpose. The problem is not that these metrics are flawed, but that they are routinely expected to answer the wrong questions.
What is CSAT (Customer Satisfaction)?
CSAT is the most transactional of the three. It measures how satisfied a customer was with a particular experience – usually immediately after a support interaction, a delivery or a purchase. It’s familiar and widely used. But CSAT is only as useful as the question it follows. Ask it too generically (“how satisfied were you with our service?”) and you get data with no direction. Ask it at the wrong moment, and it tells you nothing about the outcome that really matters. CSAT is best used to validate the quality of specific touchpoints, not to infer broader customer loyalty or emotional connection.
CSAT is typically asked as “how satisfied were you with (interaction/product/service/etc)” and responses are recorded using a 1-5 or 1-10 scale. Scores of 4 and 5 (or 8-10) are counted as “satisfied” when calculating the overall percentage.
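The calculation above can be sketched in a few lines. This is a minimal illustration, assuming a 1-5 response scale where scores of 4 and 5 count as “satisfied”; the function name and sample data are hypothetical.

```python
def csat_percentage(responses):
    """Percentage of respondents scoring 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in responses if r >= 4)
    return 100 * satisfied / len(responses)

# Example: 4 of 6 respondents are "satisfied"
print(round(csat_percentage([5, 4, 3, 5, 2, 4]), 1))  # 66.7
```

On a 1-10 scale the same logic applies, with the satisfied threshold moved to 8.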
What is NPS (Net Promoter Score)?
NPS aims higher. It measures the likelihood of your customer recommending your brand to others – a proxy for long-term brand affinity. It’s a useful pulse check on how your business is perceived over time. But as a single data point, it’s prone to distortion. Timing, customer expectations and even unrelated frustrations can skew the result. Many brands also ask the NPS question without segmenting the data meaningfully – grouping new, inactive and infrequent users into one indistinct score. NPS works best as a long-term strategic indicator, but it needs to be paired with more immediate feedback to surface actionable insight.
The standard NPS question is “how likely are you to recommend our company to a friend or colleague?” Customers respond on a 0-10 scale and are grouped into detractors (0-6), passives (7-8) and promoters (9-10). The final score is calculated by subtracting the percentage of detractors from the percentage of promoters, giving a result between -100 and +100.
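The standard calculation can be sketched as follows – a minimal illustration using the detractor/passive/promoter bands described above; the function name and sample scores are hypothetical.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 2 promoters and 2 detractors out of 6 cancel out
print(nps([10, 9, 8, 7, 6, 3]))  # 0.0
```

Note that passives (7-8) lower the score only by diluting the promoter percentage – which is one reason a single headline number can hide very different score distributions.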
What is CES (Customer Effort Score)?
CES focuses on friction. It asks customers how easy it was to complete a task – typically used to measure support channels, onboarding journeys or digital processes. In many ways, CES is the most predictive of future behaviour – customers who experience high effort are more likely to churn. But effort is subjective – a quick resolution that leaves a customer feeling unheard may still score well. A process that feels intuitive to one user may feel alienating to another. CES is a valuable lens – but like the others, it needs context to be useful.
CES is typically phrased as “how easy was it to resolve your issue today?” and measured using a 1-5 or 1-7 Likert scale. Scores are averaged, and lower effort scores generally correlate with higher retention and satisfaction.
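Unlike CSAT and NPS, CES is usually reported as a simple mean rather than a percentage. A minimal sketch, assuming a 1-7 scale where higher means easier; the function name and sample responses are hypothetical.

```python
def ces(responses):
    """Average Customer Effort Score across a set of 1-7 responses."""
    return sum(responses) / len(responses)

# Example: four responses on a 1-7 scale
print(ces([6, 7, 5, 6]))  # 6.0
```

Because the mean hides outliers, it is worth tracking the share of low-effort responses (e.g. 1-3) alongside the average.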
Why you should be using CSAT, NPS and CES together
None of these metrics is inherently wrong – each one is designed to answer a different type of question. The problem arises when they’re treated as interchangeable, or worse, when a single score is presented as the definitive measure of customer experience.
The reality is that a mature customer experience strategy requires layered measurement. CSAT for tactical validation, CES for operational friction and NPS for strategic sentiment. Together and only together, they can form a useful picture. Used intelligently and in the right combinations, these metrics can provide real insight.
Yet most businesses don’t do this. According to CustomerGauge, only 49% of NPS users also use CSAT. To get the best from these metrics – a narrative, and an action – you should be using all three in conjunction with one another.

CSAT, NPS and CES – how most brands get CX measurement wrong
Every brand tracks something – customer satisfaction, likelihood to recommend, effort scores. The acronyms vary, but the story is the same – collect the number, compare it to last quarter, hope it improves.
This approach misunderstands what these metrics are, and what they’re not.
CSAT tells you whether a customer was satisfied with a specific interaction. That’s it. It doesn’t tell you if they like your brand. It doesn’t tell you if they’ll stay. It tells you if they got what they wanted, in that moment. Yet – brands routinely present rising CSAT scores as if they were a sign of long-term loyalty.
NPS measures intent to recommend, but it’s shaped by far more than service performance. It’s influenced by brand perception, expectations, pricing, product relevance, even mood. Using it to judge how your support team is doing, in isolation, is a category error.
CES gets closer to something predictive – effort really does correlate with loyalty – but it’s still only one lens. And like the others, it’s too often detached from context. What felt “easy” for one customer may feel degrading or disempowering to another.
These metrics were never meant to stand alone – but that’s exactly how they’re used, stripped of nuance, used to justify decisions, and celebrated or blamed without real interrogation. What gets missed is the emotional texture behind the score – the contradiction between what the data says and what the customer actually feels. The reason one customer gives you a ten and still churns six weeks later, while another gives you a six and sticks with you for years.
The failure here is not necessarily in the tools – it’s in the way most brands use them; to prove performance, not understand experience.
What are you really measuring in customer experience – and why?
If metrics are the tools, then intent is the blueprint. And yet too many businesses begin with the question “what should we measure?” when they should be asking “what are we trying to learn?”
This distinction is important – tracking CSAT because it’s standard or NPS because the board expects it is not a strategy. Without clarity of purpose, even the most sophisticated dashboards become performative – exercises in data collection with no meaningful insight behind them. What exactly do you need to understand about your customer experience? Are you trying to gauge the emotional impact of a service interaction? Are you looking for signs of long-term loyalty? Are you trying to surface friction points in a digital journey? Each of those objectives demands a different approach to measurement, and often, different combinations of metrics entirely.
But intent is rarely interrogated. Metrics get chosen for their familiarity or ease of benchmarking, not for their alignment to a defined question. The result is a proliferation of CX data that looks impressive but doesn’t drive understanding. Leaders review scores but struggle to explain what they mean, how they connect, or more importantly – what action they demand.
Good measurement starts with an important question: “what matters most to our customers right now?” What moments in the journey shape how they feel about you, and what signals might predict future behaviour?
You shouldn’t measure to report – you should measure to act
The purpose of customer experience measurement is not to tick a box or satisfy a stakeholder – it’s to uncover the truth of how your customers feel, where your systems are falling short and what needs to change. And that only happens when measurement leads to action.
This is where interpretation matters. A drop in NPS isn’t a crisis – it’s a signal. A high CES score doesn’t mean your journey is perfect – it may just mean your customers are forgiving. Real insight emerges not from the metric itself, but from the questions you ask of it, and the decisions it prompts.
The most effective CX teams are not those who report the best scores, but those who embed customer insight into their operations. They use feedback to inform product roadmaps, to improve agent training and to redesign digital journeys – because the value of measurement lies not in what it shows, but in what it changes.
The fundamental flaw in how most companies use CX metrics in 2025 is that measurement becomes an administrative process – a way to prove that feedback was collected, rather than a means of driving meaningful change. The risk isn’t just inertia; it’s the illusion of progress. When metrics are reported in isolation, without interrogation or action, they stop being tools for improvement and become part of the performance theatre.
This is what we at Ventrica help brands avoid – because we help you turn customer experience measurement into a living, operational practice. We work with brands to interpret what the data is really saying, to uncover the emotional truths behind the numbers and to embed those insights into daily decision making.
If measurement doesn’t lead to action, it isn’t CX – it’s just admin.
Ventrica’s approach to customer experience metrics – insight, emotion and action
As we’ve discussed, customer experience measurement is only valuable when it drives meaningful change. That’s the standard we hold ourselves to at Ventrica. As a partner to brands with complex customer journeys and high expectations, we don’t simply track metrics. We interpret them, connect feedback with feeling, and use that understanding to help our clients act with precision.
We help brands get more from their CX metrics by:
- Interpreting CSAT, NPS and CES in context – connecting scores with journey stages, customer segments and operational realities
- Combining structured data with real human insight, using frontline feedback and qualitative signals to uncover what the numbers can’t
- Embedding actionable insights into operations – influencing everything from agent coaching to process improvement and channel strategy
- Designing smarter customer satisfaction surveys, with intentional questions, clean targeting and minimal friction
- Turning feedback into continuous improvement, supporting agile delivery teams, service owners and CX leaders with decision-ready intelligence
- Bridging the gap between emotion and efficiency, balancing performance KPIs with empathy, tone and brand experience
And critically, we bring insight, emotion and action together, embedding measurement into how our clients operate – not just how they report. By interpreting CX data through a human lens, we turn feedback into escalation logic, journey improvements and frontline coaching. This is how we help brands deliver truly emotive CX – experiences that are not only efficient but felt – and remembered.
If your customer experience strategy is still built around standalone scores, it’s probably time for us to have a chat.
Frequently asked questions (FAQs)
How are CSAT and NPS different in what they measure?
Customer Satisfaction Score (CSAT) measures how satisfied a customer is with a specific interaction, product or service – usually immediately after the event. Net Promoter Score (NPS), by contrast, gauges overall brand loyalty by asking how likely a customer is to recommend the company to others.
Where does CES fit alongside NPS?
CES (Customer Effort Score) captures how easy or difficult it was for a customer to complete a task, such as resolving an issue or finding information. While NPS reflects emotional loyalty over time, CES highlights operational friction points that may erode trust and satisfaction.
What sets OSAT apart from CSAT?
OSAT (Overall Satisfaction) evaluates the customer’s holistic view of their experience with a brand or service over time. CSAT, however, focuses on moment-specific satisfaction – typically after a single interaction or transaction.
Is CSAT the same as the Customer Satisfaction Index?
No – CSAT is a short-term, tactical metric that measures satisfaction after individual touchpoints. The Customer Satisfaction Index (CSI) is a broader, strategic benchmark that reflects long-term perceptions across multiple aspects of the customer experience.
When should you use CES, CSAT and NPS together?
These three metrics are most effective when used in combination: CES identifies friction, CSAT evaluates satisfaction with recent events, and NPS measures long-term loyalty. Together, they provide a layered view of customer sentiment across operational, emotional, and relational dimensions.

Iain Banks
CEO
Let’s take the guesswork out of your CX performance
Book your free Zendesk health check and get expert, honest insight – no obligation, no pressure.