The 12 UX KPIs you should be across, and why business and growth metrics don't cut it

By Robert

Business and growth metrics are too macro for UX teams to work with, and this can be as disillusioning as it is unhelpful. It’s like a bonus KPI tied to something you can’t influence other than by pedalling harder. UX KPIs bring the focus down to how UX is, or is not, working.

Key points:

  • UX KPIs are specific to UX and UI teams.
  • They should be standardised, measurable, and comparable over the long term.
  • The business and UX both get the upside.

Business and growth metrics are helpful for gauging the overall growth of a digital product, though they are neither valuable for guiding UX nor particularly helpful to UX and UI teams.

UX metrics solve this and help demonstrate micro and macro, standardised, long-term improvement.

Conclusion: keep your fingers on every pulse. Letting UX run without specific metrics is letting it run wild.

Product and business KPIs don’t measure UX.

One of the frustrations of working in UX is that product and business KPIs (such as ROI, growth metrics, etc) do not inform or instruct UX performance improvement.

Product and business KPIs aggregate the sum of all efforts, which is great when measuring overall growth, though unhelpful when measuring individual UX changes and whether they have delivered an improvement.


Introducing Behavioural UX KPIs.

Behavioural UX KPIs (or UX KPIs) are key performance indicators that provide objective insight into UX progress. They allow one to see where goals are being met and where further improvement is required.

Whilst good UX designers will know and measure the metrics they’re trying to change, UX KPIs standardise these metrics, allowing progress to be measured over time.

The benefits are threefold:

  1. Monitor progress over time: Whether you have a UX strategy or are tactical and responsive, standardising your KPIs lets you demonstrate consistent progress over time.
  2. Identify issues more easily: By standardising your UX KPIs, you can quickly identify problem areas caused by a UX change or another change outside of UX (such as a pricing or content change).
  3. Demonstrate success: It’s easy to point fingers at UX as the culprit, and UX teams have traditionally failed to use data strategically. By standardising and consistently publishing your UX KPIs, value is demonstrated and, hopefully, rewarded through UX investment.

Also, introducing Attitudinal UX KPIs.

Whereas Behavioural UX KPIs are objective, Attitudinal UX KPIs are subjective.

Attitudinal KPIs offer insights into users’ thoughts and perceptions of the usability of your designs and products, scoring areas such as ease, consistency, and learnability.

The Net Promoter Score (NPS) is a well-recognised example of an Attitudinal UX KPI.

Whilst the data collected through Attitudinal UX KPIs is subjective, that is not to say that it isn’t quantitative or beneficial. Whilst pure data people might point to Behavioural UX KPIs being the gold standard, those in UX and UI design see many benefits in Attitudinal UX KPIs:

  1. Attitudinal UX KPIs enhance ‘user-centered design’ evaluation by strongly emphasising the alignment of UX and system design with users' needs and expectations.
  2. By standardising questions and numerical rating systems, the data is quantitative and consistent, allowing for the direct comparison of the usability of different interfaces and systems; and tracking progress over time - just like Behavioural UX KPIs.
  3. Asking questions is simple and fast, and the data is quickly captured and collated. (Because of this speed and simplicity, the format encourages engaged user participation, which results in more data.)
  4. Because of their speed and simplicity, Attitudinal UX KPIs facilitate a more iterative design process supported by data.

The Top 6 Behavioural UX KPIs

Choosing the Behavioural UX KPIs that work for you and your product depends on what you want to measure and why.

This will depend on the UX challenge you are trying to solve, such as improving task completion rates.

1. Conversion rate

The conversion rate is a standard, top-line metric within businesses, though it should be used as a UX KPI in both a macro and micro sense.

At the macro level, understanding the conversion rate(s) of a website or app provides a great metric that can unite a product team/business and point to the general health and direction of a product or app.

However, as with many business and growth metrics, it’s blunt and all-encompassing: seasonality or product ranging (to name just two factors) could affect a website's conversion rate(s), and both are completely outside the hands of UX.

At a micro-level, however, the conversion rate can be meaningful if it focuses on areas in the hands of UX: users entering a point of conversion (e.g., checkout, form, download, registration) and those successfully converting and exiting.

Whilst the conversion UX KPI overlaps with other UX KPIs, its name is understood by everyone: it’s just up to UX to ensure that their own ‘conversion rate’ is not caught up in the bigger, business-wide conversion rate metric.
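As a sketch, the micro conversion rate for a single conversion point is simply the number of users who successfully exit divided by the number who enter; the figures below are illustrative:

```python
def conversion_rate(entries: int, completions: int) -> float:
    """Micro conversion rate: the share of users who entered a
    conversion point (e.g. checkout) and successfully exited it."""
    if entries == 0:
        raise ValueError("no users entered the conversion point")
    return completions / entries * 100

# e.g. 1,200 users entered checkout and 420 completed payment
print(f"{conversion_rate(1200, 420):.1f}%")  # 35.0%
```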


2. Task Success Rate

The task success rate is a popular UX KPI and tells us the percentage of users that completed the task presented to them, e.g.:

  • Registering for an event.
  • Configuring an item.
  • Adding warranty to an item.

It is a statistical measure that provides confidence in how likely a target audience is to complete the task.

To measure the task success rate, we divide the number of correctly completed tasks by the total number of attempts.

This, in turn, gives a percentage of users who could complete the task.
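A minimal sketch of the calculation (the counts are illustrative; 39 of 50 happens to give 78%):

```python
def task_success_rate(completed: int, attempts: int) -> float:
    """Percentage of task attempts that were completed successfully."""
    return completed / attempts * 100

# e.g. 39 of 50 usability-test participants registered for the event
print(f"{task_success_rate(39, 50):.0f}%")  # 78%
```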

As with all goals, you want a clearly defined task, agreement on what completion looks like, and an expected/intended path for users to complete it, so the measurement is fair and accurate.

💡 78% is a good task success rate! (MeasuringU)

3. Average Time on Task

As the name implies, the Average Time on Task UX KPI tells us the average time a user takes to complete a task.

In UX, time is generally* regarded as money; the faster users can complete a task, the better.

Every task is, of course, different, though to improve this metric, we look at two factors:

  • Removing friction (and anxiety) from the task: i.e. simplifying the task.
  • Providing a cognitive incentive (motivation) to complete the task per the Goal-Gradient Effect Law of UX.

The calculation here is the aggregate time spent by all users completing the task divided by the total number of users who completed the task.

* Generally, though not always: if your goal is engagement and greater time within a task, then you want your average time on task to go up, not down.
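The calculation can be sketched as follows; the task times are illustrative:

```python
def average_time_on_task(completion_times: list[float]) -> float:
    """Aggregate time spent by all users who completed the task,
    divided by the number of users who completed it."""
    return sum(completion_times) / len(completion_times)

# one entry (in seconds) per user who completed the task
times_seconds = [42.0, 55.5, 38.2, 61.3, 47.0]
print(f"{average_time_on_task(times_seconds):.1f}s")  # 48.8s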

4. Search vs Navigation

I’ve included this metric due to its legacy more than anything.

The Search vs. Navigation UX metric has historically measured a website's navigability and the effectiveness of its information architecture.

Users are asked to find a page or piece of content on a website (or app). Those who successfully complete the task are divided into those who used the website's navigation and those who used the website's search capability.

The critical metric is - or was - the ratio of users successfully utilising navigation vs search, which indicates the effectiveness of the website’s navigation and information architecture.

Tools such as Treejack are also excellent and inexpensive ways of testing the effectiveness of information architecture.


I call this metric legacy because Search vs Navigation was a popular UX KPI a few generations ago, though it is used less and less today:

  • Fewer websites have internal search today than a few generations ago.
  • Navigation is used less on mobile devices, and mobile is typically the majority device on most websites: users scroll to discover and navigate rather than reach for navigation as the default.
  • Today, many users would search for the page or content through Google and wouldn’t rely on either Search or Navigation.

This is not to say that Search vs Navigation is redundant. The Search vs Navigation UX metric is essential if you’re building a website for the Sydney Museum with a vast information architecture.

Otherwise, ensure that the content you want users to find is indexed in Google or signposted for both Desktop and Mobile users.


5. Error rates

Tracking the errors made by users attempting to complete a task is a valuable insight into where your UX could be improved.

We’ve all put our credit card numbers into the name field, and this is an excellent example of a UX error we should want to eliminate.

Tools like Hotjar and observational studies offer a great way to gain initial insights into user errors.

Once you know the error UX KPIs you want to address and improve, you can determine how you want to measure them.

a. Error Occurrence Rate

If you only want to track one error, the Error Occurrence Rate UX KPI is your go-to.

You divide the number of errors by the number of times the error could have occurred.

b. Error Rate

The Error Rate UX KPI is your go-to if you are after an aggregate of errors.

Here, we divide the number of errors by the total number of task attempts.
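Both calculations can be sketched together; the error counts are illustrative:

```python
def error_occurrence_rate(errors: int, opportunities: int) -> float:
    """Single-error metric: occurrences of one specific error divided
    by the number of times that error could have occurred."""
    return errors / opportunities * 100

def error_rate(total_errors: int, task_attempts: int) -> float:
    """Aggregate metric: errors of any kind divided by total attempts."""
    return total_errors / task_attempts * 100

# e.g. 12 card-number-in-the-name-field errors across 300 submissions
print(f"{error_occurrence_rate(12, 300):.0f}%")  # 4%
# e.g. 45 errors of any kind across 300 task attempts
print(f"{error_rate(45, 300):.0f}%")  # 15%
```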


6. Misclick Rate

The Misclick Rate is perhaps one of the more interesting metrics here because of the sheer number of misclicks you see on almost every website.

It’s remarkable when visualised.

To be fair, users click their mouse as they read through the content, though heatmaps will quickly show you where multitudes of users are clicking. If what they are clicking is not clickable, you have a problem.

Don’t underestimate this metric.
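One simple way to quantify the misclick rate from a click log is the share of clicks landing on non-clickable elements. The log schema below is an assumption for illustration; in practice the data would come from your heatmap or analytics tool:

```python
# Hypothetical click log: (element_id, is_clickable) pairs.
clicks = [
    ("nav-home", True), ("hero-image", False), ("hero-image", False),
    ("cta-signup", True), ("pricing-text", False), ("nav-docs", True),
]

def misclick_rate(click_log: list[tuple[str, bool]]) -> float:
    """Share of clicks that landed on non-clickable elements."""
    misclicks = sum(1 for _, clickable in click_log if not clickable)
    return misclicks / len(click_log) * 100

print(f"{misclick_rate(clicks):.0f}%")  # 50%
```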


The Top 6 Attitudinal UX KPIs

Insights from customers can be just as informative as raw data.

The Behavioural UX KPIs tell you what users are doing. The Attitudinal UX KPIs potentially tell you why and the attitudes users have towards your product and service.

The two types of UX KPIs are complementary.

1. Net Promoter Score (NPS)

The NPS probably doesn’t need much of an introduction.

The NPS measures how likely users are to recommend your product or service: user or customer loyalty.

You can then use this data to inform your product’s direction and strategy, and also to mitigate churn (as one example). If a user goes from a promoter (someone who rates your product highly) to a detractor, you have an indicative insight into their declining feelings about your product and an opportunity to intervene.
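For reference, the standard NPS calculation is the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6); a minimal sketch, with illustrative responses:

```python
def nps(scores: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    on the 0-10 'how likely are you to recommend us?' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100)

responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]
print(nps(responses))  # 20
```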

Of course, just because people say something doesn’t mean they do it: someone who says they would refer you may never do so, and subscribers to local newspapers often subscribe through gritted teeth.

Though as a tool for gathering higher-level user sentiment on the perceptions and usability of your product, paired with some of the Behavioural UX KPIs outlined above, the NPS is a valuable, long-term, benchmarkable measure of how users are travelling.

A challenge with the NPS is that it is one of those big business metrics that vacuums up everything.

However, UX can generally pluck good insights from each survey.

This article isn’t to guide on how to run an NPS or the score you should be looking for (I’d need another article just for that), though as a long-term benchmarking tool, the NPS is a good place to start, especially for subscriptions and other such products.


2. System Usability Scale (SUS)

The System Usability Scale (SUS) is an effective, cheap, and cheerful way to gain insight into users' feelings about your product's usability.

There is an argument that asking non-UX/Product people to rate usability and provide usability insights is a mistake, though this isn’t one of those circumstances.

Home truths are just that, and here is where we can get standardised data from users on the scale:

Strongly disagree → Strongly agree.

I won’t go into how to score a SUS, though here is an example of some of the ten or so questions you’d ask:

  • I want to use this product frequently.
  • I thought this product was easy to use.
  • I felt confident using the product.


Standardised and scaled over time, you can easily observe changes in perception. As a UX KPI, it might be macro, though it’s also instructive.


3. Customer Effort Score (CES)

The NPS and SUS are macro, but the CES gets into the weeds: here, we ask users how much effort an interaction took, at a point in time and often straight after a specific transaction.

Like the NPS and SUS, the data is easy to collect, though it is far more pinpointed and in the moment, which can be a real upside.

  • “How easy was it to complete your purchase today?”
  • “How much effort did it take to get help from our support team?”

Low effort, I’m satisfied. High effort, I’m frustrated.


By asking follow-up questions, you can get further insight.

According to the Harvard Business Review, the CES outperforms other Attitudinal UX KPIs in predicting user behaviour and the likelihood of repurchase. It can be an early warning for UX teams that something is going awry.


4. Customer Satisfaction Score (CSAT)

The CSAT boils down the CES further with a single question:

“How would you rate your overall satisfaction…”

  1. Very unsatisfied.
  2. Unsatisfied.
  3. Neutral.
  4. Satisfied.
  5. Very satisfied.

It can be immediate and quick, and that’s the CSAT’s advantage.
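A common scoring convention (conventions vary, so treat this as one option) is to report the percentage of respondents who answered 4 or 5:

```python
def csat(ratings: list[int]) -> float:
    """CSAT as the percentage of respondents answering 4 (Satisfied)
    or 5 (Very satisfied) on the 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied / len(ratings) * 100

print(f"{csat([5, 4, 4, 3, 5, 2, 4, 5]):.0f}%")  # 75%
```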

On the downside (and this isn’t exclusively a criticism of the CSAT), it is very limited in detail and skewed towards responses from those who have had either a brilliant experience or an appalling one.


5. Standardised User Experience Percentile Rank Questionnaire (SUPR-Q)

I haven’t used the SUPR-Q myself. It is similar to the SUS, with one key advantage.

The SUPR-Q was developed by MeasuringU, and you can buy a licence and compare your score against other websites: benchmarks, baby!

Users are asked eight standard questions around:

  • Usability
  • Credibility
  • Appearance
  • Loyalty

The data is high-level, and UX teams will struggle to get specific and prescriptive insights.

It's a good KPI for mapping trends at a macro level, and its angle is benchmarking against comparable websites and products.


6. First impression

I’ve left what is, in my opinion, the best to last.

The first impression a user has of your website or app.

Every time I conduct user research, it’s my first question.

There are critics of this approach.

  • You’re asking the wrong person.
  • People overthink the answer.
  • You’re slowing down other user testing (possibly).
  • People try to be too clever.

However, when presented with a new interface, users often have insights UX/UI might not see.

Years ago, I was called by a client with a new website with some appalling metrics. They gave me the URL over the phone; I opened it and then subconsciously closed the tab.

The website didn’t have a discernible logo: instantly unaware of what I was looking at and unable to anchor myself, I had closed the page.

I had to ask for the URL again.

The stats are extraordinary:

It takes only 1/10th of a second to form a first impression about a website and 50 milliseconds to form an opinion about whether users will stay or leave.

94% of this is design-related.

This is make-or-break, and the Aesthetic-Usability Effect (from the Laws of UX) tells us that users will forgive UX issues if they perceive the design to be visually appealing and on the money.

Testing this is easy: plenty of online services will let you recruit target users, record their voice and screen interactions, and ask, “What is your first impression?”

UX and UI designers can learn a thing or two.
