
Ultimate Guide to Customer Conversation Metrics

Mercer Smith-Looper

Everyone wants to be the support interaction that someone tweets about; everyone wants their customers to love them and talk about how amazing they are. But that delightful, personable side of support isn’t all there is. Behind all of those feel-goods sits a large core of metrics that make it easy for support team members, team leads, and managers to understand how the team is doing, where they’re struggling, and where they might need to go in the future.

How to use this guide

In this guide, we’ll introduce you to some of the core concepts behind customer support metrics. Then we’ll walk you through a comprehensive set of metrics you can use to measure everything from the success of your chats and emails to the best survey questions for gaining insights from your customers. Let’s start with why your company needs metrics if you aren’t already tracking them.

Why you need metrics

Metrics, at their simplest, allow you to track progress. They can serve as benchmarks for specific performance indicators on your team, for example, or give you a goal to work toward at a company level. A great example of a solid use of metrics is giving a new hire the average numbers you’d expect someone in their role to hit by thirty, sixty, and ninety days of employment. This lets them feel secure in their understanding of where they ‘stand’ among fellow employees, and also know the standard they’re shooting for across the whole team.

As a second example, using metrics at a company level can be valuable for things like Objectives and Key Results (OKRs), ensuring that all teams within the company are looking at the same thing. For example, churn, which we’ll talk about later, is a metric that matters to customer support and success teams, but also to product and sales. Using churn for a company-wide OKR allows every team that affects the metric to measure it together rather than working separately in silos. Imagine how much more impactful you could be with everyone working together. It’s like crossing a river in the olden days of wagoneering: instead of everyone carrying separate pieces across to the other side, the whole party grabs the edges of the wagon and moves it together, saving time and energy. That’s what knowing your metrics, and having them clearly communicated at your company, can do for you.

What makes good metrics

Good metrics, whether you pick them for your team or your company, will usually fit within the guidelines of SMART metrics. SMART stands for Specific, Measurable, Accurate, Reliable, and Timely.

Specific

You should choose metrics that directly relate to the processes that take place within the team you’re creating them for. For example, if you’re trying to improve how often your support team can find a document to send to a customer on the first try, you would track the number of searches that surface the right document on the first attempt divided by the total number of searches for a given period. In contrast, just using the total number of searches would not be specific enough: it wouldn’t really tell you whether you were making an impact on the thing you wanted to change, or whether you needed to work on it at all.

Measurable

It’s super important that whatever metrics you use are derived from actual numbers. They should not be an estimate or a soft number that doesn’t reflect the reality of your business. Seek metrics that are easily obtained from the tools at your fingertips, or by combining information from a few different tools. For example, you could pull your total number of paying customers from your customer relationship management (CRM) system and divide the number of tickets you’ve resolved over a specific period by that figure; this gives you your contact ratio. Having specific numbers provided by systems within your company is more valuable than a soft estimate based on speculation.
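
As a rough illustration (all of the numbers below are invented for the example), the contact ratio calculation might look like this in Python:

# Hypothetical figures pulled from your CRM and help desk
paying_customers = 2400   # total paying customers in the CRM
tickets_resolved = 600    # tickets resolved this month

# Contact ratio: conversations handled per paying customer
contact_ratio = tickets_resolved / paying_customers
print(f"Contact ratio: {contact_ratio:.2f} tickets per customer")   # 0.25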

Accurate

Accuracy, especially for anything that will influence your business decisions, is pivotal. A good example of a time when accuracy matters is first response time. Receiving a response within the first hour after sending in a support request makes a gigantic impact on the customer’s sentiment towards your company. Beyond that point, though, whether your team responds at 1:15 or 4:00 doesn’t make that much of a difference. That kind of down-to-the-minute specificity and accuracy matters for metrics meant to judge success or set benchmarks. An inaccurate metric, in this case, would measure the time until the ticket was resolved rather than the time to first response.

Reliable

A good example here is turnaround time for support conversations. There are times when one support person reports their turnaround time for a conversation as ten minutes, while another might say five for the same conversation. For a metric to be reliable, it must be clearly defined: what it is, how it is measured, and understood in the same way by everyone. It is key for this kind of metric to be standardized, usually within something like a helpdesk tool, in order for it to mean anything. A reliable metric is one that can be clearly defined, communicated, and then have its raw data gathered and reported in the same manner by every team member involved.

Timely

Metrics should be used for continuous improvement as well as for benchmarks. Unlike some delicious cheeses, metrics do not grow better as they age; they only become less useful. For a business, front-line quality metrics are time-sensitive. Reporting on something 90 days after the fact does not facilitate improvement, growth, or change, because you’ve already moved too far away from the numbers you’re reporting on.


A good example of this might come from a physical retail store: in a quarterly report, one department reports that it had been out of stock on a specific type of t-shirt three times, so the people in charge of ordering increased the number of t-shirts on order for the department so it would not run out again. At the next quarterly report (now six months after the problem first occurred), the report shows that the department had been out of the same shirt three times. The manager of the store asks whether they need to increase the order level even further. Instead of increasing the level, they might consider placing a communication sheet in the inventory department so that, on days when the shirts go out of stock, loss prevention or inventory specialists can investigate right away. Is someone stealing the shirts, or is there another issue, such as incorrect processing of shipments?

Metrics should be reported back as soon as possible to the part of the company that is directly in control of them and understands the process. If a leading metric can’t be communicated promptly, choose another one or find another way to report on it.

How to choose your goals and metrics

There are a number of metrics at the individual, team, and company levels. We’ll go through all of them later, but first let’s talk about the best way to pick goals and metrics. According to our friends at Help Scout, there are three questions you should consider when selecting metrics:

  • Why are you reporting?
  • Who are you reporting it to?
  • What outcome are you hoping to achieve?

Each of these is integral when selecting your metrics at any level, so let’s break them down a bit more.

Why are you reporting?

The point behind the reporting should not just be the report itself — that’s Sisyphean at best, and a waste of everyone else’s time at worst. Know why you are tracking the metrics and what you are trying to make an impact on before you select specific metrics. If you can’t think of a “why” behind it, it’s probably not the right metric for you, your team, or your company.

Who are you reporting it to?

Different people speak different languages. This is true globally, but it’s also true within a single company. Think about who you’re reporting to, and pick a metric that will make sense to them. For example, marketers might care about monthly active users, while product people are interested in feature adoption. If you speak to the things people care about, your metric is much more likely to be adopted and appreciated.

What outcome are you hoping to achieve?

Whatever you report on is what people are going to pay attention to. So, make sure that the metrics you select really reflect what is needed rather than what is easiest to report on. Fluff numbers can feel great when you have a beautiful, filled up presentation slide, but not so great when all of a sudden everyone is paying attention to the things that make little difference or impact in the forward movement and growth of your company.

ROI of customer service

If you don’t already know what ROI is, it stands for “return on investment,” and it refers to the benefit or value returned to someone who has invested time, money, or energy. A high ROI means the person who invested is getting a lot of value back, making their investment “worth it.” So, who is investing in customer service? Your company. Every dollar spent on hiring and training a new employee, adopting and implementing new software, or providing hardware for team members to do their job is an investment in your customer service team.

That being said, not many teams measure the ROI of service and support. But, as the team over at Groove shared, it’s very important. Why? According to them:

  • Customer experience does drive sales, and if you figure out exactly how (and to what extent), you can improve it to drive even more sales.
  • Customer service departments and reps are often overlooked and overworked, and proving the value in investing in them can change that.

The best way to calculate the ROI of your service and support teams is to identify your key metrics, set your ROI hypothesis, then test it, learn, and improve. Let’s break it down.

Identify your key metrics

When you’re picking metrics to use for ROI, make sure they align well with your business plans and are things your individual team can actually influence. Nothing is worse than picking something that you can’t change. There are a few areas where support provides financial benefit and where you can make an impact: delivering customer support more efficiently and saving on spend, giving excellent handovers to the sales team so they can upsell and generate new revenue, prompting existing customers to upgrade to pricier plans, and much more. So, which of the most commonly used support metrics fit those areas for you?

  • Net Promoter Score
  • Customer Satisfaction
  • Cost per contact
  • Customer lifetime value
  • Retention rates

After you’ve walked through with your team and picked the best ways for you to measure against your investments, it’s time to move on to setting your hypothesis.

Set your ROI hypothesis

Build your hypothesis based on the metrics that you set above. The best way to structure a hypothesis is:

if _____, then _____,
which we want to _____

So, for example:

Currently all of our team members work in the United States. If we add another service rep in Europe, then our first response time will come down, which we want because a faster first response boosts CSAT, a metric tied to our customer retention.

This is an excellent hypothesis. It tells your other team members and people at your company what you’re doing, why you’re doing it, and what to expect.

Test, learn and improve

As with all good hypotheses, you need to test, iterate, and improve your work. Set a timeframe that makes sense for the size of the impact you’re hoping to have on your metrics. In the case of our hypothesis above, that would likely be around three months. After that amount of time, come back to your company, team, or any stakeholders and report on how things have gone. In this case, you’d say something like:

We added a service rep in Europe, and over the past three months we have brought first response time down to consistently under an hour, and subsequently seen customer satisfaction increase from 73% to 86%. We’ll continue to iterate on this to see where else we may be able to shift some numbers.

That kind of reporting on tangible results based on an intelligent guess will deeply impact how your company and team members see the value of your team.


2. Customer conversation metrics

Customer conversations are the bread and butter of customer support and service representatives. Given that, it should come as no surprise that there are a bunch of amazing metrics that you can use to measure how they’re going, and how your support team is doing.


In this section, we’ll break down all of the metrics that you should know about to keep your support team on track.

Volume

Volume is the metric with the biggest story to tell. While it can feel good when volume is up and your team is busy, the end goal for many support and service teams is to bring volume down to a manageable level and scale the team. If you have a lot of tickets coming through, it might mean your self-service tooling could be easier to use, but it can also mean your product itself isn’t easy to use, especially if the volume of tickets in your inbox is growing faster than the number of new customers coming in.

Generally speaking, with volume, you should try to decrease or deflect tickets by writing better documentation or by using a third-party AI or triage tool to better route customer issues. More volume does not equal more success; it usually signals the opposite.

First Reply Time

First reply time is probably one of the most important metrics that support teams track. It tells you how long it takes for a customer to receive an initial reply to their support request, or, from the customer’s perspective, how long they need to wait to be helped. Generally speaking, first reply time should be within 24 hours at most for email, or around 60 minutes on a social channel. The shorter the wait, the better.

First response time can make or break your customer relationship: it’s their first experience with your brand, so try to make it good. Pay attention to when your first response time goes up or down, and to what you can do to improve it further. Questions that teams often consider when trying to improve first reply time include:

  • Should we be hiring additional support people?
  • How can we be lowering the volume in our support queue?
  • Can we hire internationally to cover tickets that come in from other time zones?
  • What types of conversations do we see with long average first response times, or is it just all of them across the board?

Once you’ve considered some of these, you can understand if your first response time is where you would like it to be, or if there’s something that you need to do to shift it and make it better.

Resolution Time

Your resolution time helps you understand whether your customers are getting their issues resolved in a timely manner. Nobody likes expecting a quick fix to their question and then waiting hours, or going back and forth over several emails to get it sorted. The best thing about resolution time is that it’s simple to know which direction you want it to move: down. The shorter, the better; as you drop your resolution time, you will almost certainly see customer satisfaction and happiness rise. A few things that companies consider when thinking about how to drop their resolution time are:

  • Are we understaffed?
  • Are the people that we have hired undertrained? If we trained them better, would they be better able to resolve tickets?
  • Are there parts of the product that are causing more issues than others? (You can track this through tagging.)
  • Are there parts of the support process that are slowing people down? (You can track this by asking your support team, or shadowing them.)

Figuring out where the issues are that are causing your interactions to drag out will greatly help your resolution time, which will, in turn, boost other metrics key to your support team’s success.

Average Handle Time

Average handle time is a metric that shows how long, on average, it takes your support team members to “handle,” or send an answer to, a conversation. While it can be a super useful way to see where there are holes in your support process or which things could be better, it’s also an imperfect metric. As you can imagine, if you incentivize handling a ticket extremely quickly, your employees will be more than happy to comply, but it may come at the cost of ticket quality. When trying to pick up speed, especially if speed is incentivized, some agents will cut corners that are important to providing a better experience. So, some things to think about when considering average handle time are:

  • Do I need to incentivize this by using it as a metric that our team presents to the company?
  • If I incentivize it, how do I keep track of how this is affecting ticket response quality?
  • As I shift things to speed up response time, do I see a permanent shift in average handle time or is it just temporary?
  • Are there things I could change in the support process, like adding additional saved replies, that could help shift this metric?

Average handle time is a great basic metric, but might not be something that your team or you need to use as a company-shared metric — it might do more harm than good.

First Contact Resolution Rate

Resolving an issue on the first response is the holy grail of excellent support. If you can respond to a customer’s email and answer all of their questions, as well as proactively answer any other questions they might have, you will make their day. In fact, Service Quality Measurement Group’s data suggests that a 1% improvement in first contact resolution (FCR) yields a 1% improvement in customer satisfaction.

The metric that tells you how often you’re making their day is first contact resolution rate, also known as FCRR. Our friends over at Groove have helped to create an excellent framework to calculate FCRR:

FCRR = number of support issues resolved on first contact / total number of FCRR-eligible support issues

An FCRR-eligible ticket in the above calculation is a ticket that can actually be solved on the first try. So a ticket where the customer makes an error in their email, doesn’t include all of the information you need to resolve the issue, or solely says something like “please help” wouldn’t count as part of the denominator. The higher your FCRR, the better; goals set against this metric should drive it up. Here are a few questions you can use to consider how to affect your FCRR:

  • Is your product complex enough that many of your interactions are not resolvable within the first response?
  • Are there product areas that have a higher FCRR than others? (You can determine this using tagging in your help desk system.)
  • Are there internal tools that you could build that would provide you with more context to boost FCRR? For example, sales data about a customer’s interaction with your teams?
  • Are there ways you could make your self-service functionality better so the customer doesn’t even have to email in the first place?

While answering these questions won’t guarantee a higher FCRR, it will give you a better understanding of where the number comes from and what steps you can take to improve it.
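
As a minimal sketch of the FCRR formula above (the counts are invented for illustration):

# Hypothetical monthly counts from your help desk
resolved_on_first_contact = 320   # issues fully resolved in the first reply
fcrr_eligible_issues = 450        # issues that could have been solved on the first try

fcrr = resolved_on_first_contact / fcrr_eligible_issues
print(f"FCRR: {fcrr:.0%}")        # prints "FCRR: 71%"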

Responses per Conversation

Have you ever had an interaction at a restaurant where you ask the server if something can be made without, for example, lettuce? They answer that they aren’t sure and that they’ll go check. Then they come back and say that it can’t be made without lettuce, so you ask if another item on the menu can be. They answer, again, that they aren’t sure, but that they’ll go check. Imagine this back-and-forth repeating a few times until, finally, you find something you can eat: something that doesn’t have lettuce on it.

Wouldn’t it have been better if the server had just asked the chef what could be made without lettuce the first time they went back, rather than asking about one dish at a time? The same goes for your support team. If you force a customer to go back and forth with you over and over again, they will be unhappy. According to Forrester, 73% of customers find first contact resolution extremely important to their happiness and loyalty to a brand, so the farther you move from that, the more their happiness suffers.

With this particular metric, the lower the number, the better. If you see your responses per conversation start to climb, especially with a particular agent, it may be that they aren’t being as careful with their conversations or don’t know the right questions to ask. Here are some questions you can ask to get to the bottom of improving your number of responses per conversation:

  • Is there a specific type of issue that has a higher number of responses per conversation? (You can find this out via tagging in your help desk.)
  • Do certain agents on your team have a higher number of responses per conversation consistently?
  • Do you have a tone and style guideline that helps new employees learn the best way to communicate with your customer base?
  • Should you offer a different channel for support that would facilitate assisting your customers more readily?

Once you have a handle on where you’re running into issues with the number of responses per conversation, you can start to make shifts to make it better and boost your customer’s satisfaction and happiness!

Agent utilization rate

Agent utilization rate is usually mentioned when talking about live chat, which we’ll discuss later. That being said, much of what works for other channels also works well for email conversation metrics. Agent utilization rate reveals the percentage of time that support team members (also known as agents, customer service specialists, and many other names) spend in conversations, wrap-up, and other productive functions, as opposed to being away from their computer, checking email, or just generally being offline. The folks over at Comm100 have come up with a useful way of calculating this metric:

(Conversations (or live chats) per month × Average handle time in minutes) / (Hours worked in a month × 60 minutes), expressed as a percentage
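
As a minimal sketch of that calculation in Python (every figure below is invented; handle time is assumed to be in minutes):

# Hypothetical monthly figures for one agent
chats_per_month = 480      # conversations handled
avg_handle_time_min = 12   # average handle time, in minutes
hours_worked = 160         # hours worked in the month

# Minutes spent on productive conversation work divided by minutes available
utilization = (chats_per_month * avg_handle_time_min) / (hours_worked * 60)
print(f"Agent utilization: {utilization:.0%}")   # 60%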

Typically, a utilization rate of about 50 or 60% is good. Higher than that, and you may run the risk of your agents producing errors in documentation, conversations, or running into burnout. Lower than that, and you may be overstaffed, or have other needs and outside projects that your team could be working on, but aren’t. Here are some questions to ask yourself about your utilization rate to see if you should be doing something differently:

  • Are my employees using all of the time that they have available to them? If not, could I provide them with additional projects to better support the team?
  • Are my customer support representatives burnt out and unhappy in their roles?
  • How can I forecast my staffing in the future to represent the data from our agent utilization rate?
  • Where is the comfort zone for my employees where they are most productive, but not yet at burn out?

While this metric is fairly cut and dry, the benchmarks provided above may not be the same for everyone. It’s important to ask more nuanced questions so that you can get a better understanding of what’s right and most efficient for you and your team.

Knowledge of the product

Your customer support team should have the best overarching product knowledge of anyone in your company — better than the individual product teams and the executives — because they have to answer detailed questions about it all day. Because of that, they should also be able to guide your customers through almost any problem they have and be able to propose new solutions for them within your product as the need arises. As Seth Godin writes, “don’t find customers for your product. Find products for your customers.” Your support team members should be the people that shepherd those customers towards their new experiences.

There isn’t a mathematical way to calculate knowledge of the product, but a few places to look for evidence of product knowledge are:

  • Conversation reviews from customers
  • Drop-through quality assurance reviews of conversations
  • Peer-reviewed conversations
  • Survey responses from CSAT requests, or documentation slide-outs

It can be a difficult and subjective thing for your team members to feel like they are grading their teammates, so make it easier with a product knowledge rubric they can fill out as they go. For example:

  • Did they offer the customer all possible solutions for their requests?
  • Did they provide the customer with a workaround via another feature if needed?
  • Did they answer the customer’s question correctly the first time?
  • Did they correctly walk through the solution to the customer’s question with the customer?
  • Did they correctly refer to all pieces of the product that they mentioned to the customer?
  • Did they politely reset expectations on any product behavior that was unexpected to the customer?

By answering some of the questions on the rubric yourself or having your team members do so, you’ll have a deeper gauge into how well your team knows your product and can assist your customers with it moving forward.


3. Live chat metrics

Many of the metrics that apply to email or phone conversations can also be used for live chat. That being said, there are some live-chat-specific metrics that we’ll go through in this section which can help take your support team’s game to the next level.


Response time

With chat, it is even more important to respond quickly to your customer. The types of customers that are using chat to communicate with you are looking for a quick response and solution to their question. In fact, according to Arise, 84% of customers will abandon a chat if they haven’t received a response within two minutes.

Given that, tracking how quickly your agents are responding is  a must. There are a few questions that you can ask yourself if you’re looking to improve your chat response time:

  • Are your agents handling too many chats to be able to effectively respond to each one in a timely manner?
  • Are there specific product areas where response time is slower than others? If so, how can you improve it or learn from product areas where response time is fast?
  • Could you build processes, such as saved replies, to boost your chat response time for common issues?
  • Could you put a gate in front of chat to get a bit more information from customers prior to responding to their request?
  • Would additional context to chat conversations be helpful for your customer support agents?

Response time is very dependent on the type of product your customer support agents are working with. If it is a highly technical product, it’s possible that they will need more time to consider their response before sending it. In that event, your company may want to consider how they can offer a “we’re working on it” type response in chat to let the customer know that there is a resolution on its way. You may also want to consider if live chat is the best venue to be offering support for your product — not all channels are created equal for every product.

If you decide to go with a live chat tool, give Chatra a try. It’s simple and easy to use for both agents and customers, and offers powerful features designed to reduce response times without impacting quality. For example, agents can see what visitors are typing and come up with a reply even before visitors send a message, or they can use saved replies to quickly answer common questions. It’s kind of like giving your agents the ability to read minds.

Wait time

There is a reason why Sartre wrote a play in which Hell is a waiting room: no one likes to wait. Especially not if the waiting experience is just sitting in a queue. Unsurprisingly, wait time has a huge impact on customer satisfaction. According to Kayako, almost a fifth of customers rate long wait times as the most frustrating part of a live chat; they don’t want to be part of the queue. How long visitors waited in the queue is a valuable metric, but knowing the number of visitors who waited in the queue and then abandoned it in favor of another channel (or left altogether) also gives insights into customer happiness and behavior. Not all chat services offer this metric, though. For tools that don’t have a queuing feature, the time between a visitor’s first message and an agent’s first reply can offer similar insights.

Imagine if you waited in line at the grocery store with a few things in your hand for a long time, and didn’t see the line moving at all. You then went to self-checkout and found all of the machines to be broken. You would probably go put all of the things back (or drop them right where you stood), and leave the store. You would also, likely, tweet about it, tell your friends about it, and not return to the store again if you had a choice.

This is the equivalent of someone sitting in your chat queue, switching to email, and still not getting a response. The chat queue is the checkout line, and switching to email with still no response is them trying to move to self-checkout only to find that it is all broken. Just like the hypothetical situation above, the customer likely would not return, would “drop” whatever they were trying to do, and perhaps even tweet about it. It is because of this that it’s so important to track your wait time.

Some questions that you can ask yourself about wait time and how you could reduce it are:

  • Have we understaffed our team and do we need more people?
  • Are we offering support to customers that shouldn’t be receiving it, or should we consider differentiating our support between paid tiers?
  • Are there macros or saved replies that we could use to make response times shorter and be able to get to more customers?

While having no wait time for chat is, of course, the goal, it doesn’t have to be that way to wow your customers. Drop it down enough, and provide an excellent experience, and you’ll be on the right track.

Number of chats

Number of chats as a metric refers to the number of chats that an agent can handle over a span of time, similar to the number of tickets in email support.

According to industry reports, sites see around 450 live chat sessions per 5,000 daily visits. The ratio drops as traffic grows: at approximately 500,000 visits, it falls to roughly 0.35%, or about 1,700 live chat requests. With these numbers, an average of 274 chats per month per agent seems about standard.
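
As a back-of-the-envelope sketch that turns expected chat volume into a rough staffing estimate (every figure is an assumption taken from the averages above; swap in your own data):

import math

daily_visits = 5000
chat_rate = 0.09                 # roughly 450 chats per 5,000 visits
chats_per_agent_month = 274      # rough per-agent average cited above

monthly_chats = daily_visits * chat_rate * 30
agents_needed = math.ceil(monthly_chats / chats_per_agent_month)
print(f"~{monthly_chats:.0f} chats per month -> about {agents_needed} agents")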

That’s quite a bit of traffic, especially if you have a small team. Given that amount of traffic, just like with tickets, it can be awesome to see super high numbers being pulled by your employees. That being said, also just like tickets in the inbox, it’s important to recognize what costs come with ramping up to that number of chats handled. Here are a few questions to consider when evaluating the number of chats for your team:

  • Are my agents bandwidth constrained by the number of chats that they are handling?
  • Are my customers getting a poor experience because my agents aren’t paying as close attention to their inquiries?
  • How long are my customers waiting before they get a response from an agent?
  • What are my customers doing if they leave the queue for a chat? Do they send a request elsewhere, or just leave entirely?

These answers will help you get a better handle on what to do based on your number-of-chats metric. If your employees are bandwidth constrained and your customers are suffering, it’s time to hire more people or reevaluate your chat strategy, for example. What people do if and when they leave will also give you some insight into the quality of your conversations, and into where you may be able to make changes to create a better experience.

Invitation Acceptance Rate

Some companies use a proactive live chat strategy to provide support, sales, and other customer services. For example, having a chat bubble that pops up on your pricing page can be a useful way to snag customers with questions before they leave the page and don’t think about your product again. The same can be said for your account page or any other page where you notice that a lot of people run into trouble. Your invitation acceptance rate is how frequently your chat invitation is accepted and used, and it tells you how well you are targeting customers that need help. So, if no one is using your chat, that might be problematic. A similar issue that you can detect through invitation acceptance rate is if your chats are not being picked up by agents. If that’s the case, there are some shifts in strategy that you might need to make.

Think about it: on Halloween, when children are knocking on doors to try to get candy, they won’t go and knock on the door of a house whose lights are off. When you have a chat box that doesn’t respond or just always uses an autoresponder, your company is that house, and if you’re always ready for a chat and no one is asking for it, you’re effectively just a bowl of candy sitting out on a porch.

Here are some questions to ask yourself to dig a little bit deeper into your invitation acceptance rate:

  • Are my chats being accepted by users, and are the questions that they ask valuable?
  • Are my agents responding to chats in a timely manner?
  • Should we pick other pages on which to try to proactively support customers?
  • How does manual chat versus proactive chat work on our customer base? Do people like picking when they get to chat?

For some companies, manual chat that the customer initiates might do much better than chat that pops up proactively. Pay attention to how your customers respond and what they respond well to. One company’s best practices may be fatal to another’s strategy; you are the only one who knows your customers, so do what works best for them.

Conversion Rate

As we mentioned above, it’s not just support or customer service teams that use chat; sometimes sales and even product teams get in on the fun. One of the best metrics for understanding how live chat is doing for your sales team is conversion rate. Your marketing team generates leads, and your sales team converts them into actual paying customers; the share who convert is your conversion rate. To calculate the conversion rate for live chat, take the number of people who went through your live chat funnel and became customers, divide it by the total number of people who went through the chat funnel, and multiply by 100 to express it as a percentage.
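
Here is a minimal sketch of that calculation (the funnel numbers are invented):

# Hypothetical live chat funnel for one month
chat_funnel_visitors = 1200   # people who went through the sales chat funnel
became_customers = 84         # of those, how many ended up purchasing

conversion_rate = became_customers / chat_funnel_visitors * 100
print(f"Live chat conversion rate: {conversion_rate:.1f}%")   # 7.0%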

Forrester says 44% of online consumers consider having questions answered by a live person during a purchase to be one of the most important features a website can offer. Thus, having live chat as part of your mix should actually boost your conversion rate. That being said, there are a few questions you can ask yourself and your team to see if it could be better:

  • Could our sales team get involved earlier on in the conversation and help convert even more customers?
  • Could we place proactive chat bubbles on other pages that would allow the sales team to interface with people more?
  • Are there strategies we could be using in chat to upsell that we aren’t already taking?
  • Are our customers interested in live chat, or is it a medium that doesn’t work for them and their demographic?

While your conversion rate from chat may be amazing, given the data from Forrester, it’s also possible that the main demographic of your customers is not interested in chat support. So, pay attention to conversion both as a way to gauge your sales team’s success and as a way to see whether there’s another strategy you could be taking.

Peak Hour Traffic

Just like in email support, knowing when you get the most chats can be very helpful for live chat. That’s where peak hour traffic comes in. Peak hour traffic lets you know when your customers are the most active, and you will likely need the most employees, or to hire people in specific time zones to cover it.

According to Ameyo, 48% of companies using a contact center for live chat or phone believe that unpredictable customer traffic is one of the biggest challenges they face. It can be hard to know when and where customers will come from, but tracking traffic over time can help uncover patterns in times of day, or even days of the week, to better help you staff your chat team. Often, these kinds of analytics are pulled by your chat system or help desk.

Chat and helpdesk customer traffic analytics
The screenshot above shows how many different insights the reports from Chatra provide: you can see your busiest days over a selected period, and if you pick a certain date, you can see your busiest hours. Sign up for Chatra now and analyze how visitors use and perceive your live chat support.

With chat, timeliness is key. Any time there are more customers, more chats are going to put a higher burden on your team, so you should be staffed accordingly and ensure that you have prepared your team with the tools they need to succeed. Here are some questions to ask yourself:

  • Do I need to hire internationally to cover peak hours that are currently seeing long wait times?
  • Are customers experiencing long wait times during peak traffic hours?
  • Do things like CSAT and other metrics dip during peak traffic hours?
  • Are my peak traffic hours what I expected?
  • Do my peak traffic hours make sense for what I know about our target demographic of a user and their location?

While peak hour traffic is mostly useful for ensuring that you’ve hired enough people to cover the times that are important, it can tell you interesting information about who is using your product and why. This is specifically interesting if the times for chat are different from the times that you see peak traffic in your email inbox: which demographics are you meeting with each?

Missed Chats

Missing a chat is one of the worst things a company can do to a customer: no autoresponse, no notification, just silence on the other end. It makes a customer feel frustrated and uncared for. It’s even worse than keeping a customer waiting, because at least then they eventually get a response. Missed chats are truly the worst possible chat experience for your customer.

Because of that, they’re incredibly important to track. Missed chats can give you indicators of agent productivity: if an agent is missing a lot of chats, it’s likely that either they need to work on their productivity or they’re overburdened with more than they can handle. Missed chats also give you a good picture of the health of your support organization as a whole: how often are chats missed, and why?

The bad news is that, on average, 21% of all live chats go unanswered by customer support agents. Imagine how many missed opportunities to talk with customers that represents, and how many frustrated detractors of your company it could create. When considering missed chats and why they are happening, here are some questions to ask yourself:

  • Are we understaffed and is the support team overburdened?
  • Are there better productivity tactics that we could employ on the support team?
  • Are there tools that we could implement that would allow agents to reply more quickly?
  • Do my agents know that this is something that is important to their role?

Work with your team, or with individual team members, to understand why they’re missing chats and whether there’s something you can do to help. If they have too many conversations to handle, perhaps hire more people. If it’s a productivity issue, a team-wide process might be a good thing to put in place.


4. Survey Metrics

Surveys can be an incredibly useful way to gather data about your customer base. They provide both qualitative and quantitative data about whatever the survey covers, and they create abundant opportunities for follow-up conversations, especially when the surveys are specific to the actions the customer has taken in your app and are correctly targeted. Let’s talk a little more about the different types of surveys you can implement, and what they can tell you.


CSAT

We’ve already talked a little bit about customer satisfaction (CSAT) surveys, but they are valuable enough to deserve a closer look. CSAT surveys are usually short surveys that ask how a customer felt about a specific interaction they had with one of your customer-facing teams. Some companies send them directly after the interaction, while others include a variation of them in employees’ email signatures, as you can see in this example from Shopify:

Shopify CSAT survey

No matter how you send it, the gist is that you ask the customer something along the lines of “How would you rate your interaction?”, give them the opportunity to pick between “Awesome,” “Okay,” or “Bad,” and then let them offer more qualitative feedback in the form of a typed-out response. As a benchmark, it’s generally best to keep negative or passive responses to around 10%. There are some great questions you can ask yourself if you’re hoping to get a deeper understanding of your CSAT and how your customers drive it:

  • Are my detractors and passives happening because of issues with my support team, or because of issues with my product?
  • What strategy should I take for responding to people who rate me passively or as a detractor?
  • How should I send my CSAT survey, if at all?
  • What is our response rate to our CSAT survey, and could we make it better?

A lot of CSAT responses often come from customers being dissatisfied with your product or wanting something that you don’t currently offer. Another common problem with CSAT is that the same customer can respond multiple times to one conversation if the survey sits in the signature of every email reply. This can dilute the metric and make your rating seem far lower than it really is. Take these caveats to heart as you interpret and share information from this specific metric.
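
The guide doesn’t prescribe a single formula, but a common way to turn these ratings into one CSAT number is the share of positive responses. A minimal sketch, with invented counts:

# Hypothetical survey responses for the month
ratings = {"Awesome": 176, "Okay": 14, "Bad": 10}

total = sum(ratings.values())
csat = ratings["Awesome"] / total * 100
negative_or_passive = (ratings["Okay"] + ratings["Bad"]) / total * 100

print(f"CSAT: {csat:.0f}%")                                 # 88%
print(f"Negative or passive: {negative_or_passive:.0f}%")   # 12%, vs. the ~10% benchmark above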

Customer Effort Score (CES)

While there are many ways to rank and measure customer happiness and satisfaction (like CSAT above), customer effort score is one of the best. Our friends at Hubspot put it well: customer effort score (CES) is a type of customer satisfaction metric that measures how easy an experience with a company was by asking customers directly. Using a five-point scale from “Very Difficult” to “Very Easy,” it measures how much effort was required to use the product or service, which helps you evaluate how likely customers are to continue using and paying for it. Basically, how hard did the customer have to work to get what they needed?

CES is usually used in place of CSAT, or in alternation with it; you don’t want to overwhelm your customers with questions, after all. If you send too many surveys, customers are significantly less likely to respond. Think about it from your own perspective: if you get asked something repeatedly, it starts to get frustrating and annoying, and eventually you just tune it out. This happens with CSAT, too. If you’re seeing high effort scores, there are a few things you can ask yourself and your team:

  • Could we make our self-service documentation easier to use to lower the effort score?
  • What are the things that people are expressing qualitatively on their surveys?
  • Does our customer service team need to do more proactive outreach to lower our effort score?
  • How does the timing of the CES survey affect the ratings? You can send it right after an interaction or purchase, right after an interaction with your customer support or service team, or periodically to measure the overall experience with a product or service.

There are no “catch-all” metrics, but CES is certainly one of the ones that can tell you about multiple parts of your company. If your product is difficult to use or unsatisfactory, it will show up as a dip in CES. If your support team or support process is subpar or doesn’t get customers to answers easily enough, your CES will plummet. Targeting the survey, so you know where specific CES scores are coming from, can give excellent insights into where you can make improvements.

Net Promoter Score (NPS)

NPS is the talk of the town across most departments at most companies, and that’s because it measures how customers feel about your company and how likely they are to talk about it to their friends and family. NPS surveys are traditionally delivered via email or a pop-up on your website, and ask customers a variation of “How likely are you to recommend this company to your friends or colleagues?” on a scale of zero to ten (zero being least likely to recommend, ten being most). Sometimes surveys also give the option to include a qualitative message after the survey is completed, or underneath the ranking options. Satmetrix defines each group, based on their NPS scores, as:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

The calculation of your Net Promoter Score is % promoters − % detractors = NPS, and the average score, according to Satmetrix, is around five. Some questions to consider if you have a low NPS score are:

  • How are we currently serving areas like feature requests or customer education?
  • Which groups of customers are least and most satisfied with our product, and what differentiates them?
  • What is the best way to reach out to people who are detractors or passives? Is there a best-practice strategy?

While these questions can’t make an immediate impact on your score, they can create a better experience for your customers and turn someone who was a detractor into a promoter. After all, imagine you had an awful experience at a retail store, left constructive feedback in reply to an email asking for it, and then received a call from the manager directly addressing your concerns and looking to make amends. While you were frustrated initially, that frustration probably passed quickly as the manager heard you out and helped to make things right personally. That’s the same thing you can do for your customers.
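
As a minimal sketch of the NPS calculation above, grouping invented responses by their zero-to-ten answers:

# Hypothetical NPS responses: score -> number of respondents
responses = {10: 40, 9: 35, 8: 30, 7: 25, 6: 10, 4: 5, 2: 5}

total = sum(responses.values())
promoters = sum(n for score, n in responses.items() if score >= 9)
detractors = sum(n for score, n in responses.items() if score <= 6)

nps = (promoters - detractors) / total * 100
print(f"NPS: {nps:.0f}")   # (75 - 20) / 150 * 100, roughly 37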

How much would you miss us?

Some customers may be familiar with NPS and find it robotic, or maybe it doesn’t fit your brand tone. As an alternative, and to detect slightly different sentiments, asking “how much would you miss us?” instead of “would you recommend us?” can be just the shift in language you need to get better engagement with your surveys. The sentiment of “missing,” though, implies a much deeper emotional bond and attachment to your brand than recommending does. After all, you miss your mother’s home cooking, but you would recommend your local pizza joint to a friend. If you use this question with an audience that is not receptive to it, you may find yourself with a lower score than you anticipated.

Here are a few questions that you can ask if you find your score lower than you would like:

  • What can we do to emotionally engage with our customers?
  • How can we make our product stickier and have customers use it in their day-to-day?
  • Are our customers a demographic of people who want to emotionally engage with products that they use?
  • Is our product the type of product that people traditionally emotionally engage with?

It’s very possible that, rather than switching out NPS in favor of something slightly more emotionally charged, you should target your NPS differently to gain better insights. While NPS can be tedious and frustrate some users, use that qualitative, constructive feedback to create a better process for sending out your survey and you may be better off.

Response rate

While this isn’t a survey that you send out, it’s an important metric to consider as you do. Many people want their response rate to be higher; they think a higher response rate means better data, but this is not necessarily true. According to Goodhart’s Law, “when a measure becomes a target, it ceases to be a good measure.” So, when you turn a metric into something you are aiming for, such as a specific response rate percentage, it stops being a valuable metric.

The reason is that, as you attempt to boost response rates, the incentives you add often create artificially positive ratings. If you give someone a reward or a treat in exchange for filling out your survey, the ratings will be primarily positive and probably not terribly useful. It is better to have a lower response rate with honest responses than to incentivize responding with a gift card or reward, only to receive more responses with less incisive insights.


5. Customer Loyalty

Customer loyalty is a hot phrase in the tech and SaaS industries, as of late. Everyone wants their customers to be loyal and love them, but what does that really mean and what does it get you? Having a loyal customer is like having a loyal friend: they are there for you in the good times, they are there for you in the hard times, and they try to understand where you’re coming from if you do something that makes them sad. But, just like a loyal friend, you need to cultivate that relationship with your customer.


You can’t expect them to just automatically love and trust you and support everything your brand does. Beyond that, a loyal customer will:

  • Talk about you and your product to their friends and colleagues.
  • Buy from you, if they need your products.
  • Remain loyal to your company over your competitors and come to you first.
  • Be open and willing to try new products that your company offers.
  • Be slightly more understanding of your company if there are outages, errors or bugs that they run into, and give your company the benefit of the doubt.
  • Offer more constructive insights about what could be better about your product.

Creating this kind of loyalty with your customers is super important, but measuring it and knowing where you stand is even more so. Here are some great metrics to use to measure both where your customer loyalty stands, and what it’s paying back to you in dividends.

Customer Lifetime Value

Customer lifetime value, or CLV for short, is a metric that measures how much value an individual customer will provide to a store or company over the course of their relationship with it. It’s calculated by multiplying the average customer value by the average customer lifespan. This metric is incredibly useful because it shows whether loyalty from your customers is growing over time and, if so, by how much. CLV also serves as an excellent benchmark for loyalty programs; if you choose to start a customer loyalty program, you can compare where you were before the program launched to where you are after it, and see whether you made an impact on CLV. If you didn’t, some questions you might ask are:

  • Where are there opportunities to provide customers with an even more excellent experience?
  • Are we using the right incentive for the loyalty program, or is there something else that might be better?
  • Are we using the right loyalty builder (points, bagels, etc.), or should we shift it to something more accessible to our customers?
  • Where can we create value (perceived or real) for our customers?

Having an understanding of the earning potential for your company, specifically based on loyalty, can help you generate more revenue while simultaneously creating a better experience for your customers. CLV lets you get a better handle on that.
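
As a minimal sketch of the CLV calculation described above (all figures are invented, and “customer value” is simplified here to average monthly revenue per customer):

# Hypothetical subscription business
avg_monthly_revenue_per_customer = 49.0   # average customer value per month, in dollars
avg_customer_lifespan_months = 26         # average customer lifespan, in months

clv = avg_monthly_revenue_per_customer * avg_customer_lifespan_months
print(f"Customer lifetime value: ${clv:,.2f}")   # $1,274.00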

Churn

Churn is every company’s nightmare. Whether you’re big or small, losing customers and the revenue that they generated for you is painful. Churn is calculated as:

Customer Churn Rate = (Customers beginning of month − Customers end of month) / Customers beginning of month

So, for example, you would have a 10% churn rate if you had 500 customers at the beginning of the month and 450 at the end of the month:

(500 − 450) / 500 = 50 / 500 = 10%
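
The same churn calculation as a tiny reusable function, using the example numbers above:

def churn_rate(customers_start: int, customers_end: int) -> float:
    """Churn rate for the period, as a percentage."""
    return (customers_start - customers_end) / customers_start * 100

print(churn_rate(500, 450))   # 10.0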

Churn is important for a few reasons: first, when a customer leaves your company and stops using your product, normally it’s because of something that your product doesn’t offer or has done wrong. Beyond that, it’s an opportunity for you to gain insights into where you might be losing customers: if someone churns, you could ask for additional thoughts into why they are leaving, especially if they were once a loyal customer, then use that information to improve your product experience for everyone else.

If your churn number is rising, here are some questions you can ask yourself to see how you can do better:

  • Why are the majority of customers churning? Is it product-related, pricing-related, support-related or something else?
  • If we find a specific correlation between churn and a specific area (product, support) what can we do to shift it towards loyalty?
  • Are there ways that you can detect churn prior to it happening, such as specific behaviors that customers perform before churning?
  • What period are you using to calculate churn? Monthly, quarterly or annually?

Consider, also, how much churn you are experiencing relative to your customer growth. While churn is important to keep a handle on, a certain amount of it is natural once your company reaches a certain stage of growth. When you hit that stage, pull back your focus on churn and start to focus on ways to cultivate loyalty instead.

Repurchase Ratio

Repurchase ratio is just what it sounds like: it compares the customers who repurchase your product with those who don’t. Repurchasing, or customers returning over and over again, is the peak of loyalty. Imagine this: a family decides to go out for ice cream with their two-year-old son. They try to find an ice cream place close to their house, just in case their two-year-old has a meltdown, and then read a few reviews. They pick a place, go, and it’s amazing. So amazing that they bring their son there every Sunday now, as a tradition. How loyal does that sound?

To calculate this metric for subscription-based models, divide the number of customers who have renewed by the number of customers who didn’t. For transactional models, like the ice cream shop, first calculate the average time between repeat customers’ first and second purchases, as well as its standard deviation. By adding two standard deviations to that average time, you define a window that captures roughly 95% of your repeat customers. Divide the number of repeat buyers captured in that window by the number of non-repeat buyers, and you have a close estimate of your repurchase ratio. Any spreadsheet or statistics tool can calculate the standard deviation for you.
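
For the simpler subscription case, a minimal sketch of the ratio as defined above (invented counts):

# Hypothetical renewal numbers for the period
renewed = 410        # customers who renewed their subscription
did_not_renew = 90   # customers who let it lapse

repurchase_ratio = renewed / did_not_renew
print(f"Repurchase ratio: {repurchase_ratio:.1f} renewals per non-renewal")   # about 4.6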

Unless your product has made it extremely difficult to switch to another platform or service, your repurchase ratio will tell you a lot about the loyalty that you have cultivated within your customer base. If your repurchase ratio is making you worried, here are a few questions that you can ask yourself:

  • How can we make the customer’s first experience with our product amazing?
  • Can we reach out to people that are not renewing or returning and offer them a coupon to come back to drive retention?
  • Is our repurchase model working for our company? For example, should we automatically charge people each month, or wait for them to purchase for themselves?

Driving people to return and repurchase through excellent service and a solid product is the biggest testament to your brand doing awesome work. Work at increasing your accessibility and connection with your customers, and you’ll see this metric start to go where it’s supposed to.

Upselling ratio

Similar to the repurchase ratio, this tracks the number of people who have bought multiple products of yours versus those who have purchased just one. To calculate it, take the number of people who have bought more than one of your products and divide it by the number of people who have purchased only one. For some companies, this will be more meaningful than others. Maybe you only have a single product, like a helpdesk, and sell more advanced tiers of it; those upgrades still count as "upsells" and indicate that a customer trusts your brand enough after their first experience to want to do more. A great example of this is Apple products.

People that use Apple products are extremely loyal and, after their first purchase of one, likely will continue to purchase other, new Apple products when they fit a need (or even if they don’t). Apple has done such a good job making their system sync up that now, it just makes sense for people to own all of their products. So, how can you get there? Here are a few questions to ask about your company:

  • Is your new product compelling enough to sign up for, or are you able to make the value known?
  • Does it sync or work in tandem with other products that you have?
  • Do you have members of your success, sales, or support team actively talking about these products and offering them as solutions to your customers?
  • How have you built trust with your customer base or shown your dependability?

If people depend on your company for a product that they need in either their personal or business life, they trust you; if they trust you for multiple products, they trust you deeply. The upselling ratio is a great indicator of whether your company has cultivated that trust, or whether your brand still has work to do.
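Calculation-wise, the upsell ratio is another simple division. Here's a minimal sketch, assuming you can pull the number of products each customer owns out of your CMS (the data below is made up):

```python
def upsell_ratio(products_per_customer: list[int]) -> float:
    """Customers who own more than one product divided by those who own exactly one."""
    multi = sum(1 for count in products_per_customer if count > 1)
    single = sum(1 for count in products_per_customer if count == 1)
    return multi / single

# Hypothetical: how many products each customer has bought.
print(f"{upsell_ratio([1, 3, 1, 2, 1, 1, 4, 1]):.2f}")  # 3 multi-product buyers / 5 single-product -> 0.60
```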

Customer engagement

While customer engagement isn’t a specific metric, it does help you qualify how loyal and invested in your product your customers are. There are a few different metrics that can be good to track for customer engagement and to allow you to see how you’re doing:

  1. Activity time. How much time people spend using your product or browsing your site. This can be tracked collectively for whole companies or at an individual level.
  2. Visit frequency. How often do people visit your site or use your product? Depending on the type of product you offer, this can range from a few times a week to a few times a month, and your customers will still be considered loyal.
  3. Core user actions. Designate a few things that you want to be core to the user experience. For example, as a website builder, you might want users to change the default font color on their homepage. This would be a core user action, amongst other things, that you could track for your users. Tracking these core actions and their completion can help you see where you might need to shift your strategy to better guide your users.

As you track these over time, you can see if your “fit” with your userbase is getting better, and people are becoming even more embedded in your product, thus becoming more loyal.
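If your product emits usage events, a small script can turn them into these engagement numbers. Here's a minimal sketch that derives visit frequency and core-action completion from a list of raw events; the field names and the "core" action are made up for illustration:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw events: (user_id, timestamp, action)
events = [
    ("u1", datetime(2024, 5, 1, 9, 0), "login"),
    ("u1", datetime(2024, 5, 1, 9, 5), "change_font_color"),
    ("u2", datetime(2024, 5, 2, 14, 0), "login"),
    ("u1", datetime(2024, 5, 8, 10, 0), "login"),
]

CORE_ACTIONS = {"change_font_color"}  # whatever you've designated as core user actions

active_days = defaultdict(set)   # visit frequency: distinct days each user showed up
core_users = set()               # users who have completed at least one core action
for user_id, ts, action in events:
    active_days[user_id].add(ts.date())
    if action in CORE_ACTIONS:
        core_users.add(user_id)

print({user: len(days) for user, days in active_days.items()})              # {'u1': 2, 'u2': 1}
print(f"Core action completion: {len(core_users) / len(active_days):.0%}")  # 50%
```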


6.

What to do with your data

So now that you know about all of these different metrics and what they mean, what are some ways you can use them?


Metrics are pretty versatile, and depending on your needs, there are a few different things they can be applied to:

  • Make a convincing argument for your executive team. If you aren't on the executive team but need to ask for something like more headcount for your team, or additional tooling, using metrics can help show your point rather than just tell it. For example, you could say "I want to drop our first response time to below one hour. To do that, I need more people in different time zones."
  • Improve your service. In many of these metrics, we mentioned ways that you could use them to know if there were holes in your service that may be making customers unhappy. Use those metrics in your own ecosystem and see if they uncover anything for you as well. For example, if you notice, when looking at your core user actions, that many people are missing one specific action, you could create a webinar, training, or better user onboarding for that user action. This would boost your customer experience and their loyalty.
  • Make changes. Paying attention to trends and metrics within your company can allow your product team to know where they might be falling short. Having a hold on what’s happening can give people a head start towards making customers even happier with the product.
  • Showcase it to customers. If someone has something fancy, they usually want to show it off. People don’t buy beautiful cars and then leave them in a garage and never drive them. The same can be said for excellent customer metrics. If you have something that you’re particularly proud of, you can show it on your marketing website. Customers are drawn to companies that seem stable or responsive, so any metrics that show that off, in particular, would be good.

Combining metrics

Sometimes it can be most useful to combine metrics, rather than looking at them in a vacuum. For example, combining CSAT with first response time can be a very powerful exercise for your support team. If your CSAT drops as your response time rises, it tells you that slow responses may be at least one cause of the drop. If your CSAT drops even as your response time falls, that may surprise you, especially given the link between slow responses and dissatisfaction you just saw.

But through this, you may discover that even though your response times are falling, the quality of your responses is falling with them as your agents try to keep up with demand. That would tell you that you need to hire more people to cover the number of tickets you have coming in.

Once you do that, you may find that your CSAT rises while your response time keeps dropping, because your team members now have the bandwidth to respond quickly and with quality. That is the expected outcome, and it demonstrates the value of comparing two separate metrics: you solved a problem that you wouldn't have been able to see with just one or the other.
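One lightweight way to do this comparison is to line the two metrics up per week (or per month) and check how they move together. Here's a minimal sketch using made-up weekly numbers and Python's built-in Pearson correlation (available in Python 3.10+); keep in mind that correlation only hints at a relationship, it doesn't prove the cause:

```python
from statistics import correlation

# Hypothetical weekly averages pulled from your helpdesk reporting.
first_response_hours = [1.2, 1.8, 2.5, 3.1, 3.6, 4.0]
csat_percent = [92, 90, 86, 83, 80, 78]

# A strongly negative value suggests CSAT falls as first response time rises.
print(f"CSAT vs. first response time: {correlation(first_response_hours, csat_percent):+.2f}")
```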

Some other great metrics to view together are:

  • CSAT and NPS
  • Contact frequency and CSAT
  • Visit frequency and CSAT
  • CSAT and agent metrics
  • FCR and CSAT
  • Churn and NPS
  • Wait time and CSAT


7.

Other metrics to consider

While these are some excellent business-level metrics, there are others that are equally important, though for your internal customers rather than external ones. Knowing how your employees are doing and feeling is also imperative to the health of your company; after all, you wouldn't be in business without them. Here are some metrics that are key to consider when thinking about internal employee health.

Agent well-being

While there isn’t a specific metric that makes up agent well-being, there are several ways to put numbers to it that can be useful when trying to determine where you stand. Here are a few things that you can consider for your agents:

  • Employee Net Promoter Score (eNPS)
  • Sentiment analysis: these tools effectively take anything written — blog posts, Facebook, Twitter, chat — and analyze it for sentiment. That's very useful for understanding your customers, but also your employees.
  • Asking in one-on-ones about how they are feeling.
  • Tracking usage and participation data: are they active in town halls? Are they using all the technologies that the other employees are? The more they engage, the better, according to CIO.

Because your support team can sometimes be the first people that your customers come into contact with, it’s imperative that they be happy and enthusiastic about their work. Monitoring them specifically and keeping track of their happiness is important because, in a way, it’s keeping track of the happiness of your customer as well. Agents take on the emotions of everyone that they talk to on a day to day basis, so make sure they’re emotionally prepared to do that by keeping them happy or at least giving them a safe space to talk about it if they aren’t.

eNPS

eNPS is short for Employee Net Promoter Score and, much like the “regular” NPS, is a method for measuring how willing employees are to recommend their workplace to friends and acquaintances. Like NPS, responses are broken out into three separate groups:

  • Promoters (score 9-10) are loyal enthusiasts who will keep referring others, fueling your company’s growth.
  • Passives (score 7-8) are satisfied but unenthusiastic, and may be vulnerable to offers from other employers.
  • Detractors (score 0-6) are unhappy and may speak negatively about your brand and cause potential damage.

When you send out eNPS, you have the option to include a qualitative question as well, which lets people type out a bit more if they'd like to provide insight beyond a numerical rating. This can be a useful way for you to gauge sentiment, and also to get a bit more information about why people might be loving or hating working for your company. If possible, it's good to ask for contact information so you can reach back out, just in case you want to have a further conversation or have additional questions about the employee's constructive insights.

eNPS is mostly useful to track over time, so consider running it on a quarterly basis rather than just doing it once and never again. While the insights you gain from doing it once may be helpful, they won't be as helpful as tracking over time and seeing either growth or decline in your score.
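If your survey tool only hands you the raw 0-10 responses, the score itself is quick to compute. Here's a minimal sketch using the standard NPS formula (percentage of promoters minus percentage of detractors); the responses below are made up:

```python
def enps(scores: list[int]) -> float:
    """Employee NPS: % promoters (9-10) minus % detractors (0-6), ranging from -100 to +100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical quarterly responses from a ten-person team.
print(f"eNPS: {enps([10, 9, 8, 7, 9, 6, 10, 8, 5, 9]):+.0f}")  # -> +30
```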

Social media monitoring

For many people, as soon as something important or interesting happens, they go straight to social media. People like to share their experiences — both positive and negative — and smartphones make it easy to do so quickly and on the go. Because of that, social media can be one of the best ways to get a handle on how people are feeling about your brand and what they want to see change. While you might already have your support team monitoring social media to take care of any support-related conversations that come through, having your marketing or product teams monitor it occasionally can be super helpful for getting deeper insights.

A few useful tools for this, just in case you don’t want to spend company time with multiple people trawling through Twitter:

  • Google Alerts: Lets you know when your brand appears in a prominent position in results.
  • Mention: A powerful freemium tool that gives you a heads up whenever someone mentions your brand on the web. It’s especially handy for social media tracking, which Google Alerts doesn’t really do.
  • SocialMention: A free tool that analyzes social mentions of your brand across social media. It shows the likelihood of your brand being discussed, the ratio of positive to negative mentions, the likelihood of people mentioning your brand repeatedly, and the range of influence.

Automating away some of the manual searching frees you to read what people have to say about your brand when they think you aren't paying attention, which can give you some honest, constructive insights to improve your product.

Things gone wrong

The Lean Six Sigma approach suggests also looking at the things that have gone wrong, rather than focusing only on the positive aspects of the business. That is exactly the premise of the "things gone wrong" metric. When running qualitative surveys that give people the option to share what has gone wrong, track how many constructive complaints come in for every 100, 1,000, or even 1,000,000 units: survey responses, units sold, or whatever baseline fits your business.
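As a quick illustration of that arithmetic, here's a minimal sketch that normalizes complaint counts to a per-1,000-unit rate (the numbers are made up):

```python
def things_gone_wrong(complaints: int, units: int, per: int = 1000) -> float:
    """Constructive complaints per `per` units (survey responses, units sold, etc.)."""
    return complaints * per / units

# Hypothetical: 27 complaints across 4,500 survey responses.
print(f"TGW: {things_gone_wrong(27, 4500):.1f} per 1,000 responses")     # -> 6.0
# On a per-unit basis (per=1), anything at or above 1 means at least one complaint per unit.
print(f"TGW per unit: {things_gone_wrong(27, 4500, per=1):.3f}")          # -> 0.006
```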

The ultimate worst case for this metric is a score of one or higher, meaning that you get at least one complaint for every survey response or individual product you sell. If that happens to you, there are a few things you might want to consider:

  • Should we have a better debugging process for our products?
  • Do we release things too early, with too little process around our releases?
  • Where are most of the things going wrong, and how can we focus better on that?

Usually, the things gone wrong metric is related to an overarching issue in your product infrastructure, rather than something you can fix immediately. But, these questions will get you on the right track.


8.

Tools to use

There are a few tools that you can use, outside of reporting in your helpdesk or CMS, that can really help provide you with insightful, useful reports. Consider third-party tools if you would like to get more data, but aren’t able to do so with whatever you are currently using for tooling.

Google Analytics

Google Analytics is part of Google's suite of tools and can be an incredibly robust way to understand better what customers (and employees, for that matter) are doing on your site. When using it to track customer satisfaction, you can look at the following metrics to help you get a handle on what's going on:

  • Bounce rates. How often do people come to your marketing site and then leave?
  • Engagement rates. How well are people engaging with your content and your site? Are they returning often? Do they stay long when they do visit? These kinds of metrics are a great representation of loyalty.
  • User flow. Google Analytics allows you to see user flows, which show you where your users are going and how they get there. This is especially useful when paired with Goals, the next item on the list.
  • Goals. Goals allow you to track conversions and the outcomes of specific events tied to pages on your site. When used with user flows, they make for a very impactful story about how your customers are using your site, and whether it's in the way that you would like.

Google Analytics might not be for everyone, as it can be fairly technical to set up, but once you've got it going, it's incredibly robust. Beyond most other tools, it can help you keep a close set of eyes on what's happening with your site and what you could be doing better.

Customer Surveys

Lastly, we talked a lot about surveys and how beneficial they can be for tracking metrics. There are some fantastic survey tools out there, so we wanted to compile a list for you.

Sending out different surveys like NPS or CSAT is an excellent way to gain both qualitative and quantitative insights about your company and what you could be doing better. Many of these services offer build-your-own survey functionality, so you don’t have to pick a single format (like just NPS or just CSAT) and can instead use the same service for multiple different types of surveys.

Make sure that you have a plan in mind as you move forward so that you can pick the survey tool that has all of the features you need. For example, if user path tracking is important to you, look for a more enterprise-grade tool. Similarly, if you're looking for a pre-built NPS template, look for a tool that specializes in NPS. Your tools will only be useful if you do a little bit of work before picking them out.


9.

Conclusion

Metrics are supremely valuable to any business: they let you know what your customers are feeling, they help you get an understanding of what needs to shift and change for your company to grow, and they even give you insights into the feelings of your employees. Having a steady set of metrics that you keep track of can help you to foresee issues coming in the future, and address problems before they occur. They can also help you to boost your customer’s happiness and satisfaction.

While this is by no means an exhaustive guide of all of the possible metrics available or their combinations, we hope that this core set can help get you started on the right path towards metrics nirvana.