I was asked by Brook Perry from ’nuffsaid if I would be interested in contributing to an article she was working on with others, gathering feedback on a set of questions covering customer success operations. The topic is close to my heart, so I agreed. I’ll update this post with a link to the article once it publishes so you can get the input from the others, but here are my answers for now.
This is a worthy challenge I’ve grappled with before; just check out my posts under the metrics tag. The other day I got my hands on a recently published Forrester report with the same title as this post. I cannot share the report itself for obvious reasons, but this is my review of its highlights, which do share some details.
Customer Success teams in SaaS companies (mostly what I am focusing on here) probably like to think they are the spearhead for making customers successful (as the name suggests). In truth, it’s not that simple (is it ever?).
First, a little context on the two DanelDoodles I shared in this post. I use the Paper app (iOS) for my doodles. It has a Paper Store where you can buy workbooks, and I purchased one on data doodling. Check out a video of the workbook. Basically, it’s a set of instructions and practice steps for creating data doodles.
As part of the practice you create a mind map to break down a topic and then you start to define sources from that breakdown for your data doodle. So that is what you see in the two doodles. What a fun exercise it was and I plan to do more.
Now a few extra words about the subject. I chose it since I am a practicing professional in the space and wanted to explore some recent changes. As mentioned, Customer Success Managers will often take the lead in making customers successful. But since this is a relatively new role in many companies, it will often fall to other roles until CSMs get up to speed. Even when they do, it’s a team game.
In the mind map I tried to plot all of the functions that typically interface with the customer and even some that don’t normally, other than in some minor way. I also tried to add some new aspects to the typical roles and how they have evolved to work with customers. I’m not saying this mind map is in any way complete or even correct – it was done as part of the exercise and was a spontaneous process, no science.
Once you create the base mind map, you look to identify relationships that can be explored in the data doodle. These are the larger circles which I believe represent the old and new relationships working to serve customers.
I used one of the template charts from the workbook and plotted a change in the value the relationships were creating over time. These are then further categorised into new and old. The new and old references do not imply that some partnerships are disappearing or will be replaced by newer ones. And the trends are a non-scientific view of mine, based on my experience in the customer success role since roughly 2012.
Things are definitely on the move, and this view may not stand the test of time for very long. Take Microsoft, where I work (disclosure): we have just announced a major change in our Customer Success function. I cannot go into it, but this public post by a long-time and trusted Microsoft journalist at ZDNet gives some detail: Microsoft makes changes in its field sales, support groups as FY’21 begins.
I had an interesting conversation on LinkedIn the other day. It was based on a retrospective view of the customer success profession, which I had written about in The State of Customer Success in 2018 after attending a conference. I captured the essence of the conversation in a DanelDoodle and discuss it briefly here.
This last week I attended a meetup and workshop in London organised by Customer Success Network, a Europe-based not-for-profit community for customer success managers. It had the same title as this post.
It was an excellent session, which started with a few minutes of talking by Dan Steinman, GM of Gainsight EMEA. I then facilitated one of the breakout workshop sessions, on how good data should be used in QBRs (Quarterly or Executive Business Reviews, as they are commonly known). Here are some brief notes.
Dan started off by noting that we all have some “good enough” data, which should be good enough for starters. In other words, don’t get hung up on not having a perfect set of usage data or a perfect reporting setup. You can easily get started with things that don’t require usage data but can tell you a lot about your customer and how to manage them. Things like:
- How long have they been a customer?
- How many renewals have they done?
- What is their ARR now vs originally?
- Are they paid up on their bills?
- How many support cases are open?
- Survey results?
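To make this concrete, here is a minimal Python sketch of how those non-usage signals could be pulled together into a quick account triage. The field names, thresholds, and scoring are entirely my own illustration, not an established model:

```python
# Hypothetical sketch: combining "good enough" account data (no usage data
# required) into a rough health snapshot. All thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    tenure_years: float       # how long have they been a customer?
    renewals: int             # how many renewals have they done?
    arr_now: float            # current ARR
    arr_original: float       # ARR at initial sale
    bills_paid_up: bool       # are they current on invoices?
    open_support_cases: int   # open support case count
    last_survey_score: int    # e.g. a 1-5 satisfaction score

def quick_health(acct: Account) -> str:
    """Very rough triage using non-usage signals only."""
    flags = 0
    if acct.arr_now < acct.arr_original:
        flags += 1            # the account has contracted
    if not acct.bills_paid_up:
        flags += 1            # billing trouble
    if acct.open_support_cases > 5:
        flags += 1            # heavy support load
    if acct.last_survey_score <= 2:
        flags += 1            # unhappy survey result
    return "at-risk" if flags >= 2 else "watch" if flags == 1 else "healthy"

acme = Account("Acme", tenure_years=3.5, renewals=3, arr_now=120_000,
               arr_original=80_000, bills_paid_up=True,
               open_support_cases=2, last_survey_score=4)
print(quick_health(acme))  # healthy
```

Crude as it is, something like this is enough to rank a book of business and decide where to spend your week, long before perfect usage data arrives.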
As for the elusive product usage data, though, you HAVE to get it at some point. Some of the ways mentioned: Segment.io, MixPanel, Google Analytics, Aptrinsic. I’ve used MixPanel, which was okay, and had a great experience with Looker too. In my current work I use Power BI, where we actually focus on enabling the customer to have the same views and insights as the customer success manager.
On the last point above, this is holy grail territory in my view because then you and the customer can have truly meaningful conversations since there is a plain and evident, single source of truth you can discuss strategies around.
Back to product usage data. Your product/engineering team should want it as badly as you do. Start with the bare minimum – logins, pageviews, reports run, etc. Don’t accept no for an answer.
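As a rough sketch of what “bare minimum” can mean in practice, here is a hypothetical event record. The event names and fields are my own illustration; real tools like Segment or MixPanel define their own schemas:

```python
# Hypothetical sketch: the bare-minimum usage events worth capturing first.
# Event names and fields are illustrative only.
from collections import Counter
from datetime import datetime, timezone

def track(events: list, account: str, event: str) -> None:
    """Append a minimal usage event: who did what, and when."""
    events.append({
        "account": account,
        "event": event,   # e.g. "login", "page_view", "report_run"
        "at": datetime.now(timezone.utc).isoformat(),
    })

events: list = []
track(events, "acme", "login")
track(events, "acme", "report_run")
track(events, "acme", "login")

# Even this tiny record answers first-order questions:
logins = Counter(e["event"] for e in events if e["account"] == "acme")["login"]
print(logins)  # 2
```

Three fields per event is enough to answer “are they logging in at all?”, which is where most usage conversations start.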
Muck in, even if it means having to learn a new tool. I remember spending an enormous amount of time learning first MixPanel and then Looker in my last role. All the product team had done was create the connections between the usage data and the reporting tool; how to make sense of it was left to your own devices. But oh how rewarding when it works and you start making sense of the data and having the right conversations with the customer.
And it’s not just your product/engineering team who should want it as badly as you do. Marketing and sales teams have spent decades and millions perfecting their understanding of prospects. Once they understand that customers are the new growth engine, they’ll be on board to help you create and share access to the same level of understanding of customers.
Different use cases for data
The workshop breakouts were pretty much focused on different use cases for data. I facilitated the one on QBRs. The activity was focused on mapping as-is and to-be QBR data definitions. First we defined the traditional definitions. Next we challenged them: how else could we focus on predictive or future-focused growth measures? What were they?
The output was a view of mapped current and future-focused CS measures, and why you’d use them. Here is the group’s output after I took the raw material, cleaned it up, tweaked it and added a little of my own spin.
The other breakout sessions all explored different aspects/use cases of data usage like:
- You work in a small start-up where customer success is just evolving. You want to be able to demonstrate the role of the team internally, to show the impact you are making.
- Your company has been expanding rapidly, the growth of MRR is now driven by expansions and upselling, which is owned by the Customer Success team. You’ve been asked by your CEO to demonstrate CS’s impact across the business to prepare for another round of funding.
- Customer Success teams are increasingly expected to become more financially driven. This exercise was intended to demonstrate their role in contributing to the growth of the company.
- Drawing out a success plan which would help the most immature customer success team understand:
- What value looks like and where CSMs can get data from (even if they don’t “have any” today)
- How to track customer health through the life cycle with what they have
NOTE: There is an evolving document capturing the output of all the sessions.
Creating customer advocates is treated as a measure of customer success by some.
Not by enough, though, in my view. And I have a view because I’ve helped create a fair share of advocates. Just recently, three of my customers got up on stage at a large event and spoke to other customers and prospects.
I’ve also had the CEO of the largest bank in Africa come and speak at an event I created for other customers and prospects in the financial services industry. He went on to expand his business with my then employer to the tune of a $30 million, multi-year deal (TCV).
If ever there was a worthy proxy for customer success this would be it. Apart from renewal or expansion I’m not sure there can be a better indicator of customer success. And as you saw in my example above, the one often leads to the other.
How you qualify an advocate
Okay, there are some prerequisites to qualifying someone as an advocate.
They should ideally be the person or persons responsible for paying for the technology.
Ideally they should be highly influential in the organisation.
Failing either of the above, vast numbers of advocates would make up for this.
What else marks someone out as an advocate?
They are willing to speak for you in public about their use of the platform.
They have great stories of that use that they can regale other customers or prospective customers with.
They are happy to write reviews or case studies and jump on calls with other customers or prospective customers.
Last but not least, they should be credible – either as speakers or writers, or through good standing in the community or industry.
Why advocates qualify as a measure of success
Well, in the first instance you’d think they were supremely satisfied – why else would they be an advocate? And customer satisfaction is a measure of customer success, so this would be one way of indicating it.
They are also very tangible. You can count how many times a customer speaks for you, jumps on a reference call, or helps author a case study. Tracking these success events allows you to track customer success performance very easily.
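Because these events are discrete and countable, a tally is trivial to build. A quick sketch – the event types and weights here are my own illustration, not any industry standard:

```python
# Hypothetical sketch: counting advocacy events per customer.
# Event types and weights are illustrative only.
from collections import defaultdict

advocacy_events = [
    ("acme", "conference_talk"),
    ("acme", "reference_call"),
    ("globex", "case_study"),
    ("acme", "case_study"),
]

# Weight higher-effort advocacy more heavily (illustrative weights).
weights = {"conference_talk": 3, "case_study": 2, "reference_call": 1}

score = defaultdict(int)
for account, kind in advocacy_events:
    score[account] += weights[kind]

print(dict(score))  # {'acme': 6, 'globex': 2}
```

Even an unweighted count works; the point is that advocacy, unlike satisfaction, leaves a concrete trail you can report on.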
How to create advocates
- Make them successful; that’s the first step. Meaning not only how they achieve outcomes with your technology, but also how they are perceived in their organisation. Making them look good in the eyes of their peers is a surefire method.
- Nurture the relationship and make advocacy a clear expectation upfront, before you put the effort in. Let them know that, given everything goes well, you’d like them to be an advocate for you.
- Make others in your organisation responsible for building them up, especially executives. Lavish relevant attention on them. Working through product teams and executives to make sure their voice is heard in new feature development is one good example.
- Elevate their advocacy and profile, and enhance their credibility, by putting them on stage, helping them co-author credible content or case studies that can be widely shared, or getting them in front of peers, e.g. at meetups.
- Arm them with the success stories and product knowledge they need to be effective advocates. It’s no use if they are willing but not able to be effective.
One chapter of my new eBook / trend report is going to cover metrics. What you track and how you track it.
There is much written about the metrics themselves, and I wasn’t yet ready to delve into that. Instead, I doodled what I thought was largely the ideal dashboard I could have in front of me as I went about my everyday customer success work.
Just some very high-level thinking, unfettered by detail and by any influence from the many voices out there on the subject.
It boiled down to this – how to measure what impact my actions are having, either directly or indirectly. By indirectly I often mean the actions I influence the customer to take on behalf of the user and/or platform. Or unforeseen circumstances that influence things.
I took the resulting doodle and shared it in a LinkedIn group that has a lot of customer success bodies in it. The Customer Success Forum has 24,738 members at the time of writing this.
My intention was to try to garner some feedback. The post got 39 likes and 20 comments (quite a few of the latter are mine, in response). I’m not sure how good that is, but from what I have observed it’s not a particularly active group despite the numbers. Perhaps that’s because of the draconian community policies. Some are sensible, but most hinder participation in my view.
But I digress. The main point of the exercise was to do a soundcheck and it achieved what I wanted. I think the responses have mainly validated the thinking and I got some terrific input. I have captured my initial brief and request plus a summary of the responses below.
The Ideal Customer Success Dashboard
I’ve tried to capture what I think the ideal is in a quick doodle – I hope it’s legible and the points are clear. The question I have for this forum is: what do you think, and more importantly, do you have such a tracking system in place? If so, how are you achieving it?
So the key for me is the tracking, not so much the dashboard. And crucially, being able to track your customer success interventions (with automated or manual input) and tie them to outcomes. What those exact interventions and outcomes are is also less important (I’ve suggested some by way of example). Do you do it at all, and if so, how – in one system, with disparate tools layered underneath a reporting tool, etc.?
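To illustrate the tracking idea, here is a minimal hypothetical sketch that pairs each logged intervention with the next reading of an outcome metric. All the names, dates, and values are invented for the example:

```python
# Hypothetical sketch: log CS interventions, then pair each with the outcome
# metric observed afterwards, for a crude before/after view. Illustrative only.
from datetime import date

interventions = [
    {"account": "acme", "action": "onboarding_workshop", "on": date(2020, 3, 2)},
    {"account": "acme", "action": "exec_business_review", "on": date(2020, 6, 15)},
]

# An outcome metric sampled over time (e.g. weekly active users), date-ordered.
outcomes = [
    {"account": "acme", "on": date(2020, 3, 1), "wau": 40},
    {"account": "acme", "on": date(2020, 4, 1), "wau": 65},
    {"account": "acme", "on": date(2020, 7, 1), "wau": 90},
]

def outcome_after(account: str, action_date: date) -> int:
    """First metric reading after an intervention; -1 if none recorded yet."""
    later = [o["wau"] for o in outcomes
             if o["account"] == account and o["on"] > action_date]
    return later[0] if later else -1

for i in interventions:
    print(i["action"], "->", outcome_after(i["account"], i["on"]))
```

Correlation is not causation, of course, but even this crude pairing of interventions and subsequent readings is what most teams lack, and it is the raw material the dashboard would sit on.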
The most relevant feedback I got in the comments is captured below, with some of my responses. There were some really excellent points I’m going to incorporate into more thinking for this chapter.
- Violaine YZIQUEL
- Thanks Stephen for the illustration – makes total sense. Have a look at Kitewheel, it seems they are tracking customer journey events. Iterable too.
Thing is before going into a tool solution, the hardest part to me is to identify the current interactions, channels, content and triggers (and by whom) and see how these can be tailored for your customers (what segmentation? What maturity? What use case? What customer ie end user/decision maker/admin?). Happy to learn more from you all on this!
- Really great points Violaine, totally agree. First try to identify current and ideal touchpoints, then see how you can track them and what impact they have. Then slice and dice from there.
- Matt Myszkowski
- So this makes sense to me, but the biggest challenge I have always found is the correlation between activities (tactics) and real, meaningful business outcomes.
- Absolutely, totally agree Matt, and thanks for the comment. That rabbit hole is for another post; I simply denoted this side of things with the value label on the vertical axis, alongside usage, which you have to have first before you can get to the value that leads to a satisfied customer. Targeting specific business-related outcomes is key, as is agreeing KPIs around them, and most often this is done through use cases. Achieving these drives value creation. In this post I am most interested in the means of tracking all these things and what others are doing around this.
- Stevie Bickford
- Great illustration Stephen. We’re still working on this and the automated / manual divide. As others have referred to, for me a lot of this ties back to evidencing the value of Customer Success. Being able to track and quantify this is super important to help show the value add to the team, management / board and the customer.
- Stevie, you absolutely nail another really crucial element of all this – in addition to knowing which success activities are impacting key outcomes, it’s being able to show senior execs the value the customer success team is delivering to the business. Very well pointed out.
- Chuck McMahon
- I believe it has been inferred here, but to put a point on it, in addition to the softer values that a successful support effort brings, finding a way to monetize the value is huge. Leadership and the Board do want to see that your clients are happy, but what is the Cost of Service? If that goes up exponentially with a fractional increase in client sat, not good. Try also to equate your client sat to renewal rates and/or NPS (or some similar) to new sales. Leadership LOVES to be able to bring a dollar story to the table…
- Couldn’t agree more Chuck, thanks for the feedback.
- Mike Grafham
- Like the image here, definitely a useful view. One thing you might want to consider overlaying is what you *expected* to happen, e.g. did your training event happen within the window that it was supposed to happen based on what typical customers do, or was it early/late. Are your peaks happening at the expected time, or earlier/later? That way you’ve got both the ‘what is’ as well as the ‘what should be’.
- Thanks for the comment Mike. Excellent idea. I would tie it into a success plan, and specifically the actions you have planned for the period ahead to achieve the metrics you are targeting that quantify success. Then as you go along you can see how you are tracking, or at worst you can look back and see how you did against what you planned.