Many of our customers have developed models to help predict and prevent specific customer behaviours. Of all of these, the churn model is the one that seems to get the most attention – and the one most often developed, then dismissed as ineffective. So this blog sets out to establish why this particular model is so hard to get right.
We look at not just what a churn model is, but how it is built, applied and measured – exploring where it might go wrong, and how you can make the most of your model investment by applying some basic tests and some measurement know-how.
Churn is a very strange word – derived from the process of churning milk to make butter. The analogy describes how customers churn around your database and either settle as loyal customers (the butter) or become dormant to you (the whey).
As it is much cheaper to retain customers than to constantly re-recruit the same ones, how do you make sure that you keep as many of these churning customers as you can?
Enter our leading lady – the churn model. Developed by our amazing data scientists, this little beauty is supposed to help us determine which of our customers are most at risk of defecting. But as I was reminded this week, these models are quite hard to deliver in the real world, and are very often deemed to fall short of their basic task: retaining the customers they set out to protect at an acceptable rate…
So, if these models fail – where does the problem lie? Is it in the fundamental construct of the model, the interpretation, the tactics we are applying, or the measurement of it? Read on to find out.
I am not a data scientist, but I have been the proponent of enough models to understand that they are generally sound, using regression algorithms to find patterns that identify the behavioural triggers most likely to predict the impending defection of our customers or customer groups. In essence, these predictions are sound – the problem arises from the fact that they are neither built nor used at an individual level, so by default the parameters we are given are a blended average for the customer segment they are built on. They are not absolutes, even though that is what the data scientists may imply, and they need to be applied and tested properly.
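To make the "blended average" point concrete, here is a toy sketch of the kind of score such a model produces. Everything here is invented for illustration – the features, weights and numbers are not from any real model, and a genuine churn model would fit its coefficients to your own data.

```python
import math

def churn_score(days_since_purchase, support_calls, tenure_months):
    """Toy logistic churn score: higher means more likely to defect.
    The weights below are invented for illustration, not fitted to data."""
    # A real model would learn these coefficients (e.g. via logistic
    # regression) across a whole segment - hence the "blended average"
    z = (0.05 * days_since_purchase
         + 0.4 * support_calls
         - 0.02 * tenure_months
         - 2.0)
    return 1 / (1 + math.exp(-z))

# The score is a segment-level probability, not an individual certainty
loyal = churn_score(days_since_purchase=10, support_calls=0, tenure_months=36)
at_risk = churn_score(days_since_purchase=90, support_calls=3, tenure_months=6)
print(f"loyal: {loyal:.2f}, at risk: {at_risk:.2f}")
```

The point of the sketch: two customers get very different scores, but each score is only as good as the segment averages behind it – which is why the parameters have to be tested, not taken as gospel.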
So, as marketeers, we now apply the model to our automated programmes. We use it to target the specific behavioural triggers highlighted in the model – usually a lack of action (e.g. a regular purchase interval missed) at a specific point in time, or a specific action (e.g. a call to discuss renewal rates) that suggests a customer is looking to defect.
What we often forget to do at this point is to test the accuracy of the model parameters – i.e. test intervention around the model date, not just at the model date itself. By doing this you are not only acid-testing the model's efficacy, you are also establishing the best time to intervene.
What the model can’t tell you is what you should do to prevent a behaviour. So, we have to look at the cohort and determine not just when but what measures we should take. Often this is a financial calculation – how much is this customer group worth/how much will I lose if I have to re-recruit? So, we need to be ready to test a series of offers to see which is most effective at retaining each group – but remember:
1) Your offer can be value add as well as discount
2) You have to test the impact of intervention without an offer
3) You should also be testing timing of the intervention
And here we have the crux of the efficacy question. You might have the most amazing model in the world, but your stats are telling you that the results are not adequate – i.e. the conversion wasn't good enough, or the campaign ROI wasn't high enough – so you drop it.
But wait… if you employed a model to reduce churn and therefore improve retention – why are you determining that it doesn't work based on campaign conversion rates? Sure, these are an indication of the relative success of each of your tests – but they aren't the measure of the model.
The model's purpose was to improve retention – so the only true measure of its efficacy is the impact on the retention rate of the groups tested, i.e. how many more customers we have managed to retain as a consequence of the model, and at what cost. Remembering that some customers are worth more than others, you need to look at the ROI by customer RFV or segment.
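As a rough illustration of measuring on retention uplift rather than campaign conversion, the sketch below compares treated and control groups per segment. All the counts, costs and customer values are invented – the point is the calculation, not the numbers.

```python
# Hypothetical results per RFV segment: customers retained with and
# without the model-driven intervention, plus cost and customer value.
segments = {
    "high_value": {"treated": 1000, "retained_treated": 620,
                   "control": 1000, "retained_control": 550,
                   "cost": 5000.0, "value_per_customer": 400.0},
    "low_value":  {"treated": 1000, "retained_treated": 410,
                   "control": 1000, "retained_control": 395,
                   "cost": 5000.0, "value_per_customer": 60.0},
}

def retention_uplift_and_roi(s):
    """Uplift = treated retention rate minus control retention rate.
    ROI = value of the extra retained customers, net of campaign cost."""
    uplift = (s["retained_treated"] / s["treated"]
              - s["retained_control"] / s["control"])
    extra_customers = uplift * s["treated"]
    roi = (extra_customers * s["value_per_customer"] - s["cost"]) / s["cost"]
    return uplift, roi

for name, s in segments.items():
    uplift, roi = retention_uplift_and_roi(s)
    print(f"{name}: retention uplift {uplift:+.1%}, ROI {roi:.2f}")
```

With these made-up numbers the same model shows a healthy return on the high-value segment and a loss on the low-value one – exactly why ROI has to be cut by RFV or segment before you judge the model.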
Churn models can be effective when applied properly, but often get dismissed because they are not tested or measured robustly. So, if you want to harness their benefits – you need to remember that the timing parameters need to be tested as well as the tactics you use and that their true measure is not at a campaign ROI or conversion level – but in how many more customers you have managed to retain.
If you need any help in constructing, applying or measuring the impact of a churn model then get in touch.