Using Data is Not Just a Starting Point

My educational background is deeply steeped in research methodology, testing and measurement, and data-based decision making. This grounding in research design has shaped the way I approach actionable insights and marketing. I believe using data and analytics is not a starting point or an endpoint, but that data should live throughout the process in an iterative and comprehensive way.

What Does This Mean, and How Do You Start?

The first step to using data is looking at what you have and how it’s organized. Our past few blog posts have been focused on customer segmentation and engaging with these segments in various ways across customer journeys. Viewing customers through the lens of segmentation is the best way to apply actionable insights in a meaningful way. However, for segmentation to be effective it needs to be both significant and meaningful.

By significant, I mean that the groups should be statistically different from each other in some definable way. To be meaningful, segments should drive actions that are unique to that segment. As you can see, we have already begun to layer in data and insights, even before we have posed our marketing questions. Understanding your customers at this basic level allows you to propel your marketing from a data-based foundation, allowing for more targeted and meaningful interventions.

Your Gut is Good, Data is Better

Think back to all the marketing messages you have sent over the years, whether they were single-channel, multi-channel, or omni-channel. How many times did you wonder if your campaign was effective? Did sending an email work better than calling? Should we have sent a handwritten card instead? Now think about how many times you said, “I think that email was a success (or failure) because the response rate was high (or low).” The fact is, without a control group that received no treatment, you can’t accurately measure success, and therefore you don’t know what to repeat and what to continue testing.

Experimenting, or testing, to hone in on the most effective touchpoints or messaging is essential before applying that methodology to the whole campaign. For example, if your consistent buyers who are declining in spend frequency and velocity respond positively to social media ads but not to concierge phone calls, then do not waste the time, money, and bandwidth on concierge phone calls. I would also argue that if they respond equivalently to social media ads and concierge phone calls, you should still skip the phone calls: when two channels produce the same result, the cheaper, less labor-intensive one wins.

One and NOT Done

Because we are working with people, no experiment can truly eliminate all other factors. This means that when evaluating your results, be open to trying the experiment again with a different segment, at a different time period, or with different messaging. For example, just because social targeting didn’t impact your club attrition rate, that doesn’t mean it won’t impact your new vintage release conversions. Behavior changes with time, and your winery data is based on behaviors. It lives and breathes with your customers. This is why you should be open to failure, open to testing multiple methodologies, and open to repeating experiments over time.

Intervention Strategies and Experimental Design

Below I am going to outline the steps to test intervention strategies in a measurable and trackable way. I’ll try not to get too technical and boring, but no promises. Just remember, this does not need to (and should not) be done across all of your segments all at once. Pick the segment you feel will have the most impact on your business and start small.

Start with a Null Hypothesis

Experimental design begins with a null hypothesis. If your eyes just glazed over a bit, the null hypothesis is simply the assumption that there is no significant difference between or within the set of observations you want to measure. For example, if you want to test what offer has the biggest impact on average order value, you would start with a segment of customers who have the same average order value. The null hypothesis is assumed to be true at the start of the experiment, meaning the customers in this segment are all the same and will behave in the same way. Why is this important? Because it allows you to determine if your intervention strategy is working (and if it is working in the way you need and want it to work).
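To make that concrete, here is a minimal sketch of testing the null hypothesis that two offer groups have the same average order value, using a two-sample z-test built from Python’s standard library. All of the names and order values here are invented for illustration; with real data you would pull these lists from your own customer records.

```python
import math
import statistics

def two_sample_z_test(a, b):
    """Two-sample z-test for a difference in means.

    Null hypothesis: the two groups have the same mean.
    Returns the z statistic and a two-sided p-value
    (normal approximation; reasonable for larger groups).
    """
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    z = (mean_a - mean_b) / se
    # Normal CDF via the error function; p is the two-tailed area.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical average order values for two offer groups
offer_a = [82, 75, 90, 88, 79, 85, 91, 77, 84, 86]
offer_b = [70, 68, 74, 72, 69, 75, 71, 73, 67, 76]
z, p = two_sample_z_test(offer_a, offer_b)
# A small p-value (commonly below 0.05) means we reject the null
# hypothesis that the two offers perform the same on average.
```

With real sample sizes you would want a proper statistics package rather than this hand-rolled version, but the logic is the same: assume no difference, then let the data argue otherwise.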

Create A Control Group

When using an intervention for the first time, take a subset of your segment and divide that subset into several groups. One group is your control: they get no intervention. The other group(s) get exposed to the intervention(s) you are testing. The control group gives you a baseline, a comparable set of customers who experienced nothing, against which every intervention can be measured.
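As a rough sketch of that split (the segment, group names, and customer IDs are all hypothetical), you can shuffle a subset of the segment and deal it out into a control group plus one group per intervention:

```python
import random

def split_into_groups(customers, n_groups, seed=42):
    """Randomly assign customers to n_groups equal-ish groups.

    Group 0 is the control (no intervention); the remaining
    groups each receive one of the interventions being tested.
    Shuffling first keeps the assignment unbiased.
    """
    pool = list(customers)
    random.Random(seed).shuffle(pool)
    groups = [[] for _ in range(n_groups)]
    for i, customer in enumerate(pool):
        groups[i % n_groups].append(customer)
    return groups

# Hypothetical subset of a segment, split three ways:
# control, email intervention, and social media intervention
subset = [f"customer_{i}" for i in range(300)]
control, email_group, social_group = split_into_groups(subset, 3)
```

The fixed seed just makes the assignment reproducible for auditing; the important part is that membership in each group is random, not chosen by hand.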

Let’s use club memberships as an example here. Imagine you have segmented your club members and identified all members who are approaching a critical point (perhaps the average tenure for that club tier) and your goal is to decrease attrition rates. You decide to send these members an email reminding them of their club benefits. Then attrition rates increase. Was it because you sent the email or another factor? A control group can help answer that question.

If you have a control group (a subset who did not receive the email) you can compare attrition rates between the control group and the intervention (email) group. If the attrition rate for the intervention group is significantly higher than that of the control group, then you can conclude the email contributed to the increase in cancellations. However, if there is no difference between the two groups, then you conclude the email did not cause the increase in attrition rates.
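Here is one way to sketch that comparison in Python, using a two-proportion z-test. The member counts and cancellation counts below are invented for illustration; in practice they would come from your club management system.

```python
import math

def two_proportion_z_test(cancels_a, n_a, cancels_b, n_b):
    """Two-proportion z-test for a difference in rates.

    Null hypothesis: both groups have the same attrition rate.
    Returns the z statistic and a two-sided p-value.
    """
    rate_a, rate_b = cancels_a / n_a, cancels_b / n_b
    # Pool the two groups to estimate the shared rate under the null.
    pooled = (cancels_a + cancels_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical results: 27 of 150 emailed members cancelled,
# versus 12 of 150 members in the control group.
z, p_value = two_proportion_z_test(27, 150, 12, 150)
# If p_value falls below your threshold (commonly 0.05), the email
# group's attrition rate differs significantly from the control's.
```

A significant result with a higher rate in the email group is the scenario described above: evidence the email contributed to the cancellations rather than some outside factor.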

Measuring Your Outcomes

The null hypothesis should help you determine what outcomes to measure. For example, if your null hypothesis is that there is no difference in average order value between group A and group B, then you should be measuring average order value, and not the number of orders. This goes back to the concept of aligning your goals with your methodology and what outcomes you focus on, discussed in the previous post.

Be honest in your measurement, and willing to change your approach if the outcomes are not in the direction you desire. It is very easy to become attached to a strategy, especially if it was your idea, or if you have seen it work before. Experimental design, and a willingness to invest the time into the experimental process, will allow you to identify the most effective intervention strategies for your customers. 

Key Experimental Design Takeaways

  1. Start small. Pick a representative sample of a cohort to test your interventions on.
  2. Use a control group!
  3. Be clear about the outcome of interest, so you measure the right metric.
  4. Once you know what works, generalize it to the rest of the segment.
  5. This should be an iterative and responsive process. Remember, data is a living entity, and your segments will shift and change with time and interventions. Do not become stagnant or complacent in how you engage with them.
  6. Don’t be afraid to fail. Oftentimes, knowing what does not work is as important as knowing what does (and using a small test group gives you room to experiment with different strategies). Test, and test again. Other factors may be influencing the experiment, and you can modify parts of the intervention to hone in on what drives success.