3 ways data can steer you wrong — and how to glean better insights
The wrong data is just as harmful to marketing as having no data. Here's how a holistic testing methodology can help you glean better insights.
Modern marketers are obsessed with data — and for a good reason. It gives us direction and informs our strategies, among many other benefits. But not all data is equally useful or helpful. The wrong data sets can be just as damaging to your marketing program as having none.
It’s essential to know how to identify the right data, so your insights accurately guide your decision-making. In this article, I’ll tackle three ways data misuse can harm our marketing efforts and how a holistic testing methodology can help you glean better insights.
At the heart of significant marketing trends is data
Look at the trends that have occupied marketers’ attention over the last three years:
- 2021: The rise of Apple’s Mail Privacy Protection feature, which masked user activity data. Marketers panicked at the thought of losing open rates.
- 2022: Zero-party and first-party data, which people suddenly realized they needed to offset the pending loss of third-party cookie information.
- 2023: The rise of ChatGPT and other AI-driven (meaning data-driven) natural language processing models.
Whether it involves losing access to data, needing to find new sources of data, or giving it the power to make decisions for us, data is at the heart of these trends.
Data has technological and philosophical definitions. It can be information a computer can use for processing or, as Google defines it, “things known or *assumed* as facts, making the basis of reasoning or calculation” (emphasis mine).
That “or assumed” part is where we can go wrong with data. People always say, “The numbers don’t lie.” Data might not lie, but it also might not mean what you think.
Dig deeper: Why we care about data-driven marketing
The good side of data
I might tell you things you already know about data, but hear me out. We rely on data daily, both in the obvious things and the non-obvious (to borrow a term from Rohit Bhargava).
For email marketers, the obvious includes marketing data we use when creating and structuring campaigns, choosing audiences, measuring success and taking the next steps. This is why email marketing is so useful. It generates data we can apply throughout the entire marketing ecosphere.
Then, there’s the non-obvious value. Our email data can inform other marketing channels and even go beyond the marketing team to support customer service, business operations and more throughout the company.
Our campaigns are like an ongoing source of market research. Because the people we email are our prospective and existing customers, we’re tapping into, tracking and measuring our customer base daily.
On top of all that, today’s marketing technology makes it easy to gather data. We find data everywhere we turn — in our ESPs, automation platforms, CRMs and ecommerce engines. Numbers are flying past us so fast we can’t catch them all.
But that’s my point. We don’t need all the numbers coming at us. We need to know what the right numbers are and what they mean — which is where we often go wrong.
Dig deeper: Why we care about email marketing: A marketer’s guide
‘Insights, insights, insights’ not ‘data, data, data’
As my good friend Chad S. White, author of “Email Marketing Rules,” perfectly put it:
“You talk about ‘data-data-data.’ I’m not a fan of data. Nobody really wants data. What they really want are insights, and analytics are how you find the insights that are hiding in your data.

Data will steer you wrong a lot. You need to make sure you’re bringing your knowledge about your customers, your knowledge about your business and analyze that data to squeeze out all the crap and be left with the stuff that’s gold.

There’s a lot of misdirection in the data. So insights, insights, insights. That’s what we want.”

– Chad S. White during a keynote speech at the ANA Email Evolution Conference in Washington, D.C.
You can collect all the data you want, but you also must sort the necessary data from the extraneous, the relevant from the irrelevant and the real from the fake to learn what it really means.
We collect data not to fill up data silos, lakes and warehouses but to use it to understand our customers and measure how well our marketing programs work. Everything else we do as marketers hinges on those efforts.
Thus, it can be damaging when we collect bad data or look for meanings data isn’t empowered to give because, as White says, it steers us wrong. The wrong data is just as bad as no data — maybe worse because it can give you a false sense of security and achievement.
If you use email data to inform your brand’s understanding of your audience and drive decision-making beyond the marketing department, imagine the chaos if you base your insights on faulty analytics.
When good data goes bad: 3 scenarios to watch out for
Let me correct myself: data doesn’t go bad. It’s how we use and interpret it that creates problems. When you misuse your data — deliberately or accidentally — you can take your team and even your company down a long, wrong path. It’s easy to do, especially if you’re trying to optimize your email program by testing various parts and not just operating on instinct.
I can’t count how often clients started out with good testing intentions and veered off the path because the tests were set up incorrectly or because the team came to the wrong conclusions. Here are three scenarios where data can steer you wrong in A/B testing.
1. Optimizing for the wrong success metric
Email is famous for being easy to measure, yet all too often the metric we choose doesn’t capture the true success or failure of our campaigns. Let’s not be too hard on email, though. Marketers in all channels, from social media to influencer marketing, make the same mistake.
The open rate is the obvious culprit. This metric solves one problem that plagues digital and traditional marketers alike — knowing whether someone actually viewed our campaign or just scrolled past it, turned the page, tossed out the catalog or got up for a snack during the commercial. No wonder we leaped on sizable open rates as a measure of success.
But those big open rates often don’t translate into the metrics that matter, such as campaign revenue, orders, basket sizes, repurchases and other campaign-related numbers. If you use an intriguing subject line to optimize for a higher open rate, lots of people might open that email out of curiosity and then not go on to click. So you get an extraordinary open rate, but your campaign failed.
Many marketers panicked when Apple’s MPP feature launched in 2021 because it masked email activity data, like opens, times and locations. They worried that they would lose a key performance metric. It was a timely reminder for the rest of us that the open rate doesn’t always correspond to our campaign goals.
However, the MPP work-around many suggested — to focus on the click rate — is only slightly better advice. Clicks are more tangible proof of customer interest than opens. But they can be gamed, too, and they don’t always correlate with conversions.
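To make the mismatch concrete, here is a minimal sketch with hypothetical numbers (the campaign stats and function names are mine, purely for illustration) showing how one subject line can win decisively on opens while losing on the metric that actually pays the bills:

```python
# Hypothetical results for two subject lines, 10,000 delivered emails each.
a = {"delivered": 10_000, "opens": 5_400, "orders": 54}   # curiosity-bait subject
b = {"delivered": 10_000, "opens": 2_400, "orders": 120}  # plain, specific subject

def open_rate(c):
    return c["opens"] / c["delivered"]

def conv_rate(c):
    # Conversions per email delivered, not per open -- so the two
    # subject lines are compared on equal footing.
    return c["orders"] / c["delivered"]

# A "wins" on opens (54% vs. 24%), but B drives more than twice the orders.
```

The point of measuring conversions per delivered email rather than per open is that it keeps a curiosity-driven open spike from flattering a campaign that never converts.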
2. Changing direction based on one-off testing
This error goes hand in hand with optimizing for the wrong success metric. It happens when you run a single A/B test on a single feature, like a subject line, call to action, offer, image, body copy or time of day.
These tests are easy to do. Many ESPs let you set them up with just a few clicks. You might even get results that look clear-cut and unassailable.
“Subject line A got a 54% open rate and a 25% click rate. Subject line B got a 24% open rate and a 12% click rate. Subject line A is the winner! Let’s do all of our subject lines like subject line A from now on!”
This assumes two facts the data doesn’t give you:
- A got more opens than B, and it also converted better.
- Your audience will always respond better to subject lines like A.
A single A/B test gives you results only for that campaign, at that time, with that audience. But your audience is constantly changing. The people who opened and clicked on your so-called winning version this time might not be the ones who see your next campaign. Or they’ll see it but not respond the same way.
Changing your email approach based on a single test can lead to disaster. You need to keep testing and testing different components and making sure your success metrics reflect your campaign goals.
If you want people to see your message, an open or click rate can work. But if you want them to purchase, register, upgrade, download, create an account or do some other business-related action, then you must keep testing to see what works over time.
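One way to see why a single test is shaky ground is a standard two-proportion z-test. The sketch below uses made-up numbers; the point is that a difference that looks "clear-cut" on a small sample often isn't statistically distinguishable from noise, while the same rates at higher volume are:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the gap between two rates real or noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se  # |z| >= 1.96 ~ significant at the 95% level

# A "clear-cut" win on a small sample: 12 vs. 7 conversions per 200 sends.
z_small = two_proportion_z(12, 200, 7, 200)       # |z| < 1.96: could be noise

# The same conversion rates at 20x the volume do clear the bar.
z_large = two_proportion_z(240, 4000, 140, 4000)  # |z| > 1.96
```

Even a significant result only tells you about that campaign, that audience and that moment, which is why repeated testing matters more than any single winner.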
Dig deeper: How testing can give your email marketing a conversions boost
3. Relying on ad hoc testing instead of scientific methods
“Ad hoc” is a fancy term for “guesswork.” You’re essentially throwing things against the wall to see what sticks — testing a single component instead of taking a hypothesis-driven, holistic approach that considers all aspects of a campaign.
When you test on the fly, you open yourself up to the problems people encounter when they test a single component and then change direction based on that data. Again, the data isn’t wrong, but the conclusions you draw based on that data could be.
Scientific testing using a hypothesis is more likely to deliver meaningful data because it gives you a framework for deriving workable insights. Test duration is one example. All too often, decisions are made too early in A/B testing. Let’s say your email platform’s A/B test feature lets you send Version A to one sample audience made up of 10% of your list and Version B to another 10% of your list, wait a couple of hours and send the winner to the remaining 80%.
This method might give useful results if you test for opens or clicks. But when conversions are what matters, it doesn’t work. A 50/50 split is more suitable when success is measured on conversions. It lets you wait three days to a week before declaring a winner and drawing conclusions.
Meaningful actions such as conversions don’t always happen in the first two hours, and optimizing for those quick results may mean optimizing for the wrong result. A 50/50 test also gives you a larger sample size, making the test more robust.
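The sample-size argument can be put in rough numbers. This sketch uses the standard normal-approximation formula for comparing two proportions (the baseline and target rates are hypothetical); it estimates how many subscribers each arm needs to reliably detect a given conversion lift:

```python
import math

def sample_size_per_arm(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate subscribers needed per test arm to detect a lift from
    p_base to p_target at ~95% confidence and ~80% power
    (normal approximation for two proportions)."""
    p_bar = (p_base + p_target) / 2
    delta = p_target - p_base
    return math.ceil(2 * p_bar * (1 - p_bar) * (z_alpha + z_beta) ** 2 / delta ** 2)

# Detecting a 2.0% -> 2.5% conversion lift takes roughly 14,000 subscribers
# per arm. With a 10%/10% split you'd need a list of ~140,000 to get there;
# a 50/50 split needs only ~28,000.
n = sample_size_per_arm(0.020, 0.025)
```

Conversion rates are small numbers, so detecting movement in them takes far more volume than detecting movement in opens or clicks, which is exactly why the quick 10/10/80 pattern falls short for conversion goals.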
Combining scientific methods with holistic testing methodology gives you a broader understanding of your audience and what motivates them. Read more about testing problems and my holistic testing approach in this MarTech column, “7 common problems that derail A/B/n email testing success.”
Is your data telling you the right story? Try this litmus test
New clients often are skeptical when I point out (diplomatically, of course!) that their campaign performance or testing data doesn’t support the conclusions they’ve drawn from it. Why doesn’t the email make money even though they get great open or click rates?
If you’re wondering the same thing, my litmus test can reveal what happens when you use the wrong metrics to declare success or failure.
Create three lists:
- The top 10 campaigns with the highest open rate.
- The top 10 campaigns with the highest click rate.
- The top 10 campaigns with the highest conversions or other campaign goals.
Assuming your conversion calculation isn’t tied to your open rate but based on emails delivered, you should see little overlap among the three sets of campaigns. Now, look at the campaigns in each category. What do your top-converting campaigns look like compared to the ones that got the most opens or clicks?
Did you use longer subject lines that acted like inbox sorters, appealing to your most motivated customers? Did the message content use longer or shorter copy, specific or general calls to action? Did one kind of campaign, like a flash sale, convert better than a new-collections campaign?
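If your campaign stats live in a spreadsheet export, the overlap check takes only a few lines. The campaign names and figures below are invented for illustration; the mechanics — rank by each metric, take the top N, intersect the sets — are the litmus test itself:

```python
# Hypothetical per-campaign stats: {name: (open_rate, click_rate, conversions)}
stats = {
    "flash-sale":     (0.31, 0.042, 510),
    "new-collection": (0.54, 0.051, 140),
    "win-back":       (0.48, 0.036, 95),
    "vip-preview":    (0.29, 0.058, 465),
    "newsletter":     (0.51, 0.047, 180),
}

def top_campaigns(stats, metric_index, n=3):
    """Names of the top-n campaigns ranked by one metric, highest first."""
    ranked = sorted(stats, key=lambda name: stats[name][metric_index], reverse=True)
    return set(ranked[:n])

by_opens       = top_campaigns(stats, 0)
by_clicks      = top_campaigns(stats, 1)
by_conversions = top_campaigns(stats, 2)

# Little overlap between the sets means opens aren't predicting revenue.
overlap = by_opens & by_conversions
```

In this toy data, only one campaign appears in both the top-opens and top-conversions sets — the kind of disconnect the litmus test is designed to surface.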
When you study the data this way, with your eyes on the results that matter instead of the data that’s easiest to collect, you’ll be able to achieve White’s goal to “analyze that data to squeeze out all the crap and be left with the stuff that’s gold.”
Opinions expressed in this article are those of the guest author and not necessarily MarTech.