A New Rule of Thumb for Email Marketers: Your Results Will Vary


No matter how diligently you test your email campaign, the only rule of thumb that continues to hold true is that your results may vary.

Data is disrupting email in one really sneaky way: It’s getting rid of “best practices.” As analytics catch up with tactical planning and measurement, it’s getting easier to gather insight on smaller and smaller segments. We’re even learning how to parse and respond to individual behaviors and situational context, building triggered programs based on activity on other channels. This is good news – it’s making email more relevant and more effective. But it’s also making our channel more complex and harder to optimize because the more we apply deep analysis to our programs, the more exceptions to widely held rules we find. Best practices that seemed broadly effective turn out to leave entire segments behind.

Two recent discussions about engagement in email marketing remind me of how disruptive this has become: the role of engagement metrics in mailbox providers’ inbox placement decision-making, and consumers’ weak response to offers in welcome messages. Neither of these could have been an issue until recently, because the ability to analyze and act on this level of data is still relatively new. Both ask important questions about what to do to make email more effective, and both turn up the same (frustrating) answer when we analyze them: It depends. Increasingly the only reliable rule of thumb that holds true is, “Your results may vary.”

Engagement May Not Mean What You Think It Means

I’ll get this out of the way first: When we talk about engagement’s effect on mailbox providers’ decision-making, we’re talking about signals that mailbox providers – not senders – can see: read rates, This Is Spam (TIS) complaints, forwards, This Is Not Spam (TINS) reports, deleted-unread rates, and others. Mailbox providers like Gmail and Outlook openly acknowledge that signals like these influence inbox placement, and they can and do evaluate them on a number of levels.

Mailbox providers use engagement data to detect patterns by sender, by message type, by subscriber, and so on. They can see unusual engagement with a particular message (positive or negative), they can see when subscribers interact distinctly with messages from a particular sender, and they can see when a sender’s messages are addressed to an unusually high percentage of subscribers who never log in to their accounts. Here’s the frustrating part: Mailbox providers weigh engagement in unique, proprietary ways. Two senders may well see different inbox placement results from the same response to engagement-based filtering, and the same response can produce different results at two different mailbox providers.

Seeing your subscriber engagement as mailbox providers see it is vital to analyzing your program and finding ways to optimize your email performance, and including mailbox provider as a dimension in your segmentation will help isolate problems. Still, your results will vary – universally applied best practices simply won’t get your program to optimal performance.
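To make that concrete, here is a minimal sketch of what segmenting read rates by mailbox provider might look like. The domain-to-provider mapping and the event format are hypothetical, purely for illustration – a real program would use a much larger lookup and its own analytics data:

```python
from collections import defaultdict

# Hypothetical mapping of recipient domains to mailbox providers;
# a real program would maintain a far more complete lookup table.
PROVIDER_BY_DOMAIN = {
    "gmail.com": "Gmail",
    "googlemail.com": "Gmail",
    "outlook.com": "Outlook",
    "hotmail.com": "Outlook",
    "yahoo.com": "Yahoo",
}

def provider_for(address):
    """Map an email address to its mailbox provider (or 'Other')."""
    domain = address.rsplit("@", 1)[-1].lower()
    return PROVIDER_BY_DOMAIN.get(domain, "Other")

def read_rate_by_provider(events):
    """events: iterable of (email_address, was_read) pairs."""
    delivered = defaultdict(int)
    read = defaultdict(int)
    for address, was_read in events:
        p = provider_for(address)
        delivered[p] += 1
        if was_read:
            read[p] += 1
    return {p: read[p] / delivered[p] for p in delivered}

# Toy data: read rates often diverge sharply by provider.
events = [
    ("a@gmail.com", True), ("b@gmail.com", False),
    ("c@outlook.com", True), ("d@outlook.com", True),
]
print(read_rate_by_provider(events))
# → {'Gmail': 0.5, 'Outlook': 1.0}
```

Breaking the numbers out this way is what lets you spot a placement problem that only exists at one provider, instead of averaging it away.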

Common Sense on Offer

We were surprised when a sample of 2 million active subscribers was, on average, unmoved by offers promoted in the subject lines of marketers’ welcome messages. Common sense seemed to suggest that a deal should get more people to read a message – virtually any message. In this case it didn’t. I’ve now spoken to a number of people who bristled at the suggestion that offers in subject lines don’t work, and I’ve told them that they’re right. Sometimes.

For some segments within the group we studied, offers did indeed correlate with higher read rates. But across the whole set, the tactic made no significant difference. To us this was an important reminder to test everything, even best practices and common-sense ideas that seem like no-brainers. More important, it underscored how important segmentation is to email optimization. In some cases discounts boosted engagement and probably customer value; in other cases they had no positive effect and may even have reduced order values from customers who would have bought anyway.
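A quick worked example shows how a real lift in one segment can wash out in the aggregate. The numbers below are invented for illustration, and the test shown is a standard two-proportion z-test (not the method used in the study):

```python
import math

def two_proportion_z(reads_a, sends_a, reads_b, sends_b):
    """Z statistic for the difference between two read rates."""
    p_a, p_b = reads_a / sends_a, reads_b / sends_b
    p_pool = (reads_a + reads_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Hypothetical numbers: within one segment, the offer lifts read
# rates from 25% to 30% -- a clearly significant difference...
z_segment = two_proportion_z(300, 1000, 250, 1000)

# ...but pooled across all segments, 25.1% vs 25.0% is noise.
z_overall = two_proportion_z(2510, 10000, 2500, 10000)

print(f"segment z = {z_segment:.2f}, overall z = {z_overall:.2f}")
# |z| > 1.96 is roughly significant at the 5% level
```

Run per segment, the test flags a real effect; run on the whole list, it correctly reports none. That is exactly why "test everything" has to mean testing within segments, not just across the full file.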

To harness the increasingly accessible power of marketing analytics and data to get the most out of your email program, here are three keys to optimizing in a channel where results *will* vary:

  1. Know the data. Before you test assumptions, make sure you see and understand the data that most accurately reflects your efforts.
  2. Start re-testing. Lots of core assumptions, especially common sense relationship-building approaches, still hold true. Just not for everyone. Not for every brand, not in every mailbox, not for every segment.
  3. Redouble your segmentation efforts. Marketing technology companies are making huge strides toward audience management and segmentation analysis. As marketers gain insight into how more narrowly defined segments behave, audiences seem far less predictable. We’re all individuals. Now that the power of data makes it easier to analyze our behavior individually, it’s no surprise that results vary.