Data In The Headlights, 18 July 2018

This week, we challenge our assumptions in marketing with a look at the results of a four-way A/B test, plus the week’s news in data.

The Bright Idea

In this issue’s Bright Idea, we look at data and assumptions. I run a personal newsletter every Sunday night, and one of my criteria for what to share has been whether an article received lots of clicks the previous week (as measured by the bit.ly API). One of the most important things a data-driven practitioner in any industry can do is question assumptions, such as how to choose the content to share in a newsletter.

The test I ran was a four-way split test, evaluating four different ways of curating content:

  • Most clicks the previous week
  • Most social shares the previous week
  • Highest page authority (an SEO metric)
  • Most topically-relevant (using text mining techniques)

Qualitatively, when I put together the four editions, the fourth was the newsletter I’d most like to read. But I’m an n of 1, and making the broad assumption that my readers are just like me is foolish.

What were the results of the test?

  • Click edition: 400 opens, 50 clicks, 12.5% click-to-open rate
  • Page authority edition: 398 opens, 51 clicks, 12.8% click-to-open rate
  • Share edition: 322 opens, 46 clicks, 14.3% click-to-open rate
  • Topic edition: 386 opens, 24 clicks, 6.2% click-to-open rate
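
If you want to check the math yourself, here’s a quick sketch in R (assuming, as the numbers above suggest, that click-to-open rate is simply clicks divided by opens):

  # Opens and clicks for each edition, as reported above
  editions <- data.frame(
    edition = c("Click", "Page authority", "Share", "Topic"),
    opens   = c(400, 398, 322, 386),
    clicks  = c(50, 51, 46, 24)
  )

  # Click-to-open rate = clicks / opens, expressed as a percentage
  editions$ctor <- round(editions$clicks / editions$opens * 100, 1)
  editions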

My marketing automation software crowned the share edition as the winner. Would you?

Here’s the plot twist: almost no marketing software includes tests of statistical significance. Using the statistical language R, I ran significance tests across all four editions, comparing each edition to the others and examining the p-value of each pairwise comparison. Not a single p-value was under 0.27; in most generally accepted scientific literature, a p-value should be under 0.05 for a result to be considered statistically significant.
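
If you want to run this kind of test on your own data, here’s a sketch in R using a pairwise two-proportion test on clicks versus opens. To be clear, this is one common approach, not necessarily the exact test behind the p-values above; which counts and denominators you choose (opens versus total sends, for example) will change the results.

  # Clicks treated as successes out of opens for each edition; the original
  # analysis may have used different counts or denominators, so these p-values
  # won't necessarily match the ones quoted above.
  clicks <- c(Click = 50, PageAuthority = 51, Share = 46, Topic = 24)
  opens  <- c(Click = 400, PageAuthority = 398, Share = 322, Topic = 386)

  # Pairwise two-proportion tests (chi-squared with continuity correction),
  # with p-values adjusted for multiple comparisons
  pairwise.prop.test(clicks, opens, p.adjust.method = "holm")

  # A single head-to-head comparison, e.g. the Share edition vs. the Click edition
  prop.test(c(46, 50), c(322, 400))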

Thus, even though there’s a “winner” above, the reality is that the result is statistically insignificant. What do we do when we face this kind of situation? Like a court case in which the judge declares no verdict, we are remanded for additional testing. This is clearly a test I need to run more than once; only if repeated tests keep coming back statistically insignificant can I reasonably conclude that the algorithm for choosing which content to share doesn’t really matter.
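
How much more testing would it take? One way to get a sense of the scale is a power calculation; here’s a sketch using R’s power.prop.test, with the observed click-to-open rates standing in for the difference we’d want to detect and the conventional choices of a 5% significance level and 80% power.

  # How many opens per edition would it take to reliably detect a difference
  # like 12.5% vs. 14.3% in click-to-open rate? Using conventional choices of
  # a 5% significance level and 80% power.
  power.prop.test(p1 = 0.125, p2 = 0.143, sig.level = 0.05, power = 0.80)

With differences this small between editions, the required sample size runs well into the thousands of opens per edition, far more than any single send here generated, which is another reason a single round of testing can’t settle the question.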

Some questions for you and your team:

  • What assumptions have you tested lately in your data-driven work?
  • How have you tested those assumptions?
  • Have you evaluated your tests for statistical significance?
  • What software do you use every day that does or does not tell you that a result is statistically significant?

In Case You Missed It

Shiny Objects

Social Media Marketing

Media Landscape

Tools, Machine Learning, and AI

Analytics, Stats, and Data Science

SEO, Google, and Paid Media

Upcoming Events

Where can you find us in person?

  • Greater Los Angeles Chapter of NSA, August 2018, LA
  • Health:Further, August 2018, Nashville
  • Content Marketing World, September 2018, Cleveland
  • INBOUND2018, September 2018, Boston
  • MarketingProfs B2B Forum, November 2018, San Francisco

Can’t wait to pick our brains? Book a Table for Four and spend an hour with us live (virtually) on any topic you like:

https://www.trustinsights.ai/services/insights-foundation/table-for-four-consultation-package/

Going to a conference we should know about? Reach out: (https://www.trustinsights.ai/contact/)

Conclusion

Thanks for subscribing and supporting us. Let us know if you want to see something different or have any feedback for us! (https://www.trustinsights.ai/contact/)

Make sure you don’t miss a thing! Follow Trust Insights on the social channels of your choice:

  • BTI on Twitter
  • BTI on Facebook
  • BTI on LinkedIn
