How I Used Data Science To Win My Football Pool

My NFL data points in Rapidminer

I won our NFL football pool this year. What's curious is that I know nothing about the NFL. I couldn't name a single player, and I watched zero minutes of football all season. Last year, I finished dead last -- not a surprise, considering my lack of knowledge and my lack of any strategy in making my picks.

Why am I talking about football when I normally talk about marketing? Because marketers can benefit from some of the softer elements of data science to improve their hunches about what to test and how to position -- in a way not too dissimilar to what I used to boost my win percentage by only 12 percentage points.

The difference between worst and first in our league over two years was 12 percentage points -- a win percentage of 44% in 2016 earned me the bottom spot, and 56% in 2017 earned me the top spot.

I didn't go full machine learning and build a formal model (that's for the 2018 pool). But I was able to compile multiple data points that helped tighten my guesses. I tracked point spreads, home vs. away, head-to-head results from prior years, and the picks of eight Vegas handicappers. These data points gave me a structure for my guesses and let me see the correlation between each data point and whether my guess was correct.
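The tracking itself can be as simple as a table of picks and signals. Here's a minimal sketch in Python -- the rows are entirely made up for illustration -- of checking how often a pick was correct when each signal agreed with it:

```python
# Hypothetical weekly tracking: each row records which signals agreed
# with my pick, and whether the pick turned out to be correct.
picks = [
    # (spread_agreed, home_team_picked, handicapper_consensus, pick_correct)
    (True,  True,  True,  True),
    (True,  False, True,  True),
    (False, True,  False, False),
    (True,  True,  False, True),
    (False, False, True,  False),
]

def hit_rate(signal_index):
    """Win rate for picks where the given signal agreed with the pick."""
    agreed = [row[3] for row in picks if row[signal_index]]
    return sum(agreed) / len(agreed) if agreed else 0.0

for name, idx in [("spread", 0), ("home team", 1), ("consensus", 2)]:
    print(f"{name}: {hit_rate(idx):.0%} correct when signal agreed")
```

Even a crude table like this shows which signals are worth trusting the following week.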

Applying the football pool metaphor to marketing

Here are a few observations:

#1.  Think like a data scientist

This is painting with too broad a brush, but a lot of the marketing we see is very generalized and doesn't purposefully drive toward a desired outcome. Sometimes that's OK -- e.g., general brand awareness. Often, though, the lack of crispness or definition around a marketing initiative, test scenario, or feature modification makes it impossible to determine (and communicate to stakeholders) whether it was successful. What does success look like? Progressive marketers are starting to employ hypothesis formulation & testing:

  1. Make Assumptions
  2. Take a position
  3. Determine the alternative position
  4. Set success criteria
  5. Create tests that include the success criteria
  6. Evaluate the results
  7. Reach a conclusion

Then, learn from it & repeat.    
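The steps above can be sketched end-to-end. Below is a minimal, hypothetical example -- the traffic and conversion numbers are invented -- using a one-sided two-proportion z-test, a common way to evaluate an A/B test. The position: variant B converts better than variant A. The alternative position: it doesn't. The success criterion: a p-value below 0.05.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: does B convert better than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the normal approximation
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical test results: 150/10,000 conversions on A, 195/10,000 on B
z, p = two_proportion_z(conv_a=150, n_a=10_000, conv_b=195, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
# Conclusion step: reach a verdict against the pre-set success criterion
print("B wins" if p < 0.05 else "no clear winner")
```

The point isn't the statistics machinery -- it's that the success criterion was set before the test, so the conclusion is mechanical rather than debatable.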

Peek at this post on hypothesis testing if you want to dive in.

#2.  Focus your tests

Focus tests on a customer segment, customer journey segment, channel, etc. whenever possible.  Generalized tests yield generalized results.

#3.  Small changes matter

12 percentage points isn't a big number, but it's the difference between worst and first. When you look at your various conversion rates, many of them are pretty small, and small changes to small numbers can still make a big difference over time. Think about a typical low-funnel conversion rate of 1.5%. Improving that rate by 10% yields a 0.15-point lift, to 1.65%. Apply that lift to enterprise-level volume and it's almost always worth the effort. String together multiple small changes over time and, before long, you enjoy the effect of a significant change.
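The arithmetic above is worth making concrete. Here's the same calculation in Python, applied to a hypothetical volume of one million low-funnel visitors per year:

```python
# Small-change arithmetic: a 10% relative lift on a 1.5% conversion rate
baseline_rate = 0.015           # 1.5% low-funnel conversion rate
relative_lift = 0.10            # a 10% improvement
new_rate = baseline_rate * (1 + relative_lift)   # 1.65%

visitors = 1_000_000            # hypothetical enterprise-level volume
extra_conversions = visitors * (new_rate - baseline_rate)

print(f"new rate: {new_rate:.2%}")
print(f"extra conversions per year: {extra_conversions:,.0f}")
```

At that volume, the "small" 0.15-point lift is 1,500 additional conversions a year.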

#4.  Shorten the feedback loop to iterate faster

Part of the lure of small changes (above) is that they can be put into play quickly. A small change can also have a shorter path to results -- which aligns nicely with today's short-attention-span culture. In my football pool, some of the best decisions were the result of intel from the prior week's games. To put this in a marketing use case: a checkout optimization project may include half a dozen items. Breaking the project down into 6 discrete items, with testing on each one, lets us deliver small changes and test them right away. It's very common for learnings from early tests to change already-scoped backlog items.

Today's marketer needs to adopt an analytical mindset when approaching marketing.  Start now, and pretty soon you'll be thinking like a data scientist.