Edition #22 – November 4, 2019

Good morning product owners 👋

Ahhh, it is good to be back! Although I can’t complain, as the 2 weeks spent in Vietnam were simply amazing. Friendly people + good food + too much to do and see + crazy conversion rate from dollars to dongs = pure fun 🙂

Now, I know this will sound like bragging, but as you read this, I’ll be spending the week in London, catching up with clients and partners 🙃 Looking forward to the conversations, and hopefully I’ll have some cool nuggets to share in the next edition of the newsletter.

With that, on with the 22nd edition of the Product Analytics newsletter!

Olivier
@olivierdupuis



Top Pick

Chances are that if you’re reading this, you lead or are part of a product team and understand the value of analytics in guiding the development of your product. The two stories below are complementary: the first provides an overview of why analytics matters to a product team, and the second goes deep into how a data team can be structured and operated to generate the insights that guide your product’s development.

Fostering better collaboration within product teams: An interview with John Cutler

If you don’t know John Cutler, I suggest you follow him on Twitter, as he often publishes high-quality content on product management, teams, etc. And as a product evangelist for Amplitude, he understands the value of data in managing products.

An interesting quote from that interview:

“The amount of insight you can derive from quantitative data is finite, and qualitative data in itself can be too broad. What I’ve noticed in teams that are using quantitative data really effectively is that they are a lot more laser-focused in how they conduct qualitative research.”

Data Team Handbook

Now, I acknowledge that not all product owners have access to a product analytics team, but for those who do, or who are considering building one, this resource from GitLab is a must-read.

How to organise your data team, what a typical data analysis process looks like, how they structured their data stack, other resources to help you out, etc. – it’s all in there. This handbook reflects GitLab’s dedication to building the most efficient and valuable data team out there.

Also happy to report that our modest Product Analytics newsletter is listed in their selection of newsletters alongside such giants as SF Data, Normcore Tech, Data Science Roundup, etc. 🎉



Product News

The Forrester Wave™: Enterprise BI Platforms (Vendor-Managed), Q3 2019

First off, I found this report through Looker’s own tweet of it. Not sure how that works, but you can get the report freely through Looker’s page, whereas it costs US$2,495 to purchase it from Forrester’s website 🤷‍♂️

Secondly, I tend to be a bit wary of such reports. Their evaluation methodology is always a bit obscure, and you wonder how much objectivity there is to it. That said, it’s still a good way to get a decent overview of the landscape and maybe guide your own exploration of the ideal BI solution for you.

4 things pop up for me:

  • How Microsoft’s Power BI is clearly leading in that report
  • How Tableau is losing its edge
  • The ever-increasing number of solutions, including many I wasn’t even aware of
  • How Looker’s position, and Forrester’s analysis of it, still doesn’t elucidate for me the mystery of why it’s so popular amongst BI practitioners 😉

Segment and the Privacy Portal

Segment hosted a webinar to present their approach to managing your users’ privacy through their newly launched privacy portal. This might seem GDPR-oriented, but I think we should all keep a close eye on this subject, not only to stay compliant with regulations, but also to empower our users.

How Segment manages privacy is, as always, pretty smooth. It starts by assessing the risk associated with each field. You can then enforce controls on that data, for example preventing certain fields from ever being collected from a specific source.

There are also inventories that answer the following key questions: which data fields am I capturing, where do they come from, and where do I send them? This is your single source of truth for all data coming in and out of your stack.
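
To make that more concrete, here’s a minimal sketch in Python of what such a field inventory and control layer boils down to. To be clear, this is not Segment’s actual API: the field names, risk levels and helper function are all hypothetical illustrations.

    from dataclasses import dataclass

    # Hypothetical risk classification, in the spirit of Segment's
    # privacy portal; not their actual API or terminology.
    @dataclass
    class FieldPolicy:
        field: str          # the data field being captured
        source: str         # where it comes from (e.g. "web-app")
        destinations: list  # where it gets sent downstream
        risk: str           # assessed risk: "low", "medium" or "high"
        blocked: bool       # control: never collect this field

    # A tiny inventory: the single source of truth for what comes
    # in and goes out of the stack.
    INVENTORY = [
        FieldPolicy("email", "web-app", ["warehouse", "crm"], "high", False),
        FieldPolicy("ip_address", "web-app", [], "high", True),
        FieldPolicy("plan_tier", "billing", ["warehouse"], "low", False),
    ]

    def scrub_event(event: dict, source: str) -> dict:
        """Drop any field that is blocked for this source, so it
        never enters the pipeline."""
        blocked = {p.field for p in INVENTORY
                   if p.source == source and p.blocked}
        return {k: v for k, v in event.items() if k not in blocked}

    raw = {"email": "a@b.com", "ip_address": "203.0.113.7", "plan_tier": "pro"}
    print(scrub_event(raw, "web-app"))  # ip_address is never collected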



Strategy

Who’s Driving This Thing? The Pitfalls of Being “Data-driven”

“We should make decisions informed by data. But should we be driven by data? Or by our strategy?”

I like that emphasis on how we shouldn’t be data-driven, but data-informed. As we’ve often discussed before, especially in our guide [https://odignite.wpengine.com/ultimate-guide-to-product-analytics/], data should inform our strategy and tell us how well we are achieving our goals. We shouldn’t dig into data with the sole intention of finding some gold nuggets that will blow our minds.

“If you start chasing insights instead of answering questions, you could end up answering a bunch of questions that don’t matter at all.”

What we’re missing in product analytics

That’s an interesting high-level piece for anyone considering taking advantage of analytics to grow their product. It’s about the different routes you can take: prescriptive out-of-the-box solutions; custom solutions provided by external companies such as Lantrns Analytics [https://odignite.wpengine.com/]; or a fully internal custom solution.

“It mainly comes down to “go prescriptive” or “go custom” and that’s a rather hard decision to make. And it turns out that eventually, you end up going custom to a certain extent and I’ve come to accept that as a fact.”

I tend to agree with that statement. It’s normal for all product owners to start their analytics journey with Google Analytics, Amplitude, Mixpanel, etc., but as you get more sophisticated, those solutions just don’t cut it anymore.


Best Practices

How Animoto uses event tracking data to understand and optimize the user journey

Even if you haven’t used Snowplow, aren’t interested in it, or aren’t even aware of it, this story is worth the read. It’s essentially the story of an e-commerce company that developed 4 event-based models on top of its analytics to better understand how users behave on its platform.
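
The article doesn’t spell out Animoto’s 4 models, but to give you a feel for what an event-based model looks like, here’s a minimal sketch in Python of one of the most common ones: grouping a user’s raw event stream into sessions. The event names and the 30-minute gap are hypothetical choices for the example, not Animoto’s.

    from datetime import datetime, timedelta

    # Hypothetical raw events (names are illustrative only).
    events = [
        {"user": "u1", "event": "video_created", "ts": datetime(2019, 11, 1, 9, 0)},
        {"user": "u1", "event": "video_previewed", "ts": datetime(2019, 11, 1, 9, 12)},
        {"user": "u1", "event": "video_exported", "ts": datetime(2019, 11, 1, 14, 3)},
    ]

    SESSION_GAP = timedelta(minutes=30)

    def sessionize(stream):
        """Group events into sessions: a new session starts whenever
        more than SESSION_GAP elapses between two consecutive events."""
        sessions, current = [], []
        for ev in sorted(stream, key=lambda e: e["ts"]):
            if current and ev["ts"] - current[-1]["ts"] > SESSION_GAP:
                sessions.append(current)
                current = []
            current.append(ev)
        if current:
            sessions.append(current)
        return sessions

    for i, session in enumerate(sessionize(events), start=1):
        print(f"session {i}: {[e['event'] for e in session]}")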

Best practices for data modeling

If you’ve set up your data architecture correctly, you’re now capturing important data from your product, and you’ll probably want to analyse that data yourself. Welcome to the world of data modeling.

Of course, you could just plug a BI tool such as Tableau, Power BI, or Mode on top of your raw data sources and model your data through its interface. But a better approach is to have a data modeling layer in your architecture, which generates entity tables (facts and dimensions) and takes care of data transformation once and for all. This facilitates analysis by providing a clean and consistent data layer with a single set of transformation rules.
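
As a toy illustration of what that modeling layer produces, here’s a sketch in Python (pandas) that transforms raw events into a users dimension and a purchases fact table. All table and column names here are made up for the example, not a prescribed schema.

    import pandas as pd

    # Hypothetical raw events landing in the warehouse.
    raw_events = pd.DataFrame({
        "event_id": [1, 2, 3, 4],
        "user_id":  ["u1", "u1", "u2", "u2"],
        "event":    ["signup", "purchase", "signup", "purchase"],
        "amount":   [0.0, 49.0, 0.0, 99.0],
        "country":  ["CA", "CA", "FR", "FR"],
        "ts":       pd.to_datetime(["2019-11-01", "2019-11-02",
                                    "2019-11-01", "2019-11-03"]),
    })

    # Dimension table: one row per user, descriptive attributes only.
    dim_users = (raw_events
                 .groupby("user_id", as_index=False)
                 .agg(country=("country", "first"),
                      first_seen=("ts", "min")))

    # Fact table: one row per measurable event, keyed to the dimension.
    fct_purchases = raw_events.loc[raw_events["event"] == "purchase",
                                   ["event_id", "user_id", "amount", "ts"]]

    # BI tools now query these two clean tables instead of
    # re-implementing the same transformations on raw data.
    print(fct_purchases.merge(dim_users, on="user_id"))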

Data modeling is not a complex task, but there are best practices to it. It’s not a field that has changed much, although the latest technological advancements have made some of the well-anchored “Kimball rules” less strict than they used to be. That said, it’s good to familiarize yourself with those best practices, as you want your analysis to rely on the best data there is.