Edition #37 – October 5, 2020
Originally sent via Mailchimp
Good morning product analytics friends 👋
I’ve been toying with a Product Analytics Knowledge Base for a while now, but have only dabbled with it here and there. The goal is to start structuring ideas around building a solid product analytics practice. I’ve come up with a basic structure for the document and copied over a bunch of stuff I wrote and shared in this newsletter.
⚠️ Be aware, this is very much under construction still. There are blank pages in there, and there’s still a very, and I mean very, long way to go before I can even consider this infinitesimally close to being done.
That said, I want to share it with this audience because you all care deeply about product analytics and I would like to start using it as a reference document for some pieces I’m sharing in this newsletter. Honestly, it might lead nowhere, but it’s an experiment and I’m taking you on that ride 🙂
Care to have a look? It’s right here 👉 https://coda.io/@lantrns-analytics/product-analytics-knowledge-base
With that, on with the 37th edition of the Product Analytics newsletter!
What has been my highlight?
Something I feel strongly about is that analytics should have a purpose. It’s not an end in itself, but an artifact that leads to knowledge. We are helping to instrument our products so that analysis relies on high-quality data. We don’t instrument everything, only what is meaningful.
This piece takes a somewhat wide view on this. It’s not about analytics, nor product management, nor anything quite close to it. It’s about asking the right questions, going back to the foundational principles that feed knowledge, then action. In the case of product analytics, we are instrumenting products to answer those foundational questions that drive strategy.
I agree, it all sounds a bit esoteric and fuzzy, but I believe that analytics sits at a crossroads between engineering and actual business analysis. The technical challenges are quite stimulating for sure, but at the end of the day, they’re useless if we’re generating data that answers the wrong questions.
It’s a long read, an oldish piece, not about Musk (despite its title) and you might have already seen it go by in your feed at some point. But it’s one that is worth setting an hour aside for (with a drink in hand preferably), to read it carefully, think about it and scribble down how you think that applies to your daily work.
Growing your product with the help of data.
Product analytics is about making informed decisions to grow your product. Key to this is choosing the right features to engage your users. So how do you make the right choice? How do you score and prioritize feature introduction?
Feature selection can be a bit of a dark art, but introducing a bit of method doesn’t have to mean dictating choices; it can guide the thinking process and discussion around selection. This article gives the reader a tour of frameworks for prioritizing feature introduction. Some of those frameworks are more data-driven than others, and of course analytics should support such frameworks.
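To make the “more data-driven” end of that spectrum concrete, here’s a minimal sketch of RICE, one commonly used prioritization framework (Reach × Impact × Confidence ÷ Effort). The feature names and numbers below are made up for illustration; I’m not claiming this is the framework the article uses.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # e.g. users affected per quarter
    impact: float      # 0.25 = minimal .. 3 = massive
    confidence: float  # 0.0 .. 1.0
    effort: float      # person-months

    @property
    def rice_score(self) -> float:
        # RICE = (Reach * Impact * Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical backlog
features = [
    Feature("onboarding checklist", reach=5000, impact=2.0, confidence=0.8, effort=2.0),
    Feature("dark mode", reach=8000, impact=0.5, confidence=0.9, effort=1.0),
]

for f in sorted(features, key=lambda f: f.rice_score, reverse=True):
    print(f"{f.name}: {f.rice_score:.0f}")
```

The point isn’t the arithmetic, it’s that reach and impact are exactly the kinds of inputs your product analytics can estimate instead of leaving to gut feel.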
This article is interesting to me because, as mentioned in the Top Pick article on first principles above, I think analytics practitioners should be more involved in guiding strategy, not just render data and graphs for someone else to interpret.
Factory operations to transform data into analytics.
by Evgeny Rubtsov
The success of an analytics project is as much about slowly building that trust as it is about building the infrastructure. And users can lose trust fast if a report suddenly has historical KPI values changing drastically… without you knowing.
I experienced that very pain a few weeks back. Luckily, I stumbled upon that elegant dbt solution by Evgeny that tests the integrity of historical KPIs. It’s not complicated to implement and is just another mechanism to improve quality.
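To give a feel for the idea (this is a conceptual Python sketch, not Evgeny’s dbt code, and the function name is made up): snapshot your KPI history once, then on each pipeline run recompute it and flag any historical dates whose values drifted beyond a tolerance.

```python
def find_kpi_drift(snapshot, current, tolerance=0.01):
    """Return (date, old, new) tuples where a historical KPI value
    changed by more than `tolerance` (relative). Both arguments are
    {date: value} dicts; `new` is None if the row disappeared."""
    drifted = []
    for date, old_value in snapshot.items():
        new_value = current.get(date)
        if new_value is None:
            drifted.append((date, old_value, None))
        elif abs(new_value - old_value) > tolerance * abs(old_value):
            drifted.append((date, old_value, new_value))
    return drifted

# A restated historical value gets caught before users see it
snapshot = {"2020-09-01": 120.0, "2020-09-02": 135.0}
current  = {"2020-09-01": 120.5, "2020-09-02": 98.0}  # 09-02 was restated!
print(find_kpi_drift(snapshot, current))
```

In the dbt version the same comparison happens in SQL as a test against a snapshotted table, so a failing run stops the restated numbers from silently reaching a report.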
Deriving insights from your product’s data.
Yup, sharing another Amplitude article here. I sometimes feel that the scope of what you can do in Amplitude is too constraining, but in their defence, it’s still quite a good tool to explore how users interact with your product. And they always seem to innovate and come up with cool features to get more out of your product data.
What they’re proposing now is a feature to help you monitor KPI deviations from a baseline. The feature is built on top of Facebook’s forecasting tool, Prophet. It takes into account seasonality (and I imagine trend) of a time series to calculate expected values and confidence levels. “If there are anomalies detected, they will appear outside of the confidence band in orange.”
Haven’t had the chance to play with it yet, but it reminds me a bit of Orbiter which we covered back in edition #31…
What’s interesting about their [Orbiter’s] approach is that alerting is not based on a bunch of parameters that the user controls, but on forecasting models that alerts whenever a metric goes outside what’s deemed normal.
What’s happening in the product analytics market.
“To treat analytics like code, one must treat analytics schema like code” 🧠
At the foundation of product analytics is a solid tracking plan. This is your strategy to instrument your digital product, and perfect implementation is key to realizing that strategy. A tracking plan is a living document that grows as your analytics practice matures and is the result of team collaboration.
Iterative.ly recognized early on that this is an under-served requirement. They are building a really solid and elegant solution around it, where you document your tracking plan and use it to control instrumentation. And it now includes version control of that plan.
To help companies tackle this chaotic nature of tracking plan change management, we are adding support for:
Tracking plan versioning, with support for staging areas and parallel branches so teams can work side-by-side and merge their work when ready
Granular versioning of individual events so changes to their schema (shape) are explicit and clearly visible to everyone on the team
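To make “treat analytics schema like code” tangible, here’s a hypothetical sketch (my own illustration, not Iterative.ly’s actual API): each event carries an explicitly versioned schema, and payloads are validated against it before being tracked, so a schema change is visible and enforced rather than silent.

```python
# Each (event name, version) pair maps to its expected property types.
# v2 of "Song Played" adds a required "source" property.
EVENT_SCHEMAS = {
    ("Song Played", 1): {"song_id": str, "duration_sec": int},
    ("Song Played", 2): {"song_id": str, "duration_sec": int, "source": str},
}

def validate_event(name, version, properties):
    """Return a list of schema violations (empty list == valid payload)."""
    schema = EVENT_SCHEMAS.get((name, version))
    if schema is None:
        return [f"unknown event {name!r} v{version}"]
    errors = [f"missing property {key!r}" for key in schema if key not in properties]
    errors += [
        f"property {key!r} should be {schema[key].__name__}"
        for key, value in properties.items()
        if key in schema and not isinstance(value, schema[key])
    ]
    return errors

# A payload built against v1 fails validation once the team ships v2
print(validate_event("Song Played", 2, {"song_id": "abc", "duration_sec": 210}))
```

Once schemas live in code like this, the versioning features above follow naturally: v1 and v2 can coexist on branches, and a diff of the schema dict is a diff of your instrumentation contract.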