Metric Monitoring With Orbiter + Are You Really Data-Driven? + Knowledge Graphs + More (PAN #31)
Edition #31 - April 20, 2020 Originally sent via Mailchimp
Good morning product analytics friends 👋
I’m sure you’ve been flooded by covid-related stories and so, to not add additional cognitive load in that department, we’re offering you a covid-free issue this week.
There is a unicorn in the data world. And I don’t mean a super-promising startup. I mean an always-evasive fairy-tale creature that is full of wonders. That unicorn is metric monitoring.
I’ve talked about this before and tested a few approaches and “solutions”. They all have their strengths, but all fall short in some respect. I think that whichever product manages to provide an easy-to-set-up, BI-agnostic solution that even business users can handle, where you can easily define the samples to test and the definitions of your metrics, with tons of flexibility in the parameters, will be a gigantic winner.
An upcoming contender is this week’s top pick, Orbiter. What do you think, are we getting there? Or is this just a pipe dream?
With that, on with the 31st edition of the Product Analytics newsletter!
What has been my highlight?
Metric Monitoring With Orbiter
YCombinator.com by @orbiterai
The above link is a presentation of Orbiter by its founders on Hacker News. What’s interesting about their approach is that alerting is not based on a bunch of parameters that the user controls, but on forecasting models that alert whenever a metric goes outside what’s deemed normal.
That doesn’t mean that a more “classic” approach to alerting is impossible with their tool: rules can still be used to trigger alerts, such as a percentage drop week-over-week for a metric. But that’s not how Orbiter wants to offer its service by default. You shouldn’t be setting up those rules; patterns should be uncovered from historical values, which then dictate what counts as a condition outside that pattern and worth notifying users about.
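To make the contrast concrete, here is a minimal sketch of the two alerting styles. The rule-based check is the “classic” WoW-drop threshold; the pattern-based check flags any value outside a band derived from history (here a crude mean ± k·stdev band — a real product like Orbiter would presumably use proper forecasting models; function names and thresholds are my own illustration, not Orbiter’s API):

```python
from statistics import mean, stdev

def rule_alert(this_week, last_week, max_drop=0.20):
    """Classic rule: alert if the metric dropped more than max_drop WoW."""
    return (last_week - this_week) / last_week > max_drop

def forecast_alert(history, latest, k=3.0):
    """Pattern-based: alert when the latest value falls outside a band
    learned from historical values (mean +/- k standard deviations).
    Stands in for the forecasting model a real tool would fit."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > k * sigma

history = [1000, 1040, 980, 1010, 995, 1025, 1005]
print(rule_alert(this_week=700, last_week=1000))  # True: >20% drop
print(forecast_alert(history, latest=700))        # True: far outside the band
```

The appeal of the second style is exactly what the founders describe: nobody had to decide that 20% was the magic number.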
There’s not much to work from here, but I’ve seen a glimmer of hope and I do wish that a smart, flexible, easy-to-use, BI-agnostic and affordable solution does come out. Is this the one? Too soon to say, but I’m cautiously excited about it. I did request access (a few weeks ago… still waiting… 😴) so hopefully I’ll be able to test it eventually.
More info can be found here:
Growing your product with the help of data.
Data Science: Reality Doesn’t Meet Expectations
dfrieds.com by Dan Friedman
Here’s a piece that goes a bit against the current as it lists everything that is wrong with data roles and how organizations aren’t as data-driven as they claim/wish they were. There’s a bit of a rant in there, but overall it highlights the perils facing a product team that wants to rely on data to thrive. It’s not enough to have a data person; an organizational transformation is required as well.
I can relate to some of the problems described here, but I think they are mostly the characteristics of immature data-driven teams. So even though it might read like a caricature, it’s a good list of warning signs that there might be issues in your organization.
The Hacker News conversation is also worth reading as other practitioners share how organizations might be failing their data-driven commitment.
Factory operations to transform data into analytics.
Add DataOps Tests for Error-Free Analytics
DataKitchen.io by @datakitchen_io
This is a bit DataKitchen-centric, but I do like the content this company shares (I’ve never tried their platform, though). Once you get past the Gartner gibberish (such as “Eighty percent of companies surveyed reported three or more errors per month”), it gets interesting as it provides another angle on testing your data pipelines.
The graph above is consistent with approaches documented elsewhere, but I was a bit surprised by the tests that are run at each stage. For example, they test a week-over-week sales-increase threshold on the inputs. So before even transforming the data, they would already test business metrics on the raw data, rather than testing such assumptions on the outputs.
That said, beyond these “small” differences in approach, I think the ideas shared here reinforce the fact that modern analytics needs to rely on solid DataOps principles and practices to ensure you will not be part of those “thirty percent of respondents [that] reported more than 11 errors per month” (ref. Gartner).
Deriving insights from your product’s data.
Building A Knowledge Graph Of Commercial Real Estate At Cherre
Very insightful podcast on the use of knowledge graphs to enhance analysis of relationships between entities. It got me curious about how knowledge graphs could be used in product analytics. The interview is specific to real estate but in some respects, the ideas shared here could also be applied to product analysis.
Could we ask richer questions about users and product when investigating relationships, i.e. interactions? For example, when looking at a group of users that interacts heavily with a feature, could we figure out which other groups with neighbouring interaction patterns might see their adoption increase?
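As a toy sketch of that idea (plain Python dicts standing in for a real knowledge-graph store, with made-up users and features): treat users and features as nodes, interactions as edges, and look for users who don’t use a feature yet but whose interaction pattern neighbours that of its adopters:

```python
# Hypothetical interaction graph: user -> set of features they use.
interactions = {
    "alice": {"search", "export", "alerts"},
    "bob":   {"search", "export"},
    "carol": {"search", "alerts"},
    "dave":  {"billing"},
}

def jaccard(a, b):
    """Overlap between two interaction patterns."""
    return len(a & b) / len(a | b)

def adoption_candidates(target_feature, min_similarity=0.5):
    """Users who don't use target_feature but whose interactions
    neighbour those of users who do: candidates for adoption nudges."""
    adopters = [u for u, f in interactions.items() if target_feature in f]
    candidates = set()
    for user, feats in interactions.items():
        if target_feature in feats:
            continue
        if any(jaccard(feats, interactions[a]) >= min_similarity
               for a in adopters):
            candidates.add(user)
    return candidates

print(adoption_candidates("alerts"))  # -> {'bob'}
```

A real knowledge graph would of course carry typed entities and relationships rather than flat sets, but the shape of the question is the same.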
There are most probably examples of such uses of knowledge graphs in product analytics out there. If you have any, I would really appreciate it if you could share them.
What’s happening in the product analytics market.
Dataviz and the 20th Anniversary of R, an Interview With Hadley Wickham
Medium.com by @W_R_Chase
The R language is 20 years old. This interview with community leader Hadley Wickham takes us through his own journey with R, from first exposure all the way to extending it with the Tidyverse ecosystem of packages.
Nothing more to say, just a good read for anyone interested in R.