Edition #25 – December 16, 2019
Originally sent via Mailchimp
Good morning product owners and analysts 👋
A few small announcements: we hit the 100-subscriber mark last week. And it's our 25th edition. And… well, that's pretty much it. Celebration 🍾🎉🍻
Oh (oh oh), this is also the last edition of the year. Hope you’ve enjoyed this decade. Wishing you love, health and Skittles for the upcoming one!
With that, on with the 25th edition of the Product Analytics newsletter!
What has been my highlight?
I think I stumbled upon DataKitchen while doing research on DataOps, and I suspect they wrote most of the Wikipedia entry, since a bunch of their own material is linked in there 🧐 On their website they bill themselves as The DataOps Company (straight to the point), and Gartner declared them a "cool vendor" (not making this up). They publish a lot on DataOps, and while going through it I stumbled upon their 7-steps document.
In their own words: “DataOps is a tools and process change that incorporates the speed of Agile software development, the responsiveness of DevOps, and the quality of statistical process control (SPC) widely used in manufacturing.”
If you’re already into DataOps, you won’t learn much here, but it’s a really well laid out document on what DataOps is, how to structure your thoughts around the subject, and its core principles. There is a clear trend in analytics: data teams benefit from adopting DevOps and Agile practices to ensure data quality and the success of the whole operation. Test everything, version control your data processing steps, modularize your stack, etc.
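To make "test everything" a bit more concrete, here's a minimal sketch of the kind of automated quality check a pipeline step might run on every batch before passing data downstream. This is my own illustration, not code from the DataKitchen document; the field names and rules are hypothetical.

```python
# Minimal data-quality checks of the kind DataOps advocates running after
# every pipeline step (illustrative sketch; field names are made up).

def check_batch(rows):
    """Return a list of quality violations found in a batch of event rows."""
    errors = []
    if not rows:
        errors.append("batch is empty")
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("user_id") is None:
            errors.append(f"row {i}: missing user_id")
        if row.get("revenue", 0) < 0:
            errors.append(f"row {i}: negative revenue")
        if row.get("event_id") in seen_ids:
            errors.append(f"row {i}: duplicate event_id")
        seen_ids.add(row.get("event_id"))
    return errors

good = [{"event_id": 1, "user_id": "a", "revenue": 9.0}]
bad = [{"event_id": 1, "user_id": None, "revenue": -5.0},
       {"event_id": 1, "user_id": "b", "revenue": 3.0}]

assert check_batch(good) == []
assert len(check_batch(bad)) == 3  # missing user_id, negative revenue, duplicate id
```

In a real stack you'd wire checks like these into your orchestrator so a failing batch halts the pipeline instead of silently polluting the warehouse; that's the mechanism that lets you Work Without Fear™.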
Special mention for their 7th step which is Work Without Fear™. Yup, they trademarked it 💪. Here’s what it’s about:
“Data engineers, scientists and analysts spend an excessive amount of time and energy working to avoid these disastrous scenarios. They work weekends. They do a lot of hoping and praying. […] When an organization implements DataOps, engineers, scientists and analysts can relax because quality is assured. They can Work Without Fear™.”
That sums up my new year wishes for you.
Growing your product with the help of data.
As more businesses are built on the subscription model, the “by month” promise is attractive, but the key point is that retention should trump conversion. Acquisition funnels are down to a science, and we often hear of businesses that explode to 1 million users quickly, but building a sustainable business requires retaining those users.
In this entertaining talk, Des Traynor (Co-Founder and Chief Strategy Officer at Intercom [love that product btw]) not only brings that point home by going into how to measure retention (for example Net Dollar Retention, which we talked about before when covering that really good NDR explainer), but he also dives into how to bring your team around that objective.
What behaviours predict a user who will stay versus one who will churn? How do you encourage the beneficial ones? A focused data strategy tracks what's important and keeps the team aligned towards not only converting users, but also retaining them.
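As a quick refresher on the metric Des mentions, Net Dollar Retention compares the recurring revenue of a customer cohort today with what that same cohort paid a year earlier, so expansion revenue can offset churn and downgrades. A small worked example (the dollar amounts are made up for illustration):

```python
def net_dollar_retention(starting_arr, expansion, contraction, churn):
    """NDR = (starting ARR + expansion - contraction - churn) / starting ARR,
    computed over the same customer cohort, typically on a 12-month window."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Hypothetical cohort: $100k ARR a year ago, $20k in upsells since,
# $5k lost to downgrades, $10k lost to churned accounts.
ndr = net_dollar_retention(100_000, 20_000, 5_000, 10_000)
assert round(ndr, 2) == 1.05  # 105%: the cohort grew despite churn
```

An NDR above 100% means the business grows even with zero new customer acquisition, which is exactly why retention can trump conversion.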
Factory operations to transform data into analytics.
The premise of this episode is quite simple: data quality is foundational to any successful data science project (in the broad sense of the term).
Guest Buck Woody, a data scientist at Microsoft, has a broad perspective on the subject and it’s a very informative and entertaining interview to listen to. It doesn’t necessarily go into the process steps to improve data quality and ensure project success, but if you want context, history and a few anecdotes, this is worth the listen.
But if you do want to get into the Team Data Science Process, you should probably head over here. It’s a Microsoft thing (and reading through it, you get that vibe too), but the principles apply to any analytics project.
It’s essentially a lifecycle in 5 steps: business understanding; data acquisition and understanding; modeling; deployment; and customer acceptance. It should be noted that this document also covers more than this lifecycle. If you are interested in anything “analytics team management”, then there’s probably some other stuff in there to get your brain busy during the holidays.
Deriving insights from your product’s data.
Eye Candy Alert! 🍬🍫🍭
Here’s my holiday gift to you – a bunch of amazingly good-looking data visualizations! It will inspire you, and for most of us it will also be a reminder of how little talent we have at presenting data and insights. Enjoy that mixed feeling of experiencing joy and envy at the same time 😁
What’s happening in the product analytics market.
I was listening to the Data Engineering Podcast with Kent Graziano, the Chief Technical Evangelist at Snowflake, and they were going over what makes Snowflake different from its competitors: the decoupling of storage and compute, dynamic scalability, being cloud native, etc. And Kent went on to say:
“And to be fair, many of our competitors are obviously evolving. And they’re adding things to their offerings here. Over time, we’re starting to see that but in the end, it ends up still being a fairly complicated engineering sort of feat for it for the folks who are managing the system. And that’s one of the things that really differentiates us is the I’ll say, lack of management that you need to do.”
So it was interesting to hear, in that context, the announcement from AWS that Redshift will now also offer decoupled storage and compute: with the new Amazon Redshift RA3 instances, users will be able to scale the two independently.
It’s definitely a step in the right direction, although the RA3 nodes only come in the xlarge size and you need at least 2 to form a Redshift cluster. If my calculations are correct, that amounts to minimum US$18k per month. So, ummm, that’s not for every budget. 🤑
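For the curious, here's the back-of-the-envelope math behind that figure. The hourly rate is my assumption (roughly US$13 per node per hour on-demand, in the ballpark of list pricing at launch in us-east-1); check the AWS pricing page for current, region-specific numbers.

```python
# Back-of-the-envelope monthly cost for the smallest possible RA3 cluster.
# The hourly rate is an assumption (~US$13/node/hour on-demand at launch);
# actual pricing varies by region and over time.
hourly_rate = 13.04        # assumed US$ per node per hour
min_nodes = 2              # an RA3 cluster needs at least 2 nodes
hours_per_month = 24 * 30  # ~720 hours

monthly_cost = hourly_rate * min_nodes * hours_per_month
assert round(monthly_cost) == 18778  # ≈ US$18k/month, matching the estimate above
```

Reserved-instance pricing would bring that down considerably, but as an entry ticket it's still squarely enterprise territory.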