Hilary Gridley shows how to define the right activation metric and move it using app store reviews and qualitative data.
The challenge: figuring out what activation metric to set for your product - at what point should you consider someone activated? And once set, how do you actually move that metric?
Here's Hilary's approach:
First, feed app store reviews or other qualitative data to Claude. In her example, she asked Claude to create a hypothetical data set using Headspace, then requested help selecting an activation metric based on the "Aha moment" pulled from the reviews.
Second, get A/B test suggestions to move that metric. Hilary emphasizes asking for "non-obvious" suggestions because otherwise you get the most obvious answers first and have to say "we've tried that" repeatedly before getting better ideas.
Third - and this is where it gets interesting - change the objective to generate different types of ideas. Instead of asking for tests to move the metric, ask for tests that give maximum information about which levers are most effective. These might be tests you'd never actually roll out but that provide valuable signals.
Fourth, try asking for "extreme" tests. Hilary gets ideas like forcing users into a single feature immediately after signup - probably not something you'd implement permanently, but it could reveal which features have the highest activation impact.
Hilary notes: "You can see you get very different ideas just by changing the prompt slightly. All of these are interesting experiments to consider."
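If you'd rather script this than paste reviews into the Claude app, here's a minimal sketch of the workflow using the Anthropic Python SDK. The reviews.txt file, model name, and prompt wording are illustrative assumptions, not Hilary's exact prompts - the point is simply that the data stays constant while the objective changes.

```python
# Minimal sketch: feed the same qualitative reviews to Claude and vary only
# the objective in the prompt to get different kinds of experiment ideas.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# file name, model, and prompt text are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

with open("reviews.txt") as f:  # app store reviews, one per line
    reviews = f.read()

OBJECTIVES = {
    "pick_metric": (
        "Based on the 'Aha moments' in these reviews, propose an activation "
        "metric and the threshold at which a user should count as activated."
    ),
    "non_obvious_tests": (
        "Suggest non-obvious A/B tests to move that activation metric. "
        "Skip the ideas any PM would list first."
    ),
    "max_information": (
        "Suggest A/B tests that would give maximum information about which "
        "levers drive activation, even ones we would never actually ship."
    ),
    "extreme_tests": (
        "Suggest extreme tests (e.g. forcing every new user into a single "
        "feature right after signup) to reveal which features drive activation most."
    ),
}

for name, objective in OBJECTIVES.items():
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # any current Claude model works here
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Here are our app store reviews:\n\n{reviews}\n\n{objective}",
        }],
    )
    print(f"--- {name} ---")
    print(message.content[0].text)
```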
➡️ The same data yields completely different insights when you shift your objective from "move the metric" to "learn what works." Try both approaches to get a fuller picture of your activation levers.
Check out Hilary's course (not sponsored, she's just awesome).
