Hilary Gridley shows how to define the right activation metric and move it using app store reviews and qualitative data.

The challenge: figuring out what activation metric to set for your product - at what point should you consider someone activated? And once set, how do you actually move that metric?

Here’s Hilary’s approach:

First, feed app store reviews or other qualitative data to Claude. In her example, she asked Claude to create a hypothetical data set using Headspace, then requested help selecting an activation metric based on the “aha moment” pulled from the reviews.

Second, get A/B test suggestions to move that metric. Hilary emphasizes asking for “non-obvious” suggestions, because otherwise you get the most obvious answers first and have to say “we’ve tried that” repeatedly before getting better ideas.

Third - and this is where it gets interesting - change the objective to generate different types of ideas. Instead of asking for tests to move the metric, ask for tests that give maximum information about which levers are most effective. These might be tests you’d never actually roll out, but they provide valuable signal.

Fourth, try asking for “extreme” tests. Hilary gets ideas like forcing users into a single feature immediately after signup. That might not be something you’d implement permanently, but it could reveal which features have the highest activation impact.

Hilary notes: “You can see you get very different ideas just by changing the prompt slightly. All of these are interesting experiments to consider.”

➡️ The same data yields completely different insights when you shift your objective from “move the metric” to “learn what works.” Try both approaches to get a fuller picture of your activation levers.

Check out Hilary’s course (not sponsored, she’s just awesome).
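The four prompt variations above can be sketched as a small helper that pairs the same review data with different objectives. This is a minimal illustration, not Hilary's actual prompts: the function name, prompt wording, and sample reviews are all hypothetical, and the resulting strings are what you would send to Claude.

```python
def build_prompts(reviews):
    """Compose one prompt per objective; each pairs the same reviews
    with a different ask, mirroring the four steps above."""
    context = "App store reviews:\n" + "\n".join(f"- {r}" for r in reviews)
    # Illustrative objective wordings - adjust for your own product.
    objectives = {
        "baseline": "Suggest A/B tests to move our activation metric.",
        "non_obvious": "Suggest non-obvious A/B tests to move our activation metric.",
        "max_information": (
            "Suggest A/B tests that give us maximum information about which "
            "levers most affect activation, even ones we'd never ship."
        ),
        "extreme": (
            "Suggest extreme A/B tests, e.g. forcing every new user into a "
            "single feature immediately after signup."
        ),
    }
    return {name: f"{context}\n\n{ask}" for name, ask in objectives.items()}

# Hypothetical Headspace-style reviews standing in for real qualitative data.
prompts = build_prompts([
    "Loved the sleep meditations - fell asleep in minutes!",
    "Couldn't find the breathing exercises at first.",
])
```

The point of keeping the context fixed while swapping only the objective is that any difference in Claude's suggestions comes from the changed ask, not from changed data.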