Ziv Seker from Microsoft shows how to build synthetic users in ChatGPT, based on Stanford research that achieved 85% accuracy in predicting real user responses. The Stanford study started with two-hour audio interviews with over 1,000 participants on topics like the economy and COVID-19. The researchers held out some questions, used AI to generate synthetic responses to them, and compared those to the participants’ actual answers; that comparison is where the 85% figure comes from.

Here’s Ziv’s demonstration of their method:

First, avoid the naive approach. Simply prompting “You are a 38-year-old American Jewish product manager, please respond as if you are real” won’t work: without any real context about the person, it produces poor results.

Second, provide the full interview transcript to ChatGPT. Ziv uses a free dataset from Princeton University and gives it to the AI.

Third, generate expert perspectives. Prompt the AI: “Please make observations as you would as each one of these experts” (listing experts such as psychologists, behavioral economists, etc.). The AI produces 5-20 bullet points analyzing the interview from each expert’s perspective.

Fourth, select the right expert for each question. Ask: “Please let me know which expert would be best” for the specific question you’re testing. Different questions benefit from different expert lenses.

Fifth, combine everything for the synthetic response. Using the expert reflection, the interview transcript, and the new question, generate the synthetic user’s answer.

When Ziv tests this with a question left out of the original interview, the synthetic response isn’t exactly what the real person said, but it gives “a pretty good idea where we are.”

Ziv’s key insight: “Synthetic users can help us test assumptions and understand if we’re completely off-track, but they can’t tell us definitively what will work. They won’t replace user research, but they can help us come more prepared to interviews and maybe help us understand general directions for smaller questions.”

➡️ Use synthetic users to pressure-test assumptions before real interviews, not to replace them. The 85% accuracy is good enough to catch when you’re completely off-base, saving time and improving the quality of your real research.
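The five steps above can be sketched as a small prompting pipeline. This is a minimal sketch, not Ziv’s actual implementation: `ask_llm` is a hypothetical stand-in for whatever chat-completion API you use, stubbed here so the example runs offline, and the expert list and prompt wording are illustrative assumptions.

```python
# Sketch of the synthetic-user pipeline: transcript -> expert
# observations -> expert selection -> synthetic answer.

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; replace with your
    # chat-completion client. Stubbed so this sketch runs offline.
    return f"[LLM response to a {len(prompt)}-char prompt]"

# Illustrative expert lenses (the post mentions psychologists and
# behavioral economists among others).
EXPERTS = ["psychologist", "behavioral economist", "sociologist"]

def expert_observations(transcript: str) -> dict[str, str]:
    """Step 3: observations on the interview from each expert's lens."""
    return {
        expert: ask_llm(
            f"Here is an interview transcript:\n{transcript}\n\n"
            f"As a {expert}, make 5-20 bullet-point observations "
            f"about this person."
        )
        for expert in EXPERTS
    }

def pick_expert(question: str) -> str:
    """Step 4: ask which expert lens best fits this question."""
    return ask_llm(
        f"For the question '{question}', please let me know which of "
        f"these experts would be best: {', '.join(EXPERTS)}."
    )

def synthetic_answer(transcript: str, reflection: str, question: str) -> str:
    """Step 5: combine transcript + expert reflection + the new question."""
    return ask_llm(
        f"Interview transcript:\n{transcript}\n\n"
        f"Expert reflection:\n{reflection}\n\n"
        f"Answer the following question as this person would:\n{question}"
    )
```

In use, you would generate the observations once per interview, pick the lens per held-out question, and only then generate the synthetic answer, mirroring the order of the steps in the post.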