- Basic conversational LLM chat
- Extended thinking mode
- AI web search
How I built basic conversational LLM chat
The two tools I used were an LLM deep-research tool (Claude’s reports are a pleasure to read) and an AI coding IDE (Cursor is a great place to start, with a good free tier). My process went like this:
1. Pause and guess
How do I think it’s built? I imagine stepping behind the scenes and acting as the LLM (like in The Wizard of Oz): how would I get the job done? What would I need to save and remember? What utilities would help me?
2. Deep research
I run deep research on how basic LLM chat actually works behind the scenes, grapple with the result, and ask dumb clarifying questions.
- I asked, “How does LLM chat UI work? I’m talking about a basic ChatGPT or Claude interface, before adding all sorts of cool features and interesting things, just like the basic original version of those.”
- I then added my favorite phrase, “Explain to me in atoms (as low-level as is reasonable and useful), minimum abstractions or magic.”
3. Create a spec for an MVP
I open a new blank Cursor project and copy the deep research thread into the chat. I tell it my goals and ask it to generate a SPEC.MD file with the minimum scope for me to build a POC that works, but not production-grade.
4. Create a plan to build the spec
I ask Cursor to generate a PLAN.MD that follows the principle of “something simple working soon”: get a basic working loop, and then incrementally add to that.
5. Create agent instructions on how to execute the plan
I ask Cursor (still in the same thread) to create an AGENTS.MD that will have it build in a super self-sufficient way.
6. Do it!
Finally, I type “Execute on @PLAN.MD for @SPEC.MD” and watch it work. It’s way less babysitting than when I used these tools half a year ago, but it still required some technical nudging here and there.
Tips
Iterate the prompts to your liking
Each of these prompts started pretty simple (hint: use what I wrote above). I did spend a lot of time reading the results, going back and using the ✏️ edit button, and conversationally adding more stipulations. That was a huge part of the learning experience in itself.
Tell the coding agent to design the MVP spec in a way that lets it be self-sufficient
It really helped to tell the coding agent to design the spec so that it could be really self-sufficient and troubleshoot a lot of things itself. I gave examples, like accessing a lot of the logs directly and making every part extremely testable, with lots of API endpoints the coding agent could hit itself to troubleshoot. This made the process so much smoother.
Avoid SDKs and libraries that “do it for you”
Since the goal is to learn, if there’s a higher-level API, SDK, or library that abstracts away the core concept I want to understand, then I intentionally prefer not to use it (unless the alternative is truly very complicated).
How I added “extended thinking” mode
Same steps as above! Except that for deep research, my prompt started with: “How does the LLM ‘extended thinking’ or ‘reasoning’ toggle feature I see in ChatGPT/Claude work? Focus on what’s novel over plain LLM chat.”
For the SPEC.MD and PLAN.MD, I worked in the same Cursor project. I had it create new files, saved the previous ones, and told the coding agent to build on what already exists.
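To make the “what’s novel” part concrete, here is a minimal sketch of the basic chat loop with a thinking toggle bolted on. This is my illustration, not the actual code the agent produced: `call_llm` is a hypothetical stand-in for a real provider API, faked here so the sketch runs on its own. Real “extended thinking” APIs similarly return a separate reasoning trace alongside the final answer.

```python
# Minimal sketch of a stateless chat loop plus a "thinking" toggle.
# call_llm is a FAKE stand-in for a real provider API (its name and
# return shape are assumptions for illustration, not any vendor's SDK).

def call_llm(messages, thinking=False):
    """Fake model: echoes the last user message; with thinking on, it
    also returns a separate reasoning trace, loosely mimicking how real
    APIs expose distinct 'thinking' content."""
    last = messages[-1]["content"]
    reasoning = f"(considering: {last!r})" if thinking else None
    return {"reasoning": reasoning, "answer": f"You said: {last}"}

def chat_turn(history, user_text, thinking=False):
    # The key idea from the deep research: the model is stateless.
    # "Memory" is just the full message history re-sent every turn.
    history.append({"role": "user", "content": user_text})
    result = call_llm(history, thinking=thinking)
    # The reasoning trace can be shown or hidden in the UI; in the
    # simple version, only the final answer joins the history.
    history.append({"role": "assistant", "content": result["answer"]})
    return result

history = []
chat_turn(history, "hello")
out = chat_turn(history, "what did I just say?", thinking=True)
print(out["reasoning"])  # the extra trace the toggle adds
print(len(history))      # 4: two user + two assistant messages
```

The point of the sketch is that the toggle barely changes the loop; it mostly changes what the model returns and what the UI chooses to display.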
How I added AI web search
The deep research prompt here started with: “How does an LLM chat ‘web search’ feature work (e.g., the toggle you can enable in Claude or ChatGPT, the original version of Perplexity, or Google’s new AI Mode search)?”
Everything else is pretty much the same process, in the same Cursor project.
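The core idea that came out of that research, web search as a tool the model can call in a loop, can be sketched in a few lines. Again this is an illustrative sketch, not the code we built: `fake_model` and `fake_search` are stand-ins for a provider’s tool-calling API and a search backend.

```python
# Minimal sketch of a "web search" toggle: offer the model a search
# tool; if it asks to use it, run the search, append the results to the
# conversation, and call the model again. fake_model and fake_search
# are stand-ins -- real systems use a provider tool-calling API and a
# search backend.

def fake_search(query):
    # Stand-in for a real search API; returns snippet-like results.
    return [f"Result about {query} #1", f"Result about {query} #2"]

def fake_model(messages):
    # Stand-in for the LLM: if no search results are present yet, it
    # "decides" to call the search tool; otherwise it answers using them.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "query": messages[-1]["content"]}}
    snippets = "; ".join(m["content"] for m in messages
                         if m["role"] == "tool")
    return {"answer": "Answer grounded in: " + snippets}

def chat_with_search(user_text):
    messages = [{"role": "user", "content": user_text}]
    while True:
        reply = fake_model(messages)
        if "tool_call" not in reply:
            return reply["answer"]
        # The model requested a search: run it and loop back.
        for snippet in fake_search(reply["tool_call"]["query"]):
            messages.append({"role": "tool", "content": snippet})

print(chat_with_search("latest LLM news"))
```

The subtle parts that this sketch skips over (query rewriting, ranking, deciding when to stop searching, citing sources) are exactly where, as I note below, things tend to break.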
Where I learned the most
My main advice here is to take time and let your brain digest what you’re learning.
Deep research
When you sit down to do this, I recommend taking your time to really read the deep research and wrap your mind around the concepts. Take the time, have fun clicking into links, and savor the original sources.
I grappled with the concepts and asked dumb questions
Use the thread as your personal tutor. Ask dumb questions, have it challenge your mental model, try to form analogies, and ask if you’re getting it right. If your brain hurts, you’re doing it right.
I added educational features
The cool thing about building your own clone is you can add any features you want. I added all sorts of features to make the hard concepts visible: requests, responses, color-coded tokens, completion counters, you name it. I felt like I was building my own kids’ science museum.
I asked it to walk me through how things work in what we built
I love asking Cursor to walk me through the code in a narrative format: I tell it to start with “when I click the button” and tell me the story of what’s happening in the system. Doing this tends to give me ideas for more “educational features” to add, like the ones above.
When it didn’t work that great
I felt this with web search in particular: the feature clearly worked but was a little off, which made me realize that the idea is simple but the subtleties are really hard to get right (and that’s where things break).
Warning: fun ahead!
This process has only made me more curious. Now every time I use an LLM product, my list of features-to-clone keeps growing. I may have started this as a professional exercise, though now it’s just plain fun. (Spoken like a terrible engineer.)
If you want to go deeper on implementation and adoption, I offer live courses and workshops.
