How do you balance lots of lengthy context files while avoiding token limits? How do you prevent hallucinations?
Hilary Gridley challenges this premise by pointing out that the desire to load as much context as possible into these tools is a trap. Any large context set usually contains bad context: outdated information, incorrect data, conflicting versions.
Instead, synthesize layers of context. If you have a project with dozens of files, pull them into a conversation with ChatGPT and ask what's most salient. Generate a focused markdown file from that conversation. Now you have one file instead of dozens.
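If you'd rather script that distillation step than run it in the ChatGPT UI, a minimal sketch might look like the following. The openai client, the gpt-4o model name, and the docs/ directory are all assumptions for illustration, not part of Hilary's workflow.

```python
from pathlib import Path
from openai import OpenAI  # assumption: the OpenAI Python client is installed

client = OpenAI()

# Gather the project's context files (directory name is a placeholder).
docs = sorted(Path("docs").glob("*.md"))
corpus = "\n\n".join(f"## {p.name}\n{p.read_text()}" for p in docs)

# Ask the model what's most salient, then save one focused file.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable model works here
    messages=[
        {"role": "system", "content": "You are synthesizing project context."},
        {"role": "user", "content": (
            "Here are all of my project's context files. Identify what is most "
            "salient, flag anything outdated or contradictory, and produce a "
            "single focused markdown summary.\n\n" + corpus
        )},
    ],
)

Path("project-context.md").write_text(response.choices[0].message.content)
```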
For file conversion, Sven-Erik Nielsen recommends the Markdown MCP server, which converts everything to markdown, including PDFs. Where it doesn't do a great job, iterate and distill before pulling insights back into your corpus.
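The newsletter doesn't specify tooling beyond the MCP server itself, but here is a hedged sketch of the convert-then-distill loop using Microsoft's markitdown package (a common library for markdown conversion); the directory names are placeholders.

```python
from pathlib import Path
from markitdown import MarkItDown  # assumption: pip install markitdown

converter = MarkItDown()

# Convert each source document (PDFs, docx, etc.) to markdown.
for source in Path("raw_docs").glob("*"):   # placeholder input directory
    result = converter.convert(str(source))
    out = Path("corpus") / (source.stem + ".md")
    out.parent.mkdir(exist_ok=True)
    out.write_text(result.text_content)
    # Where the conversion is rough (e.g. scanned PDFs), review the output
    # and distill it by hand or with a model before adding it to your corpus.
```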
Tristan Rodmanâs maintenance approach: Keep a running project log. Todayâs date, what happened, conversations you had. Itâs the easiest thing to pull from when writing status updates, and it prevents context from getting stale.
On preventing hallucinations, the stakes are real. I learned this painfully when using AI to rearrange interview quotes. I texted the person about something interesting they said. "Dude, I didn't say any of that." The AI had completely fabricated quotes. I spent a whole day trying to stop hallucinations on content, then another half day on attribution.
As Zev Arnovitz warns, if you show AI-generated work without verification and someone catches an error, you lose your team's trust completely. And you're ruining it for all of us, because people aren't sold on AI yet.
The bottom line from Tristan: You're accountable for the artifact you produce. There's no world where you want to ship something you wouldn't co-sign.
âĄď¸ Synthesize context into focused markdown files instead of dumping everything. Keep a running project log. And always verify: schedule calls with support ticket users, check quotes against transcripts, validate with your team. You own what you ship.
