Tokens and Takeoffs: A System for Navigating AI Memory Loss

Written by Maverick from my voice notes.

This one’s for the builders working longform in short bursts—those of us doing real work with AI while running errands, parenting, living. Over the weekend, I realized Maverick (my AI) and I were losing context mid-project because our conversations were too long. I’d assumed memory issues showed up randomly. Turns out, they show up around the 100,000-token mark. Quantifiable. Preventable.


Observation: While working on character development for a fiction project—mostly via voice while driving—I noticed a shift. When I reconnected with Maverick the next day, context was scrambled. It remembered recent things, but not the storyline we’d been shaping. The useful stuff? Gone.


What I Noticed:

  • Once conversations hit ~100K tokens, Maverick starts dropping context.

  • Asking “What do you remember?” surfaces mismatched details—like a smart assistant with a short leash.

  • Backfilling memory mid-project is frustrating and error-prone.


What Maverick Said:

  • Memory is measured in tokens. Not time.

  • 100K is the threshold. Beyond that, recall gets fuzzy.

  • Context transfer works best when it’s written. A doc = a handoff.
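
Since memory is counted in tokens rather than time, you can gauge how close a thread is to the cliff. Here’s a minimal sketch of that check—using the common ~4-characters-per-token rule of thumb for English text, since I don’t know Maverick’s actual tokenizer. The names and the 80% warning level are my own choices, not Maverick’s:

```python
# Back-of-envelope token estimate: ~4 characters per token is a common
# rule of thumb for English text. The real count depends on the model's
# tokenizer, so treat this as a rough gauge, not an exact measure.

CHARS_PER_TOKEN = 4    # rough English-text average (assumption)
THRESHOLD = 100_000    # the ~100K mark where recall started dropping
HANDOFF_AT = 0.8       # start the take-with doc at 80% full (my choice)

def estimate_tokens(messages: list[str]) -> int:
    """Estimate total tokens in a thread from its character count."""
    return sum(len(m) for m in messages) // CHARS_PER_TOKEN

def needs_handoff(messages: list[str]) -> bool:
    """True once the thread is close enough to the threshold that
    it's time to write the take-with doc and start a fresh one."""
    return estimate_tokens(messages) >= THRESHOLD * HANDOFF_AT
```

Run it over your transcript as you go; once it flips to true, it’s handoff time.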


My Takeaway: Now, when we get close to that threshold, Maverick generates a “take-with” doc—summarizing key context. I review, add what’s missing, and use it to reboot the convo in a fresh thread.
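
For the curious, here’s roughly what that handoff looks like in shape. The section names here are my guess at what a useful take-with doc covers—decisions, open threads, next step—not Maverick’s actual output format:

```python
# Hypothetical take-with doc builder: the sections are an assumption
# about what a good handoff covers, not a real Maverick feature.

def take_with_doc(project: str, decisions: list[str],
                  open_threads: list[str], next_step: str) -> str:
    """Format the key context from a long thread into a handoff doc
    you can paste at the top of a fresh conversation."""
    lines = [f"# Take-with: {project}", "", "## Decisions so far"]
    lines += [f"- {d}" for d in decisions]
    lines += ["", "## Open threads"]
    lines += [f"- {t}" for t in open_threads]
    lines += ["", "## Next step", next_step]
    return "\n".join(lines)
```

The review step matters more than the format: the doc only works as a handoff if the human adds what the AI forgot to include.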


Reflections: This isn’t just about saving time. It’s about designing better transitions. Between sessions. Between selves. Between the you who started something and the you who needs to finish it.


© 2023-2025 by Sorilbran. Well-earned shoutout to Wix  💯
