What forgetting actually costs
It's worth being concrete about what the absence of memory means in practice. Consider an AI-assisted research workflow where an analyst uses an agent to investigate a topic over several sessions.
Without persistent memory, the agent can't recall what sources were already reviewed, what conclusions were reached in previous sessions, or what the analyst's evolving preferences and constraints are. Every session starts cold. The analyst must re-orient the agent, re-establish context, and re-explain what they've already ruled out. The overhead is substantial and the experience is frustrating.
Or consider a customer-facing AI assistant deployed by an enterprise. Without memory, it can't recognize returning customers, can't reference previous interactions, and can't build up a model of a customer's situation over time. It behaves the same on the tenth interaction as it did on the first. For routine transactional queries this may be acceptable, but for complex, high-value customer relationships, it's a significant liability.
The naive fix, simply appending all past interactions to the current prompt, doesn't scale. Context windows are finite and expensive. A comprehensive history of interactions with a single user or across a long project can easily exceed what any model can process in a single call, and even if it could, the performance degradation from bloated contexts applies here too. More history isn't the same as better memory.
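A rough back-of-the-envelope sketch makes the scaling problem concrete. Everything numeric here is an assumption for illustration: the context limit, the per-session transcript size, and the crude tokens-per-word ratio (real tokenizers count subword tokens, not whitespace-split words):

```python
# Sketch of the naive "append all past interactions" approach.
# All constants are illustrative assumptions, not real model limits.

CONTEXT_LIMIT = 8000  # hypothetical context window, in tokens


def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~1.3 tokens per whitespace-split word."""
    return int(len(text.split()) * 1.3)


def sessions_until_overflow(tokens_per_session: int,
                            limit: int = CONTEXT_LIMIT) -> int:
    """How many sessions of appended history fit before the
    accumulated prompt alone exceeds the context window."""
    return limit // tokens_per_session


# If each session adds ~500 tokens of transcript (an assumption),
# the history fills the window after only 16 sessions, leaving no
# room for the current query, instructions, or the model's reply.
print(sessions_until_overflow(500))  # -> 16
```

The point of the arithmetic is that linear history growth meets a fixed ceiling quickly, and long before the ceiling is hit, the prompt is crowding out the content that actually matters for the current task.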

