Agent Memory Guide

Agent memory is an emerging discipline focused on how AI agents retain, organize, and reuse information from their past interactions. It studies how agents can incorporate prior experiences, not only static facts, into future decision making and behavior.

Agent memory is not just searching over past conversations or plugging in a vector database. It spans representing different kinds of information (facts, experiences, preferences, observations, and so on), deciding what to store and when, choosing how and when to retrieve it, and specifying update rules so that prior outcomes change an agent's future actions. This perspective connects memory with test‑time learning: the ability of agents to refine their behavior from experience without retraining the base model.
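As a concrete illustration of these pieces, here is a minimal sketch in Python. It assumes a toy `MemoryStore` with typed records, a salience-based write policy, keyword-overlap retrieval, and an outcome-driven update rule; the names and scoring are illustrative assumptions for this guide, not the API of any particular system.

```python
# Minimal sketch (illustrative only): typed memory records, a write policy,
# a retrieval step, and an update rule that folds task outcomes back into
# stored experience. The scoring and thresholds are toy assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Kind(Enum):
    FACT = "fact"
    EXPERIENCE = "experience"
    PREFERENCE = "preference"
    OBSERVATION = "observation"


@dataclass
class Record:
    kind: Kind
    text: str
    outcome: float = 0.0  # running score of how useful this memory has been
    uses: int = 0


@dataclass
class MemoryStore:
    records: list[Record] = field(default_factory=list)

    def write(self, kind: Kind, text: str, salience: float) -> None:
        # Write policy: persist only items above a salience threshold,
        # rather than logging every turn verbatim.
        if salience >= 0.5:
            self.records.append(Record(kind, text))

    def retrieve(self, query: str, k: int = 3) -> list[Record]:
        # Retrieval: rank by keyword overlap, weighted by past outcomes.
        # A real system would use embeddings; this keeps the sketch runnable.
        q = set(query.lower().split())

        def score(r: Record) -> float:
            overlap = len(q & set(r.text.lower().split()))
            return overlap * (1.0 + r.outcome)

        return sorted(self.records, key=score, reverse=True)[:k]

    def update(self, record: Record, success: bool) -> None:
        # Update rule: shift the record's outcome toward the observed result,
        # so future retrieval favors memories that led to good actions.
        record.uses += 1
        record.outcome += ((1.0 if success else -1.0) - record.outcome) / record.uses


if __name__ == "__main__":
    store = MemoryStore()
    store.write(Kind.EXPERIENCE, "retrying the flaky deploy step twice fixed the build", salience=0.9)
    store.write(Kind.OBSERVATION, "user prefers concise answers", salience=0.7)
    hits = store.retrieve("deploy step failed, what worked before?")
    store.update(hits[0], success=True)  # fold the outcome back into memory
```

The point of the sketch is the separation of concerns: what gets written, how stored experience is ranked at retrieval time, and how outcomes feed back into the store. Production systems typically replace the keyword overlap with embedding search and add consolidation or forgetting policies on top.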

Motivated by the growing interest in building robust AI agents, this guide brings together recent research on long‑horizon agency and test‑time adaptation, memory architectures such as hierarchical and structured memory, practical patterns from production systems, and tools for evaluating whether agents are actually retaining and using prior experience. It is intended as a technical reference for understanding, designing, and implementing agent memory beyond simple log search or prompt tuning.