Agentic AI is going to be huge in 2025. It’s about using LLMs to control the flow of applications and services. After working with simple LLM function calling, I’ve started exploring how multiple agents can think, remember, and collaborate.

🧠 An LLM-enabled agent’s behavior is driven by its context window. This includes:

↳ the system prompt
↳ the chat history, which logs prior messages and tool-call results
↳ core memory: some agentic design patterns reserve blocks of the context window to describe the agent’s persona and task-relevant information, for example a customer profile (see the sketch below)
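
Here’s a minimal, framework-agnostic sketch of how those three pieces might be assembled into the prompt the model actually sees. The class, field, and function names (CoreMemoryBlock, AgentContext, compile_prompt) are my own illustrative assumptions, not any specific framework’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an agent's context window built from a system prompt,
# pinned core-memory blocks, and the running chat history.

@dataclass
class CoreMemoryBlock:
    label: str   # e.g. "persona" or "customer_profile"
    value: str   # text pinned into the context window

@dataclass
class AgentContext:
    system_prompt: str
    core_memory: list[CoreMemoryBlock] = field(default_factory=list)
    chat_history: list[dict] = field(default_factory=list)  # prior messages + tool results

    def compile_prompt(self) -> str:
        """Flatten the three sources into the text the LLM actually sees."""
        memory = "\n".join(f"[{b.label}]\n{b.value}" for b in self.core_memory)
        history = "\n".join(f"{m['role']}: {m['content']}" for m in self.chat_history)
        return f"{self.system_prompt}\n\n# Core memory\n{memory}\n\n# History\n{history}"
```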

💡Context windows have a finite size. When the window fills up and overflows, instead of losing information we can evict selected portions into a database of persisted recall memory, so no messages, interactions, or responses are ever lost. Through “context compilation,” relevant recall memory can be summarized and brought back into the LLM’s context window when needed. Imagine a flight booking assistant that remembers your preferences from last year’s holiday - that’s a far more powerful and engaging experience.
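
As a rough sketch of that evict-and-recall loop (building on the AgentContext above): approx_tokens, maybe_evict, and recall are hypothetical stand-ins for a real tokenizer, eviction policy, and embedding-search + summarization step.

```python
# Persisted recall memory; a real system would back this with a database.
recall_store: list[dict] = []

def approx_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, not a real tokenizer

def maybe_evict(ctx: AgentContext, limit: int = 8000) -> None:
    """When the compiled prompt overflows, move the oldest messages
    into recall memory instead of discarding them."""
    while ctx.chat_history and approx_tokens(ctx.compile_prompt()) > limit:
        recall_store.append(ctx.chat_history.pop(0))

def recall(ctx: AgentContext, query: str, k: int = 3) -> None:
    """Bring relevant evicted messages back into the context window
    (naive keyword match; real systems would use embeddings + summarization)."""
    hits = [m for m in recall_store if query.lower() in m["content"].lower()][:k]
    ctx.chat_history.extend(hits)
```

Calling maybe_evict after every turn and recall before answering is one simple policy; frameworks differ in when and how aggressively they page memory in and out.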

🔄A big game-changer will be multi-agent autonomous systems. Consider the following airline scenario. An operations controller agent detects weather delays. Through shared memory blocks and inter-agent messaging, it orchestrates (sketched in code after the list):

↳ Crew roster agent evaluating staff availability and legal requirements
↳ Aircraft assignment agent optimizing fleet deployment
↳ Customer service agent proactively managing passenger communications
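
A toy sketch of that orchestration, assuming a shared in-memory block and simple message queues; the agent names, message format, and send/fan-out logic are illustrative, not any particular framework’s API.

```python
from queue import Queue

# Shared memory block visible to all agents, plus one inbox per agent.
shared_memory = {"disruption": None}
inbox: dict[str, Queue] = {name: Queue() for name in
                           ("crew_roster", "aircraft_assignment", "customer_service")}

def send(agent: str, msg: dict) -> None:
    inbox[agent].put(msg)

def operations_controller(event: dict) -> None:
    """Detects a weather delay, updates shared memory, fans out tasks."""
    shared_memory["disruption"] = event
    send("crew_roster", {"task": "check staff availability and legal limits"})
    send("aircraft_assignment", {"task": "re-optimize fleet deployment"})
    send("customer_service", {"task": "notify affected passengers"})

# Example run: a 3-hour weather delay triggers the fan-out.
operations_controller({"type": "weather_delay", "airport": "LHR", "hours": 3})
for agent, q in inbox.items():
    print(agent, "received:", q.get())
```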

📚Want to learn more about agentic framework memory architecture? Read the MemGPT paper and check out the Letta framework.

❓What agentic frameworks excite you that I should look at next?