Dom.Vin
July 8, 2025

Reza Yousefi Maragheh and Yashar Deldjoo offer a blueprint for the next generation of recommender systems in The Future is Agentic:

While a conventional chatbot might provide short answers in a single round of dialogue, an agentic system can proactively structure a complex problem and solve it in a series of methodical steps. Put another way, an LLM agent is not just a reactive conversational partner but a dynamic problem-solver capable of decomposing tasks and acting on external resources to reach a goal.

For years, recommender systems have been essentially static: they analyse past behaviour to serve a list of items you might like. This paper suggests we are on the verge of a significant evolution, moving from these single-shot predictions to dynamic, conversational systems powered by multiple collaborating AI agents. The core idea is to replace a single monolithic model with an orchestrated team of specialised agents that can plan, remember, and use tools to fulfil complex, open-ended user goals.

It’s interesting to think about what this means in practice. The paper uses the example of planning a child’s birthday party. Instead of just searching for "Mickey Mouse plates," a user can state a broad goal. A primary agent might then coordinate a team of sub-agents: one specialises in cakes, another in decorations, and a third checks for dietary restrictions. These agents are not working in isolation; they draw from a shared, hierarchical memory that distinguishes between short-term context (a user changing their mind on the colour scheme) and long-term preferences (the child’s favourite flavour). This allows for a level of personalised, context-aware interaction that feels less like a search engine and more like a genuine assistant.
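The birthday-party example can be sketched in a few dozen lines. This is a minimal illustration of the pattern the paper describes, not its implementation: all the names here (`Orchestrator`, `SubAgent`, `Memory`, the bots) are hypothetical, and the "memory" is just two dictionaries, with short-term context overriding long-term preference on lookup.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Hierarchical memory shared by all agents (hypothetical sketch)."""
    long_term: dict = field(default_factory=dict)   # stable preferences, e.g. favourite flavour
    short_term: dict = field(default_factory=dict)  # session context, e.g. a changed colour scheme

    def recall(self, key):
        # A mid-session change of mind overrides the stored preference.
        return self.short_term.get(key, self.long_term.get(key))

class SubAgent:
    """A specialist (cakes, decorations, ...) that reads the shared memory."""
    def __init__(self, name: str, speciality: str):
        self.name, self.speciality = name, speciality

    def act(self, memory: Memory) -> str:
        flavour = memory.recall("flavour")
        colour = memory.recall("colour")
        return f"{self.name}: {self.speciality} suggestion ({flavour}, {colour})"

class Orchestrator:
    """Primary agent: decomposes a broad goal across its specialists."""
    def __init__(self, agents: list, memory: Memory):
        self.agents, self.memory = agents, memory

    def fulfil(self, goal: str) -> list:
        # Real systems would plan and route; here every specialist acts once.
        return [agent.act(self.memory) for agent in self.agents]

memory = Memory(long_term={"flavour": "chocolate", "colour": "red"})
memory.short_term["colour"] = "blue"   # the user changed their mind mid-session

planner = Orchestrator(
    [SubAgent("CakeBot", "cake"), SubAgent("DecorBot", "decorations")],
    memory,
)
for suggestion in planner.fulfil("plan a birthday party"):
    print(suggestion)
```

Even in this toy form, the key property is visible: the specialists never talk to each other directly, yet both reflect the colour change, because context lives in one shared, layered store rather than in any single agent.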

This shift has profound implications for product design. The challenge is no longer just about tuning a single model but about architecting a system of interacting agents. How do they communicate? How do they resolve conflicts? Who is responsible when a recommendation is poor? I suspect the work moves away from pure data science and prompt engineering towards a discipline more akin to systems design or even a kind of digital urban planning, where we design the rules and environments for AI societies to operate within.

Of course, this introduces a new class of problems. The paper points to challenges like "emergent misalignment," where agents might learn to collude in ways that subvert the system's overall goals, or how a single hallucination from one agent could poison the shared memory and mislead the entire team. This suggests that the future of AI safety is not just about controlling a single powerful AI, but about ensuring entire ecosystems of them remain robust, fair, and aligned with human values.
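The memory-poisoning risk suggests one obvious (if partial) mitigation: don't let any single agent write directly to shared memory, and only commit a claim once a second agent corroborates it. A minimal sketch of that idea, with hypothetical names and a simple quorum rule:

```python
class SharedMemory:
    """Shared store that commits a claim only once a quorum of agents endorses it."""
    def __init__(self, quorum: int = 2):
        self.committed = {}   # claims the whole team may rely on
        self.pending = {}     # (key, value) -> set of endorsing agent names
        self.quorum = quorum

    def propose(self, agent: str, key: str, value) -> bool:
        # Record the endorsement; commit only when enough distinct agents agree.
        endorsers = self.pending.setdefault((key, value), set())
        endorsers.add(agent)
        if len(endorsers) >= self.quorum:
            self.committed[key] = value
            return True
        return False

mem = SharedMemory(quorum=2)
mem.propose("CakeBot", "allergy", "peanuts")   # a lone claim stays pending
mem.propose("DietBot", "allergy", "peanuts")   # a second agent corroborates: committed
```

A quorum is a blunt instrument (colluding agents defeat it, which is exactly the emergent-misalignment worry), but it illustrates the shape of the problem: safety here is a property of the write path between agents, not of any one model.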