Dom.Vin
July 4, 2025

Sarah Schömbs and her colleagues at the University of Melbourne map out the next major shift in how we interact with AI in “From Conversation to Orchestration”:

This fundamental difference shifts the user's role from the primary driver, issuing commands directly, iteratively and receiving feedback from one agentic source, to the user as 'the composer', delegating and handing over task responsibilities and overseeing multiple agents at a high-level.

For the last few years, our relationship with AI has been defined by the chat window. It’s a simple, sequential conversation with a single entity. This paper argues that we are on the cusp of a profound change, moving from this model of “conversation” to one of “orchestration”. The core idea is that instead of interacting with a single, general-purpose AI, we will soon manage teams of specialised agents that collaborate to achieve a goal. This shift recasts the user’s role entirely: we are no longer just the person asking the questions, but the composer of an AI ensemble.
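The paper is conceptual rather than code-level, but the role reversal is easy to caricature in a few lines. Everything below, the Agent and Composer classes and the plan format, is invented for illustration and not drawn from the paper:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A specialised agent; run() stands in for a real model call."""
    name: str
    speciality: str

    def run(self, task: str) -> str:
        return f"[{self.name}] completed: {task}"

class Composer:
    """The user's new role: assemble a team and delegate, rather than chat."""
    def __init__(self, agents: list[Agent]):
        self.team = {a.speciality: a for a in agents}

    def delegate(self, plan: list[tuple[str, str]]) -> list[str]:
        # Each plan entry names a speciality and the subtask handed to it.
        return [self.team[spec].run(task) for spec, task in plan]

composer = Composer([
    Agent("Scout", "research"),
    Agent("Scribe", "writing"),
])
print(composer.delegate([
    ("research", "gather sources on agent orchestration"),
    ("writing", "draft a one-page summary"),
]))
```

The point of the sketch is the shape of the interaction: the user writes the plan and assembles the team, and never converses with any single agent turn by turn.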

This transition from a simple dialogue to managing a complex system introduces enormous design challenges. If the user is now an orchestrator, what does the interface for that role even look like? The paper rightly points out that the hierarchical model, where a “manager” agent coordinates a team of sub-agents, is emerging as a popular architecture because it reduces the cognitive load on the user. But this simplification creates problems of its own, chief among them a lack of transparency. How do you give a user meaningful oversight of a team of agents all working in parallel without completely overwhelming them?
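One plausible answer, and this is my speculation rather than anything the paper prescribes, is to collapse the parallel activity into a rolling digest: the user sees one line per agent instead of the raw event stream. A minimal sketch, assuming a hypothetical ActivityLog and Manager:

```python
import time
from collections import deque

class ActivityLog:
    """Bounded, high-level activity feed: enough for oversight, not a firehose."""
    def __init__(self, maxlen: int = 100):
        self.events = deque(maxlen=maxlen)

    def record(self, agent: str, summary: str) -> None:
        self.events.append((time.time(), agent, summary))

    def digest(self) -> dict[str, str]:
        # Collapse the stream to one line per agent: its most recent action.
        return {agent: summary for _, agent, summary in self.events}

class Manager:
    """Hierarchical pattern: one manager fans a task out to sub-agents."""
    def __init__(self, workers: dict, log: ActivityLog):
        self.workers = workers
        self.log = log

    def handle(self, task: str, subtasks: dict) -> list:
        self.log.record("manager", f"split '{task}' into {len(subtasks)} parts")
        results = []
        for name, subtask in subtasks.items():
            results.append(self.workers[name](subtask))
            self.log.record(name, f"finished '{subtask}'")
        return results

log = ActivityLog()
manager = Manager({"coder": lambda t: f"patch for {t}",
                   "tester": lambda t: f"tests for {t}"}, log)
manager.handle("fix login bug", {"coder": "write the patch",
                                 "tester": "write a regression test"})
print(log.digest())  # one line per agent, however busy the team was
```

The design choice is that oversight reads the digest, not the log; the full event history is still there when something needs auditing.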

It’s interesting to think about the emergent behaviours of these systems. The paper touches on how these agent teams might evolve, creating new sub-agents or workflows without direct human instruction. This raises fascinating, and slightly unsettling, questions about trust and control. When a failure occurs, it might not be the fault of the agent you are interacting with, but of a sub-agent several layers deep in the hierarchy that you didn’t even know existed. How do we design for accountability in a system that is constantly reorganising itself?
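One defensive pattern, again my own sketch rather than a proposal from the paper, is to thread a delegation chain through every call, so that a failure deep in the hierarchy surfaces with an attributable path instead of a vague error from the top-level agent:

```python
class AgentError(RuntimeError):
    """A failure annotated with the delegation chain that produced it."""

def run_agent(name: str, task: str, chain: list, sub=None) -> str:
    chain = chain + [name]  # extend the provenance trail without mutating the caller's copy
    try:
        if sub is not None:
            # Delegate to a sub-agent the user never interacts with directly.
            return sub(task, chain)
        if "unsafe" in task:
            raise ValueError("refused: unsafe request")
        return f"done: {task}"
    except AgentError:
        raise  # already annotated further down the hierarchy
    except Exception as exc:
        # Attribute the failure to the whole chain, not the top-level agent.
        raise AgentError(f"{' -> '.join(chain)}: {exc}") from exc

# A three-level hierarchy: assistant -> planner -> executor.
executor = lambda task, chain: run_agent("executor", task, chain)
planner = lambda task, chain: run_agent("planner", task, chain, sub=executor)

try:
    run_agent("assistant", "do something unsafe", [], sub=planner)
except AgentError as e:
    print(e)  # assistant -> planner -> executor: refused: unsafe request
```

Whatever the real mechanism turns out to be, the principle is the same: if agents can spawn agents, provenance has to travel with the work.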