When One Agent Isn’t Enough: Experiments with Multi-Agent AI
LLM agents are powerful, but no single one can cover every real-world task. This session shows what happens when you orchestrate multiple agents instead of relying on just one. We’ll explore four concrete experiments: running multiple agents in parallel with LangChain4j and Spring AI; manually coordinating Gemini CLI, Codex CLI, and Claude Code on the same codebase; wrapping Gemini in an MCP server so it can be invoked by other tools; and creating a Claude Code sub-agent that delegates to Gemini while conserving context-window tokens. You’ll see where orchestration pays off, where it gets messy, and what it suggests about the future of applied multi-agent AI.
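
To give a flavor of the first experiment, here is a minimal, framework-agnostic sketch of the fan-out pattern in Java: the Agent interface, the placeholder agents, and the sample task below are hypothetical stand-ins for illustration, not the actual LangChain4j or Spring AI APIs used in the session.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelAgents {

    // Hypothetical stand-in for a framework-specific chat/agent abstraction.
    @FunctionalInterface
    interface Agent {
        String answer(String prompt);
    }

    public static void main(String[] args) {
        // Placeholder agents; in practice each would wrap a different model or tool.
        List<Agent> agents = List.of(
                prompt -> "[agent-a] " + prompt,
                prompt -> "[agent-b] " + prompt,
                prompt -> "[agent-c] " + prompt);

        String task = "Summarize the open issues in this repository.";
        ExecutorService pool = Executors.newFixedThreadPool(agents.size());
        try {
            // Fan the same task out to every agent concurrently.
            List<CompletableFuture<String>> futures = agents.stream()
                    .map(agent -> CompletableFuture.supplyAsync(() -> agent.answer(task), pool))
                    .toList();

            // Join the answers; a real orchestrator might rank, merge, or vote instead.
            futures.forEach(f -> System.out.println(f.join()));
        } finally {
            pool.shutdown();
        }
    }
}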


