Kim et al. at DeepMind define agentic tasks as those which have three properties:1

  1. Sustained multi-step interactions with an external environment.
  2. Iterative information gathering under partial observability.
  3. Adaptive strategy refinement based on environmental feedback.

This page has my notes about agentic workflows and how they map down to infrastructure.

Orchestration patterns

DeepMind classifies agentic systems into five motifs: one single-agent configuration and four multi-agent ones:1

  • Single-Agent (SAS): A solitary agent executing all reasoning and acting steps sequentially with a unified memory stream. Quality: because there is no multi-agent coordination, errors do not propagate or amplify across agents.1
  • Independent: Multiple agents working in parallel on sub-tasks without communicating, aggregating results only at the end.
  • Centralized: A “hub-and-spoke” model where a central orchestrator delegates tasks to workers and synthesizes their outputs.
  • Decentralized: A peer-to-peer mesh where agents communicate directly with one another to share information and reach consensus.
  • Hybrid: A combination of hierarchical oversight and peer-to-peer coordination to balance central control with flexible execution.
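As a rough illustration of the independent motif, sub-tasks run in parallel with no inter-agent communication, and results are aggregated only at the end. This is a toy sketch: the `solve` function and the sub-task names are invented stand-ins for real agent loops.

```python
from concurrent.futures import ThreadPoolExecutor

def solve(subtask: str) -> str:
    # Stand-in for one agent's full reason/act loop on its sub-task.
    return f"result({subtask})"

subtasks = ["summarize", "cite", "verify"]

# Independent motif: agents never talk to each other; results are
# only combined once every agent has finished.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(solve, subtasks))  # map preserves input order

report = " | ".join(results)
```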

They quantitatively showed that there is no single best agentic architecture: using bigger models for each agent generally yields better agentic workflow outcomes, but different model families have different optimal architectures. For example, decentralized was best for the GPT-5.2 and Gemini model families at the smallest and largest scales, while Sonnet worked best with single-agent.

Centralized

Agents publish events/messages to a broker; others subscribe by topic, routing rules, or consumer groups.

Features:

  • Topology cost: closer to O(N) integration (each agent integrates with the bus rather than every other agent).
  • Decoupling: high; senders don’t need to know specific recipients.
  • Backpressure & buffering: can be centralized and enforceable (queues, consumer groups, rate limits).
  • Failure domain: broker becomes a critical dependency, mitigated via clustering/replication.
  • Strength: scales to lots of agents, supports fan-out/fan-in, better observability and replay.
  • Quality: Has the lowest error amplification since the orchestrator acts as a “validation bottleneck” and catches errors before they propagate to other agents.1
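The broker pattern above can be sketched with a toy in-memory bus; a real deployment would use Kafka, Redis, RabbitMQ, etc., with durability and consumer groups, none of which this sketch models.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for a message bus. Agents integrate
    with the broker rather than with each other, so integration cost
    stays closer to O(N)."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> handlers

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Fan-out: every subscriber on the topic receives the message;
        # the sender never needs to know who the recipients are.
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
seen = []
broker.subscribe("tasks", lambda m: seen.append(("worker-a", m)))
broker.subscribe("tasks", lambda m: seen.append(("worker-b", m)))
broker.publish("tasks", "analyze logs")
```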

Decentralized

An agent addresses another agent explicitly and sends messages directly (RPC, direct queue, HTTP/gRPC, WebSocket, etc.).

Features:

  • Topology cost: tends toward O(N²); even if you don’t literally maintain N² sockets, you usually accumulate N² addressability and routing cases.
  • Coupling: tight; sender must know who to talk to and often when.
  • Failure domain: failures propagate socially (retries/backoffs across many pairs); debugging becomes distributed whack-a-mole.
  • Strength: low latency, simple mental model for small N, good when interactions are truly sparse and explicit.
  • Quality: Errors propagate across agents more easily, compounding to worse overall output than the centralized or hybrid approaches.1
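Direct addressing can be sketched as agents that hold explicit references to their peers; each pair that needs to talk adds another routing case, which is where the O(N²) accumulation comes from. The agent names here are invented.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.peers = {}   # name -> Agent: the sender must know recipients
        self.inbox = []

    def connect(self, other):
        # Every communicating pair adds an addressability/routing case,
        # so connections tend toward O(N^2) as N grows.
        self.peers[other.name] = other

    def send(self, to, message):
        self.peers[to].inbox.append((self.name, message))

a, b = Agent("a"), Agent("b")
a.connect(b)
a.send("b", "ping")
```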

Shared state

Agents read/write a shared state substrate (DB, KV store, object store). Others react via polling or change streams.

Features:

  • Good for coordination and long-lived workflows via durable state, checkpoints, auditability.
  • Risk of contention and schema/contract drift; a field’s meaning may change over time.
  • Often paired with a bus (state for truth; bus for notification).

Orchestration implementations

Frameworks like Dapr Agents provide the building blocks to implement many of the five agentic architectures described above; for example:

  • Centralized can use Dapr’s state store abstraction (which can be backed by Redis, Postgres, MongoDB)
  • Decentralized can use Dapr’s pub/sub abstraction (which can be backed by Kafka, Redis, RabbitMQ)

Other agentic libraries or frameworks are more prescriptive in how they support building agentic systems.

Decentralized models

I think LangGraph follows a decentralized model. It has the programmer define:

  • Nodes, which contain agent logic. These are functions.
  • Edges, which define how nodes can serve as inputs/outputs for each other. These are also functions.
  • State, which is the shared state that is passed between nodes and edges as the workflow continues.
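Without pulling in LangGraph itself, the node/edge/state split can be mimicked in plain Python. This is a toy sketch of the pattern, not LangGraph's API: `researcher`, `writer`, and `route` are invented stand-ins for real agent logic.

```python
def researcher(state):
    # Node: a function from state to a partial state update.
    return {"notes": state["notes"] + ["found source"]}

def writer(state):
    return {"draft": f"Draft based on {len(state['notes'])} note(s)"}

def route(state):
    # Edge: a function that inspects state and picks the next node.
    return "writer" if state["notes"] else "researcher"

nodes = {"researcher": researcher, "writer": writer}
state = {"notes": [], "draft": None}

current = "researcher"
for _ in range(10):                  # bounded loop in place of an END sentinel
    state.update(nodes[current](state))   # run the node, merge its update
    if current == "writer":
        break
    current = route(state)                # follow the edge
```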

LANL has a “scientific agent ecosystem” called URSA (Universal Research and Scientific Agent) that is built on LangGraph.2

OpenAI suggests doing something similar, where agents pass around a monotonically increasing context.3
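The handoff idea can be sketched as agents that append to a context list that only ever grows, then pass the whole history to the next agent. The agent names and messages here are invented for illustration.

```python
def triage(context):
    context.append({"agent": "triage", "note": "routing to billing"})
    return "billing", context        # handoff: pass the full history along

def billing(context):
    context.append({"agent": "billing", "note": "issued refund"})
    return None, context             # no further handoff

agents = {"triage": triage, "billing": billing}

# The context is monotonically increasing: each agent sees everything
# that happened before it and can only append, never rewrite.
current, context = "triage", [{"agent": "user", "note": "refund please"}]
while current is not None:
    current, context = agents[current](context)
```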

Protocols

Agentic systems are standardizing around a set of protocols that make it possible for, say, an agent that uses Gemini as its model to discover and invoke a tool developed and hosted by a third party.

MCP (Model Context Protocol)

MCP is a protocol developed by Anthropic to allow agents to talk to data sources, tools, and workflows.4 An MCP server exposes tools, context, resources, and prompts that an agent can discover and call.
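On the wire, MCP messages are JSON-RPC 2.0; the spec defines methods such as `tools/list` for discovery and `tools/call` for invocation. A minimal sketch of the request shapes follows; the `search_docs` tool and its arguments are invented.

```python
import json

# Discovery: ask the MCP server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: call a tool by name with arguments matching the input
# schema the server advertised (tool name and arguments invented here).
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "agent orchestration"},
    },
}

wire = json.dumps(call_request)  # what actually goes over the transport
```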

A2A (Agent2Agent)

A2A is a protocol that allows agents to talk to other agents. It was created by Google and is now owned by the Linux Foundation.

UCP (Universal Commerce Protocol)

UCP is a protocol that allows agents to interact with “consumer surfaces” related to shopping.5 It was developed by Google to give agents a standard way to talk to online stores. It is related to the AP2 protocol.

AP2 (Agent Payments Protocol)

AP2 is a protocol designed to make secure payments. It is an extension of A2A.6

Footnotes

  1. Towards a science of scaling agent systems: When and why agent systems work

  2. [2506.22653] URSA: The Universal Research and Scientific Agent

  3. Orchestrating Agents: Routines and Handoffs

  4. What is the Model Context Protocol (MCP)? - Model Context Protocol

  5. Under the Hood: Universal Commerce Protocol (UCP) - Google Developers Blog

  6. AP2 - Agent Payments Protocol Documentation