Microsoft AutoGen structures multi-agent AI around conversational message passing: agents exchange messages, and an LLM decides who speaks next and what happens next. AssistantAgent generates responses and code; UserProxyAgent executes code and requests human input; a GroupChat coordinated by a GroupChatManager runs multi-agent roundtables; nested chats compose complex multi-step workflows. Tool functions registered on agents perform grounded actions, and human-in-the-loop patterns gate critical decisions. AutoGen 0.4+ introduces the autogen-agentchat API with declarative agent configuration. Claude Code generates AutoGen agent configurations, GroupChat workflows, tool registrations, and code-execution setups for production autonomous AI systems.
# CLAUDE.md for AutoGen Projects
## AutoGen Stack
- Version: autogen-agentchat >= 0.4 (new API — not 0.2.x ConversableAgent pattern)
- LLM: model_client with AnthropicChatCompletionClient or OpenAIChatCompletionClient
- Agents: AssistantAgent (generates), UserProxyAgent (executes), CodeExecutorAgent
- Group: RoundRobinGroupChat, SelectorGroupChat (LLM-directed), or Swarm
- Termination: TextMentionTermination("TERMINATE"), MaxMessageTermination, or combined
- Code execution: DockerCommandLineCodeExecutor (production), LocalCommandLineCodeExecutor (dev)
- Human in loop: UserProxyAgent, which prompts for console input by default (pass input_func to customize; human_input_mode belongs to the legacy 0.2 API)
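A typical install for the stack above (the exact extras names are a best guess and may vary by autogen-ext release):

```shell
pip install -U "autogen-agentchat>=0.4" "autogen-ext[openai,anthropic,docker]"
```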
## Basic Two-Agent Conversation
```python
# agents/two_agent_chat.py — AssistantAgent + CodeExecutorAgent
from autogen_agentchat.agents import AssistantAgent, CodeExecutorAgent
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_ext.models.anthropic import AnthropicChatCompletionClient


async def build_coding_agents() -> RoundRobinGroupChat:
    """Build a coding assistant + executor pair."""
    model_client = AnthropicChatCompletionClient(
        model="claude-sonnet-4-6",
        # api_key is read from the ANTHROPIC_API_KEY env var
    )
    # AssistantAgent: generates code and analysis
    coder = AssistantAgent(
        name="Coder",
        model_client=model_client,
        system_message="""You are an expert Python developer.
When asked to solve a problem:
1. Write clean, runnable Python code
2. Include error handling
3. Print results clearly
4. When done, say TERMINATE""",
    )
    # CodeExecutorAgent: runs generated code in a Docker sandbox.
    # The executor must be started before the team runs.
    code_executor = DockerCommandLineCodeExecutor(
        image="python:3.12-slim",
        timeout=60,
        work_dir="/tmp/autogen_work",
    )
    await code_executor.start()
    executor = CodeExecutorAgent(
        name="Executor",
        code_executor=code_executor,
    )
    # Termination condition: stop when "TERMINATE" appears, or after 20 messages
    termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(20)
    team = RoundRobinGroupChat(
        participants=[coder, executor],
        termination_condition=termination,
    )
    return team


async def solve_problem(task: str) -> str:
    """Run a coding task through the two-agent team."""
    team = await build_coding_agents()
    result = await team.run(task=task)
    # The last message in the TaskResult is the final response
    return result.messages[-1].content
```
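The team's control flow is simple to hold in your head: RoundRobinGroupChat alternates speakers in a fixed order until a termination condition fires. A plain-Python sketch of that loop (illustrative only, not the AutoGen implementation):

```python
from itertools import cycle


def run_round_robin(agents, respond, max_messages=20, stop_word="TERMINATE"):
    """Illustrative control flow of RoundRobinGroupChat: speakers take
    turns in fixed order until the stop word appears in a message or the
    message cap is reached.

    agents: list of speaker names; respond(name, history) -> reply text.
    """
    history = []
    for name in cycle(agents):
        if len(history) >= max_messages:
            break
        message = respond(name, history)
        history.append((name, message))
        if stop_word in message:
            break
    return history
```

This is why the `MaxMessageTermination` fallback matters: if the model never emits the stop word, the cap is the only thing that ends the loop.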
## GroupChat with Multiple Specialists
```python
# agents/group_chat.py — multi-agent collaboration
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
from autogen_agentchat.teams import SelectorGroupChat
from autogen_ext.models.anthropic import AnthropicChatCompletionClient
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def build_research_team() -> SelectorGroupChat:
    """Multi-agent team: researcher, analyst, writer, reviewer."""
    claude_client = AnthropicChatCompletionClient(model="claude-sonnet-4-6")
    gpt_client = OpenAIChatCompletionClient(model="gpt-4o")
    researcher = AssistantAgent(
        name="Researcher",
        model_client=claude_client,
        system_message="""You are a research specialist.
Your job: gather comprehensive information from available sources,
cite sources, and identify key data points.
Focus on accuracy over speed.""",
    )
    analyst = AssistantAgent(
        name="Analyst",
        model_client=claude_client,
        system_message="""You are a data analyst.
Your job: take research findings and extract key insights,
identify patterns, compare options, and make recommendations
supported by the data.""",
    )
    writer = AssistantAgent(
        name="Writer",
        model_client=gpt_client,
        system_message="""You are a technical writer.
Your job: synthesize research and analysis into clear,
well-structured content. Write for developer audiences.
Use concrete examples.""",
    )
    reviewer = AssistantAgent(
        name="Reviewer",
        model_client=claude_client,
        system_message="""You are a critical reviewer.
Your job: review the final output for accuracy, completeness,
and clarity. Point out issues. When satisfied, say APPROVED.""",
    )
    # SelectorGroupChat: an LLM chooses who speaks next
    termination = TextMentionTermination("APPROVED") | MaxMessageTermination(30)
    team = SelectorGroupChat(
        participants=[researcher, analyst, writer, reviewer],
        model_client=claude_client,  # the selector uses this to choose the next speaker
        termination_condition=termination,
        selector_prompt="""Select the next speaker based on the conversation flow:
- Choose Researcher when facts or data are needed
- Choose Analyst when patterns or insights need extraction
- Choose Writer when it's time to draft or refine the output
- Choose Reviewer when a draft is ready for review

Current conversation: {history}
Speakers: {roles}
Next speaker:""",
    )
    return team


async def research_topic(topic: str) -> str:
    team = await build_research_team()
    result = await team.run(
        task=f"Research and write a comprehensive technical overview of: {topic}"
    )
    # The last message is the Reviewer's APPROVED; the one before it is the final draft
    return result.messages[-2].content
```
## Tools and Function Calling
```python
# agents/tool_agents.py — agents with registered tools
import os

import httpx
from autogen_agentchat.agents import AssistantAgent
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_api_token() -> str:
    # Placeholder: swap in your own secret/token lookup
    return os.environ["ORDERS_API_TOKEN"]


# Tool functions: plain async Python functions. FunctionTool derives the
# parameter schema from the signature and docstring.
async def search_orders(customer_id: str, status: str = "all") -> str:
    """Search orders in the database for a customer.

    Args:
        customer_id: The customer's ID
        status: Filter by status: all, pending, shipped, delivered

    Returns:
        JSON string with matching orders
    """
    async with httpx.AsyncClient() as client:
        params = {"customer_id": customer_id}
        if status != "all":
            params["status"] = status
        response = await client.get(
            "https://api.internal.com/orders",
            params=params,
            headers={"Authorization": f"Bearer {get_api_token()}"},
        )
        return response.text


async def update_order_status(order_id: str, new_status: str) -> str:
    """Update the status of an order.

    Args:
        order_id: The order's ID
        new_status: New status: processing, shipped, cancelled

    Returns:
        Success message or error
    """
    valid_statuses = {"processing", "shipped", "cancelled"}
    if new_status not in valid_statuses:
        return f"Error: invalid status. Must be one of: {', '.join(valid_statuses)}"
    async with httpx.AsyncClient() as client:
        response = await client.patch(
            f"https://api.internal.com/orders/{order_id}",
            json={"status": new_status},
            headers={"Authorization": f"Bearer {get_api_token()}"},
        )
    if response.status_code == 200:
        return f"Order {order_id} updated to {new_status}"
    return f"Error: {response.status_code} - {response.text}"


def build_support_agent() -> AssistantAgent:
    """Build a customer support agent with order tools."""
    tools = [
        FunctionTool(search_orders, description="Search customer orders"),
        FunctionTool(update_order_status, description="Update order status"),
    ]
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="SupportAgent",
        model_client=model_client,
        tools=tools,
        system_message="""You are a customer support agent with access to the order system.
Use the available tools to help customers with their orders.
Always verify information before making changes.
When the issue is resolved, say RESOLVED.""",
    )
    return agent
```
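Under the hood, each tool invocation is a lookup-parse-execute step: the model emits a tool name plus JSON arguments, the agent runs the matching registered function, and the string result is fed back into the conversation. A simplified synchronous sketch of that step (the real FunctionTool path is async and schema-validated):

```python
import json


def execute_tool_call(registry, name, arguments_json):
    """Simplified sketch of one tool-execution step: look up the
    registered function, parse the model-supplied JSON arguments, and
    return a string result. Errors become tool output rather than raised
    exceptions, so the model gets a chance to recover."""
    func = registry.get(name)
    if func is None:
        return f"Error: unknown tool {name!r}"
    try:
        kwargs = json.loads(arguments_json)
        return str(func(**kwargs))
    except Exception as exc:
        return f"Error: {exc}"
```

Returning errors as tool output (as `update_order_status` does above) is the key design choice: the model can read the error and retry with corrected arguments instead of crashing the run.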
## Human-in-the-Loop
```python
# agents/human_loop.py — gate decisions through human approval
from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
from autogen_agentchat.conditions import HandoffTermination, MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.anthropic import AnthropicChatCompletionClient


async def build_approval_workflow() -> RoundRobinGroupChat:
    """Workflow requiring human approval for critical actions."""
    model_client = AnthropicChatCompletionClient(model="claude-sonnet-4-6")
    agent = AssistantAgent(
        name="ProcessingAgent",
        model_client=model_client,
        handoffs=["HumanApprover"],  # can hand off to the human
        system_message="""Process refund requests.
For refunds under $50: approve automatically.
For refunds $50-$500: prepare a recommendation and hand off to HumanApprover.
For refunds over $500: always hand off to HumanApprover with a detailed analysis.
Use the handoff tool to escalate to HumanApprover.""",
    )
    # UserProxyAgent handles the human interaction
    human = UserProxyAgent(name="HumanApprover")
    # Stop when the agent hands off to the human (or after 10 messages),
    # so the application can collect input and resume the run.
    termination = HandoffTermination(target="HumanApprover") | MaxMessageTermination(10)
    team = RoundRobinGroupChat(
        participants=[agent, human],
        termination_condition=termination,
    )
    return team


async def process_refund_request(order_id: str, amount: float, reason: str) -> dict:
    """Process a refund with human approval for large amounts."""
    team = await build_approval_workflow()
    task = f"""Process this refund request:
Order ID: {order_id}
Amount: ${amount:.2f}
Reason: {reason}
Assess whether it can be auto-approved; otherwise prepare a recommendation."""
    result = await team.run(task=task)
    return {
        "messages": [m.content for m in result.messages],
        "stop_reason": result.stop_reason,
    }
```
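The dollar-threshold policy above lives in the system prompt, so the model applies it probabilistically; for pre-routing or unit tests it can help to mirror it deterministically. A hypothetical helper (the function and return labels are illustrative, not part of the AutoGen API):

```python
def route_refund(amount: float) -> str:
    """Deterministic mirror of the prompt policy.

    Under $50: auto-approve. $50-$500: hand off with a recommendation.
    Over $500: hand off with a detailed analysis.
    """
    if amount < 50:
        return "auto_approve"
    if amount <= 500:
        return "handoff_with_recommendation"
    return "handoff_with_analysis"
```

A pre-check like this can skip the LLM entirely for small refunds and makes the escalation boundaries testable.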
For the CrewAI alternative with role-based agent orchestration and explicit task definitions rather than AutoGen’s conversational flow, see the CrewAI guide for crew and task configuration. For the LangGraph approach with explicit state machine agent graphs rather than AutoGen’s emergent conversation routing, the LangChain guide covers agent loops and tool nodes. The Claude Skills 360 bundle includes AutoGen skill sets covering GroupChat, tool registration, and human-in-the-loop workflows. Start with the free tier to try AutoGen agent generation.