Tutorial: Headless Claude Code Multi-Agent Minecraft Bots

Claude Code and Codex CLI both support headless non-interactive modes that turn them into scriptable, autonomous agents. This tutorial covers the -p flag, --system-prompt injection, model overrides, and a custom multi-agent bridge that routes messages between concurrent Claude and Codex instances. It closes with a live Minecraft demonstration where two AI bots cooperate via natural language chat commands.



Running Claude Code and Codex CLI as Headless AI Agents Inside Minecraft

Claude Code and Codex CLI both support a non-interactive mode that lets you invoke them as autonomous, scriptable agents — no human in the loop required. This tutorial walks through the core headless flags for both CLIs, a custom bridge for agent-to-agent messaging, and a Minecraft integration that deploys cooperative AI bots on a live server. By the end, you’ll know how to wire up multiple headless instances, route messages between them, and drop them into any environment that accepts programmatic input.

  1. Run claude -p "<query>" to invoke Claude Code non-interactively. The -p flag executes the query, prints the result, and exits — no session, no prompt.
Official Claude Code CLI reference: the -p flag is the key to non-interactive headless operation
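As a minimal sketch of the invocation above — the helper names are mine, and it assumes the `claude` binary is on your PATH:

```python
import shutil
import subprocess

def claude_headless(query: str) -> list[str]:
    """Argv for a one-shot headless Claude Code call (the -p pattern above)."""
    return ["claude", "-p", query]

def run_headless(query: str) -> str:
    """Execute the call and return stdout; needs the `claude` CLI on PATH."""
    if shutil.which("claude") is None:
        raise RuntimeError("claude CLI not found on PATH")
    result = subprocess.run(claude_headless(query),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

Because `-p` prints and exits, this composes cleanly with cron jobs, CI steps, or any script that can capture stdout.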
  2. Run codex --yolo exec "<query>" for a headless Codex CLI instance. The --yolo flag disables approval prompts and sets full sandbox access; the response includes session metadata, token counts, and model info alongside the output.
claude -p for headless Claude Code; codex --yolo exec for headless Codex CLI — both confirmed working

Warning: this step may differ from current official documentation — see the verified version below.
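The equivalent Codex invocation, sketched as an argv builder (the function name is illustrative; flag order follows the step above):

```python
def codex_headless(query: str) -> list[str]:
    # --yolo skips approval prompts; `exec` runs one query and exits.
    # Per the step above, stdout carries session metadata and token counts
    # alongside the answer, so downstream scripts may need to filter it.
    return ["codex", "--yolo", "exec", query]
```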

  3. Run opencode run "<query>" for a headless OpenCode instance using the same non-interactive pattern. Model selection depends on your OpenCode configuration — the demo runs against GLM.
  4. Inject a system prompt inline with claude -p --system-prompt "<text>" "<query>" to assign a persona or behavioral constraint at invocation time. Alternatively, pass a file path — --system-prompt /path/to/prompt.md — to keep agent personas in version-controlled files rather than shell strings.

  5. Override the model per call with claude -p --model claude-haiku-4-5 "<query>". Use Haiku for high-frequency calls to control cost; swap to Opus for tasks that demand deeper reasoning.

  6. Launch multiple simultaneous headless instances — the demonstration runs two Claude Code and two Codex agents concurrently. Each process is independent; coordination requires a dedicated messaging layer.

  7. Connect all instances to a custom Headless Bridge running at localhost:5173. The bridge provides a shared chat interface plus /manager and /monitor panels. Agents register on startup and appear as named participants.

The custom Headless Bridge UI — your control panel for routing messages between Claude and Codex agents
  8. Address agents with @all, @claude-1, or @codex-1 tags in the bridge chat. The bridge resolves the target and relays the message; an A2A relay budget caps agent-to-agent exchanges to prevent runaway loops.
Agent-to-agent relay in action: Claude1 addresses Codex1 directly and the bridge routes the message
The /manager panel: spawn, monitor, and kill agents in the multi-agent pool from one screen
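The bridge itself is a custom build, but the @-addressing and relay-budget behavior described above reduces to routing logic along these lines — agent names and the budget mechanism here are illustrative, not the bridge's actual implementation:

```python
def route(message: str, sender: str, agents: list[str],
          budget: dict[str, int]) -> list[str]:
    """Resolve an @-tag to recipient names, enforcing an A2A relay budget.

    `budget` maps agent name -> remaining agent-to-agent relays; human
    senders are simply absent from it, so they are never throttled.
    """
    tag, _, _ = message.partition(" ")
    if not tag.startswith("@"):
        return []
    if tag == "@all":
        targets = [a for a in agents if a != sender]
    else:
        targets = [a for a in agents if a == tag[1:]]
    # Consume one relay credit per delivery when the sender is an agent.
    if sender in budget:
        allowed = min(len(targets), budget[sender])
        budget[sender] -= allowed
        targets = targets[:allowed]
    return targets
```

Once an agent exhausts its budget, its @all fan-outs silently drop — one simple way to stop two chatty agents from relaying forever.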
  9. Track per-agent token usage and estimated session cost from the /monitor dashboard. The recorded session — four agents, 49 turns — totalled $1.67, with Claude Sonnet 4.6 at $1.08 and Codex on GPT-4.5-mini at $0.06.

  10. Start a local Minecraft Java Edition server and join via localhost:3001.

  11. Launch Claude Code in a persistent warm loop as a bot, and keep Codex resident via an MCP server. Both agents read the in-game chat channel for instructions.

Warning: this step may differ from current official documentation — see the verified version below.
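A warm loop of the kind described above can be sketched like this — the transport callbacks are stand-ins for whatever the Minecraft bridge actually exposes, and only the loop shape comes from the tutorial:

```python
import subprocess
import time

def claude_reply(prompt: str) -> str:
    """One headless turn; requires the `claude` CLI on PATH."""
    out = subprocess.run(["claude", "-p", prompt],
                         capture_output=True, text=True)
    return out.stdout.strip()

def warm_loop(read_chat, post_chat, is_for_me, invoke=claude_reply,
              poll_s=1.0, max_turns=None):
    """Keep the bot resident: poll chat, answer addressed lines, repeat.

    read_chat yields new chat lines, post_chat writes a reply back, and
    is_for_me filters lines addressed to this bot; all three are
    placeholders for the bridge's real transport.
    """
    turns = 0
    while max_turns is None or turns < max_turns:
        for line in read_chat():
            if is_for_me(line):
                post_chat(invoke(line))
        turns += 1
        time.sleep(poll_s)
```

Making `invoke` injectable also lets you swap in a stub while testing the loop without burning tokens.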

  12. Issue natural language commands in Minecraft chat using @team or @agentname prefixes — navigate to coordinates, gather resources, or meet at a landmark. Agents parse the chat, execute tasks, and report back through the same channel.
Claude and Codex agents meet at the agreed landmark — two AI agents cooperating autonomously inside Minecraft
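Parsing those chat prefixes reduces to a small helper — a guess at the dispatch logic, since the tutorial doesn't show the agents' parsing code:

```python
def parse_chat(line: str, agent: str) -> tuple[bool, str]:
    """Split '@team gather wood' or '@claude-1 goto 100 64 -20' into
    (addressed_to_me, task). Prefixes mirror step 12; details are a guess."""
    prefix, _, task = line.partition(" ")
    if prefix == "@team" or prefix == f"@{agent}":
        return True, task
    return False, ""
```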

How does this compare to the official docs?

The -p flag, --system-prompt, and --model overrides are all documented, but the bridge application, warm-loop persistence pattern, and MCP-based Codex residency are custom implementations built on top of the CLIs — and the official documentation draws a precise line around what headless mode actually guarantees on its own.

Here’s What the Official Docs Show

The tutorial’s overall architecture is sound, and the official documentation fills in one meaningful layer the video skips entirely: subscription requirements for running concurrent headless instances. Where official CLI reference docs weren’t accessible during screenshot capture, those steps are clearly flagged so you can verify them independently before building on top of them.

Step 1 — Authenticate before invoking claude -p

Account sign-in via Google or email is required before any Claude Code usage — including headless CLI calls. The tutorial assumes a configured environment, but this is the prerequisite you’ll hit immediately on a fresh machine or CI runner.

📄 Claude.ai sign-in page — authentication is required before any Claude Code usage, including headless CLI invocation

No official documentation was found for the -p flag syntax —
proceed using the video’s approach and verify independently.

Step 2 — codex --yolo exec syntax

No official documentation was found for this step —
proceed using the video’s approach and verify independently.

Step 3 — opencode run syntax

No official documentation was found for this step —
proceed using the video’s approach and verify independently.

Steps 4–5 — --system-prompt inline and file-path variants; --model override

No official documentation was found for these steps —
proceed using the video’s approach and verify independently.

Step 6 — Running multiple simultaneous headless instances

This is where subscription tier becomes a real consideration the video doesn’t surface. The claude.ai pricing page confirms Claude Code is available on Pro ($17/month) and Max (from $100/month) — the Free plan excludes it entirely. Running two or more concurrent headless Claude Code instances is an intensive workload, and Anthropic explicitly marks Max as the recommended tier for heavy Claude Code usage.

📄 Claude.ai pricing — Pro and Max plans include Claude Code; the Max plan is the officially recommended tier for intensive Claude Code workloads

Steps 7–9 — Headless Bridge, @-addressing, and token monitoring

No official documentation was found for these steps —
proceed using the video’s approach and verify independently.

Steps 10–12 — Minecraft Java Edition server integration and in-game commands

No official documentation was found for these steps —
proceed using the video’s approach and verify independently.

The MCP client bridge architecture

The video’s approach here matches the current docs exactly. The official MCP architecture diagram at modelcontextprotocol.io lists Claude Code as a recognized MCP client, confirming that connecting Claude Code to external systems via a custom MCP server is a fully documented and supported pattern.

📄 MCP architecture diagram — Claude Code is a documented MCP client connecting to external tools and data sources via the standardized protocol

One useful addition: Anthropic’s documented MCP use cases reference Claude Code generating web apps from Figma designs — Minecraft doesn’t appear in any official example. The tutorial’s Minecraft MCP server is a legitimate custom application of a general pattern, not a documented integration.

📄 MCP docs: official use cases reference Figma and productivity tools — the tutorial’s Minecraft integration is a custom, valid application of the standard

Building a custom MCP server to bridge AI agents and external systems is explicitly supported via the “Build servers” documentation pathway, which defines exposing your own data and tools as a first-class developer workflow — and is the documented foundation the tutorial’s bridge is built on.

📄 MCP docs: ‘Build servers’ pathway confirms that creating custom MCP integrations to bridge agents and external systems is an officially documented approach
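Conceptually, a custom MCP server exposes named tools that an agent can call. Stripped of the protocol layer (JSON-RPC framing and schemas, which the official SDKs handle), the core is just a registry like this sketch — none of which is the SDK's actual API:

```python
# Name -> function registry standing in for an MCP server's tool table.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def send_chat(message: str) -> str:
    # In the tutorial's bridge this would post into Minecraft chat;
    # here it just echoes so the sketch stays self-contained.
    return f"[chat] {message}"

def dispatch(name: str, **kwargs):
    """Route an agent's tool call to the registered implementation."""
    return TOOLS[name](**kwargs)
```

The real server would advertise each tool's schema to connected clients such as Claude Code, which is exactly the "Build servers" pathway referenced above.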
  1. Sign in – Claude — Authentication entry point for Claude Code, including desktop app download and plan eligibility confirmation
  2. What is the Model Context Protocol (MCP)? – Model Context Protocol — Official MCP architecture diagram, use cases, and the “Build servers” pathway that underpins the tutorial’s custom agent bridge
