
# Which Mode Should I Use?

cesium-mcp gives you three ways to wire AI into CesiumJS, all sharing the same cesium-mcp-bridge core (60+ tools). This page picks one for you in 30 seconds.

## Decision tree

```
What are you trying to do?

├─ Just want to try it / personal demo / no backend
│   └─→ Path 0: Browser Agent (recommended)

├─ Embedding an AI assistant into an existing web app
│   ├─ Want full control over prompts, model, and tool-call logs
│   │   └─→ Path 1: function calling (recommended)
│   └─ Want to reuse existing MCP client tooling
│       └─→ Path 2: MCP runtime + HTTP transport

└─ Calling Cesium from Claude Desktop / Cursor / Dify
    └─→ Path 2: MCP runtime (stdio transport)
```

## Side-by-side

| Aspect | Path 0: Browser Agent | Path 1: function calling | Path 2: MCP runtime |
| --- | --- | --- | --- |
| Backend | Static host only | None | Node.js process |
| AI model | Any OpenAI-compatible API | Any OpenAI / Anthropic / local | Decided by MCP client |
| API key exposure | Browser, needs proxy | Browser, needs proxy | Managed by MCP client |
| First-deploy cost | Lowest (fork + paste key) | Medium (write agent loop) | Medium-high (install client + stdio config) |
| Visibility into AI calls | Full | Full | Depends on MCP client |
| Typical use | Personal projects, POC, teaching demos | Existing product gaining AI | Productivity tools with Claude / Cursor |
| Example | examples/browser-agent | The agent loop in examples/browser-agent | packages/cesium-mcp-runtime |

## Why not just MCP?

MCP solves the question of how an AI client discovers and calls external capabilities. But when Cesium itself runs in the browser, the model can hit the bridge through plain function calling: one less IPC layer, one less protocol wrapper, and far less to debug.

The bridge is protocol-agnostic on purpose: wrap it in MCP, or just import it directly. Whatever fits.

## Path details

### Path 0: Browser Agent

See examples/browser-agent or try the live demo.

Good for:

- Solo developers wanting a quick taste of "AI + Cesium"
- Teaching, demos, blog companion projects
- Zero server: deploys to Cloudflare Pages / Vercel / GitHub Pages (key handling sketched below)
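
The table above flags the one catch: with no backend, the model key sits in the browser, so anything public needs a thin proxy that injects the key server-side. Here is a minimal sketch as a Cloudflare Pages Function; the file path, the route, and the `OPENAI_API_KEY` secret name are illustrative assumptions, not part of cesium-mcp:

```js
// functions/api/chat.js (hypothetical path): a Cloudflare Pages Function.
// The browser POSTs OpenAI-style chat requests to /api/chat;
// the key is read from a Pages secret and never reaches the client.
export async function onRequestPost({ request, env }) {
  return fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${env.OPENAI_API_KEY}`,
    },
    body: request.body,
  });
}
```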

### Path 1: function calling embed

Use the bridge as a regular browser SDK:

```js
import { CesiumBridge } from 'cesium-mcp-bridge';

const bridge = new CesiumBridge(viewer);
const tools = bridge.getToolsSchema('openai'); // tool schemas in OpenAI function-calling format

const response = await yourLLM.chat({ messages, tools });

for (const call of response.tool_calls ?? []) {
  await bridge.execute(call.name, call.params);
}
```
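
One round of tool calls is rarely the end of the conversation: the model usually needs the results back before it can answer. A sketch of a complete turn, still using the hypothetical `yourLLM` client from above, and assuming OpenAI-style message shapes and that `bridge.execute` returns something JSON-serializable:

```js
// Run one agent turn: execute tool calls, feed results back,
// and repeat until the model answers in plain text.
async function agentTurn(messages) {
  const response = await yourLLM.chat({ messages, tools });
  if (!response.tool_calls?.length) return response.content;

  messages.push(response); // the assistant message that requested the tools
  for (const call of response.tool_calls) {
    const result = await bridge.execute(call.name, call.params);
    messages.push({
      role: 'tool',
      tool_call_id: call.id, // assumes OpenAI-style tool-call ids
      content: JSON.stringify(result),
    });
  }
  return agentTurn(messages);
}
```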

Good for:

- Existing Cesium product adding an AI assistant
- Teams that want custom prompt engineering or tool selection
- Using non-OpenAI models (DeepSeek, Zhipu, Qwen, etc.)

### Path 2: MCP runtime

```bash
npx cesium-mcp-runtime                                # stdio
npx cesium-mcp-runtime --transport http --port 3000   # HTTP
```
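
For Claude Desktop, registration goes in `claude_desktop_config.json` (Settings → Developer → Edit Config). A sketch; the server name `cesium` is just a label you pick:

```json
{
  "mcpServers": {
    "cesium": {
      "command": "npx",
      "args": ["cesium-mcp-runtime"]
    }
  }
}
```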

Good for:

- Claude Desktop / Cursor / VS Code with MCP support
- Workflow platforms like Dify / n8n
- Exposing Cesium to third-party AI apps (client sketch below)
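
With the HTTP transport, any MCP client can reach the runtime programmatically. A sketch using the official `@modelcontextprotocol/sdk`; the `/mcp` endpoint path is an assumption, and whether the runtime speaks the streamable HTTP transport (rather than, say, SSE) is something to confirm in its docs:

```js
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

// Assumes the server was started with:
//   npx cesium-mcp-runtime --transport http --port 3000
const client = new Client({ name: 'cesium-demo', version: '0.1.0' });
await client.connect(
  new StreamableHTTPClientTransport(new URL('http://localhost:3000/mcp')),
);

// Discover the bridge's tools through standard MCP.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```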

## Still unsure?

Pick Path 0. It costs nothing, runs in 10 minutes, and once it's up you'll know what you actually want.
