MCP Server (AI)
Connect Claude, Cursor, Windsurf, Codex, and other AI tools to your Alloy mission data
Alloy exposes an MCP (Model Context Protocol) server that lets you connect your AI coding and analysis tools directly to your mission data. Query missions, search instances, view reports, and more — right from your preferred AI tool.
Some tools may not be available depending on how your Alloy account is set up and which features are enabled for your org. Your AI tool will only see the tools that apply to you.
Setting up
The MCP server URL:
https://aus.usealloy.ai/mcp

Connect it to your AI tool of choice using Streamable HTTP transport:
Claude Code
Add the Alloy MCP server, then authenticate via the /mcp menu inside a session:
claude mcp add alloy --transport http https://aus.usealloy.ai/mcp

This opens a browser-based OAuth flow. Tokens are stored locally and refreshed automatically.
If you need to reauthenticate later, run /mcp again and select the Alloy server.
See the Claude Code MCP docs for more details.
Claude Desktop
Remote MCP servers are added via the Connectors UI, not the JSON config file.
- Open Settings → Connectors (or click + in chat → Connectors → Manage connectors)
- Click Add custom connector
- Paste the URL:
https://aus.usealloy.ai/mcp
- Click Add and complete the authentication prompt
The Alloy connector's tools will then be available in your conversations. You can toggle it on/off per conversation from the + menu.
For Team and Enterprise plans, an org owner must first add the connector in Organization settings → Connectors before members can use it.
See the Claude Desktop custom connectors guide for more details.
Codex
Add the MCP server using the Codex CLI:
codex mcp add alloy --url https://aus.usealloy.ai/mcp

Then log in to authenticate with Alloy:
codex mcp login alloy

This opens the Alloy consent screen in your browser. Once authenticated, restart Codex (Desktop or CLI) to pick up the connection.
If you get logged out at any point, re-run codex mcp login alloy to reauthenticate.
Cursor
Add to .cursor/mcp.json in your project (or ~/.cursor/mcp.json for global access):
{
"mcpServers": {
"alloy": {
"url": "https://aus.usealloy.ai/mcp"
}
}
}

Restart Cursor after saving. When you first use an Alloy tool, you'll be prompted to authenticate in your browser.
You can verify the connection in the Output panel → MCP dropdown.
See the Cursor MCP docs for more details.
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json (or manage via Windsurf Settings → Cascade → MCP Servers):
{
"mcpServers": {
"alloy": {
"serverUrl": "https://aus.usealloy.ai/mcp"
}
}
}

Restart Windsurf after saving. Authentication is handled via a browser-based OAuth flow on first use.
See the Windsurf MCP docs for more details.
Other tools
Paste the following into your AI tool for help getting set up:
I need to connect to an MCP server. Here are the details:
- Server URL: https://aus.usealloy.ai/mcp
- Transport: Streamable HTTP (the URL is a single HTTP endpoint that accepts JSON-RPC POST requests)
- Authentication: OAuth 2.0 (browser-based consent flow — no API key or token needed upfront)
- Server name: "alloy"
Please configure this MCP server in my client. The server follows
the MCP Streamable HTTP transport specification. On first use it
will redirect me to a browser to authenticate with my Alloy
credentials via OAuth. No manual token setup is required.

The MCP server uses the same authentication as the Alloy web app. When you first connect, you'll be prompted to authenticate in your browser; the connection then persists for your session.
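For the curious, the transport is simple to reason about: Streamable HTTP means one URL accepting JSON-RPC 2.0 POST bodies. The sketch below only builds a request body — the endpoint URL is real, but real clients also attach the OAuth bearer token obtained from the browser consent flow, which is omitted here:

```python
import json

# The single MCP endpoint; every request is a JSON-RPC 2.0 POST to this URL.
MCP_URL = "https://aus.usealloy.ai/mcp"

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body for one POST to the MCP endpoint."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return body

# After the MCP handshake, a client typically asks which tools exist:
payload = jsonrpc_request("tools/list")
print(json.dumps(payload))
```

You never need to hand-craft these requests — your AI tool's MCP client does it for you — but this is the shape that travels over the wire.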
What you can do
Read your data
- list_missions — search and list missions across your org
- get_mission_summary — full mission details including key events, metrics, and comments
- get_available_filters — discover what metadata filters exist for your missions
- get_plot_data — data points for a specific plot in a mission summary
- get_map_data — trajectory and geospatial data points
- search_instances — semantic image, log, or similarity search across your instance library
- get_instance — instance metadata, logs, images, timeseries, and detections
- list_scenarios / get_scenario — scenario list and match details
- list_reports / get_report / list_scheduled_reports — read reports and check schedules
- query_data — ask a natural language question about your data; the agent writes SQL and returns results
- search_docs — search Alloy product documentation
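Your client invokes each of these via a standard MCP tools/call request. A sketch of one such call body — the argument names ("query", "limit") are illustrative only; clients discover each tool's real input schema from the server's tools/list response:

```python
# Illustrative tools/call body for one of the read tools. The "arguments"
# keys here are examples, not a documented schema.
call_body = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_instances",
        "arguments": {"query": "navigation errors", "limit": 10},
    },
}
```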
Create & edit (focused, single-shot)
Each of these runs a small focused agent — supply IDs you already know to keep them fast. All finish well inside any AI tool's request timeout. Edit tools are owner-only.
- create_report(prompt, mission_id?, mission_ids?) — compose a new shareable report from a brief
- update_report(report_id, prompt) — edit an existing report you own
- create_scheduled_report(prompt) — set up a recurring report (daily / weekly / fortnightly)
- update_scheduled_report(schedule_id, prompt) — edit a recurring schedule you own
Treat these as zero-reasoning tools. They are deterministic compose-and-write passes on a tight budget. They cannot investigate, decide, hunt anomalies, or pick thresholds — if you ask them to, they will burn the budget exploring and time out without writing anything. Do the analysis upstream with query_data (or alloy_request_start for genuinely open-ended work) and then call these with concrete inputs.
Do this:
- "Build a report titled 'Mission X Summary' for mission_id m_abc with sections: Overview, Metrics, Anomalies. The metrics are cpu=42%, latency_p95=87ms." → create_report works great
- "Look at all our missions this quarter and write a report on the worst performers." → use alloy_request_start (or break it down: query_data first to find the worst, then create_report with the names and numbers)
- "Add this paragraph to report rpt_xyz: …" → update_report works great
- "Make this report better." → don't. Either decide the change yourself, or ask alloy_request_start to do the analysis and call update_report internally.
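The "analysis upstream, concrete inputs downstream" pattern can be sketched in a few lines. In practice the metrics would come from an earlier query_data call; here they are hard-coded so the compose step has zero reasoning left to do (the mission_id and metric names are from the example above, not real data):

```python
# Precomputed results stand in for an earlier query_data call.
metrics = {"cpu": "42%", "latency_p95": "87ms"}

# Fold the concrete numbers into the brief so create_report only composes.
prompt = (
    "Build a report titled 'Mission X Summary' for mission m_abc with "
    "sections: Overview, Metrics, Anomalies. Metrics: "
    + ", ".join(f"{k}={v}" for k, v in metrics.items())
)

# Concrete arguments for the create_report tool -- no open questions left.
create_report_args = {"prompt": prompt, "mission_id": "m_abc"}
```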
Long-running tasks
For anything that needs multi-step exploration, deeper reasoning, or doesn't fit a single-shot tool — most notably creating or editing scenarios, where the scanner's detection strategy benefits from sample-mission validation — use the async pair:
- alloy_request_start(prompt, timeout_seconds?) — kick off a multi-step task; returns a task_id immediately. Defaults to a 25-minute server budget (matching the in-app agent's subagent cap); the agent self-terminates with whatever it has when the cap is hit.
- alloy_request_poll(task_id, wait_seconds?) — long-poll for completion (the server blocks for up to 60s); the response always carries progress so you can see what the agent is doing.
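The start/poll handshake amounts to a simple loop. In the hedged sketch below, call_tool is a hypothetical stand-in for however your MCP client issues tools/call requests, and the terminal status values checked are illustrative, not a documented contract:

```python
def run_long_task(call_tool, prompt, timeout_seconds=1500):
    """Start a long-running Alloy task, then long-poll until it settles."""
    started = call_tool("alloy_request_start",
                        {"prompt": prompt, "timeout_seconds": timeout_seconds})
    task_id = started["task_id"]
    while True:
        result = call_tool("alloy_request_poll",
                           {"task_id": task_id, "wait_seconds": 60})
        # `progress` is always present, so you can surface it while waiting.
        if result.get("status") in ("completed", "failed", "timed_out"):
            return result
```

In practice your AI tool drives this loop itself; the sketch just shows why a 25-minute task never blocks a single request beyond one 60-second poll.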
Mesh Storage
- browse_mesh_storage — list files and folders in your mesh bucket (MCAP files come back with pipeline status baked in)
- get_mesh_file_download_url — presigned download URL for a specific file
- get_mesh_replay_url — dashboard URL that opens replay for one or more MCAPs (multi-file timeline). Files should overlap in time or be chronological for the best UX.
- get_mesh_inspect_url — dashboard URL that opens the Inspect MCAP modal for a single file (topics, diagnostics, ROS graph, capture config)
- query_mesh_storage — read-only DuckDB SQL against your Iceberg catalog
- list_mesh_tables — list tables or describe a specific table's schema
- get_mesh_connection_info — gateway URL and instructions for external clients (DuckDB, Spark, Trino). Alloy never issues mesh API keys over MCP — you generate those from the dashboard.
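query_mesh_storage takes plain read-only SQL. A hypothetical example — the table and column names below are invented for illustration; use list_mesh_tables to see what your Iceberg catalog actually contains:

```python
# Hypothetical read-only DuckDB SQL for a query_mesh_storage call.
sql = (
    "SELECT topic, COUNT(*) AS message_count "
    "FROM telemetry_messages "  # hypothetical table name
    "WHERE recorded_at >= DATE '2024-01-01' "
    "GROUP BY topic ORDER BY message_count DESC LIMIT 10"
)
query_call_args = {"sql": sql}
```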
Device fleet & edge setup
- list_devices — every device with status, last-seen, last-upload, and file count
- approve_device / reject_device — lifecycle actions for pending devices
- get_edge_config / update_edge_config — read and write recorder config files
- get_docker_setup — full Docker bundle (AR token, login/pull commands, compose yaml). See Secret handling below.
- get_binary_setup — signed binary download URL plus edge-manager.yaml
- get_edge_manifest — available distros, tags, architectures, and binary releases
Secret handling
Device-setup tools return real short-lived AR access tokens (1-hour, read-only) so docker login commands are immediately usable. The provisioning key is never returned over MCP — compose and yaml templates come back with <YOUR_PROVISIONING_KEY> placeholders. Open Mesh Storage → Device Setup in the dashboard to download the same files with the real key embedded, or copy the provisioning key from that page into the template by hand.
Other actions stay UI-only for safety: deleting devices or files, rolling API keys, creating Mesh Storage API keys, and resetting edge configs.
Example usage
Once connected, you can ask your AI tool questions about your Alloy data:
- "What missions were uploaded this week?"
- "Summarize the latest mission report"
- "Find instances with navigation errors across all missions"
- "What scenarios are currently running?"
- "Compare controller error across the last 10 flights and write up the worst three" — uses alloy_request_start (analysis + selection upstream, then a report). The async pair will fetch the data and call create_report internally with concrete values.
- "Write a report titled 'Mission M Latency' for mission_id=m_abc with these metrics: p95=87ms, p99=142ms, errors=3" — uses create_report directly (concrete inputs, no analysis needed)
- "Schedule a weekly fleet-health summary for Monday 9am" — uses create_scheduled_report
- "Set up a scenario that flags any flight with a Z-axis position error over 0.5m" — uses alloy_request_start (scenarios need sample-mission validation, so they go through the async pair)
The AI tool uses Alloy's MCP server to fetch the data and respond in context — no need to switch to the Alloy web app.