Track Folder
Point the Alloy edge binary at a folder of recordings and upload them automatically
This feature is in beta. The setup flow described below may not work exactly as written yet — we're actively refining it. If you hit issues, reach out to the Alloy team.
If you already have MCAP files landing in a directory — from a ROS2 recorder, Foxglove, or any other tool — the Alloy edge binary can watch that folder and upload new recordings to Mesh Storage automatically.
This is the simplest way to get data into Alloy when you don't need the full Docker setup, or when you're running alongside another agent like Foxglove Agent.
Open Mesh Storage → Device Setup and select Binary to run the interactive flow with pre-filled config.
The client needs outbound access on port 443 (HTTPS) only. No inbound ports required.
Step 1: Download and install
In the Device Setup modal, select Binary — your OS and architecture are auto-detected. Click Download to get the binary.
.deb is recommended — it includes systemd service integration.
On Debian/Ubuntu, install the .deb package:

```shell
sudo dpkg -i alloy-edge_*.deb
```

On other Linux systems, install the standalone binary:

```shell
chmod +x alloy-edge-linux-*
sudo mv alloy-edge-linux-* /usr/local/bin/alloy-edge
```

On macOS:

```shell
chmod +x alloy-edge-darwin-*
sudo mv alloy-edge-darwin-* /usr/local/bin/alloy-edge
```

On Windows, move alloy-edge-windows-amd64.exe to a folder in your PATH, or run it directly from the download location.
Step 2: Download the config
Click Download edge-manager.yaml in the Device Setup modal. The file comes pre-filled with your provisioning key and backend URL.
On Linux:

```shell
sudo mkdir -p /etc/alloy
sudo mv ~/Downloads/edge-manager.yaml /etc/alloy/edge-manager.yaml
```

On macOS:

```shell
mkdir -p ~/.config/alloy
mv ~/Downloads/edge-manager.yaml ~/.config/alloy/edge-manager.yaml
```

On Windows (PowerShell):

```shell
mkdir "$env:APPDATA\alloy" -Force
mv "$env:USERPROFILE\Downloads\edge-manager.yaml" "$env:APPDATA\alloy\edge-manager.yaml"
```

Open the file and configure your device's identity and the local sync process:
```yaml
seed_state:
  api_key: "zpka_..."     # pre-filled from setup page
  # edge_id: "robot-01"   # uncomment and set — defaults to hostname
  tags:
    environment: production
    location: warehouse-3

# Run edge-sync locally instead of receiving config from the cloud
local_state:
  processes:
    - name: edge-sync
      command: "alloy-edge sync --config /etc/alloy/edge-sync.yaml"
      restart: on_failure
```

edge_id defaults to the device's hostname if not set. Set it explicitly to give your device a predictable, human-readable name.
The local_state block tells the manager to run edge-sync locally using your config file. Without it, the manager would fetch its process config from Alloy Cloud — which is the right approach for the Docker setup, but not for the binary track-folder flow where you control which folder to watch.
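For contrast, a cloud-managed edge-manager.yaml (the Docker-style setup) would simply omit the local_state block. A minimal sketch:

```yaml
# Sketch: cloud-managed variant. With no local_state block, the manager
# fetches which processes to run from Alloy Cloud after approval.
seed_state:
  api_key: "zpka_..."   # provisioning key from the setup page
  edge_id: "robot-01"
```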
Step 3: Point at your recording folder
Create edge-sync.yaml to tell the client which folder to watch:
```yaml
input_dir: "/recordings"   # the folder to watch
file_pattern: "*.mcap"     # which files to upload
upload_delay: "60s"        # wait after last write before uploading

# Disk management — oldest files deleted first (FIFO by mtime)
max_folder_size: 10GB      # delete oldest files when total exceeds this
max_file_age: 72h          # delete files older than this
# max_file_count: 1000     # optional — cap on number of files
```

Files currently being uploaded are never deleted. Files open by other processes (e.g. an active recorder or another upload agent) are also skipped.
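The upload_delay setting means a file is only considered ready once it has stopped changing. A rough sketch of that check (illustrative only, not the actual edge-sync implementation; uses GNU coreutils stat):

```shell
# Returns success (0) if the file's mtime has been stable for at least
# delay_s seconds — i.e. the recorder has likely finished writing it.
is_quiesced() {
  file="$1"
  delay_s="$2"
  now=$(date +%s)
  mtime=$(stat -c %Y "$file")   # GNU stat: mtime as epoch seconds
  [ $(( now - mtime )) -ge "$delay_s" ]
}
```

Conceptually, a file matching file_pattern is only queued for upload once a check like this passes.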
For the full schema — including keep_files, mcap_require_footer, cycle_time, max_concurrent_uploads, and uploading directly to your own cloud via OpenDAL — see the configuration reference.
Step 4: Run
```shell
alloy-edge manager
```

The client contacts Alloy using the provisioning key and registers the device. It then waits for approval. Your device will appear in Mesh Storage under the devices/ folder as Pending.
For production, run it as a background service so it starts on boot:
On Linux, the .deb package includes a systemd unit file:

```shell
sudo systemctl enable --now alloy-edge
```

On macOS, create a launch agent at ~/Library/LaunchAgents/ai.usealloy.edge.plist:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>ai.usealloy.edge</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/bin/alloy-edge</string>
      <string>manager</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
  </dict>
</plist>
```

Then load it:

```shell
launchctl load ~/Library/LaunchAgents/ai.usealloy.edge.plist
```

On Windows, use NSSM or Task Scheduler to run alloy-edge.exe manager at startup:
```shell
# Task Scheduler (runs at logon)
schtasks /create /tn "AlloyEdge" /tr "C:\path\to\alloy-edge.exe manager" /sc onlogon /rl highest
```

Step 5: Approve the device
- Open the devices/ folder in Mesh Storage
- Find the new device and click Approve
- Alloy issues the device a permanent API key
- The client picks up the new key on its next sync — no manual key distribution needed
What happens next
After approval, the edge client starts uploading files from your watched directory. Within a few minutes you should see:
- Last seen updating as the client syncs
- Files appearing in the device's folder in Mesh Storage
You can then replay, inspect, or query any uploaded MCAP file directly from Mesh Storage.
Running alongside Foxglove Agent
The Alloy edge binary coexists with Foxglove Agent on the same machine — no conflicts. Both can watch the same recording directory simultaneously.
Recommended setup:
- Your recorder writes MCAP files to a shared directory (e.g. /recordings)
- Foxglove Agent watches the directory and uploads recordings to Foxglove
- Alloy Edge watches the same directory and uploads recordings to Alloy
- Alloy Edge handles disk cleanup — set max_folder_size high enough that both agents have time to upload before old files are evicted
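An edge-sync.yaml for this shared-directory setup might look like the following (values are illustrative; tune the limits to your disk size and both agents' upload bandwidth):

```yaml
input_dir: "/recordings"   # shared with Foxglove Agent
file_pattern: "*.mcap"
upload_delay: "60s"
# Generous limits so both agents can finish uploading before eviction
max_folder_size: 50GB
max_file_age: 72h
```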
Neither agent deletes files on upload — they both read and upload independently. Alloy Edge's disk cleanup is safe:
- Files are deleted oldest-first (FIFO by modification time)
- Files currently being uploaded by Alloy Edge are skipped
- Files open by other processes (including Foxglove Agent) are never deleted
Leave Foxglove Agent's retainRecordingsSeconds at 0 (the default), which disables its retention cleanup, so that Alloy Edge is the single owner of disk cleanup. This avoids race conditions where one agent deletes a file before the other has finished uploading.
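The oldest-first eviction described above can be sketched in a few lines of shell (illustrative only, not the actual implementation; a real agent also skips in-flight uploads and files held open by other processes, and this sketch assumes GNU du):

```shell
# Sum the apparent size, in bytes, of the .mcap files in a directory.
folder_bytes() {
  du -cb "$1"/*.mcap 2>/dev/null | awk 'END { print $1 + 0 }'
}

# Delete .mcap files oldest-first (FIFO by mtime) until the folder
# fits under a byte budget.
cleanup_oldest() {
  dir="$1"
  max_bytes="$2"
  for f in $(ls -tr "$dir"/*.mcap 2>/dev/null); do  # -t by mtime, -r oldest first
    [ "$(folder_bytes "$dir")" -le "$max_bytes" ] && break
    rm -f -- "$f"
  done
}
```

Because the budget is re-checked after each deletion, the newest recordings always survive the longest, which is what gives a second agent time to finish its uploads.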