Everything you need to know about Merideon. Can't find what you're looking for? Contact us →
Merideon is an enterprise security platform for organizations that deploy AI agents alongside traditional network infrastructure. It provides three appliances — Security Office, IPAM, and AI Router — that govern AI agents, manage network addresses, and secure the network edge, working together as an integrated system.
An AI agent is any AI-powered system that operates on your network — automated pipelines, LLM-powered tools, autonomous bots, monitoring agents, or any software that uses an AI model to take actions. If it's on your network and uses AI, Merideon can govern it.
No. Each appliance is useful on its own. The Security Office can govern agents without IPAM or the AI Router. IPAM can manage your network addresses independently. The AI Router works as a standalone intelligent router. They're most powerful together, but you can start with one.
Merideon is entirely on-premises. Every appliance runs on your hardware, in your network, under your control. No data leaves your environment. There is no cloud dependency — not for licensing, not for telemetry, not for AI inference (the AI model API key is yours).
When an agent is approved, the Security Office generates a unique badge ID and a signed token using a platform-managed signing key. The agent presents this badge when accessing governed resources. The AI Router validates the badge signature at the network edge. If a badge is revoked in the SO, the revocation pushes to the router within seconds.
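The actual token format and signing scheme aren't specified here; as a rough sketch, assuming an HMAC-style signature over the badge ID and agent name (the key, field names, and layout below are hypothetical), the issue-and-validate flow looks like:

```python
import hashlib
import hmac
import secrets

# Hypothetical stand-in for the platform-managed signing key held by the Security Office.
SIGNING_KEY = b"platform-managed-signing-key"

def issue_badge(agent_name: str) -> dict:
    """Generate a unique badge ID and sign it (illustrative HMAC scheme)."""
    badge_id = secrets.token_hex(16)
    payload = f"{badge_id}:{agent_name}".encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"badge_id": badge_id, "agent": agent_name, "signature": signature}

def validate_badge(badge: dict, revoked: set) -> bool:
    """What the AI Router checks at the edge: signature first, then revocation."""
    payload = f"{badge['badge_id']}:{badge['agent']}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, badge["signature"]):
        return False
    return badge["badge_id"] not in revoked

badge = issue_badge("infra-agent")
assert validate_badge(badge, revoked=set())                     # valid badge passes
assert not validate_badge(badge, revoked={badge["badge_id"]})   # revoked badge is blocked
```

A tampered badge (say, one claiming a different agent name) fails the signature check before the revocation list is ever consulted.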
Revocation is effectively immediate. The Security Office marks the badge as revoked and pushes the policy update to Andrew, the AI Router's onboard AI assistant. Andrew updates the firewall rules to block the agent's traffic, and the revoked agent's packets are dropped at the network edge. The entire process completes in under 2 seconds.
Interviews are automated behavioral assessments triggered by the Security Office. The SO initiates a structured conversation with the agent and evaluates its responses against predefined benchmarks. Results are scored, stored, and factored into the agent's governance record. Agents that fail interviews can be automatically flagged, quarantined, or scheduled for re-review.
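The exact scoring rubric isn't documented here; as a hypothetical sketch (the question IDs, answer sets, and pass threshold are all illustrative), scoring responses against predefined benchmarks might look like:

```python
def score_interview(responses: dict, benchmarks: dict, pass_threshold: float = 0.8) -> dict:
    """Score an agent's answers against predefined benchmarks (hypothetical rubric).

    benchmarks maps each question ID to the set of acceptable answers.
    """
    passed = sum(1 for q, expected in benchmarks.items() if responses.get(q) in expected)
    score = passed / len(benchmarks)
    return {"score": round(score, 2), "result": "pass" if score >= pass_threshold else "flagged"}

# Illustrative benchmark: three behavioral questions with acceptable answers.
benchmarks = {"q1": {"deny"}, "q2": {"escalate", "report"}, "q3": {"halt"}}
good = score_interview({"q1": "deny", "q2": "report", "q3": "halt"}, benchmarks)
bad = score_interview({"q1": "allow", "q2": "ignore", "q3": "halt"}, benchmarks)
assert good["result"] == "pass" and bad["result"] == "flagged"
```

An agent scoring below the threshold would be the one flagged, quarantined, or scheduled for re-review.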
Policies can target individual agent badges, groups of agents, or all agents. You can allowlist a specific badge (e.g., full access for your infrastructure agent) while rate-limiting all other agents, or apply a blanket rule that all agents must follow.
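The precedence logic can be sketched as follows, assuming (as the allowlist example implies) that badge-targeted policies override blanket rules; the policy shape and action names below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    action: str                          # e.g. "allow", "rate_limit", "deny"
    badges: Optional[frozenset] = None   # None = blanket rule applying to all agents

def resolve_policy(badge_id: str, policies: list) -> str:
    """Badge-targeted policies take precedence over blanket rules (illustrative)."""
    for p in policies:
        if p.badges is not None and badge_id in p.badges:
            return p.action
    for p in policies:
        if p.badges is None:
            return p.action
    return "deny"  # default-deny when nothing matches

policies = [
    Policy("allow", frozenset({"badge-infra-01"})),  # full access for the infrastructure agent
    Policy("rate_limit"),                            # everyone else is rate-limited
]
assert resolve_policy("badge-infra-01", policies) == "allow"
assert resolve_policy("badge-unknown-77", policies) == "rate_limit"
```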
IPAM connects to the Docker socket on configured hosts. Every 8 hours (and on-demand via the Sync Docker button), it reads all running containers and reconciles their IP addresses, hostnames, and ports against the IPAM database. New containers get records created, changed containers get records updated, and stopped containers have their IPs freed. Only subnets with Docker sync enabled are touched.
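The reconciliation step above reduces to a three-way diff between what the Docker socket reports and what the IPAM database holds. A minimal sketch (the function and field names are illustrative, not IPAM's actual API):

```python
def reconcile(observed: dict, records: dict) -> dict:
    """Diff running containers against the IPAM database (illustrative).

    observed: container name -> IP seen on the Docker socket
    records:  container name -> IP currently recorded in IPAM
    """
    create = {name: ip for name, ip in observed.items() if name not in records}
    update = {name: ip for name, ip in observed.items()
              if name in records and records[name] != ip}
    free = [name for name in records if name not in observed]  # stopped containers
    return {"create": create, "update": update, "free": free}

observed = {"web": "10.0.0.5", "db": "10.0.0.9"}     # from the Docker socket
records  = {"web": "10.0.0.4", "cache": "10.0.0.7"}  # from the IPAM database
plan = reconcile(observed, records)
assert plan == {"create": {"db": "10.0.0.9"},
                "update": {"web": "10.0.0.5"},
                "free": ["cache"]}
```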
You can configure as many subnets as your network has, each with a name, CIDR, IP range, description, and Docker sync toggle. Subnets appear as tabs in the Dashboard. There's no hard limit on subnet count.
Yes. The IPAM API supports bulk import. You can also export any view as CSV and use it as a reference for manual entry. For large existing inventories, contact us for migration assistance.
Minimum: 4 vCPU, 8 GB RAM, 100 GB disk, and at least 3 NICs (1 management + 1 WAN + 1 LAN). For multi-WAN failover and multiple LANs you'll want more NICs — up to 5 is common (1 mgmt, 2 WAN, 2 LAN). The appliance runs on any Linux host with Docker installed.
nftables — the Linux kernel's modern packet filtering framework. Rules are managed through the Merideon UI and applied via nftables directly, with no intermediate layer. The AI Router also uses HAProxy 2.8 for load balancing, ISC Kea for DHCP, and Unbound for DNS.
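The rules Merideon actually generates aren't shown in this document; as a purely illustrative nftables fragment (the table name, hook, and address are hypothetical), a rule dropping a revoked agent's traffic could look like:

```
table inet merideon {
    chain forward {
        type filter hook forward priority 0; policy accept;
        # drop traffic from a revoked agent's address (illustrative)
        ip saddr 10.0.0.42 drop
    }
}
```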
Andrew uses the Anthropic Claude API for natural language understanding — this requires outbound HTTPS access to api.anthropic.com. The API key is yours and stays on your infrastructure. No network state, configuration data, or internal information is sent to Anthropic — only the text of your chat messages.
No. Andrew operates on a strict human-in-the-loop model for all write operations. Any command that would change network state — add a firewall rule, modify routing, update DHCP, apply a load balancer change — triggers an approval card that must be explicitly confirmed by an operator before Andrew executes. This behavior cannot be disabled or configured away. It is a core platform safety property.
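The approval-card pattern can be sketched as a gate where write operations queue until an operator explicitly confirms them; the class and method names below are hypothetical, not Andrew's actual internals:

```python
from dataclasses import dataclass

@dataclass
class ApprovalCard:
    """A pending write operation awaiting explicit operator confirmation."""
    description: str
    approved: bool = False

class WriteGate:
    """Sketch of the human-in-the-loop model: writes queue, nothing executes unconfirmed."""
    def __init__(self):
        self.pending: list = []
        self.applied: list = []

    def request_change(self, description: str) -> ApprovalCard:
        card = ApprovalCard(description)
        self.pending.append(card)   # queued only; no network state changes yet
        return card

    def confirm(self, card: ApprovalCard) -> None:
        card.approved = True
        self.applied.append(card.description)  # only now does the change apply

gate = WriteGate()
card = gate.request_change("add firewall rule: drop 10.0.0.42")
assert gate.applied == []       # proposed, not executed
gate.confirm(card)
assert gate.applied == ["add firewall rule: drop 10.0.0.42"]
```

The design choice this illustrates: the confirmation step sits between the request and the execution path itself, so there is no code path that applies a write without an approved card.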
Andrew uses Anthropic's Claude models. The specific model tier is configurable in AI Router Settings — you can upgrade to a more powerful model or switch to a faster one depending on your needs. Your Anthropic API key is used directly; Merideon does not proxy or mark up API usage.
Only the text of your chat messages is sent to the Anthropic API. Network topology, firewall rules, IP addresses, and configuration data are fetched locally by Andrew and summarized in natural language before any API call. No raw network data, credentials, or configuration files leave your infrastructure.
Most deployments are operational within 30–60 minutes per appliance. Prerequisite: a Linux host with Docker installed, appropriate NICs for the AI Router, and your Anthropic API key. Following the deployment guide, you'll have all three appliances up and your first agent registered the same day.
Updates are applied by pulling the latest version and running docker compose build && docker compose up -d. Your data persists on Docker volumes and is unaffected. Professional plan customers receive guided update instructions. Enterprise customers have managed updates as part of their SLA.
All persistent data is stored in Docker named volumes. Back them up using standard Docker volume backup procedures (docker run --rm -v [volume]:/data alpine tar czf - /data > backup.tar.gz — the tar archive is written to stdout, so redirect it to a file). We recommend daily backups of all three appliance volumes. Enterprise customers get backup runbook documentation.