Route traffic. Detect anomalies. Govern AI agents. Bill consumers. One platform, not six.
No credit card · 5-minute setup · Apache 2.0 core
of companies can't tell you how many APIs they run. Right now. In production.
of engineering teams say their API management tools don't work for them
gone in a single weekend because ungoverned AI agents hit production APIs with no budget limits
Kong stores data in tables. Apigee stores it in configs. Apiloom stores it in a knowledge graph — every API, every dependency, every cost driver connected.
That's how we answer questions they can't. "Which APIs will break if this service goes down?" "Who's spending the most on LLM calls this week?" "What changed between yesterday's deploy and today's latency spike?"
Platform teams see every API, every dependency, every cost driver. Across all environments. In real time.
External developers get a self-service portal with interactive docs, sandbox environments, and usage-based billing.
Partners build on your platform with a marketplace for plugins, API products, and revenue-sharing integrations.
Maps every API, consumer, team, and dependency into a live knowledge graph. Not a list — a living model of your API estate.
Token budgets, provider routing, guardrails, prompt management. Control LLM spend before it controls you.
Anomaly detection, cost attribution, blast-radius analysis. The platform builds its behavioral baseline from your first week of live traffic.
Automated runbooks fire when things break. Circuit breakers, traffic shifting, rollback — no human needed.
OWASP scanning, DLP masking, secret detection, OIDC/SAML SSO, SCIM provisioning. Security is not an add-on.
OpenTelemetry-native traces, metrics, and logs. Correlate API behavior with business outcomes, not just status codes.
Everything you need to run a production API gateway. No strings, no phone calls, no sales team.
The full intelligence layer. For teams that need to understand, govern, and monetize their API estate.
Ready to try it? CE installs in under 5 minutes.
helm install apiloom oci://ghcr.io/apiloom/helm/apiloom
One command. Works on any Kubernetes cluster or as a standalone Docker container.
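For teams not running Kubernetes, the standalone container can be described in a Compose file along these lines. This is a sketch: the image path mirrors the OCI registry used by the Helm chart above, and the ports and config mount are assumptions, not documented defaults.

```yaml
# Illustrative Compose file -- image path, ports, and config location
# are assumptions based on the Helm chart's registry, not confirmed defaults.
services:
  apiloom:
    image: ghcr.io/apiloom/apiloom:latest   # assumed image path
    ports:
      - "8080:8080"    # assumed gateway listener
      - "9090:9090"    # assumed admin/metrics port
    volumes:
      - ./apiloom.yaml:/etc/apiloom/apiloom.yaml   # assumed config mount
```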
Import from Kong, Apigee, AWS API Gateway, or any OpenAPI spec. Your existing APIs work on day one.
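For the OpenAPI path, a minimal spec is all an importer needs to register a route. The document below is standard OpenAPI 3; the service name and URL are illustrative.

```yaml
# Minimal OpenAPI 3 document -- the smallest unit an importer can map
# to a gateway route. Names and URLs are examples only.
openapi: "3.0.3"
info:
  title: Orders API
  version: "1.0.0"
servers:
  - url: https://orders.internal.example.com
paths:
  /orders/{id}:
    get:
      summary: Fetch a single order
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The order
```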
Give it a week. The platform maps your dependencies, cost drivers, and failure patterns automatically. You don't configure this — it learns.
| | Apiloom | Kong | Apigee | AWS API GW |
|---|---|---|---|---|
| Starting price | Free (OSS) | $150/mo | Contact sales | Pay per call |
| Annual cost (100 APIs) | $0 – $6K | ~$25K | ~$60K+ | ~$18K (at scale) |
| API intelligence | Built-in | No | Basic analytics | No |
| AI / LLM gateway | Native | Plugin | No | No |
| Knowledge graph | Yes | No | No | No |
| Self-healing | Automated | Manual | Manual | Manual |
| Open source | Apache 2.0 | Partial | No | No |
| Vendor lock-in | None | Moderate | High (GCP) | High (AWS) |
We ship importers for Kong, Apigee, and AWS API Gateway configs. Most teams migrate a handful of routes in an afternoon, then run both gateways in parallel until they're comfortable. There's no big-bang cutover required.
The gateway core is built on Lura (the engine behind KrakenD, which handles billions of requests daily). We run integration tests with race detection on every commit. That said, we're honest: the intelligence layer is newer. Most teams start with the gateway in production and add intelligence features gradually.
The Community Edition is Apache 2.0. That license is irrevocable. The code is on GitHub. If we shut down tomorrow, you keep running what you have, fork it, or let someone else maintain it. Your gateway doesn't stop working because a vendor goes under.
Kong is a solid proxy. If all you need is routing and rate limiting, Kong works fine. Apiloom is different when you need to understand relationships between APIs, track AI/LLM costs across teams, or answer "what breaks if this service goes down?" Kong doesn't have a knowledge graph. We do.
No. You can run Apiloom alongside your current gateway. Many teams put Apiloom in front of Kong or AWS API Gateway to get the intelligence layer without migrating routes. Swap things over when you're ready, or don't.
The AI gateway is a first-class feature, not a bolt-on plugin. Token budgets per team, provider failover (OpenAI to Anthropic to local models), prompt management, guardrails, and cost tracking per consumer. It's how you let 50 teams use GPT-4 without a $200K surprise on your next invoice.
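To make that concrete, per-team budgets with provider failover might be expressed in a config like this sketch. Every key name here is illustrative, not Apiloom's documented schema.

```yaml
# Sketch only -- field names are assumptions, not the product's real schema.
ai_gateway:
  budgets:
    - team: checkout
      monthly_token_limit: 50000000   # hard cap; requests rejected past this
      alert_at: "80%"                 # notify owners before hitting the cap
  providers:                          # tried in order on error or timeout
    - name: openai
      models: [gpt-4o]
    - name: anthropic
      models: [claude-sonnet]
    - name: local-vllm                # last-resort local models
      endpoint: http://vllm.internal:8000
  guardrails:
    - type: prompt_injection_scan
    - type: pii_redaction
```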
Install the Community Edition in under 5 minutes. Or talk to us about the platform.
No credit card · Apache 2.0 core · Import from Kong/Apigee in minutes