The Intelligent API Operating System

Know what your APIs are doing. Before your customers do.

Route traffic. Detect anomalies. Govern AI agents. Bill consumers. One platform, not six.

No credit card · 5-minute setup · Apache 2.0 core


Join developers building the future of API management

  • Requests/sec per instance
  • Sub-second answers to "why is my API slow?"
  • Agents that fix issues before you notice
  • Replace your 6-tool stack
The problem is worse than you think

APIs are the backbone. Nobody is watching.

78%

of companies can't tell you how many APIs they run. Right now. In production.

94%

of engineering teams say their API management tools don't work for them

$1.6M

gone in a single weekend because ungoverned AI agents hit production APIs with no budget limits

Why this is different

It's the data model, not the features

Kong stores data in tables. Apigee stores it in configs. Apiloom stores it in a knowledge graph — every API, every dependency, every cost driver connected.

That's how we answer questions they can't. "Which APIs will break if this service goes down?" "Who's spending the most on LLM calls this week?" "What changed between yesterday's deploy and today's latency spike?"
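A blast-radius question is just a reverse traversal over the dependency graph. A minimal sketch of the idea (names like `blast_radius` are illustrative, not Apiloom's API):

```python
# Hypothetical sketch of a blast-radius query over a dependency graph.
# Edges point from an API to the services it depends on; the blast
# radius of a failed service is everything reachable by walking those
# edges in reverse.
from collections import deque

def blast_radius(depends_on: dict[str, set[str]], failed: str) -> set[str]:
    """Return every node that transitively depends on `failed`."""
    # Invert the edges: service -> the APIs that call it.
    dependents: dict[str, set[str]] = {}
    for api, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(api)
    affected, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for parent in dependents.get(node, ()):
            if parent not in affected:
                affected.add(parent)
                queue.append(parent)
    return affected

graph = {
    "checkout-api": {"payments-svc", "inventory-svc"},
    "payments-svc": {"ledger-db"},
    "orders-api": {"inventory-svc"},
}
print(sorted(blast_radius(graph, "ledger-db")))  # checkout-api is hit via payments-svc
```

A table-based store has to join its way to this answer; a graph model makes it a single traversal.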

Three ecosystems, one platform

Built for every stakeholder in your API economy

Internal

Build & Operate

Platform teams see every API, every dependency, every cost driver. Across all environments. In real time.

Client

Consume & Pay

External developers get a self-service portal with interactive docs, sandbox environments, and usage-based billing.

Partner

Sell & Extend

Partners build on your platform with a marketplace for plugins, API products, and revenue-sharing integrations.

Core capabilities

Six systems working as one

Ontology Engine

Maps every API, consumer, team, and dependency into a live knowledge graph. Not a list — a living model of your API estate.

AI / Agentic Gateway

Token budgets, provider routing, guardrails, prompt management. Control LLM spend before it controls you.
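Conceptually, a token budget is an admission check that runs before the request ever reaches the provider. A minimal sketch of that check (the `TokenBudget` class is illustrative, not Apiloom's API):

```python
# Hypothetical sketch of per-team token budgeting: each team gets a
# token allowance per billing window, and requests that would exceed
# it are refused before they reach the LLM provider.
class TokenBudget:
    def __init__(self, limits: dict[str, int]):
        self.limits = limits                    # team -> tokens allowed per window
        self.spent = {team: 0 for team in limits}

    def try_spend(self, team: str, tokens: int) -> bool:
        """Reserve tokens for a request; False means the call is blocked."""
        if self.spent[team] + tokens > self.limits[team]:
            return False
        self.spent[team] += tokens
        return True

budget = TokenBudget({"search-team": 1_000, "ml-team": 10_000})
print(budget.try_spend("search-team", 800))   # True
print(budget.try_spend("search-team", 800))   # False: would exceed 1,000
```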

Intelligence Layer

Anomaly detection, cost attribution, blast-radius analysis. The platform builds a model after one week of traffic.

Self-Healing Ops

Automated runbooks fire when things break. Circuit breakers, traffic shifting, rollback — no human needed.
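The circuit-breaker piece follows the standard pattern: after a run of failures the breaker opens and short-circuits calls, then half-opens after a cooldown to probe recovery. A minimal sketch (not Apiloom's implementation):

```python
# Hypothetical sketch of the circuit-breaker pattern: consecutive
# failures open the breaker; after a cooldown one probe is allowed
# through to see whether the upstream has recovered.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, cooldown: float = 30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True                            # closed: traffic flows
        if time.monotonic() - self.opened_at >= self.cooldown:
            return True                            # half-open: allow a probe
        return False                               # open: short-circuit

    def record(self, success: bool) -> None:
        if success:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

breaker = CircuitBreaker(max_failures=2, cooldown=30.0)
breaker.record(success=False)
breaker.record(success=False)
print(breaker.allow())  # False: breaker is open
```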

Zero Trust Security

OWASP scanning, DLP masking, secret detection, OIDC/SAML SSO, SCIM provisioning. Security is not an add-on.

Full Observability

OpenTelemetry-native traces, metrics, and logs. Correlate API behavior with business outcomes, not just status codes.

Open source at the core

Start free. Scale when ready.

Apache 2.0

Community Edition

Everything you need to run a production API gateway. No strings, no phone calls, no sales team.

  • Gateway core with Lura engine
  • Auth, rate limiting, and CORS
  • Plugin SDK with gRPC interface
  • CLI (fgctl) + Helm charts
  • OpenTelemetry integration
Star on GitHub
Commercial

Enterprise Platform

The full intelligence layer. For teams that need to understand, govern, and monetize their API estate.

  • Everything in Community Edition
  • Ontology engine + knowledge graph
  • Intelligence agents + anomaly detection
  • SSO, SCIM, compliance frameworks
  • Developer portal + marketplace
  • Multi-cluster federation
Request Demo

Ready to try it? CE installs in under 5 minutes.

Install Community Edition · Talk to Sales
Getting started

Three steps to API intelligence

1

Install

helm install apiloom oci://ghcr.io/apiloom/helm/apiloom

One command. Works on any Kubernetes cluster or as a standalone Docker container.

2

Connect

Import from Kong, Apigee, AWS API Gateway, or any OpenAPI spec. Your existing APIs work on day one.
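Under the hood, an OpenAPI import boils down to walking the spec's `paths` object and registering one route per path/method pair. A minimal sketch of that step (the `import_openapi` helper is illustrative, not Apiloom's importer):

```python
# Hypothetical sketch of an OpenAPI import: walk the `paths` object and
# emit one route per path/method pair, the shape a gateway registers.
def import_openapi(spec: dict) -> list[tuple[str, str, str]]:
    """Return (METHOD, path, operationId) triples from an OpenAPI document."""
    http_methods = {"get", "put", "post", "delete", "patch", "head", "options"}
    routes = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method in http_methods:
                routes.append((method.upper(), path, op.get("operationId", "")))
    return routes

spec = {
    "openapi": "3.0.3",
    "paths": {
        "/orders": {
            "get": {"operationId": "listOrders"},
            "post": {"operationId": "createOrder"},
        },
    },
}
print(import_openapi(spec))  # [('GET', '/orders', 'listOrders'), ('POST', '/orders', 'createOrder')]
```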

3

Intelligence Builds

Give it a week. The platform maps your dependencies, cost drivers, and failure patterns automatically. You don't configure this — it learns.
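To make "it learns" concrete: the simplest version of a learned baseline is a mean and standard deviation over a window of latency samples, with new points scored against it. A toy sketch of that idea, not the platform's actual model:

```python
# Hypothetical sketch of a learned latency baseline: fit mean/stddev on
# a window of past samples, then flag new samples that sit more than
# `threshold` standard deviations away.
from statistics import mean, stdev

def is_anomaly(baseline: list[float], sample: float, threshold: float = 3.0) -> bool:
    """True if `sample` is more than `threshold` sigmas from the baseline mean."""
    if len(baseline) < 2:
        return False                 # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(sample - mu) / sigma > threshold

baseline = [42.0, 40.0, 43.0, 41.0, 39.0, 44.0, 40.0]  # a week of p50s, in ms
print(is_anomaly(baseline, 41.0))   # False: within the learned band
print(is_anomaly(baseline, 900.0))  # True: latency spike
```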

Honest comparison

How Apiloom stacks up

                         Apiloom       Kong        Apigee            AWS API GW
Starting price           Free (OSS)    $150/mo     Contact sales     Pay per call
Annual cost (100 APIs)   $0 – $6K      ~$25K       ~$60K+            ~$18K (at scale)
API intelligence         Built-in      No          Basic analytics   No
AI / LLM gateway         Native        Plugin      No                No
Knowledge graph          Yes           No          No                No
Self-healing             Automated     Manual      Manual            Manual
Open source              Apache 2.0    Partial     No                No
Vendor lock-in           None          Moderate    High (GCP)        High (AWS)
Pricing

Transparent pricing. No surprises.

Community
Free
Apache 2.0 open source
  • Gateway core
  • Auth + rate limiting
  • Plugin SDK
  • CLI + Helm
Get Started
Business
$499/mo
For platform teams
  • Everything in Community
  • Intelligence layer
  • SSO + compliance
  • 25 team members
Contact Sales
Enterprise
Custom
For large organizations
  • Everything in Business
  • Multi-cluster federation
  • Dedicated support
  • Custom SLA
Talk to Us
FAQ

Common questions

How hard is migration from Kong or Apigee?

We ship importers for Kong, Apigee, and AWS API Gateway configs. Most teams migrate a handful of routes in an afternoon, then run both gateways in parallel until they're comfortable. There's no big-bang cutover required.

Is it production-ready?

The gateway core is built on Lura (the engine behind KrakenD, which handles billions of requests daily). We run integration tests with race detection on every commit. That said, we're honest: the intelligence layer is newer. Most teams start with the gateway in production and add intelligence features gradually.

What if the company disappears?

The Community Edition is Apache 2.0. That license is irrevocable. The code is on GitHub. If we shut down tomorrow, you keep running what you have, fork it, or let someone else maintain it. Your gateway doesn't stop working because a vendor goes under.

How does it compare to just using Kong?

Kong is a solid proxy. If all you need is routing and rate limiting, Kong works fine. Apiloom is different when you need to understand relationships between APIs, track AI/LLM costs across teams, or answer "what breaks if this service goes down?" Kong doesn't have a knowledge graph. We do.

Do I need to rip out my existing gateway?

No. You can run Apiloom alongside your current gateway. Many teams put Apiloom in front of Kong or AWS API Gateway to get the intelligence layer without migrating routes. Swap things over when you're ready, or don't.

What about AI and LLM support?

The AI gateway is a first-class feature, not a bolt-on plugin. Token budgets per team, provider failover (OpenAI to Anthropic to local models), prompt management, guardrails, and cost tracking per consumer. It's how you let 50 teams use GPT-4 without a $200K surprise on your next invoice.
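Provider failover, at its core, is trying providers in priority order and falling through on error. A minimal sketch of the idea (function and provider names here are illustrative, not Apiloom's API):

```python
# Hypothetical sketch of provider failover: walk a priority-ordered
# list of providers and fall through to the next on any error.
def call_with_failover(providers, prompt: str) -> tuple[str, str]:
    """providers: list of (name, callable). Returns (provider_name, reply)."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as err:          # provider down or rate-limited
            last_error = err
    raise RuntimeError("all providers failed") from last_error

def flaky(prompt):                        # stand-in for a provider that is down
    raise TimeoutError("upstream timeout")

def local_model(prompt):                  # stand-in for a local fallback model
    return f"echo: {prompt}"

print(call_with_failover([("openai", flaky), ("local", local_model)], "hi"))
# -> ('local', 'echo: hi')
```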

Ready to stop guessing what your APIs are doing?

Install the Community Edition in under 5 minutes. Or talk to us about the platform.

Start Free on GitHub · Request Demo

No credit card · Apache 2.0 core · Import from Kong/Apigee in minutes