OpenStack is the simplest way to turn your AI application into a business. Today it provides an OpenAI‑compatible chat‑completions proxy with real‑time metering, analytics integrations, dynamic per‑token or per‑request pricing, freemium paywalls and an integrated wallet for in‑app credits. You can connect your Stripe account and start charging immediately, and switch between providers like OpenAI and Anthropic without changing your code. We’re just getting started. Over the coming months we’ll add deeper authentication, automated cost controls and routing, workflow connectors and other enterprise features. Join the waitlist to get early access to new features as they launch.

What you get

  • OpenAI-compatible API: Use the same API you already know and love. No need to learn new SDKs or APIs (see the sketch after this list).
  • Multi-provider support: Switch between AI providers such as OpenAI and Anthropic with a single API key.
  • Analytics: Get insights into usage patterns and performance metrics. Automatically send data to tools you already use, like PostHog or Mixpanel.
  • Billing & Payments: Manage billing for your AI application and accept payments through the Stripe integration.
  • Authentication: Securely manage access to your AI applications. (coming soon)
  • Monitoring and Alerts: Keep track of your AI application’s health and performance. Receive instant notifications for any issues or anomalies. (coming soon)
  • Observability Integrations: Integrate with popular observability tools to monitor your AI applications. (coming soon)
  • Many more to come: We’re constantly adding new features and integrations to make launching AI applications even easier.
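
As a quick illustration of the first two points, here is a minimal sketch using the standard OpenAI Python SDK. The gateway URL, environment variable name, and model names below are placeholders for illustration, not guaranteed values; the only thing that changes versus calling OpenAI directly is the base URL and key.

    # Minimal sketch: the base_url and env var below are placeholders, not the
    # real OpenStack endpoint. The client code is the unmodified OpenAI Python SDK.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://gateway.example-openstack.dev/v1",  # hypothetical gateway URL
        api_key=os.environ["OPENSTACK_API_KEY"],              # hypothetical key name
    )

    # Switching providers is just a different model name on the same client.
    for model in ("gpt-4o-mini", "claude-3-5-sonnet-latest"):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Say hello in one sentence."}],
        )
        print(model, "->", response.choices[0].message.content)
        print("tokens used:", response.usage.total_tokens)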

How it works

OpenStack is powered by a programmable AI gateway that sits between your AI application and your LLM provider (e.g., OpenAI, Anthropic). When your application makes a request, OpenStack handles authentication, checks the user’s balance, and applies any additional rules you’ve enabled before forwarding the request to the appropriate LLM provider. After receiving the provider’s response, OpenStack processes it according to your configured rules, sends analytics data to connected services, and returns the final response to your application.
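
The flow described above can be pictured roughly like this. Every step in the sketch is a stubbed-out stand-in for illustration, not OpenStack’s actual implementation:

    # Rough shape of the gateway flow, with every step stubbed out for illustration.
    def authenticate(request):
        # The real gateway validates the caller's OpenStack API key here.
        return {"id": "user_123", "credits": 5.00}

    def forward_to_provider(request):
        # The real gateway calls OpenAI, Anthropic, etc. and returns their response.
        return {"content": "Hello!", "usage": {"total_tokens": 12}}

    def handle_request(request):
        user = authenticate(request)                  # 1. authenticate the caller
        if user["credits"] <= 0:                      # 2. check wallet / subscription status
            return {"status": 402, "error": "Insufficient credits"}
        # (any additional rules you've enabled would also be applied here)
        response = forward_to_provider(request)       # 3. forward to the chosen LLM provider
        print("analytics event:", response["usage"])  # 4. send usage to connected analytics tools
        return response                               # 5. return the final response to your app

    print(handle_request({"model": "gpt-4o-mini", "messages": []}))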

Example use case

Imagine you are building an AI chat app with usage-based billing. Instead of sending requests directly to an OpenAI model and plumbing together all the services needed to accept payments, catch webhooks, log usage, track user balances, and monitor the app, you simply swap the model provider URL and send requests to OpenStack. OpenStack takes care of checking the user’s balance or subscription status, logging usage for analytics, handling payments, and monitoring the application’s health, so you can focus on building your chat app rather than the underlying infrastructure and business logic.
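
In code, the only billing-specific logic left in your app might be handling the case where a user runs out of credits. The sketch below assumes the gateway rejects such requests with an HTTP 402; the exact status code, error shape, and per-user attribution field are assumptions, so check your dashboard or the reference docs for the actual behavior.

    # Sketch of an out-of-credits path in a chat app. The 402 status and the use
    # of the "user" field for per-customer attribution are assumptions, not
    # documented OpenStack behavior.
    import os
    from openai import OpenAI, APIStatusError

    client = OpenAI(
        base_url="https://gateway.example-openstack.dev/v1",  # hypothetical gateway URL
        api_key=os.environ["OPENSTACK_API_KEY"],
    )

    def chat(user_id: str, message: str) -> str:
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": message}],
                user=user_id,  # attribute usage and balance to this end user
            )
            return response.choices[0].message.content
        except APIStatusError as err:
            if err.status_code == 402:
                return "You're out of credits. Top up to keep chatting."
            raise

    print(chat("end_user_42", "What's a good name for a chat app?"))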

Get started

Get started in minutes: swap your existing LLM API URL for OpenStack’s endpoint and enable the features you need, when you need them, without changing your code.
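
Concretely, the change is usually one or two lines of configuration. A minimal before/after sketch with the OpenAI Python SDK (the gateway URL and key name are placeholders):

    import os
    from openai import OpenAI

    # Before: talking to OpenAI directly.
    # client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # After: point the same SDK at OpenStack's endpoint (placeholder URL below).
    client = OpenAI(
        base_url="https://gateway.example-openstack.dev/v1",
        api_key=os.environ["OPENSTACK_API_KEY"],
    )
    # Everything else in your code -- models, messages, streaming -- stays the same.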