OpenStack works with a wide range of AI models, letting you integrate and manage them within your applications. Whether you use models from providers such as OpenAI or Anthropic, or custom models hosted on platforms like Together AI, OpenStack provides the tools you need to get started quickly.

Supported Model Providers

Any provider that supports OpenAI-compatible APIs can be integrated with OpenStack. This includes, but is not limited to:
  • OpenAI: Access models such as GPT-5, GPT-5-Codex, and more.
  • Anthropic: Utilize models like Claude Haiku and Claude Sonnet.
  • Together AI: Connect to models hosted on Together AI’s platform.
  • OpenRouter: Leverage 500+ models available through OpenRouter.
  • Gemini: Use Google’s Gemini models via OpenAI-compatible APIs.
  • Custom Models: Bring your own models hosted on any platform that offers OpenAI-compatible APIs.
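Because all of these providers speak the same OpenAI-compatible wire format, switching between them is mostly a matter of changing the base URL and API key. A minimal stdlib-only sketch of building an identical chat-completions request against different providers (the base URLs shown are the providers' published OpenAI-compatible endpoints; the `PROVIDER_API_KEY` environment variable name and the model ID are placeholders, not names defined by OpenStack):

```python
import json
import os
import urllib.request

# Every OpenAI-compatible provider accepts the same chat-completions payload;
# only the base URL and credentials change.
BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "together": "https://api.together.xyz/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}

def build_chat_request(provider: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct a chat-completions request in the shared OpenAI wire format."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url=f"{BASE_URLS[provider]}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # Placeholder env var name -- use whatever your setup defines.
            "Authorization": f"Bearer {os.environ.get('PROVIDER_API_KEY', '')}",
        },
        method="POST",
    )

# Same request shape, different provider: only the URL differs.
req = build_chat_request("openrouter", "your-model-id", "Hello!")
print(req.full_url)  # https://openrouter.ai/api/v1/chat/completions
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) then returns the familiar chat-completions response regardless of which provider is behind the URL.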

Getting Started

To connect a model to OpenStack, follow these general steps:
1. Set Up Your Provider Account
   Ensure you have an account with the model provider and obtain the necessary API keys.

2. Configure OpenStack
   In the OpenStack dashboard, navigate to the model integration section and add your provider’s API key and endpoint.

3. Set Model Pricing
   Optionally, define pricing for the models to manage costs effectively.

4. Test the Integration
   Use the OpenStack playground to send test requests to the model and verify that everything is working as expected.

5. Deploy in Your Application
   Once tested, you can start using the model in your applications through OpenStack’s API.
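Once the integration is live, your application talks to OpenStack exactly as it would to the provider directly. A sketch of that last step, assuming a hypothetical gateway URL (`https://openstack.example.com/v1` is a placeholder; substitute your deployment's actual endpoint) and the standard chat-completions response shape:

```python
import json
import urllib.request

# Hypothetical gateway endpoint -- replace with your OpenStack deployment's URL.
OPENSTACK_BASE_URL = "https://openstack.example.com/v1"

def extract_reply(response_json: dict) -> str:
    """Pull the assistant's message text out of a standard chat-completions response."""
    return response_json["choices"][0]["message"]["content"]

def ask(model: str, prompt: str, api_key: str) -> str:
    """Send a chat request through the gateway and return the reply text."""
    req = urllib.request.Request(
        url=f"{OPENSTACK_BASE_URL}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))
```

Because the response shape is the standard one, `extract_reply` works unchanged whichever upstream provider OpenStack routes the request to.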