Prerequisites
- A configured OpenStack paywall (see Quickstart) with a model mapped to the proxy.
- An OpenStack API key (`sk-openstack-...`) and the base URL `https://api.openstack.ai/v1`.
- An ADK project with `google-adk` and `litellm` installed (per the “Using Cloud & Proprietary Models via LiteLLM” section of the ADK docs).
- Runtime access to a stable, pseudonymous user identifier that you can forward on every request.
1. Configure the OpenStack proxy
- In the OpenStack dashboard, configure your project and connect the provider you want ADK to call (enable billing when you need it).
- Map the upstream model name (e.g., `openai/gpt-4o-mini`) to OpenStack pricing.
- Copy your OpenStack API key and note the proxy base URL: `https://api.openstack.ai/v1`.
- Decide how you will supply the OpenStack user context. OpenStack requires either the body-level `user` field or an `X-Openstack-User` header on every call.
The proxy returns normal assistant messages when a user must authorize or top up. Render them as-is to give end-users
the correct paywall UX with zero branching.
2. Point LiteLLM to OpenStack
ADK’s LiteLLM wrapper expects OpenAI-style environment variables. Set them to your OpenStack values (note the required `/v1` suffix, as highlighted in the ADK “Using openai provider” guidance). If you keep configuration in `.env` files or secrets managers, mirror the same values there. The OpenStack key now feeds LiteLLM exactly as an OpenAI key would.
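For example, a minimal sketch of setting those variables from Python before the agent is constructed (the key and URL below are placeholders; `OPENAI_API_KEY` and `OPENAI_API_BASE` are the variables LiteLLM’s `openai/` provider reads):

```python
import os

# Point LiteLLM's OpenAI-compatible client at the OpenStack proxy.
# Replace the placeholder key with your own sk-openstack-... value.
os.environ["OPENAI_API_KEY"] = "sk-openstack-..."              # OpenStack API key
os.environ["OPENAI_API_BASE"] = "https://api.openstack.ai/v1"  # note the /v1 suffix
```

In a deployed service you would set these in the runtime environment or secrets manager rather than in code.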
3. Instantiate an ADK agent that targets OpenStack
Create your agent with the LiteLLM wrapper and use the OpenStack-backed model name. Everything else—tools, instructions, streaming—works identically to the standard ADK examples.
- Pick any model identifier that your project routes (e.g., `openai/gpt-4o`, `anthropic/claude-3-haiku`).
- OpenStack can relay to Bring-Your-Own-Key providers or the built-in catalog; the agent code stays unchanged.
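For instance, a minimal sketch of such an agent (the agent name, instruction, and the `openai/gpt-4o-mini` model string are illustrative; use whatever your OpenStack project actually routes):

```python
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

# The LiteLlm wrapper speaks the OpenAI-compatible protocol, so with the
# environment variables from step 2 every request lands on the OpenStack proxy.
agent = Agent(
    name="support_agent",
    model=LiteLlm(model="openai/gpt-4o-mini"),
    instruction="Answer the user's question concisely.",
)
```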
4. Forward the OpenStack user identifier
OpenStack enforces spend and renders billing actions per end user (when enabled). Thread your application’s user handle into each ADK session and make sure it reaches LiteLLM as either the `user` body field or the `X-Openstack-User` header. There are two ways to pass `session.user_id` along (both are sketched below):
- If you call `LiteLlm` directly, include `user=session.user_id` in the OpenAI-compatible payload.
- If you rely on headers (e.g., for shared middleware), add `X-Openstack-User: session.user_id` through LiteLLM’s request configuration or HTTP client hooks.
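A minimal sketch of both options, assuming your ADK version’s `LiteLlm` forwards extra keyword arguments (such as `user` and `extra_headers`) to the underlying `litellm` completion call; verify against your installed versions:

```python
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm


def build_agent(user_id: str) -> Agent:
    """Build an agent whose upstream requests carry the OpenStack user context."""
    return Agent(
        name="support_agent",
        model=LiteLlm(
            model="openai/gpt-4o-mini",
            # Option 1: body-level `user` field in the OpenAI-compatible payload.
            user=user_id,
            # Option 2: header-based identification, e.g. when shared middleware
            # owns the user mapping.
            extra_headers={"X-Openstack-User": user_id},
        ),
        instruction="Answer the user's question concisely.",
    )
```

In practice one mechanism is enough; both appear here only to show where each value lands.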
Double-check that retries or parallel tool calls reuse the same user id. If the identifier is missing or changes
mid-thread, OpenStack will decline the request.
5. Test the end-to-end flow
- Run an ADK interaction against your agent and confirm the response completes normally (a minimal runner sketch follows this list).
- In the OpenStack dashboard, verify that the request appears under Proxy → Requests with the correct model, user, and pricing rule.
- Trigger a low-balance or unauthorized scenario to see the proxy return an assistant message instructing the user to authorize or top up; ensure your UI renders it verbatim.
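As a starting point, a minimal runner sketch, assuming the environment variables from step 2 are set; the agent definition repeats step 3, and `user-123` stands in for your real user identifier:

```python
import asyncio

from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import InMemoryRunner
from google.genai import types

# For the billing checks to be meaningful, also forward the user id into the
# model call as described in step 4.
agent = Agent(
    name="support_agent",
    model=LiteLlm(model="openai/gpt-4o-mini"),
    instruction="Answer the user's question concisely.",
)


async def main() -> None:
    runner = InMemoryRunner(agent=agent, app_name="openstack-demo")
    session = await runner.session_service.create_session(
        app_name="openstack-demo", user_id="user-123"
    )
    message = types.Content(role="user", parts=[types.Part(text="Hello!")])
    async for event in runner.run_async(
        user_id="user-123", session_id=session.id, new_message=message
    ):
        # Paywall prompts arrive as ordinary assistant messages, so printing
        # them verbatim is exactly the "render as-is" behavior described above.
        if event.content and event.content.parts and event.content.parts[0].text:
            print(event.content.parts[0].text)


asyncio.run(main())
```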
Troubleshooting
- 401 or 403 errors: Confirm `OPENAI_API_KEY` is the OpenStack key and that the paywall is in the correct mode for your environment (test vs. live).
- 404 from LiteLLM/ADK: Ensure the model string matches an OpenStack-mapped provider model.
- Missing billing events: Re-check that every request carries a stable `user` field or `X-Openstack-User` header; without it, OpenStack drops the call before metering.
- Local testing on Windows: Follow the ADK LiteLLM note to set `PYTHONUTF8=1` if you hit encoding issues.