Use the OpenStack OpenAI-compatible gateway as a drop-in model backend for the Google Agent Development Kit (ADK). ADK wraps providers through LiteLLM, so you can point it at the OpenStack base URL, keep OpenAI-style request payloads, and let OpenStack handle metering, pricing, and paywall responses.

Prerequisites

  • A configured OpenStack paywall (see Quickstart) with a model mapped to the proxy.
  • An OpenStack API key (sk-openstack-...) and the base URL https://api.openstack.ai/v1.
  • An ADK project with google-adk and litellm installed (per the “Using Cloud & Proprietary Models via LiteLLM” section of the ADK docs).
  • Runtime access to a stable, pseudonymous user identifier that you can forward on every request.

1. Configure the OpenStack proxy

  1. In the OpenStack dashboard, configure your project and connect the provider you want ADK to call (enable billing when you need it).
  2. Map the upstream model name (e.g., openai/gpt-4o-mini) to OpenStack pricing.
  3. Copy your OpenStack API key and note the proxy base URL: https://api.openstack.ai/v1.
  4. Decide how you will supply the OpenStack user context. OpenStack requires either the body-level user field or an X-Openstack-User header on every call.
The proxy returns normal assistant messages when a user must authorize or top up. Render them as-is to give end-users the correct paywall UX with zero branching.

2. Point LiteLLM to OpenStack

ADK’s LiteLLM wrapper expects OpenAI-style environment variables. Set them to your OpenStack values (note the required /v1 suffix, as highlighted in the ADK “Using openai provider” guidance).
pip install litellm  # if you have not already

export OPENSTACK_API_KEY="sk-openstack-…"   # keep server-side
export OPENAI_API_KEY="$OPENSTACK_API_KEY"
export OPENAI_API_BASE="https://api.openstack.ai/v1"
If you manage configuration through .env files or secrets managers, mirror the same values there. The OpenStack key now feeds LiteLLM exactly as an OpenAI key would.
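Optionally, sanity-check the wiring with a direct LiteLLM call before involving ADK. This is a minimal sketch: the model name and user id are placeholders, and it assumes the environment variables above are exported in the current shell.
import litellm

# One OpenAI-style chat completion through the OpenStack gateway.
# LiteLLM picks up OPENAI_API_KEY and OPENAI_API_BASE from the environment.
response = litellm.completion(
    model="openai/gpt-4o-mini",  # any model your project routes
    messages=[{"role": "user", "content": "ping"}],
    user="user_123",  # placeholder: stable pseudonymous end-user id
)
print(response.choices[0].message.content)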

3. Instantiate an ADK agent that targets OpenStack

Create your agent with the LiteLLM wrapper and use the OpenStack-backed model name. Everything else—tools, instructions, streaming—works identically to the standard ADK examples.
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm

openstack_agent = LlmAgent(
    model=LiteLlm(model="openai/gpt-4o-mini"),
    name="openstack_proxy_agent",
    instruction="You are a metered assistant running through OpenStack.",
    # … add tools or additional configuration as needed
)
  • Pick any model identifier that your project routes (e.g., openai/gpt-4o, anthropic/claude-3-haiku).
  • OpenStack can relay to Bring-Your-Own-Key providers or the built-in catalog; the agent code stays unchanged.

4. Forward the OpenStack user identifier

OpenStack enforces spend and renders billing actions per end user (when enabled). Thread your application’s user handle into each ADK session and make sure it reaches LiteLLM as either the user body field or the X-Openstack-User header.
from google.adk.sessions import InMemorySessionService

session_service = InMemorySessionService()
session = await session_service.create_session(
    app_name="my_adk_app",
    user_id="user_123",  # stable pseudonymous id required by OpenStack
)
When you invoke the agent (via an ADK Runner, the Agent Runtime API, or your own orchestration), pass that session.user_id along:
  • If you call LiteLlm directly, include user=session.user_id in the OpenAI-compatible payload.
  • If you rely on headers (e.g., for shared middleware), add X-Openstack-User: session.user_id through LiteLLM’s request configuration or HTTP client hooks.
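A minimal sketch of both options, assuming the LiteLlm wrapper forwards extra keyword arguments (such as user or extra_headers) to the underlying litellm completion call:
from google.adk.models.lite_llm import LiteLlm

# Option A: body-level `user` field, forwarded in the OpenAI-style payload.
body_user_model = LiteLlm(
    model="openai/gpt-4o-mini",
    user=session.user_id,  # assumes the session from the snippet above
)

# Option B: header-based attribution via litellm's extra_headers parameter.
header_user_model = LiteLlm(
    model="openai/gpt-4o-mini",
    extra_headers={"X-Openstack-User": session.user_id},
)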
Either approach satisfies the proxy requirement and keeps ledger entries correctly attributed.
Double-check that retries or parallel tool calls reuse the same user id. If the identifier is missing or changes mid-thread, OpenStack will decline the request.

5. Test the end-to-end flow

  1. Run an ADK interaction against your agent and confirm the response completes normally (a minimal runner sketch follows this list).
  2. In the OpenStack dashboard, verify that the request appears under Proxy → Requests with the correct model, user, and pricing rule.
  3. Trigger a low-balance or unauthorized scenario to see the proxy return an assistant message instructing the user to authorize or top up; ensure your UI renders it verbatim.
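For the first check, one way to drive the agent is through an ADK Runner. This sketch reuses openstack_agent, session_service, and session from the previous steps; the prompt is a placeholder.
from google.adk.runners import Runner
from google.genai import types

runner = Runner(
    agent=openstack_agent,
    app_name="my_adk_app",
    session_service=session_service,
)

# Stream events for one user turn; the final response should arrive
# normally if the key, base URL, and user id are wired up correctly.
async for event in runner.run_async(
    user_id=session.user_id,
    session_id=session.id,
    new_message=types.Content(role="user", parts=[types.Part(text="Hello!")]),
):
    if event.is_final_response():
        print(event.content.parts[0].text)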

Troubleshooting

  • 401 or 403 errors: Confirm OPENAI_API_KEY is the OpenStack key and that the paywall is in the correct mode for your environment (test vs. live).
  • 404 from LiteLLM/ADK: Ensure the model string matches an OpenStack-mapped provider model.
  • Missing billing events: Re-check that every request carries a stable user field or X-Openstack-User header; without it, OpenStack drops the call before metering.
  • Local testing on Windows: Follow the ADK LiteLLM note to set PYTHONUTF8=1 if you hit encoding issues.
Once the proxy is wired up, you can add OpenStack features (webhooks, custom proxy responses, analytics) without revisiting the ADK-side integration—the OpenAI-compatible interface remains stable.