OpenAI-compatible providers

Configure agentgateway to proxy any LLM provider that exposes OpenAI-compatible endpoints, including xAI (Grok), Cohere, Ollama, Together AI, Groq, and many others.

xAI (Grok)

xAI provides OpenAI-compatible endpoints for their Grok models.

# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        urlRewrite:
          authority:
            full: api.x.ai
        backendTLS: {}
        backendAuth:
          key: $XAI_API_KEY
      backends:
      - ai:
          name: xai
          hostOverride: api.x.ai:443
          provider:
            openAI:
              model: grok-2-latest
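With this configuration in place, clients send standard OpenAI-style requests to the gateway, which rewrites the URL to api.x.ai, adds TLS, and attaches the API key. A minimal sketch, assuming the gateway is running locally on port 3000 and XAI_API_KEY is set in the gateway's environment:

```shell
# Send an OpenAI-compatible chat completion request through the gateway.
# No Authorization header is needed here: the backendAuth policy attaches
# the xAI API key on the outgoing request.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-2-latest",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Because the route sets no path rewrite, the client's /v1/chat/completions path is forwarded to xAI unchanged.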

Cohere

Cohere provides an OpenAI-compatible endpoint for their models.

# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        urlRewrite:
          authority:
            full: api.cohere.ai
          path:
            full: "/compatibility/v1/chat/completions"
        backendTLS: {}
        backendAuth:
          key: $COHERE_API_KEY
      backends:
      - ai:
          name: cohere
          hostOverride: api.cohere.ai:443
          provider:
            openAI:
              model: command-r-plus

Ollama (Local)

Ollama runs models locally and provides an OpenAI-compatible API.

# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        urlRewrite:
          authority:
            full: localhost:11434
      backends:
      - ai:
          name: ollama
          hostOverride: localhost:11434
          provider:
            openAI:
              model: llama3.2
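Because Ollama serves plain HTTP on localhost, this route needs no backendTLS or backendAuth policies. A sketch of exercising the route end to end, assuming Ollama is installed and the gateway is running on port 3000:

```shell
# Pull the model locally first (one-time setup).
ollama pull llama3.2

# Send a chat completion request through the gateway; agentgateway
# forwards it to Ollama's OpenAI-compatible API on localhost:11434.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}]
  }'
```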

Together AI

Together AI provides access to open-source models via OpenAI-compatible endpoints.

# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        urlRewrite:
          authority:
            full: api.together.xyz
        backendTLS: {}
        backendAuth:
          key: $TOGETHER_API_KEY
      backends:
      - ai:
          name: together
          hostOverride: api.together.xyz:443
          provider:
            openAI:
              model: meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo

Groq

Groq provides fast inference via OpenAI-compatible endpoints.

# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        urlRewrite:
          authority:
            full: api.groq.com
          path:
            full: "/openai/v1/chat/completions"
        backendTLS: {}
        backendAuth:
          key: $GROQ_API_KEY
      backends:
      - ai:
          name: groq
          hostOverride: api.groq.com:443
          provider:
            openAI:
              model: llama-3.3-70b-versatile

Generic configuration

For any OpenAI-compatible provider, use this template:

# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        urlRewrite:
          authority:
            full: <provider-api-host>
          path:
            full: "<provider-chat-endpoint>"  # Often /v1/chat/completions
        backendTLS: {}  # Include if provider uses HTTPS
        backendAuth:
          key: $PROVIDER_API_KEY
      backends:
      - ai:
          name: <provider-name>
          hostOverride: <provider-api-host>:443
          provider:
            openAI:
              model: <model-name>
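Before routing a new provider through the gateway, it can help to confirm the provider's OpenAI-compatible endpoint directly. A hedged sketch, where the angle-bracket placeholders match the template above and must be substituted for your provider:

```shell
# Export the key so the gateway can resolve $PROVIDER_API_KEY at startup.
export PROVIDER_API_KEY="<your-api-key>"

# Sanity-check the provider endpoint directly, bypassing the gateway.
# Substitute the placeholders for your provider's host, path, and model.
curl "https://<provider-api-host><provider-chat-endpoint>" \
  -H "Authorization: Bearer $PROVIDER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "ping"}]}'
```

If this direct request succeeds, the same request sent to the gateway on port 3000 (without the Authorization header) should behave identically, since backendAuth attaches the key for you.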

Configuration reference

urlRewrite: Configure a policy that rewrites the URL of upstream requests to match your LLM provider.
urlRewrite.authority: The hostname (authority) that incoming requests are forwarded to.
urlRewrite.path: Rewrite the path to the appropriate LLM provider endpoint. Optional if requests on the provider hostname are already sent on this path.
backendTLS: Optionally use TLS when connecting to the LLM provider. Omit for plain-HTTP backends, such as a local Ollama instance.
backendAuth: Optionally attach an API key that authenticates outgoing requests to the LLM provider. If you do not include an API key, each request must authenticate per the LLM provider's requirements.
ai.name: The name of the LLM provider for this AI backend.
ai.hostOverride: Override the hostname. If not set, the hostname defaults to OpenAI (api.openai.com). Optional if the hostname is already set in the URL rewrite policy's authority setting.
ai.provider.openAI.model: Optionally set the model to use for requests. If not set, each request must specify the model to use.