Static configuration

Most agentgateway configuration updates dynamically as you change binds, policies, backends, and so on.

However, a few settings are configured statically at startup. These static settings live under the `config` section.
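For example, a minimal static configuration file might pin the admin, metrics, and readiness addresses at startup. The field names come from the schema below; the address values are illustrative.

```yaml
# Static settings, read once at startup
config:
  adminAddr: "127.0.0.1:15000"    # Admin UI address in ip:port format
  statsAddr: "0.0.0.0:15020"      # Stats/metrics server address
  readinessAddr: "0.0.0.0:15021"  # Readiness probe server address
```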

Static configuration file schema

The following table shows the configuration file schema for static settings at startup. For the full agentgateway schema, covering both dynamic and static configuration, see the reference docs.

| Field | Description |
| --- | --- |
| `config` | |
| `config.enableIpv6` | |
| `config.localXdsPath` | Local XDS path. If not specified, the current configuration file is used. |
| `config.caAddress` | |
| `config.caAuthToken` | |
| `config.xdsAddress` | |
| `config.xdsAuthToken` | |
| `config.namespace` | |
| `config.gateway` | |
| `config.trustDomain` | |
| `config.serviceAccount` | |
| `config.clusterId` | |
| `config.network` | |
| `config.adminAddr` | Admin UI address in the format `ip:port`. |
| `config.statsAddr` | Stats/metrics server address in the format `ip:port`. |
| `config.readinessAddr` | Readiness probe server address in the format `ip:port`. |
| `config.session` | Configuration for stateful session management. |
| `config.session.key` | The signing key to use, for example generated via `openssl rand -hex 32`. If not set, sessions are not encrypted. |
| `config.connectionTerminationDeadline` | |
| `config.connectionMinTerminationDeadline` | |
| `config.workerThreads` | |
| `config.tracing` | |
| `config.tracing.otlpEndpoint` | |
| `config.tracing.headers` | |
| `config.tracing.otlpProtocol` | |
| `config.tracing.fields` | |
| `config.tracing.fields.remove` | |
| `config.tracing.fields.add` | |
| `config.tracing.randomSampling` | Expression that determines the amount of random sampling. Random sampling initiates a new trace span if the incoming request does not already have a trace. Must evaluate to either a float between 0.0 and 1.0 (0-100%) or `true`/`false`. Defaults to `false`. |
| `config.tracing.clientSampling` | Expression that determines the amount of client sampling. Client sampling determines whether to initiate a new trace span if the incoming request already has a trace. Must evaluate to either a float between 0.0 and 1.0 (0-100%) or `true`/`false`. Defaults to `true`. |
| `config.tracing.path` | OTLP path. Defaults to `/v1/traces`. |
| `config.logging` | |
| `config.logging.filter` | |
| `config.logging.fields` | |
| `config.logging.fields.remove` | |
| `config.logging.fields.add` | |
| `config.logging.level` | |
| `config.logging.format` | |
| `config.metrics` | |
| `config.metrics.remove` | |
| `config.metrics.fields` | |
| `config.metrics.fields.add` | |
| `config.backend` | |
| `config.backend.keepalives` | |
| `config.backend.keepalives.enabled` | |
| `config.backend.keepalives.time` | |
| `config.backend.keepalives.interval` | |
| `config.backend.keepalives.retries` | |
| `config.backend.connectTimeout` | |
| `config.backend.poolIdleTimeout` | The maximum duration to keep an idle connection alive. |
| `config.backend.poolMaxSize` | The maximum number of connections allowed in the pool, per hostname. If set, this limits the total number of connections kept alive to any given host. Excess connections are still created; they just do not remain idle. If unset, there is no limit. |
| `config.hbone` | |
| `config.hbone.windowSize` | |
| `config.hbone.connectionWindowSize` | |
| `config.hbone.frameSize` | |
| `config.hbone.poolMaxStreamsPerConn` | |
| `config.hbone.poolUnusedReleaseTimeout` | |
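As a sketch of the tracing fields above, the two sampling options take expressions that evaluate to a float or a boolean. The endpoint and expression values here are illustrative, not defaults.

```yaml
config:
  tracing:
    otlpEndpoint: "http://otel-collector:4317"  # illustrative collector address
    randomSampling: "0.1"   # start a trace for 10% of requests that arrive without one
    clientSampling: "true"  # continue traces the client already started
```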

| Field | Description |
| --- | --- |
| `llm.port` | |
| `llm.models` | Defines the set of models that this gateway can serve. The model name is matched against the model in the user's request; the model sent to the actual LLM can be overridden per model. |
| `llm.models[].name` | The name of the model to match from a user's request. If `params.model` is set, that value is used in the request to the LLM provider; otherwise, the incoming model is used. |
| `llm.models[].params` | Customizes parameters for the outgoing request. |
| `llm.models[].params.model` | The model to send to the provider. If unset, the model from the request is used. |
| `llm.models[].params.apiKey` | An API key to attach to the request. If unset, the key is automatically detected from the environment. |
| `llm.models[].params.awsRegion` | |
| `llm.models[].params.vertexRegion` | |
| `llm.models[].params.vertexProject` | |
| `llm.models[].params.azureHost` | For Azure: the host of the deployment. |
| `llm.models[].params.azureApiVersion` | For Azure: the API version to use. |
| `llm.models[].provider` | The provider of the LLM to connect to. |
| `llm.models[].defaults` | Sets default values for the request. Values that are not present in the request body are set. To override values even when they are set, use `overrides`. |
| `llm.models[].overrides` | Sets values for the request, overriding any existing values. |
| `llm.models[].transformation` | Sets values for the request from CEL expressions, overriding any existing values. |
| `llm.models[].requestHeaders` | Modifies headers in requests to the LLM provider. |
| `llm.models[].requestHeaders.add` | |
| `llm.models[].requestHeaders.set` | |
| `llm.models[].requestHeaders.remove` | |
| `llm.models[].guardrails` | Guardrails to apply to the request or response. |
| `llm.models[].guardrails.request` | |
| `llm.models[].guardrails.request[].(1)regex` | |
| `llm.models[].guardrails.request[].(1)regex.action` | |
| `llm.models[].guardrails.request[].(1)regex.rules` | |
| `llm.models[].guardrails.request[].(1)regex.rules[].(any)builtin` | |
| `llm.models[].guardrails.request[].(1)regex.rules[].(any)pattern` | |
| `llm.models[].guardrails.request[].(1)webhook` | |
| `llm.models[].guardrails.request[].(1)webhook.target` | |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)service` | |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)service.name` | |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)service.name.namespace` | |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)service.name.hostname` | |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)service.port` | |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)host` | Hostname or IP address. |
| `llm.models[].guardrails.request[].(1)webhook.target.(1)backend` | Explicit backend reference. The backend must be defined at the top level. |
| `llm.models[].guardrails.request[].(1)webhook.forwardHeaderMatches[].name` | |
| `llm.models[].guardrails.request[].(1)webhook.forwardHeaderMatches[].value` | |
| `llm.models[].guardrails.request[].(1)webhook.forwardHeaderMatches[].value.(1)exact` | |
| `llm.models[].guardrails.request[].(1)webhook.forwardHeaderMatches[].value.(1)regex` | |
| `llm.models[].guardrails.request[].(1)openAIModeration` | |
| `llm.models[].guardrails.request[].(1)openAIModeration.model` | Model to use. Defaults to `omni-moderation-latest`. |
| `llm.models[].guardrails.request[].(1)bedrockGuardrails` | Configuration for AWS Bedrock Guardrails integration. |
| `llm.models[].guardrails.request[].(1)bedrockGuardrails.guardrailIdentifier` | The unique identifier of the guardrail. |
| `llm.models[].guardrails.request[].(1)bedrockGuardrails.guardrailVersion` | The version of the guardrail. |
| `llm.models[].guardrails.request[].(1)bedrockGuardrails.region` | AWS region where the guardrail is deployed. |
| `llm.models[].guardrails.request[].(1)googleModelArmor` | |
| `llm.models[].guardrails.request[].(1)googleModelArmor.templateId` | The template ID for the Model Armor configuration. |
| `llm.models[].guardrails.request[].(1)googleModelArmor.projectId` | The GCP project ID. |
| `llm.models[].guardrails.request[].(1)googleModelArmor.location` | The GCP region. Defaults to `us-central1`. |
| `llm.models[].guardrails.request[].rejection.body` | |
| `llm.models[].guardrails.request[].rejection.status` | |
| `llm.models[].guardrails.request[].rejection.headers` | Optional headers to add, set, or remove from the rejection response. |
| `llm.models[].guardrails.request[].rejection.headers.add` | |
| `llm.models[].guardrails.request[].rejection.headers.set` | |
| `llm.models[].guardrails.request[].rejection.headers.remove` | |
| `llm.models[].guardrails.response` | |
| `llm.models[].guardrails.response[].(1)regex` | |
| `llm.models[].guardrails.response[].(1)regex.action` | |
| `llm.models[].guardrails.response[].(1)regex.rules` | |
| `llm.models[].guardrails.response[].(1)regex.rules[].(any)builtin` | |
| `llm.models[].guardrails.response[].(1)regex.rules[].(any)pattern` | |
| `llm.models[].guardrails.response[].(1)webhook` | |
| `llm.models[].guardrails.response[].(1)webhook.target` | |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)service` | |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)service.name` | |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)service.name.namespace` | |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)service.name.hostname` | |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)service.port` | |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)host` | Hostname or IP address. |
| `llm.models[].guardrails.response[].(1)webhook.target.(1)backend` | Explicit backend reference. The backend must be defined at the top level. |
| `llm.models[].guardrails.response[].(1)webhook.forwardHeaderMatches[].name` | |
| `llm.models[].guardrails.response[].(1)webhook.forwardHeaderMatches[].value` | |
| `llm.models[].guardrails.response[].(1)webhook.forwardHeaderMatches[].value.(1)exact` | |
| `llm.models[].guardrails.response[].(1)webhook.forwardHeaderMatches[].value.(1)regex` | |
| `llm.models[].guardrails.response[].(1)bedrockGuardrails` | Configuration for AWS Bedrock Guardrails integration. |
| `llm.models[].guardrails.response[].(1)bedrockGuardrails.guardrailIdentifier` | The unique identifier of the guardrail. |
| `llm.models[].guardrails.response[].(1)bedrockGuardrails.guardrailVersion` | The version of the guardrail. |
| `llm.models[].guardrails.response[].(1)bedrockGuardrails.region` | AWS region where the guardrail is deployed. |
| `llm.models[].guardrails.response[].(1)googleModelArmor` | |
| `llm.models[].guardrails.response[].(1)googleModelArmor.templateId` | The template ID for the Model Armor configuration. |
| `llm.models[].guardrails.response[].(1)googleModelArmor.projectId` | The GCP project ID. |
| `llm.models[].guardrails.response[].(1)googleModelArmor.location` | The GCP region. Defaults to `us-central1`. |
| `llm.models[].guardrails.response[].rejection.body` | |
| `llm.models[].guardrails.response[].rejection.status` | |
| `llm.models[].guardrails.response[].rejection.headers` | Optional headers to add, set, or remove from the rejection response. |
| `llm.models[].guardrails.response[].rejection.headers.add` | |
| `llm.models[].guardrails.response[].rejection.headers.set` | |
| `llm.models[].guardrails.response[].rejection.headers.remove` | |
| `llm.models[].matches` | Specifies the conditions under which this model is used, in addition to matching the model name. |
| `llm.models[].matches[].headers` | |
| `llm.models[].matches[].headers[].name` | |
| `llm.models[].matches[].headers[].value` | |
| `llm.models[].matches[].headers[].value.(1)exact` | |
| `llm.models[].matches[].headers[].value.(1)regex` | |
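Putting the `llm` fields together, a model entry might look like the following sketch. The provider value, the model names, and the exact nesting of `rejection` alongside the guardrail type are illustrative assumptions, not verified defaults.

```yaml
llm:
  port: 3000
  models:
    - name: gpt-4o              # model name matched from the user's request
      provider: openai          # illustrative provider value
      params:
        model: gpt-4o-mini      # override the model sent to the provider
      defaults:
        temperature: 0.2        # applied only if absent from the request body
      guardrails:
        request:
          - openAIModeration: {}  # moderate prompts before forwarding
            rejection:
              status: 400         # returned when the guardrail rejects a request
```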

| Field | Description |
| --- | --- |
| `mcp.port` | |
| `mcp.targets` | |
| `mcp.targets[].(1)sse` | |
| `mcp.targets[].(1)sse.host` | |
| `mcp.targets[].(1)sse.port` | |
| `mcp.targets[].(1)sse.path` | |
| `mcp.targets[].(1)mcp` | |
| `mcp.targets[].(1)mcp.host` | |
| `mcp.targets[].(1)mcp.port` | |
| `mcp.targets[].(1)mcp.path` | |
| `mcp.targets[].(1)stdio` | |
| `mcp.targets[].(1)stdio.cmd` | |
| `mcp.targets[].(1)stdio.args` | |
| `mcp.targets[].(1)stdio.env` | |
| `mcp.targets[].(1)openapi` | |
| `mcp.targets[].(1)openapi.host` | |
| `mcp.targets[].(1)openapi.port` | |
| `mcp.targets[].(1)openapi.path` | |
| `mcp.targets[].(1)openapi.schema` | |
| `mcp.targets[].(1)openapi.schema.(any)file` | |
| `mcp.targets[].(1)openapi.schema.(any)url` | |
| `mcp.targets[].name` | |
| `mcp.prefixMode` | |
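Likewise, the `mcp` fields can be combined as in this sketch. Each target picks one of the variant types (`sse`, `mcp`, `stdio`, or `openapi`); the target names, command, host, port, and schema path are illustrative.

```yaml
mcp:
  port: 3000
  targets:
    - name: everything
      stdio:                  # spawn a local MCP server over stdio
        cmd: npx
        args: ["@modelcontextprotocol/server-everything"]
    - name: petstore
      openapi:                # expose an OpenAPI service as MCP tools
        host: localhost
        port: 8080
        schema:
          file: ./openapi.json
```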