AI (LLM) Policies

Attach to: AI Backends only

Agentgateway provides a number of policies that control the behavior of requests to AI (LLM) models. For more information on connecting to LLM providers, see LLM consumption.

| Policy | Details |
| --- | --- |
| `defaults` | Configure default values for settings in the request, such as `temperature: 0.7`. |
| `overrides` | Configure override values for settings in the request. |
| `prompts` | Append or prepend additional prompts to requests. |
| `routes` | Control the type of LLM request, such as OpenAI Completions, Anthropic Messages, or Embeddings. |
| `promptGuard` | Authorize requests based on their prompts. |
| `modelAliases` | Configure aliases for model names. |
| `promptCaching` | Configure automatic caching controls in requests. |
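To illustrate how these policies fit together, the following is a minimal sketch of an agentgateway configuration that attaches AI policies to a route fronting an LLM backend. The overall layout (binds, listeners, routes, backends) follows agentgateway's config format, but the exact field names and nesting shown under `ai` are assumptions for illustration; consult the agentgateway configuration reference for the authoritative schema.

```yaml
# Hypothetical sketch: attach AI policies to a route that fronts an
# OpenAI backend. Field names under "ai" are illustrative assumptions.
binds:
- port: 3000
  listeners:
  - routes:
    - policies:
        ai:
          defaults:
            temperature: 0.7      # applied only when the client omits the setting
          overrides:
            max_tokens: 1024      # always enforced, replacing any client value
          prompts:
            prepend:
            - role: system
              content: "You are a helpful assistant."
          modelAliases:
            fast: gpt-4o-mini     # clients request "fast"; the gateway maps it
      backends:
      - ai:
          provider:
            openAI:
              model: gpt-4o-mini
```

In this sketch, `defaults` fills in settings the client did not send, while `overrides` takes precedence over whatever the client sends, so the two can be combined to set a baseline and a hard limit on the same route.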