LLM Models

An overview of the LLM providers and models you can use with the Voice Agent API.

The agent.think object defines the LLM (Large Language Model) to be used with your Agent. The provider.type field specifies the format or protocol of the API.

For example:

  • open_ai means the API follows OpenAI’s Chat Completions format.
  • This option can be used with OpenAI, Azure OpenAI, or Amazon Bedrock, as long as the endpoint behaves like OpenAI’s Chat Completions API.

You can set your Voice Agent’s LLM model in the Settings message; see the docs for more information.
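
For example, a minimal sketch of a Settings message selecting a managed model (other Settings fields, such as audio and prompt configuration, are omitted here; see the Settings message docs for the full shape):

```json
{
  "type": "Settings",
  "agent": {
    "think": {
      "provider": {
        "type": "open_ai",
        "model": "gpt-4o-mini"
      }
    }
  }
}
```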

Supported LLM providers

| Parameter | open_ai | anthropic | aws_bedrock | google | groq | nvidia |
|---|---|---|---|---|---|---|
| agent.think.provider.type | open_ai | anthropic | aws_bedrock | google | groq | nvidia |
| agent.think.endpoint | optional | optional | required | optional | required | optional |

The agent.think.endpoint is optional or required based on the provider type:

  • For open_ai, anthropic, google, and nvidia, the endpoint field is optional because Deepgram provides managed LLMs for these providers.
  • For groq and aws_bedrock provider types, endpoint is required because Deepgram does not manage those LLMs.
  • If an endpoint is provided, its url is required but headers are optional (see the Example Payload below).

If you don’t specify agent.think.provider.type, the Voice Agent will use Deepgram’s default managed LLMs. For managed LLMs, supported model names are predefined in our configuration.

See the Amazon Bedrock section below for credentials and endpoint configuration. To fetch the current list of providers and models programmatically, see Listing supported models via the API.

Supported LLM models

OpenAI

| Provider | Model | Pricing Tier |
|---|---|---|
| open_ai | gpt-5.5 | Advanced |
| open_ai | gpt-5.4-nano | Standard |
| open_ai | gpt-5.4-mini | Standard |
| open_ai | gpt-5.4 | Advanced |
| open_ai | gpt-5.3-chat-latest | Advanced |
| open_ai | gpt-5.2-chat-latest | Advanced |
| open_ai | gpt-5.2 | Advanced |
| open_ai | gpt-5.1-chat-latest | Advanced |
| open_ai | gpt-5.1 | Advanced |
| open_ai | gpt-5-nano | Standard |
| open_ai | gpt-5-mini | Standard |
| open_ai | gpt-5 | Advanced |
| open_ai | gpt-4.1-nano | Standard |
| open_ai | gpt-4.1-mini | Standard |
| open_ai | gpt-4.1 | Advanced |
| open_ai | gpt-4o-mini | Standard |
| open_ai | gpt-4o | Advanced |
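
Example using Deepgram’s managed OpenAI LLM

A sketch following the same pattern as the managed Google example below; no endpoint is needed because Deepgram hosts the model:

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "open_ai",
    "model": "gpt-4o-mini",
    "temperature": 0.5
  }
}
// ... other settings ...
```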

Anthropic

| Provider | Model | Pricing Tier |
|---|---|---|
| anthropic | claude-sonnet-4-6 | Advanced |
| anthropic | claude-sonnet-4-5 | Advanced |
| anthropic | claude-4-5-haiku-latest | Standard |
| anthropic | claude-3-5-haiku-latest | Standard |
| anthropic | claude-sonnet-4-20250514 | Advanced |
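
Example using Deepgram’s managed Anthropic LLM

A sketch following the same managed-provider pattern, with a model name taken from the table above:

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "anthropic",
    "model": "claude-sonnet-4-5",
    "temperature": 0.5
  }
}
// ... other settings ...
```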

Google

| Provider | Model | Pricing Tier |
|---|---|---|
| google | gemini-3.1-flash-lite-preview | Standard |
| google | gemini-3-flash-preview | Standard |
| google | gemini-3-pro-preview | Advanced |
| google | gemini-2.5-flash | Standard |
| google | gemini-2.0-flash | Standard (Deprecated) |
| google | gemini-2.0-flash-lite | Standard |

Example using Deepgram’s managed Google LLM

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "google",
    "model": "gemini-2.5-flash",
    "temperature": 0.5
  }
}
// ... other settings ...
```

Example using a custom Google endpoint (BYO)

When using a custom endpoint, the model property is not supported. The desired model is specified as part of the endpoint URL instead.

Use API keys from Google AI Studio for Gemini models. Keys from Vertex AI, Workspace Gemini, or Gemini Enterprise will not work with the Agent API.

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "google",
    "temperature": 0.5
  },
  "endpoint": {
    "url": "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:streamGenerateContent?alt=sse",
    "headers": {
      "x-goog-api-key": "xxxxxxxxx"
    }
  }
}
// ... other settings ...
```

NVIDIA

| Provider | Model | Pricing Tier |
|---|---|---|
| nvidia | nemotron-3-nano-30B-A3B | Standard |

Example using Deepgram’s managed NVIDIA LLM

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "nvidia",
    "model": "nemotron-3-nano-30B-A3B",
    "temperature": 0.5
  }
}
// ... other settings ...
```

Groq

| Provider | Model | Pricing Tier |
|---|---|---|
| groq | openai/gpt-oss-20b | Standard |
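
Example using Groq (BYO)

Because Deepgram does not manage Groq LLMs, endpoint is required. A sketch assuming Groq’s OpenAI-compatible Chat Completions endpoint and an API key passed in the authorization header (both the URL and header scheme are assumptions; check Groq’s docs for the exact values):

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "groq",
    "model": "openai/gpt-oss-20b",
    "temperature": 0.7
  },
  "endpoint": {
    "url": "https://api.groq.com/openai/v1/chat/completions",
    "headers": {
      "authorization": "Bearer {{your_groq_api_key}}"
    }
  }
}
// ... other settings ...
```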

Amazon Bedrock

Amazon Bedrock is a BYO provider. Deepgram does not host Bedrock models, so endpoint.url is required and you supply your own AWS credentials. Bedrock model IDs (for example us.anthropic.claude-3-5-sonnet-20241022-v2:0) are passed through to Bedrock as-is.

| Parameter | Value |
|---|---|
| agent.think.provider.type | aws_bedrock |
| agent.think.provider.model | A Bedrock model ID |
| agent.think.provider.credentials | IAM or STS credentials (see below) |
| agent.think.endpoint.url | https://bedrock-runtime.{region}.amazonaws.com/ |

IAM credentials

Use long-lived IAM access keys when your application has stable credentials.

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "aws_bedrock",
    "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    "temperature": 0.7,
    "credentials": {
      "type": "iam",
      "region": "us-east-2",
      "access_key_id": "{{your_access_key_id}}",
      "secret_access_key": "{{your_secret_access_key}}"
    }
  },
  "endpoint": {
    "url": "https://bedrock-runtime.us-east-2.amazonaws.com/"
  }
}
// ... other settings ...
```

STS (temporary) credentials

Use STS credentials when your application assumes a role and rotates tokens. Add the session_token returned by your STS call.

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "aws_bedrock",
    "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    "temperature": 0.7,
    "credentials": {
      "type": "sts",
      "region": "us-east-2",
      "access_key_id": "{{your_temporary_access_key_id}}",
      "secret_access_key": "{{your_temporary_secret_access_key}}",
      "session_token": "{{your_session_token}}"
    }
  },
  "endpoint": {
    "url": "https://bedrock-runtime.us-east-2.amazonaws.com/"
  }
}
// ... other settings ...
```

AWS credentials must have permission to invoke Bedrock models, and the endpoint URL must match the region the Bedrock model is hosted in.
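
For reference, here is a minimal sketch of an IAM policy granting Bedrock invocation. The two actions are the standard Bedrock runtime actions; the wildcard resource is illustrative only, and you should scope it to the specific models you actually invoke:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```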

If you need an OpenAI-compatible proxy in front of Bedrock (for logging, header rewriting, or use of the Bedrock Agents service), see Passing a custom (BYO) LLM through a Cloud Provider below.

Example Payload

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "open_ai",
    "model": "gpt-4o-mini",
    "temperature": 0.7
  },
  "endpoint": { // Optional for managed providers (open_ai, anthropic, google, nvidia); required for groq and aws_bedrock
    "url": "https://api.example.com/llm", // Required if endpoint is provided
    "headers": { // Optional if an endpoint is provided
      "authorization": "Bearer {{token}}"
    }
  }
}
// ... other settings ...
```

Passing a custom (BYO) LLM through a Cloud Provider

For Bring Your Own (BYO) LLMs, any model string provided is accepted without restriction.

Deepgram tests against major LLM providers including OpenAI, Anthropic, and Google. When bringing your own LLM, you have two options:

  • Use an OpenAI-compatible LLM service or gateway. Set provider.type to open_ai and point the endpoint.url to your service. Any LLM endpoint that conforms to the OpenAI Chat Completions API format will work, including third-party LLM gateways.
  • Use a custom endpoint from one of the supported major LLM providers. If you have your own contract or deployment with a supported provider (such as OpenAI, Anthropic, or Google), set the provider.type to match that provider and supply your own endpoint.url and endpoint.headers.

In both cases, configure the provider.type to one of the supported provider values and set the endpoint.url and endpoint.headers fields to the correct values for your provider or gateway.

```json
// ... other settings ...
"think": {
  "provider": {
    "type": "open_ai",
    "model": "gpt-4",
    "temperature": 0.7
  },
  "endpoint": { // Required for a custom LLM
    "url": "https://cloud.provider.com/llm", // Required for a custom LLM
    "headers": { // Optional for a custom LLM
      "authorization": "Bearer {{token}}"
    }
  }
}
// ... other settings ...
```

Using multiple LLM providers

The think object accepts both a single provider and an array of providers. When you supply an array, the Voice Agent uses the providers as an ordered fallback chain: it sends each LLM request to the first provider in the list and automatically falls back to the next provider if the request fails.

How fallback works

  1. The agent sends the request to the first provider in the array.
  2. If that provider returns an error or times out, the agent sends a THINK_REQUEST_FAILED warning over the WebSocket and retries with the next provider.
  3. This continues through every provider in the array.
  4. If all providers fail, the agent sends a FAILED_TO_THINK error and the turn produces no LLM response.

The fallback is per-request — each new conversational turn starts again from the first provider. Provider order matters, so place your preferred provider first and your most reliable fallback last.

Fallback providers do not need to use the same provider.type. You can mix providers (for example, open_ai primary with an anthropic fallback) to maximize availability across independent infrastructure.

Example

```json
{
  "agent": {
    "think": [
      {
        "provider": {
          "type": "open_ai",
          "model": "gpt-4o-mini",
          "temperature": 0.7
        }
      },
      {
        "provider": {
          "type": "anthropic",
          "model": "claude-4-5-haiku-latest",
          "temperature": 0.7
        }
      }
    ]
  }
}
```

Listing supported models via the API

The current list of providers and models is exposed by a public API endpoint. Query it whenever you need to discover which model IDs are valid for which provider, or to programmatically build a model picker.

GET /v1/agent/settings/think/models

```bash
curl https://agent.deepgram.com/v1/agent/settings/think/models
```

Response:

```json
{
  "models": [
    {
      "id": "gpt-5",
      "name": "GPT-5",
      "provider": "open_ai"
    }
  ]
}
```