LLM Models

An overview of the LLM providers and models you can use with the Voice Agent API.

Defines the LLM (Large Language Model) to be used with your Agent. The provider.type field specifies the format or protocol of the API.

For example:

  • open_ai means the API follows OpenAI’s Chat Completions format.
  • This option can be used with OpenAI, Azure OpenAI, or Amazon Bedrock — as long as the endpoint behaves like OpenAI’s Chat Completion API.

You can set your Voice Agent’s LLM model in the Settings Message. See the docs for more information.
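As a sketch, the Settings message carrying the think configuration might be assembled like this. The surrounding message shape ("type": "Settings", "agent": {...}) and the model name are assumptions to be checked against the Settings Message docs:

```python
import json

# Build a Settings message that configures the Voice Agent's LLM.
# The wrapper fields ("type": "Settings", "agent": {...}) are assumed
# from the Settings Message docs; only the think block is shown here.
settings = {
    "type": "Settings",
    "agent": {
        "think": {
            "provider": {
                "type": "open_ai",  # the API format/protocol, not a specific host
                "model": "gpt-4",
                "temperature": 0.7,
            }
        }
    },
}

# Serialize for sending over the agent websocket connection.
payload = json.dumps(settings)
print(payload)
```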

Supported LLM providers

If you don’t specify agent.think.provider.type, the Voice Agent will use Deepgram’s default LLM.

Parameter                    open_ai     anthropic   x_ai
agent.think.provider.type    open_ai     anthropic   x_ai
agent.think.endpoint         optional    optional    required

The agent.think.endpoint is optional or required based on the provider type:

  • For open_ai and anthropic, the endpoint field is optional because Deepgram provides managed LLMs for these providers.
  • For all other provider types, endpoint is required because Deepgram does not manage those LLMs.
  • If an endpoint is provided, its url field is required but headers is optional.
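The rules above can be sketched as a small validation helper. This is a hypothetical function for illustration, not part of any Deepgram SDK:

```python
# Hypothetical validator for the agent.think endpoint rules described above.
# Deepgram provides managed LLMs for open_ai and anthropic, so endpoint is
# optional for those provider types and required for all others (e.g. x_ai).
MANAGED_PROVIDERS = {"open_ai", "anthropic"}

def validate_think(think: dict) -> list[str]:
    """Return a list of validation errors (empty if the config is valid)."""
    errors = []
    provider_type = think.get("provider", {}).get("type")
    endpoint = think.get("endpoint")

    if provider_type not in MANAGED_PROVIDERS and endpoint is None:
        errors.append(f"endpoint is required for provider type {provider_type!r}")

    if endpoint is not None and "url" not in endpoint:
        # When an endpoint is given, url is required; headers stay optional.
        errors.append("endpoint.url is required when endpoint is provided")

    return errors
```

For example, `validate_think({"provider": {"type": "x_ai"}})` reports a missing endpoint, while the same config with an endpoint containing a url passes.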

Example Payload

JSON
{
  // ... other settings ...
  "think": {
    "provider": {
      "type": "open_ai",
      "model": "gpt-4",
      "temperature": 0.7
    },
    "endpoint": { // Optional if LLM provider is open_ai or anthropic. Required for non-Deepgram LLM providers like x_ai
      "url": "https://api.example.com/llm", // Required if endpoint is provided
      "headers": { // Optional if an endpoint is provided
        "authorization": "Bearer {{token}}"
      }
    }
  }
  // ... other settings ...
}

Passing a custom LLM through a Cloud Provider

You can use a custom LLM hosted by a third-party cloud provider by setting provider.type to one of the supported provider values, and setting the endpoint.url and endpoint.headers fields to the correct values for your cloud provider.

JSON
{
  // ... other settings ...
  "think": {
    "provider": {
      "type": "open_ai",
      "model": "gpt-4",
      "temperature": 0.7
    },
    "endpoint": { // Required for a custom LLM
      "url": "https://cloud.provider.com/llm", // Required for a custom LLM
      "headers": { // Optional for a custom LLM
        "authorization": "Bearer {{token}}"
      }
    }
  }
  // ... other settings ...
}
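As one illustration, the think block for an OpenAI-compatible endpoint hosted elsewhere could be built programmatically, pulling the bearer token from the environment. The URL, helper name, and CLOUD_LLM_TOKEN variable are placeholders, not part of the API:

```python
import os

def build_custom_think(url: str, token: str, model: str = "gpt-4") -> dict:
    """Assemble a think config pointing at a custom OpenAI-compatible endpoint.

    Hypothetical helper: provider.type stays "open_ai" because the endpoint
    speaks OpenAI's Chat Completions protocol, even though it is hosted by
    a third-party cloud provider.
    """
    return {
        "provider": {"type": "open_ai", "model": model, "temperature": 0.7},
        "endpoint": {
            "url": url,
            "headers": {"authorization": f"Bearer {token}"},
        },
    }

# Placeholder values; in practice the token would come from a secret store
# or an environment variable such as CLOUD_LLM_TOKEN.
token = os.environ.get("CLOUD_LLM_TOKEN", "example-token")
think = build_custom_think("https://cloud.provider.com/llm", token)
```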