Amazon Bedrock and Deepgram Voice Agent

Use Amazon Bedrock with Deepgram Voice Agent either natively or via a proxy server.

This guide walks you through two methods to use Amazon Bedrock with Deepgram Voice Agent:

  1. Native Integration (Recommended) - Use the built-in aws_bedrock provider type
  2. Proxy Server - Route requests through a proxy server for advanced use cases

Before you Begin

Before you can use Deepgram, you’ll need to create a Deepgram account. Signup is free and includes $200 in free credit and access to all of Deepgram’s features!

Before you start, you’ll need to follow the steps in the Make Your First API Request guide to obtain a Deepgram API key, and configure your environment if you are choosing to use a Deepgram SDK.

Method 1: Native Integration

Native AWS Bedrock support is available directly in the Voice Agent API through the aws_bedrock provider type.

Prerequisites

  • An Amazon Bedrock service account with appropriate permissions
  • AWS credentials (Access Key ID and Secret Access Key)
  • Access to desired Bedrock models in your AWS account

Configuration

Configure your Voice Agent with the aws_bedrock provider type. You can use either IAM credentials or STS (temporary) credentials:

Using IAM Credentials

{
  "agent": {
    "think": {
      "provider": {
        "type": "aws_bedrock",
        "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
        "temperature": 0.7,
        "credentials": {
          "type": "iam",
          "region": "us-east-2",
          "access_key_id": "{{your_access_key_id}}",
          "secret_access_key": "{{your_secret_access_key}}"
        }
      },
      "endpoint": {
        "url": "https://bedrock-runtime.us-east-2.amazonaws.com/"
      }
    }
  }
}
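
This JSON is delivered to the agent inside the initial Settings message on the Voice Agent WebSocket. Below is a minimal Python sketch using the websockets package; the endpoint and message shape follow the V1 Voice Agent API, but a full Settings message also configures audio and the listen and speak providers, so treat this as a partial illustration and check the API reference for the current schema.

# Minimal sketch: open a Voice Agent session and send the aws_bedrock think
# config in the initial Settings message. A complete Settings message also
# includes audio and listen/speak configuration; see the API reference.
import asyncio
import json

import websockets  # pip install websockets

DEEPGRAM_API_KEY = "your_deepgram_api_key"  # placeholder

settings = {
    "type": "Settings",
    "agent": {
        "think": {
            "provider": {
                "type": "aws_bedrock",
                "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
                "temperature": 0.7,
                "credentials": {
                    "type": "iam",
                    "region": "us-east-2",
                    "access_key_id": "your_access_key_id",
                    "secret_access_key": "your_secret_access_key",
                },
            },
            "endpoint": {"url": "https://bedrock-runtime.us-east-2.amazonaws.com/"},
        }
    },
}

async def main():
    async with websockets.connect(
        "wss://agent.deepgram.com/v1/agent/converse",
        # on websockets < 14 this keyword is extra_headers
        additional_headers={"Authorization": f"Token {DEEPGRAM_API_KEY}"},
    ) as ws:
        await ws.send(json.dumps(settings))  # configure the agent first
        async for message in ws:
            print(message)  # server events such as SettingsApplied

asyncio.run(main())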

Using STS (Temporary) Credentials

{
  "agent": {
    "think": {
      "provider": {
        "type": "aws_bedrock",
        "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
        "temperature": 0.7,
        "credentials": {
          "type": "sts",
          "region": "us-east-2",
          "access_key_id": "{{your_temporary_access_key_id}}",
          "secret_access_key": "{{your_temporary_secret_access_key}}",
          "session_token": "{{your_session_token}}"
        }
      },
      "endpoint": {
        "url": "https://bedrock-runtime.us-east-2.amazonaws.com/"
      }
    }
  }
}
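
If you don't already have temporary credentials, you can mint them with AWS STS. A minimal boto3 sketch follows; get_session_token is the standard STS call, while the one-hour duration and the mapping into the credentials block above are illustrative.

# Sketch: obtain temporary credentials from AWS STS and map them into
# the Voice Agent credentials block shown above.
import boto3  # pip install boto3

sts = boto3.client("sts")
creds = sts.get_session_token(DurationSeconds=3600)["Credentials"]

credentials_block = {
    "type": "sts",
    "region": "us-east-2",
    "access_key_id": creds["AccessKeyId"],
    "secret_access_key": creds["SecretAccessKey"],
    "session_token": creds["SessionToken"],
}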

Ensure your AWS credentials have the necessary permissions to invoke Bedrock models. The endpoint URL should match your AWS region.
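
As a starting point, a policy along these lines grants model invocation. This is a minimal sketch; in production, scope Resource down to the specific model ARNs you use.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}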

Method 2: Proxy Server Integration

For advanced use cases or if you need additional processing between Deepgram and Bedrock, you can use a proxy server.

Prerequisites

For the complete code for the proxy used in this guide, check out this repository.

You will need:

  • An understanding of Python and using Python virtual environments.
  • An Amazon Bedrock service account
  • A Deepgram Voice Agent. Here’s our guide on building the voice agent.
  • ngrok to expose a local server, or your own hosted server

Architecture Overview (Proxy Method)

How it works (see the sketch after this list):

  • The proxy logs and forwards agent.think payloads to Bedrock
  • Bedrock handles the LLM logic and returns structured responses
  • Deepgram converts the response to speech and plays it back to the user
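
The repository above contains the complete proxy; the stripped-down sketch below shows the shape. It accepts an OpenAI-style /v1/chat/completions request from Deepgram, forwards the latest user message to a Bedrock agent with boto3's invoke_agent, and wraps the reply in an OpenAI-style response. The invoke_agent call is standard boto3; the route and the trimmed response shape are illustrative, and the real proxy also handles details such as streaming.

# Illustrative proxy sketch (the linked repository differs): bridge Deepgram's
# OpenAI-style think requests to an Amazon Bedrock agent.
import os
import time
import uuid

import boto3
from flask import Flask, jsonify, request  # pip install flask boto3

app = Flask(__name__)
bedrock = boto3.client(
    "bedrock-agent-runtime",
    region_name=os.getenv("AWS_REGION", "us-east-1"),
)

@app.post("/v1/chat/completions")
def chat_completions():
    body = request.get_json()
    user_text = body["messages"][-1]["content"]  # latest user turn

    # invoke_agent returns an event stream of completion chunks; join them
    result = bedrock.invoke_agent(
        agentId=os.environ["AGENT_ID"],
        agentAliasId=os.environ["AGENT_ALIAS_ID"],
        sessionId=str(uuid.uuid4()),
        inputText=user_text,
    )
    answer = "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in result["completion"]
        if "chunk" in event
    )

    # Minimal OpenAI-compatible response so Deepgram can speak the reply
    return jsonify({
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "bedrock-agent"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": answer},
            "finish_reason": "stop",
        }],
    })

if __name__ == "__main__":
    app.run(port=5000)  # matches the ngrok port used later in this guide

Deepgram calls this endpoint exactly as it would call an OpenAI-compatible API, which is why the agent configuration later in this guide uses the open_ai provider type with a custom endpoint URL.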

Set Up the Proxy

Clone the proxy repo

$ git clone https://github.com/deepgram-devs/deepgram-voice-agent-client-llm-proxy.git
$ cd deepgram-voice-agent-client-llm-proxy
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Configure the environment

$ cp .env.example .env

Specify the Bedrock provider details

AGENT_ID=your_bedrock_agent_id
AGENT_ALIAS_ID=your_bedrock_agent_alias_id
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION=us-east-1

Start the server

$ python app.py

Using ngrok

ngrok is recommended for quick development and testing but shouldn’t be used for production instances. Follow these steps to configure ngrok.

Be sure to point ngrok at port 5000, where the proxy listens, by running:

$ ngrok http 5000
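
ngrok prints the public forwarding URL in its console. You can also read it programmatically from the local ngrok agent API, which listens on port 4040 by default (a small sketch assuming the requests package):

# Sketch: fetch the public forwarding URL from the local ngrok agent API.
import requests  # pip install requests

tunnels = requests.get("http://127.0.0.1:4040/api/tunnels").json()["tunnels"]
public_url = tunnels[0]["public_url"]
print(public_url)  # use this as {{host}} in the agent config below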

Configure Deepgram Voice Agent (Proxy Method)

In your Deepgram Voice Agent settings, update the provider, model, and endpoint URL for agent.think. See more configuration examples here.

{
  "agent": {
    "think": {
      "provider": {
        "type": "open_ai",
        "model": "gpt-4o-mini",
        "temperature": 0.7
      },
      "endpoint": {
        "url": "{{host}}/v1/chat/completions",
        "headers": {
          "authorization": "Bearer {{token}}"
        }
      }
    }
  }
}
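
With ngrok, {{host}} is the forwarding URL from the previous step, so the filled-in endpoint might look like the following (the subdomain is a made-up example):

"endpoint": {
  "url": "https://abc123.ngrok-free.app/v1/chat/completions",
  "headers": {
    "authorization": "Bearer {{token}}"
  }
}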

Test the Integration

  1. Launch the proxy and ngrok
  2. Deploy your Deepgram Voice Agent with the updated config
  3. Start a call or session
  4. Observe agent.think payloads and Bedrock responses in proxy logs
  5. Confirm that LLM responses originate from Bedrock (e.g., function calls are reflected in the proxy logs)