Amazon Bedrock and Deepgram Voice Agent
This guide walks you through two methods to use Amazon Bedrock with Deepgram Voice Agent:
- Native Integration (Recommended) - Use the built-in `aws_bedrock` provider type
- Proxy Server - Route requests through a proxy server for advanced use cases
Before you Begin
Before you can use Deepgram, you’ll need to create a Deepgram account. Signup is free and includes $200 in free credit and access to all of Deepgram’s features!
Before you start, you’ll need to follow the steps in the Make Your First API Request guide to obtain a Deepgram API key, and to configure your environment if you choose to use a Deepgram SDK.
Method 1: Native Integration (Recommended)
Native AWS Bedrock support is now available directly in the Voice Agent API using the `aws_bedrock` provider type.
Prerequisites
- An Amazon Bedrock service account with appropriate permissions
- AWS credentials (Access Key ID and Secret Access Key)
- Access to desired Bedrock models in your AWS account
Configuration
Configure your Voice Agent with the `aws_bedrock` provider type. You can use either long-lived IAM credentials or temporary STS credentials:
Using IAM Credentials
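A Settings payload along these lines configures the think stage with IAM credentials. The field names and structure here are illustrative assumptions; confirm the exact schema against the Voice Agent API reference:

```json
{
  "agent": {
    "think": {
      "provider": {
        "type": "aws_bedrock",
        "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "credentials": {
          "type": "iam",
          "region": "us-east-1",
          "access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
          "secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY"
        }
      },
      "endpoint": {
        "url": "https://bedrock-runtime.us-east-1.amazonaws.com"
      }
    }
  }
}
```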
Using STS (Temporary) Credentials
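With STS, the configuration is the same shape but adds the temporary session token. Again, field names are illustrative; verify them against the Voice Agent API reference:

```json
{
  "agent": {
    "think": {
      "provider": {
        "type": "aws_bedrock",
        "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "credentials": {
          "type": "sts",
          "region": "us-east-1",
          "access_key_id": "YOUR_TEMPORARY_ACCESS_KEY_ID",
          "secret_access_key": "YOUR_TEMPORARY_SECRET_ACCESS_KEY",
          "session_token": "YOUR_SESSION_TOKEN"
        }
      },
      "endpoint": {
        "url": "https://bedrock-runtime.us-east-1.amazonaws.com"
      }
    }
  }
}
```

Note that STS credentials expire, so sessions longer than the token's lifetime will need a refreshed token.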
Ensure your AWS credentials have the necessary permissions to invoke Bedrock models. The endpoint URL should match your AWS region.
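For example, a minimal IAM policy granting invoke access could look like the following; in production, scope the `Resource` to the specific model ARNs you use rather than a wildcard:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*"
    }
  ]
}
```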
Method 2: Proxy Server Integration
For advanced use cases or if you need additional processing between Deepgram and Bedrock, you can use a proxy server.
Prerequisites
For the complete code for the proxy used in this guide, see this repository.
You will need:
- An understanding of Python and using Python virtual environments.
- An Amazon Bedrock service account
- A Deepgram Voice Agent. Here’s our guide on building the voice agent.
- ngrok to allow access to a local server OR your own hosted server
Architecture Overview (Proxy Method)
How it works:
- The proxy logs and forwards `agent.think` payloads to Bedrock
- Bedrock handles the LLM logic and returns structured responses
- Deepgram converts the response to speech and plays it back to the user
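The flow above can be sketched in Python. This is a minimal illustration, not the repository's actual code: the function names, the default model ID, and the use of Bedrock's Converse API are assumptions to verify against the repo.

```python
def to_converse(payload):
    """Reshape an OpenAI-style chat payload (what agent.think sends)
    into the system/messages fields Bedrock's Converse API expects."""
    system, messages = [], []
    for m in payload["messages"]:
        if m["role"] == "system":
            system.append({"text": m["content"]})
        else:
            messages.append({"role": m["role"],
                             "content": [{"text": m["content"]}]})
    return system, messages


def forward_to_bedrock(payload,
                       model_id="anthropic.claude-3-5-sonnet-20240620-v1:0"):
    """Forward one think request to Bedrock and return the assistant text."""
    import boto3  # lazy import so to_converse stays usable without AWS deps
    client = boto3.client("bedrock-runtime")
    system, messages = to_converse(payload)
    resp = client.converse(modelId=model_id, system=system, messages=messages)
    return resp["output"]["message"]["content"][0]["text"]
```

The real proxy wraps logic like this in an HTTP endpoint that the Voice Agent's `agent.think` step calls, logging each payload as it passes through.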
Set Up the Proxy
Clone the proxy repo
Configure the environment
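The proxy typically reads its AWS settings from a `.env` file. The variable names below are assumptions; check them against the repository's example config:

```
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=us-east-1
```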
Specifying Bedrock provider details
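For instance, the Bedrock model the proxy should invoke might be set via an environment variable (name assumed; confirm against the repository):

```
BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20240620-v1:0
```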
Start the server
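Assuming the repository uses a conventional Python entrypoint (check its README for the actual command):

```shell
python app.py
```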
Using ngrok
ngrok is recommended for quick development and testing but shouldn’t be used for production instances. Follow these steps to configure ngrok.
Be sure to set the port correctly to `5000` by running:
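For a proxy listening on port 5000, that is:

```shell
ngrok http 5000
```

ngrok will print a public forwarding URL; you'll use it as the endpoint URL in the next step.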
Configure Deepgram Voice Agent (Proxy Method)
In your Deepgram Voice Agent settings, update the provider, model, and endpoint URL for `agent.think`.
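For example, pointing the think stage at the proxy through its public ngrok URL might look like this. The ngrok hostname is a placeholder, and the provider type and field names are assumptions to confirm against the Voice Agent API reference:

```json
{
  "agent": {
    "think": {
      "provider": {
        "type": "open_ai",
        "model": "anthropic.claude-3-5-sonnet-20240620-v1:0"
      },
      "endpoint": {
        "url": "https://your-subdomain.ngrok.app/v1/chat/completions"
      }
    }
  }
}
```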
See more examples of configuration here.
Test the Integration
- Launch the proxy and ngrok
- Deploy your Deepgram Voice Agent with the updated config
- Start a call or session
- Observe `agent.think` payloads and Bedrock responses in the proxy logs
- Confirm that LLM responses originate from Bedrock (e.g., function calls are reflected)