Amazon Bedrock and Deepgram Voice Agent
This guide walks you through setting up a proxy server that routes Deepgram Voice Agent’s LLM requests to Amazon Bedrock, enabling you to use Bedrock’s language models with Deepgram’s Voice Agent.
Before you Begin
Before you can use Deepgram, you’ll need to create a Deepgram account. Signup is free and includes $200 in free credit and access to all of Deepgram’s features!
Before you start, you’ll need to follow the steps in the Make Your First API Request guide to obtain a Deepgram API key, and configure your environment if you choose to use a Deepgram SDK.
1. Prerequisites
For the complete code for the proxy used in this guide, check out this repository.
You will need:
- An understanding of Python and using Python virtual environments.
- An AWS account with access to Amazon Bedrock
- A Deepgram Voice Agent. Here’s our guide on building a Voice Agent.
- ngrok to allow access to a local server OR your own hosted server
2. Architecture Overview
How it works:
- The proxy logs and forwards `agent.think` payloads to Bedrock
- Bedrock handles the LLM logic and returns structured responses
- Deepgram converts the response into speech back to the user
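To make this flow concrete, here is a minimal sketch of such a proxy (not the repository’s actual code): a Flask app that exposes an OpenAI-style `/v1/chat/completions` endpoint, which `agent.think` can point at, and forwards each request to Bedrock through boto3’s Converse API. The endpoint path, port, region, and model ID here are illustrative assumptions; the repository linked above is the reference implementation.

```python
# Minimal proxy sketch: OpenAI-style endpoint in, Bedrock Converse out.
import time

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # example model ID

@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    body = request.get_json()
    app.logger.info("agent.think payload: %s", body)  # log what Deepgram sent

    # Map OpenAI-style messages to the Converse format. Converse takes
    # system prompts separately from user/assistant turns (a real proxy
    # would also handle tool/function-call messages).
    system, messages = [], []
    for m in body.get("messages", []):
        if m["role"] == "system":
            system.append({"text": m["content"]})
        else:
            messages.append({"role": m["role"], "content": [{"text": m["content"]}]})

    kwargs = {"modelId": MODEL_ID, "messages": messages}
    if system:
        kwargs["system"] = system
    resp = bedrock.converse(**kwargs)
    text = resp["output"]["message"]["content"][0]["text"]

    # Return an OpenAI-style response so the Voice Agent can parse it.
    return jsonify({
        "id": "chatcmpl-proxy",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "bedrock"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    })

if __name__ == "__main__":
    app.run(port=5000)
```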
3. Set Up the Proxy
Clone the proxy repo
Configure the environment
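The proxy reads its settings from environment variables, typically via a `.env` file. A sketch of the kind of validation involved, assuming hypothetical variable names; check the repository for the exact keys it expects:

```python
import os
import sys

# The AWS_* names are the standard boto3 credential variables;
# BEDROCK_MODEL_ID is a hypothetical key for illustration.
REQUIRED = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_REGION",
    "BEDROCK_MODEL_ID",
]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit(f"Missing environment variables: {', '.join(missing)}")
```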
Specify Bedrock provider details
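Bedrock addresses models by vendor-prefixed model IDs, and the model must be enabled in the region you use. For illustration (verify current IDs and regional availability in the Bedrock console):

```python
# Example Bedrock model IDs (check availability in your region):
#   anthropic.claude-3-5-sonnet-20240620-v1:0
#   anthropic.claude-3-haiku-20240307-v1:0
#   meta.llama3-8b-instruct-v1:0
BEDROCK_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"
BEDROCK_REGION = "us-east-1"  # any region where the model is enabled
```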
Start the server
Using ngrok
ngrok is recommended for quick development and testing but shouldn’t be used for production instances. Follow these steps to configure ngrok.
Be sure to point ngrok at port 5000 by running `ngrok http 5000`; the HTTPS forwarding URL it prints is what you’ll use in your agent configuration.
4. Configure Deepgram Voice Agent
In your Deepgram Voice Agent settings, update the provider, model, and endpoint URL for `agent.think`.
See more examples of configuration here.
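As an illustration, the think block of a Settings message might look like the following Python dict, sent over the agent websocket. The field names follow Deepgram’s Voice Agent settings schema at a high level, but verify the exact shape against the configuration examples linked above; the URL, header, and model values are placeholders.

```python
# Sketch of the agent.think portion of a Settings message.
# Verify field names against Deepgram's Voice Agent settings docs;
# the ngrok URL and model ID below are placeholders.
settings = {
    "type": "Settings",
    "agent": {
        "think": {
            "provider": {
                "type": "open_ai",  # the proxy speaks the OpenAI format
                "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
            },
            "endpoint": {
                "url": "https://<your-ngrok-subdomain>.ngrok.app/v1/chat/completions",
                "headers": {"authorization": "Bearer <token-if-your-proxy-requires-one>"},
            },
        }
    },
}
```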
5. Test the Integration
- Launch the proxy and ngrok
- Deploy your Deepgram Voice Agent with the updated config
- Start a call or session
- Observe `agent.think` payloads and Bedrock responses in the proxy logs
- Confirm LLM responses originate from Bedrock (e.g., function calls reflected)
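Before wiring up the agent, you can also sanity-check the proxy directly. A short sketch using `requests`, assuming the proxy exposes the OpenAI-style endpoint on port 5000 as in the earlier sketch; adjust the path and payload to match the repository’s actual API:

```python
import requests

# Send a minimal chat-completion request straight to the proxy and
# print the assistant text that came back from Bedrock.
resp = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={
        "model": "bedrock",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```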