Amazon Bedrock and Deepgram Voice Agent

Route Deepgram Voice Agent's LLM traffic to Amazon Bedrock through a proxy server.

This guide walks you through setting up a proxy server that routes Deepgram Voice Agent's LLM requests to Amazon Bedrock, enabling you to use Bedrock's language models with Deepgram's Voice Agent.

Before You Begin

Before you can use Deepgram, you’ll need to create a Deepgram account. Signup is free and includes $200 in free credit and access to all of Deepgram’s features!

Before you start, follow the steps in the Make Your First API Request guide to obtain a Deepgram API key, and configure your environment if you choose to use a Deepgram SDK.

1. Prerequisites

For the complete code for the proxy used in this guide, check out this repository.

You will need:

  • An understanding of Python and of Python virtual environments
  • An AWS account with access to Amazon Bedrock, plus a Bedrock agent (you'll need its agent ID and alias ID for the .env file below)
  • A Deepgram Voice Agent. Here's our guide on building the voice agent.
  • ngrok to expose a local server, or your own hosted server

2. Architecture Overview

How it works:

  • The proxy logs and forwards agent.think payloads to Bedrock (a minimal sketch follows this list)
  • Bedrock handles the LLM logic and returns structured responses
  • Deepgram converts the response to speech and plays it back to the user
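
To make the flow concrete, here's a minimal sketch of the forwarding step, assuming a Flask app and boto3's bedrock-agent-runtime client. The route, variable names, and response shape below are illustrative; the actual repository's implementation may differ.

import os
import uuid

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)

# bedrock-agent-runtime is the boto3 client used to invoke Bedrock agents.
bedrock = boto3.client("bedrock-agent-runtime", region_name=os.environ["AWS_REGION"])

@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    body = request.get_json()
    # Deepgram sends an OpenAI-style payload; take the latest user message.
    user_text = body["messages"][-1]["content"]

    # Forward the text to the Bedrock agent.
    response = bedrock.invoke_agent(
        agentId=os.environ["AGENT_ID"],
        agentAliasId=os.environ["AGENT_ALIAS_ID"],
        sessionId=str(uuid.uuid4()),  # illustrative; a real proxy would reuse sessions
        inputText=user_text,
    )

    # invoke_agent streams its completion; concatenate the chunks.
    text = "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

    # Reply in an OpenAI-compatible shape so the Voice Agent can parse it.
    return jsonify({"choices": [{"message": {"role": "assistant", "content": text}}]})

if __name__ == "__main__":
    app.run(port=5000)

Returning an OpenAI-compatible payload is what lets the Voice Agent treat the proxy as a drop-in open_ai provider, as configured in step 4.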

3. Set Up the Proxy

Clone the proxy repo

$ git clone https://github.com/deepgram-devs/deepgram-voice-agent-client-llm-proxy.git
$ cd deepgram-voice-agent-client-llm-proxy
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Configure the environment

$ cp .env.example .env

Specify the Bedrock provider details in .env

AGENT_ID=your_bedrock_agent_id
AGENT_ALIAS_ID=your_bedrock_agent_alias_id
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION=us-east-1
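
Before starting the server, you can optionally confirm that the credentials in .env resolve to a valid AWS identity. This check is illustrative and not part of the proxy; it assumes python-dotenv and boto3 are installed.

# Optional sanity check (illustrative, not part of the proxy): load .env
# and confirm the AWS credentials resolve to a valid identity.
import boto3
from dotenv import load_dotenv

load_dotenv()  # exports AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION
print(boto3.client("sts").get_caller_identity()["Arn"])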

Start the server

$ python app.py
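
With the server running, you can smoke-test it by sending an OpenAI-style request directly. This is an illustrative test that assumes the requests package and the default port of 5000:

# Illustrative smoke test: POST an OpenAI-style chat payload to the
# local proxy and print the response.
import requests

resp = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Hello, agent!"}]},
)
print(resp.status_code, resp.json())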

Using ngrok

ngrok is recommended for quick development and testing but shouldn’t be used for production instances. Follow these steps to configure ngrok.

Be sure to point ngrok at the proxy's port (5000) by running:

$ ngrok http 5000

4. Configure Deepgram Voice Agent

In your Deepgram Voice Agent settings, update the provider, model, and endpoint URL for agent.think. See more examples of configuration here.

1"agent": {
2 "think": {
3 "provider": {
4 "type": "open_ai",
5 "model": "gpt-4o-mini",
6 "temperature": 0.7
7 },
8 "endpoint": {
9 "url": "{{host}}/v1/chat/completions",
10 "headers": {
11 "authorization": "Bearer {{token}}"
12 }
13 }
14 }
15}
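
The {{host}} and {{token}} placeholders must be replaced with real values: {{host}} with your ngrok forwarding URL (or your hosted server's URL) and {{token}} with whatever bearer token your proxy expects. As an illustrative sketch (the variable names and URL below are assumptions), you could build the block in Python:

# Illustrative: build the think config with {{host}} replaced by your
# ngrok forwarding URL (the URL below is a placeholder; copy yours
# from the ngrok console output).
import json

ngrok_url = "https://your-subdomain.ngrok-free.app"

think = {
    "provider": {"type": "open_ai", "model": "gpt-4o-mini", "temperature": 0.7},
    "endpoint": {
        "url": f"{ngrok_url}/v1/chat/completions",
        "headers": {"authorization": "Bearer your_token_here"},
    },
}
print(json.dumps({"agent": {"think": think}}, indent=2))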

5. Test the Integration

  1. Launch the proxy and ngrok
  2. Deploy your Deepgram Voice Agent with the updated config
  3. Start a call or session
  4. Observe the agent.think payloads and Bedrock responses in the proxy logs
  5. Confirm that the LLM responses originate from Bedrock (e.g., your Bedrock agent's function calls are reflected in the responses)