Getting Started
An introduction to using Deepgram’s Aura Streaming Text-to-Speech WebSocket API to convert streaming text into audio.
Aura-2 is currently available for the TTS REST API only. Websocket support is coming soon.
This guide will walk you through how to turn streaming text into speech with Deepgram’s text-to-speech WebSocket API.
Before you start, you’ll need to follow the steps in the Make Your First API Request guide to obtain a Deepgram API key, and to configure your environment if you choose to use a Deepgram SDK.
Text-to-Speech Implementations
Deepgram has several SDKs that can make the API easier to use. Follow these steps to use the SDK of your choice to make a Deepgram TTS request.
Add Dependencies
Make the Request with the SDK
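The exact calls differ by SDK, so as a language-neutral reference the sketch below shows the underlying flow the SDKs wrap, using a plain WebSocket client in Python (the third-party websocket-client package rather than a Deepgram SDK). The endpoint query parameters, the example model name, and the Flushed acknowledgment used to end the read loop are assumptions for illustration; check the API Reference for the exact values and server messages.

```python
# Not the Deepgram SDK: a minimal raw-WebSocket sketch of the same request flow.
# pip install websocket-client
import json
import os

import websocket

# Illustrative endpoint parameters -- see the API Reference for supported values.
DEEPGRAM_URL = (
    "wss://api.deepgram.com/v1/speak"
    "?model=aura-asteria-en&encoding=linear16&sample_rate=24000"
)

ws = websocket.create_connection(
    DEEPGRAM_URL,
    header={"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"},
)

# Queue text, then flush to have the queued text synthesized.
ws.send(json.dumps({"type": "Speak", "text": "Hello, world!"}))
ws.send(json.dumps({"type": "Flush"}))

# Read frames: binary frames carry audio, text frames carry JSON control messages.
with open("output.raw", "wb") as audio_file:
    while True:
        opcode, data = ws.recv_data()
        if opcode == websocket.ABNF.OPCODE_BINARY:
            audio_file.write(data)        # raw linear16 audio bytes
        elif opcode == websocket.ABNF.OPCODE_TEXT:
            message = json.loads(data)    # e.g. metadata, warnings, flush acknowledgment
            if message.get("type") == "Flushed":   # assumed acknowledgment message
                break
        else:
            break                         # connection closed by the server

# Close the connection when the conversation is over.
ws.send(json.dumps({"type": "Close"}))
ws.close()
```

The same Speak, Flush, and Close messages appear step by step in the workflow section below.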
To learn more, check out our audio format tips for WebSockets in the TTS Chunking for Optimization Guide and the Audio Format Combinations we offer.
Text-to-Speech Workflow
Below is a high-level workflow for obtaining an audio stream from user-provided text.
Establish a WebSocket Connection
To establish a connection, you must provide a few query parameters on the URL to describe the type of audio you want. Check the API Reference for the available values of the audio model (which controls the voice), the encoding, and the sample rate.
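For example, the query string might be assembled like this in Python; the model, encoding, and sample rate shown are placeholders, so pick the values you need from the API Reference:

```python
from urllib.parse import urlencode

# Example values only -- see the API Reference for supported models, encodings, and rates.
params = {
    "model": "aura-asteria-en",   # selects the voice
    "encoding": "linear16",       # encoding of the returned audio stream
    "sample_rate": 24000,         # sample rate in Hz
}
url = "wss://api.deepgram.com/v1/speak?" + urlencode(params)

# Connect with your API key in the Authorization header, e.g. with websocket-client:
#   ws = websocket.create_connection(url, header={"Authorization": "Token YOUR_API_KEY"})
```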
Sending Text and Retrieving Audio
Send the text you want transformed into audio using the WebSocket Speak message below:
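Continuing the Python sketch above (where ws is the open connection and json is imported), the Speak message is a small JSON text frame:

```python
ws.send(json.dumps({
    "type": "Speak",
    "text": "Hello, how can I help you today?",
}))
```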
When you have queued enough text, you can obtain the corresponding audio by sending a Flush command:
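In the same sketch, the Flush command is another one-line JSON text frame:

```python
ws.send(json.dumps({"type": "Flush"}))
```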
Upon successfully sending the Flush command, you will receive a byte stream of synthesized audio over the WebSocket connection. Its format is determined by the encoding and sample rate provided when the connection was established.
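Because the stream is raw audio rather than a container format, you may want to wrap it before playing it back. A minimal sketch using Python's standard wave module, assuming the connection was opened with encoding=linear16 and sample_rate=24000 and the binary frames were concatenated into audio_bytes:

```python
import wave

# audio_bytes: the concatenated binary frames received after the Flush.
with wave.open("output.wav", "wb") as wav:
    wav.setnchannels(1)         # TTS output is a single (mono) channel
    wav.setsampwidth(2)         # linear16 = 16-bit samples = 2 bytes each
    wav.setframerate(24000)     # must match the sample_rate used on the URL
    wav.writeframes(audio_bytes)
```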
Closing the Connection
When you are finished with the WebSocket, you can close the connection by sending the following Close command:
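In the Python sketch, that is one last JSON text frame followed by closing the socket:

```python
ws.send(json.dumps({"type": "Close"}))
ws.close()
```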
Limits
Keep these limits in mind when making a Deepgram text-to-speech request.
Use One WebSocket per Conversation
If you are building for conversational AI use cases where a human is talking to a TTS agent, a single WebSocket connection per conversation is required. After you establish a connection, you cannot change the voice or media output settings.
Character Limits
The text input of each Speak message is currently limited to 2000 characters. If the text payload is 2001 characters or more, you will receive an error and no audio will be generated.
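If your text can be longer than that, split it across multiple Speak messages before sending. A minimal sketch, continuing the earlier Python code (long_text is whatever text you have queued up; packing whole words is just one reasonable splitting strategy):

```python
MAX_SPEAK_CHARS = 2000  # per-Speak-message limit described above

def split_for_speak(text: str, limit: int = MAX_SPEAK_CHARS) -> list[str]:
    """Pack whole words into chunks no longer than `limit` characters."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word[:limit]  # truncate a single word that exceeds the limit
    if current:
        chunks.append(current)
    return chunks

for chunk in split_for_speak(long_text):
    ws.send(json.dumps({"type": "Speak", "text": chunk}))
```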
Character Throughput Limits
The throughput limit is 12,000 characters per 2 minutes, measured by the number of characters sent over the WebSocket.
Timeout Limits
An active WebSocket has a 60-minute timeout period from the initial connection, and this timeout applies even to connections that are actively being used. If you need a connection for longer than 60 minutes, create a new WebSocket connection to Deepgram.
Flush Message Limits
You can send the Flush message at most 20 times every 60 seconds. After that, you will receive a warning stating that no further flush messages can be processed until the 60-second window has passed.
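One way to respect this limit client-side is to track recent flush timestamps and wait whenever the window is full. A minimal sketch (the 20-per-60-seconds figures come from this guide):

```python
import json
import time
from collections import deque

MAX_FLUSHES = 20
WINDOW_SECONDS = 60.0
recent_flushes = deque()  # monotonic timestamps of recent Flush messages

def send_flush(ws) -> None:
    """Send a Flush, sleeping first if the last 60 seconds already saw 20 flushes."""
    now = time.monotonic()
    while recent_flushes and now - recent_flushes[0] > WINDOW_SECONDS:
        recent_flushes.popleft()                 # drop timestamps outside the window
    if len(recent_flushes) >= MAX_FLUSHES:
        time.sleep(WINDOW_SECONDS - (now - recent_flushes[0]))
        recent_flushes.popleft()                 # the oldest flush has now aged out
    recent_flushes.append(time.monotonic())
    ws.send(json.dumps({"type": "Flush"}))
```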
Rate Limits
For information on Deepgram’s Concurrency Rate Limits, refer to our API Rate Limits Documentation.
Handling Rate Limits
If the number of in-progress requests for a project meets or exceeds the rate limit, new requests will receive a 429: Too Many Requests error.
For suggestions on handling Concurrency Rate Limits, refer to our Working with Concurrency Rate Limits guide.
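As a starting point, here is a hedged sketch of retrying the WebSocket handshake with exponential backoff when it is rejected with a 429. It uses the same third-party websocket-client package as the earlier sketches; the exception class and its status_code attribute belong to that library, so adapt the error handling to whatever client or SDK you actually use:

```python
import time

import websocket  # pip install websocket-client

def connect_with_backoff(url: str, headers: dict, max_attempts: int = 5):
    """Retry the handshake with exponential backoff while Deepgram returns 429."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return websocket.create_connection(url, header=headers)
        except websocket.WebSocketBadStatusException as exc:
            if exc.status_code != 429 or attempt == max_attempts:
                raise                   # not a rate-limit rejection, or out of retries
            time.sleep(delay)           # back off before the next attempt
            delay *= 2                  # 1 s, 2 s, 4 s, ...
```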
What’s Next?
Now that you’ve transformed text into speech with Deepgram’s API, enhance your knowledge by exploring the following areas.
Read the Feature Guides
Deepgram’s features help you customize your request to produce the best output for your use case. Here are a few guides that can help:
Starter Apps
- Clone and run one of our Starter App repositories to see a full application with a frontend UI and a backend server sending text to Deepgram to be converted into audio.