Text to Speech Streaming
An overview of the Deepgram .NET SDK and Deepgram streaming text-to-speech.
Installing the SDK
To begin using Deepgram's Text-to-Speech functionality, you need to install the Deepgram .NET SDK in your existing project. You can do this using the following command:
# Install the Deepgram .NET SDK
# https://github.com/deepgram/deepgram-dotnet-sdk
dotnet add package Deepgram
# Optionally, install the Deepgram Microphone package
dotnet add package Deepgram.Microphone
Make a Deepgram Text-to-Speech Request
using System.Text;
using Deepgram.Models.Authenticate.v1;
using Deepgram.Models.Speak.v1.WebSocket;
using Deepgram.Logger;
namespace SampleApp
{
class Program
{
static async Task Main(string[] args)
{
// Initialize Library with default logging
Library.Initialize();
// Use the client factory with an API key set via the "DEEPGRAM_API_KEY" environment variable
var speakClient = ClientFactory.CreateSpeakWebSocketClient();
// Subscribe to the events we want to handle (Open, Metadata, Audio, etc.)
speakClient.Subscribe(new EventHandler<OpenResponse>((sender, e) =>
{
Console.WriteLine($"\n\n----> {e.Type} received");
}));
speakClient.Subscribe(new EventHandler<MetadataResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
Console.WriteLine($"----> RequestId: {e.RequestId}");
}));
speakClient.Subscribe(new EventHandler<AudioResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
if (e.Stream != null)
{
using (BinaryWriter writer = new BinaryWriter(File.Open("output.mp3", FileMode.Append)))
{
writer.Write(e.Stream.ToArray());
}
}
}));
speakClient.Subscribe(new EventHandler<FlushedResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
}));
speakClient.Subscribe(new EventHandler<ClearedResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
}));
speakClient.Subscribe(new EventHandler<CloseResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
}));
speakClient.Subscribe(new EventHandler<UnhandledResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
}));
speakClient.Subscribe(new EventHandler<WarningResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received");
}));
speakClient.Subscribe(new EventHandler<ErrorResponse>((sender, e) =>
{
Console.WriteLine($"----> {e.Type} received. Error: {e.Message}");
}));
// Start the connection
var speakSchema = new SpeakSchema();
await speakClient.Connect(speakSchema);
// Send some Text to convert to audio
speakClient.SpeakWithText("Hello World!");
// Flush the audio
speakClient.Flush();
// Wait for the user to press a key
Console.WriteLine("\n\nPress any key to stop and exit...\n\n\n");
Console.ReadKey();
// Stop the connection
await speakClient.Stop();
// Terminate Libraries
Library.Terminate();
}
}
}
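The SpeakSchema passed to Connect is where you select the voice and output format. The snippet below is only a minimal sketch: it assumes the schema exposes Model, Encoding, and SampleRate properties that mirror the model, encoding, and sample_rate query parameters, so verify the property names against the SDK version you installed.
// A minimal sketch, not a drop-in replacement for the example above.
// Assumes SpeakSchema exposes Model, Encoding, and SampleRate properties
// mirroring the model, encoding, and sample_rate query parameters.
var speakSchema = new SpeakSchema()
{
    Model = "aura-asteria-en",  // example Aura voice
    Encoding = "linear16",      // raw, containerless PCM (see the next section)
    SampleRate = 16000          // 16 kHz output
};
await speakClient.Connect(speakSchema);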
Audio Output Streaming
The audio bytes representing the converted text are streamed to the client via the AudioResponse event shown above, using the callback function you provide.
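If you would rather not reopen the output file for every chunk, one simple variation is to buffer the chunks in memory and write the file once at the end. This is only a sketch of that alternative handler, reusing the speakClient from the example above:
// A sketch of an alternative AudioResponse handler: collect the audio chunks
// in a MemoryStream and write the file once after the connection is stopped.
var audioBuffer = new MemoryStream();
speakClient.Subscribe(new EventHandler<AudioResponse>((sender, e) =>
{
    if (e.Stream != null)
    {
        e.Stream.WriteTo(audioBuffer);   // append this chunk to the in-memory buffer
    }
}));
// ... later, once the connection has been stopped:
File.WriteAllBytes("output.mp3", audioBuffer.ToArray());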
It should be noted that these audio bytes are:
- Containerless audio. Depending on the encoding value chosen, only the raw audio data is sent. For example, if you choose linear16 as your encoding, a WAV header will not be sent (see the sketch after this list). Please see the Tips and Tricks for more information.
- Not of a standard size/length when received by the client. This is because the text is broken down into sounds representing the speech, and certain sounds chained together to form fragments of spoken words differ in length and content.
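If you request linear16 and want a file that standard audio players can open, one option is to write a WAV header yourself before appending the raw PCM bytes. The helper below is a sketch, not part of the SDK; it assumes 16-bit mono PCM and leaves the two size fields as placeholders to patch once you know the final data length:
// A sketch (not the SDK's API): write a 44-byte WAV header for 16-bit PCM
// so players can open the otherwise containerless audio. The two size
// fields are placeholders to be patched after all audio has been written.
static void WriteWavHeader(BinaryWriter writer, int sampleRate, short channels = 1, short bitsPerSample = 16)
{
    writer.Write(Encoding.ASCII.GetBytes("RIFF"));
    writer.Write(0);                                            // placeholder: file size - 8
    writer.Write(Encoding.ASCII.GetBytes("WAVE"));
    writer.Write(Encoding.ASCII.GetBytes("fmt "));
    writer.Write(16);                                           // fmt chunk size for PCM
    writer.Write((short)1);                                     // audio format 1 = PCM
    writer.Write(channels);
    writer.Write(sampleRate);
    writer.Write(sampleRate * channels * (bitsPerSample / 8));  // byte rate
    writer.Write((short)(channels * (bitsPerSample / 8)));      // block align
    writer.Write(bitsPerSample);
    writer.Write(Encoding.ASCII.GetBytes("data"));
    writer.Write(0);                                            // placeholder: data size
}
You would call this once when the output file is created, append the audio chunks after it, then patch the two size fields when the stream is closed.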
Depending on your use case for the generated audio bytes, one of these guides can help you put them to use (a minimal chunking sketch also follows this list):
- Sending LLM Outputs to a WebSocket
- Text Chunking for Streaming TTS Optimization
- Handling Audio Issues in Text To Speech
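As a small taste of the chunking approach covered in the guides above, the sketch below splits a longer passage into sentences and sends each one over the open connection before flushing. The splitting rule is deliberately naive and only illustrative; it reuses the speakClient from the example above.
using System.Text.RegularExpressions;
// A naive, illustrative sketch: send the text sentence by sentence over the
// open WebSocket, then flush once so the buffered text is rendered to audio.
var text = "Hello World! This is a longer passage. It is sent in smaller pieces.";
foreach (var sentence in Regex.Split(text, @"(?<=[.!?])\s+"))
{
    speakClient.SpeakWithText(sentence);
}
speakClient.Flush();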
Where To Find Additional Examples
The SDK repository contains a good collection of text-to-speech examples, and the README contains links to them.
Examples:
- Example "Hello World" - examples/text-to-speech/websocket/simple