The Deepgram Model Improvement Partnership Program
Learn how Deepgram trains our AI models and how you can opt out of our Model Improvement Program.
What is the Deepgram Model Improvement Partnership Program?
In the rapidly advancing field of AI, model improvement partnerships are crucial to the development and continuous improvement of the increasingly powerful models that drive intelligent systems. The Deepgram Model Improvement Partnership Program provides transparency into how customer data is handled, stored, and utilized by Deepgram, and specifies the many benefits participants enjoy. Chief among these is the opportunity for our customers to shape the future of voice AI: participants gain early and regular access to more accurate models that perform well for their specific use cases through the inclusion of relevant real-world data during the model training process.
At Deepgram, we take our customers’ data privacy concerns seriously, which is why we have implemented robust data security policies and flexible data retention options that allow our customers to strike the right balance for their individual needs.
How do we improve our models?
Deepgram utilizes end-to-end deep learning to develop all of our voice AI models. These models are built through an iterative process that learns the inherent relationships in the conversational audio data used for training. This involves hundreds of thousands of hours of conversational data broadly spanning a given language's vocabulary, as well as a wide variety of speaker groups across many dimensions, including age, sex, accent, background noise, and acoustic environment.
Our in-house DataOps team employs state-of-the-art techniques to curate high-quality training data sets and ensure proper balance across the dimensions listed above. Both overrepresentation and underrepresentation of different sample types can have adverse effects on model accuracy. By incorporating some of the data we collect through the Deepgram Model Improvement Partnership Program during training, we are able to produce high-quality, in-distribution training data sets that lead to robust model performance both generally and for our customers' specific use cases.
For speech-to-text, this results in more accurate models that work better for you and everyone speaking your language through improved recognition of the complex and nuanced aspects of real-world speech (e.g. accents, regional dialects, jargon, slang phrases, differences in sentence structure across different languages, etc.). For text-to-speech, this results in more natural models that better portray your brand through improved pronunciation, expressiveness, and emotion in everyday interactions.
After training, a deep learning model for voice AI is essentially a giant mathematical equation that approximates all of the inherent relationships and underlying concepts that comprise human speech (e.g. “‘I’ before ‘E’ except after ‘C’ or when sounded as ‘A’ as in ‘neighbor’ or ‘weigh’”). And the magic of deep learning is that it does this by learning these concepts implicitly from the training data itself instead of being explicitly programmed by humans to do so. Importantly, the model has no rote memory or storage for any of the data used to train it, meaning there is no risk of any data leakage when the model is used in production.
How do we handle your data and ensure security and privacy?
Deepgram stores fractional increments of data for the continued improvement of our voice AI models and to provide enhanced customer support when needed. The only data we store and use in future model training is data that is contractually included through participation in the Deepgram Model Improvement Partnership Program. We will never redistribute data to third parties without our customers' permission. Your data will never be used to market our services or to create advertising profiles.
Deepgram's infrastructure, policies, and procedures are designed to meet industry-standard compliance and regulatory frameworks, including SOC-2 Type 2, HIPAA, PCI DSS, GDPR, CCPA, and all applicable local government and legal requirements. MFA, RBAC, and VPNs are used to regulate and secure all employee access to data systems. All data is encrypted in-flight and at-rest with industry-standard encryption, including TLS 1.3 and AES-256.
Why Participate?
Participation in this program is voluntary and includes a number of valuable benefits.
- Increased accuracy of our voice AI models for your domain and use case, with more frequent, higher-impact releases of next-gen models.
- Discounted pricing for program participants that yields significant savings.
- Better technical support, with faster root-cause analysis and shorter time to resolution.
- Preferential placement on early access wait lists for future voice AI models, features, and functionality.
- Accelerated custom model training timelines for individual customers in need of additional accuracy.
- Reduced model drift. Language is fluid and constantly changing, with new jargon and slang popping up in daily conversation over time. The Model Improvement Partnership Program ensures our models evolve along with your customers' speech patterns.
- Support for Responsible AI by mitigating model bias and ensuring sufficient representation of underrepresented speaker groups based on age, sex, accents, etc. in our training data sets.
Need more help?
Have additional questions? Get in touch.
Want to opt out?
Add mip_opt_out=true as a query parameter to any API request you want excluded from the Model Improvement Program. By opting out of the Model Improvement Program, customers on Pay As You Go, Growth, or Enterprise plans forgo the 50% discount on the rates listed in their signed contract or on deepgram.com/pricing.
If you're on an Enterprise plan and want to opt out, please reach out to your Deepgram Account Representative.
Examples
Here are some examples of opting out.
The SDK examples below use a custom add-on parameter. To learn more about using custom add-on parameters with our SDKs, refer to the documentation on using custom add-on parameters.
curl \
--request POST \
--header 'Authorization: Token YOUR_DEEPGRAM_API_KEY' \
--header 'Content-Type: audio/wav' \
--data-binary @youraudio.wav \
--url 'https://api.deepgram.com/v1/listen?mip_opt_out=true'
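The mip_opt_out parameter can be added to any Deepgram API request, not just pre-recorded transcription. As a minimal sketch, opting out on a text-to-speech request might look like the following; the aura-asteria-en model name and the output file name are assumptions for illustration.
# A hedged sketch: opting out on a text-to-speech request.
# The model name and output file name below are assumptions for illustration.
curl \
--request POST \
--header 'Authorization: Token YOUR_DEEPGRAM_API_KEY' \
--header 'Content-Type: application/json' \
--data '{"text": "Hello, how can I help you today?"}' \
--output yourspeech.mp3 \
--url 'https://api.deepgram.com/v1/speak?model=aura-asteria-en&mip_opt_out=true'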
// Install the SDK: npm install @deepgram/sdk
import { createClient } from "@deepgram/sdk";
// - or -
// const { createClient } = require("@deepgram/sdk");

// Create a Deepgram client using your API key
const deepgram = createClient(process.env.DEEPGRAM_API_KEY);

const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
  {
    url: "https://dpgr.am/spacewalk.wav",
  },
  {
    model: "nova-2",
    mip_opt_out: true,
  }
);

if (error) throw error;
console.dir(result, { depth: null });
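Opting out applies to streaming requests as well. Below is a minimal sketch using the JavaScript SDK's live client; it assumes the SDK forwards the extra mip_opt_out option to the connection as a query parameter, just as it does for pre-recorded requests.
// A hedged sketch: opting out on a live (streaming) transcription connection.
// Assumes the extra mip_opt_out option is passed through as a query parameter.
import { createClient, LiveTranscriptionEvents } from "@deepgram/sdk";

const deepgram = createClient(process.env.DEEPGRAM_API_KEY);

const connection = deepgram.listen.live({
  model: "nova-2",
  mip_opt_out: true,
});

connection.on(LiveTranscriptionEvents.Open, () => {
  console.log("Connection opened with mip_opt_out=true");
});

connection.on(LiveTranscriptionEvents.Transcript, (data) => {
  console.log(data.channel.alternatives[0].transcript);
});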
# Install the SDK: pip install deepgram-sdk
import asyncio
import os
from dotenv import load_dotenv
from deepgram import DeepgramClient, PrerecordedOptions
load_dotenv()
API_KEY = os.getenv("DEEPGRAM_API_KEY")
AUDIO_URL = {
    "url": "https://static.deepgram.com/examples/Bueller-Life-moves-pretty-fast.wav"
}

options: PrerecordedOptions = PrerecordedOptions(
    model="nova-2",
)

# To demonstrate using the custom add-on parameters, you could enable it like this
custom_options: dict = {"mip_opt_out": "true"}

deepgram: DeepgramClient = DeepgramClient(API_KEY)


async def transcribe_url():
    url_response = await deepgram.listen.asyncprerecorded.v("1").transcribe_url(
        AUDIO_URL, options, addons=custom_options
    )
    return url_response


async def main():
    try:
        response = await transcribe_url()
        print(response.to_json(indent=4))
    except Exception as e:
        print(f"Exception: {e}")


if __name__ == "__main__":
    asyncio.run(main())
// Install the SDK: go get github.com/deepgram/deepgram-go-sdk
package main
import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	prettyjson "github.com/hokaccha/go-prettyjson"

	api "github.com/deepgram/deepgram-go-sdk/pkg/api/listen/v1/rest"
	interfaces "github.com/deepgram/deepgram-go-sdk/pkg/client/interfaces"
	client "github.com/deepgram/deepgram-go-sdk/pkg/client/listen"
)

const (
	filePath string = "./Bueller-Life-moves-pretty-fast.mp3"
)

func main() {
	client.Init(client.InitLib{
		LogLevel: client.LogLevelTrace,
	})

	ctx := context.Background()

	options := &interfaces.PreRecordedTranscriptionOptions{
		Model: "nova-2",
	}

	// create a Deepgram client
	c := client.NewREST("", &interfaces.ClientOptions{
		Host: "https://api.deepgram.com",
	})
	dg := api.New(c)

	// To demonstrate using the custom add-on parameters, you could enable them like this
	params := make(map[string][]string, 0)
	params["mip_opt_out"] = []string{"true"}
	ctx = interfaces.WithCustomParameters(ctx, params)

	res, err := dg.FromFile(ctx, filePath, options)
	if err != nil {
		if e, ok := err.(*interfaces.StatusError); ok {
			fmt.Printf("DEEPGRAM ERROR:\n%s:\n%s\n", e.DeepgramError.ErrCode, e.DeepgramError.ErrMsg)
		}
		fmt.Printf("FromFile failed. Err: %v\n", err)
		os.Exit(1)
	}

	data, err := json.Marshal(res)
	if err != nil {
		fmt.Printf("json.Marshal failed. Err: %v\n", err)
		os.Exit(1)
	}

	prettyJSON, err := prettyjson.Format(data)
	if err != nil {
		fmt.Printf("prettyjson.Format failed. Err: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("\n\nResult:\n%s\n\n", prettyJSON)

	vtt, err := res.ToWebVTT()
	if err != nil {
		fmt.Printf("ToWebVTT failed. Err: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("\n\n\nVTT:\n%s\n\n\n", vtt)

	srt, err := res.ToSRT()
	if err != nil {
		fmt.Printf("ToSRT failed. Err: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("\n\n\nSRT:\n%s\n\n\n", srt)
}
// Install the SDK: dotnet add package Deepgram
using System.Text.Json;

using Deepgram.Logger;
using Deepgram.Models.Authenticate.v1;
using Deepgram.Models.PreRecorded.v1;

namespace PreRecorded
{
    class Program
    {
        static async Task Main(string[] args)
        {
            Library.Initialize(LogLevel.Debug);

            var deepgramClient = new PreRecordedClient();

            if (!File.Exists(@"Bueller-Life-moves-pretty-fast.wav"))
            {
                Console.WriteLine("Error: File 'Bueller-Life-moves-pretty-fast.wav' not found.");
                return;
            }

            // To demonstrate using the custom add-on parameters, you could enable them like this
            var customOptions = new Dictionary<string, string>();
            customOptions["mip_opt_out"] = "true";

            var audioData = File.ReadAllBytes(@"Bueller-Life-moves-pretty-fast.wav");
            var response = await deepgramClient.TranscribeFile(
                audioData,
                new PreRecordedSchema()
                {
                    Model = "nova-2",
                },
                null, // Don't want to specify a cancellation token, use the default
                customOptions
            );

            Console.WriteLine($"\n\n{response}\n\n");
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();

            Library.Terminate();
        }
    }
}
Replace YOUR_DEEPGRAM_API_KEY with your Deepgram API key.