Utterances

Utterances segments speech into meaningful semantic units.

utterances boolean. Default: false

Deepgram’s Utterances feature allows the chosen model to interact more naturally and effectively with speakers' spontaneous speech patterns. For example, when humans speak to each other conversationally, they often pause mid-sentence to reformulate their thoughts, or stop and restart a badly-worded sentence.

ℹ️

Looking for streaming support? Check out our Endpointing feature, which uses a Voice Activity Detector to indicate the end of speech.

Enable Feature

To enable utterances, use the following parameter in the query string when you call Deepgram’s /listen endpoint:

utterances=true

To transcribe audio from a file on your computer, run the following curl command in a terminal or your favorite API client.

curl \
  --request POST \
  --header 'Authorization: Token YOUR_DEEPGRAM_API_KEY' \
  --header 'Content-Type: audio/wav' \
  --data-binary @youraudio.wav \
  --url 'https://api.deepgram.com/v1/listen?utterances=true'

👀 Replace YOUR_DEEPGRAM_API_KEY with your Deepgram API Key.
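
If you prefer to make the same request from code rather than curl, here is a minimal Python sketch using the third-party requests library. The file name youraudio.wav and the DEEPGRAM_API_KEY environment variable are placeholders for your own audio file and key.

import os
import requests

# Minimal sketch: send a local WAV file to Deepgram's /listen endpoint
# with utterances enabled. Assumes your key is stored in the DEEPGRAM_API_KEY
# environment variable and that youraudio.wav exists in the current directory.
url = "https://api.deepgram.com/v1/listen"
headers = {
    "Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}",
    "Content-Type": "audio/wav",
}

with open("youraudio.wav", "rb") as audio:
    response = requests.post(url, params={"utterances": "true"}, headers=headers, data=audio)

response.raise_for_status()
print(response.json())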

Analyze Response

ℹ️

For this example, we use a WAV audio file that contains the beginning of Abraham Lincoln’s Gettysburg Address. If you would like to follow along, you can download it.

When the file is finished processing, you’ll receive a JSON response. Let’s look more closely at the utterances object:

...
"utterances":[
  {
    "start":0.41915998,
    "end":5.43012,
    "confidence":0.88172823,
    "channel":0,
    "transcript":"four score and seven years ago our fathers brought forth on this continent a new nation",
    "words":[
      {"word":"four","start":0.41915998,"end":0.85827994,"confidence":0.57893836},
      ...
    ],
    "id":"2d8211a4-3a5b-4053-8939-edf2b2b389fa"
  },
  ...
]
...

In this response, we see that each utterance contains:

  • start: Start time (in seconds) from the beginning of the audio stream.
  • end: End time (in seconds) from the beginning of the audio stream.
  • confidence: Floating point value between 0 and 1 that indicates overall transcript reliability. Larger values indicate higher confidence.
  • channel: Audio channel to which the utterance belongs. When using multichannel audio, utterances are chronologically ordered by channel.
  • transcript: Transcript for the audio segment being processed.
  • words: Array of objects, one per word in the transcript, each containing the word along with its start time and end time (in seconds) from the beginning of the audio stream and a confidence value.
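
As a rough illustration of how you might consume these fields, the Python sketch below assumes the JSON body has already been parsed into a dictionary named response (for example, via response.json() from the request above) and prints the timing, confidence, and transcript of each utterance.

# Sketch: walk the utterances array of an already-parsed response dictionary.
# Field names match the sample response above; the `response` variable is an
# assumption for this example, not part of the API itself.
for utterance in response["results"]["utterances"]:
    start = utterance["start"]
    end = utterance["end"]
    confidence = utterance["confidence"]
    print(f"[{start:.2f}s-{end:.2f}s] (confidence {confidence:.2f}) {utterance['transcript']}")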

ℹ️

By default, Deepgram applies its general AI model, which is a good general-purpose model for everyday situations. To learn more about the customization possible with Deepgram's API, check out the Deepgram API Reference.

Advanced Processing

You may also want to enable punctuation and speaker diarization, which detects and identifies the speakers in each utterance.

To transcribe audio from a file on your computer, run the following curl command in a terminal or your favorite API client.

curl \
  --request POST \
  --header 'Authorization: Token YOUR_DEEPGRAM_API_KEY' \
  --header 'Content-Type: audio/wav' \
  --data-binary @gettysburg.wav \
  --url 'https://api.deepgram.com/v1/listen?utterances=true&diarize=true&punctuate=true'

👀 Replace YOUR_DEEPGRAM_API_KEY with your Deepgram API Key.

When the file is finished processing, you’ll receive a JSON response that has the same basic structure as before. Let’s take a closer look at the new utterances object:

...
"utterances":[
  {
    "start":0.41874,
    "end":5.42518,
    "confidence":0.88211584,
    "channel":0,
    "transcript":"four score and seven years ago, our fathers brought forth on this continent a new nation",
    "words":[
      {"word":"four","start":0.41874,"end":0.85742,"confidence":0.5821198,"speaker":0,"punctuated_word":"four"},
      ...
    ],
    "speaker":0,
    "id":"ec11ce4b-2d5c-4b95-9183-ba102bea1d62"
  },
  ...
]
...

In this response, notice that the content of transcript in each utterance is now punctuated, that each utterance now includes a top-level speaker field, and that each word object in the words array contains two new parameters:

  • speaker: Integer indicating the speaker who is saying the word being processed.
  • punctuated_word: Word being processed with added punctuation, if any.
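
If you want to work with the word-level data directly rather than the utterance-level transcript, a sketch like the following (again assuming the parsed response lives in a dictionary named response) rebuilds a speaker-labeled line for each utterance from its punctuated_word values.

# Sketch: rebuild each utterance from its word objects, using the speaker
# and punctuated_word fields added by diarize=true and punctuate=true.
for utterance in response["results"]["utterances"]:
    words = " ".join(word["punctuated_word"] for word in utterance["words"])
    print(f"[Speaker:{utterance['speaker']}] {words}")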

To improve readability on the command line, you can pipe the response through a JSON processor. In this example, we use jq.

curl \
  --request POST \
  --header 'Authorization: Token YOUR_DEEPGRAM_API_KEY' \
  --header 'Content-Type: audio/wav' \
  --data-binary @gettysburg.wav \
  --url 'https://api.deepgram.com/v1/listen?utterances=true&diarize=true&punctuate=true' \
  | jq -r '.results.utterances[] | "[Speaker:\(.speaker)] \(.transcript)"'

👀 Replace YOUR_DEEPGRAM_API_KEY with your Deepgram API Key.

When the file is finished processing, you’ll receive the following response:

[Speaker:0] four score and seven years ago, our fathers brought forth on this continent a new nation
[Speaker:0] conceived liberty and dedicated to the proposition that all men are created equal.
[Speaker:0] Now we are engaged in a great civil war testing whether that nation or any nation so conceived and so dedicated can long endure.


Use Cases

Some examples of use cases for utterances include:

  • Customers who want to perform analysis on output by utterance. For example, a call center might track how sentiment progresses throughout customer calls to determine whether interactions with callers trend from negative to positive or vice versa.