Speech Started
Speech Started sends a message when the start of speech is detected in live streaming audio.
`vad_events` *boolean*.
Deepgram's Speech Started feature detects the start of speech while transcribing live streaming audio.
The SpeechStarted event complements Voice Activity Detection (VAD) to promptly detect the start of speech after a period of silence. By analyzing the tonal characteristics of human speech, the VAD distinguishes silent from non-silent audio segments and notifies you as soon as speech is detected.
Enable Feature
To enable the SpeechStarted event, include the following parameter in your request:

`vad_events=true`
You'll then begin receiving messages upon speech starting.
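If you call the streaming endpoint directly rather than through an SDK, the parameter is appended to the WebSocket URL's query string. Here is a minimal sketch of building that URL in Python; the audio parameters shown are assumptions that mirror the SDK example below:

```python
from urllib.parse import urlencode

# Assumed audio settings; adjust to match your actual stream.
params = {
    "vad_events": "true",   # enable SpeechStarted events
    "encoding": "linear16",
    "sample_rate": 16000,
    "channels": 1,
}

# Deepgram's live streaming endpoint with the query string attached.
url = f"wss://api.deepgram.com/v1/listen?{urlencode(params)}"
print(url)
```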
```python
# see https://github.com/deepgram/deepgram-python-sdk/blob/main/examples/streaming/async_microphone/main.py
# for complete example code
from deepgram import LiveOptions

options: LiveOptions = LiveOptions(
    model="nova-2",
    language="en-US",
    # Apply smart formatting to the output
    smart_format=True,
    # Raw audio format details
    encoding="linear16",
    channels=1,
    sample_rate=16000,
    # To get UtteranceEnd, the following must be set:
    interim_results=True,
    utterance_end_ms="1000",
    # Enable SpeechStarted events
    vad_events=True,
    # Time in milliseconds of silence to wait for before finalizing speech
    endpointing=300
)
```
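Once `vad_events=True` is set, the Python SDK delivers each detection as a `SpeechStarted` event you can subscribe to. The sketch below is a trimmed-down version of the pattern in the linked async microphone example; the handler name and the minimal options are illustrative, and it assumes your API key is available in the `DEEPGRAM_API_KEY` environment variable:

```python
import asyncio

from deepgram import DeepgramClient, LiveOptions, LiveTranscriptionEvents


async def main():
    # Reads the API key from the DEEPGRAM_API_KEY environment variable
    deepgram = DeepgramClient()
    dg_connection = deepgram.listen.asynclive.v("1")

    async def on_speech_started(self, speech_started, **kwargs):
        # speech_started carries the payload shown under Results below
        print(f"Speech started at {speech_started.timestamp}s")

    dg_connection.on(LiveTranscriptionEvents.SpeechStarted, on_speech_started)

    options = LiveOptions(model="nova-2", vad_events=True, interim_results=True,
                          encoding="linear16", channels=1, sample_rate=16000)
    await dg_connection.start(options)

    # ... stream raw audio with `await dg_connection.send(chunk)` ...

    await dg_connection.finish()


asyncio.run(main())
```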
Results
The JSON message sent when the start of speech is detected looks similar to this:
```json
{
  "type": "SpeechStarted",
  "channel": [
    0,
    1
  ],
  "timestamp": 9.54
}
```
- The `type` field is always `SpeechStarted` for this event.
- The `channel` field is interpreted as `[A, B]`, where `A` is the channel index and `B` is the total number of channels. The example above is channel 0 of single-channel audio.
- The `timestamp` field is the time at which speech was first detected.
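If you consume the raw WebSocket messages yourself rather than relying on an SDK's typed events, the fields can be unpacked directly from the JSON. A small sketch, using the example payload above as hypothetical input:

```python
import json

# Hypothetical raw message, copied from the example payload above
raw = '{"type": "SpeechStarted", "channel": [0, 1], "timestamp": 9.54}'

msg = json.loads(raw)
if msg.get("type") == "SpeechStarted":
    channel_index, channel_count = msg["channel"]
    print(f"Speech detected on channel {channel_index} of {channel_count} "
          f"at {msg['timestamp']}s")
```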
The timestamp doesn't always match the start time of the first word in the next transcript because the systems for transcribing and timing words work independently of the speech detection system.