Entity Detection
Entity Detection identifies and extracts key entities from content in submitted audio.
detect_entities: boolean. Default: false
When Entity Detection is enabled, the Punctuation feature will be enabled by default.
Enable Feature
To enable Entity Detection, add a detect_entities parameter set to true in the query string when you call Deepgram’s API:
detect_entities=true&punctuate=true
To transcribe audio from a file on your computer, run the following curl command in a terminal or your favorite API client. Replace YOUR_DEEPGRAM_API_KEY with your Deepgram API Key.
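The curl command itself is not reproduced on this page. As a sketch of the equivalent request in Python using only the standard library (the /v1/listen endpoint and Token authorization scheme follow Deepgram's API; the audio file name is a placeholder):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_KEY = "YOUR_DEEPGRAM_API_KEY"  # replace with your Deepgram API Key
AUDIO_FILE = "youraudio.wav"       # placeholder: path to your audio file

# Build the query string with Entity Detection (and Punctuation) enabled.
params = urlencode({"detect_entities": "true", "punctuate": "true"})
url = f"https://api.deepgram.com/v1/listen?{params}"

def transcribe(path: str) -> bytes:
    """POST the audio file to Deepgram and return the raw JSON response body."""
    with open(path, "rb") as f:
        audio = f.read()
    req = Request(
        url,
        data=audio,
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "audio/wav",
        },
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.read()
```

Calling `transcribe(AUDIO_FILE)` sends the request; the response is the JSON document described in the next section.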
Analyze Response
When the file is finished processing (often after only a few seconds), you’ll receive a JSON response that has the following basic structure:
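The response body was not preserved on this page. As an illustrative sketch only (the field values below are invented, but the metadata/results/channels/alternatives nesting follows Deepgram's response shape):

```python
# Illustrative response shape; all values here are invented for the example.
response = {
    "metadata": {"request_id": "...", "duration": 3.2, "channels": 1},
    "results": {
        "channels": [
            {
                "alternatives": [
                    {
                        "transcript": "Welcome to Deepgram.",
                        "confidence": 0.99,
                        "words": [
                            {"word": "welcome", "start": 0.1, "end": 0.4,
                             "confidence": 0.99},
                        ],
                        "entities": [
                            {"label": "ORG", "value": "Deepgram",
                             "confidence": 0.97,
                             "start_word": 2, "end_word": 3},
                        ],
                    }
                ]
            }
        ]
    },
}

# The first alternative for the first audio channel:
alternative = response["results"]["channels"][0]["alternatives"][0]
```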
Let’s look more closely at the alternatives object:
In this response, we see that each alternative contains:
- transcript: Transcript for the audio being processed.
- confidence: Floating point value between 0 and 1 that indicates overall transcript reliability. Larger values indicate higher confidence.
- words: Object containing each word in the transcript, along with its start time and end time (in seconds) from the beginning of the audio stream, and a confidence value.
- entities: Object containing the information about entities for the audio being processed.
And we see that each entities object contains:
- label: Type of entity identified.
- value: Text of the entity identified.
- confidence: Floating point value between 0 and 1 that indicates entity reliability. Larger values indicate higher confidence.
- start_word: Location of the first character of the first word in the section of audio being inspected for entities.
- end_word: Location of the first character of the last word in the section of audio being inspected for entities.
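Given an alternatives object like the one described above, the entity fields can be pulled out as follows (a minimal sketch; the sample data and the NAME label are invented for illustration — see Supported Entity Types for the real label list):

```python
# Invented sample alternative; real data comes from the API response.
alternative = {
    "transcript": "My name is Sara Smith.",
    "confidence": 0.98,
    "entities": [
        {"label": "NAME", "value": "Sara Smith", "confidence": 0.96,
         "start_word": 3, "end_word": 5},
    ],
}

# Collect (label, value) pairs for every detected entity.
labels = [(e["label"], e["value"]) for e in alternative["entities"]]

for label, value in labels:
    print(f"{label}: {value}")
```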
All entities are available in English.
Identifiable Entities
View all options here: Supported Entity Types
Use Cases
Some examples of uses for Entity Detection include:
- Customers who want to improve conversational AI and voice assistant experiences by triggering particular workflows and responses based on identified names, addresses, locations, and other key entities.
- Customers who want to enhance customer service and user experience by extracting meaningful and relevant information about key entities such as person names, organizations, email addresses, and phone numbers.
- Customers who want to derive meaningful and actionable insights from the audio data based on identified entities in conversations.