Function Calling

Overview of Function Calling with Voice Agents.

What is Function Calling?

Function calling is the ability of large language models (LLMs) to invoke external functions or APIs in response to user queries. For example, if a user asks for the current weather in a specific location, the LLM can use function calling to call a weather API, fetch real-time data, and present it in a structured response.

This capability allows LLMs to enhance their functionality by integrating with other systems, services, or databases to provide real-time data, perform specific tasks, or trigger actions.

How Function Calling Works

  • User Query: A user asks the LLM something that requires external data or specific action (e.g., “Check the weather in New York” or “Book an appointment”).
  • Function Identification: The LLM identifies that the query requires a specific function to be called. For instance, if the user asks for the weather, the model recognizes that it needs to call a weather API rather than generate a general response.
  • Parameter Extraction: The LLM analyzes the user’s query to extract the required parameters (e.g., location, date, or other variables). For example, in the weather query, “New York” would be extracted as the location parameter.
  • Call the Function: The LLM triggers an external function or API with the appropriate parameters. This could involve fetching live data, performing a task (e.g., making a booking), or retrieving information that is outside the LLM’s static knowledge.
  • Return the Result: The function returns the result (such as the current weather data), which the LLM incorporates into its response back to the user.
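
To make steps 2–5 concrete, here is a minimal, purely illustrative sketch of how a client might map function names to implementations and dispatch a call using the parameters the model extracted. The registry, names, and argument shapes below are hypothetical and not tied to any particular API.

// Purely illustrative: a registry mapping function names to implementations,
// plus a dispatcher that runs the function the model asked for and returns
// the result so it can be folded back into the model's reply.
type FunctionImpl = (args: Record<string, unknown>) => Promise<string>;

const functionRegistry: Record<string, FunctionImpl> = {
  // Hypothetical implementation; a real one would call a weather API.
  get_weather: async ({ location }) => `Weather for ${String(location)}...`,
};

async function dispatchFunctionCall(name: string, argsJson: string): Promise<string> {
  const impl = functionRegistry[name];
  if (!impl) throw new Error(`Unknown function: ${name}`);

  const args = JSON.parse(argsJson); // parameters extracted by the model
  return impl(args);                 // call the function and return its result
}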

Function Calling Flow Diagram

Configuring Function Calling

Below is an example of the Settings message with the agent.think configuration object that includes function calling capabilities. To see a complete example of the Settings message, see the Configure the Voice Agent documentation.

JSON
{
  "type": "Settings",
  ...// other settings fields
  "agent": {
    "think": {
      "provider": {
        "type": "open_ai",
        "model": "gpt-4",
        "temperature": 0.7
      },
      "endpoint": { // Optional for non-Deepgram LLM providers. When present, must include url field and headers object
        "url": "https://api.example.com/llm",
        "headers": {
          "authorization": "Bearer {{token}}"
        }
      },
      "prompt": "You are a helpful AI assistant focused on customer service.",
      "functions": [
        {
          "name": "check_order_status",
          "description": "Check the status of a customer order",
          "parameters": {
            "type": "object",
            "properties": {
              "order_id": {
                "type": "string",
                "description": "The order ID to check"
              }
            },
            "required": ["order_id"]
          },
          "endpoint": { // If not provided, function is called client-side
            "url": "https://api.example.com/orders/status",
            "method": "post",
            "headers": {
              "authorization": "Bearer {{token}}"
            }
          }
        }
      ]
    }
  }
}
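
Once the Settings payload is assembled, it is sent over the Voice Agent websocket like any other message. The following is a minimal sketch that assumes you already have an open WebSocket connection to the Voice Agent API; the ws variable and the abbreviated payload are illustrative.

// Minimal sketch: send the Settings message (including function definitions)
// as the first message on an already-open Voice Agent websocket.
declare const ws: WebSocket; // an already-open connection to the Voice Agent API

const settings = {
  type: "Settings",
  // ...other settings fields
  agent: {
    think: {
      provider: { type: "open_ai", model: "gpt-4", temperature: 0.7 },
      prompt: "You are a helpful AI assistant focused on customer service.",
      functions: [
        // function definitions as shown in the JSON example above
      ],
    },
  },
};

ws.send(JSON.stringify(settings));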

Client-Side Function Calling

If your function runs client-side and does not need to make a request to a server, you can omit the endpoint object entirely; the url, headers, and method fields are not required.

JSON
{
  "type": "Settings",
  ...// other settings fields
  "agent": {
    "think": {
      "prompt": "You are a helpful AI assistant that can provide weather information.",
      "functions": [
        {
          "name": "get_weather",
          "description": "Get the current weather for a specific location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city or location to get weather for"
              }
            },
            "required": ["location"]
          }
        }
      ]
    }
  }
}

In the example code below, the get_weather function is triggered when someone asks the Agent about the weather in a particular place.

// Client-side implementation of the get_weather function defined above.
export const getWeather = async (location: string): Promise<string | null> => {
  const apiKey = import.meta.env.VITE_OPENWEATHER_API_KEY;

  try {
    const response = await fetch(
      `https://api.openweathermap.org/data/2.5/weather?q=${location}&appid=${apiKey}`
    );

    if (!response.ok) {
      throw new Error('Failed to fetch weather data');
    }

    const data = await response.json();

    // OpenWeatherMap returns temperatures in Kelvin by default.
    return `The current weather in ${data.name} is ${data.weather[0].description} with a temperature of ${data.main.temp} K.`;
  } catch (err) {
    console.error(err);
    return null;
  }
};

Function Calling Message Flow

Two types of function calling messages are exchanged between the client and Deepgram’s Voice Agent API server over a websocket.

A FunctionCallRequest message is used to initiate function calls in your Voice Agent. Depending on the client_side property, it either triggers server-side execution of the function or asks the client to execute it.

A FunctionCallResponse can be sent by either the client or the server. When sent from the client, it is a response to a function call; when sent from the server, it carries information about a function call that was requested by the agent.
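
As a rough guide, the two message shapes can be modeled on the client as shown below. The field names and nesting are assumptions based on the descriptions above; consult the Voice Agent API reference for the authoritative schema.

// Illustrative TypeScript shapes for the two messages. Field names and
// nesting are assumptions -- check the API reference for the exact schema.

// Sent by the server when the agent wants a function executed.
interface FunctionCallRequest {
  type: "FunctionCallRequest";
  functions: Array<{
    id: string;           // correlation id for this call (assumed)
    name: string;         // e.g. "get_weather"
    arguments: string;    // JSON-encoded parameters, e.g. '{"location":"New York"}'
    client_side: boolean; // true when the client is expected to execute the function
  }>;
}

// Sent by the client with the result of a client-side function call
// (the server can also emit this type to report on calls it handled).
interface FunctionCallResponse {
  type: "FunctionCallResponse";
  id: string;      // matches the id from the FunctionCallRequest (assumed)
  name: string;
  content: string; // the function result, as a string
}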

Below is an example of a function call message flow based on the get_weather function example above.

  1. User submits a query → The agent determines a function is needed.
  2. Server sends a FunctionCallRequest → Requests function execution.
  3. Client executes the function and sends a FunctionCallResponse → Returns the function result.
  4. Server uses the response.
  5. The agent continues the conversation.
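
Putting these steps together for the get_weather example, a client-side handler might look like the following sketch. It assumes the getWeather helper defined earlier, an open websocket connection, and the illustrative message shapes from the previous section.

import { getWeather } from "./weather"; // hypothetical path to the helper defined earlier

declare const ws: WebSocket; // an already-open Voice Agent websocket connection

ws.addEventListener("message", async (event: MessageEvent) => {
  if (typeof event.data !== "string") return; // ignore binary audio frames

  const message = JSON.parse(event.data);
  if (message.type !== "FunctionCallRequest") return;

  for (const call of message.functions ?? []) {
    // Only handle calls the server marked for client-side execution.
    if (!call.client_side || call.name !== "get_weather") continue;

    const { location } = JSON.parse(call.arguments); // parameters extracted by the agent
    const result = await getWeather(location);

    // Return the result so the agent can continue the conversation.
    ws.send(
      JSON.stringify({
        type: "FunctionCallResponse",
        id: call.id,
        name: call.name,
        content: result ?? "Weather data is currently unavailable.",
      })
    );
  }
});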
