Maintaining Context
A voice agent’s behavior on any call is the sum of a few moving parts: the prompt that shaped it, the history you handed it on connect, anything either side has injected during the call, and any function results it has gathered. This page covers every lever the API gives you for managing that context.
Levers at a glance
- System prompt: the agent's initial brief, set at connect time
- UpdatePrompt: replace the system prompt mid-call
- History: seed a new session with prior conversation and function calls
- InjectAgentMessage / InjectUserMessage: add agent or user turns mid-call
- Function results: tool responses that join the agent's working context
- Reusable configurations: a saved agent block referenced by UUID
The rest of this page walks through each lever in detail. You can mix all of them in the same session.
System prompt
The system prompt is the agent’s initial brief. It defines persona, scope, and any rules the agent should follow throughout the call. Set it inside the agent.think.prompt field of your Settings message at connect time.
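As a sketch, a Settings payload with the prompt in place; the `agent.think.prompt` path comes from this page, while the prompt text and the elision of the other agent fields are illustrative:

```python
import json

# Minimal sketch of a Settings message. Only the prompt is shown; the
# rest of the agent configuration (audio, providers, etc.) is elided.
settings = {
    "type": "Settings",
    "agent": {
        "think": {
            "prompt": (
                "You are a concise booking assistant. Stay on topic, "
                "keep turns to one or two sentences, and spell out "
                "numbers so they read naturally aloud."
            )
        }
    },
}

# Serialized and sent as the first message after the WebSocket connects.
payload = json.dumps(settings)
```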
For voice-specific prompt-writing patterns (formatting numbers for TTS, keeping turns short, avoiding markdown the agent will try to read aloud), see Prompting Voice Agents.
Prompt update at runtime
If you need to change the agent’s behavior part-way through a call (a phase change, a hand-off to a new persona, an updated rule), send an UpdatePrompt message. The new prompt replaces the system prompt for the rest of the session.
Combine UpdatePrompt with UpdateThink if you also want to swap the LLM provider mid-call.
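For example, a mid-call persona hand-off might look like the sketch below; the message `type` is named above, but the `prompt` field name and the text are assumptions to check against the UpdatePrompt reference:

```python
# Hypothetical phase change: the agent moves from triage to scheduling.
# The new prompt replaces the system prompt for the rest of the session.
update_prompt = {
    "type": "UpdatePrompt",
    "prompt": (
        "You are now a scheduling assistant. Offer the caller the next "
        "available appointment and confirm before booking."
    ),
}
```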
History
When you start a new session, you can hand the agent the history of prior interactions so it picks up where the last call left off. History is provided through agent.context.messages and supports two message shapes that can be mixed in the same array.
Conversation history
Plain back-and-forth between user and assistant.
Function call history
Function calls executed in earlier sessions, with arguments and results.
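As a sketch, both shapes mixed in one `agent.context.messages` array; the field names on the function-call entry are assumptions here, so treat the History page as the authoritative schema:

```python
# Conversation turns plus a function call from an earlier session,
# mixed in a single array. Function-call field names are assumed.
messages = [
    {"type": "History", "role": "user",
     "content": "Do you have a table for two tonight?"},
    {"type": "History", "role": "assistant",
     "content": "Let me check availability for you."},
    {
        "type": "History",
        "function_calls": [
            {
                "id": "call_1",
                "name": "check_availability",
                "arguments": '{"party_size": 2, "date": "tonight"}',
                "response": '{"available": true}',
            }
        ],
    },
]
```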
The full schema and field-by-field reference live on the History page.
Toggling history
History is enabled by default. You can disable it in your Settings message at connect time.
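A sketch of the toggle, assuming history is controlled by a `flags.history` field on the Settings message; confirm the exact field in the Settings reference:

```python
# Disable history for this session. flags.history is an assumed
# field name, not confirmed by this page.
settings = {
    "type": "Settings",
    "flags": {"history": False},
    "agent": {},  # agent configuration elided
}
```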
Inject messages
Two client messages let you add to context mid-call.
InjectAgentMessage
Make the agent say something specific, immediately. The injected text is treated as if the agent had just produced it.
Useful for filler responses, status updates while a slow tool runs, or scripted follow-ups. Reference: Inject Agent Message.
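A sketch of a filler response while a slow tool runs; the `message` field name is an assumption to verify against the reference above:

```python
# Injected agent speech: spoken to the caller immediately, and kept
# in context as if the agent had produced it.
inject_agent = {
    "type": "InjectAgentMessage",
    "message": "One moment while I pull up your reservation.",
}
```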
InjectUserMessage
Push a synthetic user turn into the conversation. The agent processes it as if the user said it. Useful for orchestrated hand-offs from your application to the agent without going through the microphone.
Reference: Inject User Message.
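A sketch of a synthetic user turn, for example triggered by a button press in your application rather than speech; the `content` field name is an assumption to verify against the reference above:

```python
# Synthetic user turn: the agent processes this as if the user said it.
inject_user = {
    "type": "InjectUserMessage",
    "content": "I'd like to change my reservation to Friday.",
}
```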
Function results as context
Function call responses returned to the agent become part of its working context. The agent can reference them in later turns, decide whether to call again, and use the results to shape what it says next. See Function Calling for the full request/response loop and Function Call Context for how function results interact with conversation history.
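As a sketch, a function result returned to the agent; once sent, the payload joins the working context and can shape later turns. The message shape and field names here are assumptions drawn from common function-calling patterns, so see Function Calling for the exact schema:

```python
import json

# Hypothetical result for an earlier check_availability call.
function_result = {
    "type": "FunctionCallResponse",
    "id": "call_1",                  # assumed to echo the request's id
    "name": "check_availability",
    "content": json.dumps({"available": True, "time": "7:30 PM"}),
}
```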
Reusable configurations
If your agent’s prompt, providers, and tools are stable across many sessions, save them as a Reusable Agent Configuration. You get back a UUID that you can pass in place of the full agent object on every connection.
Reusable configurations are most useful when:
- The same prompt is used by many sessions
- You want to update prompts without redeploying the client
- Multiple environments need to share the same agent definition
The reusable configuration only covers the agent block. Per-session context (history, runtime injections) is still passed at connect time or during the call.
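Putting the pieces together, a connect-time sketch in which the saved configuration's UUID stands in for the full agent object; the exact shape of the UUID reference is an assumption, and the UUID value is a placeholder:

```python
# Sketch only: a saved configuration referenced by UUID at connect
# time. Where per-session context travels alongside it is not shown;
# check the Settings reference for the authoritative shape.
settings = {
    "type": "Settings",
    "agent": "0a1b2c3d-0000-0000-0000-000000000000",  # placeholder UUID
}
```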