Status Endpoint
Monitor self-hosted node health and readiness.
The /v1/status endpoint provides real-time health and readiness information for your Deepgram self-hosted nodes. This endpoint is essential for monitoring your deployment and integrating with load balancers, orchestration platforms, and health check systems.
Overview
The status endpoint reports the current operational state of a Deepgram node, tracking it through various states as it starts up, serves requests, and responds to runtime conditions. The endpoint helps prevent false critical alerts and provides accurate information about whether a node is ready to handle requests.
Response Format
The status endpoint returns a JSON object with the following fields:
- system_health: The current state of the node (Initializing, Ready, Healthy, or Critical)
- active_batch_requests: Number of pre-recorded transcription requests currently being processed
- active_stream_requests: Number of real-time streaming requests currently active
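For illustration, a response from a node that has finished starting up and is serving a few requests might look like the following; the field names match the list above, while the specific values are assumptions:

```json
{
  "system_health": "Ready",
  "active_batch_requests": 1,
  "active_stream_requests": 4
}
```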
Status States
The system_health field reports one of four possible states:
Initializing
Reported during node startup. When a Deepgram API node first starts, it reports Initializing status while it:
- Establishes connections to Engine drivers
- Loads configuration
- Prepares to service requests
The node automatically transitions to Ready once initialization completes successfully.
Example Response:
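A newly started node reports the Initializing state with no active requests; the values below are illustrative:

```json
{
  "system_health": "Initializing",
  "active_batch_requests": 0,
  "active_stream_requests": 0
}
```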
Ready
The node can service requests. Once initialization is complete, the node transitions to Ready status, indicating it is capable of handling transcription and other API requests.
From the Ready state, the node will:
- Transition to Healthy after successfully processing enough requests
- Transition to Critical if errors occur during request processing
Example Response:
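An illustrative body for a node that has finished initializing but has not yet processed enough requests to reach Healthy:

```json
{
  "system_health": "Ready",
  "active_batch_requests": 0,
  "active_stream_requests": 0
}
```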
Healthy
Sustained successful operation. After a node has successfully processed multiple requests, it transitions to Healthy status, indicating stable, production-ready operation.
A Healthy node can transition to Critical if failures arise during request processing.
Example Response:
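An illustrative body for a node under normal load; the request counts are assumptions:

```json
{
  "system_health": "Healthy",
  "active_batch_requests": 3,
  "active_stream_requests": 12
}
```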
Critical
Node is experiencing failures. When a node encounters errors that prevent it from successfully servicing requests, it transitions to Critical status.
This state indicates:
- The node is experiencing operational issues
- Requests may fail or produce errors
- Intervention may be required
A node in Critical status can recover and transition back to Ready once it can successfully service requests again.
Example Response:
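An illustrative body for a node that is failing requests; the counter values are assumptions and may be non-zero if requests are still in flight:

```json
{
  "system_health": "Critical",
  "active_batch_requests": 1,
  "active_stream_requests": 0
}
```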
State Transitions
Nodes transition between states as follows:
- Initializing → Ready: Automatic transition when node startup completes
- Ready → Healthy: After processing enough successful requests
- Ready → Critical: If errors occur during request processing
- Healthy → Critical: If failures arise during operation
- Critical → Ready: When the node can successfully service requests again
Using the Status Endpoint
Making a Request
Query the status endpoint with a simple GET request:
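Assuming the API container is listening on its default port 8080 on the local host (adjust the host and port to match your deployment):

```bash
curl http://localhost:8080/v1/status
```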
Integration with Load Balancers
Configure your load balancer to use the status endpoint for health checks. Different states may require different handling:
- Initializing: Consider the node unhealthy/not ready
- Ready: Node is healthy and can receive traffic
- Healthy: Node is healthy and can receive traffic
- Critical: Remove node from rotation or reduce traffic
Example: AWS Application Load Balancer
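The exact setup depends on how you provision AWS resources. The Terraform sketch below shows one way to point a target group's health check at the status endpoint; the resource names, port, thresholds, and the assumption that a reachable node returns HTTP 200 are all placeholders to verify against your deployment.

```hcl
# Sketch: ALB target group health-checking Deepgram API nodes on /v1/status.
# Names, port, and thresholds are assumptions; adjust for your environment.
resource "aws_lb_target_group" "deepgram_api" {
  name     = "deepgram-api"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = var.vpc_id # placeholder: supply your VPC ID

  health_check {
    path                = "/v1/status"
    protocol            = "HTTP"
    matcher             = "200" # assumes a reachable node returns HTTP 200
    interval            = 15
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }
}
```

Note that ALB health checks evaluate HTTP status codes rather than the response body, so body-level states such as Critical should also be watched by your monitoring and alerting system.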
Integration with Kubernetes
Use the status endpoint for liveness and readiness probes:
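A minimal sketch of container probes, assuming the API container exposes the status endpoint on port 8080; the timings are placeholders to tune for your cluster:

```yaml
# Probe fragment for the Deepgram API container spec (values are assumptions).
readinessProbe:
  httpGet:
    path: /v1/status
    port: 8080
  initialDelaySeconds: 30   # allow time for the Initializing state
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /v1/status
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 15
  failureThreshold: 3
```

Keep in mind that httpGet probes succeed on any 2xx or 3xx status code; they do not inspect the system_health field in the response body.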
Monitoring and Alerting
The status endpoint is valuable for monitoring dashboards and alerting systems:
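As a minimal sketch, a cron job or sidecar can poll the endpoint and raise an alert on the Critical state. This assumes curl and jq are available and the node listens on localhost:8080; wire the alert action into your own tooling:

```bash
#!/bin/sh
# Poll the status endpoint and alert if the node reports Critical.
STATE=$(curl -s http://localhost:8080/v1/status | jq -r '.system_health')
if [ "$STATE" = "Critical" ]; then
  echo "ALERT: Deepgram node reports Critical status" >&2
  # e.g., forward to your paging or alerting system here
fi
```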
Best Practices
Startup Handling
During node deployment or restart:
- Wait for the Initializing state to transition to Ready before sending production traffic (see the polling sketch after this list)
- Allow adequate time for initialization (typically 30-60 seconds)
- Configure health checks with appropriate initial delays
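For example, a deployment script might poll the endpoint until the node leaves Initializing before routing traffic to it. This sketch assumes curl and jq and the default port 8080:

```bash
# Wait up to ~2 minutes for the node to report Ready (or Healthy).
for attempt in $(seq 1 24); do
  STATE=$(curl -s http://localhost:8080/v1/status | jq -r '.system_health')
  if [ "$STATE" = "Ready" ] || [ "$STATE" = "Healthy" ]; then
    echo "Node is ready to receive traffic"
    break
  fi
  sleep 5
done
```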
Error Recovery
When a node enters Critical state:
- Check node logs for specific error messages
- Verify Engine connectivity and resource availability
- Monitor for automatic recovery to the Ready state
- Consider restarting the node if it remains in Critical state
High Availability
For production deployments:
- Deploy multiple API nodes for redundancy
- Configure load balancers to remove Critical nodes from rotation
- Set up automated alerts for Critical state transitions
- Monitor the proportion of nodes in each state across your deployment
Monitoring Active Requests
Use the active_batch_requests and active_stream_requests fields to:
- Track node utilization and load distribution
- Identify nodes that may be overloaded
- Plan capacity based on request patterns
- Implement graceful shutdowns by waiting for active requests to complete
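For the graceful-shutdown case in the last item, one approach is to stop routing new traffic to the node and then wait for both counters to reach zero before stopping the container. A sketch, assuming curl and jq and the default port 8080:

```bash
# Drain: wait until no batch or streaming requests remain in flight.
while true; do
  ACTIVE=$(curl -s http://localhost:8080/v1/status \
    | jq '.active_batch_requests + .active_stream_requests')
  if [ "$ACTIVE" -eq 0 ]; then
    break
  fi
  sleep 5
done
```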
Troubleshooting
Node Stuck in Initializing
If a node remains in Initializing state for an extended period:
- Verify Engine containers are running and accessible
- Check network connectivity between API and Engine nodes
- Review API and Engine logs for initialization errors
- Ensure proper configuration in api.toml and engine.toml
Frequent Critical State Transitions
If nodes frequently transition to Critical:
- Review Engine resource allocation (GPU/CPU/memory)
- Check for model loading issues or corrupted model files
- Verify license validity and connectivity to license servers
- Monitor for request patterns that may cause failures
Status Endpoint Not Responding
If the status endpoint is unreachable:
- Verify the API container is running: docker ps
- Check API logs: docker logs <container_id>
- Ensure port 8080 is accessible and not blocked by firewall rules
- Verify the API container has started successfully
What's Next
Now that you understand how to monitor node health with the status endpoint, explore related topics:
- Metrics Guide - Detailed metrics and monitoring
- System Maintenance - Keeping your deployment healthy
- Prometheus Integration - Advanced monitoring setup