Cloud-hosted on-premises (on-prem) deployments are the most common type of deployment performed by customers who want to leverage Deepgram within their own infrastructure. Specific configurations vary, but the Cloud Service Providers that are typically used include Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP). This guide details steps to set up a deployment to the cloud on an AWS instance running Red Hat Enterprise Linux (RHEL) 7.

In this guide, you will learn how to expose and access your application in a secure and stable manner. You will need to perform some of these steps in the AWS Management Console and some in your local terminal.

Prerequisites

Before you begin, you will need a Deepgram Account Representative. To reach one, contact us!

Your Deepgram Account Representative will guide you through the process of setting up:

  • a Deepgram product contract
  • a Deepgram Console account, so we can connect your license to your projects
  • an account on Docker Hub, so you can conveniently get the latest Deepgram software and updates.

Ahead of your planned on-prem deployment, your Deepgram Account Representative will need:

  • your Deepgram Console account email address
  • your Deepgram Console Project ID
  • your Docker Hub Account ID

Providing this information will allow Deepgram to supply:

  • download links to your customized configuration files, which will include your license ID
  • download links to required models, including at least one pre-trained AI model for testing purposes

In addition, you will need to make sure you have a valid RHEL subscription with Red Hat.

Accessing Your Cloud Environment

AWS uses public-key cryptography to secure login information for your instance. In a Linux instance, password authentication is disabled; you use a key pair to log in to your instance securely.

Create an Amazon EC2 Key Pair

If you don’t already have an Amazon EC2 key pair, you will need to create one in order to access the AWS EC2 Virtual Machine. To learn how, read Create a key pair using Amazon EC2 in Amazon’s documentation.

📘

Keys must be created in each AWS region in which you will deploy Deepgram on-premises.

At the end of this process, your browser should download a private-key.pem file for your key pair. Move this file to a secure and memorable location.
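If you prefer working from the command line, you can also create and save a key pair with the AWS CLI. The following is a minimal sketch that assumes the AWS CLI is installed and configured for your target region, and uses a hypothetical key name:

    # Create a key pair (hypothetical name "deepgram-onprem") in the configured
    # region and save the private key locally.
    aws ec2 create-key-pair \
      --key-name deepgram-onprem \
      --query "KeyMaterial" \
      --output text > private-key.pem

    # Restrict permissions so SSH will accept the key file.
    chmod 400 private-key.pem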

Create an Amazon EC2 Instance

To begin your on-prem installation, you must create an Amazon EC2 instance.

New AWS Launch Experience

📘

If you're not yet using the new AWS launch experience, skip to instructions for creating an Amazon EC2 Instance using the old AWS launch experience.

If you're using the new AWS launch experience:

  1. Navigate to the EC2 Dashboard and confirm that the proper AWS Region is configured, then choose Launch Instance and select Launch Instance in the dropdown to open the wizard.

  2. Under Name and Tags, for Name, enter "Deepgram On-Premises (RHEL 7)".

  3. For Applications and OS Images (Amazon Machine Image), enter "Red Hat Enterprise Linux (RHEL) 7 (HVM)" in the search bar, and search.

  4. Next to the Red Hat Enterprise Linux (RHEL) 7 (HVM) image by Amazon Web Services with Ver 7.7_HVM, click Select. In the modal that appears, confirm the latest version of the image is Ver 7.7_HVM or greater, and select Continue.

  5. For Instance Type, in the search bar dropdown, locate and select "g4dn.2xlarge". Ensure it has 8 vCPU and 32GiB memory.

  6. For Key pair (login), select the key pair you created in the Create an Amazon EC2 Key Pair section of this guide. If you didn't previously create a key pair, select Create a new key pair and follow the instructions provided in the modal.

📘

Keys must be created in each AWS region in which you will deploy Deepgram on-premises.

  7. For Network Settings, select Create security group and configure it (or select an existing security group that supports the following rules):

    1. Ensure Allow SSH traffic from is selected by default and that its value is Anywhere.
    2. Ensure Allow HTTP traffic from the internet is selected by default.
    3. If you intend to use an SSL proxy in conjunction with Deepgram on-premises, select Allow HTTPS traffic from the internet.
  8. For Configure storage, increase the gp2 Root Volume value to at least 32 GiB. We recommend more if you intend to deploy multiple models, starting at 100 GiB for a DGTools model training environment.

  9. For Summary, ensure the Number of instances value is at least 1 and confirm that the rest of your instance settings are correct. They should consist of the following values:

    • Software Image (AMI): Red Hat Enterprise Linux (RHEL) 7 (HVM) ami-078a6a18fb73909b2
    • Virtual server type (instance type): g4dn.2xlarge
    • Firewall (security group): either New Security Group or the name of your selected security group
    • Storage (volumes): 2 volume(s)

    After you have confirmed these values, select Launch instance.

  10. Once the instance successfully launches, review its Launch log and other "Next Steps", then select View all instances.

  11. From the list of instances, select your new Deepgram On-Premises (RHEL 7) instance, and make note of or copy its Public IPv4 DNS entry.
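Alternatively, the public DNS entry can be retrieved with the AWS CLI; a sketch assuming the instance Name tag used above:

    # Look up the public DNS name by the instance's Name tag.
    aws ec2 describe-instances \
      --filters "Name=tag:Name,Values=Deepgram On-Premises (RHEL 7)" \
      --query "Reservations[].Instances[].PublicDnsName" \
      --output text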

Old AWS Launch Experience

If you're using the old AWS launch experience:

  1. Navigate to the EC2 Dashboard and confirm that the proper AWS Region is configured, then choose Launch Instance to open the wizard.

  2. For the Choose an Amazon Machine Image (AMI) wizard step, choose a basic configuration to serve as a template for your instance:

    1. Search for Red Hat Enterprise Linux (RHEL) 7 (HVM).
    2. If you want the latest version of the RHEL 7 image, ensure the version is labeled Ver 7.7_HVM.
    3. Ensure the Architecture version is x86_64.
    4. Choose Select.
  3. For the Choose an Instance Type wizard step, select the hardware configuration of your instance:

    1. Filter by the g4dn family type.
    2. Select the g4dn.2xlarge instance.
    3. Select Next: Configure Instance Details.

🚧

Avoid selecting Review and Launch. If you do so, the wizard will complete the other configuration settings for you and will miss some important settings.

  4. For the Configure Instance Details wizard step, review the default instance detail selections, then select Next: Add Storage.

  5. For the Add Storage wizard step, increase the size of the Root volume to 32 GiB, and then select Next: Add Tags.

  6. For the Add Tags wizard step, select Add Tag:

    1. For the key, type Name.
    2. For the value, type On-Premises.
    3. Select Next: Configure Security Group.
  7. For the Configure Security Group wizard step, select Create a new security group.

    1. For the name, type On-Premises.

    2. For the description, type On-Premises Security Group.

    3. Ensure the first rule (of type SSH) is listed, and provide the description SSH for Administration.

🚧

Open SSH to only necessary IP addresses.

  8. Select Add Rule, then select HTTPS from the dropdown, and provide the description HTTPS for Deepgram API.

  9. Select Review and Launch.

  10. For the Review Instance Launch wizard step, ensure all of the instance details match what you entered in the previous steps, and then select Launch.

  11. In the Select an existing key pair or create a new key pair modal, ensure that Choose an existing key pair is selected, then choose the key pair you created in the Create an Amazon EC2 Key Pair section. Check the acknowledgement box, and select Launch Instances.

  12. Once the instance successfully launches, review its details in the EC2 instances list, and make note of its Public IPv4 DNS entry.

Log in to the AWS EC2 instance

To complete the rest of the installation, including configuring your environment and transferring files between your local computer and your AWS instance, you must connect to the AWS EC2 instance that you launched.

  1. Open the terminal application for your computer.

  2. Connect to your AWS instance:

    ssh -i private-key.pem ec2-user@PUBLIC_IPV4_DNS_VALUE
    

🚧

Be sure to replace the PUBLIC_IPV4_DNS_VALUE placeholder value with the DNS value for your instance. Also check that the path to your private-key.pem file is correct.

  3. If you receive a message that indicates that the authenticity of the host can’t be established, type yes, then press the Enter key on your keyboard.
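If SSH instead rejects the connection with an "unprotected private key file" warning, restrict the key file's permissions and retry; for example:

    # SSH refuses private keys that are readable by other users.
    chmod 400 private-key.pem
    ssh -i private-key.pem ec2-user@PUBLIC_IPV4_DNS_VALUE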

Configuring Your Cloud Environment

Once you have successfully logged in to your instance, you must remove incompatible kernel drivers and install all of the necessary utilities to run GPU-accelerated containers in Docker.

Register the Red Hat Subscription Manager

Register the Red Hat subscription-manager, which allows you to register, attach, and remove subscriptions using the command line.

  1. Ensure you have a valid RHEL subscription with Red Hat, then register your system in the Red Hat Customer Portal:

    sudo subscription-manager register
    
  2. Automatically attach to the best-match subscription:

    sudo subscription-manager attach --auto
    
  3. Verify the overall status:

    sudo subscription-manager status
    

    The output should indicate the status is "Current".

  4. Enable necessary Red Hat repositories:

    sudo subscription-manager repos --enable=rhel-7-server-rpms
    sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
    sudo subscription-manager repos --enable=rhel-7-server-devtools-rpms
    sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
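
To confirm the repositories are now active, you can list the repositories YUM sees; a quick check:

    # The four repositories enabled above should appear in this list.
    yum repolist enabled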
    

Update Red Hat’s YUM Package Management Tool

Use Red Hat's YUM package management tool to update your system to RHEL 7.9.

📘

We recommend using the RHEL 7.7 image from the AWS image catalog because it is the most recently updated RHEL 7 base image available.

  1. Update your AWS instance’s operating system package manager to get information on updated versions of packages or their dependencies:

    sudo yum update -y
    
  2. Verify that the update was successful:

    cat /etc/redhat-release
    

    You should expect the command to return the following output:

    Red Hat Enterprise Linux Server release 7.9 (Maipo)
    

Install Developer Toolset and Kernel Packages

Install the Red Hat Enterprise Linux Developer Toolset and kernel-devel RPM packages, which contain the kernel headers and makefiles needed to build modules against the installed kernel.

🚧

To avoid a failure when installing the NVIDIA drivers (referenced later in this guide), you must follow the steps in this section exactly.

  1. Install the necessary development toolset:

    sudo yum install -y devtoolset-7
    
  2. Install the kernel-devel package:

    sudo yum install -y kernel-devel
    
  3. Restart your instance, then SSH into it again:

    sudo shutdown -r now
    

🚧

A server restart is required, or the NVIDIA driver installation (referenced later in this guide) will fail.
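After the instance comes back up, it is worth confirming that the installed kernel headers match the running kernel, since a mismatch will also cause the driver build to fail; a quick check:

    # The kernel-devel package version should match the running kernel version.
    uname -r
    rpm -q kernel-devel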

Install NVIDIA Drivers

Install the latest available NVIDIA drivers for the g4dn instance type, which uses the NVIDIA Tesla T4 GPU:

  1. Install GNU Wget (wget), so you can retrieve the latest NVIDIA driver for your GPU:

    sudo yum install -y wget
    
  2. Identify the latest driver for the GPU you are using and retrieve its download URL:

    1. Go to NVIDIA Official Drivers.
    2. Select the correct product and a corresponding CUDA Toolkit version of 11 or greater.
    3. Select Search and check that the correct driver is displayed, then select Download.
    4. Right-click Agree & Download, then copy the link to save the download URL to your clipboard.
  3. Download the latest driver for your GPU:

    # Example: wget https://us.download.nvidia.com/tesla/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
    wget LINK_TO_LATEST_NVIDIA_GPU_DRIVER
    

🚧

Be sure to replace the LINK_TO_LATEST_NVIDIA_GPU_DRIVER placeholder value with the URL to the latest driver for the GPU you are using.

  4. Change the permissions on the downloaded file to allow your user to execute it:

    # Example: chmod +x NVIDIA-Linux-x86_64-510.47.03.run
    chmod +x DOWNLOADED_FILE_NAME
    
  5. Decompress and execute the driver file to install it:

    # Example: sudo ./NVIDIA-Linux-x86_64-510.47.03.run --silent
    sudo ./DOWNLOADED_FILE_NAME --silent
    
  6. Test that the NVIDIA drivers are installed:

    nvidia-smi
    

Install Docker Engine

For ease of use, Deepgram provides its products in Docker containers, so you must make sure that you have installed the latest version of Docker Engine on all hosts.

  1. Install required dependencies:

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    
  2. Install the Docker Engine. Docker Engine is only widely supported for RHEL via the CentOS distribution, so the following two steps use the CentOS packages.

📘

If you have any trouble installing the CentOS packages on RHEL, read Install Using the Repository in Docker’s Install Docker Engine on CentOS documentation.

  3. Add the Docker CentOS repository:

    sudo yum-config-manager \
      --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
    
  4. Install the Docker Engine components:

    sudo yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
    
  5. Ensure that your AWS instance user (ec2-user) has sufficient permissions to communicate with the Docker daemon service that runs on your host operating system. To learn how to run Docker commands without elevated privileges, read Manage Docker as a Non-Root User in Docker’s optional Post-installation steps for Linux documentation.

  6. Start Docker and enable it to launch at boot:

    sudo systemctl start docker
    sudo systemctl --now enable docker.service
    sudo systemctl --now enable containerd.service
    
  7. Test the installation:

    sudo docker run hello-world
    
  8. Enable the use of Docker without sudo:

    sudo usermod -aG docker $USER
    
  9. Log out and then log back in to the AWS instance to ensure the docker group membership has been re-evaluated for your current user session.

  10. Test the installation without sudo:

    docker run hello-world
    

Cache Docker Credentials

Once you are satisfied that Docker is installed and configured correctly, log in to Deepgram's container image repository, which is hosted on Quay. See our guide on self-service on-prem licensing and credentials for instructions on how to generate credentials for Quay and how to log in with them. Once your credentials are cached locally, you should not have to log in again until you explicitly log out (docker logout).

To pull Deepgram Docker images in later steps, make sure you are logged in with the Quay credentials you generated in this step.
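As a sketch of that login flow (the username and password come from the credentials generated via the self-service licensing guide):

    # Log in to Quay; Docker caches the credentials locally on success.
    docker login quay.io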

Install Docker-Compose

Ensure that Docker Compose is installed.

  1. Download Docker Compose, and send the output to your /usr/local/bin directory:

    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    
  2. Modify the permissions for the bin file to allow your user to execute it:

    sudo chmod +x /usr/local/bin/docker-compose
    
  3. Test the installation:

    docker-compose --version
    

    You should expect the command output to return version 1.29.2.

Install the NVIDIA-Docker Runtime

CUDA is NVIDIA's library for interacting with its GPU. CUDA support is made available to Docker containers using nvidia-docker, NVIDIA's custom runtime for Docker.

📘

If you run into problems, be sure you are following the steps in the "Installing on CentOS 7/8" section in NVIDIA's Installation documentation.

  1. Retrieve the NVIDIA runtime GPG key:

    distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
    
  2. Install the NVIDIA Docker runtime:

    sudo yum install -y nvidia-docker2
    
  3. Restart the Docker service:

    sudo systemctl restart docker
    
  4. Download the nvidia-docker container image:

    docker pull nvidia/cuda:11.6.0-base-ubuntu20.04
    
  5. Run a test of the NVIDIA Docker runtime:

    docker run --runtime=nvidia --rm --gpus all nvidia/cuda:11.6.0-base-ubuntu20.04 nvidia-smi
    

Setting up the Deepgram Engine and API Containers

Now that your environment is set up, you can download and set up your Deepgram products. For ease of use, Deepgram provides its products in Docker containers.

Get Deepgram Products

  1. Identify the latest on-prem release in the Deepgram Changelog. Filter by "On-Prem", and select the latest release. You can use either the container image tag or the release tag listed for all images referenced in this documentation.

  2. Download the latest Deepgram Engine image from Docker:

    docker pull quay.io/deepgram/onprem-engine:IMAGE_OR_RELEASE_TAG
    
  3. Download the latest Deepgram API image from Docker:

    docker pull quay.io/deepgram/onprem-api:IMAGE_OR_RELEASE_TAG
    

🚧

Be sure to replace the IMAGE_OR_RELEASE_TAG placeholder value with the appropriate tag identified in step 1. Use this tag in all related configuration files.

🚧

The Deepgram Changelog may not have a domain prefix for the container images. Ensure that each image you pull has a quay.io domain prefix, as demonstrated in the commands above.

Import Your Docker Compose, Container Configuration, and Model Files

Before you can run your on-prem deployment, you must configure the required components. To do this, you will need to customize your configuration files and create a directory to house models that have been encrypted for use in your requests.

For your deployment, we provide models and configuration files to you via Amazon S3 buckets, so you can download directly to your AWS RHEL instance. If you don’t have customized configuration files, you can create configuration files using the examples included in Customize Your Configuration.

  1. Replace the default docker-compose.yml file in your home directory with the customized docker-compose.yml file provided by Deepgram:

    wget LINK_TO_YAML_FILE_PROVIDED_BY_DEEPGRAM
    

🚧

Be sure to replace the LINK_TO_YAML_FILE_PROVIDED_BY_DEEPGRAM placeholder value with the URL to the docker-compose.yml file provided by your Deepgram Account Representative.

  2. To house your configuration files, create a directory named config:

    mkdir config
    

📘

You can name this directory whatever you like, as long as you update the docker-compose.yml file accordingly.

  3. In your new config directory, download each TOML configuration file using the links provided by Deepgram:

    cd config
    wget LINK_TO_TOML_CONFIGURATION_FILE_PROVIDED_BY_DEEPGRAM
    cd ..
    

🚧

Be sure to replace each LINK_TO_TOML_CONFIGURATION_FILE_PROVIDED_BY_DEEPGRAM placeholder value with the URL to the TOML configuration file provided by your Deepgram Account Representative.

  4. To house models that have been encrypted for use in your requests, create a directory named models:

    mkdir models
    

📘

You can name this directory whatever you like, as long as you update the docker-compose.yml and engine.toml files accordingly.

  5. In your new models directory, download each model using the links provided by Deepgram:

    cd models
    wget LINK_TO_MODEL_PROVIDED_BY_DEEPGRAM
    cd ..
    

🚧

Be sure to replace each LINK_TO_MODEL_PROVIDED_BY_DEEPGRAM placeholder value with the URL to the model provided by your Deepgram Account Representative.
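At this point, assuming the default directory names used above, the layout in your home directory should look roughly like the following; a quick way to confirm:

    # Expected layout (file names will vary):
    #   ~/docker-compose.yml
    #   ~/config/   (one or more TOML configuration files)
    #   ~/models/   (model files provided by Deepgram)
    ls ~ ~/config ~/models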

Customize Your Configuration

Once your configuration files have been created in your AWS instance, you can update your configuration to customize it for your use case.

Configuration Files

🚧

Your Deepgram Account Representative should provide you with download links to customized configuration files. Unless further customized, the example files included in this section are basic and should be used only to spin up a standard proof of concept (POC) deployment or to test your system.

docker-compose.yml

The Docker Compose configuration file makes it possible to spin up the containers with Docker using a single command. This makes spinning up a standard POC deployment quick and easy. You will need to set an environment variable named DEEPGRAM_API_KEY to your on-prem API key secret. See our Self Service Licensing & Credentials guide for instructions on generating an on-prem API key for use in this section.

export DEEPGRAM_API_KEY=API_KEY_SECRET

🚧

Be sure to replace the API_KEY_SECRET placeholder value with the Deepgram On-Premises API key secret you generate in the Deepgram Console.

version: "3.7"

services:

  # The speech API service.
  api:
    image: quay.io/deepgram/onprem-api:IMAGE_OR_RELEASE_TAG

    # Here we expose the API port to the host machine. The container port
    # (right-hand side) must match the port that the API service is listening
    # on (from its configuration file).
    ports:
      - "8080:8080"

    environment:  
      DEEPGRAM_API_KEY: "${DEEPGRAM_API_KEY}"  

    # Uncomment the next two lines if using a customized configuration file
    # volumes:
      # - "/path/to/api.toml:/api.toml:ro"
      # The API configuration file needs to be accessible; this should point to
      # the file on the host machine.

    # Invoke the API server
    command: -v serve 
    # If you are using a custom configuration file mounted at /api.toml, use the following command instead 
    # command: -v serve /api.toml

  # The speech engine service.
  engine:
    image: quay.io/deepgram/onprem-engine:IMAGE_OR_RELEASE_TAG

    # Change the default runtime.
    runtime: nvidia

    # The Engine models need to be accessible; these paths
    # should point to files/directories on the host machine.
    volumes:
      # The in-container paths must
      # match the paths specified in the Engine configuration file. The default location is "/models"
      - "/path/to/models:/models:ro"
      # Uncomment the next line if using a customized configuration file
      # - "/path/to/engine.toml:/engine.toml:ro"
    
    ports:
      - "9991:9991"

    environment:  
      DEEPGRAM_API_KEY: "${DEEPGRAM_API_KEY}"  

    # Invoke the Engine service
    command: -v serve
    # If you are using a custom configuration file mounted at /engine.toml, use the following command instead
    # command: -v serve /engine.toml

api.toml and engine.toml

The API and Engine images are configured with api.toml and engine.toml files, respectively. Versions of the files with sane defaults are bundled with the API and Engine Docker images. If you want to provide custom configurations, you can use the following templates; make sure to edit the docker-compose.yml file accordingly.

The Deepgram API configuration file determines how the Deepgram API container built from the Docker image will be run. It includes important file paths and settings related to networking.

# Keep in mind that all paths are in-container paths, and do not need to exist
# on the host machine.

# Configure license validation if you do not pass the DEEPGRAM_API_KEY environment variable.
# [license]
  # Your license key.
  # key = "LICENSE_KEY"

# Configure how the API will listen for your requests
[server]
  # The base URL (prefix) for requests to the API.
  base_url = "/v1"
  # The IP address to listen on. Since this is likely running in a Docker
  # container, you will probably want to listen on all interfaces.
  host = "0.0.0.0"
  # The port to listen on
  port = 8080

  # How long to wait for a connection to a callback URL (in seconds)
  callback_conn_timeout = 1
  # How long to wait for a response to a callback URL (in seconds)
  callback_timeout = 10

  # How long to wait for a connection to a fetch URL (in seconds)
  fetch_conn_timeout = 1
  # How long to wait for a response to a fetch URL (in seconds)
  fetch_timeout = 60

[features]
  endpointing_on_by_default = true

  # Enable topic detection by setting this to true, set to false to disable
  topic_detection = true # or false

  # Enable summarization by setting this to true, set to false to disable
  summarization = true # or false

# Configure the DNS resolver, overriding the system default.
# Typically not needed, although we document it here for completeness.
# [resolver]
#   # List of nameservers to use to resolve DNS queries.
#   nameservers = ["127.0.0.11 53 udp"]
#   # Override the TTL in the DNS response (in seconds).
#   max_ttl = 10

# Configure the backend pool of speech engines (generically referred to as
# "drivers" here). The API will load-balance among drivers in the standard
# pool; if one standard driver fails, the next one will be tried.
#
# Each driver URL will have its hostname resolved to an IP address. If a domain
# name resolves to multiple IP addresses, the API will load-balance across each
# IP address.
#
# This behavior is provided for convenience, and in a production environment
# other tools can be used, such as HAProxy.

# A new Speech Engine ("driver") in the "standard" pool.
[[driver_pool.standard]]
  # Host to connect to. Here, we use "tasks.engine", which is the Docker Swarm
  # method for resolving the IP addresses of all "engine" services. If you are
  # using Docker Compose, then this should just be "engine" instead of
  # "tasks.engine". If you rename the "engine" service in the Docker Compose
  # file, then change it accordingly here. Additionally, the port and prefix
  # should match those defined in the Engine configuration file.
  # NOTE: This must be HTTPS.
  url = "https://engine:8080/v2"

  # How long to wait for a connection to be established (in seconds).
  conn_timeout = 5
  # Once a connection is established, how many seconds to wait for a response.
  timeout = 400
  # Factor to increase the timeout by for each additional retry (for
  # exponential backoff).
  timeout_backoff = 1.2

  # If you fail to get a valid response (timeout or unexpected error), then
  # how many attempts should be made in total, including the initial attempt?
  # This is applied *per IP address* that the domain name in the URL resolves
  # to. If your domain resolves to multiple IPs, then "1" may be sufficient.
  retries_per_ip = 1

  # Before attempting a retry, sleep for this long (in seconds)
  retry_sleep = 2
  # Factor to increase the retry sleep by for each additional retry (for
  # exponential backoff).
  retry_backoff = 1.6

  # Maximum response to deserialize from Driver (in bytes)
  max_response_size = 1073741824

# Additional speech engines ("drivers") can be defined here; they too should
# be in the standard pool, using [[driver_pool.standard]].

The Deepgram Engine configuration file determines how the Deepgram Engine container built from the Docker image will be run. It includes important file paths and settings related to models that you will be using.

# Keep in mind that all paths are in-container paths and do not need to exist
# on the host machine.

# Configure license validation if you do not pass the DEEPGRAM_API_KEY environment variable.
# [license]
  # Your license key
  # key = "LICENSE_KEY"

# Configure the server to listen for requests from the API.
[server]
  # The IP address to listen on. Since this is likely running in a Docker
  # container, you will probably want to listen on all interfaces.
  host = "0.0.0.0"
  # The port to listen on
  port = 8080

# To support metrics we need to expose an Engine endpoint
[metrics_server]
  host = "0.0.0.0"
  port = 9991

# Speech models. You can place these in one or multiple directories.
[model_manager]
 search_paths = ["/models"]

# Enable ancillary features
# To disable any of these features, just remove or comment out the respective
# feature section.
[features]
 # Enable Language Detection by setting this to true, set to false to disable
 language_detection = true # or false

# Size of audio chunks to send in seconds. Min/max default to 7/10s,
# which is ideal for batch requests. If streaming, we generally
# recommend 0.5-3s chunks for lowest latency
# [chunking]
#  [chunking.streaming]
#   min_duration = 0.5
#   max_duration = 3.0
#
# Engine will automatically enable half precision operations if your GPU supports
# them. You can explicitly enable or disable this behavior with the state parameter
# which supports enabled, disabled, and auto (the default).
# [half_precision]
#   state = "enabled" # or "disabled" or "auto"

Starting and Using Your Container

To make sure your Deepgram on-prem deployment is properly configured and running, you will want to start Docker, run the container, and make a sample request to Deepgram.

Start Docker

Now that you have your configuration files set up and in the correct location to be used by the containers, use Docker Compose to run them:

docker-compose up -d
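To confirm that the services started, list the running containers; both the api and engine services should report an "Up" state:

# Both services should show a state of "Up".
docker-compose ps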

Test Your Deepgram Setup with a Sample Request

Test your environment and container setup with a local file:

curl -X POST --data-binary @[FILE_PATH] http://localhost:8080/v1/listen

Finally, test the model that was automatically loaded by Deepgram Engine. To do this, download the American English Harvard Sentences file from the Open Speech Repository and run it through the model:

wget https://www.voiptroubleshooter.com/open_speech/american/OSR_us_000_0019_8k.wav

curl -X POST --data-binary @OSR_us_000_0019_8k.wav "http://localhost:8080/v1/listen?model=general-dQw4w9WgXcQ&version=2021-08-18"; echo

Adding the License Proxy Server

For customers deploying Deepgram’s on-premises solution in production, Deepgram recommends the License Proxy Server, which is a caching proxy that communicates with the Deepgram-hosted license server to ensure uptime and simplify network security.

🚧

If you aren't certain which products your contract includes or if you are interested in adding the License Proxy Server to your on-premises deployment, please consult a Deepgram Account Representative. To reach one, contact us!

There are many benefits of using the License Proxy Server:

  • For production deployments, the License Proxy Server allows your deployed on-premises services to continue to run even if your deployment loses connectivity to the Deepgram license server.

  • If network security requirements dictate that traffic over the web is allowed from only certain hosts, the License Proxy Server can be deployed statically while ASR services scale elastically.

  • If your customers will deploy Deepgram software as part of your on-premises solution and their traffic must flow back to your environment, you may deploy the License Proxy Server to relay traffic on to the Deepgram license server.

To see an architecture overview or learn about monitoring the License Proxy Server, see License Proxy Server.

Installing

Deepgram makes all of its products available through Quay.

  1. Log in to your Quay account from one of your servers.

  2. Similar to the API and Engine instructions, identify the latest on-prem release in the Deepgram Changelog. Filter by "On-Prem", and select the latest release. You can use either the container image tag or the release tag listed for all images referenced in this documentation.

  3. Run the following command:

    docker pull quay.io/deepgram/onprem-license-proxy:IMAGE_OR_RELEASE_TAG
    

🚧

Be sure to replace the IMAGE_OR_RELEASE_TAG placeholder value with the appropriate tag identified in step 2. Use this tag in all related configuration files.

Deploying

Before you can run your on-prem deployment with the License Proxy Server, you must configure the License Proxy Server. To do this, you will need to update your Docker Compose and container configuration files, then restart Docker.

Update Your Docker Compose File

Replace the default docker-compose.yml file in your home directory with the following:

version: "3.7"

services:

  # The speech API service.
  api:
    image: quay.io/deepgram/onprem-api:IMAGE_OR_RELEASE_TAG

    # Here we expose the API port to the host machine. The container port
    # (right-hand side) must match the port that the API service is listening
    # on (from its configuration file).
    ports:
      - "8080:8080"

    environment:  
      DEEPGRAM_API_KEY: "${DEEPGRAM_API_KEY}"  

    # Uncomment the next two lines if using a customized configuration file
    # volumes:
      # - "/path/to/api.toml:/api.toml:ro"
      # The API configuration file needs to be accessible; this should point to
      # the file on the host machine.

    # Invoke the API server
    command: -v serve 
    # If you are using a custom configuration file mounted at /api.toml, use the following command instead 
    # command: -v serve /api.toml

  # The speech engine service.
  engine:
    image: quay.io/deepgram/onprem-engine:IMAGE_OR_RELEASE_TAG

    # Change the default runtime.
    runtime: nvidia

    # The Engine models need to be accessible; these paths
    # should point to files/directories on the host machine.
    volumes:
      # The in-container paths must
      # match the paths specified in the Engine configuration file. The default location is "/models"
      - "/path/to/models:/models:ro"
      # Uncomment the next line if using a customized configuration file
      # - "/path/to/engine.toml:/engine.toml:ro"
    
    ports:
      - "9991:9991"

    environment:  
      DEEPGRAM_API_KEY: "${DEEPGRAM_API_KEY}"  

    # Invoke the Engine service
    command: -v serve
    # If you are using a custom configuration file mounted at /engine.toml, use the following command instead
    # command: -v serve /engine.toml

  license-proxy:
    image: quay.io/deepgram/onprem-license-proxy:IMAGE_OR_RELEASE_TAG

    ports:
      - "8443:8443"
      - "8089:8089"

    command: -v serve --license-key "API_KEY_SECRET" --host 0.0.0.0 --port 8443 --status-port 8089

🚧

Be sure to replace the API_KEY_SECRET placeholder value with the Deepgram On-Premises API key secret you generate in the Deepgram Console.

Update Your Container Configuration Files

In your api.toml and engine.toml files, add a line that specifies the URL to your deployed proxy immediately below your license key configuration:

[license]
server_url = "https://license-proxy:8443"
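Note that the license-proxy hostname matches the service name in the Docker Compose file above; if you rename that service, update this URL accordingly. Once the stack is running, you can confirm from the host that the proxy's status port is reachable. This minimal check only verifies that something is responding on the port mapped above; the proxy's status endpoints are covered in the License Proxy Server guide:

# A three-digit HTTP status code means the port is responding; 000 means no connection.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8089/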

Restart Docker

Restart Docker to begin directing licensing requests through the proxy:

docker-compose up -d

After restarting, test that an ASR request is processed as expected.
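For example, rerun the sample request from earlier in this guide:

curl -X POST --data-binary @OSR_us_000_0019_8k.wav "http://localhost:8080/v1/listen?model=general-dQw4w9WgXcQ&version=2021-08-18"; echo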


What’s Next

Once you've deployed your Deepgram On-premises system, it's time to take a look at maintaining your deployment or best practices related to autoscaling your system based on demand.