Drivers and Containerization Platforms

Once you have provisioned a deployment environment with a Linux operating system installed, you need to configure it.

While some cloud providers will automatically install NVIDIA drivers for use with NVIDIA GPUs, many do not, so we will walk through how to install NVIDIA drivers for the GPUs and expose them for our use.

We will also step through installing a containerization platform. We highly recommend Docker, but you may also use Podman if you are using Red Hat Enterprise Linux (RHEL) version 8 or higher, or another similar distribution that does not ship or support Docker.

📘

Other pages in Deepgram's documentation may exclusively list example commands using docker. If you are using a different containerization platform, such as podman, you may need to adjust the commands accordingly.

Prerequisites

Make sure you have completed the steps in the AWS guide or Bare-Metal Servers guide.

Note on Different Linux Distributions

Various Linux distributions have a default or preferred package manager for the installation and management of system packages. For example, apt is associated with Ubuntu and dnf is associated with RHEL.

This guide contains instructions that are specific to Ubuntu and RHEL, but they should be adaptable to many other Linux distributions. You will see comments above commands and sections when an action is distribution-specific. If there are no such comments or headers above a set of instructions, it should work across distributions.

Update System Package Manager

Update your server’s operating system package manager to get information on updated versions of packages and their dependencies, and upgrade these packages as needed.

# Ubuntu
sudo apt update
sudo apt upgrade -y
# RHEL
sudo dnf upgrade --refresh -y

Install GNU Toolchain Components

Install the GNU Compiler Collection (gcc), GNU Make (make), and GNU Wget (wget) tools:

# Ubuntu
sudo apt install -y gcc make wget
# RHEL
sudo dnf install -y gcc make wget

Remove Nouveau Drivers

The Nouveau kernel driver is incompatible with NVIDIA drivers, so you will need to disable it before installing any NVIDIA drivers.

  1. In your terminal, create a new configuration file:

    sudo touch /etc/modprobe.d/blacklist-nouveau.conf
    
  2. Using sudo and the text editor of your choice, add the following lines to the /etc/modprobe.d/blacklist-nouveau.conf configuration file (or use the single-command alternative shown after this list):

    blacklist nouveau
    options nouveau modeset=0
    
  3. Regenerate the initramfs so it picks up the new configuration file:

    # Ubuntu
    sudo update-initramfs -u
    # RHEL
    sudo dracut --force
    
  4. Reboot the instance:

    sudo reboot
    
  5. Once the instance has finished rebooting, reconnect via SSH and verify that Nouveau has been removed:

    lsmod | grep nouveau
    

    If you see no output, Nouveau was successfully removed.
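
If you prefer not to open a text editor, steps 1 and 2 above can be combined into a single command. This is just an equivalent shortcut that writes the same two lines:

# Write the Nouveau blacklist configuration in one step (equivalent to steps 1 and 2)
printf 'blacklist nouveau\noptions nouveau modeset=0\n' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf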

Install NVIDIA Drivers

Install RHEL Kernel Development Tools

📘

The Install RHEL Kernel Development Tools subsection only applies to RHEL. See NVIDIA's driver documentation to check whether additional preparation steps are required before proceeding with the driver download and installation.

Install the Linux kernel development tools for RHEL to support installing the NVIDIA drivers.

  1. Install the RHEL kernel development tools and associated headers:

    sudo dnf -y install kernel-devel-`uname -r` kernel-headers-`uname -r`
    
  2. Install the RHEL Extra Packages for Enterprise Linux (EPEL):

    sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-{VERSION_NUMBER}.noarch.rpm
    

🖥️

At the end of the above command, replace the {VERSION_NUMBER} placeholder with your RHEL major version number, without the braces, e.g. 9.

  3. Install the Dynamic Kernel Module Support (DKMS) framework:

    sudo dnf -y install dkms
    

Download and install the official drivers

  1. Identify the latest driver for the GPU you are using and retrieve its download URL from the NVIDIA Official Drivers page.
  2. Select the product type. For cloud instances, this will often be Data Center/Tesla.
  3. Select the product series and product. You can find this information by running the NVIDIA System Management Interface from your deployment environment and looking for the name of the GPU near the top of its output. If nvidia-smi is not yet available, see the alternative shown after this list.

    nvidia-smi

  4. Select your operating system. For most users, like those on Ubuntu, this will be Linux 64-bit. If you are on RHEL, select the appropriate version instead.

  5. Finally, choose the Download Type (Production Branch), and choose the latest CUDA toolkit.

  6. Select Search and check that the correct driver is displayed, then select Download.

  7. Right-click Agree & Download, then copy the link to save the download URL to your clipboard.

  8. Download the latest driver for your GPU on your deployment environment:

    wget LINK_TO_LATEST_NVIDIA_GPU_DRIVER
    

    🖥️

    Be sure to replace the LINK_TO_LATEST_NVIDIA_GPU_DRIVER placeholder value with the URL to the latest driver for the GPU you are using.

  9. Follow the instructions in the Linux 64-bit subsection if you are on a Linux distribution other than RHEL, such as Ubuntu, and chose that for your operating system in step 4 above. If you are using RHEL, skip to the Linux 64-bit RHEL subsection.
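
If the NVIDIA System Management Interface (nvidia-smi) is not yet available in your deployment environment (for example, because no NVIDIA driver has been installed yet), you can identify the GPU model from the PCI bus instead. This is a standard system utility rather than part of the NVIDIA tooling; on most distributions it ships in the pciutils package:

# List NVIDIA devices on the PCI bus to find the GPU model name
lspci | grep -i nvidia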

Linux 64-bit

  1. If you chose a Linux 64-bit driver, for use with all Linux distributions other than RHEL, change the permissions on the downloaded file to allow your user to execute it:

    # Example: chmod +x NVIDIA-Linux-x86_64-535.54.03.run
    chmod +x DOWNLOADED_FILE_NAME
    
  2. Decompress and execute the driver file to install it.

    # Example: sudo ./NVIDIA-Linux-x86_64-510.47.03.run --silent
    sudo ./DOWNLOADED_FILE_NAME --silent
    
  3. With the --silent install, you will see warnings that are similar to the following (they can be ignored):

    WARNING: Ignoring CC version mismatch:
    
    The kernel was built with gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34, but the current compiler version is cc (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0.
    
    WARNING: nvidia-installer was forced to guess the X library path '/usr/lib64' and X module path '/usr/lib64/xorg/modules'; these paths were not queryable from the system.  If X fails to find the NVIDIA X driver module, please install the `pkg-config` utility and the X.Org SDK/development package for your distribution and reinstall the driver
    
  4. Test that the NVIDIA drivers are installed, then proceed to Install Container Engine.

    nvidia-smi
    

Linux 64-bit RHEL

  1. If you chose a Linux 64-bit RHEL driver, for use with Red Hat Enterprise Linux specifically, install the downloaded driver package with rpm and clean the package cache.

    # Example: sudo rpm -i cuda-repo-rhel8-12-0-local-12.0.1_525.85.12-1.x86_64.rpm
    sudo rpm -i DOWNLOADED_FILE_NAME
    sudo dnf clean all
    
  2. Install the NVIDIA drivers.

    sudo dnf -y module install nvidia-driver:latest-dkms
    
  3. You may also need to manually install the NVIDIA CUDA repository and toolkit; to do so, follow the NVIDIA CUDA Installation Guide.

  4. Test that the NVIDIA drivers are installed, then proceed to Install Container Engine.

    nvidia-smi
    

Install Container Engine

For ease of use, Deepgram provides its products in container images, so you must make sure that you have installed the latest version of Docker Engine (or an alternative such as Podman) on all hosts.

  1. Install the container engine. To learn how, read Install Using the Repository in Docker’s documentation.

    1. If using RHEL or another distribution that doesn't distribute Docker Engine, you can install Podman with their installation instructions.

      📘

      If you are using Podman, other guides in the On-Prem documentation will contain commands using docker. Change all of these to use podman.

  2. It's possible to grant your user (e.g. ubuntu or ec2-user) sufficient permissions to run container engine commands without elevated privileges (without sudo). See Manage Docker as a Non-Root User in Docker’s optional post-installation documentation; a condensed version of those steps is shown after this list.

    1. If using RHEL or another distribution that doesn't distribute Docker Engine, the process to run container engine commands without elevated privileges is somewhat more involved. See this tutorial for basic setup and use of Podman in a rootless environment.

    📘

    If you do not follow step 2, you cannot run container engine commands without elevated privileges. You must run any docker, docker compose, or podman commands in the remainder of the on-prem guides, including this one, with sudo.
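
As referenced in step 2, Docker's post-installation steps condense to the commands below. This is only a summary; refer to Docker's documentation for the authoritative, distribution-specific instructions.

# Create the docker group (it may already exist) and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER

# Log out and back in (or run `newgrp docker`) for the group change to take effect,
# then verify that docker commands work without elevated privileges
docker run hello-world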

Install Container Composition Tools

Container Composition tools allow users to define and manage multi-container applications using simple YAML configuration files that can be checked into source control. They enable the orchestration and coordination of services, automating the deployment, scaling, and management of containerized applications.

Docker

Docker Compose V2 is now included with Docker Engine. The CLI plugin should have been installed during the Install Container Engine steps above. If not, you can install it independently:

# Ubuntu
sudo apt install -y docker-compose-plugin

Test the installation:

docker compose version

You should expect the command output to return version 2.X.X.
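
As a quick sanity check that Compose is working end to end, you can define and run a minimal, throwaway configuration. The service name and image below are placeholders and are not part of a Deepgram deployment:

# Write a minimal compose file with a single placeholder service
cat <<'EOF' > docker-compose.yml
services:
  hello:
    image: ubuntu:22.04
    command: ["echo", "compose is working"]
EOF

# Validate the file, run the one-off service, and clean up afterwards
docker compose config
docker compose up
docker compose down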

Podman

The open source community maintains a podman-compose tool that seeks to be compatible with Docker Compose. You can install this with their instructions on GitHub, and test your installation:

podman-compose version

Install the NVIDIA Container Runtime

CUDA is NVIDIA's parallel computing platform and API for its GPUs. CUDA support is made available to containers using the NVIDIA Container Toolkit.

Docker

nvidia-docker exposes the NVIDIA Container Toolkit to the Docker runtime. Follow the Docker instructions from NVIDIA to set up this runtime.
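
Assuming you have already installed the nvidia-container-toolkit package by following NVIDIA's guide, the final configuration step typically looks like the following:

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker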

After you've set up the NVIDIA runtime for Docker, you can test it with the following command:

docker run --runtime=nvidia --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi

Podman

Podman has implemented support for the Container Device Interface (CDI) standard in its container engine, which allows for direct use of the NVIDIA container toolkit. Follow the CDI Support instructions from NVIDIA to install and configure the toolkit.
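
Assuming the NVIDIA Container Toolkit is already installed per NVIDIA's instructions, generating the CDI specification that Podman consumes typically looks like the following:

# Generate the CDI specification describing the available GPUs
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the device names now available through CDI
nvidia-ctk cdi list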

After you've set up the NVIDIA Container Toolkit with CDI, you can test it with the following command:

sudo podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi

Summary

This guide walked you through installing the NVIDIA drivers needed to interact with the GPU that will run inference, as well as the containerization platform that will be used to run Deepgram services.

As a reminder, many of our guides assume use of Docker. If you are on Red Hat Enterprise Linux or have another reason to use Podman instead of Docker, keep in mind the commands and configuration may be slightly different.


What’s Next

Next, we head to the Deepgram Console to generate the credentials needed for our deployment.