Setting up Docker-based CUDA Environment on a New Windows Machine
Since v19.03.1, Docker Engine has supported passing NVIDIA GPU devices directly to containers via the --gpus flag, so there is no longer any need to use NVIDIA-specific images or runtimes explicitly.
This guide will walk you through setting up the Docker-based CUDA environment on a new Windows machine.
Step 1: Install NVIDIA Drivers
Ensure you have the latest NVIDIA drivers installed for your GPU. You can download them from the Official NVIDIA Drivers website.
Run nvidia-smi in Command Prompt or PowerShell to verify the installation:
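nvidia-smi
If the driver is installed correctly, this prints a table listing the driver version, the highest CUDA version the driver supports, and the detected GPUs. If the command is not found, reinstall the driver before continuing.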
Step 2: Install WSL 2
Run the following command in PowerShell as Administrator to enable WSL and install WSL 2:
wsl --install
Reboot your machine if prompted.
Here we choose Ubuntu 24.04 as our WSL distribution:
wsl --install Ubuntu-24.04
Reboot your machine again if prompted.
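To confirm that the distribution is registered and running under WSL 2 (rather than WSL 1), list the installed distributions and their WSL versions:
wsl -l -v
The VERSION column should show 2 for Ubuntu-24.04.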
Step 3: Install Docker Desktop
Download and install Docker Desktop from the Install Docker Desktop on Windows page. Remember to choose WSL 2 as the backend during installation. Reboot your machine if prompted.
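Once Docker Desktop is running, a quick sanity check from PowerShell confirms that the Docker CLI can reach the engine (the hello-world image is pulled automatically on first run):
docker --version
docker run --rm hello-world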
Step 4: Setup Docker Container with NVIDIA Toolkit
Start a Docker container with the following command:
docker run -dit --name cuda_dev -v "${PWD}:/workspace/cuda" -w /workspace/cuda --gpus all ubuntu:24.04
The flags used are:
-dit: Run the container in detached mode with an interactive terminal;
--name cuda_dev: Name the container cuda_dev;
-v "${PWD}:/workspace/cuda": Mount the current directory to /workspace/cuda in the container;
-w /workspace/cuda: Set the container’s working directory to /workspace/cuda;
--gpus all: Allocate all available GPUs to the container. Without this flag, the container won’t have access to the GPU (see the quick check after this list);
ubuntu:24.04: Use the Ubuntu 24.04 image as the base image.
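To confirm that the GPUs are actually visible inside the running container, call nvidia-smi through docker exec:
docker exec -it cuda_dev nvidia-smi
If Docker’s GPU integration with the WSL 2 backend is working, the NVIDIA driver libraries and the nvidia-smi binary are injected into the container, and this prints the same device table as on the host.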
Connect to the container (via SSH, the Docker CLI, or the VS Code Remote - Containers extension) and run the following commands to set up the CUDA environment inside the container:
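As a minimal sketch, assuming the CUDA toolkit is installed from Ubuntu’s own package repository (the package names below are Ubuntu’s; use NVIDIA’s apt repository instead if you need a specific CUDA release):
docker exec -it cuda_dev bash
apt-get update
apt-get install -y build-essential nvidia-cuda-toolkit
nvcc --version
The last command prints the installed nvcc version, confirming that the compiler is ready to build CUDA programs against the GPUs exposed by --gpus all.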