Using PyTorch with CUDA on WSL2 (2020)
Introduction
I spent a couple of days figuring out how to train deep learning models on Microsoft’s Windows Subsystem for Linux (WSL). The process was a bit of a hassle. While the official installation guides are adequate, some headaches came up during regular use. This post summarizes my experience getting everything working.
What is WSL
WSL is a compatibility layer that lets you run a Linux environment directly on Windows. You can run Linux command-line tools and applications, invoke Windows applications from the Linux command line, and access Windows drives through the Linux file system. The most recent version, WSL2, uses a real Linux kernel, which provides support for more applications such as Docker. More importantly for my purposes, it also enables GPU-accelerated applications.
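To make that concrete, here are a couple of things you can do from the Ubuntu shell once WSL is set up (the file names are just examples):
ls /mnt/c/Users
notepad.exe notes.txt
Windows drives are mounted under /mnt by default, and Windows executables can be launched directly from the Linux command line.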
Motivation
I’ve been dual-booting Windows and Linux for a while now. I prefer Linux for coding and training models, while Windows has broader application support. This setup didn’t have any real drawbacks for me until I started working with the Barracuda library for Unity. Unity is installed on Windows, but my environment for training deep learning models is on Linux, which is inconvenient whenever I want to test a newly trained model in Unity. I decided to try WSL2 in the hope that it would remove the need to switch between operating systems.
Installing WSL
The install process for most WSL2 use cases is straightforward. You just need to enable a few features and install your preferred Linux distribution from the Microsoft Store. However, the process for enabling CUDA support is a bit more involved.
Install Windows Insider Build
CUDA applications are only supported in WSL2 on Windows builds 20145 or higher, which are currently only available through the Dev Channel of the Windows Insider Program. I confirmed it does not work with the latest public release. Microsoft requires you to enable Full telemetry collection to install Insider builds, which was annoying since the first thing I do when installing Windows is disable every accessible telemetry setting. Fortunately, I only needed to temporarily enable a couple of settings to install an Insider build.
Install Nvidia’s Preview Driver
Nvidia provides a preview Windows display driver for their graphics cards that enables CUDA on WSL2. This single Windows driver covers both regular Windows use and WSL; you are not supposed to install a separate display driver inside the Linux distribution itself.
Install WSL
You can install WSL with one line in the command window if you install a preview build first. I did it backwards so I had to use the slightly longer manual installation. I went with Ubuntu 20.04 for my distribution since that’s what I currently have installed on my desktop.
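For reference, the manual route boils down to enabling the two required Windows features from an elevated PowerShell prompt, rebooting, and then setting WSL2 as the default version before installing a distribution from the Microsoft Store. This is a condensed sketch of Microsoft’s documented steps:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2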
Setting Up Ubuntu
The setup process was basically the same as for regular Ubuntu, except there are no display drivers to install.
Update Ubuntu
As usual, I first checked for any updates. There were quite a few.
sudo apt update
sudo apt upgrade
Install CUDA Toolkit
The next step was to install the CUDA toolkit. Nvidia lists WSL-Ubuntu as a separate distribution. I don’t know what makes it functionally different from the regular Ubuntu distribution; both worked and performed the same for me when training models. You can view the instructions I followed for both by clicking the links below.
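As a rough sketch, the WSL-Ubuntu route amounts to adding Nvidia’s CUDA apt repository and installing the toolkit package. The repository URL, signing key, and package version change over time, so treat the ones below as placeholders and use whatever Nvidia’s current instructions specify:
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/7fa2af80.pub
sudo sh -c 'echo "deb https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64 /" > /etc/apt/sources.list.d/cuda-wsl.list'
sudo apt update
sudo apt install -y cuda-toolkit-11-1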
Install Anaconda
I like to use Anaconda, so I downloaded the latest available release to the home directory and installed it like normal.
cd ~
wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh
chmod +x Anaconda3-2020.11-Linux-x86_64.sh
./Anaconda3-2020.11-Linux-x86_64.sh
As usual, I had to restart bash to pick up the new Python interpreter.
exec bash
After that, the interactive python interpreter started without issue.
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Install Fastai Library
To test whether I could access the GPU, I installed the fastai library, which is built on top of PyTorch. The installation went smoothly.
conda install -c fastai -c pytorch -c anaconda fastai gh anaconda
I was able to confirm that PyTorch could access the GPU using the torch.cuda.is_available() function.
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>>
I opened up a Jupyter notebook and trained a ResNet50 model to confirm that the GPU was actually being used. The Task Manager in Windows accurately displays the available GPU memory and temperature for WSL applications, but not GPU usage. The nvidia-smi command doesn’t work in WSL yet either; I believe Nvidia is planning to add that functionality in a future release. However, the nvidia-smi.exe command does accurately show GPU usage.
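For reference, a minimal training run along the lines of the standard fastai pets example (shown here with placeholder settings, not my actual notebook) is enough of a smoke test to watch GPU memory and temperature climb:
from fastai.vision.all import *

# Download a small sample dataset and label images by filename (cat breeds start with an uppercase letter)
path = untar_data(URLs.PETS)/'images'
def is_cat(fname): return fname[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Fine-tune a ResNet50 for one epoch; this runs on the GPU when CUDA is available
learn = cnn_learner(dls, resnet50, metrics=error_rate)
learn.fine_tune(1)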
The Headaches
Everything seemed to be working as I’d hoped. However, I started encountering some issues the more I used WSL.
Memory Usage
By default, WSL distributions will take up as much system memory as is available and not release it. The problem is compounded since Windows already takes up a decent chunk of memory on its own. This seems to be something Microsoft is still working on. In the meantime, you can limit the amount of memory WSL can access. The workaround involves creating a .wslconfig file in your Windows user folder (e.g. C:\Users\Username). You can see the contents of an example config file below.
[wsl2]
memory=6GB
GPU memory usage doesn’t suffer from this problem, so it wasn’t too big of an issue for me.
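For what it’s worth, the same file accepts a few other keys, such as processors and swap, if you also want to cap CPU count or swap size (the values below are arbitrary):
[wsl2]
memory=6GB
processors=4
swap=8GB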
File Permissions
This is where things started to get more inconvenient for my use case. The way WSL handles permissions for files on mounted Windows drives isn’t readily apparent to new users. I didn’t have any problem accessing the previously mentioned Jupyter notebook or the image dataset I used to train the model. However, I couldn’t access the images in a different dataset when training another model.
I tried adding the necessary permissions in Ubuntu, but that didn’t work. I even tried copying the dataset to the Ubuntu home directory. I eventually found a solution on Stack Exchange that involves adding another config file, this time on the Ubuntu side. I needed to create a wsl.conf file in the /etc/ directory. It enables metadata for files on mounted drives so that permission changes actually take effect.
[automount]
enabled = true
root = /mnt/
options = "metadata,umask=22,fmask=11"
I had to restart my computer after creating the file for it to take effect. You can learn more about wsl.conf files and the settings used in the above example at the links below.
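After the restart, a quick way to confirm the metadata option took effect is to change the permissions on a file under /mnt and check that the change sticks (the path below is just a placeholder for wherever your dataset lives):
chmod 644 /mnt/c/datasets/sample.jpg
ls -l /mnt/c/datasets/sample.jpg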
Disk Space
This is the one that killed the whole endeavor for me. After I was able to access the original dataset, I deleted the copy I had made in the Ubuntu home directory. I noticed that my disk usage didn’t decrease after deleting the 48GB of images. This is also a known problem with WSL. There is another workaround that lets you manually reclaim unused disk space; it involves the following steps.
- Open PowerShell as an Administrator.
- Navigate to the folder containing the virtual hard drive file for your distribution.
- Shut down WSL.
- Run optimize-vhd on the virtual hard drive file.
cd C:\Users\UserName_Here\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows_79rhkp1fndgsc\LocalState
wsl --shutdown
optimize-vhd -Path .\ext4.vhdx -Mode full
You currently need to do this every time you want to reclaim disk space from WSL. By this point, any convenience I’d gain over a dual-boot setup had been wiped out.
Conclusion
I’m excited about the future of WSL. Having such tight integration between Windows and Linux has a lot of potential. Unfortunately, it’s not at a point where I’d feel comfortable switching over from a dual-boot setup. I’m hoping that the issues I encountered will get resolved in 2021. I’ll give it another shot when CUDA support comes out of preview.