Avoiding Python version chaos in ML

python
machine learning
technical
Working with machine learning often means juggling multiple Python versions, CUDA drivers, TensorFlow/PyTorch builds, and environment conflicts. Let’s look at how Docker can ease the version pain. Say goodbye to “works on my machine” and Python dependency hell.
Author

Dominik Lindner

Published

August 28, 2025

1 Frequently changing projects

When I started my first machine learning (ML) project in 2020, I naively tried to install CUDA directly on my PC. It took me a day: several libraries were incompatible with each other, and some packages were difficult to install.

Even recently, another issue arose when I was switching between projects frequently. A bare-metal installation is not practical in that situation. It gets messy fast, especially when one project needs Python 3.10 with TensorFlow 2.15 + GPU, and another wants Python 3.12 with different dependencies.

Thankfully, Docker solves all of that.

1.1 How is Docker particularly useful for ML Dev?

Docker lets you isolate your dev environment per project without affecting your system Python. A virtual environment can do this too, but as soon as a project needs a different base Python version, you still have to change something on your host system.

That means you can use any Python version inside the container, e.g., Python 3.10 even if your host runs 3.12.

You can also install TensorFlow, PyTorch, and CUDA via NVIDIA’s container images without polluting your system.
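As a concrete sketch, each project can simply pin its own base image in its Dockerfile; the image tags below are illustrative:

```dockerfile
# project-a/Dockerfile: pins Python 3.10, whatever the host runs
FROM python:3.10-slim

# project-b/Dockerfile (a separate file): newer interpreter, different deps
FROM python:3.12-slim
```

Each container carries its own interpreter, so neither project ever touches the host’s Python.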

2 Quick Setup for ML Dev

Dockerfile (GPU + TensorFlow + Keras):

FROM tensorflow/tensorflow:2.15.0-gpu

WORKDIR /app

# Install project dependencies on top of the TF base image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

CMD ["bash"]

docker-compose.yml (GPU enabled):

version: "3.8"

services:
  keras-dev:
    build: .
    image: keras-dev
    volumes:
      - .:/app                        # project source mounted into the container
      - /your/local/data:/mnt/storage # host data directory (placeholder path)
    runtime: nvidia                   # requires the NVIDIA Container Toolkit (see below)
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    tty: true

Build and run:

docker-compose up --build
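With the container up, you can open a shell inside it or run one-off commands; `keras-dev` is the service name defined in the compose file above. (Newer Docker installs ship Compose as the `docker compose` plugin; with the standalone binary, substitute `docker-compose`.)

```shell
# Interactive shell inside the running service
docker compose exec keras-dev bash

# One-off check of the container's interpreter, without entering it
docker compose run --rm keras-dev python --version
```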

3 IDE Integration: Using the Container in PyCharm (Pro)

  1. Add Docker-Compose interpreter
  2. Point to your docker-compose.yml
  3. Select the keras-dev service
  4. Use /usr/local/bin/python as the interpreter path
  5. Enable GPU with NVIDIA Container Toolkit

This gives you a full dev environment, with terminal, debugging, and Python completion, all inside the container.
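A quick sanity check to run inside the container (it assumes TensorFlow, which the base image above provides) is to list the GPUs the interpreter can see:

```python
import tensorflow as tf

# Prints an empty list if the container has no GPU access
print(tf.config.list_physical_devices("GPU"))
```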


3.1 Bonus: One-Time Setup for GPU Access

Install the NVIDIA Container Toolkit (the package lives in NVIDIA’s own apt repository, which has to be configured first; see NVIDIA’s install guide):

sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
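A common smoke test, suggested in NVIDIA’s install docs, is running nvidia-smi in a throwaway CUDA container; the image tag below is only an example:

```shell
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If this prints your GPU table, the toolkit is wired up correctly.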

Allow docker without sudo:

sudo usermod -aG docker $USER
newgrp docker

© 2025 by Dr. Dominik Lindner
This website was created with Quarto