Can't make pinned version of fastai 2.2.7 work inside docker

I can install fastai inside Docker without seeing any errors; however, my Python program is not able to import either fastai or torch.

What’s stranger is that if I run the container directly, without using “docker-compose up”, the imports work fine!

I’m trying to use fastai==2.2.7, which is the version I used to train my model.

My Dockerfile looks like this (nothing is commented out):

FROM python:3.8-slim

RUN apt-get update && apt-get install -y --no-install-recommends gcc build-essential cmake \
    git python3-dev && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .

RUN pip install --upgrade pip

RUN pip install --no-cache-dir -r requirements.txt

COPY app app/

And my requirements.txt looks like this:

numpy
redis
aiofiles
aiohttp
sentry_sdk
ipython
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.7.1+cpu
fastai==2.2.7
tsai==0.2.15

My docker-compose.yml looks like this:

version: '3'
services:
  web:
    image: web-server:latest
    ports:
        - "5042:5042"
    depends_on:
      - model-server
    working_dir: /app
    command: python main.py serve
  model-server:
    image: model-server:latest
    working_dir: /app
    command: python -u run_model_server.py
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

And my docker-compose.override.yml looks like this:

version: '3'
services:
  web:
    build:
      context: ./web-server
      dockerfile: Dockerfile
    image: web-dev:latest
    volumes:
        - './web-server/app:/app'
    environment:
        REDIS_HOST: 'redis'
        RELOAD: 'True'
        ENVIRONMENT: 'dev'
  model-server:
    build:
      context: ./model-server
      dockerfile: Dockerfile
    image: web-dev:latest
    volumes:
      - './model-server/app:/app'
    environment:
        REDIS_HOST: 'redis'
        ENVIRONMENT: 'dev'

The containers build OK with docker-compose up, but the model-server fails with this error:

model-server_1  | Traceback (most recent call last):
model-server_1  |   File "run_model_server.py", line 11, in <module>
model-server_1  |     import torch
model-server_1  | ModuleNotFoundError: No module named 'torch'

The imports in run_model_server.py look like this:

from pathlib import Path
import numpy as np
import settings
import time
import json
import redis
import aiohttp
import asyncio
import pickle
import torch
from fastai.imports import pv
from fastai.learner import first
from fastai.torch_core import distrib_barrier, default_device
import tsai
from tsai.all import *
import logging
from physionet_processing import extend_ts
import sentry_sdk

Any help or advice would be greatly appreciated, thanks!

When I run the container independently and try the imports…

$ docker run --rm -it model-dev
Python 3.8.7 (default, Jan 12 2021, 17:16:32) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import fastai
>>> torch.__version__
'1.7.1+cpu'
>>> fastai.__version__
'2.2.7'
>>> import tsai
>>> tsai.__version__
'0.2.15'
>>> 

I think you just need to change the name of the package in your requirements.txt from ‘torch’ to ‘torchvision’. The package is called ‘torchvision’ on PyPI.

I need to use functions from torch directly, e.g. torch.load().

Ohhh, I think I misunderstood. But this GitHub thread may be of use: How to add PyTorch to requirements.txt · Issue #29745 · pytorch/pytorch · GitHub

I’m not sure; I’m doing a bit of guesswork.

Have you tried using docker-compose up -d --build to force a rebuild?

Hi JackByte - yes, the container builds fine; it throws the error as soon as the code runs, and the container stops after that.

I am an idiot… I spotted the glaring mistake in my Docker Compose override file: I told Docker to use the same image (web-dev:latest) for both the web and model servers. They have completely different dependencies, SMH!
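For anyone hitting the same thing, here is a minimal sketch of how the model-server section of the override could look with its own image tag (assuming a hypothetical model-dev:latest tag, matching the docker run test above):

model-server:
  build:
    context: ./model-server
    dockerfile: Dockerfile
  image: model-dev:latest
  volumes:
    - './model-server/app:/app'
  environment:
    REDIS_HOST: 'redis'
    ENVIRONMENT: 'dev'

With distinct image names, each service runs the image built from its own Dockerfile, instead of both tags resolving to the same web-dev:latest image (which is why torch was missing in the model-server container).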

Thanks to those who tried to help!
