Custom image classifier in 5 min on Mac - what's the alternative on Windows/Linux?

Prototyping with custom image classification, I involved my brother, a Mac user.
He followed these easy steps using CoreML and Swift:
https://developer.apple.com/documentation/createml/creating_an_image_classifier_model
In one minute he created a custom classifier with 3 lines of code, training the net in a minute on an ultrabook (a few dozen images as training set). He can then generate the app in one step and also test it on iOS.

I was wondering if there is an equally easy alternative on Windows/Linux, with deployment on Android.
As far as I understand, the only options are TensorFlow Lite (which requires TensorFlow for custom categories), Caffe2, or PyTorch with the model then converted to Caffe2 through ONNX.
In all cases I need to select a pre-trained net, retrain it and test it, all with a bigger effort for someone not literate in Python.
I'm wondering if I'm missing something or if there is an equivalent alternative.
Just to know: can someone give an example of the equivalent fast.ai code for retraining a ResNet with two new custom categories?

Hi @davide445,

One fact is that CoreML was made to consume online services (IBM Watson, AzureML, Cognitive Services, Amazon Rekognition, …); in this sense any modern IDE can do it (Visual Studio, Android Studio, …), consuming REST or SOA services in a few lines of code. For example: you create an AzureML experiment, ingest the data, create the model, train it and deploy the service in the same spot, everything on the AzureML platform. Later you pass the service endpoint to the application that consumes it.

The other fact, that CoreML can operate with offline frameworks, is a plus: it can interface with products like TensorFlow, Keras, Caffe…, but to do that it needs machine learning expertise and GPU power to train the model.
When you deal with models that are already pre-trained, you can even use the CPU without hurting the machine's performance, because you just retrain the last layer of the network (one dense layer); training everything else would cost days on a Mac unless you have a Blackmagic eGPU.

Probably in the near future we will have that kind of pipeline (a Mac-style pipeline to build mobile products) on other platforms too. But I see it as if Apple has invested its time in building a pipeline rather than a machine learning framework.

TensorFlow, on the other hand, already has a product in its ecosystem where, with a few lines of code, you can use a pre-trained model.

PyTorch is coming close to it using Caffe2.

2 Likes

FastAI Example

Note: assuming you are trying to do image classification:

1 - You have to create a structure like this, the same as if you were using any other framework for image classification:

data
│
├── train
│    ├── cats
│    │   ├── cat.zzz.jpg
│    │   ├── cat.yyy.jpg
│    │   └── ... (as many as you want)
│    │
│    └── dogs
│        ├── dog.aaa.jpg
│        ├── dog.bbb.jpg
│        └── ... (as many as you want)
│
└── valid
     ├── cats
     │   ├── cat.ccc.jpg
     │   ├── cat.ddd.jpg
     │   └── ... (as many as you want)
     │
     └── dogs
         ├── dog.000.jpg
         ├── dog.111.jpg
         └── ... (as many as you want)
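A tiny Python helper can scaffold that layout (the data path and the cats/dogs names are just the placeholders from the tree above):

```python
from pathlib import Path

# One subfolder per category, mirrored under both splits
for split in ("train", "valid"):
    for category in ("cats", "dogs"):
        Path("data", split, category).mkdir(parents=True, exist_ok=True)

# Drop your images into the matching folder, e.g. data/train/cats/cat.zzz.jpg
print(sorted(p.as_posix() for p in Path("data").glob("*/*")))
# ['data/train/cats', 'data/train/dogs', 'data/valid/cats', 'data/valid/dogs']
```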

Here you can follow the tutorial.

https://docs.fast.ai/tutorial.data.html

1 Like

Hi @willismar, thanks for your answer.
My doubts come from the easiness of the Apple implementation.
I'm not a developer; I mostly want to prototype in order to define specifications based on the business requirements, size the project, and define the tools, standards, budget, skills and resources needed.
I know that using Python it's not such a difficult task, but I'm mostly lacking the time for learning, so every option that streamlines my work is welcome.
For sure the tutorial you linked is nothing impossible to follow; it simply appears more complex than using CoreML + Xcode + Swift (one line of code is all that's needed: drop in the training set, done), which also works offline on an old MacBook Air out of the box (no remote service provisioning needed, no payment for an online or offline GPU, no Linux installation, etc.).
Not being a Mac user I was searching for an alternative; maybe Knime could be an idea, or it might just add another layer of complexity.

Hi @davide445,

I work in Linux, so I love training models in Jupyter and saving my pre-trained models ready for production.
I just left the TensorFlow pipeline because of its learning curve, and I am investing in learning fast.ai and PyTorch.

I really believe that PyTorch will very soon arrive at an optimal pipeline for this. They are already working hard on it.

Knime is for data mining; it's more about tabular data and algorithms you can train on a CPU. I may be wrong on the details, since I'm inclined to use Python with scikit-learn and PyTorch as backends.

Along the same lines as Knime you can find Weka for Java programming, where you can train a model offline in Weka and it dumps the code for you to use in a Java application.

https://www.cs.waikato.ac.nz/ml/weka/

1 Like

Hi @willismar, in fact I was mentioning Knime mostly for its "easier" approach more than for its features.
Based on node programming, it is able, among other things, to use deep learning to analyze both quantitative data and images, using various frameworks as backends (currently TF, in the future also PyTorch).

Hi…

Being not a Mac user I was searching for alternative, maybe Knime can be an idea, or might just add another layer of complexity.

Knime is for analytics!

I am not sure what you are wondering about anymore. There is no easy path to prototype or build stuff.

"If you want to make a GOOD chair you have to grab the nails, the wood and the saw."

But at least you need to try. If Knime can use another backend, then do it and develop your pipeline to build stuff.

For example, I don't use these tools outside containers… I use a container as a desktop. I don't care what the market says about "how I need to do these things"; I develop my own techniques.

1 Like

True; reading more carefully, it can be useful for creating the training set and integrating the trained model's answers into a workflow, but not for actually designing/training a new model.
I'm just trying to figure out my workflow.
I was also trying to install fast.ai and TF on my PC to actually test a simple retrain, but I got a lot of problems, probably because the CPU is old and doesn't support AVX.
In the next days I will try with a new PC/server or some service.
Thanks for your thoughts!

1 Like

Hi…

Yeah, these days they are dropping support for old hardware (old GPUs with CUDA compute capability < 3.5 and old CPUs without SSE4, AVX, etc.). To really use recent tools on this kind of machine you need to build the tool from source (another layer of complexity).

Let me know your progress on that.

Cheers

1 Like

Hi @willismar

I was testing with Docker on my new laptop and it is fabulous: I can test TF and Keras notebooks without any problem; everything is pre-configured and I just need to execute the notebook.

The problem comes with PyTorch, and also fast.ai.
I can find PyTorch Docker images working only with an Nvidia GPU, which I don't have on my laptop. I also can't find a fast.ai Docker image, at least none that I can tell would work on my laptop.

Is there any Docker image available packing fastai v1 + PyTorch + Jupyter working on CPU?

Hi

Well there are lots of things you can do.

I believe that if you start a GPU Docker image without a GPU it will disable the driver, because Docker with GPU depends on a hook library called nvidia-container-runtime, and if you start your container without it, the GPU will not be enabled inside the container.

That way PyTorch will never try to use the GPU and fastai will comply. But you can even downgrade your PyTorch to the CPU-only build to make sure.

You can also learn to build a Docker image with just the libraries you need. Then you only have to install Miniconda, pytorch-cpu, fastai and Jupyter, and save the result as a template image with the name you want. Doing this you can always start a new instance from your template image.

1 Like

Learning to build one is absolutely beyond my skills and time.
The official PyTorch Docker image requires CUDA:
https://github.com/pytorch/pytorch#docker-image
Also, I didn't find any fast.ai Docker image that can be used locally without calling an external server.

Well I disagree …

Let me help you… I am writing a Dockerfile and the command you need to use to build it.

Hi again @davide445

Linux Version (CPU-Only) [Updated]

Save this content in a file called Dockerfile inside a folder of your preference.

FROM ubuntu:16.04

ARG user=root
ENV USER=${user}

ARG uid=0
ENV UID=${uid}

ARG PYTHON_VERSION=3.7
RUN apt-get update && apt-get install -y --no-install-recommends \
         sudo \
         build-essential \
         cmake \
         git \
         curl \
         vim \
         ca-certificates \
         libjpeg-dev \
         libpng-dev \
         net-tools \
         iproute2 && \
     rm -rf /var/lib/apt/lists/*

# Create the user only if it does not already exist (e.g. the default root)
RUN id -u ${USER} >/dev/null 2>&1 || useradd -Um ${USER}

RUN echo "${USER} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers;

USER $USER

WORKDIR /home/${USER}

RUN curl -o ${HOME}/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
     chmod +x ${HOME}/miniconda.sh && \
     ${HOME}/miniconda.sh -b -p ${HOME}/conda && \
     rm ${HOME}/miniconda.sh

ENV PATH /home/${USER}/conda/bin:${PATH}

RUN echo "export PATH=${HOME}/conda/bin:${PATH}" | tee -a ${HOME}/.bashrc

RUN  ${HOME}/conda/bin/conda install -y python=${PYTHON_VERSION} jupyter notebook && \
     ${HOME}/conda/bin/conda install -y numpy mkl mkl-include setuptools cmake cffi typing pyyaml scipy cython pip && \
     ${HOME}/conda/bin/conda install -y -c pytorch pytorch-nightly-cpu torchvision-cpu && \
     ${HOME}/conda/bin/conda install -y -c mingfeima mkldnn && \
     ${HOME}/conda/bin/conda install -y -c fastai fastai
 
RUN  sudo mkdir /projects -p && sudo chown ${USER}:${USER} /projects && \
     echo "export JUPYTER_IP=\"$(ip route get 8.8.8.8 | awk '{print $NF; exit}' )\""  | tee -a ${HOME}/.bashrc && \
     echo "export JUPYTER_PORT=8888"  | tee -a ${HOME}/.bashrc && \
     echo 'alias jbook="jupyter notebook --port=${JUPYTER_PORT} --ip=${JUPYTER_IP}  --NotebookApp.notebook_dir=/projects --no-browser --allow-root"'  | tee -a ${HOME}/.bashrc && \
     echo "source activate" | tee -a ${HOME}/.bashrc

CMD ["/bin/bash"]

Building

From the terminal, go inside that folder and run the following commands:

cd "name of your folder"
docker build --build-arg user=${USER} --build-arg uid=${UID} -t fastai .

It will download the Ubuntu base image and install Miniconda, pytorch-cpu, torchvision-cpu, fastai and Jupyter Notebook.

Running

Create a folder on the host that you share with the guest, so that files created in this folder inside the container are reflected outside and vice versa.

Note 1: the --rm flag removes the container instance as soon as you leave it. Containers are ephemeral, so you use one and trash it afterwards.

Note 2: assuming your projects folder is located at ~/projects or ${HOME}/projects, all files created in /projects (on the guest) will be saved to ~/projects (on your host).

docker run -it --rm -p 8888:8888 -v ${HOME}/projects:/projects fastai:latest bash

On Container / Guest

Inside the container, just call this alias:
Note: to change the port used you need to rebuild the image; it will only take a few seconds, because Docker caches the layers of previous build steps.

jbook

Or call it manually

jupyter notebook --port=8888 --ip='0.0.0.0'  --NotebookApp.notebook_dir=/projects --no-browser --allow-root
2 Likes

Hi again @davide445

I just updated a few lines in the instructions above, and in the script too.
It's very, very easy now.

1 Like

Hi @willismar
First of all, thanks for your availability in doing this; I think you could publish the result on Docker Hub, where it would be useful for many.
In my case I got this error after all the packages were downloaded, when I supposed the build was finished:

The command '/bin/sh -c useradd -Um ${USER}' returned a non-zero code: 2

and listing the images, there is one created just now but with no name:

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
<none>       <none>   64ca1bceb169   52 seconds ago   441MB

So then, trying to run it:

docker run -it -p 8888:8888 fastai:latest bash
Unable to find image 'fastai:latest' locally
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: pull access denied for fastai, repository does not exist or may require 'docker login'.

Just a note: I'm using a Windows laptop, not a Mac. Could this maybe require changing something in your script?

It appears your build did not finish!
When you finish it you will have something like this as the template:

REPOSITORY     TAG            IMAGE ID        CREATED         SIZE
fastai         latest         78fdc80d6f14    3 hours ago     4.85GB

Hi @davide445,

Type this at your shell prompt:

echo "${USER}" 
echo "${UID}"

Tell me if it answers with your user name correctly.

If the USER variable doesn't exist in your environment, you may need to pass it manually:
the name of the user as ${USER} and the ${UID} of your user in the build command, so Docker can generate the exact same user inside the container and share content with your host.

1 Like

I was able to run it using

docker run -it -p 8888:8888 64ca1bceb169 bash

And something is running

CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         PORTS                    NAMES
37b68745d7ba   64ca1bceb169   "bash"    2 minutes ago   Up 2 minutes   0.0.0.0:8888->8888/tcp   inspiring_minsky

But when trying to run Jupyter:

root@37b68745d7ba:/# jupyter notebook --port 8888 --ip='0.0.0.0' --no-browser
bash: jupyter: command not found

Using Windows PowerShell (remember I'm on a Windows machine) I didn't get any result from those echo commands.

Hi, yes, but without finishing the entire process correctly the final image is still broken.
Update your Dockerfile and execute the build again; Docker will only re-process what remains to be done.