From simple image classification model to Android/iOS app?

Hi there,
I have started with the course (youtube lectures), and am really fascinated with the whole topic so far.
And luckily, so far, I haven’t run into too many technical issues…

This might be a bit overambitious, but I was thinking of creating a simple phone app that runs my image classification model, just as an exercise to go through the whole process from training a model to actually using it in an app.

All the posts that I could find regarding creating android/ios apps are relatively ‘old’

this one for example is three years old by now:
Deployment boilerplate for fastai-v2 (fastai dev, Course Forums)

Could someone tell me if there is more up to date information on this subject?

Thank you


Oh, and just as I posted my question, I found a fairly new post about this topic:

PyTorch at the Edge: Deploying Over 964 TIMM Models on Android with TorchScript and Flutter - Deep Learning - Course Forums

That might be what I needed…


OK, after days of struggling, I am getting a bit frustrated. None of the example scripts from the pytorch_lite GitHub repo are working for me (in Flutter, Android Studio etc.), apparently due to compatibility issues with outdated versions.

The Flutter installation, Dart, Android Studio etc. are all working fine. Once I try to run the PyTorch example Android apps (main.dart), I run into issues.

Has anyone recently had any success with deploying a trained model to Android?


Dependency and version management is an issue in Dart and Flutter, I think. What is the output of:

flutter doctor


Hi Dickson,
thanks for getting back to me.
flutter doctor doesn’t have any issues:

[✓] Flutter (Channel stable, 3.7.11, on Microsoft Windows [Version 10.0.19045.2846], locale en-GB)
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
[✓] Chrome - develop for the web
[✓] Visual Studio - develop for Windows (Visual Studio Community 2022 17.5.4)
[✓] Android Studio (version 2022.2)
[✓] VS Code (version 1.77.3)
[✓] Connected device (3 available)
[✓] HTTP Host Availability

• No issues found!

For the flutter_app, once in Android Studio, I do get lots of warnings from the Gradle assembleDebug task:

Running Gradle task ‘assembleDebug’…

Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: Mapping new ns to old ns
Warning: unexpected element (uri:"", local:"extension-level"). Expected elements are <{}codename>,<{}layoutlib>,<{}api-level>
Warning: unexpected element (uri:"", local:"base-extension"). Expected elements are <{}codename>,<{}layoutlib>,<{}api-level>

But these are warnings, not errors.

The error happens one step later:

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ‘:app:processDebugMainManifest’.
> Unable to make field private final java.lang.String accessible: module java.base does not “opens” to unnamed module @3bf017f5

Does that ring a bell? Any ideas?

Thank you

I get similar errors when I try to run the examples from the official PyTorch tutorials,
so my issue must be something basic that I am missing; I assume it is version dependencies, as you say.

It is just weird, since all the examples are very recent, just a few months or weeks old.

These look pretty common though. I’ve seen people post similar errors.

What if you create a new Flutter project? E.g.

flutter create hello_world
cd hello_world
flutter run

This should scaffold a Flutter app folder structure with an up-to-date Gradle setup and all dependencies.

Does this run? If no then something might be wrong with the installation.

If yes, then copy the example's content into this fresh folder structure.

And run again.

Let me know if it works (or not) :slight_smile:


Not sure what it is like for prod deployment, but it seems pretty quick for prototypes.

The app/script seems to be working now! Thank you very much, this is amazing.
Dart/Flutter might not be that complicated, but I found it far too overwhelming to get my head around your script.

One thing I had to change to make your script work, though: in the android/app/build.gradle file, change

minSdkVersion flutter.minSdkVersion

to

minSdkVersion 21

And of course I had to add my own model and labels to the assets folder.
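For anyone replicating this: the model and label files also need to be declared as assets in pubspec.yaml, or Flutter won't bundle them into the app. A minimal sketch, assuming the files sit directly under assets/ (your paths and filenames may differ):

```yaml
flutter:
  assets:
    - assets/model.pt
    - assets/labels.txt
```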

Now I need to add some sample images (in assets) to try the classifier in the emulator.

thank you so much for your help!!


Oooh, this is interesting.
Thanks for the link, I will have a look.


Happy to know that it worked for you @muff! Sorry for my messy code :sweat_smile:

PS: If anyone is wondering where the code we were discussing is, it's here. Hope it helps. I'm open to suggestions and edits to make it easier for anyone trying to replicate the example. :slight_smile:


Haha, don’t worry, I am sure your code is not messy at all.
I just don’t know much about flutter/dart (yet).

Silly question:
Is the image being transformed before it is fed to the image classification model?
I know the image width/height are given (224px in your case), but how does it actually handle images that come in larger sizes?

I wonder if I need to add an image crop and resize step before feeding the images to the classifier?
I do get much better results when I feed cropped images, but I was assuming the transform would happen inside the classifier anyway?

In the Android app, we convert the model to TFLite and resize the image to the size expected by the model (224 in most cases).

In my case, I resized the image in the app before feeding it into the model, which expects a 224x224px image.
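To make the "resize before feeding" step concrete, here is a small sketch of the usual resize-then-centre-crop arithmetic that ImageNet-style 224x224 classifiers expect. The function names are my own, not from the app, and in practice you would apply these numbers with an image library rather than by hand:

```python
# Sketch of the standard ImageNet-style preprocessing geometry:
# scale the shorter side to 256, then crop the central 224x224 region.

def resize_shorter_side(w, h, target=256):
    """Scale (w, h) so the shorter side becomes `target`, keeping aspect ratio."""
    if w < h:
        return target, round(h * target / w)
    return round(w * target / h), target

def center_crop_box(w, h, size=224):
    """Return (left, top, right, bottom) of a centred size x size crop."""
    left = (w - size) // 2
    top = (h - size) // 2
    return left, top, left + size, top + size

# e.g. a 640x480 photo: scale to 341x256, then crop the central 224x224
w, h = resize_shorter_side(640, 480)
box = center_crop_box(w, h)
```

This would also explain why cropped inputs classify better: if the app only stretches an arbitrary-sized image straight to 224x224, the aspect ratio gets distorted, whereas resize-then-crop preserves it.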