A visualizer for deep learning and machine learning models, for debugging, papers, and presentations!

I found a great visualizer for our NN models and wanted to share it here with you.
Please refer to my reply below on how to use it with fastai models.

Netron is a viewer for neural network, deep learning and machine learning models.

Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), CoreML (.mlmodel), Caffe2 (predict_net.pb, predict_net.pbtxt), MXNet (.model, -symbol.json) and TensorFlow Lite (.tflite). Netron has experimental support for Caffe (.caffemodel, .prototxt), PyTorch (.pth), CNTK (.model), scikit-learn (.pkl), TensorFlow.js (model.json, .pb) and TensorFlow (.pb, .meta, .pbtxt).

Install

macOS: Download the .dmg file or run brew cask install netron

Linux: Download the .AppImage or .deb file.

Windows: Download the .exe installer.

Browser: Start the browser version.

Python Server: Run pip install netron and netron -b [MODEL_FILE]. In Python, run import netron and netron.start('model.onnx').
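
For instance, a minimal sketch of the Python usage (the filename 'model.onnx' here is just a placeholder; point it at any supported model file):

import netron

# Serves the viewer for the given file and opens it in your browser.
netron.start('model.onnx')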

Download Models

Sample model files you can download and open:

ONNX Models: Inception v1, Inception v2, ResNet-50, SqueezeNet

Keras Models: resnet, tiny-yolo-voc

CoreML Models: MobileNet, Places205-GoogLeNet, Inception v3

TensorFlow Lite Models: Smart Reply 1.0, Inception v3 2016

Caffe Models: BVLC AlexNet, BVLC CaffeNet, BVLC GoogleNet

Caffe2 Models: BVLC GoogleNet, Inception v2

MXNet Models: CaffeNet, SqueezeNet v1.1

TensorFlow Models: Inception v3, Inception v4, Inception 5h

https://github.com/lutzroeder/netron


For example, I wanted to see SqueezeNet 1.1.
Here it is (the complete architecture image is attached as a PDF file):
squeeze_predict_net_original.pdf (715.0 KB)


A bit of background: what is this tool for, and why do we need it?

hayder78, init_27 and I are working together on a Part 1 (v3) course project in our virtual study group.

https://hackernoon.com/anothernothotdog-280ee5b86fb3

Share your work here ✅ - #316 by init_27 (accessible only to part1-v3 participants)

Instead of doing the Lesson 2 homework (trying web deployment of a model), a few members of the Fast.ai Asia virtual study group are building a mobile app, with everything running on the phone: an “Another Not Hotdog” app, but using PyTorch.

Our goal in starting this project:

Make it easier to ship and test your neural network model in PyTorch on mobile devices.

Please see my GitHub repo for more details:

What is happening in this project that makes such a tool valuable? We need a tool to visualize and debug our ONNX graph and see the network architecture:


Good work, guys! Gonna catch up with you soon after my 4 finals.

@hwasiti That’s a nice tool!

Does it make sense to visualize a model created with fastai, which can be found in data/models/name.h5? Trying to do so, I get a ‘Not a valid HDF5 file’ error:

I haven’t used it with fastai yet. I needed the visualizer to debug SqueezeNet for the PyTorch project that Cedric mentioned. I ported the model to Caffe2 (predict_net.pb, predict_net.pbtxt) via ONNX, and it worked great.

Here is what worked for me to visualize fastai models:

Instead of using the default fastai save (learn.save('stage-1'), which saves only the model parameters), export the entire model as shown below. Please note that PyTorch support in Netron is experimental, and import dill as dill; torch.save(learn.model, path/'resnet34-entire-model-save.pth', pickle_module=dill) did not work for me in Netron, although it saved the entire fastai model without issues.
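
As a quick sanity check (my addition; it assumes the same path variable and dill import as above), you can load the file back and confirm the full module came through rather than a bare state dict:

import dill as dill
import torch

# type() should report an nn.Module subclass (e.g. Sequential),
# not an OrderedDict of parameters.
model = torch.load(path/'resnet34-entire-model-save.pth', pickle_module=dill)
print(type(model))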

Here is a simple script which exports your fastai model to ONNX. It runs a single round of inference and then saves the resulting traced model to resnet34-entire-model.onnx. This step matters because PyTorch models are dynamic, not static like TensorFlow models; hence this extra tracing step is needed to capture the model architecture and export it in ONNX format:

import torch.onnx

dummy_input = torch.randn(10, 3, 224, 224).cuda()

# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
#
# The keyword argument verbose=True causes the exporter to print out a
# human-readable representation of the network.

input_names = ["actual_input_1"] + ["learned_%d" % i for i in range(16)]
output_names = ["output1"]

torch.onnx.export(learn.model, dummy_input, path/"resnet34-entire-model.onnx",
                  verbose=True, input_names=input_names, output_names=output_names)

Source code from the PyTorch docs.

You should have ONNX installed, following the steps here. I got a ‘package not found’ error when I tried to install it from conda-forge, so I had to compile it as shown on the GitHub page.
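
Once ONNX is installed, a sanity check I would suggest (my addition; the filename matches the export script above) is to load the exported file and run ONNX's own checker:

import onnx

# Load the exported graph and verify its structure.
model = onnx.load(str(path/"resnet34-entire-model.onnx"))
onnx.checker.check_model(model)

# Print a text summary of the graph, roughly what Netron renders visually.
print(onnx.helper.printable_graph(model.graph))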

Example: the resnet34 Lesson 1 pets model:

The entire ResNet model is attached as resnet34-entire-model.pdf (1.5 MB)

LIMITATIONS

  • The ONNX exporter is a trace-based exporter, which means that it operates by executing your model once, and exporting the operators which were actually run during this run. This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won’t be accurate. Similarly, a trace is likely to be valid only for a specific input size (which is one reason why we require explicit inputs on tracing.) We recommend examining the model trace and making sure the traced operators look reasonable.

Source
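
To make that limitation concrete, here is a toy module of my own (not from the PyTorch docs): its data-dependent branch cannot be captured by a trace, so the exported graph would hard-code whichever branch the dummy input happened to take.

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def forward(self, x):
        # Data-dependent control flow: tracing records only the branch
        # taken for the dummy input; the if/else disappears from the export.
        if x.sum() > 0:
            return x * 2
        return x - 1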

cc: @jeremy Do you have a better model visualizer? Something cool like this to impress my wife! :slight_smile:


@hwasiti Thanks for your post. I have already implemented this solution by converting my fastai weights into ONNX or pb and visualizing the identical network graphs. My initial concern was how I could visualize the original .h5 weight file in order to verify the correctness of the conversion to ONNX or Caffe.

That does not seem to work yet. PyTorch support is experimental, and exporting to ONNX was the only way I found to visualize our fastai models.

The ‘Not a valid HDF5 file’ error will show for some fast.ai files as these are saved with PyTorch ‘.pth’ format but have an incorrect ‘.h5’ extension (see https://github.com/fastai/fastai/issues/1181). Renaming to ‘.pth’ will fix the issue. For ‘.pth’ format the full model needs to be saved to render a graph. ‘.pth’ support is experimental at the moment supporting basic sequential models only.
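
You can confirm this from Python before renaming (my addition; it assumes h5py is installed and uses a hypothetical file path):

import h5py

# learn.save() writes a PyTorch pickle, so HDF5 tools reject the file
# even though it carries a '.h5' extension.
print(h5py.is_hdf5('data/models/stage-1.h5'))  # prints False for such files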

Here is how to save the full model, instead of the default fastai save method which saves only the model parameters:

import dill as dill
torch.save(learn.model, path/'resnet34-entire-model-save.pth', pickle_module=dill)

Yes, I can confirm that it did not work for my resnet34 model. The workaround is to export it to ONNX, and that worked.

1 Like

Unfortunately, ONNX fails to convert models with complex ops like upsample_nearest3d, group_norm, etc. :frowning:
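
A hedged suggestion rather than a fix: some ops only gained ONNX coverage in later operator sets, so, depending on your PyTorch and ONNX versions, explicitly passing a newer opset_version to the export call is worth a try (reusing learn.model and dummy_input from the script above):

import torch.onnx

# opset_version selects the ONNX operator set; newer opsets cover more ops,
# though there is no guarantee for every operator.
torch.onnx.export(learn.model, dummy_input, "model.onnx", opset_version=11)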


Hello, can you share an example of how to visualize AWD_LSTM with this tool? Thanks in advance!