Netron is a visualizer for deep learning and machine learning models, useful for debugging, papers and presentations!

Here is how I got it working to visualize fastai models:

Instead of using the default fastai learn.save('stage-1'), which saves only the model parameters, export the entire model as shown below. Please note that PyTorch support in Netron is experimental: import dill as dill; torch.save(learn.model, path/'resnet34-entire-model-save.pth', pickle_module=dill) saved the entire fastai model without issues, but Netron could not open the resulting file for me.
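As an aside on why dill is passed as the pickle_module at all: a fastai Learner can hold references to objects such as lambdas or locally defined functions (losses, transforms) that the standard-library pickler refuses to serialize, while dill handles them. A toy stdlib-only illustration (the lambda here is my own example, not fastai code):

```python
import pickle

# A lambda stands in for the kind of local function a fastai Learner may carry.
fn = lambda x: x + 1

try:
    pickle.dumps(fn)          # the stdlib pickler cannot serialize lambdas
    needs_dill = False
except (pickle.PicklingError, AttributeError):
    needs_dill = True         # dill can serialize such objects where pickle fails

print(needs_dill)             # True
```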

Here is a simple script that exports your fastai model to ONNX. It runs a single round of inference and then saves the resulting traced model to resnet34-entire-model.onnx. This step is important because PyTorch models are dynamic, not static like TensorFlow models, so the exporter needs to run the model once to trace the architecture before it can be written out in ONNX format:

import torch.onnx

dummy_input = torch.randn(10, 3, 224, 224).cuda()

# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
# The keyword argument verbose=True causes the exporter to print out a human-readable representation of the network

input_names = ["actual_input_1"] + ["learned_%d" % i for i in range(16)]
output_names = ["output1"]

torch.onnx.export(learn.model, dummy_input, path/"resnet34-entire-model.onnx",
                  verbose=True, input_names=input_names, output_names=output_names)

Source code adapted from the PyTorch docs

You should have ONNX installed first, following the steps here. I got a "package not found" error when I tried to install it from conda-forge, so I had to compile it from source as shown on the GitHub page.

Example of the resnet34 lesson 1 pets model:

Entire resnet model attached as resnet34-entire-model.pdf (1.5 MB)

LIMITATIONS

  • The ONNX exporter is a trace-based exporter, which means that it operates by executing your model once, and exporting the operators which were actually run during this run. This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won’t be accurate. Similarly, a trace is likely to be valid only for a specific input size (which is one reason why we require explicit inputs on tracing.) We recommend examining the model trace and making sure the traced operators look reasonable.

Source
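The dynamic-model caveat above can be sketched with a toy, pure-Python stand-in for a trace-based exporter (this is only an analogy I wrote for illustration, not the real JIT tracer):

```python
def model(x):
    # data-dependent control flow: the behavior changes with the input
    if x > 0:
        return x * 2
    return x - 1

def trace(fn, example_input):
    """Run the model once and record only the ops that actually executed."""
    ops = []
    if example_input > 0:        # this branch choice is baked into the trace
        ops.append(lambda x: x * 2)
    else:
        ops.append(lambda x: x - 1)

    def traced(x):
        for op in ops:           # replay the recorded ops, ignoring branches
            x = op(x)
        return x
    return traced

traced = trace(model, 3)  # traced with a positive example input
print(traced(3))          # 6  -- matches model(3)
print(traced(-2))         # -4 -- but model(-2) == -3: the trace is wrong here
```

Just like the real exporter, the toy trace bakes in the branch taken for the example input, so it silently gives the wrong answer for inputs that would have taken the other branch; that is why it's worth inspecting the trace after export.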

cc : @jeremy Do you have a better model visualizer? Something cool like this to impress my wife! :slight_smile:
