[Pre-Release] MagNet Deep Learning Framework - Help Needed

Hi everyone :grin:,

I’ve been a FastAI student for a year and have been doing my own projects for a while.
I really appreciate Jeremy and Rachel’s vision for AI and for using intelligence to create a positive impact.

For about six months now, I’ve been working on my own high-level DL framework.

It’s called MagNet.

MagNet’s core aim is to enable developers to create Deep Learning projects that build themselves.

It’s a wrapper around PyTorch (like Keras around TensorFlow) that lets you create, train, and debug models more easily.

For instance, here’s how simple it is to classify MNIST:

[animated GIF: MNIST quickstart]

Click here for more.


I’m really excited to see how this framework will enable Deep Learning practitioners to write better AI and discover new possibilities.

I would really appreciate it if you could take a look and tell me what you think.

@jeremy @rachel Any advice on how to take this forward?

P.S. If anyone’s interested in contributing, do join the Gitter chat.
Any help would be greatly appreciated.


This looks really cool, Vaisakh, but your animated GIF goes just too fast; it’s very hard to tell what’s going on. Could you post some plain, non-animated code to give an idea of your API?

Thanks for checking it out :slightly_smiling_face:.

I’ll make the GIF slower.

In the meantime, here’s a Jupyter Notebook I made as a quick start.

The core idea of MagNet is Nodes that react to the input/computational graph and adapt their parameters (dimensionality, arguments, shape, activation, etc.).
This lets you create complex architectures more easily and with more clarity.
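
To give a flavour of what that means, here’s a minimal sketch of a shape-inferring layer in plain PyTorch (just an illustration of the concept, not MagNet’s actual implementation; the name LazyConv is made up):

from torch import nn

class LazyConv(nn.Module):
    # A toy shape-inferring convolution: the number of input channels
    # isn't specified up front; it's read off the first input tensor,
    # the way a MagNet Node reacts to the computational graph.
    def __init__(self, out_channels, kernel_size=3):
        super().__init__()
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.conv = None  # built lazily on the first forward pass

    def forward(self, x):
        if self.conv is None:
            in_channels = x.shape[1]  # inferred from the input
            self.conv = nn.Conv2d(in_channels, self.out_channels,
                                  self.kernel_size, padding=1)
        return self.conv(x)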

Secondly, the Trainer class allows streamlined training of models via callbacks (similar to Keras).
Unlike most frameworks, you can bake your own training logic into the trainer by overriding the optimize() method.
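
For example, a trainer with custom gradient clipping could look roughly like this (a simplified sketch; get_loss(), self.model, and self.optimizer are placeholders here, not the final API):

import torch

class ClippingTrainer(SupervisedTrainer):
    # Bake custom training logic into the trainer by overriding optimize()
    def optimize(self):
        loss = self.get_loss()  # placeholder: forward pass + loss
        loss.backward()
        # Custom step: clip gradients before the optimizer update
        torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=1.0)
        self.optimizer.step()
        self.optimizer.zero_grad()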

The workflow looks something like this:

# Get the MNIST dataset
data = Data.get('mnist')

# Create a simple 3-layer CNN
# Note that you only need to specify the bare essentials.
# The Nodes attach to each other like magnets (hence the name).
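# mn.Conv() * 2 creates two copies of the Conv node; * unpacks them into the Sequential.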
model = nn.Sequential(mn.Conv(32), *mn.Conv() * 2, mn.Linear(10, act=None))

summarize(model, next(data)[0])

+----------+------------+----------------------+
|   Node   |   Shape    | Trainable Parameters |
+----------+------------+----------------------+
|  input   | 1, 28, 28  |          0           |
+----------+------------+----------------------+
|   Conv   | 32, 14, 14 |         320          |
+----------+------------+----------------------+
|   Conv   |  64, 7, 7  |        18,496        |
+----------+------------+----------------------+
|   Conv   | 128, 4, 4  |        73,856        |
+----------+------------+----------------------+
|  Linear  |     10     |        20,490        |
+----------+------------+----------------------+
Total Trainable Parameters: 113,162

# Train the model for one epoch
trainer = SupervisedTrainer(model)

# MagNet provides callbacks that can add various features to training
# This callback tracks the loss, metrics and adds a nice progress-bar
monitor_callback = callbacks.Monitor()
trainer.train(data(batch_size=64, shuffle=True), callbacks=[monitor_callback])

# See the progress
monitor_callback

Do check the repo out.

There are a lot more features that I haven’t mentioned.