Introducing PyTorch-accelerated

Hi all,

I’m a long-time fastai user, but over the last year or so - due both to the seemingly reduced pace of development on fastai and to wanting to use distributed training without modifying my code - I have found myself using plain PyTorch much more. Whilst this solved some of my issues, I quickly tired of the amount of boilerplate I had to write!

Whilst there are other popular frameworks that act as high-level APIs, I struggled to find one that suited my needs, so I ended up creating a (very) lightweight library for my own use; after using it internally where I work for a while, I was encouraged by my colleagues to make it available publicly!

pytorch-accelerated is a lightweight library designed to accelerate the process of training PyTorch models by providing a minimal but extensible training loop - encapsulated in a single Trainer object - which is flexible enough to handle the majority of use cases and capable of utilizing different hardware options with no code changes required.

It’s built on top of Hugging Face accelerate (developed by Sylvain), and its key differentiators are a streamlined feature set and a huge emphasis on simplicity and transparency: the aim is to let users understand exactly what is going on under the hood, without having to write and maintain the boilerplate themselves! It is primarily aimed at users who like, and are comfortable with, selecting and creating their own models, loss functions, optimizers and datasets.

A very basic example of use, taken from here, looks like this:

import os

from torch import nn, optim
from torch.utils.data import random_split
from torchvision import transforms
from torchvision.datasets import MNIST

from pytorch_accelerated import Trainer

class MNISTModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            nn.Linear(in_features=784, out_features=128),
            nn.ReLU(),
            nn.Linear(in_features=128, out_features=64),
            nn.ReLU(),
            nn.Linear(in_features=64, out_features=10),
        )

    def forward(self, input):
        return self.main(input.view(input.shape[0], -1))

def main():
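    # download MNIST and split its 60,000 training images into train/validation/test subsets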
    dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
    train_dataset, validation_dataset, test_dataset = random_split(dataset, [50000, 5000, 5000])
    model = MNISTModel()
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    loss_func = nn.CrossEntropyLoss()

    # the Trainer encapsulates the training loop, taking the model, loss function and optimizer
    trainer = Trainer(
        model,
        loss_func=loss_func,
        optimizer=optimizer,
    )

    trainer.train(
        train_dataset=train_dataset,
        eval_dataset=validation_dataset,
        num_epochs=8,
        per_device_batch_size=32,
    )

    trainer.evaluate(
        dataset=test_dataset,
        per_device_batch_size=64,
    )

if __name__ == "__main__":
    main()
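
As the Trainer is built on top of accelerate, the same script should run on different hardware setups without modification; as a rough sketch (assuming the script above is saved as train.py), the accelerate CLI can be used to describe your hardware once and then launch the unchanged script across it:

accelerate config            # answer a few questions about your hardware; saved to a config file
accelerate launch train.py   # run the same script on the configured devices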

If you are interested in learning more, check out PyTorch-accelerated on GitHub, read the introductory blog post, or take a look at the docs.

I was slightly hesitant about posting this here, as it is an alternative to fastai, but I believe that they ultimately serve different purposes and some people here may find it useful!

I look forward to any feedback that anyone may have.

