Gatys et al. style transfer implementation

As an exercise, I am trying to implement the Gatys et al. style transfer in fastai v1, but there are some details that are tripping me up.

I started with the old course notebook: https://github.com/fastai/fastai/blob/master/courses/dl2/style-transfer.ipynb#scrollTo=aFYZSnsdZO_D

But a lot of the syntax is completely unknown to me, so I was pointed in the direction of a fastai v2 implementation: https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0/blob/master/Computer%20Vision/05_Style_Transfer.ipynb

This is a lot more understandable to me, although there are still some things I have trouble implementing. The function at issue is:

def get_style_im(url):
    download_url(url, 'style.jpg')
    fn = 'style.jpg'
    dset = Datasets(fn, tfms=[PILImage.create])
    dl = dset.dataloaders(after_item=[ToTensor()], after_batch=[IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)], bs=1)
    return dl.one_batch()[0]

From what I understand, this downloads an image and returns it as a one-image batch, normalized with the ImageNet stats. So how do I make a single-image batch in fastai v1?
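
If I'm reading it right, it boils down to something like this in plain PyTorch / torchvision (the file name and the explicit ImageNet stats here are just my assumptions, not taken from the notebook):

from PIL import Image
import torchvision.transforms as T

imagenet_mean = [0.485, 0.456, 0.406]
imagenet_std  = [0.229, 0.224, 0.225]

img = Image.open('style.jpg').convert('RGB')
to_batch = T.Compose([
    T.ToTensor(),                              # HWC uint8 -> CHW float in [0, 1]
    T.Normalize(imagenet_mean, imagenet_std),  # same stats as imagenet_stats
])
style_batch = to_batch(img).unsqueeze(0)       # add batch dim -> [1, 3, H, W]

But I would rather stay inside fastai v1 if possible.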

My current solution is:

dl = DataLoader(style_img, batch_size=1)   # style_img is the opened style image
db = DataBunch(dl, dl)                     # same DataLoader passed as train and valid
batch = db.one_batch()

But this way I am forced to provide a validation set, and I cannot get '.normalize(imagenet_stats)' to work on 'db'. Is there a simple solution that I am missing?
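
One direction I was considering is to go through the data block API instead, since '.normalize(imagenet_stats)' seems to be defined on ImageDataBunch rather than on a plain DataBunch. A rough sketch of what I mean (not tested; the single-item ImageList and the empty-label step are my guesses for handling one unlabelled image):

from fastai.vision import *

il = ImageList(['style.jpg'])
db = (il.split_none()          # everything goes into the train set, no validation items
        .label_empty()         # dummy labels, since style transfer doesn't need any
        .databunch(bs=1)
        .normalize(imagenet_stats))
batch, _ = db.one_batch()      # tensor of shape [1, 3, H, W]

But I don't know if that is the idiomatic way to batch a single unlabelled image, so any pointers would be appreciated.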