Conv VAE examples using fastai


(James Maxwell) #1

I’m wondering if anybody knows of any sample code/examples of building a Conv VAE using the fastai library?


(James Maxwell) #2

Bump…

Okay, I’ve adapted a PyTorch VAE and the Learner seems happy enough loading it. However, when I try to run lr_find() it complains about the data. I just have two folders of images, train and test, but I’m really not sure how I’m supposed to provide the same image as both input and output (i.e., for the autoencoder)?

I’m getting the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-42-f01cf5c6afa7> in <module>
----> 1 learner.lr_find()

~/src/fastai/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, wd)
 30     cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div)
 31     a = int(np.ceil(num_it/len(learn.data.train_dl)))
---> 32     learn.fit(a, start_lr, callbacks=[cb], wd=wd)
 33 
 34 def to_fp16(learn:Learner, loss_scale:float=512., flat_master:bool=False)->Learner:

~/src/fastai/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
176         callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
177         fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 178             callbacks=self.callbacks+callbacks)
179 
180     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

~/src/fastai/fastai/utils/mem.py in wrapper(*args, **kwargs)
101 
102         try:
--> 103             return func(*args, **kwargs)
104         except Exception as e:
105             if ("CUDA out of memory" in str(e) or

~/src/fastai/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
 88             for xb,yb in progress_bar(data.train_dl, parent=pbar):
 89                 xb, yb = cb_handler.on_batch_begin(xb, yb)
---> 90                 loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)
 91                 if cb_handler.on_batch_end(loss): break
 92 

~/src/fastai/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
 22 
 23     if not loss_func: return to_detach(out), yb[0].detach()
---> 24     loss = loss_func(out, *yb)
 25 
 26     if opt is not None:

~/src/fastai/fastai/layers.py in __call__(self, input, target, **kwargs)
237 
238     def __call__(self, input:Tensor, target:Tensor, **kwargs)->Rank0Tensor:
--> 239         input = input.transpose(self.axis,-1).contiguous()
240         target = target.transpose(self.axis,-1).contiguous()
241         if self.floatify: target = target.float()

AttributeError: 'tuple' object has no attribute 'transpose'

Presumably this is because it’s not getting what it’s expecting from the DataBunch… but I’ve no idea where to start with fixing it.
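My guess is that the default loss fastai assigned (a FlattenedLoss wrapper, per the traceback) calls `.transpose()` on the model output, so a VAE whose `forward()` returns a tuple breaks it. A custom loss function that unpacks the tuple ought to get past this — a minimal sketch in plain PyTorch, assuming the model’s `forward()` returns `(reconstruction, mu, logvar)` in that order (the names here are my assumption, matching the common PyTorch VAE example):

```python
import torch
import torch.nn.functional as F

def vae_loss(output, target):
    # 'output' is assumed to be the (reconstruction, mu, logvar) tuple
    # returned by the VAE's forward(); adjust the unpacking to match
    # your model.
    recon, mu, logvar = output
    # Reconstruction term: pixel-wise MSE against the target image.
    recon_loss = F.mse_loss(recon, target, reduction='sum')
    # KL divergence between the learned latent distribution and N(0, I).
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kld
```

You’d then pass it explicitly, e.g. `Learner(data, model, loss_func=vae_loss)`, so fastai doesn’t substitute its default flattened loss.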

I’ve done nothing special with the data, just:

PATH = "/home/jbmaxwell/src/data/image_data/"
data = ImageDataBunch.from_folder(PATH, ds_tfms=None, size=64)

J.


(James Maxwell) #3

On the whimsical notion that perhaps train and valid need to be the same thing (i.e., since it’s auto-encoding), I tried:

data = ImageDataBunch.from_folder(PATH, train='train', valid='train', ds_tfms=None, size=64, bs=32)

But I get the same error.
I just need to know how to provide the same image as both the input and the target for the optimizer… ugh…
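One framework-agnostic way to get input-equals-target (separate from whatever fastai’s data block API may offer for image-to-image tasks) is to wrap the dataset so each item labels itself — a hypothetical sketch in plain PyTorch (`AutoencoderDataset` is a made-up name, not a fastai class):

```python
from torch.utils.data import Dataset

class AutoencoderDataset(Dataset):
    """Wrap any dataset so each item's target is the item itself."""
    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        x = self.base[i]
        # If the wrapped dataset returns (input, label) pairs,
        # discard the label and use the input for both slots.
        if isinstance(x, (tuple, list)):
            x = x[0]
        return x, x
```

With something like this feeding the train and valid DataLoaders, the loss function receives the image itself as `target`.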

(It would be really, really, extremely helpful if there were some examples that were less black-box-ish and magical.)