How to access the raw tensor inputs and outputs passed to the model?

Hi everyone, I have a very simple question about fastai2: I couldn’t figure out how to access the raw tensor inputs and outputs passed to the model.
Could anyone help me?
learn.dls.train.____ ???

The inputs x and outputs y are mostly calculated on the fly. If you want to see what a batch of data looks like, try

x,y = dls.one_batch()

I need to access all the raw tensors of the whole valid set, not just one_batch.

dls.valid.one_batch() grabs the next(iter()) from the DataLoader, so this should be a hint :wink:
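
In other words, you can iterate the DataLoader yourself. A minimal sketch (assuming the dls from the question; torch.cat stitches the per-batch tensors back together):

import torch

xs, ys = [], []
for xb, yb in dls.valid:                    # iterate the whole validation DataLoader
    xs.append(xb.cpu()); ys.append(yb.cpu())
x_all, y_all = torch.cat(xs), torch.cat(ys) # every raw tensor, fully transformed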


You can also access a valid dataset element at index idx using dls.valid_ds[idx]. It returns a tuple of both your input (x) and label (y) tensors.

You can cycle through the whole valid dataset or use slices like this: dls.valid_ds[0:3]
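
For example, to grab every element (a sketch, assuming the dls from the question):

raw_items = [dls.valid_ds[i] for i in range(len(dls.valid_ds))]  # the whole valid set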


Note though, it’s not a tensor at that point yet, it’s still a PILImage, so that doesn’t work, as the item and batch transforms have not been applied :slight_smile:

You can also shave off a second or two by doing:

with dls.valid.fake_l.no_multiproc():
    out = next(iter(dls.valid))

My understanding is that the question was about the raw valid data before batching :slightly_smiling_face:.

dls.valid_ds[idx] returns a TensorImage; the item_tfms pipeline is already applied at that stage, but not the batch-related ones (the before_batch and after_batch pipelines).

You can check out 10_tutorial.pets.ipynb and run dls.valid_ds[0]; it will output the following:


(TensorImage([[[237, 233, 217,  ..., 122, 139, 149],
          [205, 202, 197,  ..., 141, 147, 148],
          [192, 192, 189,  ..., 146, 148, 175],
          ...,
          [110, 114, 111,  ...,  64,  60,  52],
          [104, 111, 115,  ...,  71,  58,  63],
          [115, 124, 121,  ...,  70,  56,  61]],
 
         [[236, 232, 216,  ..., 111, 126, 136],
          [196, 189, 186,  ..., 112, 124, 128],
          [183, 183, 181,  ..., 121, 128, 155],
          ...,
          [104, 107, 107,  ...,  70,  68,  72],
          [100, 108, 112,  ...,  77,  69,  72],
          [109, 120, 119,  ...,  78,  61,  65]],
 
         [[211, 205, 181,  ...,  88, 110, 120],
          [148, 143, 134,  ..., 101, 111, 114],
          [145, 140, 139,  ..., 115, 114, 144],
          ...,
          [107, 122, 130,  ...,  55,  61,  64],
          [100, 115, 117,  ...,  70,  59,  52],
          [ 89, 101,  99,  ...,  66,  51,  57]]], dtype=torch.uint8), 27)

You might also check dls.valid_ds.tfms; it outputs:

Pipeline: PetTfm -> FlipItem -> Resize -> ToTensor

This isn’t the behavior I’m seeing right now. If I do the following:

pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(),
                 get_y=RegexLabeller(pat = r'/([^/]+)_\d+.*'),
                 item_tfms=item_tfms,
                 batch_tfms=batch_tfms)
dls = pets.dataloaders(path_im)

With item_tfms being RandomResizedCrop, dls.valid_ds[0] gives me:
(PILImage mode=RGB size=500x350, TensorCategory(32)), as I would expect, because it’s the dataset, and a Datasets object doesn’t have any item or batch transforms applied, just the type transforms.

Which notebook does this example come from, and what is the item_tfms?

Thanks

Just the course pets notebook. Item and batch are:

batch_tfms = [*aug_transforms(size=224, max_warp=0), Normalize.from_stats(*imagenet_stats)]
item_tfms = RandomResizedCrop(460, min_scale=0.75, ratio=(1.,1.))

Otherwise, if that were the case, everything would be preprocessed rather than done on the fly, which is how the library is built to work.

Also, this timing lines up (more or less) with doing all the transforms manually via fake_l.no_multiproc().
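
A rough way to check that (a sketch, not a careful benchmark; assumes the dls from above):

import time

t0 = time.perf_counter()
_ = next(iter(dls.valid))              # default: multiprocessing workers spin up
t1 = time.perf_counter()
with dls.valid.fake_l.no_multiproc():
    _ = next(iter(dls.valid))          # workers disabled, batch built in-process
t2 = time.perf_counter()
print(f'default: {t1 - t0:.2f}s  no_multiproc: {t2 - t1:.2f}s')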

The difference comes from the fact that in my example (10_tutorial.pets.ipynb) the DataLoaders object is built from a Datasets object, while in your example (05_pet_breeds.ipynb) it is built from a DataBlock.

In my example, dls.valid_ds.tfms[0] (the input pipeline) returns a tensor because it is the following pipeline:

Pipeline: PetTfm -> FlipItem -> Resize -> ToTensor

and dls.after_item is equal to None

In your example, dls.valid_ds.tfms[0] returns a PILImage because it is the following pipeline:

Pipeline: PILBase.create

and dls.after_item is equal to Pipeline: Resize -> ToTensor.

Therefore, depending on when ToTensor is applied, you might get either a PILImage or a Tensor object.
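
To make that concrete, here is a minimal sketch assuming the PETS data and the RegexLabeller pattern used above (the names dsets_a and dsets_b are just for illustration):

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
items = get_image_files(path)
labeller = RegexLabeller(pat=r'/([^/]+)_\d+.*')
splits = RandomSplitter()(items)

# Datasets-style (as in 10_tutorial.pets.ipynb): ToTensor sits inside the
# tfms pipeline, so indexing the dataset already yields a TensorImage
dsets_a = Datasets(items, tfms=[[PILImage.create, Resize(128), ToTensor()],
                                [labeller, Categorize()]], splits=splits)
print(type(dsets_a.valid[0][0]))   # -> TensorImage

# DataBlock-style (as in 05_pet_breeds.ipynb): only the type transform is in
# tfms; ToTensor is deferred to after_item, so indexing yields a PILImage
dsets_b = Datasets(items, tfms=[[PILImage.create], [labeller, Categorize()]],
                   splits=splits)
print(type(dsets_b.valid[0][0]))   # -> PILImage
dls_b = dsets_b.dataloaders(after_item=[Resize(128), ToTensor()], bs=8)
# ToTensor now runs per item inside the DataLoader instead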

For those interested in the roots of that difference, you might check out how the Datasets object is built:

In the Datasets, it is created this way (source code):

def _new(self, items, *args, **kwargs): return super()._new(items, tfms=self.tfms, do_setup=False, **kwargs)

Therefore, if ToTensor is part of the tfms, the output of a dataset element is a Tensor.
On the other hand, if we wait and pass the ToTensor transform to the after_item argument of the dataloaders() method, the output of a dataset element has the type produced by the last transform of the tfms pipeline.

In the DataBlock, it is created this way (source code):

return Datasets(items, tfms=self._combine_type_tfms(), splits=splits, dl_type=self.dl_type, n_inp=self.n_inp, verbose=verbose)

You might notice that tfms only gets the type transforms (more precisely, self._combine_type_tfms()), so there is no ToTensor transform yet. The latter is injected during the DataLoaders creation:

return dsets.dataloaders(path=path, after_item=self.item_tfms, after_batch=self.batch_tfms, **kwargs)

ToTensor is hidden in the after_item=self.item_tfms argument: self.item_tfms pulls the ToTensor transform out of the TransformBlock, which has self.item_tfms = ToTensor + L(item_tfms).
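
You can verify this in a notebook with the attribute access used earlier in the thread (the exact pipelines depend on your item_tfms and batch_tfms; dls here is the DataBlock-built one from above):

print(dls.valid_ds.tfms)      # type transforms only, e.g. PILBase.create
print(dls.valid.after_item)   # e.g. Pipeline: RandomResizedCrop -> ToTensor
print(dls.valid.after_batch)  # e.g. the aug_transforms + Normalize batch_tfms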


This was starting to hijack the thread a little too much, so I moved it to its own topic.


I agree. The latter (your example) would apply if it was built that specific way, whereas the former (my example) applies when it’s built with the high-level API, as ImageBlock chucks ToTensor on as an item transform, and there are three different pipelines to consider as well. :slight_smile: Also, he wants the batch transforms applied too, and your method just gets through the item_tfms, I believe? :slight_smile: (though I could be wrong!)

Yes, I needed preprocessed input data, something you could feed directly to learn.model.

I’ve been doing it via:

with dls.valid.fake_l.no_multiproc():
    out = next(iter(dls.valid))

for my scripts (though I also played around with building the pipeline manually, but that’s much more overhead :wink:).
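
And since the goal was something you can feed straight to learn.model, a short continuation (assumes a trained learn; eval mode and no_grad are just standard PyTorch inference hygiene):

import torch

with dls.valid.fake_l.no_multiproc():
    xb, yb = next(iter(dls.valid))   # one fully preprocessed batch

learn.model.eval()
with torch.no_grad():
    preds = learn.model(xb)          # raw model outputs for that batch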


I do this using a custom callback. I’m doing this because I have a mini-library that does some “advanced” metrics for multi-label stuff and, the way I wrote it originally, it takes a list of predictions and a list of labels, so I need to have all the predictions as they come out of the model… does this sound like what you are trying to do?


I was actually trying to do inference for object detection in a simpler way; I will try the callback approach. I was hacking my way along to the point where it wasn’t simpler than the earlier approach, so I figured that if I could get the raw inputs, I could do something with them. I also found nothing that looks like the “fastai way” for object detection, something like an object_detection_learner or similar…

In case it helps, the key step in my callback is this:

def after_batch(self, **kwargs) -> None:
    if not self.learn.training:
        if len(self.yb) > 0:
            # here your predictions are: self.pred
            # here your labels are: self.yb[0]
            # you may want to get "just the numbers", using self.pred.detach().cpu().numpy()
            preds, labels = self.pred, self.yb[0]  # grab them here; storing them is up to your callback

I added the if not self.learn.training check so that this only runs during validation. I forgot why I added the other condition… but there was an error at some point otherwise.
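
For completeness, here’s roughly what the full callback could look like (the class and attribute names are made up; only the after_batch logic comes from my snippet above):

from fastai.vision.all import *

class GatherPreds(Callback):
    "Collect raw model outputs and labels during validation."
    def before_validate(self): self.preds, self.targs = [], []
    def after_batch(self):
        if not self.learn.training and len(self.yb) > 0:
            self.preds.append(self.pred.detach().cpu())
            self.targs.append(self.yb[0].detach().cpu())

cb = GatherPreds()
learn.validate(cbs=cb)   # afterwards cb.preds / cb.targs hold every batch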


Can we just do something like this?

for o in dls.valid:
    somefn(o)  # each o is a fully transformed (x, y) batch tuple