Fastai v2 chat

That was a mistake made when we split the notebook into several parts. Fixed now!

2 Likes

You might also need the master version of nbdev since this is a bug I fixed recently.

1 Like

I ran into an error in notebook 43_tabular.learner when running on Google Colab.

learn.predict(df.iloc[0])
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-19-7bebb4d34d49> in <module>()
----> 1 learn.predict(df.iloc[0])

13 frames

/usr/local/lib/python3.6/dist-packages/fastai2/torch_core.py in tensor(x, *rest, **kwargs)
    110            else torch.tensor(x, **kwargs) if isinstance(x, (tuple,list))
    111            else _array2tensor(x) if isinstance(x, ndarray)
--> 112            else as_tensor(x.values, **kwargs) if isinstance(x, (pd.Series, pd.DataFrame))
    113            else as_tensor(x, **kwargs) if hasattr(x, '__array__') or is_iter(x)
    114            else _array2tensor(array(x), **kwargs))

TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool

I installed fastai2 with:

import os
!pip install git+https://github.com/fastai/fastai2
os._exit(00)

@dhoa (as it says in the FAQ), you should be using the git version of fastcore too. Try that and see if you still get the error

Are random transforms being applied to the validation set?

I noticed this when running learn.get_preds: I got slightly different results each time.
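
Roughly what I'm doing (simplified):

preds1, _ = learn.get_preds()
preds2, _ = learn.get_preds()
print((preds1 - preds2).abs().max())  # I'd expect ~0 here if no random transforms run on the validation set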

Fastbook notebook 7 has this error when using TTA in Colab:

https://colab.research.google.com/drive/19ZOGPzgn3afmCoO-dWuj_C59UnV4xOHz
[screenshot of the error]

@lgvaz how are you declaring the transforms? (And the block?)

No magic: aug_transforms as batch_tfms and GrandparentSplitter as the splitter in the DataBlock.
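
Trimmed down, it looks roughly like this (paths and labels come from my own dataset, so treat it as a sketch):

from fastai2.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=GrandparentSplitter(train_name='train', valid_name='valid'),
    get_y=parent_label,
    batch_tfms=aug_transforms())
dls = dblock.dataloaders(path, bs=64)  # `path` points at the dataset root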

1 Like

I still get the error. I first tried:

!pip install git+https://github.com/fastai/fastcore
!pip install git+https://github.com/fastai/fastai2

I also tried the first cell from your walk-through:

!pip install -q feather-format kornia pyarrow wandb nbdev fastprogress fastai2 fastcore --upgrade 
!pip install torch==1.3.1

It doesn’t work either.

Hi, if you want, you can try my notebook here. It works.

Thanks @JonathanSum :smiley: But I think you forgot to attach your notebook

https://colab.research.google.com/github/JonathanSum/Fastbook_colab/blob/master/01_intro.ipynb

2 Likes

Thanks @JonathanSum. However, I ran into a problem with the tabular notebook, which I described in this post: Fastai v2 chat. I tried installing fastai2 both from the git repo and from pip; neither works. I will take a closer look at your notebook, but I don't think it differs from mine as far as the installation goes.

Quick question: what is the status of multifit for v2? Is it possible to make it work in v2 right now even if the official port is not ready yet?

How can I unregister a function in TypeDispatch?

Example:

I created a new tensor type: class TensorImageX(TensorImage): pass

I now want to inherit from Normalize and create a class that only normalizes TensorImageX but not TensorImage.
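
Something like this is what I'm after (just a sketch of the idea; NormalizeX is a made-up name, and the TensorImage pass-through is the part I don't know how to get right):

class TensorImageX(TensorImage): pass

class NormalizeX(Normalize):
    # normalize only the new type...
    def encodes(self, x:TensorImageX): return (x - self.mean) / self.std
    def decodes(self, x:TensorImageX): return x * self.std + self.mean
    # ...and leave plain TensorImage untouched, instead of inheriting the parent's encodes for it
    def encodes(self, x:TensorImage): return x
    def decodes(self, x:TensorImage): return x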

@Pablo No one has ported it to v2 AFAIK, but all the underlying tools are here.

@lgvaz Have a look at the doc, it’s all explained here.

2 Likes

It’s on my todo list :wink:

1 Like

What’s the best way to create a dataset of tensor inputs and float labels? This feels like it should be very obvious but I haven’t been able to get it working. Here’s what I mean:


This is the sort of thing I’ve been experimenting with to get it working. Any pointers would be much appreciated! I think the toughest thing for me to understand at the moment is what to do with transforms when I don’t really have any, except to separate the xs from the ys.

data = list(zip(bitboards, labels))

def get_x(d): return d[0]
def get_y(d): return d[1]

splits = RandomSplitter()(data)
tfms = [[get_x], [get_y, Categorize()]]
dsets = Datasets(data, tfms=tfms, splits=splits)

dls = dsets.dataloaders(bs=4, device='cpu', num_workers=0)
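
Since the labels are floats rather than categories, I'm guessing something like RegressionSetup is what should replace Categorize here (just a guess):

tfms = [[get_x], [get_y, RegressionSetup()]]  # floatify the targets instead of categorizing them
dsets = Datasets(data, tfms=tfms, splits=splits)
dls = dsets.dataloaders(bs=4, device='cpu', num_workers=0)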

Do keep us posted!

For the time being I think I will try Multifit on V1 with a smaller dataset that we have, to evaluate how promising it is in our case :slight_smile:

Hi,
is there a way to rename the first and last layers of my model, or do I have to add a custom layer for that purpose?
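
For context, the only plain-PyTorch approach I know of is rebuilding the Sequential with an OrderedDict so each layer gets a name (just a sketch with a toy model, nothing fastai-specific):

import torch.nn as nn
from collections import OrderedDict

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))  # toy model
renamed = nn.Sequential(OrderedDict([
    ('input_layer',  model[0]),
    ('act',          model[1]),
    ('output_layer', model[2]),
]))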