I am trying to use the Migrating notebook from fastai2 to run some PyTorch code in fastai2.
Everything seems to work until I try fit_one_cycle, when I get an error about pbar. Disabling it does not seem to solve the issue. Here is the code with the error.
Any ideas how to solve the problem? Thanks
P.S.: Since this is only tangentially related, should I open a new post?
I was looking today at how to run models with multiple data types (e.g., in my case, tabular and image). I finally managed to find this awesome blog post. It is for fastai v1, but I will be exploring whether it also works for v2, or could be easily adapted, and will update you all. I am sharing the link since I believe it could be of interest to you at some point in your career!
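As a rough illustration of the core idea behind such mixed models (combining two input types by concatenating their feature vectors before a shared head), here is a framework-free sketch with made-up dimensions; a real v1/v2 implementation would use PyTorch modules:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend outputs of the two branches: a CNN embedding for the image
# and a small MLP embedding for the tabular row (sizes are made up).
image_features = rng.standard_normal(512)    # e.g. pooled CNN features
tabular_features = rng.standard_normal(32)   # e.g. tabular MLP output

# The "mixed" model simply concatenates the two embeddings...
combined = np.concatenate([image_features, tabular_features])

# ...and feeds the result through a final linear head.
head_weights = rng.standard_normal((2, combined.size))  # 2 output classes
logits = head_weights @ combined

print(combined.shape, logits.shape)  # (544,) (2,)
```

The same pattern carries over to PyTorch: run each branch, `torch.cat` the activations, then apply a final head.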
A small suggestion @muellerzr: since you mute some parts of the video during the live stream, viewers who watch the video afterwards have to skip them manually. I found this project https://github.com/carykh/jumpcutter which will speed up the segments with no audio, saving both your time and the viewers'.
I tested it on one of your videos with the default parameters, and it cut a 1h 15min video down to 48min. That saves 27 minutes!
I suggest you upload a copy of these edited videos for after-stream viewers.
Sorry. I made too many other comments, what-ifs, and other unrelated changes in the nb (so that I could better understand how to work with various concepts), so it's probably best to leave your nb alone.
On top of Gloria's mods to support Stratified K-Fold that are already incorporated in the nb,
the only two other changes are:
Use the defined skf instead of kf in the loop:
for _, val_idx in skf.split(np.array(train_imgs[:7220]), train_labels):
As shown on the line above, pass np.array(train_imgs[:7220]) instead of np.array(train_imgs) as the first parameter to skf.split.
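For context, here is a minimal, self-contained sketch of the stratified split that loop relies on; `train_imgs` and `train_labels` below are placeholder data standing in for the notebook's real variables:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Placeholder data standing in for the notebook's train_imgs / train_labels.
train_imgs = np.array([f"img_{i}.jpg" for i in range(20)])
train_labels = np.array([i % 2 for i in range(20)])  # two balanced classes

# StratifiedKFold preserves the class ratio in every validation fold,
# unlike plain KFold, which ignores the labels entirely.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for _, val_idx in skf.split(train_imgs, train_labels):
    fold_labels = train_labels[val_idx]
    print(val_idx, fold_labels.mean())  # each fold keeps the 50/50 balance
```

This is why the first argument to `skf.split` must line up with `train_labels`: both arrays are indexed together when building the folds.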
1 frames
/usr/local/lib/python3.6/dist-packages/fastai2/data/block.py in _apply_pipeline(p, x)
109 print(f" {p}\n starting from\n {_short_repr(x)}")
110 for f in p.fs:
--> 111 name = f.name
112 try:
113 x = f(x)
AttributeError: 'Transform' object has no attribute 'name'
I'm unfamiliar with the audio library right now, and I know they're working on adjusting a number of things for v2; you'd want to look specifically at the transforms. It also looks like you're not including them in your DataBlock call: pass in your item_tfms and batch_tfms there.
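As a sketch of where those transforms go, here is a vision-flavored DataBlock (the blocks, getters, and transforms are placeholders since I don't know the audio API; `path` is assumed to point at your data):

```python
from fastai2.vision.all import *

# Hypothetical DataBlock illustrating where the transforms belong;
# swap the vision blocks/transforms for their audio equivalents.
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224),        # applied per item, on the CPU
    batch_tfms=aug_transforms(),  # applied per batch, on the GPU
)
dls = dblock.dataloaders(path)
```

Passing the transforms here (rather than applying them separately) lets the DataBlock compose them into its pipelines for you.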
This week was great again! I loved the insights on how to debug the code, and also the Imagenette leaderboard with working code examples!
Please correct me if I am wrong: these leaderboard code examples take models available in vision.models and play with the parameters. So the architecture of resnet50 is not the same as xresnet50, is it? Is xresnet an improved version?
Is this the complete list of models? I believe we can also use any of these (coming from PyTorch) without doing anything special.