Part 1 (2020) - Weekly Beginner Only Review and Q&A


I'll be holding a weekly Zoom meeting starting next week, for the duration of the course, for those new to fast.ai and/or PyTorch and/or deep learning in general. Ask whatever you want as long as it's related to a past lecture, and let those of us who have been around here for a few years try to help. In all likelihood, we’ve asked and struggled with the very things you are … so don't be afraid to ask.

Resources

  • Dates : March 26 - May 7
  • Schedule : Thursdays, 6:30-7:30pm PST

Recordings


Game Plan

We’ll be doing these zoom sessions two days after every class with the goal of answering any questions or clearing up any confusion related to any past lecture.

In order to make the best use of our time, I would ask those interested in participating to reply to this thread with what they want to discuss after each lecture (and if you see something a fellow classmate has posted that you’re interested in, to like it). Each week, we’ll prioritize the most burning questions while also allowing for ad-hoc Q&A as time permits. We’ll see how this works week to week and adjust accordingly.

Hope to meet and get to know some of the folks new to the fast.ai community!

21 Likes

@jeremy (sorry for the @ mention), but can you wikify and optionally make sticky so folks can find this?

Wikified :slight_smile: I’ll mention it at start of class - remind me if I forget. (You can at-mention @rachel)

2 Likes

I would love to volunteer to help @wgpubs, Thank you for starting this!

I have a Zoom account that I use for my podcast; I could set up links and auto-record for the sessions, if that's useful :slight_smile:

4 Likes

@wgpubs - Would it be possible for you to record these sessions? Really keen to listen (and sure I'll be contributing questions too), but timewise the session is 3am-ish for me, so a rough one to attend!

4 Likes

@wgpubs would love to be a part of this too!

Keen to volunteer and help if required :slight_smile:

I’ll try.

They will be over Zoom, so if anyone knows how that is done, let me know.

Thanks much!

Yah I got some ideas … want to see how the first week goes and gauge how many folks will be there. Will get you and others in there over the next few weeks. Getting help from the perspectives of diverse personalities helps everyone.

If you’re the meeting host, there is an option to record. It’s located along the bottom bar (far right hand side).

1 Like

Found it!

It shall be done then. Not sure where it goes if I choose to record to the cloud, so I may record locally and upload to YouTube??? Thoughts?

@wgpubs happy to help - DMing you! :slight_smile:

This is exactly what I’ve been doing :slight_smile: Just make sure the video is unlisted

Thanks for the great initiative to help. If time permits, it would be nice if you could discuss what kinds of projects would help as the course progresses. I was thinking of doing a mini project for each area, like CV, NLP, and so on; that way I can get hands-on experience in each of the major topics rather than going deep into one area.

1 Like

Dear all,

Thank you very much for this initiative! I look forward to meeting you!

Here are a couple of points I would be interested to discuss based on the small projects I’ve been working on during the course.

I’ve been working on reproducing this publication. The objective is to predict Canadian tree species based on the bark, either using one crop of the bark, or all the crops or all the pictures of a tree.

So far, it looks like I manage to get higher accuracy for single-crop prediction, but I need to let the 5-fold cross-validation finish to be sure of it – basing the splits on the trees rather than the images (there are several images per tree) was a nice way to get familiar with the DataBlock. I will hopefully publish the result in the Share your V2 projects section this weekend.

So, based on that learning, I would propose the following points. I guess some will be discussed later in more detail though:

  • I think there is a lot to say about data augmentation: it can be a bit confusing what the different options really do, and often it did not achieve the gains I would have expected.

    • Which augmentations apply to the training data vs. the validation data?
    • What are the practical differences between applying augmentation at the item level vs. the batch level?
    • How many “variations” are being created?
    • Can we visualize the full range of the different variations?
    • In this project, one way to improve accuracy is to take different crops of only the tree trunk; is there a way to do that with data augmentation?
  • What happened to the learning rate finder? It was a main focus of the previous courses; I'm a bit surprised that we haven't talked about it yet.

Cheers,

Yann
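Regarding the learning rate finder question above: the idea can be sketched in a few lines of plain Python. This is a conceptual toy, not fastai's implementation (the quadratic loss and all the numbers are made up for illustration): train briefly while growing the learning rate exponentially, record the loss at each step, and pick a rate somewhat before the loss blows up.

```python
# Toy learning-rate finder: sweep the LR exponentially while taking SGD steps,
# recording the loss at each step. The curve typically drops while the LR is
# still small, then explodes once it gets too large.
def lr_find(grad_and_loss, w=0.0, lr_start=1e-4, lr_end=100.0, steps=50):
    lrs, losses = [], []
    mult = (lr_end / lr_start) ** (1 / steps)  # exponential growth per step
    lr = lr_start
    for _ in range(steps):
        g, loss = grad_and_loss(w)
        lrs.append(lr)
        losses.append(loss)
        w -= lr * g  # one plain SGD step at the current LR
        lr *= mult
    return lrs, losses

# Toy quadratic loss (w - 3)^2, whose gradient is 2(w - 3)
lrs, losses = lr_find(lambda w: (2 * (w - 3), (w - 3) ** 2))
```

The loss first improves at moderate learning rates, then diverges at large ones; the "good" LR range sits just before the divergence, which is what fastai's finder plots for you.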

How can we use a CNN for regression? How should we formulate the input? Would it be images paired with a continuous prediction label (salary, for example), or something else?

@Dina I have an example here:

2 Likes

Thank you Zach, I will try it out
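For anyone else wondering about the regression question above, the usual formulation: same images in, but the model's head outputs a single continuous number per image, and training minimizes a regression loss such as mean squared error instead of cross-entropy. A framework-free toy of that loss (the salary figures are made up):

```python
def mse(preds, targets):
    # Mean squared error: the standard loss for continuous targets
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# e.g. predicted vs. actual salaries (in $1000s) for three images
preds   = [52.0, 88.0, 40.0]
targets = [50.0, 90.0, 45.0]
print(mse(preds, targets))  # -> 11.0, i.e. (4 + 4 + 25) / 3
```

In fastai2 terms, this typically means pairing `ImageBlock` with `RegressionBlock` in your `DataBlock`, so the label is read as a float rather than a category.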

I see an error like this on Paperspace while running dls.valid.show_batch(max_n=4, rows=1)
Any thoughts? Looks like an error with matplotlib

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-12-d24c61455493> in <module>
----> 1 dls.valid.show_batch(max_n=4, rows=1)

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai2/data/core.py in show_batch(self, b, max_n, ctxs, show, **kwargs)
     90         if b is None: b = self.one_batch()
     91         if not show: return self._pre_show_batch(b, max_n=max_n)
---> 92         show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)
     93 
     94     def show_results(self, b, out, max_n=9, ctxs=None, show=True, **kwargs):

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
     96         if not f: return args[0]
     97         if self.inst is not None: f = MethodType(f, self.inst)
---> 98         return f(*args, **kwargs)
     99 
    100     def __get__(self, inst, owner):

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai2/vision/data.py in show_batch(x, y, samples, ctxs, max_n, nrows, ncols, figsize, **kwargs)
     43 def show_batch(x:TensorImage, y, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
     44     if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, figsize=figsize)
---> 45     ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)
     46     return ctxs
     47 

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai2/data/core.py in show_batch(x, y, samples, ctxs, max_n, **kwargs)
     13     if ctxs is None: ctxs = Inf.nones
     14     for i in range_of(samples[0]):
---> 15         ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
     16     return ctxs
     17 

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai2/data/core.py in <listcomp>(.0)
     13     if ctxs is None: ctxs = Inf.nones
     14     for i in range_of(samples[0]):
---> 15         ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
     16     return ctxs
     17 

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai2/torch_core.py in show(self, ctx, **kwargs)
    296     _show_args = ArrayImageBase._show_args
    297     def show(self, ctx=None, **kwargs):
--> 298         return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
    299 
    300 # Cell

/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai2/torch_core.py in show_image(im, ax, figsize, title, ctx, **kwargs)
     52     if figsize is None: figsize = (_fig_bounds(im.shape[0]), _fig_bounds(im.shape[1]))
     53     if ax is None: _,ax = plt.subplots(figsize=figsize)
---> 54     ax.imshow(im, **kwargs)
     55     if title is not None: ax.set_title(title)
     56     ax.axis('off')

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
   1541     def inner(ax, *args, data=None, **kwargs):
   1542         if data is None:
-> 1543             return func(ax, *map(sanitize_sequence, args), **kwargs)
   1544 
   1545         bound = new_sig.bind(ax, *args, **kwargs)

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
    356                 f"%(removal)s.  If any parameter follows {name!r}, they "
    357                 f"should be pass as keyword, not positionally.")
--> 358         return func(*args, **kwargs)
    359 
    360     return wrapper

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
    356                 f"%(removal)s.  If any parameter follows {name!r}, they "
    357                 f"should be pass as keyword, not positionally.")
--> 358         return func(*args, **kwargs)
    359 
    360     return wrapper

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/axes/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
   5611         im = mimage.AxesImage(self, cmap, norm, interpolation, origin, extent,
   5612                               filternorm=filternorm, filterrad=filterrad,
-> 5613                               resample=resample, **kwargs)
   5614 
   5615         im.set_data(X)

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/image.py in __init__(self, ax, cmap, norm, interpolation, origin, extent, filternorm, filterrad, resample, **kwargs)
    897             filterrad=filterrad,
    898             resample=resample,
--> 899             **kwargs
    900         )
    901 

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/image.py in __init__(self, ax, cmap, norm, interpolation, origin, filternorm, filterrad, resample, **kwargs)
    259         self._imcache = None
    260 
--> 261         self.update(kwargs)
    262 
    263     def __getstate__(self):

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/artist.py in update(self, props)
   1004 
   1005         with cbook._setattr_cm(self, eventson=False):
-> 1006             ret = [_update_property(self, k, v) for k, v in props.items()]
   1007 
   1008         if len(ret):

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/artist.py in <listcomp>(.0)
   1004 
   1005         with cbook._setattr_cm(self, eventson=False):
-> 1006             ret = [_update_property(self, k, v) for k, v in props.items()]
   1007 
   1008         if len(ret):

/opt/conda/envs/fastai/lib/python3.7/site-packages/matplotlib/artist.py in _update_property(self, k, v)
   1000                 if not callable(func):
   1001                     raise AttributeError('{!r} object has no property {!r}'
-> 1002                                          .format(type(self).__name__, k))
   1003                 return func(v)
   1004 

AttributeError: 'AxesImage' object has no property 'rows'
1 Like

Thanks, folks, for the suggestions. This was solved by doing a git pull of the course materials.
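For reference on the traceback above: `show_batch` doesn't recognize `rows` (per the signature visible in the traceback, the layout argument is `nrows`), so the stray keyword falls through `**kwargs` all the way to matplotlib's `imshow`, which rejects it. A minimal framework-free sketch of that failure mode (the class and function names here are made up to mimic the call chain):

```python
class AxesImageLike:
    # Mimics matplotlib's Artist.update(): unknown keyword "properties"
    # raise AttributeError, just like at the bottom of the traceback above.
    def update(self, props):
        for k, v in props.items():
            setter = getattr(self, f"set_{k}", None)
            if not callable(setter):
                raise AttributeError(
                    f"{type(self).__name__!r} object has no property {k!r}")
            setter(v)

def show_batch_like(max_n=9, nrows=None, **kwargs):
    # Anything not named in the signature (e.g. a mistyped `rows`)
    # is silently forwarded down the call chain.
    AxesImageLike().update(kwargs)

try:
    show_batch_like(max_n=4, rows=1)  # `rows` is not a real parameter
except AttributeError as e:
    print(e)  # ... object has no property 'rows'
```

So besides pulling the latest course materials, `dls.valid.show_batch(max_n=4, nrows=1)` should avoid the error.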

Thanks everyone!

If you have any feedback on how to make these sessions better, or if you are in a different timezone and are interested in having a session at a time more friendly than mine, let me know.

Will post video at the top of this thread in the next day or two.

-wg

3 Likes