General course chat

This topic is for anyone to chat about anything they want, as long as it is at least somewhat related to the course! (It’s fine if you drift off topic a bit though.)

fast.ai folks probably won’t be following this thread closely however, so if you want to ensure that your questions get answered, post them in a relevant topic.

4 Likes

slowly lifts the lid of a crypt and sits up “Good to see you again, old friends” :joy:

It’s been a while! But I just saw the likes on Jeremy’s posts, and the couple of posts people already contributed, and wow, the feelings and excitement from a while back are here again!

May the festival of learning begin yet again! :partying_face: So looking forward to this!

30 Likes

Great to see you here too, @radek! I’m really glad to be joining this journey again as well.

Since there’s around a week remaining before the brand new course gets going, I have a quick question for everybody reading this thread.

What has worked best for you to get the most out of this course?

Let’s help each other out with tips and tricks that might have worked well for you. :raised_hands:

3 Likes

Practice :slight_smile:

Also, I think this was a good approach.

Maybe consider reading this book if you’d like to convince yourself how important consistency is from a neurological perspective. Things become easy not so much through the intensity with which we approach them, but rather through spending time with them over a considerable period.

Plus, the only way to arrive at mind-blowing outcomes (something that everyone is capable of achieving) is through perseverance.

9 Likes

I agree with @radek. The key to learning is deliberate practice and perseverance. Reading a book or watching a lecture is good, but what you learned won’t stick unless you try to implement/use it. Practice will also uncover parts of the material that you thought you understood but hadn’t.

Some things that are working for me are:

  • Competing on Kaggle
  • Creating personal projects and sharing them on GitHub
  • Creating (and refining) an Anki deck
  • Sketching ideas/taking notes in a notebook that I can revisit/improve over time

I try to do a bit of this every day, and slowly but steadily I’ve noticed I have improved a lot over time.

9 Likes

Working on a project always helps me.
For example, working on a Kaggle competition forced me to:

  • read papers
  • understand old solutions
  • optimize code

Another example: I had wanted to learn web development for years.
But when I started building my startup it became a necessity, and I picked it up really fast.

6 Likes

Thanks a lot for sharing these tips. Looking forward to going through the new course and focusing more on implementing projects this time around. :raised_hands:

3 Likes

My advice is to find a way to have fun with it. It’s tempting to plan the most efficient route to learn and start trying to check off boxes. Rather than looking for the most efficient way to apply the knowledge, I prefer to pick my projects based on what I am interested in, passionate about, or think will be fun.

If you are really blown away by a particular technique and want to know exactly how it works, dive into that! Or maybe you like motorcycles - figure out a project that relates to that. Or whatever your interests are. I try to pick projects that I am interested in and will enjoy as my primary selection criterion; the skills and learning that come with them are a secondary goal that follows naturally.

I agree that the #1 thing in learning is sticking with it and persevering, like others have said. The #1 thing that helps me persevere is doing things related to what I am interested in and having fun with it.

5 Likes

Check out this book, I really loved how the author explains what it takes to become good at something.

5 Likes

Hi @jeremy,

I already asked this on Discord but got no response, so I hope it’s okay to ask it here again.

Would it still be ok to do a group watch/stream event? This is for an internal fastai study group I’m planning for my co-workers, just like what was done for the TWIML study group the last time we held live sessions. I was thinking of using the Metastream Chrome extension (Metastream Remote - Chrome Web Store) for a limited number of participants (BTW this is what we use for the cluster-of-stars study group when we view YT videos).

1 Like

We’ve allowed this once before for TWIML, since they’ve been great supporters of fast.ai. Best to ask me directly over PM if you’re interested in something similar for another group.

2 Likes

Hey folks,
I’m thrilled to create the first technical topic of 2022. :sweat_smile:

I’ve been working on a new type of Tensor (I called it TensorRawImage) that can read RAW images.

TL;DR: I’ve seen that the ToTensor transformation has order=5, and RandomResizedCrop has order=1. So the order of transforms goes like this: RandomResizedCrop → ToTensor. Can I swap their order? Or at least set RandomResizedCrop’s order to 6?

Why do I ask that: because fastai’s RandomResizedCrop supports only Image.Image objects (or TensorPoint, TensorBBox), so I will need to recreate RandomResizedCrop to support TensorRawImage (based on rawpy objects).

Until I complete that, I will use RandomResizedCrop from torchvision.transforms, which supports image Tensor objects, just to check that the whole pipeline stays on track.
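
To make the question concrete, here is a minimal sketch of the mechanism as I understand it (assuming fastai sorts the transforms in a Pipeline by their order attribute):

    from fastai.vision.all import *

    # A minimal sketch, assuming Pipelines sort transforms by `order`:
    # a subclass with order=6 would run after ToTensor (order=5).
    # Its encodes would then receive tensors instead of Image.Image
    # objects, so the crop itself would still need to be rewritten.
    class TensorRandomResizedCrop(RandomResizedCrop):
        order = 6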

My question here was about transformations, but a quick background so things make sense:

Why is it necessary to create TensorRawImage?
TensorImage accepts image files as Image.Image objects, which can store pixel values between 0 and 255 (8-bit depth). That’s cool, but rawpy objects can store pixel values between 0 and 65535 (16-bit depth), and hence a TensorRawImage would carry far more data for the training.
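
For illustration, a hedged sketch of reading a RAW file into a 16-bit array with rawpy (the path is a placeholder):

    import rawpy

    # Hedged sketch: rawpy demosaics to a uint16 RGB ndarray when asked
    # for 16-bit output, preserving the 0-65535 range a JPG decode loses.
    with rawpy.imread("photo.dng") as raw:  # placeholder path
        rgb16 = raw.postprocess(output_bps=16)
    print(rgb16.dtype, rgb16.shape)  # uint16, (H, W, 3)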

Thanks!

1 Like

Just found out that there is a RandomResizedCropGPU function that does exactly that… :wink:
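
For anyone searching later, a minimal usage sketch (a standard image DataBlock setup is assumed): as a batch transform, RandomResizedCropGPU operates on tensors on the GPU, i.e. after ToTensor has already run.

    from fastai.vision.all import *

    # Minimal sketch with a standard DataBlock setup; the batch_tfms
    # line is the point of interest here.
    dblock = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        get_y=parent_label,
        item_tfms=Resize(460),
        batch_tfms=RandomResizedCropGPU(224),
    )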

Hey guys,

Is there a reason it’s so slow?

Normally, epochs run about 7-16 seconds each, but here each single epoch takes at least 16-18 minutes.


The only reason that I can think of is the size of the image files.

Normally the files (JPG) weigh 800-950KB; now the files (RAW) weigh about 15-20MB.

But even then: either JPG or RAW, they will be opened and loaded into NumPy arrays (ndarray), with uint8 and uint16 dtypes respectively.
The arrays are then resized to 450x450 pixels.
Then they are converted into tensors (of type float32).
So from here onwards, the learner doesn’t care what type of file they originally came from.
So why does each epoch take so much longer? And how can I make it quicker, or debug it?
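
One way I can think of to start debugging (a sketch with placeholder paths) is to time a single decode of each format; if the RAW decode dominates, the per-epoch file loading rather than the model is the likely bottleneck:

    import time
    import numpy as np
    import rawpy
    from PIL import Image

    # Sketch with placeholder paths: compare one JPG decode
    # against one RAW decode.
    t0 = time.perf_counter()
    jpg = np.asarray(Image.open("sample.jpg"))
    t1 = time.perf_counter()
    with rawpy.imread("sample.dng") as raw:
        rgb16 = raw.postprocess(output_bps=16)
    t2 = time.perf_counter()
    print(f"JPG decode: {t1 - t0:.3f}s, RAW decode: {t2 - t1:.3f}s")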

Is it possible to feed the DataBlock class a list of already-opened images, instead of a list of image paths?

(So instead of a list of paths, it would get a list of items where every item is the PILImage/RAWImage object of the already-opened file.)

Maybe this would make sessions faster, since opening and loading the files may be what takes the most time in every epoch.
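
Something like this hedged sketch is what I have in mind (label_from_path and the labelling rule are hypothetical, and the whole dataset has to fit in RAM): decode every file once in get_items, then let get_x hand the cached array to the block, since PILImage.create accepts ndarrays.

    from operator import itemgetter
    from fastai.vision.all import *
    import numpy as np

    def label_from_path(p): return p.parent.name  # hypothetical labelling rule

    # Hedged sketch: decode each file once up front; every epoch then
    # reads from memory instead of re-opening the file.
    def cached_items(source):
        return [(p, np.asarray(PILImage.create(p)))
                for p in get_image_files(source)]

    dblock = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=cached_items,
        get_x=itemgetter(1),                      # the cached ndarray
        get_y=lambda item: label_from_path(item[0]),
        item_tfms=Resize(450),
    )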

Hey guys,

TL;DR: I’m writing my own __array__ method, but when I call it, I get this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-23ae7b1bc4a5> in <module>()
      1 im = RAWImage.create(fn=items[1])
----> 2 x=array(im)
      3 x

TypeError: 'dict' object is not callable

Long story short:

I’m trying to add either an __array__ or __array_interface__ method (property) to my class RAWImage (which is to replace PILImage, which is based on Image.Image).

Why? So that I can pass a RAWImage object to array() (the NumPy function) and get back an array of the image along with its dtype. It’s supposed to look like this (but with RAWImage instead of PILImage):

Output:

(fastai.vision.core.RAWImage, array([[[ 3,  3,  3],
         [ 3,  3,  3],
         [ 3,  3,  3],
         ...,
         [20, 20, 22],
         [19, 21, 20],
         [19, 21, 20]],
 
        [[ 3,  3,  3],
         [ 3,  3,  3],
         [ 3,  3,  3],
         ...,
         [20, 20, 22],
         [20, 20, 22],
         [20, 20, 22]],
 
        [[ 3,  3,  3],
         [ 3,  3,  3],
         [ 3,  3,  3],
         ...,
         [20, 20, 22],
         [21, 21, 23],
         [20, 20, 22]],
 
        ...,
 
        [[ 3,  3,  3],
         [ 3,  3,  3],
         [ 2,  2,  2],
         ...,
         [37, 37, 35],
         [29, 29, 27],
         [28, 28, 26]],
 
        [[ 3,  3,  3],
         [ 2,  2,  2],
         [ 2,  2,  2],
         ...,
         [33, 33, 31],
         [23, 23, 21],
         [22, 22, 20]],
 
        [[ 3,  3,  3],
         [ 2,  2,  2],
         [ 2,  2,  2],
         ...,
         [29, 29, 27],
         [21, 21, 19],
         [23, 23, 21]]], dtype=np.float32))

So I copied the Image.Image equivalent __array_interface__ and made some changes:

    @property
    def __array__(self):
        # numpy array interface support
        new = {}
        new["shape"] = self.ndarr.shape  # self.ndarr holds the image array
        new["typestr"] = "|f4"  # to support float32
        new["data"] = id(self)  # supposed to hold the object's memory address, but it doesn't work
        return new  # supposed to return a dict type of object

I tried to run this:

im = RAWImage.create(fn=items[1])
x=array(im)
x

But the error is as the following:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-23ae7b1bc4a5> in <module>()
      1 im = RAWImage.create(fn=items[1])
----> 2 x=array(im)
      3 x

TypeError: 'dict' object is not callable

What did I miss here? How do I fix it?
Thanks for helping with this!
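
Update with a hedged guess, in case it helps anyone hitting the same error: NumPy calls __array__ as a method and expects an ndarray back, while __array_interface__ is the property that returns a dict. Declaring __array__ as a property that returns a dict means array(im) looks up the dict and then tries to call it, which matches the 'dict' object is not callable error. A minimal sketch of the method form (assuming self.ndarr holds the decoded float32 array):

    import numpy as np

    class RAWImage:
        def __init__(self, ndarr):
            self.ndarr = ndarr  # the decoded image array (float32)

        def __array__(self, dtype=None):
            # NumPy calls this as a method and expects an ndarray back.
            return self.ndarr if dtype is None else self.ndarr.astype(dtype)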

1 Like

@Danrohn FYI I’ve moved your various tech help requests into this general chat thread, since they’re not related to the course content.

2 Likes

Is anyone aware of a resource for hard datasets across different vision tasks? E.g., hard datasets for image captioning. By hard I mean images on which a state-of-the-art model (Microsoft’s OFA, for example) provides an accurate, object-exhaustive caption (e.g. “there is a man holding a red bag”) but fails to capture the semantic meaning (e.g. “there is a man stealing a red bag”).

Alternatively, is anyone aware of a study that examined which images in current image-captioning benchmark datasets achieve the lowest BLEU-4 (or other metric)? E.g., the images in COCO that are hardest for a range of different models to caption accurately.

Admins please move this comment if it is not in the right spot, I couldn’t find a general datasets thread or similar! Cheers

1 Like

What kind of labels are used to build a text summarization dataset?

In popular datasets like cnn_dailymail, I have noticed that each example contains both the article and its highlights. Is such a labelled dataset always required?
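
For concreteness, a minimal sketch of those fields (assuming the Hugging Face datasets library):

    from datasets import load_dataset

    # Minimal sketch: each record pairs an "article" with human-written
    # "highlights", which act as the summary labels.
    ds = load_dataset("cnn_dailymail", "3.0.0", split="train[:3]")
    for row in ds:
        print(row["article"][:80], "->", row["highlights"][:80])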

As I was reading various course materials, I ended up finding this video of Jeremy interviewing Sanyam: Jeremy Howard Interviews Kaggle Grandmaster Sanyam Bhutani - YouTube
Highly recommend it! In a world full of noise, it was full of good signal :slight_smile:

7 Likes