Share your work here ✅

@reshama and @npatta01

Great work! Congratulations!

I would like to ask: was the inference done on mobile or in the cloud?

Thanks!

@chho6822 The inference is done in the cloud.
We haven’t looked into running it on mobile.

Thanks. I guess I need to run more experiments :smiley:

1 Like

I created another starter pack :slight_smile: This time it is for the Kaggle Humpback Whale Identification Competition. I like it way more than the previous one. Also, the competition is a lot of fun - many things can be attempted that make for a good learning experience and can lead to a better score. The competition just launched and there are still over two and a half months to go!

Here are a couple of images from the training set:

Here is the Kaggle thread and a Twitter thread.

21 Likes

Sorry for not being able to share an awesome project like everyone else.

Instead I want to share an interview: I interviewed @rachel about her DL journey and fast.ai, and I totally forgot to share it with the community. Here is the link to the interview.

5 Likes

Based on the crap-to-no-crap GAN from lesson 7, I tried the same approach to add color to crappy black & white images. First I downloaded high-quality images from EyeEm using this approach.

Training the Generator with a simple MSE loss function, I did not get exciting results (the only thing it learned is that the sky has to be BLUE :laughing:):

Then, after adding the Discriminator and training in a ping-pong fashion, the Generator got better:

Now trying on some test images :crossed_fingers:

Here is the notebook
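
For anyone curious about the ping-pong part above, here is a stripped-down sketch of the alternation (plain PyTorch with tiny placeholder models, not the actual architectures or the fastai GANLearner used in the notebook):

```python
import torch
import torch.nn as nn

# Placeholder networks: a real colorizer/critic would be much deeper (e.g. a U-Net generator).
gen  = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1))                      # grayscale -> RGB
disc = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())              # real/fake score

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

def train_step(gray, color):
    # Discriminator turn: distinguish real color images from generated ones.
    with torch.no_grad():
        fake = gen(gray)
    real_score, fake_score = disc(color), disc(fake)
    d_loss = bce(real_score, torch.ones_like(real_score)) + \
             bce(fake_score, torch.zeros_like(fake_score))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator turn: pixel (MSE) loss plus a "fool the discriminator" term.
    fake = gen(gray)
    fake_score = disc(fake)
    g_loss = mse(fake, color) + bce(fake_score, torch.ones_like(fake_score))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```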

15 Likes

I replied on Kaggle, but wanted to reply here as well: Your methodology very closely followed the intuitions I’ve been working off of, and your code gave me a couple of "ah hah!"s from my own experiments so far. You can see my detailed response there.

Thanks also for posting the usable map5 code.
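
For anyone else searching for it: map5 here is mean average precision at 5, roughly this kind of helper (my own generic sketch, not the exact code that was posted):

```python
def map5(preds, targets):
    """preds: list of top-5 predicted whale ids (best first) per image;
    targets: list of the single true id per image."""
    total = 0.0
    for top5, target in zip(preds, targets):
        for rank, p in enumerate(top5[:5]):
            if p == target:
                total += 1.0 / (rank + 1)   # precision at the rank of the first hit
                break
    return total / len(targets)
```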

1 Like

@radek Thank you for sharing the notebook. While trying to run it, I got this error:

PicklingError: Can’t pickle <function crop_pad at 0x000001F3E7E5F1E0>: it’s not the same object as fastai.vision.transform.crop_pad

I am using fastai 1.0.36.post1 on Windows 10.

Please suggest how to fix this error.

Thanks,
Ritika Agarwal

Chances are just restarting the kernel should fix this.

I answered here with what works for me, at least on Windows 10.

Worked perfectly - thanks for sharing this!

1 Like

@radek I restarted the kernel, but I was still getting this error. I fixed it by putting NUM_WORKERS = 0 and adding padding_mode='zeros'.

But now I am getting the error below while fitting the model. Please suggest how to fix it.
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.IntTensor for argument #2 ‘target’

Thanks,
Riitka

Thanks @brismith. I replicated the steps you shared, but I still got this error.

I fixed this issue by setting NUM_WORKERS = 0 and adding padding_mode='zeros'.
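
Roughly, the two changes sit here in the data pipeline (paths, size and batch size are placeholders, and the item-list/method names shifted a bit across fastai 1.0.x releases, so adjust to your version):

```python
from fastai.vision import *

data = (ImageItemList.from_df(df, path, folder='train')
        .random_split_by_pct(0.2)
        .label_from_df()
        .transform(get_transforms(), size=224, padding_mode='zeros')  # 'zeros' instead of reflection padding
        .databunch(bs=32, num_workers=0))  # num_workers=0 sidesteps the Windows pickling error
```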

Thanks,
Ritika

2 Likes

I played around with different loss functions for super resolution to see the impact on the generated images. It is interesting to see how much the choice of loss impacts quality. I’m also trying to figure out how to deploy a super resolution model on Zeit. If anyone knows how to make the app return an image, let me know.
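
In case it helps frame the question, this is roughly the shape of route I have in mind (a minimal, untested Starlette sketch; `run_super_res` is just a stand-in for the model's inference call):

```python
import io
from PIL import Image
from starlette.applications import Starlette
from starlette.responses import Response

app = Starlette()

def run_super_res(img):
    # Placeholder: swap in the learner's prediction and conversion back to a PIL image.
    return img

@app.route('/upscale', methods=['POST'])
async def upscale(request):
    data = await request.form()
    img_bytes = await data['file'].read()            # uploaded image
    out_img = run_super_res(Image.open(io.BytesIO(img_bytes)))
    buf = io.BytesIO()
    out_img.save(buf, format='PNG')
    return Response(buf.getvalue(), media_type='image/png')   # return the image itself, not JSON
```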

7 Likes

That means your fastai version is not up to date.

I :pencil2: a brief post on the Naive Bayes classifier (introduced in Lesson 10 of the fastai Machine Learning course) on Towards Data Science. Hoping some folks might find it useful.
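
The heart of the post is the log-count-ratio form of Naive Bayes from the lesson; roughly this (a minimal sketch of my own, variable names are not from the post):

```python
import numpy as np

def nb_log_count_ratios(X, y):
    """X: (n_docs, n_terms) bag-of-words count matrix (numpy array); y: 0/1 labels."""
    p = X[y == 1].sum(0) + 1                       # term counts in positive docs, +1 smoothing
    q = X[y == 0].sum(0) + 1                       # term counts in negative docs, +1 smoothing
    r = np.log((p / p.sum()) / (q / q.sum()))      # log-count ratio per term
    b = np.log((y == 1).mean() / (y == 0).mean())  # log of the class prior ratio
    return r, b

# Prediction: a document is classed positive when X_new @ r + b > 0.
```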

Hi Jeremy,

I am using fastai version 1.0.36.post1 on Windows 10. Do I need to update the fastai package?

Thanks,
Ritika

One way to do this is to export the .ipynb from Jupyter as a .py script.

Then you can browse it (and jump to source) or execute the script in VS Code.
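
If you would rather do it programmatically than through the File menu, nbconvert's Python API does the same thing (filenames below are placeholders; the `jupyter nbconvert --to script notebook.ipynb` command line is equivalent):

```python
from nbconvert import PythonExporter

# Export a notebook to a plain .py script.
body, _ = PythonExporter().from_filename('my_notebook.ipynb')
with open('my_notebook.py', 'w') as f:
    f.write(body)
```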

Jeremy, thanks for the suggestion. I read that post on data leakage - interesting; I hadn’t even heard the term before - and it took a while to figure out what he means by “cross-validation folds”! I do see where it would be a problem in the scenarios he focuses on (mostly k-fold cross-validation, if I’m reading it correctly), but I didn’t see much there that seems directly applicable to this case, except of course his solution of holding back a validation dataset, which I take as gospel and I think is pretty well baked into fastai.

I’ve read a random sample of the dataset article texts (makes for some fascinating reading!) and don’t see anything that might be a marker showing whether it’s ‘real’ or ‘fake’, but maybe I don’t know what to look for? One obvious ‘marker’ might be the words ‘real’ or ‘fake’ in the text, and that definitely occurs, so to eliminate any chance of that causing leakage, I ran with a ‘clean’ df from which I had removed any records where ‘real’ was in the text and the label was REAL, or ‘fake’ was in the text and the label was FAKE. That took the dataset down to about 4800 records (from 6000). It still produced about 98% accuracy.
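
In code, the filter was roughly this (the 'text' and 'label' column names are illustrative; substitute the dataset's actual ones):

```python
import pandas as pd

df = pd.read_csv('news.csv')  # placeholder path

# Drop rows where the label word itself appears in the article text.
mask_real = df['text'].str.contains('real', case=False) & (df['label'] == 'REAL')
mask_fake = df['text'].str.contains('fake', case=False) & (df['label'] == 'FAKE')
clean_df = df[~(mask_real | mask_fake)].copy()
```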

I previously had run a few more times with the full dataset, and saw a little variability in losses and accuracy, but generally around 98% accuracy, so it seems like removing the ‘real-REAL’ and ‘fake-FAKE’ records didn’t make much difference.

Interesting situation. Maybe fastai is just that good! Although I admit I’m still a little suspicious… Any other factors I should be looking at?

2 Likes

Interesting, I’ll try this. Though I’ve now managed to get VS Code working with the Jupyter extension as well. Thanks!