Hi guys, I just did the Lesson 1 Homework with the Cricket and Baseball classification. Can anyone give me feedback for my code? Thank you!
Hi naraB hope you are well!
Your notebook looks good, it's well laid out and easy to follow.
If I were you I would try and deploy it somewhere so a friend can see it.
I would then do as Jeremy says, go full speed ahead and try and complete the other lessons.
There is plenty to do, as the lessons get a little more challenging.
Hello everyone, I want to ask: what is the better way to share a Jupyter notebook on GitHub, a Gist or a repo? Thanks.
Hey everyone, I just wanted to share some work I’ve been doing. At my university, I have been preaching fastai to both undergraduate and graduate level students. As a result I “teach” the course there through my club. Essentially I use the lessons as a base, and expand from there. Through this, I’ve been able to get four research projects for other students using the fastai library and the professors love it. I wanted to share with you guys my lecture/meetup material, in case anyone else would find it useful. This year I made it two days a week, where the first day we go over a particular type of problem (tabular, images, etc) and the second day is focused on state of the art practices with code examples, along with helpful tips/resources/functions for applying fastai and deep learning for research. If anyone wants to take a look, my notebooks are here
It may look slightly disorganized, I’m still preparing for the next class for this semester. Should be completely done with the new material in the next week or so.
The notebooks are all finished
Hi muellerzr, hope all is well!
Thanks once again for sharing your work.
Have you got any time management tips or a specific work ethic I could learn or emulate?
You seem to create and help so much.
Hey @mrfabulous1! Sure. I usually find some project where I can just get lost in it, explore it until it frustrates me, and continue until it doesn’t. Also, trying to teach and guide others at my school has really helped me make sure I know the material, as the people I am helping come in having never even touched Python in some cases. That takes a lot of prep work and thinking about how to steer them in the right direction.
For the past few months also, I work roughly 1-2hrs a day on smaller projects (this was before the meetup work), just exploring what some functions do, how they work, and applying it to any dataset I could find. Since most of my research is tabular, I was going through datasets found on the UCI.
Then, I’d explore pure PyTorch code papers and try to migrate them to fastai. Sometimes this is easy, e.g. the new optimizer LessW2020 got working, where it’s a simple port of a function; other times it’s trying to pull full architectures from papers such as NTS-Net or Deep High Res. Again, only working at most 2 hours a day so I don’t get too frustrated.
I also explore the source code and lecture notebooks. Often. How does x work? Why does x work? And why does doing y break x’s code? (What did I do wrong?) Most of the time, simply tracing back what a function does answers most of my questions. And for the course notebooks, I still can’t remember how to write an image databunch from memory, so I cheat (oh no!). I try not to, and if it doesn’t quite work, the course notebooks show an example for most any problem, so I debug there.
I write (or try to) when I can. I haven’t lately for my blog as things have been crazy, but I found writing blogs have helped me figure out what’s the most important bits from lectures, the library, etc and also helps me to be able to explain it to others.
And lastly, for lectures (the actual fastai course). Honestly I didn’t complete course v3 for four months. Why? I focused on what I needed then, and slowly worked my way through. Doing this allowed me to not get overwhelmed with the super advanced topics at the end of the course right away, and instead focused on what I needed to learn and do at the time for my various tasks.
I know I said lastly, but it just came to me: also, don’t be afraid to be curious. Einstein once said, “The important thing is not to stop questioning. Curiosity has its own reason for existing.” This can come in many ways, such as feature engineering, playing around with the number of variables, classes, hyperparameter tuning, etc. Even if someone’s done it, assume their way may not be the best, and try to see if you can outthink it. Even if that somebody is yourself! I had a research project where I was trying to beat a baseline in random forests. I spent two months on it and couldn’t quite do it; I always fell 1-2% short. Then, a few months later, I discovered a paper on feature engineering for sensors, revisited the project with my new knowledge and practices, and wound up blowing them out of the water! Patience, persistence, and curiosity are everything. While I know a decent amount about the library, there is much I don’t know, and I always remember that to stay level-headed. Every day I’m learning something new just by playing around.
So basic sum-up:
- Spend 1-2hrs a day on mini projects that I can get deep into for a month or two at most.
- Look over the source code and notebooks often
- Write blogs and lectures geared towards those who either barely know what fastai is or are just getting the basics, to make sure I know it well enough to explain it.
- Go through the lectures and courses slowly, relistening and running the notebooks often.
- You are your own rival. Try to outperform yourself on your projects and you will see growth.
- Read the forum daily. Even a casual browse of a topic. I may not understand something people are talking about, but I know it exists and I can revisit it later if I need to.
Hope some of that helps you or others keep going. I’ve only been at this for 9 months now, and doing the above has helped me solidify my comprehension of the material to a point where it’s allowed me to teach and help others at a young age (I’m still 21) and has opened many research and job opportunities. It doesn’t take much to get there.
Hi muellerzr thank you for providing a comprehensive reply.
I’m happy to say I have always had a lot of perseverance, curiosity, and patience with others, but according to my partner not with myself. I do a few of the things you mentioned, but from your reply I can see I can do a lot more. I will endeavor to add some of your tips to my repertoire.
Many Thanks mrfabulous1
I’ve been going through lessons 1 and 2, and I think I got the ideas okay; my problem has been trying to deploy to a web app for free. I’ve tried Heroku a few times but have been having problems. I’d like to do it, but it’s been hard, so I’ll check Android next.
But anyway, the thing I’m doing is a basic art classifier that tells you the artistic movement. Here’s a sample of the dataset:
error_rate is down to between 2-5%.
Hi @LauraB. Great job! In fact, wow, you did a LOT of work adding and trimming fastai to fit the Lambda. But why?
I just deployed a small project (I’ll share it soon), but I didn’t have to add fastai, so I saved a lot of time there. I just exported the model to PyTorch and then used the Dockerfile from PyTorch, which had all the modules I needed ( https://github.com/brunosan/iris-ai/blob/master/iris-aws-lambda/pytorch/app.py#L4 ). What made you need fastai? Just curious; you must have spent A LOT of time on that bit, but I don’t know why. I think the reason is that you didn’t port your model from the fastai format to the PyTorch format (explained here: https://course.fast.ai/deployment_aws_lambda.html#export-your-trained-model-and-upload-to-s3 )
You are right in that you don’t need the fast.ai library for inference if you export your model to the pytorch format.
I wanted to see if it was possible to have the fast.ai library running on AWS lambda, and it was a good learning experience for me
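For anyone curious what that export step looks like, here is a rough sketch of tracing a model to TorchScript so inference needs only plain PyTorch (a made-up tiny model stands in for your trained `learn.model`; in practice you would trace `learn.model.eval()` as the course deployment guide describes):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained model; with fastai you'd trace learn.model.eval()
model = nn.Sequential(
    nn.Conv2d(3, 4, kernel_size=3),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(4, 2),
).eval()

example = torch.rand(1, 3, 64, 64)        # dummy input with the shape the model expects
traced = torch.jit.trace(model, example)  # freeze the model into TorchScript

path = os.path.join(tempfile.mkdtemp(), "model.pt")
traced.save(path)                         # this is the file you'd upload to S3

loaded = torch.jit.load(path)             # inside the Lambda: plain torch, no fastai import
with torch.no_grad():
    preds = loaded(example)
print(preds.shape)  # torch.Size([1, 2])
```

Note the traced file carries the weights and the graph together, which is why the Lambda image only needs the PyTorch runtime.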
Weeds vs Grass
So there is great interest in reducing the use of herbicides in parks where children play, so I thought I would take a walk in the park, take photos (25 weeds/25 grass) with my iPhone, and see whether the resnet34 classifier would work. And it did! The error rate was 12.5%. I had 8 in my validation set and 17 in my training set. I did it again with a different mix of train and validation and got an error rate of 25%. But I then swapped to resnet50 and that dropped back down to 12.5%.
BTW, I’m totally running this on my windows machine and I spent very little effort in installing fastai and pytorch in my virtual environment using visual studio and pip. I’ve only really tested all of lesson 1 though - fingers crossed.
I’m a non-engineer business guy diving into this world of deep learning, and I’m loving it. After the 1st lesson I have created (painstakingly) my first trained model AND web app (it took me hours to figure this out). You can input any picture of a human face and my model will tell you if it is smiling, frowning, or sad! It has an 82% accuracy rate. I’m not sure if that’s a bad or good accuracy rate, but I’m proud of it. Looking forward to lessons 2-7!
Here is my web app! https://expressive.onrender.com/
I wrote about it here: https://www.instagram.com/p/B1afnSgjzY3/
Hey everyone, I tried to see if I could beat the IMDB results by including SentencePiece and ensembling four different models (forward + backward, SentencePiece + spaCy). I did not quite achieve state of the art, and I need to see what I missed since I did not get Jeremy’s results, but they look promising! CrossPost, Article, Notebook
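For context, ensembling here usually just means averaging each model's predicted class probabilities; a minimal sketch (random tensors stand in for the four models' outputs — forward/backward × SentencePiece/spaCy):

```python
import torch

torch.manual_seed(0)
# Stand-ins for four models' class probabilities (n_examples x n_classes)
preds = [torch.softmax(torch.randn(5, 2), dim=1) for _ in range(4)]

avg = torch.stack(preds).mean(dim=0)  # average the probabilities across the models
labels = avg.argmax(dim=1)            # final ensemble prediction per example
print(labels.shape)  # torch.Size([5])
```

The averaged rows still sum to 1, so `avg` can be treated as the ensemble's probability estimate.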
Thank you very much for sharing this app! I learned a lot deploying it on a local server. It took a while since I had to learn about Docker and other stuff.
Anyway I got it running, but I am doing something wrong.
I compared my version with the Heroku version and everything works fine except for the heatmap: the Heroku version is fine, but my version looks a bit scrambled.
This is the Heroku
This is mine:
Any help will be appreciated.
Thanks for your appreciation!
My best guess right now is that I made some changes (and committed them without deploying) after the last Heroku deploy. Maybe I upgraded the model or changed the predict.py file. I’m not able to dig into this right now, but you might want to check the latest commits to see if a change broke this part of the app.
Continuing the discussion from Share your work here :
Should SentencePiece help on an English corpus? I treat it as a necessary evil for Polish, as we have too many forms for each word for a standard vocabulary to work, but I wasn’t aware this is needed/helpful for English…
In the NLP class, Jeremy discussed trying a blend of all four; that’s why I did it. Overall I noticed SentencePiece performing slightly worse, but only barely.
That’s cool - what did you use for data? How many images and did you manually label?
I scraped images of the members of congress from https://congress.gov using Beautiful Soup and built a classifier model using the lesson 2 notebook to determine whether an image was of a Republican or Democrat. It is deployed on render at https://repubordem.onrender.com.
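The scraping step can be sketched with the standard library's parser (the post used Beautiful Soup, which is more robust for real pages; the HTML snippet and image paths below are made up for illustration):

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Collects the src attribute of every <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

# Hypothetical fragment of a member-listing page
html = '<div><img src="/img/member1.jpg"><img src="/img/member2.jpg"></div>'
parser = ImgCollector()
parser.feed(html)
print(parser.srcs)  # ['/img/member1.jpg', '/img/member2.jpg']
```

With Beautiful Soup the equivalent is roughly `[t["src"] for t in soup.find_all("img")]`; either way, you'd then download each URL into a folder per label for the lesson 2 workflow.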
I used images for 304 Republican members of Congress and 249 images of Democratic members of Congress. I got the overall error rate down to 35%, which I interpreted to mean the model was picking up something meaningful to distinguish Republicans from Democrats, but it's not that great, since you could get an error rate of 45% by just picking Republican every time.
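That 45% baseline checks out: always predicting the majority class is wrong on exactly the minority-class examples.

```python
n_rep, n_dem = 304, 249                   # image counts from the post
baseline_error = n_dem / (n_rep + n_dem)  # always guess "Republican" -> wrong on every Democrat
print(f"{baseline_error:.0%}")            # 45%
```

So the 35% model beats the majority-class baseline by about 10 percentage points.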
New member here. Thanks for this open source software!
Recently completed lesson 1 and 2 of the fast.ai course and I wanted to get stuck in. Decided to classify ships. Wrote a blog post on this at https://sites.google.com/view/raybellwaves/blog/classifying-ship-classes
Had a few hiccups along the way, but as a result I am more familiar with the software. Here’s a list of some of my stalling points and how I got around them:
- .split_none() when creating a learn object on the ‘cleaned.csv’
- .databunch(bs=16) to specify the batch size
- interp.plot_confusion_matrix was bringing up the call to the tabular module. Restarting the kernel seems to get rid of this issue (this post helped: Plot_top_losses() throwing "AttributeError: 'ImageList' object has no attribute 'cat_names'")