Share your work here ✅


2 Likes

I’ve included some nice resources on working with Vue/Quasar in the repo’s README.

Wow, there are so many files there - I can’t keep track of what’s going on! I guess that’s the result of some kind of auto-scaffolding? Or is it just how the framework expects things to be laid out?

Which files did you have to create by hand? Which bits are boilerplate which are always basically the same?

Is there any way to restructure things so it’s easier for vue n00bs like me to more quickly see what’s going on? Or are all the separate files and folders strictly needed?

All will be answered in my livestream (I’m just trying to figure out a good time for this weekend that works for me and most folks in the class).

Like most SPA frameworks, yah, there are a lot of files to support the application (routing, state mgmt, static resources, ability to deploy as an electron/ios/android/web app, etc…). The file structure you see is the scaffolding generated by initializing a Quasar app via quasar init. Outside of a few misc. changes to the config and layout, the only page I had to create by hand was src/pages/IndexPage.vue.

The dist/spa folder is your compiled production app that is generated as a result of running quasar build. This is where I have gh-pages looking to serve the application.

It looks complex, but I’ll show folks how to create, develop, build, and deploy a Quasar/Vue app to github pages in less than 30 minutes. Folks can time me.

7 Likes

Excellent. Please do record it!

1 Like

Hi Fastai Friends,

After the second and third lessons I implemented some of the things Jeremy discussed, and that’s what I have to show you in this post.

My cloud image classifier has evolved significantly through some data cleaning followed by data augmentation. N.B. I am still using resnet18, but at some point I’ll train a version on a more modern model.

Details of exactly what I did can be read in my blog post here.

Conclusions:

Data cleaning brought me from ~75% accuracy to ~81%, and data augmentation took me from ~81% to ~85%.

In the end I have a model that is ~85% accurate using pretty much out-of-the-box fastai methods and a very small dataset - the final version used only 50 images per category. As far as I know, there is no other model out there that can classify clouds from images with this accuracy.
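For a sense of how little code this takes, the whole pipeline is roughly the following (a simplified sketch - the folder name and epoch count are placeholders; the blog post has the real details):

```python
from fastai.vision.all import *

path = Path('clouds')                        # placeholder: one subfolder per cloud type
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, item_tfms=Resize(224),
    batch_tfms=aug_transforms())             # the data augmentation step
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(5)                           # epoch count is illustrative

# data cleaning = reviewing top losses and fixing/removing bad labels,
# e.g. with ImageClassifierCleaner from fastai.vision.widgets
```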

The Hugging Face Gradio app is here.
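For anyone curious how the Space is wired up, it follows the standard fastai + Gradio pattern from the lesson, roughly like this (simplified; the exported model filename is a placeholder):

```python
import gradio as gr
from fastai.vision.all import load_learner, PILImage

learn = load_learner('clouds.pkl')           # exported fastai model (placeholder name)

def classify(img):
    # run inference and map class names to probabilities
    pred, idx, probs = learn.predict(PILImage.create(img))
    return {c: float(p) for c, p in zip(learn.dls.vocab, probs)}

gr.Interface(fn=classify, inputs=gr.Image(),
             outputs=gr.Label(num_top_classes=3)).launch()
```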

I’d love it if you could try it and post some screenshots of the clouds you classify. Thanks for reading.

9 Likes

You should run a timer on your livestream, just so that the timing is recorded as well :sweat_smile: I’ll def. be watching the recorded version.

3 Likes

Are you tired of not being able to tell other people’s emotions?

  • Unsure what Chuck Norris thinks of you while giving you a thumbs up?
  • Not sure if that lady from the Real Housewives of Beverly Hills across from you is furious or happy?
  • Uncertain what Ben Affleck is feeling during his Batman V Superman interview?

Well now you kinda can!

Introducing the Emotions Classifier 3000! (can only do happy, angry & sad though :joy:)

T&C’s: Do not test on spouse to figure out their emotions unless you want to sleep outside or on the couch. :rofl:

8 Likes

After reading the first few chapters of the “Deep Learning for Coders with fastai and PyTorch” book and giving up, I started reading the book again, this time from the last chapter, and I’m sharing my learning as videos. As part of this, I published my 2nd video, on the simplified version of the fastai Learner class explained in the 19th chapter.
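To give a flavor, the simplified Learner from that chapter boils down to something like this (a minimal sketch of my own, assuming plain SGD and a single training DataLoader - the video walks through the full version):

```python
import torch

class MiniLearner:
    # Stripped-down Learner in the spirit of the book's chapter 19
    def __init__(self, model, train_dl, loss_func, lr=0.1):
        self.model, self.train_dl = model, train_dl
        self.loss_func, self.lr = loss_func, lr

    def one_batch(self, xb, yb):
        loss = self.loss_func(self.model(xb), yb)  # forward pass + loss
        loss.backward()                            # backward pass
        with torch.no_grad():
            for p in self.model.parameters():
                p -= p.grad * self.lr              # plain SGD step
        self.model.zero_grad()                     # reset gradients
        return loss.item()

    def fit(self, n_epochs):
        self.model.train()
        for epoch in range(n_epochs):
            for xb, yb in self.train_dl:
                loss = self.one_batch(xb, yb)
            print(f'epoch {epoch}: last batch loss {loss:.4f}')
```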

Hope you like it. In the next video, I will be sharing my understanding of the actual Learner class, and I am super excited about some of the new concepts I learned while digging into the main fastai code.

Hoping to complete the book this time and share my learnings😁.

13 Likes

@VishnuSubramanian

Loved the crystal-clear explanation of the Learner class - a nice complement to the book. Can’t believe how elegantly this class was designed.

1 Like

Finished posting the 2nd part on the fastai Learner class, where we explore what goes on inside the Learner class.

It’s a bit of a long video, so please grab a cup of coffee :coffee:.

For the next video, I will be exploring what happens inside vision_learner.

Hope you like the video, please share your suggestions/feedback :blush:

7 Likes

Looks to be working for clouds in Ottawa :canada:. So cool to see the x-industry applications.

2 Likes

Thanks for posting the 2nd part, Vishnu. If I could make a suggestion, it would be to maybe do something about the sound. I’m not sure if it’s on my end, but it sounds like it’s coming out of a phone on a bad connection (maybe it’s still being processed by YT?).

The other thing I would suggest is to give a heads-up before making a “context switch”. As a beginner to the nuances of Python, I find it hard to follow when things happen so quickly - you explain something and then rapidly change from screen to screen to screen and back to the original. That’s probably just my own shortcomings as a Python novice, but from a n00b perspective (if you need that opinion) it’s a little difficult to follow. I’m basically backtracking a lot, and hopefully I’ll grasp the finer points of what’s being taught after enough viewings, as I’m not a Python coder.

As an example, it may be an obvious thing for most people out there, but I noticed a place where some params were being passed as func_name(*blah) and I wasn’t sure what that was about (the * before the params). I know **kwargs is for passing a variable number of keyword arguments, so maybe *args is a similar thing?

With the above example, I’m just trying to illustrate how these innocuous things can seem daunting to the uninitiated.

1 Like

Yup exactly - it is: Python args and kwargs: Demystified – Real Python

(BTW we covered this feature briefly in the last lesson.)
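In a nutshell:

```python
def f(*args, **kwargs):
    # *args collects extra positional args into a tuple,
    # **kwargs collects extra keyword args into a dict
    print(args, kwargs)

f(1, 2, lr=0.1)               # -> (1, 2) {'lr': 0.1}

# the same stars also unpack when calling, or when merging:
nums, opts = [1, 2], {'lr': 0.1}
f(*nums, **opts)              # same as f(1, 2, lr=0.1)
merged = {**opts, 'bs': 64}   # dict merge via unpacking
```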

3 Likes

You can also generate bounding boxes from class activation maps and pinpoint the locations of the highest activations for each class.

Have a look here - it’s all implemented in fastai:
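The idea fits in a few lines: weight the final conv feature maps by the classifier weights for a class, normalize, threshold, and box whatever survives. A rough PyTorch sketch (the function name and the 0.5 threshold are just my assumptions):

```python
import torch

def cam_bbox(feature_maps, fc_weight, cls_idx, thresh=0.5):
    # feature_maps: (C, H, W) activations from the last conv layer
    # fc_weight:    (n_classes, C) weights of the linear classifier head
    # weighted sum over channels = class activation map for cls_idx
    cam = torch.einsum('c,chw->hw', fc_weight[cls_idx], feature_maps)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
    ys, xs = torch.nonzero(cam >= thresh, as_tuple=True)      # hot pixels
    # box is in feature-map coords; multiply by the model's stride for pixel coords
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()
```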

4 Likes

Thanks so much for sharing!! I really loved seeing someone else use the app - you have motivated me to build more things and to keep improving my models.

I think the next step for this model is to expand the number of classes to match what the WMO currently recognizes as ten cloud genera (basic classifications), which describe where in the sky they form and their approximate appearance. (Classifying clouds | World Meteorological Organization)

Thank you

2 Likes

Thank you for that link, Jeremy! The unpacking and merging operations of these operators seem quite useful!

I do have to confess that back in my comp sci days I struggled a lot with C pointers and references, and even to this day, when I see a variable name starting with a ‘*’ or an ‘&’, cold shivers run down my spine :sweat_smile:

Thank you so much for your detailed feedback. :pray:

Audio quality: I am currently using Apple EarPods; I’ll look into getting some better ones. If anyone has recommendations, please share a good/simple mic.

Pace: Oh, got it. Watching it again, I think I should have gone slower. Will keep this in mind for the upcoming videos in the series.

Even after taking the course and using plain PyTorch for years now, it took me several hours to understand what was going on in the Learner class. I was trying it out of curiosity.

You might understand it better if you watch this video again after completing the course material, or use it as a companion when you want to understand what goes on inside the Learner class.

2 Likes

Any headset with a boom works well. By far the most important thing that impacts audio quality is how far your mouth is from the mic.

2 Likes

Amazing! Can’t wait to see more. Keep us posted on how you get on.