General course chat


(Alex) #489

It may be off-topic, but nobody has raised it yet, so I will.

Why don’t we use the Chat category for chatting? Discord could be a good alternative.


(benedikt herudek) #490

Hi,

is anyone aware of how MOOC registration for the 2nd part will work?

I can see the dates here: https://www.usfca.edu/data-institute/certificates/deep-learning-part-two

and a registration link for the in-person class here: https://www.fast.ai/

But I am looking for a registration link for the MOOC, and ideally a curriculum (I’m aware of the brief curriculum at the end of lesson 7 of course 1).

thx !


(Sanyam Bhutani) #491

@Benudek Rachel confirmed on Twitter that there will be a live version. I haven’t come across any other information yet, but I’d suggest keeping an eye on the fast.ai website as well as Jeremy’s and Rachel’s Twitter profiles.

I’ll definitely notify you here if I come across any announcements.


(benedikt herudek) #492

great, thx! I couldn’t find that Twitter announcement, but I’ll be patient, and grateful if you say something when you hear something :wink:


(Sanyam Bhutani) #493

Found the tweet.

Will def post here if I see any updates.


(Dhyey Pankaj Mehta) #494

Can the Data Block API be used for all applications? Where can’t we use the data block? And why not directly use a DataBunch for most applications, especially for images?
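For context, the DataBunch factory methods (e.g. `ImageDataBunch.from_folder`) are themselves built on top of the data block API, so a DataBunch factory is fine whenever your data fits one of those presets; the data block API covers everything else (custom splits, labels, or inputs). Here is a sketch of the fastai v1 chain for a folder-per-class image dataset at `path` (names are from the v1 API; this is illustrative and not runnable without such a dataset on disk):

```
# fastai v1 data block chain (sketch) -- each step answers one question:
data = (ImageList.from_folder(path)          # where are the inputs?
        .split_by_rand_pct(0.2)              # how to split train/valid?
        .label_from_folder()                 # how to label? (folder names)
        .transform(get_transforms(), size=224)  # which augmentations/size?
        .databunch(bs=64)                    # wrap it all into a DataBunch
        .normalize(imagenet_stats))
```

Each step can be swapped independently (e.g. `split_by_folder`, `label_from_csv`), which is exactly the flexibility the one-shot DataBunch constructors don’t give you.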


(benedikt herudek) #495

thx a lot !


(Sanyam Bhutani) #496

@Benudek Here is the official announcement thread by Jeremy link.


(Jona R) #497

~ Congrats to everyone invited to participate in Part 2 this year! ~

I have been poking around the forum and haven’t found quite the right place for the types of questions I have. They are mainly implementation-flavoured and usually one-offs. Sometimes they are of the form “is X a valid approach to Y?”, where I imagine someone more experienced could give me a heading and encouragement (or discouragement) as appropriate. Can anyone point me towards a thread or place where I should post things like this?

For example, my most recent question is around whether I can train a CNN to understand addition and subtraction. I’ve got a dataset and an intuition and a naive strategy, but I don’t see the best place to banter about stuff like this. Maybe reply if you have a suggestion about where to post, or message me directly if you want to chat?

Thanks!


(Kunal Gurnani) #498

hi guys, I’m just starting this course. What is the best way to take it? Watch all the videos first and then apply the knowledge to different datasets, or do that right after finishing each lesson?


(Siddhartha) #499

I’d recommend an iterative approach. Follow along with the videos while running the code on your end and trying to reproduce the results; then watch the videos again and try to do all the little assignments that Jeremy mentions off-hand, using different datasets.


(Kunal Gurnani) #500

hi, thanks for the recommendation. Is there a list somewhere of all the assignments he mentions off-hand? I may miss some, so having a list would be useful.


#501

I was wondering if it’s possible to access notebooks from my local computer, just to view them, after I run them on something like GCP?


(Dana Ludwig) #502

For those like me who completed the 2018 courses, I urge you to at least go through all the videos of the 2019 course! So many things are better, mostly related to features of the fastai v1 library. For instance, the tricks for getting fast access to documentation are far easier to use than before.

Also, as much as I wanted the shortcut of just studying the text lessons, everything builds on earlier lessons (e.g., in vision), so better not to take shortcuts (as the Donner party learned).

Also, Jeremy’s presentations have become more precise and more polished. I had a bunch of “aha” moments, even though I finished most of the 2018 part 2 class.


(Kelvin Idanwekhai) #503

@jeremy I was reading in this fastai article that I can create my own language model for any language. I have figured out how to get the data from Wikipedia for my local language, but I really do not have much experience with NLP.
What next steps should I take?
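The usual next step (the ULMFiT recipe from the course) is to train a language model from scratch on your Wikipedia dump, then fine-tune it for downstream tasks. A hedged sketch using the fastai v1 text API, assuming your dump has been cleaned into plain-text files under `wiki_path` (API names are from fastai v1; the hyperparameters are illustrative, not tuned):

```
from fastai.text import *

# Build a next-word-prediction DataBunch from plain-text files (sketch).
data_lm = (TextList.from_folder(wiki_path)
           .split_by_rand_pct(0.1)       # hold out 10% for validation
           .label_for_lm()               # labels are simply the next tokens
           .databunch(bs=64))

# The AWD_LSTM pretrained weights are English-only, so train from scratch.
learn = language_model_learner(data_lm, AWD_LSTM, pretrained=False)
learn.fit_one_cycle(10, 3e-3)
learn.save('my_language_lm')             # reusable later for classification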


(Ritika Arora) #504

How do I do object detection?


(Jona R) #505

I’ve got a question that I hope someone can help me with.

I have one DataBunch with 10 classes (MNIST), and I have trained a resnet34 to arbitrary accuracy on it. Now I would like to use transfer learning to leverage the weights I just trained to fit another DataBunch with 100 classes (the compound numbers 0 through 99). My thought was that after training I would set learn.data to the second dataset, freeze the learner, and then run fit_one_cycle. However, the program (running on Kaggle) crashes every time I run the second fit_one_cycle: [RuntimeError: CUDA error: device-side assert triggered]

Is it not possible to use the weights from one model to bootstrap another? Any idea what I am doing wrong?


(Sanyam Bhutani) #506

@jona It definitely is possible!
When we do transfer learning, we’re using the weights from ImageNet. Sorry if this is something you already know.

But as you may imagine, the classes in ImageNet differ from the number of classes in, for example, MNIST, so fastai takes care of the architectural changes for you behind the scenes.

What I think you can try here is:

  • Train on MNIST.
  • Save the weights.
  • Create a new learner.
  • Load the weights.
  • Re-train.

Regards,
Sanyam.
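As an aside, the device-side assert in the post above is the classic symptom of labels exceeding the model’s number of outputs: swapping in the 100-class DataBunch while keeping the 10-way head means targets 10–99 index past the final layer. A minimal PyTorch sketch of the steps above, where the tiny body/head are illustrative stand-ins for the resnet34 body and head fastai builds:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a pretrained body + 10-class head.
body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
head10 = nn.Linear(64, 10)
model = nn.Sequential(body, head10)

# ... train `model` on the 10-class MNIST data here ...

# To reuse the weights for 100 classes, keep the body but attach a fresh
# head sized for the new label count -- keeping the old 10-way head is what
# triggers the CUDA device-side assert (targets >= number of outputs).
head100 = nn.Linear(64, 100)
model2 = nn.Sequential(body, head100)

# Freeze the transferred body so only the new head trains at first.
for p in body.parameters():
    p.requires_grad = False

x = torch.randn(4, 1, 28, 28)
out = model2(x)   # -> shape (4, 100)
```

This is the same effect as Sanyam’s save/new-learner/load recipe; fastai just performs the head replacement for you when you create the new learner.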


(Brian Smith) #507

Do any deep learning models for vision use both monochrome and colour through different paths? I’m thinking of rods/cones in our eyes, but also of the use of multiple exposures in mobile and compact cameras that take a similar approach (the new Nokia PureView, and other techniques like fast-shutter-speed monochrome + slower colour) to get sharp pictures in low light. I’m sure deep learning can extract the same mono information from the 3 channels, but would one mono channel at higher resolution plus the 3 colour channels at lower resolution give faster training? I couldn’t find any research on it, but it seems like it could have benefits.


(Sanyam Bhutani) #508

That sounds like a very interesting approach!

Are you suggesting a stacked model to benefit from both inputs?
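For what it’s worth, the idea could be prototyped as a two-branch network: one branch takes the high-resolution mono channel, the other the lower-resolution colour channels, and their features are concatenated before the classifier. A minimal PyTorch sketch; the architecture, layer sizes, and resolutions are illustrative, not a published design:

```python
import torch
import torch.nn as nn

class TwoPathNet(nn.Module):
    """Two-branch sketch: high-res mono path + low-res colour path,
    fused before the classifier head."""
    def __init__(self, n_classes=10):
        super().__init__()
        # 1-channel path for the high-resolution monochrome input
        self.mono = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # 3-channel path for the lower-resolution colour input
        self.colour = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Concatenate both feature vectors, then classify
        self.head = nn.Linear(16 + 16, n_classes)

    def forward(self, mono, colour):
        feats = torch.cat([self.mono(mono), self.colour(colour)], dim=1)
        return self.head(feats)

net = TwoPathNet()
mono = torch.randn(2, 1, 128, 128)    # high-res mono batch
colour = torch.randn(2, 3, 64, 64)    # low-res colour batch
out = net(mono, colour)               # -> shape (2, 10)
```

Whether the separate mono branch actually buys speed or accuracy over letting the conv layers derive luminance from RGB would need an experiment; this only shows the plumbing.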