Introduce yourself here

Huge congrats! Thanks so much for sharing.

BTW (and feel free to not answer, of course) what’s David Shaw’s involvement? Is he contributing directly to the research? Or just funding it? I was surprised to see his name on the paper - I think it’s great he’s using his talents for this kind of work :slight_smile:

1 Like

Hello everybody!

I have been doing Fast.ai for several years now and have written several articles and programs. I loved it so much that I even went on to complete Amazon’s Machine Learning course.

You can see some examples below.

My goal this round:
I am going to push to get my work onto either AWS or Azure. During the course, I hope to share some interesting material in that direction.

13 Likes

[I didn’t know where to put this]

I need some help in getting my DL priorities straight.

I was hoping that the people on this forum might be able to give me some direction.

TL;DR: I’ve got a lot of goals this semester along with a lot of work, and I can’t decide how to proceed with my deep learning journey in the way that will be most impactful for my goals after graduation.
I am very short on time and need to make a few key decisions about how best to approach: Part 1 2020, the v2 walkthroughs, Part 2 2020, and the Deep Learning book.


A little bit about me for some context: Introduction Post
https://akashpalrecha.me/about.html

The Problem:

So, this semester (3rd year) in college, I’ve got a bunch of fixed, **have-to-do** priorities:

  1. Pixxel, a space-tech startup (it will sell hyperspectral imagery of the Earth commercially for the first time), where I’m working as an AI researcher and technical team lead.
  2. College math courses, of course.
  3. A semester project with a professor, which requires me to be familiar with the first half of fastai’s Computational Linear Algebra course.
  4. A CVPR competition track I’m participating in with my colleague at Pixxel.

Alongside all of this, I want to set aside as much time as possible for Part 1 2020 when it starts streaming live, and I will, with all the discipline I have.
As of now, I’ve completed Part 1 of the course from the last 2 years, along with (most of) 2019’s Part 2, and I’m deeply familiar with fastai v1’s source code.

So I was wondering whether I should go through all of the fastai v2 walkthroughs that Jeremy has posted online. Since I’m going to do Part 1 2020 anyway, I’m concerned that if the material overlaps heavily, going through both might prove counter-productive.
This is a concern because I’m going to be very, very busy this semester in college, and I want to use my time as efficiently as possible and avoid re-doing things.
(Also, Part 1 2020 will begin when my mid-semester exams start and end with my final examinations. That’ll make things a bit more uncomfortable than I’d like.)

Also, @jeremy, is Part 2 of 2020 going to cover a lot of fastai v2’s internals like last year? Since I’m going to be doing that part of the course thoroughly anyway, I’d rather skip the walkthroughs for now if that’s the case (I don’t want to. I really don’t want to. But I’m hard-pressed for time).

I was also hoping to comprehensively go through the Deep Learning book by Ian Goodfellow along with the FastAI courses. So, if Part 2 this year is going to discuss a lot of deep learning theory, then I’d defer reading the book for when the course comes out and would ideally read it while going through the course. I’ve coded a lot of DL models already and feel like now is the time to finally go through the theory properly.

What I am hoping to achieve: by the time Part 2 of this year ends (nearing the beginning of my 4th year), I want to be so well-versed in both DL theory and practical applications that I can confidently apply to and interview for even the most competitive AI positions at great companies/universities around the world.

In Summary:

  1. Is Part 1 2020 going to cover things similar to the walkthroughs?
  2. If the answer to the above is NO, then since I’m extremely short on time, should I go through the walkthroughs, or will that content be covered in Part 2 of the course anyway?
  3. Should I read the Deep Learning book before Part 2 starts or read it along as the course proceeds?

I’d be very grateful if people around here could give their viewpoints on how I should go about things.

1 Like

By the v2 walkthroughs, do you mean the ones Jeremy has uploaded? Or my study group? :slight_smile: (Once I know that, I can try to help out some.)

2 Likes

The ones that Jeremy has uploaded.

1 Like
  1. It’s going to be similar to how the course was last year, plus a bit of RF included too, and probably a few other surprises Jeremy has planned.
  2. If you want an in-depth look at how the library operates, I’d recommend the walkthroughs and then a look at the (now numerous) examples that Jeremy and I have, as almost any topic has been or will be covered with examples.
  3. I can’t quite speak to that one. I’ll be reading it the moment I get it. From what I’ve seen of what’s been released/talked about on Twitter, it sounds like by the end you’ll understand a lot about both the theory and the applications, probably similar to last year’s Part 2, if not a bit more.
2 Likes

@muellerzr
If you’ve done the walkthroughs, and if it’s not too much to ask, could you please post a bullet list with one-line descriptions of what each walkthrough video focuses on? :sweat_smile: It’ll help me decide the schedule for how I should go through those videos.

@akashpalrecha sure, see right here: :wink:

Basic schedule (we’re on week 5 of vision right now):

Vision

  • Lesson 1: PETs and Custom Datasets (a warm introduction to the DataBlock API)
  • Lesson 2: Image Classification Models from Scratch, Stochastic Gradient Descent, Deployment, Exploring the Documentation and Source Code
  • Lesson 3: Multi-Label Classification, Dealing with Unknown Labels, and K-Fold Validation
  • Lesson 4: Image Segmentation, State-of-the-Art in Computer Vision, Custom PyTorch Models and an EfficientNet Implementation
  • Lesson 5: Style Transfer, nbdev , and Deployment
  • Lesson 6: Keypoint Regression and Object Detection
  • Lesson 7: Pose Detection and Image Generation
  • Lesson 8: Audio

Tabular:

  • Lesson 1: Pandas Workshop and Tabular Classification
  • Lesson 2: Feature Engineering and Tabular Regression
  • Lesson 3: Permutation Importance, Bayesian Optimization, Cross-Validation, and Labeled Test Sets
  • Lesson 4: NODE, TabNet, DeepGBM (unsure on this, as I haven’t seen NODE be worth it, definitely doing TabNet though)

NLP:

  • Lesson 1: Introduction to NLP and the LSTM
  • Lesson 2: Full Sentiment Classification, Tokenizers, and Ensembling
  • Lesson 3: Other State-of-the-Art NLP Models
  • Lesson 4: Multi-Lingual Data, DeViSE
4 Likes

Hello everyone,

My name is Bo. I am one of the creators of BentoML, an open-source framework for building cloud-native model serving services. BentoML supports the most popular ML training frameworks and deployment platforms, including the major cloud providers and Docker/Kubernetes.

I have been working on productionizing ML for the past few years. I am looking forward to learning what’s new in fastai v2 and making sure it is well supported in BentoML.

2 Likes

Thanks, Jeremy!

David is directly involved in the research at D. E. Shaw Research. I joined the company because of his long-term vision to use computing to make an impact on human health.

1 Like

Thanks a lot for this!
I was originally referring to Jeremy’s walkthroughs, but this helps too.

Ah, my apologies! I’m afraid I don’t have a reference per se for how to go about those :confused: You could most likely skim which notebooks he goes into in each one (by jumping through the video), and that could help some :slight_smile:

Hi everyone, I’m excited to have been offered the opportunity to attend these lectures with new content. I have a CS background, but I work as a systems admin/DBA in my “day job”. I am interested in neuroscience, brains, and how intelligence emerges out of “simple” components. I have done some ML/DL courses (Ng’s DL certificate, for example). I tried the fast.ai 2019 Part 1, but only worked through the exercises up to lecture 3; I just watched the remaining lectures.

In my day job I really don’t have much opportunity to apply AI/DL, as the job entails making sure enterprise systems keep humming along without any “excitement”, i.e., it gets boring after a few years of doing it :slight_smile:

I hope to find some ways to apply this in my current line of work. But even if I can’t, ever since I took my first course in neural networks a long, long time ago during the AI winter, I’ve been fascinated with how “some kind of intelligence” emerges out of simple parts connected to each other, each doing its own thing.

So, this is kind of a hobby for me. Maybe I will get some insights into how the brain works, maybe I will be able to contribute something (probably not), but I think the journey is worth it even if it only satisfies my curiosity about the subject.

I hope to see you all in the forums as we go through this course, a truly wonderful resource Jeremy has made available to the world.

All the best!

1 Like

Hello, I’m Andrei from Russia.
@yasyrev_andrei
For a long time I worked in business (distribution, construction, and other fields). A couple of years ago I stepped aside to be a full-time dad, and now I only join short-term projects, as my kids take all my time with their school and activities.
In my free time (actually, only instead of sleeping) I learn DL.
It started almost by accident about 5 years ago. I began experimenting with microcontrollers for home automation and suddenly met Python on a Raspberry Pi. I found Python very interesting and started learning it. I first coded at school: it was Fortran, and we used a blackboard and chalk for coding. Later we used punched cards, and then a real computer, some IBM mainframe. It was so interesting that I wanted to learn more. Then came Pascal (again, a big IBM machine) and even C on an AT 286. At university we didn’t learn to code; I studied mechanical engineering. There was one short experience with analog computing; it was curious but almost useless. After university I went into business and for a very long time didn’t write a line of code. For business I used Excel and intuition, and often that was more accurate than our well-paid analysts with PhDs in math.
So I started learning Python. I like it very much. I found that there are a lot of things and tasks I can now do myself, without “special people”, and so on. Then I noticed that a lot of the articles in my news feed were about ML and DL with Python and other strange but interesting things. In 2017 I took Andrew Ng’s course. It was in Matlab and had a lot of math; it scared me a lot, but I did it! I realized my brain wasn’t rusted yet! After the course I even rewrote a lot of the material in Python, but I still didn’t understand what to do with it. Later I found fastai. It looked very promising, but I couldn’t find time for it: no free time after work, and small kids. Only in late 2018 did I start Part 1 of the third edition of the course. The top-down method is really great! After it I started to understand how things work. Later, in Part 2, I continued diving into the details. There was a lot of low-level stuff, and I understood that I still have a lot to learn! I got stuck there a little, because I like to understand the details of what I do. I do this as a hobby (for now!), so I can’t spend too much time on it. But now I feel very comfortable with Python, PyTorch, and fastai (still v1).
To understand things better, I refactored and wrote ResNet (and xresnet) from scratch and used it for Imagenette/ImageWoof training. The guys from [How we beat the 5 epoch ImageWoof …] did great work; I tried to work in parallel with them, but they are SO fast and active! Anyway, I found some interesting things, and I’m testing them now; I hope I can share them soon. Right now I can show a small trick for the ResNet model that helps beat the current score on the Imagenette/ImageWoof leaderboards. And thanks to nbdev, I can easily share the model constructor I use in my experiments (https://github.com/ayasyrev/model_constructor) for easily changing activations, pooling layers, and other parts. It’s not as powerful as the xresnet in fastai v2, but I hope it can be helpful for study and understanding.
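The swappable activations and pooling layers mentioned above can be sketched in plain Python. This is a toy illustration of the design pattern only, not the actual `model_constructor` or fastai API; all names here (`ConvBlock`, `act`, `pool`) are made up for the example. The trick is that the constructor takes the layers as callables instead of hard-coding them:

```python
# Toy activations and pooling functions (stand-ins for nn.Module layers).
def relu(x):
    return max(0.0, x)

def leaky_relu(x, slope=0.1):
    return x if x > 0 else slope * x

def max_pool(xs):
    return max(xs)

def avg_pool(xs):
    return sum(xs) / len(xs)

class ConvBlock:
    """Hypothetical block: its activation and pooling are supplied as
    callables at construction time, so experiments can swap them freely."""
    def __init__(self, act=relu, pool=max_pool):
        self.act = act    # any activation callable
        self.pool = pool  # any pooling callable

    def __call__(self, xs):
        # Apply the chosen activation element-wise, then the chosen pooling.
        return self.pool([self.act(x) for x in xs])

# Swapping components is just a constructor argument:
default_block = ConvBlock()                            # ReLU + max pool
custom_block = ConvBlock(act=leaky_relu, pool=avg_pool)
```

This is the same basic pattern that lets a model constructor swap, say, ReLU for another activation with a single argument, rather than editing the architecture code.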
A big thank-you to the whole fastai team (including the forum gurus) for your work and for the invitation!

9 Likes

@mike.moloch best to use @muellerzr’s thread dedicated to his walkthru for questions about it.

2 Likes

Those are amazing goals, and I’m sure you may well accomplish them, but I’d also recommend not stressing yourself if you cannot.

Personally, I’m still yet to achieve a few of the goals that I first posted on these forums two years ago. Okay, maybe I’m stupid and you might get it right on the first attempt, but I’d like to post a gentle reminder to take your time and not get burned out.

A few of us (at least myself), even though the course lasts 7 weeks, take 6 months; I sometimes take even longer to digest the course.

Remember: “completing the course” isn’t the end goal; the goal is to get better at DL, however you achieve that.

I can promise you, Jeremy won’t punish us if we get super interested in a Kaggle comp, take time off from the lectures, go finish with a gold or silver (more than 10 folks have achieved this), and then come back to the lectures later.

The field isn’t going away soon (or so I hope), and don’t feel you absolutely need to complete Parts 1 and 2 before being ready; I’m yet to complete Part 2 myself (or to have the courage to say that out loud).

I might be a bad example, but as I’ve failed (or found new ways to fail) with the course, here are the things that I’ve always found useful:

  • Building ideas/projects/kernels/pipelines.
  • Kaggle competitions: not just for the rankings but for understanding the problem and the field (e.g., learning Transformers via an NLP comp).
  • Working on blog posts.

Things that have never (yet) returned anything useful to me:

  • Reading theory; I usually forget most of it.

Okay, granted that at some point you’ll have to digest the veggies to stay healthy, but I’m not past that point yet.

So please take your time, and don’t stress out if there’s a lot to do, because there will be. Remember, it’s most important to enjoy the course, take it easy, and build great things :slight_smile:

Every one of us will welcome your questions later, or anytime, so follow your own pace :tea:

10 Likes

Hi,
my name is Jonas and I work in a research department for Artificial Intelligence at a large German manufacturing company. I have already participated in (all!) previous versions of this course and have worked on some Kaggle competitions as recommended in the courses, notably the Corporación Favorita Grocery Sales Forecasting :slight_smile: . In 2017 I visited the first ever Data Institute Conference in San Francisco and appreciated it a lot.
My work is primarily related to symbolic AI, but our research department also deals with probabilistic AI and machine learning. Deep learning is a fascinating topic, and I’m looking forward to learning what’s new in this course, and also to learning the new fastai library.
Thanks for inviting me again and thanks for all I learned here so far.
Jonas
Update: this is my Twitter account

6 Likes

Hi, I am Yash Mittal, currently working as a data scientist in Bangalore, India. I am a fastai student and fan. I have been associated with fastai since 2018 and have learned a lot. Last year I attended all of Jeremy’s live lectures. In 2018 and 2019 I worked mostly in computer vision, and now I am learning more NLP; I recently finished Rachel’s NLP course. I am happy and excited about this new version of the course.

Thanks, Jeremy, for the invitation.

Linkedin: https://www.linkedin.com/in/ymittal23/
Twitter: https://twitter.com/mittaltechie

3 Likes

Aren’t you just teaching the whole v4 course? :sweat_smile:

No, and I certainly wouldn’t do it if I were :wink: It’s application-focused, without most of the theory (whereas Jeremy does go into the math a bit, etc.), and it’s meant to show some of the other techniques that aren’t focused on as much :slight_smile: