Introduce yourself here

Hi Everyone,
I am Tanya Roosta, and I work at Amazon in the Alexa org, based in Sunnyvale, CA. I work on NLP feature design for various Alexa capabilities.
What brought me to fast.ai was the quest for a really good explanation of how to “actually” implement high-level ML theories. I ended up going through all the archived classes, and then took the Part 2 course last year.
I am looking forward to seeing new content and hearing Jeremy discuss the latest and greatest.

8 Likes

Hey, I’m Jeremy Blythe (twitter: @jerbly). I run an R&D team making control and management software for the big-name video content owners and broadcasters. In the last couple of years I’ve been teaching myself the wonderful world of ML, and was happy to learn fastai through the previous two live streams. This led to a bunch of personal projects which I’ve been meaning to put up on GitHub and blog about - I’ll convert these to v2 first now. :wink:

I’ve also completed the fastai ML course and really enjoyed Rachel’s Computational Linear Algebra course. Here’s a fun video thing I did with that: Part 3 – Background Removal with Robust PCA

This year I have the exciting opportunity to build out a small ML focused team in my group to tackle some interesting video ML applications. What better platform to do that with than fastai!

Really looking forward to this!

2 Likes

Welcome back Tanya! Is that a recent job for you, or have you been there a while? IIRC you were doing something else when you joined the course in person?

Firstly, a big thanks to fast.ai and the team for their content-rich, fresh, and innovative classes. It is my go-to place to learn about cutting-edge work in ML. Two years back I was a full-stack developer; without such resources I can’t imagine where I’d be now. It has been a memorable journey.

I’m currently working on performance personalization as an Applied ML Engineer at LinkedIn. We recently productionized a model that predicts the network quality conditions of LinkedIn users to customize their session experience in real time. It gave me a chance to practice a lot of ML concepts learned from fast.ai, deeplearning.ai, and ML blogs. I hope to contribute back to fast.ai some day.

I would love to see some classes with examples of applied ML solutions, or upcoming techniques like Graph Neural Networks.

I blog about things I’ve learned or come up with at work at https://medium.com/@Nithanaroy. I’m also active on https://www.linkedin.com/in/nitinpasumarthy

4 Likes

Alexa … Great fun :slight_smile:

2 Likes

Hi everyone,

I am Karel Zuiderveld (not on Twitter), a self-employed C++ software developer/medical imaging scientist/technologist. After a long career in medical imaging (mainly visualization of 3D datasets and implementing high-performance algorithms on CPUs and GPUs), I started my journey in deep learning with Jeremy’s first fast.ai course and am very grateful to be invited back again - thanks Jeremy!

I watched all of Jeremy’s courses (I am a big fan of his way of teaching), but alas never had the time to build the critical mass needed to become a DL expert. However, I certainly achieved my initial goal: knowing enough about DL to have a good discussion with the DL scientists who cross my path.

This new course couldn’t come at a better time for me. I am currently involved in a project that aims to deploy DL algorithms (I am using ONNX Runtime for that); the scientist who developed the model (using TensorFlow 1.13/Keras) is leaving for another role in the company. It would be awesome to use fastai2 to reproduce/retrain his model and then improve upon it.

Thanks for having me again!

10 Likes

Thanks Jeremy. Yes I was at a startup, but moved last June to Amazon. We are also hiring for anyone looking :slight_smile: PM me for job descriptions.

2 Likes

Hello All,

I am Rajat, working for an insurance company in Boston. I have been part of fast.ai since 2017.
In my current role, I mainly do natural language processing and machine learning for various insurance-related use cases.

I am super excited to attend the 2020 version of the course.

Cheers!

2 Likes

Hello All,

I am a machine/deep learning specialist. I started reading and learning about AI in 2010, and since then I have co-authored and published 4 papers with about 50 citations in total. My interests are in computer vision and bioinformatics. I live in London and am currently looking for a position.

Faris

3 Likes

Please share links! :slight_smile:

1 Like

Hello Everyone,

I took fastai v3 and loved it and am really looking forward to this class. I currently run engineering at a small mobile startup.

It would be great to connect with other students, please feel free to connect with me on https://www.linkedin.com/in/foobar8675/

2 Likes

Hi everyone, I’m Gaurav.

I’m an ML Engineer at Haptik working on fundamental Conversational-AI problems. I’ll always be thankful to this community for all these “firsts” in my life! First AI job, first 450+ star repo (iNLTK), first 100+ star repo (Code with AI), first talk (on iNLTK).

I’m looking forward to learning more, writing blogs, and making some coding contributions to fast.ai (that’ll be a dream come true) :slight_smile:

My homepage: https://goru001.github.io/
LinkedIn: https://in.linkedin.com/in/gaurav-arora-23593220
Twitter: http://twitter.com/massthaiyaar

9 Likes

I really enjoyed the fastai deep learning and ML courses, which I did in my typically haphazard way: jumping around, then going back through later. I think my first lecture was lecture 14 one year… I was going to do the computational linear algebra one by Rachel; I’m still going to do it.

I followed Jeremy’s suggestion and got on Twitter, lurking mainly, but finding it one of the best resources for up-to-date, useful information. There’s just so much of it that I can’t keep up; my to-do list/wish list just grows.

I’m a non-diverse white Australian guy with an engineering and computer science background. Sorry.

I’m curious how much of the course I’ll be able to run on a 1050 Ti (4 GB). Should I even try? (I stopped getting 16 GB Colab runtimes recently :slightly_frowning_face:)
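My rough sanity check for the 4 GB question (a back-of-the-envelope sketch with hypothetical numbers, not fastai code): the memory for the input batch alone scales linearly with batch size and bytes per element, so halving the batch size, or training in fp16 instead of fp32, should each roughly halve that part of the footprint:

```python
def batch_mem_mib(batch_size, channels=3, height=224, width=224, bytes_per_elem=4):
    """Memory for one input batch (batch * C * H * W * bytes), in MiB.

    Activations, gradients, and optimizer state multiply this many
    times over, so treat it strictly as a lower bound.
    """
    return batch_size * channels * height * width * bytes_per_elem / 2**20

fp32 = batch_mem_mib(64)                    # bs=64, fp32 inputs: 36.75 MiB
fp16 = batch_mem_mib(64, bytes_per_elem=2)  # fp16 halves it: 18.375 MiB
```

So on a 4 GB card, smaller batches plus mixed precision seem like the obvious levers to try first.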
As I wrote in the other thread, it’s been a while for me. I was going to take a look at v2 a while back; OS/driver issues stopped me then, but meanwhile Jeremy and Sylvain (and others?) have been busy… this is just v1??:

git fetch
remote: Enumerating objects: 1625, done.
remote: Counting objects: 100% (1625/1625), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 23671 (delta 1605), reused 1621 (delta 1605), pack-reused 22046
Receiving objects: 100% (23671/23671), 242.67 MiB | 5.57 MiB/s, done.
Resolving deltas: 100% (18141/18141), completed with 216 local objects.
From https://github.com/fastai/fastai
e50fa778…ef79ccb6 master -> origin/master

  • [new branch] ImageCleaner -> origin/ImageCleaner
  • [new branch] StopAfterNBatches-learner -> origin/StopAfterNBatches-learner
  • [new branch] ci -> origin/ci
  • [new branch] release-1.0.10 -> origin/release-1.0.10
  • [new branch] release-1.0.11 -> origin/release-1.0.11
  • [new branch] release-1.0.12 -> origin/release-1.0.12
  • [new branch] release-1.0.13 -> origin/release-1.0.13
  • [new branch] release-1.0.14 -> origin/release-1.0.14
  • [new branch] release-1.0.15 -> origin/release-1.0.15
  • [new branch] release-1.0.16 -> origin/release-1.0.16
  • [new branch] release-1.0.17 -> origin/release-1.0.17
  • [new branch] release-1.0.18 -> origin/release-1.0.18
  • [new branch] release-1.0.19 -> origin/release-1.0.19
  • [new branch] release-1.0.20 -> origin/release-1.0.20
  • [new branch] release-1.0.21 -> origin/release-1.0.21
  • [new branch] release-1.0.22 -> origin/release-1.0.22
  • [new branch] release-1.0.24 -> origin/release-1.0.24
  • [new branch] release-1.0.25 -> origin/release-1.0.25
  • [new branch] release-1.0.26 -> origin/release-1.0.26
  • [new branch] release-1.0.27 -> origin/release-1.0.27
  • [new branch] release-1.0.28 -> origin/release-1.0.28
  • [new branch] release-1.0.29 -> origin/release-1.0.29
  • [new branch] release-1.0.30 -> origin/release-1.0.30
  • [new branch] release-1.0.31 -> origin/release-1.0.31
  • [new branch] release-1.0.32 -> origin/release-1.0.32
  • [new branch] release-1.0.33 -> origin/release-1.0.33
  • [new branch] release-1.0.34 -> origin/release-1.0.34
  • [new branch] release-1.0.35 -> origin/release-1.0.35
  • [new branch] release-1.0.36 -> origin/release-1.0.36
  • [new branch] release-1.0.37 -> origin/release-1.0.37
  • [new branch] release-1.0.38 -> origin/release-1.0.38
  • [new branch] release-1.0.39 -> origin/release-1.0.39
  • [new branch] release-1.0.40 -> origin/release-1.0.40
  • [new branch] release-1.0.41 -> origin/release-1.0.41
  • [new branch] release-1.0.42 -> origin/release-1.0.42
  • [new branch] release-1.0.43 -> origin/release-1.0.43
  • [new branch] release-1.0.44 -> origin/release-1.0.44
  • [new branch] release-1.0.46 -> origin/release-1.0.46
  • [new branch] release-1.0.47 -> origin/release-1.0.47
  • [new branch] release-1.0.48 -> origin/release-1.0.48
  • [new branch] release-1.0.49 -> origin/release-1.0.49
  • [new branch] release-1.0.50 -> origin/release-1.0.50
  • [new branch] release-1.0.51 -> origin/release-1.0.51
  • [new branch] release-1.0.52 -> origin/release-1.0.52
  • [new branch] release-1.0.53 -> origin/release-1.0.53
  • [new branch] release-1.0.54 -> origin/release-1.0.54
  • [new branch] release-1.0.55 -> origin/release-1.0.55
  • [new branch] release-1.0.56 -> origin/release-1.0.56
  • [new branch] release-1.0.57 -> origin/release-1.0.57
  • [new branch] release-1.0.58 -> origin/release-1.0.58
  • [new branch] release-1.0.59 -> origin/release-1.0.59
  • [new branch] release-1.0.6 -> origin/release-1.0.6
  • [new branch] release-1.0.60 -> origin/release-1.0.60
  • [new branch] release-1.0.7 -> origin/release-1.0.7
  • [new branch] release-1.0.8 -> origin/release-1.0.8
  • [new branch] release-1.0.9 -> origin/release-1.0.9
  • [new branch] revert-1570-patch-1 -> origin/revert-1570-patch-1
  • [new branch] revert-1595-from_name_re -> origin/revert-1595-from_name_re
  • [new branch] revert-1654-master -> origin/revert-1654-master
  • [new branch] revert-1725-return_figures -> origin/revert-1725-return_figures
  • [new branch] revert-2034-patch-1 -> origin/revert-2034-patch-1
  • [new branch] revert-2070-index_row -> origin/revert-2070-index_row
  • [new branch] ulmfit_v1 -> origin/ulmfit_v1

Hi Everyone,

I am an oldish NLP R&D person, specialized in Natural Language Generation (NLG) and Knowledge Representation. I used to work full time in academia in those areas, especially NLG, back when it was still mostly symbolic. I did play a bit with machine learning, trying to solve some problems back when the platform of choice was Weka.

I really started getting excited about machine learning around 2015-16 when I started experimenting with word embeddings using Gensim to enrich an ontology I developed for an e-commerce platform.

Then I took Andrew Ng’s Coursera ML course in 2017, followed by his deep learning specialization in 2018, both of which were great. Last year, in 2019, after hearing about it from some friends working in industry, I took, and absolutely loved, the fastai course with its more practical approach. I also used the material on NMT in the NLP course to explore end-to-end NLG, and wrote a Medium post about it.

Thanks to the fastai library, I was able to quickly get into the meat of the problem without wasting time implementing the code. That’s on the plus side. On the minus side, I got frustrated by not having enough PyTorch skills to extend the code my own way.

I still have a terribly long list of things I’d like to experiment with and test. For example, I have tried graph embeddings on a custom DBpedia knowledge graph: it was nice to see how clustering with those embeddings replicated the ontology hierarchy. I would also like to work through GANs, variational autoencoders, and the BiDAF model.

I recently started writing entries on a blog built with GitHub and Hugo to keep track of the stuff I play with. I’d also like to incorporate notebooks into my blog, as Jeremy suggested.

5 Likes

Maybe you could try Colab Pro?
https://colab.research.google.com/signup#

2 Likes

Hello All, I am Pradeep. I currently work as a Data Scientist for a consulting company. I strongly believe that the 2019 course is one of the primary reasons I got the job, and I sincerely thank Jeremy, Rachel, and this wonderful community.

I look forward to learning the newest features of fastai v2, implementing them, and teaching others around me.

Twitter: @tpradeep

1 Like

Hi everyone.

I am a fastai old-timer from New York City; this will be my fourth year of following Jeremy’s lectures(!)

In my day job, I use computer simulations of proteins and small molecules to help discover new drugs. Inspired by Jeremy’s courses, colleagues and I developed a technology that just appeared on the preprint server today. Our paper presents a deep learning view of chemical space designed to facilitate drug discovery: a new state of the art for generating improved small molecules (potential new drugs) from pairs of molecules in which one element of the pair is better than the other in some desirable property. Here is a link to the preprint:

13 Likes

Hi everyone,

I have been following fastai Part 1 and Part 2 as an international fellow since v2, and I’m looking forward to this next version. I am a software engineer and have been looking at ways to apply algorithms in user applications. I have found a great deal of inspiration in the different techniques introduced in the class for prototyping ideas, and am looking forward to learning more.

1 Like

Thank you so much for the notes. This looks like a good point to jump off and get started.

Sounds pretty neat. Thanks for sharing.