Introduce yourself here

Hi everyone! I’m Edwin. I’ve been taking the fast.ai MOOCs over the past few years and definitely enjoy them. Hands down the best DL material on the net. Looking forward to the new course.

I’m a developer by day. My Twitter: @edwiiin_v

1 Like

Hello to all fastai enthusiasts

I am very grateful to be selected again; this is indeed a privilege. My home country is England. I don’t see many compatriots in this space, but I am sure some are out there.

I came to this space following the completion of many Coursera courses in AI. The one that pushed me here was the Machine Learning Specialization from the University of Washington: after the 4th course, Carlos Guestrin left to join Apple, and the 5th and 6th courses were never finished.

I started with the first versions of the MOOCs and look forward to each year’s new version to see what has changed.

Unfortunately, because of other issues, I have not been as active as I would like; hopefully that will change this time. I really enjoyed the promise of last year’s sessions and the potential of Swift and TensorFlow. Nothing stays settled for long, and something else will eventually best it, but the thinking behind it is brilliant.

At this time I have no links to offer. I don’t work for anyone, but I like to play with these tools; I am perhaps the oldest enthusiast here. My work history covers many occupations: I served my time as a mechanical engineer and mainly worked with my hands rather than my head, until I took an Open University degree (UK) and then worked in software, specifically in Flexible Manufacturing Systems. One of those systems was delivered to Merck-Medco’s automated pharmacy in Nevada.

Anyway, just on this page alone I note the diverse backgrounds presented by you world changers, and I know that we will succeed.

Thanks very much for the invite.

7 Likes

Oh, I didn’t realize that. How were the first four courses?

The first four courses gave me an introduction to Python and notebooks; they also depended on GraphLab Create, if my memory serves me right. They covered Regression, Classification, and Clustering & Retrieval. The disappointing part was that we never got to the Capstone Project, Course 6, which was meant to tie all the previous learning together. Although they still offer those four courses, the series is probably not very popular, as there is nothing to tie it all together.

1 Like

Hi Everyone,
I am Tanya Roosta, and I work at Amazon in the Alexa org, based in Sunnyvale, CA. I work on NLP feature design for various Alexa capabilities.
What brought me to fast.ai was the quest for a really good explanation of how to “actually” implement high-level ML theories. I ended up going through all the archived classes, and then took the Part 2 course last year.
I am looking forward to seeing new content and hearing Jeremy discuss the latest and greatest.

8 Likes

Hey, I’m Jeremy Blythe (twitter: @jerbly). I run an R&D team making control and management software for the big-name video content owners and broadcasters. In the last couple of years I’ve been teaching myself the wonderful world of ML, and was happy to learn fastai through the previous two live streams. This led to a bunch of personal projects which I’ve been meaning to put up on GitHub and blog about - I’ll convert these to v2 first now. :wink:

I’ve also completed the fastai ML course and really enjoyed Rachel’s Computational Linear Algebra course. Here’s a fun video thing I did with that: Part 3 – Background Removal with Robust PCA

This year I have the exciting opportunity to build out a small ML focused team in my group to tackle some interesting video ML applications. What better platform to do that with than fastai!

Really looking forward to this!

2 Likes

Welcome back Tanya! Is that a recent job for you, or have you been there a while? IIRC you were doing something else when you joined the course in person?

Firstly, a big thanks to fast.ai and team for their content-rich, fresh, and innovative classes. It is my go-to place for learning cutting-edge work in ML. I can’t imagine where I would be now without such resources, given that two years back I was a full-stack developer! It has been a memorable journey.

I’m currently working on personalizing performance as an Applied MLE at LinkedIn. We recently productionized a model that predicts the network quality conditions of LinkedIn users to customize their session experience in real time. It gave me a chance to practice a lot of ML concepts learned from fast.ai, deeplearning.ai, and ML blogs. I hope to contribute back to fast.ai some day.

I wish to see some classes on examples of applied ML solutions or upcoming techniques like Graph Neural Networks.

I blog about things I’ve learned or come up with at work at https://medium.com/@Nithanaroy. I’m also active on https://www.linkedin.com/in/nitinpasumarthy

4 Likes

Alexa … Great fun :slight_smile:

2 Likes

Hi everyone,

I am Karel Zuiderveld (not on Twitter), a self-employed C++ software developer/medical imaging scientist/technologist. After a long career doing medical imaging (mainly visualization of 3D datasets and implementing high-performance algorithms on CPUs and GPUs), I started my journey in deep learning with Jeremy’s first fast.ai course, and am very grateful to be invited back again - thanks Jeremy!

I watched all of Jeremy’s courses (I am a big fan of his way of teaching), but alas never had the time to build real critical mass toward becoming a DL expert. However, I certainly achieved my initial goal: knowing enough about DL to have a good discussion with the DL scientists who cross my path.

This new course couldn’t come at a better time for me. I am currently involved in a project that aims to deploy DL algorithms (I am using ONNX Runtime for that); the scientist who developed the model (using TensorFlow 1.13/Keras) is leaving for another role in the company. It would be awesome to use fastai2 to reproduce/retrain his model and then improve upon it.

Thanks for having me again!

10 Likes

Thanks Jeremy. Yes I was at a startup, but moved last June to Amazon. We are also hiring for anyone looking :slight_smile: PM me for job descriptions.

2 Likes

Hello All,

I am Rajat, working for an insurance company in Boston. I have been part of fast.ai since 2017.
In my current role, I mainly do natural language processing and machine learning for various insurance-related use cases.

I am super excited to attend the 2020 version of the course.

Cheers!

2 Likes

Hello All,

I am a machine/deep learning specialist. I started reading and learning about AI in 2010, and since then I have co-authored and published 4 papers with about 50 citations in total. My interests are in computer vision and bioinformatics. I live in London and am currently looking for a position.

Faris

3 Likes

Please share links! :slight_smile:

1 Like

Hello Everyone,

I took fastai v3 and loved it and am really looking forward to this class. I currently run engineering at a small mobile startup.

It would be great to connect with other students, please feel free to connect with me on https://www.linkedin.com/in/foobar8675/

2 Likes

Hi everyone, I’m Gaurav.

I’m an ML Engineer at Haptik working on fundamental Conversational-AI problems. I’ll always be thankful to this community for all these “firsts” in my life! First AI job, first 450+ star repo (iNLTK), first 100+ star repo (Code with AI), first talk (on iNLTK).

I’m looking forward to learning more, writing blogs, and making some coding contributions to fast.ai (that’ll be a dream come true) :slight_smile:

My homepage: https://goru001.github.io/
LinkedIn: https://in.linkedin.com/in/gaurav-arora-23593220
Twitter: http://twitter.com/massthaiyaar

9 Likes

I really enjoyed the fastai deep learning and ML courses, which I did in my typically haphazard way: jumping around and then going back through later. I think my first lecture was lecture 14 one year… I was going to do the computational linear algebra one by Rachel; I’m still going to do it.

I followed Jeremy’s suggestion and got on Twitter, lurking mainly, and finding it one of the best resources for up-to-date, useful information. There’s just so much of it that I don’t keep up; my to-do list (or wish list) just grows.

I’m a non-diverse white Australian guy with an engineering and computer science background. Sorry.

I’m curious how much of the course I’ll be able to run on a 1050 Ti (4 GB). Should I even try? (I stopped getting 16 GB Colab runtimes recently :slightly_frowning_face:)
As I wrote in the other thread, it’s been a while for me. I was going to take a look at v2 a while back, but OS/driver issues stopped me then; meanwhile Jeremy and Sylvain (and others?) have been busy… and this is just v1?:

git fetch
remote: Enumerating objects: 1625, done.
remote: Counting objects: 100% (1625/1625), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 23671 (delta 1605), reused 1621 (delta 1605), pack-reused 22046
Receiving objects: 100% (23671/23671), 242.67 MiB | 5.57 MiB/s, done.
Resolving deltas: 100% (18141/18141), completed with 216 local objects.
From https://github.com/fastai/fastai
e50fa778..ef79ccb6 master -> origin/master

  * [new branch] ImageCleaner -> origin/ImageCleaner
  * [new branch] StopAfterNBatches-learner -> origin/StopAfterNBatches-learner
  * [new branch] ci -> origin/ci
  * [new branch] release-1.0.10 -> origin/release-1.0.10
  * [new branch] release-1.0.11 -> origin/release-1.0.11
  * [new branch] release-1.0.12 -> origin/release-1.0.12
  * [new branch] release-1.0.13 -> origin/release-1.0.13
  * [new branch] release-1.0.14 -> origin/release-1.0.14
  * [new branch] release-1.0.15 -> origin/release-1.0.15
  * [new branch] release-1.0.16 -> origin/release-1.0.16
  * [new branch] release-1.0.17 -> origin/release-1.0.17
  * [new branch] release-1.0.18 -> origin/release-1.0.18
  * [new branch] release-1.0.19 -> origin/release-1.0.19
  * [new branch] release-1.0.20 -> origin/release-1.0.20
  * [new branch] release-1.0.21 -> origin/release-1.0.21
  * [new branch] release-1.0.22 -> origin/release-1.0.22
  * [new branch] release-1.0.24 -> origin/release-1.0.24
  * [new branch] release-1.0.25 -> origin/release-1.0.25
  * [new branch] release-1.0.26 -> origin/release-1.0.26
  * [new branch] release-1.0.27 -> origin/release-1.0.27
  * [new branch] release-1.0.28 -> origin/release-1.0.28
  * [new branch] release-1.0.29 -> origin/release-1.0.29
  * [new branch] release-1.0.30 -> origin/release-1.0.30
  * [new branch] release-1.0.31 -> origin/release-1.0.31
  * [new branch] release-1.0.32 -> origin/release-1.0.32
  * [new branch] release-1.0.33 -> origin/release-1.0.33
  * [new branch] release-1.0.34 -> origin/release-1.0.34
  * [new branch] release-1.0.35 -> origin/release-1.0.35
  * [new branch] release-1.0.36 -> origin/release-1.0.36
  * [new branch] release-1.0.37 -> origin/release-1.0.37
  * [new branch] release-1.0.38 -> origin/release-1.0.38
  * [new branch] release-1.0.39 -> origin/release-1.0.39
  * [new branch] release-1.0.40 -> origin/release-1.0.40
  * [new branch] release-1.0.41 -> origin/release-1.0.41
  * [new branch] release-1.0.42 -> origin/release-1.0.42
  * [new branch] release-1.0.43 -> origin/release-1.0.43
  * [new branch] release-1.0.44 -> origin/release-1.0.44
  * [new branch] release-1.0.46 -> origin/release-1.0.46
  * [new branch] release-1.0.47 -> origin/release-1.0.47
  * [new branch] release-1.0.48 -> origin/release-1.0.48
  * [new branch] release-1.0.49 -> origin/release-1.0.49
  * [new branch] release-1.0.50 -> origin/release-1.0.50
  * [new branch] release-1.0.51 -> origin/release-1.0.51
  * [new branch] release-1.0.52 -> origin/release-1.0.52
  * [new branch] release-1.0.53 -> origin/release-1.0.53
  * [new branch] release-1.0.54 -> origin/release-1.0.54
  * [new branch] release-1.0.55 -> origin/release-1.0.55
  * [new branch] release-1.0.56 -> origin/release-1.0.56
  * [new branch] release-1.0.57 -> origin/release-1.0.57
  * [new branch] release-1.0.58 -> origin/release-1.0.58
  * [new branch] release-1.0.59 -> origin/release-1.0.59
  * [new branch] release-1.0.6 -> origin/release-1.0.6
  * [new branch] release-1.0.60 -> origin/release-1.0.60
  * [new branch] release-1.0.7 -> origin/release-1.0.7
  * [new branch] release-1.0.8 -> origin/release-1.0.8
  * [new branch] release-1.0.9 -> origin/release-1.0.9
  * [new branch] revert-1570-patch-1 -> origin/revert-1570-patch-1
  * [new branch] revert-1595-from_name_re -> origin/revert-1595-from_name_re
  * [new branch] revert-1654-master -> origin/revert-1654-master
  * [new branch] revert-1725-return_figures -> origin/revert-1725-return_figures
  * [new branch] revert-2034-patch-1 -> origin/revert-2034-patch-1
  * [new branch] revert-2070-index_row -> origin/revert-2070-index_row
  * [new branch] ulmfit_v1 -> origin/ulmfit_v1

Hi Everyone,

I am an oldish NLP R&D person, specialized in Natural Language Generation (NLG) and Knowledge Representation. I used to work full time in academia in those areas, especially NLG, back when it was still mostly symbolic. I did play a bit with machine learning, trying to solve some problems back when the platform of choice was Weka.

I really started getting excited about machine learning around 2015-16 when I started experimenting with word embeddings using Gensim to enrich an ontology I developed for an e-commerce platform.

Then I took Andrew Ng’s Coursera ML course in 2017, followed by his deep learning specialization in 2018; both were great. Last year (2019), after hearing about it from some friends working in industry, I took, and absolutely loved, the fastai course with its more practical approach. I also used the NMT material in the NLP course to explore end-to-end NLG, and I wrote a Medium post about it.

Thanks to the fastai library I was able to quickly get into the meat of the problem without wasting time implementing the code. That’s on the plus side. On the minus side, I got frustrated by not having enough PyTorch skills to extend the code my own way.

I still have a terribly long list of things I’d like to experiment with and test. I have tried, for example, graph embeddings on a custom DBpedia knowledge graph: it was nice to see how clustering on those embeddings replicated the ontology hierarchy. I would also like to work through GANs, variational autoencoders, and the BiDAF model.

I recently started writing entries on a blog built with GitHub and Hugo to keep track of the stuff I play with. I’d also like to incorporate notebooks into my blog, as Jeremy suggested.

5 Likes

Maybe you could try colab pro?
https://colab.research.google.com/signup#

2 Likes

Hello All, I am Pradeep. I currently work as a data scientist for a consulting company. I strongly believe the 2019 course is one of the primary reasons I got the job, and I sincerely thank Jeremy, Rachel, and this wonderful community.

I look forward to learning the newest features of fastai v2, implementing them, and teaching others around me.

Twitter: @tpradeep

1 Like