Introduce yourself here

Also one of our top 10 most “liked” users! :slight_smile:


7 Likes

Hi,

I’m Edward Ross in Melbourne, Australia. Last year I used fastai to build WhatCar.xyz, a classifier for Australian car makes and models.

Now I’m learning Natural Language Processing to try to extract information from text and have started blogging about it. I’m really excited about trying some of the ideas from Stephen Merity’s SHA-RNN in fastai v2.

Even though I’ve got a background in mathematics and am comfortable with statistical learning, I found Deep Learning too hard to get into until I took my first fast.ai course two years ago. I found the top-down approach really useful for learning what was going on and what was important, rather than spending a long time wading through disconnected ideas. I’ve run some informal study groups at work using the last fastai course, and found that explaining the ideas to others is a great way to understand them.

I’m still not confident experimenting with Deep Learning (what should I try changing? how do I see the effects of changes?), but I will keep practicing.

8 Likes

Hi Dmytro! Could you expand a bit on what 3D reconstruction is, and where the field is at right now? I’m very curious about it!

Hi All, I am Vineet Singh. I work in the field of finance, with a strong focus on tabular data in combination with NLP. Currently I work for Citigroup, where we work on insurance, re-insurance and lending (credit) to institutional clients and their clients. fast.ai is our framework of choice as it lends itself to rapid prototyping of models. Highly obliged for the invitation @jeremy. Hoping to share my experience and learn from this amazing group of practitioners.

7 Likes

Hello everyone!

I was a software engineer before taking a break about two years ago to learn as much about deep learning as possible. I took the in-person fastai classes, parts 1 and 2, last year, though I had done the previous ones as well. I’m currently using an older version of fastai2 to do fp16 GAN training (the UGATIT network), as I fell behind on the updates. Hoping to start applying for jobs in DL engineering soon.

As for what I would like to get out of the course: learning what is new over the past year that I may have missed, along with Jeremy’s various tricks, and shoring up my own understanding by helping others. I have definitely found over the past year of running a meetup that answering beginner questions about deep learning makes your own knowledge of the fundamentals a lot more solid, and you get better at explaining it every time.

I organize a study group every Tuesday for people looking to learn about deep learning, and we have been going for over a year now! The meeting is open to both pyLadies and pyGents. The group was originally formed by members of the last fastai in-person cohort.

Twitter handle is @marii18052483, though I really only use it to follow Jeremy. I am the absolute worst about using social media.

7 Likes

Hi guys, I’ve compiled a list of best practices for using fastai to train neural networks, as well as some general tips shared by Jeremy in the fastai courses or on his Twitter; you can check it out here. I plan on doing the same thing for the 2020 version of the course, and hopefully Jeremy will share some more amazing tips for us :slight_smile:

5 Likes

Hello, I am Frederick Kautz. I work on a few fronts. First, I help companies and people build systems that scale massively. I advise some of the standardization groups defining the next-generation infrastructure for 5G telecommunications and beyond.

I also work with health companies. I have implemented a variety of AI-related projects spanning both infrastructure and AI research, with a strong focus on federated learning and privacy-preserving learning.

My interests at this point are a bit lower-level. I want to see MLIR and SwiftAI succeed. I’ll be approaching this course with that context in mind.

Cheers

p.s. It’s nice meeting you all. I’m generally very impressed with the things people create here.

5 Likes

Hi everyone!

My name is William, and I’m a software engineer working on machine learning applications at Compass, a tech-focused real estate brokerage in NYC. For an example of the kinds of things I work on day-to-day, here’s a project I helped build last year: Launching Similar Homes and Real-Time Personalized Recommendations.

I’ve been following fast.ai since I discovered the first (Keras) version of the videos, and the course, in its many iterations, has played a tremendous role in my career direction. It’s what really got me hooked on machine learning, gave me a ton of great knowledge, and also empowered me to keep learning more.

I’m grateful to have been able to apply what I’ve learned in fast.ai to Kaggle competitions, where I’ve won two bronze medals so far. I would love to team up with folks from the course and see what we can do with fastai2! I also speak about machine learning topics, including a few fastai-related talks: Human Protein Image Classification using PyTorch and fastai (from the NYC PyTorch Meetup) and You Can Do Deep Learning! (PyOhio 2018).

I’ve had some opportunities to give back to the fastai library and community, including implementing SWA in the pre-v1 version, and working on getting past lessons to run on Kaggle Kernels.

I haven’t been on the forums for a while, so this invitation surprised me a bit, but I’m very excited to get up to speed on all the new things!

My Twitter is @hortonhearsafoo (I’m fairly active)

9 Likes

I definitely will!

Hi All, I’m Brian Smith, an escalation engineer at Microsoft working on Microsoft Project, Project for the web, and Planner. Beyond that, it is my interest in ML/AI, the great teaching methods of Jeremy, Rachel and Sylvain, and more recently the library work on v2 that keeps me coming back for more at fast.ai.

I have a home Windows machine I’ve used for the previous courses, and I’ve also used the Data Science VMs in Azure. Always happy to help with any Windows/Azure-specific questions that come up - and I’m looking forward to us (Microsoft) having a VM template ready for the course. I’m thinking too of building a Linux machine for home use (my current Linux box is my previous Windows box - and 10 years old).

I’m on Twitter and LinkedIn. My interest in photography has drawn me more to vision, but I want to look more at time series and NLP too. It is certainly inspiring reading through the introductions so far and the very cool work that you are all doing.
I look forward to learning more about you all - and of course fastai v2!

12 Likes

I’m ready and waiting to help you folks if needed - just say the word! :slight_smile: There’s a lot of new stuff, so it might be worth sharing our early drafts with the MSFT team…

1 Like

Hi @morgan,
The great thing about these conferences is that you can live-stream them, or even watch recorded sessions within hours after a session was taped (except posters). NeurIPS, for example, introduced global meetups last year, so that folks who were unable to join in person could view the sessions via meetups around the world. That being said, you can start with last year’s ICML material to get a high-level idea of what to expect. The only aspect you will miss is the human one (socials etc.).

However, some conferences do offer diversity scholarships for travel, so you can also use that as an avenue to get some financial help if you qualify. I think it’s great to experience at least one of these more academic conferences, because you have a unique opportunity to meet folks at all levels within this space (the social aspect). But the truth is, after a day or two of slides filled with equations, you’ll realize that you’ve learned more by reading at your own pace at home.

Hope this helps!

3 Likes

Hi All, I’m Even, and I’ve previously blogged my fastai journey which began with the very first version of the course. I owe my current role and passion for deep learning to Jeremy and Rachel. I was more heavily involved in the fastai community early on but have been busy the last few years raising two young boys. I’m really looking forward to diving deep into the new library and to revisiting this passion.

I now work at NVIDIA leading a team focused on deep learning-based recommender systems (RecSys) and tabular data. We’re currently working on accelerating feature engineering and preprocessing on the GPU for arbitrarily large datasets, along with other interesting research questions related to RecSys, like large (50K+) batch size training. I’ve spent some time accelerating the previous fastai tabular library in a number of ways, and am very interested in scaling deep learning systems to production. I’m hoping to start blogging some of my RecSys knowledge and experiences this year using fast_template, assuming I can find the time.

It’s great to see so many familiar faces from the fastai family here and I’m looking forward to reconnecting and also getting to know new (to me) members, especially those who are as interested in tabular and RecSys as I am.

17 Likes

Thanks, this is amazing! This is my 3rd time in this course.
I have been doing software engineering for 10 years, and now I’m looking for challenges in this domain.

1 Like

Super, thanks for that, helps a lot! :slight_smile:

Hello everybody,

My name is Minh Nguyen. I’m a Master’s student in Electrical Engineering at King Abdullah University of Science and Technology. I got interested in deep learning, and especially fastai, because after each lesson I can actually implement my own model (so cool!!!).

What I’d like to get out of this course is how to keep what I have learned about deep learning relevant for a while, given that deep learning is a very fast-changing field. For example, new versions of deep learning libraries keep coming out every few months, and I am still not comfortable keeping up with the changes. I hope this issue will be addressed in the coming course.

I’m looking forward to the 2020 deep learning course. I wish the fastai team all the best in 2020. You guys are true heroes.

3 Likes

Thanks Dmytro, that could be a good option too!

I’m not a great writer. When I started working on the fastai courses in 2018, there weren’t any blogs that explained how to train an image classification model in Keras all the way from reading the data to predicting the results, so I made one on Medium. It did well, with almost 130k views, and to date it hasn’t been monetized, so anyone can read it for free. If you’re not serious about maintaining a blog site, Medium is the best choice in my opinion.

2 Likes

Hi maxmatical, hope all is well in your world!

Great notebooks!

cheers mrfabulous1 :smiley: :smiley:

In a nutshell, 3D reconstruction is the generation of a 3D model, in the form of a 3D point cloud and/or meshes, from images and any other data you may have, such as depth maps.

The current open-source state of the art is colmap, which is written in C++ and CUDA. It takes a directory of images and outputs a 3D model. It works great if you have images densely covering the different views of the object and if the object is textured. If not, e.g. with low-quality images of trees or similar, it does not work that well, or at all.

Internally it performs SIFT detection and matching, then RANSAC to estimate the relative camera poses, and then global bundle adjustment to correct the errors.
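To make that pipeline concrete, here is a minimal two-view sketch in Python using OpenCV (not COLMAP’s own C++/CUDA code); the image filenames and the intrinsics matrix `K` are placeholders you would replace with your own data:

```python
# Two-view sketch of the classic pipeline: SIFT detection + matching,
# then RANSAC to estimate the relative camera pose.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filenames
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000., 0., 640.],
              [0., 1000., 360.],
              [0., 0., 1.]])  # placeholder intrinsics; use your calibration

# 1. SIFT keypoint detection and description
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Matching with Lowe's ratio test to keep distinctive matches
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. RANSAC: estimate the essential matrix, then recover the relative pose
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R, "\nTranslation direction:\n", t)
```

A full reconstruction like COLMAP’s then triangulates matched points across many views and runs global bundle adjustment to jointly refine all camera poses and 3D points.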

Use cases vary: from obtaining the 3D model itself for visualization or augmented reality, to using it as ground truth for image retrieval and depth prediction.

6 Likes