Introduce yourself here

Hi All, I’m Even, and I’ve previously blogged about my fastai journey, which began with the very first version of the course. I owe my current role and passion for deep learning to Jeremy and Rachel. I was more heavily involved in the fastai community early on, but have been busy the last few years raising two young boys. I’m really looking forward to diving deep into the new library and to revisiting this passion.

I now work at NVIDIA leading a team focused on deep-learning-based recommender systems (RecSys) and tabular data. We’re currently working on accelerating feature engineering and preprocessing on the GPU for arbitrarily large datasets, along with other interesting RecSys research questions such as large (50K+) batch-size training. I’ve spent some time accelerating the previous fastai tabular library in a number of ways, and I’m very interested in scaling deep learning systems to production. I’m hoping to start blogging some of my RecSys knowledge and experience this year using fast_template, assuming I can find the time.

It’s great to see so many familiar faces from the fastai family here and I’m looking forward to reconnecting and also getting to know new (to me) members, especially those who are as interested in tabular and RecSys as I am.

17 Likes

Thanks, this is amazing. This is my 3rd time taking this course.
I have been doing software engineering for 10 years, and now I’m looking for challenges in this domain.

1 Like

Super, thanks for that, helps a lot! :slight_smile:

Hello everybody,

My name is Minh Nguyen. I’m a Master’s student in Electrical Engineering at King Abdullah University of Science and Technology. I got interested in deep learning, and especially fastai, because after each lesson I can actually implement my own model (so cool!!!).

What I’d like to get out of this course is how to keep what I have learned about deep learning relevant for a while, given that deep learning is a very fast-changing field. For example, new versions of deep learning libraries keep coming out every few months, and I am still not comfortable keeping up with the changes. I hope this issue will be addressed in the coming course.

I’m looking forward to the 2020 deep learning course. I wish the fastai team all the best in 2020. You guys are true heroes.

3 Likes

Thanks Dmytro, that could be a good option too!

I’m not a great writer. When I started working on the fastai courses in 2018 there weren’t any blogs that explained how to train an image classification model in Keras, from reading the data to predicting the results, so I made one on Medium. It did well: it has almost 130k views, it has never been monetized, and anyone can read it for free. If you’re not serious about maintaining a blog site, Medium is the best choice in my opinion.

2 Likes

Hi maxmatical, hope all is well in your world!

Great notebooks!

cheers mrfabulous1 :smiley: :smiley:

In a nutshell, 3D reconstruction is the generation of a 3D model, in the form of a 3D point cloud and/or meshes, from images and whatever other data you may have, such as depth maps.

The current open-source state of the art is colmap, which is written in C++ and CUDA. It takes a directory of images and outputs a 3D model. It works great if you have images densely covering the different views of the object and if the object is textured. If not, e.g. with low-quality images of trees and the like, it does not work that well, or at all.

Internally it performs SIFT detection and matching, then RANSAC to get the relative camera poses, and then global bundle adjustment to correct the errors.
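
For the curious, here is a minimal sketch of those first two stages (SIFT matching, then RANSAC for a relative pose) using OpenCV. The image paths and the intrinsics matrix `K` are placeholders, and colmap itself does far more (guided matching, incremental mapping, global bundle adjustment):

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT detection and description
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# match descriptors, keeping good matches via Lowe's ratio test
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC on the essential matrix -> relative camera pose
K = np.eye(3)  # placeholder: use your calibrated intrinsics here
E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                  method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```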

Use cases vary, from obtaining the 3D model itself for visualization or augmented reality, to using it as ground truth for image retrieval or depth prediction.

6 Likes

With all respect, I’d say that a conference is people (chatting, random meetings, etc.) and posters. I often completely ignore the oral presentations :slight_smile: But as a free remote option they are great, for sure.

1 Like

Heyo, I’ve been in analytics for ~6 years, working at startups and advertising agencies in NYC, but then I left advertising to work as an algorithmic trader (using mostly ML/DL strategies). I did that for two years, but stopped after I got tired of the constant stress. I moved to Berlin and am now working as a data scientist in its rapidly growing startup scene. My side projects have mostly focused on NLP and using DL with music/audio.

I love this course because no matter how many years I am into my DL journey, I always learn a ton of new stuff. I love learning about all the new techniques and the discussions about the content throughout the course. And Jeremy’s teaching style / course layout is particularly effective for me.

I also love this community, so many smart people that are very passionate and committed to contributing and to helping everyone, no matter where they are in their journey. I’ve also begun volunteering as a teacher in Berlin so I hope to take some of my new learnings here and share them with my students.

5 Likes

Hello and Namaste.

I am a senior undergraduate in India, soon relocating to the States to advance my academic career; I’ll be joining NYU as a Research Scholar. I was introduced to the world of deep learning during an exchange program at Bangkok University. Since then it has been a roller-coaster; I have met some amazing people and we have built great things together.

Recently, I designed a novel activation function called Mish (Paper)(Code). This community has helped me immensely in the success I’ve achieved with Mish. I completely owe it to @jeremy and the whole Fast.AI community.
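
For anyone who hasn’t seen the paper: Mish is simply x · tanh(softplus(x)). A minimal PyTorch sketch (not the optimized implementation from the repo):

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    "Mish activation: x * tanh(softplus(x))."
    return x * torch.tanh(F.softplus(x))

print(mish(torch.tensor([-2.0, 0.0, 2.0])))  # smooth and non-monotonic near zero
```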

Right now, I’m working on mean field theory and dynamical isometry at my lab, Landskape. I also play the piano and occasionally write poems and haiku on my blog.

Feel free to connect with me on Twitter - @DigantaMisra1

16 Likes

Hi,

I’m referred to as chatuur. I am a self taught software engineer and data scientist. I work at Naukri, India’s largest job portal. I am part of the machine learning team where I have worked on the crawling pipeline and the search pipeline. My job involves injecting machine learning solutions to enhance these pipelines and automate things.

I also have a passion for teaching; I really like breaking down beautiful concepts and explaining them. I believe there are deep learning applications here as well: using unsupervised learning to aid a person’s learning is one of my longer-term goals. But I believe a deep learning application at such a broad level is still a bit far off; inventions and discoveries are needed to make it possible. Experiments in unsupervised learning still have some way to go, and that’s going to be pretty exciting.

Part II of the course was so inspiring. I have developed a hobby of reproducing research papers. Some have been easy and require only a callback; others, where changes are needed at the DataBunch level, have been more difficult. I hope the v2 DataBunch is much easier to break down, though I haven’t gotten around to exploring it yet.
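
To give a flavor of the callback-only kind of reproduction: here’s a toy sketch assuming fastai v2’s event names and the released package layout (GradClip is just an illustrative example, not from any particular paper):

```python
import torch
from fastai.callback.core import Callback

class GradClip(Callback):
    "Toy callback: clip the gradient norm after each backward pass."
    def __init__(self, max_norm=1.0): self.max_norm = max_norm
    def after_backward(self):
        torch.nn.utils.clip_grad_norm_(self.learn.model.parameters(), self.max_norm)

# usage: learn.fit(1, cbs=GradClip(0.5))
```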

My Twitter handle is vimarsh_c and my GitHub page is github.com/vimarshc

4 Likes

Glad to be back here.
My twitter is https://twitter.com/arunoda
Last time, I implemented a Google Cloud based fastai dev environment.

This time, I’d like to do more research related to music tech.
Thanks again for letting me in.

3 Likes

I must say this. I feel kind of intimidated already among this group of really amazing and accomplished people. I see everyone here and they’re all working at these amazing places, or have achieved something amazing already, or are rather just, in plain terms, much more experienced.

I’m still in my third year of college and began both coding and deep learning only in the past year. And in my opinion, I haven’t made much progress because I’ve been trying to juggle working at a start-up, college acads, and my band (I play drums).

Here’s a long list of things I still have to properly go through: the Deep Learning book, Linear Algebra by Gilbert Strang, properly participating in a Kaggle competition, writing at least 2 blog posts, getting my first internship outside of Pixxel (the start-up where I work), and so much more.

On the bright side, this is a group that is going to push me a lot. People in my college are rather dispassionate about deep learning and it isn’t a group which motivates me in any significant way.

I’m going to take this up as a challenge and try and learn more and more about AI until I no longer feel intimidated here.

Hoping this works out!

11 Likes

On the bright side, you’re still only in your third year and have already started deep learning and stuff. :beers:

3 Likes

this sounds about right for most of us here :grimacing:

Jokes aside, I’m building a box after many years (my last one ran Windows XP), so I’ll definitely be asking you Windows questions if/once I get Windows on it. :paperclips:

2 Likes

Great work @akashpalrecha
All the best!

Regards,
S.Ajaykumaar

Man, you’re so much better than I was in my 3rd year. I wasn’t working anywhere, I knew nothing about DL, Kaggle, and so on. And I hadn’t really started coding at that time :slight_smile:

2 Likes

I’d say you’re doing great. It’s certainly not easy trying to juggle college with this :slight_smile:

2 Likes

Well, the fact that DL can minimize a ‘loss’ on a huge n-dimensional problem is what put me on this track.
The main trend is that we have to roll back from the Taylorized city planning of the modernist era. That vision is not sustainable in a world where travel distance is the enemy. Cities’ functions have to become layered instead of spread across the territory. But this is much more complicated, because each decision impacts many metrics at once.
Also, we rarely undertake huge, theory-fueled city planning like in Barcelona or NYC. Cities are built incrementally over long periods of time by lots of different people who rarely have a global and informed vision.
So we could imagine a tool that helps cities’ technical services dynamically adapt urbanism rules to preserve good performance on target metrics, instead of global zoning rules updated every 10 years under the strong influence of the current mayor. It would also be a tool that can keep up with the pace of rapid growth and unbalanced neighborhood formation, such as slums or commercial areas.

Therefore we could imagine some kind of critic NN model, trained on the city’s growth and modification history, to predict the impact of a change on travel density and distances, energy consumption, air quality, education levels, etc…
If we can build such a dataset, that is… Also, to be effective, this model would need to consider the context in which those constructions take place, and not just their sequential occurrence.
But this might also lead to a huge “Weapons of Math Destruction” case, depending on the implementation and how the dataset is built.

Alternatively, we could imagine some kind of GAN mixed with generative design that would generate optimal city growth, with a tailored loss function meant to optimize specific metrics (a rough sketch follows the list) such as:

  • Travel distance to schools, cultural centers, workplaces, etc…
  • Travel concentration points
  • Sun exposure on streets and building facades (to be maximized or minimized depending on the situation)
  • Distribution of planted green spaces
  • Distribution of housing types and social mix
  • Cost estimate
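
Here is that rough sketch: a hypothetical PyTorch critic that scores a rasterized city tile on a handful of metrics. Every name, channel count, and weight below is made up purely for illustration:

```python
import torch
import torch.nn as nn

N_CHANNELS = 8  # hypothetical raster layers: zoning, density, roads, green space...
N_METRICS = 6   # one output per metric in the list above

critic = nn.Sequential(
    nn.Conv2d(N_CHANNELS, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, N_METRICS),
)

city = torch.randn(1, N_CHANNELS, 256, 256)  # stand-in for a rasterized city tile
metrics = critic(city)                       # predicted score per metric

# a generator could then be trained against a weighted sum of these scores
weights = torch.tensor([1.0, 0.5, 0.3, 0.3, 0.4, 0.8])  # made-up priorities
loss = (metrics * weights).sum()
```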

I’m speaking largely from intuition here; we may be far from this kind of implementation and dataset, but that’s my take on the subject…

Also here’s my twitter: @BBrainkite

1 Like