Practical ways to apply these learnings in production at work using TensorFlow or other production-ready libraries

Hi All,

My name is Sugianto. I only found out about this course 2 weeks ago and am on my way to course #3 in DL.

My question is: since we are using a different library (fastai, on PyTorch) versus the production-ready ones out there (TensorFlow), how does anyone apply what they have learnt here and do it in TensorFlow?

I am aware that the fastai library is not production-ready yet, and once I have completed DL Part 1, I would like to deploy something in production at work.

What would be the best way to apply the learnings here and do it in TF or any other libraries?

Best,

Sugi


Yes, can anyone describe how they have moved (or plan to move) from a fastai/PyTorch kernel to a production-ready service (e.g. a real-time web service) that runs an image or a piece of text against a model? The advice may be to redo it in TensorFlow, which is fine, but it would be good to know the approach to take and to learn from others' forays.
I feel it's critical to have this info given the 'practical' keyword in the course name. Perhaps it's a Part 2 challenge :slight_smile:
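For the image case, here is a minimal sketch of what such a real-time service could look like. This is not from the course; it assumes the trained PyTorch model has been exported with torch.jit.save() to a file called model.pt, that ImageNet-style preprocessing matches what was used in training, and that Flask is an acceptable serving layer. The endpoint name and port are arbitrary.

```python
# Minimal sketch: serving a trained PyTorch model as a real-time web
# service with Flask. Assumes the model was exported with torch.jit.save()
# to "model.pt" and that ImageNet normalisation matches training.
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)

# Load the scripted/traced model once at startup, not per request.
model = torch.jit.load("model.pt")
model.eval()

# Preprocessing should mirror whatever the training pipeline did;
# ImageNet statistics are assumed here.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the raw image bytes in the "file" field of a multipart POST.
    img = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    conf, idx = probs.max(dim=1)
    return jsonify({"class_index": int(idx), "confidence": float(conf)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

You could test it with something like `curl -F "file=@cat.jpg" http://localhost:5000/predict`. The same shape of solution works for TensorFlow; only the model loading and preprocessing change.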

Would it be worthwhile to watch and do the previous version of the course that used Keras in order to learn how to implement it in production?

Hi @xjdeng, yeah, I think that is the best way forward.

Well, I guess I'll have to start over with the previous year's course and do the Keras and TF version.

Especially since the code is still on the GitHub repo, and the resources (links, notes, etc.) are still there too.
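For anyone going that route, the core workflow those Keras/TF notebooks build up to is simply: train, save the model to a file, then load that file and call predict() from whatever serving code you write. Below is a toy sketch of that split; the architecture, file path, and shapes are placeholders I made up, not anything from the course.

```python
# Toy sketch of the train-then-export workflow in Keras/TensorFlow:
# train in a notebook, save to a file, load that file in the serving code.
import numpy as np
import tensorflow as tf

# Placeholder model standing in for whatever architecture you actually train.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# ... model.fit(train_images, train_labels, ...) on real data here ...

# Save architecture + weights to a single file.
model.save("exported_model.h5")

# In the serving process, load the saved model and run inference.
restored = tf.keras.models.load_model("exported_model.h5")
dummy = np.random.rand(1, 224, 224, 3).astype("float32")
print(restored.predict(dummy))  # shape (1, 10) of class probabilities
```

For a heavier-duty setup you would export a SavedModel and put TensorFlow Serving (or similar) in front of it, but the save/load split above is the part the course notebooks actually exercise.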