I think something like this:
- get @jeremy's OK for this, or maybe he'll even agree to help us
- team up
- discuss and approve the structure
- collect the materials that are already written
- write the not-yet-written materials, and look for writers for them
GitHub will do?
Sounds great!!! I'm interested too. I remember someone has already created a repo with articles; it should be a good starting point.
What a great idea - an interactive book, perhaps in Jupyter? Plus mini-tutorials and the like? Anyway, I'll play
Looks like a great idea.
Yes indeed I can create a github repo, and perhaps we can use the github wiki to create an ebook of course notes for the lesson? We used a mediawiki wiki for last year’s course: http://wiki.fast.ai/index.php/Lesson_1 . The course notes were written by @bckenstler : http://wiki.fast.ai/index.php/Lesson_1_Notes .
I think a github wiki might be better, since you can edit github wikis through your preferred editor: https://help.github.com/articles/about-github-wikis/ . You also get the benefit of linking the pages to each other, and you can edit them directly through github.com if you prefer. It's also nice that we can use PRs for those who don't yet have direct edit access.
@timlee has also created some draft notes for the course, and I believe he’s planning to help turn them into something more polished, which could be a great part of this too.
I’ll aim to set something up early next week and can then start inviting contributors.
Where's the love for the RNNs, guys?
hope y’all enjoy…
yup! I am currently in final exams, but during the break, I am planning on re-writing a lot of the notes into a topical format with proper pictures / illustrations. I hope whatever format we choose will let us break up some of the information into smaller chunks!
This is a neat idea. If the plan is to have something like a book format, publishing on Gitbook ( https://www.gitbook.com/ ) via Github markdown files also works really well (and it can be self-hosted on S3 etc. later).
How are we planning to collab?
Waiting for Jeremy's instructions. He planned to organise this journey.
Count me in too.
Just created a blog post on chatbots in enterprise software. Would love your feedback!
Thanks @nahidalam - I wasn’t aware of most of these products in the chatbot space. Personally, I’d be really interested to see how a DL approach might compare or even add to these more rule-based approaches - do you have any plans to try training a neural net?
(This would require sequence to sequence models, which we won’t cover until part 2…)
I'm building chatbots at work. Recently I did a comparison of a rule-based system (based on the ChatScript engine) with simple ML (logistic regression on bag-of-words) for intent classification. The ML approach is far ahead: on an intent classification task with ~20 classes and ~4500 examples in the dataset, the rule-based system had ~30% accuracy and ML ~70%.
This is just one use case, and I didn't do a wider study because building rule-based systems is very expensive - a much more laborious process than even collecting labeled data for ML.
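For anyone curious, the ML baseline described above can be sketched in a few lines of scikit-learn. The toy utterances and intent labels below are made up for illustration (the real dataset had ~20 classes and ~4500 examples):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up intent-classification dataset, for illustration only
texts = [
    "what is my account balance", "show me my balance",
    "transfer money to alice", "send funds to bob",
    "what's the weather today", "will it rain tomorrow",
]
intents = ["balance", "balance", "transfer", "transfer", "weather", "weather"]

# Bag-of-words features feeding a logistic regression classifier
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, intents)

print(clf.predict(["send money to carol"])[0])  # "transfer"
```

The appeal of this baseline is exactly what the post above notes: labeled examples are much cheaper to collect than hand-written rules, and the pipeline generalizes to unseen words in context.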
@jeremy I'm planning to train natural language processing classifiers for domain, intent, and entity classification. The MindMeld library I'm using supports various ML techniques for this. Personally I'm curious about DL techniques too, and I'm planning to use a bi-directional LSTM for entity classification.
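A bi-directional LSTM for entity classification is essentially a per-token tagger. Here's a minimal PyTorch sketch (all sizes and names are hypothetical, not from MindMeld):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: tiny vocab and tag set, for illustration only
VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, NUM_TAGS = 100, 16, 32, 5

class BiLSTMTagger(nn.Module):
    """Scores an entity tag for each token using a bi-directional LSTM."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * HIDDEN_DIM, NUM_TAGS)  # 2x: both directions

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden)  # (batch, seq_len, NUM_TAGS) tag scores

model = BiLSTMTagger()
tokens = torch.randint(0, VOCAB_SIZE, (1, 7))  # one sentence of 7 token ids
scores = model(tokens)
print(scores.shape)
```

The forward and backward passes mean each token's tag score can depend on context on both sides, which is what makes the bi-directional variant attractive for entities.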
Check this post out…
Published a blog post on an ML approach to cloud detection in satellite imagery. Before starting with a deep learning approach, we built a pixel-based classifier in order to learn and get a better understanding of the problem. Along the way I found the fast.ai course and the fast.ai blogs very helpful; in particular, Rachel's post on what makes an appropriate validation set is very relevant for remote sensing with satellite imagery. What may be interesting to some is that we're using a machine-labeled dataset as our training set, due to the lack of an appropriate and large enough human-labeled dataset.
Any comments or suggestions are most welcome.
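For readers unfamiliar with the pixel-based approach: each pixel is classified independently from its spectral band values. A rough scikit-learn sketch, with synthetic data and labels standing in for the machine-labeled training set mentioned above (the real features, labels, and model choice are from the post, not shown here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "image": 4 spectral band values per pixel, plus
# machine-generated labels (1 = cloud, 0 = clear) from a made-up rule
n_pixels, n_bands = 1000, 4
X = rng.random((n_pixels, n_bands))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

# Each pixel is classified independently from its band values
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```

Because pixels from the same scene are highly correlated, the validation-set point above really matters here: splitting by scene rather than by pixel gives a much more honest accuracy estimate.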
This was a very comprehensive case study! The way augmentation was done (augmenting the training sample with misclassified pixels to avoid systematic misclassification) is very interesting and could be applied to other problem domains.