@jcatanza I get a 404 when I click the link on #2.
@jcatanza I started looking ahead to video 8 and am a bit torn on using the fastai v1 library, since v2 is coming soon. Do you have any thoughts on that?
Notes from the Saturday 1/25/2020 meetup discussing the notebook 3-logreg-nb-imdb_jcat.ipynb
-
In order to get the notebook to produce the table of accuracies at the end, you first have to install the tabulate package: in a shell terminal window, run the command:
conda install tabulate
-
I added a brief discussion of Bayes' Theorem to the notebook.
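For anyone who wants a concrete refresher before reading that discussion, here is a minimal worked example of Bayes' theorem, P(A|B) = P(B|A)·P(A) / P(B), using made-up spam-filter numbers (all probabilities below are hypothetical, not from the notebook):

```python
# Bayes' theorem with hypothetical numbers:
p_spam = 0.2                 # P(spam): prior probability a message is spam
p_word_given_spam = 0.6      # P(word | spam): chance a spam message contains the word
p_word_given_ham = 0.05      # P(word | not spam)

# Law of total probability: overall chance of seeing the word
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability the message is spam given that it contains the word
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75
```

Even with a low prior (20%), observing one strongly spam-associated word pushes the posterior up to 75%; this is the same mechanism the naive Bayes classifier in the notebook applies across many words at once.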
I think you should press on with the v1 library. My feeling is that familiarizing yourself with v1 will ultimately make it easier to learn v2.
The Fastai NLP Study Group will meet
Saturday February 01, at 8 AM PST, 11 AM EST, 5 PM CET, 9:30 PM IST
Join the Zoom Meeting when it's time!
Topics: Fun with Bayes' Theorem; Numerical Stability; regex (regular expressions)
Suggested homework / preparation:
- Watch videos #6 and #7. These two videos are relatively short (about an hour total). The lesson will focus mainly on video #7: regex (regular expressions).
Video playlist is here
-
Read and work through notebooks 3b-more-details_jcat.ipynb and 4-regex_jcat.ipynb
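If you want a taste of the regex material before the session, here is a small self-contained sketch using Python's built-in re module; the sample string and patterns are my own, not taken from the notebook:

```python
import re

# A sample string to exercise two common pattern types
text = "Contact us at help@example.com or sales@example.org by 2020-02-01."

# Find email-like tokens: chars, then @, then a dotted domain
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

# Find ISO-style dates: 4 digits, dash, 2 digits, dash, 2 digits
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)

print(emails)  # ['help@example.com', 'sales@example.org']
print(dates)   # ['2020-02-01']
```

re.findall returns every non-overlapping match, which makes it handy for quick extraction tasks like the tokenization examples in the regex lesson.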
-
Note: in order to access and run the _jcat.ipynb notebooks you'll need to clone the Study Group's GitHub repository.
To join via Zoom phone
Dial US: +1 669 900 6833
or +1 646 876 9923
Meeting ID: 832 034 584
The current meetup schedule is here.
Sign up here to receive meetup announcements via email.
Yes, I have.
The Fastai NLP Study Group will meet
Saturday February 08, at 8 AM PST, 11 AM EST, 5 PM CET, 9:30 PM IST
Join the Zoom Meeting when it's time!
Topic: Introduction to Language Modeling using Deep Learning
Suggested preparation:
-
Read and work through course notebook #5
In this reworked and annotated version of notebook 5-nn-imdb.ipynb, I:
- Fixed a few minor errors, enabling the notebook to run to completion
- Implemented a workaround for a bug in fastai's text data API that seems to affect only Windows 10
- Organized the material into coherent sections
- Added step-by-step explanations/annotations throughout, indicating where transfer learning comes into play.
- Note: in order to access and run the _jcat.ipynb notebooks you'll need to clone our Study Group's GitHub repository.
To join via Zoom phone
Dial US: +1 669 900 6833
or +1 646 876 9923
Meeting ID: 832 034 584
The current meetup schedule is here.
Sign up here to receive meetup announcements via email.
The Fastai NLP Study Group will meet
Saturday February 22, at 8 AM PST, 11 AM EST, 5 PM CET, 9:30 PM IST
Join the Zoom Meeting when it's time!
Topic: ULMFit for non-English Languages
Suggested preparation:
-
Watch video #10
Video playlist is here
-
Read and work through these course notebooks: nn-imdb-more_jcat.ipynb, nn-vietnamese_jcat.ipynb, and nn-turkish_jcat.ipynb
Note: in order to access and run the _jcat.ipynb notebooks you'll need to clone our Study Group's GitHub repository.
To join via Zoom phone
Dial US: +1 669 900 6833
or +1 646 876 9923
Meeting ID: 832 034 584
The current meetup schedule is here.
Sign up here to receive meetup announcements via email.
Can you share some links for better understanding the basics of PyTorch…
I would start with these two resources
These should get you going. Good luck!! Happy learning.
Thank you sir…
Has anyone successfully run through the 7-seq2seq-translation.ipynb notebook on Google Colab, even using the Pro version? I am unable to get the line:
with open(path/'giga-fren.release2.fixed.en') as f: en = f.read().split('\n')
to work, as Colab keeps running out of memory.
Never mind: reading it line by line helped. Figured it out.
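For anyone hitting the same memory issue: streaming the file line by line avoids holding the entire contents in memory at once, the way f.read() does. A minimal sketch of that approach, using a tiny stand-in file since the actual giga-fren dataset isn't assumed to be present:

```python
from pathlib import Path

# Stand-in for the real dataset file, so this sketch is self-contained
path = Path("giga-fren-standin.en")
path.write_text("first sentence\nsecond sentence\n", encoding="utf-8")

# Stream line by line instead of en = f.read().split('\n')
en = []
with open(path, encoding="utf-8") as f:
    for line in f:            # file objects are iterable, one line at a time
        en.append(line.rstrip("\n"))

print(len(en))  # 2
path.unlink()   # clean up the stand-in file
```

Iterating over the file object keeps only one line in memory at a time, which is why this works on Colab where a single giant read does not.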
I'm having trouble understanding something in 2-svd-nmf-topic-modeling. It says:
But if we had one vector with the relative frequency of each vocabulary word out of the total word count, and one with the average number of words per document, then that outer product would be as close as we can get.
The phrase "average number of words per document" seems wrong to me. I interpret that as:
a = [count_across_all_docs(word) / total_word_count for word in vocab]
b = [total_word_count / len(docs) for doc in docs]
It seems to me that it should be:
a = [count_across_all_docs(word) / total_word_count for word in vocab]
b = [word_count(doc) for doc in docs]
Is my understanding correct? If so, I would change the text "average number of words per document" to just "number of words per document".
Or am I misunderstanding something?
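Not an authoritative answer, but a quick NumPy check seems to support your reading: using the per-document word counts (not the average) as the second vector, the outer product reproduces the row and column sums of the term-document matrix exactly. A toy sketch with made-up counts:

```python
import numpy as np

# Toy term-document count matrix: rows = vocab words, columns = documents
X = np.array([[2, 0, 1],
              [1, 3, 0],
              [0, 1, 2]])

total = X.sum()             # total word count across all documents
a = X.sum(axis=1) / total   # relative frequency of each vocab word
b = X.sum(axis=0)           # number of words per document (per-doc, not average)

approx = np.outer(a, b)     # rank-1 approximation of X

# The approximation matches X's column sums and row sums exactly
print(np.allclose(approx.sum(axis=0), X.sum(axis=0)))  # True
print(np.allclose(approx.sum(axis=1), X.sum(axis=1)))  # True
```

If b were instead the constant average words-per-document, the row sums would still match but the column sums would all collapse to that average, so the per-document counts do seem to be the better reading.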
The Zoom link https://zoom.us/5167464197 seems to be invalid. Does anybody have the right link for the meetup?
[Wiki] Updated the wiki with the right Zoom link https://zoom.us/j/5167464197, which worked for me.