@anamariapopescug, yes, and so are academics LOL
Is this a word model or a character model? i.e., if Jeremy gave it an incomplete word like “toward”, would it complete that?
and while we are at it, another neural network will proof-read it and another shall peer-review it.
Okay last one, the List of Sequentially Torched Matrices (LSTM) committee shall decide whether to accept it or reject it.
Neural Net book/article Editor maybe?
because the cost of computing can be improved if we learn the basics first
like the way we showed smaller images first, then bigger ones, in CV
Can someone provide a brief, intuitive explanation?
My best guess is it’s the equivalent of CNN architectures?
it’s a probability distribution over sequences of words. you’re basically learning how likely/unlikely sequences of words are from training data. So you’ll learn that “convolutional neural network” has high prob but “convolutional neural algorithm” is an unlikely sequence
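Something like this toy bigram sketch (made-up data, not the actual course code): count which word follows which in the training text, then multiply the conditional probabilities to score a sequence. The real language model learns those probabilities with a neural net instead of raw counts.

```python
from collections import Counter, defaultdict

corpus = [
    "convolutional neural network".split(),
    "recurrent neural network".split(),
    "deep neural network".split(),
]

# Count how often each word follows the previous one.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        bigram_counts[prev][nxt] += 1

def sequence_prob(words):
    """Probability of a word sequence under the bigram counts (0 if unseen)."""
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        total = sum(bigram_counts[prev].values())
        p *= bigram_counts[prev][nxt] / total if total else 0.0
    return p

print(sequence_prob("convolutional neural network".split()))    # high
print(sequence_prob("convolutional neural algorithm".split()))  # 0.0 -- unlikely sequence
```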
words are like pixels: their relations carry meaning, the way pixels together express an image.
You model those relationships so you can then classify/predict words in the same space/domain.
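To make the “same space” idea concrete, here is a tiny sketch (the words and vectors are invented just for illustration): once words live as vectors, “relations have meaning” just becomes distances/angles between them.

```python
import numpy as np

# Made-up 3-d embedding vectors for illustration only.
emb = {
    "network":   np.array([0.9, 0.1, 0.0]),
    "algorithm": np.array([0.7, 0.3, 0.1]),
    "banana":    np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["network"], emb["algorithm"]))  # high -- related words sit nearby
print(cosine(emb["network"], emb["banana"]))     # low -- unrelated words are far apart
```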
This blog post introduces it well; it’s by one of the thought leaders:
hopefully, fake news classification too
What is the difference between this and word2vec from Google?
If anyone needs an understanding of “Word Embeddings” without diving deeper into RNNs:
The first half of this post talks about it.
Only RNNs take IMDB reviews so seriously!!
The way you learn the embeddings is different.
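Roughly the contrast (just an illustrative sketch, assuming gensim 4.x and PyTorch, not the course code): word2vec optimizes the vectors directly on a local context-prediction objective, whereas here the embedding matrix is just the first layer of the language model and gets trained end-to-end with the network that predicts the next word.

```python
# 1) word2vec-style: standalone embeddings trained on their own objective.
from gensim.models import Word2Vec
sentences = [["convolutional", "neural", "network"],
             ["recurrent", "neural", "network"]]
w2v = Word2Vec(sentences, vector_size=50, min_count=1)
print(w2v.wv["neural"].shape)  # (50,)

# 2) language-model-style: the embedding is one layer inside a bigger model.
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size, emb_size=50, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)   # learned jointly with the rest
        self.rnn = nn.LSTM(emb_size, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)        # predicts the next word

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)
```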
Is the vocab similar to a bag of words?
what about removing stopwords?
why would we not use word vectors?
Stopwords are important in this kind of problem
Yeah, can’t say that “the” is all that important in classification.