Not a paper, but something to be aware of: DeepMind's losses.
Little note: looking at the insane amount of money spent on a bunch of guys with fancy PhDs, and the very few concrete results they have produced, I'm wondering if it is fair.
AI Application:
Interactive Body-Driven Graphics for Augmented Video Performance: A system that augments live presentation videos with interactive graphics to create a powerful and expressive storytelling environment.
Turns out a paper came out earlier this month that basically accomplishes what I was describing here.
Training generative models to synthesize 1024x1024 images on a single GPU with 3 days of training time.
How does an infinitely wide net perform?
Blog post: Ultra-Wide Deep Nets and Neural Tangent Kernel (NTK)
and the paper: On Exact Computation with an Infinitely Wide Neural Net
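For anyone skimming, the central object in both links is the Neural Tangent Kernel of a network $f(x;\theta)$, which is roughly the Gram matrix of the parameter gradients:

$$
\Theta(x, x') \;=\; \big\langle \nabla_\theta f(x;\theta),\; \nabla_\theta f(x';\theta) \big\rangle
$$

The punchline is that as the width goes to infinity this kernel becomes deterministic and stays essentially fixed during gradient-descent training, so the trained ultra-wide net behaves like kernel regression with $\Theta$.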
Transport-Based Neural Style Transfer for Smoke Simulations
Abstract: Transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints.
Top Ten Research Papers of 2019
From Mariya Yao’s excellent Topbots blog
Just saw this paper to help with data augmentation for NLP.
Really great data augmentation system: it uses label-conditioned language-model generation to supplement the data, and a pretrained classifier to filter the generated sentences so they stay close to the distribution of the original dataset.
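A minimal sketch of that generate-then-filter idea (model names, prompt format and thresholds are my own placeholders, not the paper's exact setup), using Hugging Face pipelines:

```python
# Hedged sketch of label-conditioned generation + classifier filtering for
# NLP data augmentation. Models and the "LABEL: text" prompt format are
# illustrative placeholders, not the paper's actual configuration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # label-conditioned LM (assume fine-tuned on "LABEL: text" pairs)
classifier = pipeline("sentiment-analysis")             # pretrained classifier used as the filter

def augment(label, n_candidates=20, min_score=0.9):
    """Generate candidate sentences for `label`, keep only those the
    classifier confidently assigns to that same label."""
    prompt = f"{label}:"                                 # conditioning prompt (illustrative format)
    outputs = generator(prompt, max_length=40, do_sample=True,
                        num_return_sequences=n_candidates)
    kept = []
    for out in outputs:
        text = out["generated_text"][len(prompt):].strip()
        pred = classifier(text)[0]
        if pred["label"] == label and pred["score"] >= min_score:
            kept.append(text)
    return kept

print(augment("POSITIVE"))
```

The filtering step is what keeps the synthetic examples close to the original label distribution: anything the pretrained classifier is unsure about simply gets dropped.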
Interesting… They’re quoting Jeremy too
This is an interesting publication by S. Merity about a single headed attention LSTM with promising results for such a small model:
"Single Headed Attention RNN: Stop Thinking With Your Head"
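For intuition, here is a minimal sketch of "one attention head on top of an LSTM" in PyTorch. This is my own simplification of the idea, not Merity's actual SHA-RNN architecture (which adds boom layers, layer norm, caching, etc.):

```python
# Minimal sketch: a single attention head over LSTM hidden states.
# Simplified illustration only, not the SHA-RNN from the paper.
import torch
import torch.nn as nn

class SingleHeadAttnLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.query = nn.Linear(hidden_dim, hidden_dim)   # one head: a single query projection
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))             # (B, T, H)
        q = self.query(h)
        scores = torch.bmm(q, h.transpose(1, 2))         # (B, T, T) attention scores
        mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
        scores = scores.masked_fill(mask, float("-inf")) # causal mask: no peeking ahead
        attn = torch.softmax(scores / h.size(-1) ** 0.5, dim=-1)
        ctx = torch.bmm(attn, h)                         # attention-weighted context
        return self.out(torch.cat([h, ctx], dim=-1))     # next-token logits from state + context

model = SingleHeadAttnLSTM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 16)))         # shape (2, 16, 10000)
```

The point of the paper is that even this kind of deliberately small, single-headed setup gets surprisingly close to much larger multi-head Transformer language models.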
Stumbled upon this, short summaries on selected machine learning papers:
“http://www.marekrei.com/blog/74-summaries-of-machine-learning-and-nlp-research/“
URL is broken, gives a 404 error.
The last "
is being included in the url
Thanks @urmas.pitsi , this looks like an excellent source. Could you please edit your post to fix the link?
Sorry, my bad! Thanks for noticing! Hopefully I have fixed the link now.
Pretty neat blog post about compressing BERT, with various related papers and results.
This is very interesting work on transfer learning for computer vision with medical image data (covers NN architectures and initializations):
Your “deep double descent” reading list:
(Optional but very nice) Intro blog post:
Deep double descent blog post:
(Optional but very nice) Additional summary/comment:
(You can find the links to the official papers in the blog posts.)
Deep Learning for Symbolic Mathematics is now on arXiv: https://arxiv.org/abs/1912.01412