Theory
How to understand “individual neurons are the basis directions of activation space”?
(3)
Crazy Thoughts -- Residuals, Transfer, Capsules
(1)
New AlphaGo Zero reinventing Go without data
(4)
Why do we use accuracy as a metric rather than something like F1 score/AUC?
(4)
Does deep learning require dense data and work badly with sparse data?
(2)
3D face reconstruction from a single image
(1)
Theoretical ML book with solutions
(2)
Can we calculate the impact of an image on the model?
(6)
Loss on hierarchical categories
(1)
Ridge regression for model ensembling - why do we want to use it?
(1)
SWISH: Google researchers found new activation function to replace ReLU
(7)
Vocabulary Complexity?
(1)
New MOOC: deeplearning.ai
(27)
How good is "Differentiable Neural Computers"?
(1)
How to deal with missing labels when you have multiple losses?
(5)
Modern way to estimate a homography matrix (with a lightweight CNN)
(1)
Writeup about Neural Machine Translation
(1)
Father of AI says we need to start over
(1)
RNN Design Guidelines?
(5)
How to transform a 4-point parameter matrix into a homography matrix
(2)
Cyclic Cosine Annealing
(5)
Question on weight initialization
(1)
Clockwork RNNs
(2)
Question about gradient boosting
(1)
Why does AdaGrad work?
(1)
Uncertainty in Deep Learning (Bayesian networks, Gaussian processes)
(2)
Journey down the nostalgia lane: A paper on CNNs from 1989
(1)
Training RNNs as fast as CNNs
(1)
How to train the RPN in Faster R-CNN?
(2)
New results on transfer learning
(1)