Jeremy AMA

Isn’t DL more of a plug-and-play affair (adding tricks, changing a few things, substituting one component for another), with people trying to reason it out afterwards?
Isn’t that the reverse of how science actually works? (First comes the theory.)

1 Like

If I may, @ecdrid: I think scientific laws are determined by extensive experimentation as well.
So for both, it’s experimentation — “plug and play” driven by intuition, and sometimes by counter-intuitive approaches.

1 Like

In part 1 v2 there were many datasets from Kaggle, and Kaggle was recommended as a great way to learn DL.
In part 1 v3 there is only one (?) dataset from Kaggle, and I don’t recall hearing a recommendation to do Kaggle competitions. What’s the reason behind that?

2 Likes

There is an opposing view: Nassim Taleb’s book Antifragile argues that science often happens the other way around — the practice comes before the theory.

4 Likes
  1. How to design custom loss functions.
  2. How to validate if the new loss function is suitable.
  3. How to combine multiple loss functions and validate if they will work well.
  4. After understanding the problem, how to tell that a given loss function will not work on its own and should be combined with another one.
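The four points above can be sketched concretely. Below is a minimal PyTorch example, assuming a classification setting: a custom loss that combines cross-entropy with a confidence (entropy) penalty, plus one common sanity check for validating any new loss — confirming that a small model can drive it down on a tiny fixed batch. The `CombinedLoss` class, the `alpha` weighting, and the penalty term are all illustrative assumptions, not anything specific from the course.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedLoss(nn.Module):
    """Hypothetical combined loss: cross-entropy minus a weighted
    entropy bonus, which penalizes over-confident predictions."""
    def __init__(self, alpha=0.7):
        super().__init__()
        self.alpha = alpha  # trade-off between the two components

    def forward(self, preds, targets):
        ce = F.cross_entropy(preds, targets)
        probs = F.softmax(preds, dim=1)
        # Mean predictive entropy; higher entropy = less confident.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        return self.alpha * ce - (1 - self.alpha) * entropy

# Sanity check: a usable loss should decrease when optimized directly
# on one small batch (the "overfit a tiny batch" test).
torch.manual_seed(0)
model = nn.Linear(10, 3)
loss_fn = CombinedLoss(alpha=0.7)
x, y = torch.randn(16, 10), torch.randint(0, 3, (16,))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
first = loss_fn(model(x), y).item()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
assert loss.item() < first  # the loss is trainable on this batch
```

For point 3, the same pattern generalizes: keep each component loss as its own module, combine them with explicit weights, and log each component separately during training so you can see which one dominates.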
1 Like

Often when I’m working with ML in general, I come across tricks that sometimes work and sometimes don’t; or something I try works, but I’m discouraged from using it because it isn’t best practice. How should I validate my experiments, given that you have done quite a lot of them?

Where do you even come up with the ideas for your experiments, and how could we, as your students, start trying them with a proper scientific method? For example, the dropout experiments you introduced in lesson 6 were mind-blowing. They were so simple, yet few had ever tried them. How should we do such experiments — or rather, how should we think about them?
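Not an answer from Jeremy, but the experimental loop being asked about can be sketched in a few lines: vary one hyperparameter (here, dropout probability), hold everything else fixed, and compare a single metric. The toy data, model, and choice of dropout values below are all placeholders, not the actual lesson-6 experiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Toy regression data standing in for a real dataset.
x_train, y_train = torch.randn(256, 20), torch.randn(256, 1)
x_val, y_val = torch.randn(64, 20), torch.randn(64, 1)

def run(p):
    """Train a small net with dropout probability p; return val loss."""
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                          nn.Dropout(p), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = F.mse_loss(model(x_train), y_train)
        loss.backward()
        opt.step()
    model.eval()  # disables dropout for evaluation
    with torch.no_grad():
        return F.mse_loss(model(x_val), y_val).item()

# One controlled variable, everything else fixed, one metric out.
results = {p: run(p) for p in (0.0, 0.25, 0.5)}
print(results)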

If you look into the history of how thermodynamics developed, you will be surprised to find that it was engineers who actually solved major parts of it, and only much later did Carnot, Joule and Boltzmann theorize it properly in the form we know today. Steam engines were being developed and perfected as early as the 1720s, whereas the Carnot cycle was only proposed in 1824. That is why I find the study of deep learning fascinating: we have only a scanty idea of why it works, yet it is the defining tool of the 21st century, much as steam and electricity were of the Industrial Age.

4 Likes

Good point. And the development of the microscope and telescope led to experimental results which in turn led to breakthroughs in theory.

6 Likes

But, is there ever such a thing as a ‘wrong’ direction? (especially in science)

Yes. Greed.

1 Like

Will we be able to use Dask and fast.ai together one day for big data?

2 Likes

I haven’t worked much with GANs before, but it looks like GANs pose a challenge to the deepfake-detection problem. I was wondering how GANs themselves could be used to solve it. I’m not sure whether any papers have discussed using GANs for this; all I have come across is building augmented data for training and testing.

Yes plenty.



1 Like

Thanks @digitalspecialists
Great — it was a vague thought, but it turns out it already exists. I was wondering whether there is a GitHub repository implementing any of those papers, or a link to help understand how the model works…