Beginner: Beginner questions that don't fit elsewhere ✅

Oh that’s a good point. I forgot I was testing out Kaggle’s new (to me) quick save option. Will try a “Save & Run All” and see if that fixes it.


If your downloaded directory is /kaggle/working/mydata you can do it like this:

#zip it up
!zip -r /kaggle/working/mydata.zip /kaggle/working/mydata

# see if the file is there
!ls -lrt | tail

# get a link to it
from IPython.display import FileLink
FileLink('mydata.zip')

The last cell will print an HTML link in the output; click on it and it will download to your desktop.


Thank you Jeremy.

On the 02_production.ipynb notebook, I want to download images using the Bing image search. I have an MS Azure account already, but when I try to create a Bing Search resource, the pricing tier shows “no available items”. Is there another way to get my Azure search key?


Please don’t use Azure. You can download from DuckDuckGo, which doesn’t require a key. If you search for image_search_ddg on the forums you’ll find the code to do that.


Below is from Jeremy’s bird detector example on Kaggle (he demoed it in the lecture):

from fastcore.all import *
import time

def search_images(term, max_images=200):
    url = 'https://duckduckgo.com/'
    res = urlread(url, data={'q': term})
    searchObj = re.search(r'vqd=([\d-]+)\&', res)
    requestUrl = url + 'i.js'
    params = dict(l='us-en', o='json', q=term, vqd=searchObj.group(1), f=',,,', p='1', v7exp='a')
    urls,data = set(),{'next':1}
    while len(urls)<max_images and 'next' in data:
        data = urljson(requestUrl, data=params)
        urls.update(L(data['results']).itemgot('image'))
        requestUrl = url + data['next']
        time.sleep(0.2)
    return L(urls)[:max_images]

3 posts were merged into an existing topic: Non-Beginner Discussion

Thanks Mike for the help! I was searching for that too.


If I’ve done learn.fine_tune(3)
and then, looking at error_rate, decide I want to do a few more epochs of training,
is there a way to continue from where that finished… i.e. learn.fine_tune_more(3)
rather than needing a full restart, like… learn.fine_tune(6)?

Hi team, how would a transcription engine work? I assume it grabs audio, uses something like librosa to get a spectrogram, somehow converts each sound into an image, then runs it through a model to predict a letter from the image.
Maybe a different model then builds words from the letters and pauses.

I know this is not 100% an AI question :frowning: sorry about that.
I did lots of research on this which ended up nowhere.
Would someone be able to point me in the right direction? Thanks

I think that when you call learn.fine_tune(6) after having already run learn.fine_tune(3), the model will not retrain from scratch but will start from the already fine-tuned weights. So unless you create the learner again, e.g. by calling learn = vision_learner(...), the model actually continues training.

Another question is what’s the best way to continue fine-tuning (how many epochs, which learning rate, how to handle the learning rate schedule, etc.). I am not 100% sure about the answer to that, but you might want to try learn.fit_one_cycle(...) instead of learn.fine_tune(...) to continue training, since fine_tune would first re-run its frozen warm-up epoch. I’m sure Jeremy will cover these concepts in detail in the upcoming lessons.
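To see why simply calling fit again continues rather than restarts, here is a toy gradient-descent sketch in plain NumPy (not fastai): running 3 epochs and then 3 more on the same weights matches running 6 epochs from a fresh start, as long as the model isn’t re-created in between.

```python
import numpy as np

# Toy linear model trained by full-batch gradient descent on mean squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])       # true weights to recover

def fit(w, epochs, lr=0.1):
    """Run `epochs` gradient steps, mutating the weights in place."""
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w_a = fit(np.zeros(3), 3)    # like fine_tune(3)...
w_a = fit(w_a, 3)            # ...then 3 more epochs on the same weights
w_b = fit(np.zeros(3), 6)    # like fine_tune(6) on a freshly created model
print(np.allclose(w_a, w_b))  # True: continuing equals one longer run
```

The same logic is why re-creating the learner (new random head weights) is the only thing that actually restarts training.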


Yes that’s the basic idea, more or less. The details definitely aren’t appropriate for a beginner topic however! :wink:

Thank you for your response. I didn’t know where to ask that question, hence I asked it here.
Do you have any documents or references from which I can study this? I am hoping to work on this as my project. :slight_smile:
Or in which thread would you like me to ask this question?

I’m not sure it’s an ideal project if you’re a beginner - it’s generally something I’d recommend after a year or so of deep learning study and practice. The problem I’ve seen with really bold starting projects is that I see few students finishing them.

But if you’re confident of your tenacity, then give it a go! Maybe start here: Speech Recognition with Wav2Vec2 — PyTorch Tutorials 1.11.0+cu102 documentation
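If you just want a feel for the audio-to-image step the question describes, here is a minimal NumPy-only spectrogram sketch. Real pipelines use librosa or torchaudio, and the window and hop sizes below are arbitrary illustrative choices:

```python
import numpy as np

def spectrogram(audio, n_fft=256, hop=128):
    """Turn a 1-D audio signal into a 2-D time-frequency 'image':
    the log magnitude of a windowed short-time Fourier transform."""
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft, hop)]
    stft = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
    return np.log1p(stft).T                     # frequency x time

sr = 8000                                       # sample rate in Hz
t = np.arange(sr) / sr                          # one second of samples
img = spectrogram(np.sin(2 * np.pi * 440 * t))  # a pure 440 Hz tone
print(img.shape)                                # (129, 61): freq bins x time frames
```

The resulting 2-D array is what gets fed to an image model; the 440 Hz tone shows up as a single bright horizontal band.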


I did some experimenting with the hyper-parameters (if that is the correct terminology) of my Which Watersport project (code on kaggle) to develop my sense of their effect. While recognising this is a pinhole view from a single use case (a dataset with 37 categories), it examines:

  • successive application of multiple fine_tune calls
  • comparing 4 x fine_tune(5), 2 x fine_tune(10), 1 x fine_tune(20)
  • comparing resnet18 versus resnet34
  • comparing Resize(400) versus RandomResizedCrop(400)

The progressive epoch error is visualised in this chart, which I’ve annotated with a few features…

Raw data is in a google sheet here. (Note: the chart was actually generated in Excel, since it has a better charting tool.) One thing that initially struck me as really bizarre is the great number of six-decimal-place numbers that are repeated. I’ve since speculated that the 20% size of the validation set means there are only so many fractions that can be formed by this equation…
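That speculation is easy to check in a couple of lines: with N items in the validation set, error_rate = wrong/N can take only N+1 distinct values, so exact repeats across runs are expected (N below is a made-up validation-set size, not the one from my dataset):

```python
# With N validation items, error_rate = wrong / N has only N + 1 possible values.
N = 1475                                   # hypothetical validation-set size
possible = sorted({k / N for k in range(N + 1)})
print(len(possible))                       # 1476 distinct achievable error rates
print(round(possible[1], 6))               # smallest nonzero step: 1/N = 0.000678
```

Any two runs whose error rates differ by less than 1/N will therefore print the identical six-decimal number.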

Another curious thing, looking at the [resnet34 RRC150] tab, is that
RandomResizedCrop(150) with resnet18 performed much worse than the others.
I am wondering if the crop size might be too small, so that too many irrelevant crops containing only water are processed?

And that leads me to wondering… Lesson Three [20:48] indicates that the transforms done during training are repeated during inference. Is it possible that RandomResizedCrop is causing inference to be performed on crops containing only water, which would be really hard to classify to a particular sport?


In Lesson Four…

Found it available for free here: Python for data analysis


I really like the Wes McKinney book mentioned by Jeremy and I’ve been trying to go through it along with Jake VanderPlas’s book. I really would like some resource that takes a dataset and goes through a project and during that process, it uses various features of these libraries to accomplish those tasks. The book is more a reference type book which describes various capabilities of the library quite well, but for beginners the problem is two fold: learning the moving parts of the library and applying that to a goal-oriented project type problem.

I remember taking the John’s Hopkins course on Data Science and I found it really helpful that I learned R while in the process of trying to do various class assignments. Their approach was to ask students to do small tasks using R and those tasks built up to a data product over the course of the few weeks as we progressed towards the final problem/project to be solved.

P.S. Daniel Chen has been doing some really great Pandas tutorial videos, but for some reason most of these things quickly devolve into: “This function does this… to get the 0th axis you call this function” without telling me why the heck I would need to get the 0th axis to begin with, or what problem I should apply this reference information to. Not knocking it; it’s just super difficult for experts to get back into the beginner mindset despite trying really hard to help beginners.


One of my favourite AI YouTube channels, Two Minute Papers,
indicates Weights and Biases provides useful insight into NN training. Has anyone used it? (Note: they are his sponsor.)

btw, here is one of my all time favourite vids from the channel…


Hi! I am applying DL to clinical prediction research with x-ray and clinical data. As clinicians, we have to have more than one thing (x-ray, lab result, or clinical observation) to diagnose. I want to do the same with DL by adding the layer (layer may not be the right term… perhaps a different prediction model ensembled together) of prediction from clinical data and the layer from lab data onto the x-ray prediction. Is there any good example webpage where I can start understanding how to do this?


We used matrix multiplication in our linear models and in the Excel implementation as well.
However, in some literature I have encountered the “dot product”. Are they the same for our use, or are there scenarios where one makes more sense than the other? I am a bit confused about this one. Should/can I just ignore “dot product” and use matmul everywhere?


I don’t have an authoritative answer, but it was an interesting question that led me to these:

Do they help?
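For what it’s worth, a quick NumPy sketch of the relationship: the dot product is the 1-D case, and matrix multiplication is just many dot products, one per row-column pair, so np.dot and the @ (matmul) operator agree for both:

```python
import numpy as np

# 1-D: dot product and matmul give the same scalar.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(a @ b, np.dot(a, b))               # 32.0 32.0

# 2-D: both perform matrix multiplication, where entry (i, j)
# is the dot product of row i of A with column j of B.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(A @ B, np.dot(A, B)))  # True
print(A @ B)                             # [[19. 22.] [43. 50.]]
```

(They only diverge for arrays with three or more dimensions, where @ broadcasts stacks of matrices while np.dot uses a different axis rule, so for the course material using matmul everywhere is fine.)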
