Lesson 2 In-Class Discussion ✅

(Keerat Singh) #759

That makes so much more sense. That also explains where the non-linearity in the model comes from. Thank you so much, Lucas!

(aradhana) #760

Hi,

I am getting the error below from the Lesson 2 script.

Any idea?



I am still having trouble understanding the distinction between an asynchronous web framework such as Starlette and a synchronous one such as Flask. Can someone explain in layman's terms why asynchronous frameworks work well for model inference? Thanks in advance!

```python
@app.route('/analyze', methods=['POST'])
async def analyze(request):
    data = await request.form()
    img_bytes = await (data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})
```



How do you choose the best learning rate?

(Qi Zhou ) #763

Hi there, I have a little puzzle.
When I run the ‘too few epochs’ part:

```python
learn = create_cnn(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
```

It is supposed to give train_loss > valid_loss, but somehow I got this:
What am I doing wrong?


Did anyone try to train another model with the cleaned dataset?
I am struggling to create an ImageDataBunch with the ImageDataBunch.from_csv() constructor. The cleaned.csv file generated by DatasetFormatter and saved in the ‘data/bears’ directory doesn’t work with the from_csv() method. How should one use from_csv() to create the cleaned dataset?
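For context, here is a minimal pure-Python sketch of the layout cleaned.csv is assumed to have (two columns, `name` and `label`, with image paths relative to the data folder); the file is mocked in memory here, and the fastai call in the trailing comment is only a rough guess at the intended usage:

```python
import csv
import io

# A mock of the cleaned.csv that DatasetFormatter writes (assumption:
# two columns, 'name' holding paths relative to the data folder and
# 'label' holding the class).
mock_cleaned_csv = io.StringIO(
    "name,label\n"
    "black/00000001.jpg,black\n"
    "teddys/00000002.jpg,teddys\n"
    "grizzly/00000003.jpg,grizzly\n"
)

rows = list(csv.DictReader(mock_cleaned_csv))
labels = sorted({row["label"] for row in rows})
print(labels)  # ['black', 'grizzly', 'teddys']

# With that layout, the fastai v1 call would look roughly like:
#   data = ImageDataBunch.from_csv(path, folder='.', csv_labels='cleaned.csv',
#                                  valid_pct=0.2, size=224)
# i.e. from_csv expects the paths in the csv to be relative to
# path/folder, which is a common source of file-not-found errors here.
```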

Moreover, after deleting the files with the DatasetFormatter and performing ImageDataBunch.from_folder() as at the beginning of the notebook, I got the same statistics for the number of examples in the training and validation datasets.

From the Lesson 2 notebook (which is clearly talking about deleting files):

Flag photos for deletion by clicking ‘Delete’. Then click ‘Next Batch’ to delete flagged photos and keep the rest in that row. ImageCleaner will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from top_losses.

```python
ImageCleaner(ds, idxs)
```

Thank you for your help!

(Kieran) #765

Hey Jeff

Really simply: the code runs in the order it is written. However, if some part of the code takes time, in this case the request.form(), the code after it would otherwise continue running before the request has fully completed.

That would generally result in errors, because the following code depends on the response of request.form().

asynchronous, or async/await, is a way of telling the program to wait. In this case:
data = await request.form() … the await call halts this coroutine until request.form() is fully complete, while freeing the server to handle other requests in the meantime.

It can definitely be tricky - but keep at it and it becomes pretty simple. Check this out for some more async await info: https://www.youtube.com/watch?v=XO77Fib9tSI
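To make that concrete, here is a minimal sketch using Python's standard asyncio (the names and delays are made up): while one coroutine is suspended at an `await`, the event loop can run another, which is why an async framework can serve other requests while one request waits on I/O.

```python
import asyncio

async def handle_request(name: str, delay: float) -> str:
    # Stand-in for a slow operation such as reading an uploaded file.
    # `await` suspends this coroutine so the event loop can run others.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both "requests" run concurrently: total wall time is roughly
    # max(0.2, 0.1) seconds, not 0.2 + 0.1 seconds.
    return await asyncio.gather(
        handle_request("A", 0.2),
        handle_request("B", 0.1),
    )

results = asyncio.run(main())
print(results)  # ['A done', 'B done']
```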

(Kieran) #766

That is all exactly as it is written in the notebook.

My guess would be that it's the data variable that is wrong. It may have gotten messed up somewhere along the way. I would try re-running the notebook after a reset, making sure you do the image-download folder and file step correctly. See the “Create directory and upload urls file into your server.” section.

To be sure your dataset is correct, look in your path (it should be ‘data/bears’) and there should be three folders: ‘black’, ‘teddys’ and ‘grizzly’.

If you just run the code from top to bottom you will probably only have a ‘grizzly’ folder.

Hope that helps…

(Kieran) #767

Hey Preka

Running learner.fit_one_cycle(2, slice(lr)) is different from running learner.fit_one_cycle(1, slice(lr)) twice.

fit_one_cycle refers to the way the model handles the mini-batches: the learning rate is warmed up and then annealed over one cycle, and the first argument sets how many epochs that single cycle spans. Does that make sense?

Check this for more details. https://medium.com/@nachiket.tanksale/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6
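To show the difference concretely, here is a toy sketch of a one-cycle-style learning-rate schedule (a simplification of what fit_one_cycle does internally; the exact curve shape and constants are assumptions). A single cycle over 2 epochs warms up and anneals once, while two 1-epoch cycles restart the warm-up halfway through, so the step-by-step schedules differ:

```python
import math

def one_cycle_lrs(n_steps: int, lr_max: float, pct_warmup: float = 0.3):
    """Toy one-cycle schedule: cosine warm-up from lr_max/10 to lr_max,
    then cosine anneal back down. A simplification of fastai's policy."""
    warm = int(n_steps * pct_warmup)
    lrs = []
    for step in range(n_steps):
        if step < warm:
            t = step / max(warm, 1)  # 0 -> 1 during warm-up
            lr = lr_max / 10 + (lr_max - lr_max / 10) * (1 - math.cos(math.pi * t)) / 2
        else:
            t = (step - warm) / max(n_steps - warm, 1)  # 0 -> 1 during anneal
            lr = lr_max * (1 + math.cos(math.pi * t)) / 2
        lrs.append(lr)
    return lrs

steps_per_epoch = 10
one_long_cycle = one_cycle_lrs(2 * steps_per_epoch, lr_max=1e-3)
two_short_cycles = one_cycle_lrs(steps_per_epoch, lr_max=1e-3) * 2

# The schedules differ: the single cycle warms up once and anneals once,
# while two short cycles restart the warm-up halfway through.
print(one_long_cycle == two_short_cycles)  # False
```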

(Qi Zhou ) #768

Thanks. Actually, I wrote a loop to load the data, and I double-checked it, so that’s probably not the case. To figure it out, I tried different models with and without pretrained weights, and I got this:

The pretrained models seem more reasonable, but I still don’t know why :confused:

(Kieran) #769

What happens if you run more epochs on it?
Are you using your own dataset or are you using the bears?

Confusing !!

(Ooi CY) #770

hey fast.ai folks :slight_smile: As a classical music buff, I created a CNN (with a 4.5% error rate) based on the “download” notebook; it classifies grand pianos, upright pianos and violins. (I am stoked!)

But before creating this instruments classifier, I had tried creating a few different classifiers that failed terribly; their error rates were 30-40%:

  1. a car brand classifier, i.e. a CNN that is supposed to differentiate between BMW and Mercedes cars.
  2. a dinosaur classifier, i.e. a CNN that needs to differentiate between a T-rex, a triceratops and a velociraptor.

Do you know the possible reasons why certain types of datasets seem to work really well, while others don’t? Thanks!

CY Ooi