Lesson 1 In-Class Discussion ✅

The homework was given at the end of the video. From what I remember, it's to run the notebook and try building a classifier using your own dataset/images.

Hello all,
I have a small query. I was downloading the pets dataset using URLs.PETS, but I got this error instead.

HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /fast-ai-imageclas/oxford-iiit-pet.tgz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fc1a53d45c0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))

When I tried to access the link manually, I got this screen. Please find the attachment.
Is there any other way to access the dataset?

The detailed course notes for Lesson 1 tell you the timestamp in the video and explain the homework part too.

If you’re running this in Kaggle, then make sure you have internet enabled and restart your kernel after you enable internet.

What a great start! Thank you all for being part of a vibrant community.

For homework, I tried to classify beaches vs swimming pools. I was able to follow the instructions using the Selenium driver to scrape images. With a little bit of cleaning, I had about 250 images, and the train/validation split was 9:1.
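In case it helps anyone reproducing this, a 9:1 split like the one above can be sketched in plain Python (the filenames here are made up, just to stand in for the scraped images):

```python
import random

# hypothetical list standing in for ~250 scraped image paths
images = [f"img_{i:03d}.jpg" for i in range(250)]

random.seed(42)              # make the split reproducible
random.shuffle(images)

n_valid = len(images) // 10  # hold out 10% -> a 9:1 split
valid, train = images[:n_valid], images[n_valid:]

print(len(train), len(valid))  # 225 25
```
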

Using resnet38, I got the following loss and error.

[screenshot: fit_one_cycle output]

I tried unfreezing, fine-tuning, and adjusting the learning rate, and now I got an error rate of 0.

Question: did I test with too few samples? The results seem unrealistic.

Thanks for your insights!

What do your top losses look like?

Yes, it was a silly mistake that I didn't notice… The issue has been resolved. Thank you :slight_smile:

Edit:
Probably solved: I ran the whole notebook once, and on a second pass things got weird, I guess because variable names get re-used further down. (Now I have to figure out how to undo this, but that sounds manageable.)


Original Question:

In the very first learn.fit_one_cycle my final error rate is 0.4 instead of 0.06. I didn't touch anything; how come?
Mine: [screenshot of my fit_one_cycle results]

I also noticed some slight deviations in the notebook; could that be related? (Mine, e.g., uses create_cnn() instead of ConvLearner(), but I thought I probably have the newer fast.ai library, not one lagging behind.)

Hi, I've just started on lesson 1. I'm getting the following error when running the path = untar_data(URLs.PETS); path command.

The error I get is below. I'm on Windows 10, Python 3.7.1, and running the script in an Anaconda virtual environment. This is on Jupyter on my local computer. I saw an earlier post suggesting changing the address to add ".tgz" to the end. If I add that to the URL I can download in my browser just fine, but I get the same error message when trying to pull via Jupyter. Any ideas?

Downloading https://s3.amazonaws.com/fast-ai-imageclas/oxford-iiit-pet

Error Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\contrib\pyopenssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname)
452 try:
--> 453 cnx.do_handshake()
454 except OpenSSL.SSL.WantReadError:

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\OpenSSL\SSL.py in do_handshake(self)
1914 result = _lib.SSL_do_handshake(self._ssl)
-> 1915 self._raise_ssl_error(self._ssl, result)
1916

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\OpenSSL\SSL.py in _raise_ssl_error(self, ssl, result)
1646 else:
-> 1647 _raise_current_error()
1648

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\OpenSSL\_util.py in exception_from_error_queue(exception_type)
53
---> 54 raise exception_type(errors)
55

Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]

During handling of the above exception, another exception occurred:

SSLError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
593 if is_new_proxy_conn:
--> 594 self._prepare_proxy(conn)
595

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\connectionpool.py in _prepare_proxy(self, conn)
804 conn.set_tunnel(self._proxy_host, self.port, self.proxy_headers)
--> 805 conn.connect()
806

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\connection.py in connect(self)
343 server_hostname=server_hostname,
--> 344 ssl_context=context)
345

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\util\ssl_.py in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir)
343 if HAS_SNI and server_hostname is not None:
--> 344 return context.wrap_socket(sock, server_hostname=server_hostname)
345

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\contrib\pyopenssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname)
458 except OpenSSL.SSL.Error as e:
--> 459 raise ssl.SSLError('bad handshake: %r' % e)
460 break

SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)

During handling of the above exception, another exception occurred:

MaxRetryError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
637 retries = retries.increment(method, url, error=e, _pool=self,
--> 638 _stacktrace=sys.exc_info()[2])
639 retries.sleep()

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\urllib3\util\retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
397 if new_retry.is_exhausted():
--> 398 raise MaxRetryError(_pool, url, error or ResponseError(cause))
399

MaxRetryError: HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: /fast-ai-imageclas/oxford-iiit-pet.tgz (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))

During handling of the above exception, another exception occurred:

SSLError Traceback (most recent call last)
in
----> 1 path = untar_data(URLs.PETS); path

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\fastai\datasets.py in untar_data(url, fname, dest, data, force_download)
160 shutil.rmtree(dest)
161 if not dest.exists():
--> 162 fname = download_data(url, fname=fname, data=data)
163 data_dir = Config().data_path()
164 assert _check_file(fname) == _checks[url], f"Downloaded file {fname} does not match checksum expected! Remove that file from {data_dir} and try your code again."

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\fastai\datasets.py in download_data(url, fname, data)
142 if not fname.exists():
143 print(f'Downloading {url}')
--> 144 download_url(f'{url}.tgz', fname)
145 return fname
146

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\fastai\core.py in download_url(url, dest, overwrite, pbar, show_progress, chunk_size, timeout, retries)
165 s = requests.Session()
166 s.mount('http://', requests.adapters.HTTPAdapter(max_retries=retries))
--> 167 u = s.get(url, stream=True, timeout=timeout)
168 try: file_size = int(u.headers["Content-Length"])
169 except: show_progress = False

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\requests\sessions.py in get(self, url, **kwargs)
544
545 kwargs.setdefault('allow_redirects', True)
--> 546 return self.request('GET', url, **kwargs)
547
548 def options(self, url, **kwargs):

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\requests\sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
531 }
532 send_kwargs.update(settings)
--> 533 resp = self.send(prep, **send_kwargs)
534
535 return resp

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\requests\sessions.py in send(self, request, **kwargs)
644
645 # Send the request
--> 646 r = adapter.send(request, **kwargs)
647
648 # Total elapsed time of the request (approximately)

~\AppData\Local\Continuum\anaconda3\envs\torch\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
512 if isinstance(e.reason, _SSLError):
513 # This branch is for urllib3 v1.22 and later.
--> 514 raise SSLError(e, request=request)
515
516 raise ConnectionError(e, request=request)

SSLError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /fast-ai-imageclas/oxford-iiit-pet.tgz (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))

Weird results with plot_top_losses. I'm not getting the actual images; instead I get heatmap-type images like these.

Can someone tell me how to fix this please?

Since you're able to download the file at the URL via the browser but not via Python, it seems the issue is with your Python install. Maybe check that you have the required versions of Python and the packages installed?

The error indicates SSL certificate verification is failing.

It turns out it was a firewall issue. I'm sitting behind a corporate proxy, so I needed to turn SSL verification off (which I'm fine doing in this case, as it's a trusted source). I did that by adding this block of code:

import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
        getattr(ssl, '_create_unverified_context', None)):
    ssl._create_default_https_context = ssl._create_unverified_context
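A less drastic alternative, if you can get hold of your company's root certificate, is to keep verification on and just point requests at the corporate CA bundle via an environment variable (the .pem path below is a placeholder, not a real file):

```python
import os

# Keep TLS verification enabled but trust the corporate proxy's CA.
# The path is a placeholder; export the certificate from your browser
# or ask your IT department for it.
os.environ['REQUESTS_CA_BUNDLE'] = r'C:\certs\corp-root-ca.pem'
```

requests reads REQUESTS_CA_BUNDLE at request time, so setting it before the fastai download call is enough.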

lesson1-fastai
This is the link to the Jupyter notebook with my modification of lesson 1. I tried to classify between three types of shots in cricket. (This is my first ever experience with deep learning, so kindly bear with me if it is too bad.) I have several doubts about this:
1. How much data is needed to train an image classification model? (I used just 18 images in total, 6 of each class.)
2. I can't interpret the learning rate recorder plot. It would be really helpful if someone could help me out with this.

I will be glad if anyone can look at my code and point out my mistakes.

@Wils

  1. Not sure, but probably more for such a nuanced problem (I'm just at lesson 2 though, so no expert). Just see how good the predictions get :). Keep in mind that a few of the images are used for validation, not for training, which leaves you with even fewer.
  2. Do you know how to decode the 7e-2 spelling? Between 1e-2 (0.01) and 1e-3 (0.001) there would be 5e-3 (0.005). Type it into DuckDuckGo to convert it to a normal float. Also, the scale is logarithmic; look at the little tick marks, they are not evenly distributed. The spelling in the video differs a bit from the graph.
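To make the 7e-2 decoding and the logarithmic scale concrete (plain Python, just illustrating the arithmetic):

```python
import math

# scientific notation: 7e-2 means 7 * 10**-2
assert float("7e-2") == 0.07
assert 5e-3 == 0.005

# on a logarithmic axis, the halfway point between 1e-3 and 1e-2
# is their geometric mean, not their arithmetic mean:
midpoint = math.sqrt(1e-3 * 1e-2)
print(round(midpoint, 5))  # 0.00316, i.e. roughly 3e-3
```

That geometric-mean midpoint is why the tick marks on the lr_find plot look unevenly spaced.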

I guess it’s a new feature. Was the same for me.
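For anyone who wants the plain images back: in recent fastai v1 versions, plot_top_losses takes a heatmap flag. A one-line sketch, assuming you already have a ClassificationInterpretation object called interp (as in the lesson notebook):

```python
# heatmaps are now shown by default; pass heatmap=False for plain images
interp.plot_top_losses(9, figsize=(11, 11), heatmap=False)
```
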

@LeonW Thanks for the help.

Could anybody explain more about the following in Lesson 1:

We run

fnames = get_image_files(path_img)

then

data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
                                  ).normalize(imagenet_stats)

Can't the above function get_image_files(path_img) be called inside from_name_re, since we are already passing the path of the images to it?
I don't believe it would make any sense to pass a path_img different from the one used to obtain fnames, so can't we save an extra step here? Please explain.
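As I understand it (I may be wrong), from_name_re takes fnames as a separate argument precisely so you can filter or subset the file list before building the DataBunch; internally it just applies pat to each name to get the label. The labelling step itself is easy to see in plain Python (the filenames below are made up in the style of the pets dataset):

```python
import re

# the lesson-1 pattern: the label is the part of the filename
# before the trailing _<digits>.jpg
pat = r'/([^/]+)_\d+.jpg$'

fnames = ['images/Abyssinian_1.jpg', 'images/great_pyrenees_173.jpg']
labels = [re.search(pat, fn).group(1) for fn in fnames]
print(labels)  # ['Abyssinian', 'great_pyrenees']
```

So the extra step buys flexibility: you could drop corrupt files or restrict to a few classes by editing fnames before the from_name_re call.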

What are train_loss, valid_loss, and error_rate?

Hi guys, I want to host this model with a Flask and React app. My question is: how do I pass a random dog image of the right size to this model and predict the breed?

My second question is: what if users pass some random pictures to this model? Does the accuracy of the model get worse, or what?
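Not a definitive answer, but the usual fastai v1 serving pattern is to call learn.export() in the notebook and then load the resulting export.pkl in the Flask process; the model path below is a placeholder:

```python
from io import BytesIO

from fastai.vision import load_learner, open_image  # fastai v1 API

# one-time, in the notebook: learn.export() writes export.pkl
learn = load_learner('path/to/export_dir')  # placeholder path

def predict_breed(image_bytes: bytes):
    # open_image accepts a file-like object; the size/normalization
    # transforms stored in the exported learner are applied inside
    # learn.predict, so the raw upload size doesn't matter
    img = open_image(BytesIO(image_bytes))
    pred_class, pred_idx, probs = learn.predict(img)
    return str(pred_class), float(probs[pred_idx])
```

On the second question: a softmax classifier can only ever output one of the classes it was trained on, so a random non-dog picture will still come back as some breed, often with high confidence. To catch those you'd have to threshold on the predicted probability or train with an extra "other" class.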