How to check what features are the intermediate layers learning in a particular model? And how can we display it in the notebook by highlighting those features in the image?
Any help is appreciated. Thanks.
After two days of searching and digging I’m starting to clear things up, but I feel like I need someone’s help on this.
[+] Can anyone explain what the rand_pad() function does, and why the tuple for the ds_tfms argument has to have two elements?
[+] Secondly, when normalizing the data, why do we pass imagenet_stats as the argument?
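For context, the calls I’m asking about look roughly like this (a rough sketch based on the fastai v1 docs; the dataset and the padding/size values are just placeholders):

from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)  # placeholder dataset
# ds_tfms expects a 2-element tuple: (transforms for the training set, transforms for the validation set)
tfms = (rand_pad(2, 28), [])
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=28)
# imagenet_stats holds the per-channel mean/std that ImageNet-pretrained backbones expect
data = data.normalize(imagenet_stats)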
Okay, I’m starting to watch the videos again. Third time’s a charm!
Hi everyone.
Could you suggest how to check whether CUDA is being used or not?
Thank you
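One quick way to check (a minimal sketch with plain PyTorch; learn here is just a placeholder for a fastai Learner, if you have one):

import torch

print(torch.cuda.is_available())        # True if PyTorch can see a CUDA device
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
# for a fastai Learner named `learn`, check where its weights actually live:
# print(next(learn.model.parameters()).device)   # e.g. cuda:0 or cpu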
Has anyone encountered this error when deploying an NLP model with Flask? I generated my ‘export.pkl’ and successfully made predictions in Jupyter. But when I try to do that in VS Code using a Flask endpoint, I get this attribute error.
#!/usr/bin/env python3
from flask import Flask
from flask import request
from pathlib import Path
import asyncio
import aiohttp
import uvicorn
from fastai import *
from fastai.text import *

export_file_url = 'XXXXXXXXXXXX'
export_file_name = 'exportmodel.pkl'

path = Path(__file__).parent
app = Flask(__name__)

async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f:
                f.write(data)

async def setup_learner():
    # download the exported model if needed, then load it with fastai
    await download_file(export_file_url, path / export_file_name)
    try:
        learn = load_learner(path, export_file_name)
        # print('########### {}'.format(learn))
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise

# load the learner once at startup, before Flask starts serving requests
loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()

@app.route('/')
def version():
    return {'result': 'server is up'}

@app.route('/analyze')
def analyze():
    prediction = learn.predict('we must', 50, temperature=0.75)
    print(prediction)
    # prediction = learn.predict(random.choice(unique_start_words), random.choice(tweet_count), temperature=0.75)
    # return {'result': prediction}
    return {'prediction': 'correct'}

if __name__ == '__main__':
    app.run(debug=True)
Hi.
Can anyone help me with this error? I’m trying to extract labels from a list of file paths using ImageDataBunch.from_name_re and ImageDataBunch.from_name_func, but I get the same error every time. I tried to update the Pillow library, but it seems to be using Python 3.7 whereas my conda environment is using Python 3.6.
First I extract the file paths using the function I created.
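Roughly, the setup looks like this (a sketch only; fn_paths and get_labels are the names from my code, and the label-function body here is just illustrative):

from fastai.vision import *
from pathlib import Path

path = Path('images')                  # placeholder for my image folder
fn_paths = list(path.glob('*.jpg'))    # stand-in for the paths my own function returns

def get_labels(fn):
    # illustrative label function: everything before the trailing _<digits>.jpg
    return Path(fn).name.rsplit('_', 1)[0]

data = ImageDataBunch.from_name_func(path, fn_paths, label_func=get_labels, size=24)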
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.local/lib/python3.7/site-packages/PIL/Image.py in open(fp, mode)
2846 try:
-> 2847 fp.seek(0)
2848 except (AttributeError, io.UnsupportedOperation):
AttributeError: 'PosixPath' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-21-415196e37479> in <module>
----> 1 data = ImageDataBunch.from_name_func(path, fn_paths, label_func = get_labels, size = 24)
~/anaconda3/lib/python3.7/site-packages/fastai/vision/data.py in from_name_func(cls, path, fnames, label_func, valid_pct, seed, **kwargs)
145 "Create from list of `fnames` in `path` with `label_func`."
146 src = ImageList(fnames, path=path).split_by_rand_pct(valid_pct, seed)
--> 147 return cls.create_from_ll(src.label_from_func(label_func), **kwargs)
148
149 @classmethod
~/anaconda3/lib/python3.7/site-packages/fastai/vision/data.py in create_from_ll(cls, lls, bs, val_bs, ds_tfms, num_workers, dl_tfms, device, test, collate_fn, size, no_check, resize_method, mult, padding_mode, mode, tfm_y)
95 "Create an `ImageDataBunch` from `LabelLists` `lls` with potential `ds_tfms`."
96 lls = lls.transform(tfms=ds_tfms, size=size, resize_method=resize_method, mult=mult, padding_mode=padding_mode,
---> 97 mode=mode, tfm_y=tfm_y)
98 if test is not None: lls.add_test_folder(test)
99 return lls.databunch(bs=bs, val_bs=val_bs, dl_tfms=dl_tfms, num_workers=num_workers, collate_fn=collate_fn,
~/anaconda3/lib/python3.7/site-packages/fastai/data_block.py in transform(self, tfms, **kwargs)
503 if not tfms: tfms=(None,None)
504 assert is_listy(tfms) and len(tfms) == 2, "Please pass a list of two lists of transforms (train and valid)."
--> 505 self.train.transform(tfms[0], **kwargs)
506 self.valid.transform(tfms[1], **kwargs)
507 if self.test: self.test.transform(tfms[1], **kwargs)
~/anaconda3/lib/python3.7/site-packages/fastai/data_block.py in transform(self, tfms, tfm_y, **kwargs)
722 def transform(self, tfms:TfmList, tfm_y:bool=None, **kwargs):
723 "Set the `tfms` and `tfm_y` value to be applied to the inputs and targets."
--> 724 _check_kwargs(self.x, tfms, **kwargs)
725 if tfm_y is None: tfm_y = self.tfm_y
726 tfms_y = None if tfms is None else list(filter(lambda t: getattr(t, 'use_on_y', True), listify(tfms)))
~/anaconda3/lib/python3.7/site-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs)
591 if (tfms is None or len(tfms) == 0) and len(kwargs) == 0: return
592 if len(ds.items) >= 1:
--> 593 x = ds[0]
594 try: x.apply_tfms(tfms, **kwargs)
595 except Exception as e:
~/anaconda3/lib/python3.7/site-packages/fastai/data_block.py in __getitem__(self, idxs)
118 "returns a single item based if `idxs` is an integer or a new `ItemList` object if `idxs` is a range."
119 idxs = try_int(idxs)
--> 120 if isinstance(idxs, Integral): return self.get(idxs)
121 else: return self.new(self.items[idxs], inner_df=index_row(self.inner_df, idxs))
122
~/anaconda3/lib/python3.7/site-packages/fastai/vision/data.py in get(self, i)
269 def get(self, i):
270 fn = super().get(i)
--> 271 res = self.open(fn)
272 self.sizes[i] = res.size
273 return res
~/anaconda3/lib/python3.7/site-packages/fastai/vision/data.py in open(self, fn)
265 def open(self, fn):
266 "Open image in `fn`, subclass and overwrite for custom behavior."
--> 267 return open_image(fn, convert_mode=self.convert_mode, after_open=self.after_open)
268
269 def get(self, i):
~/anaconda3/lib/python3.7/site-packages/fastai/vision/image.py in open_image(fn, div, convert_mode, cls, after_open)
396 with warnings.catch_warnings():
397 warnings.simplefilter("ignore", UserWarning) # EXIF warning from TiffPlugin
--> 398 x = PIL.Image.open(fn).convert(convert_mode)
399 if after_open: x = after_open(x)
400 x = pil2tensor(x,np.float32)
~/.local/lib/python3.7/site-packages/PIL/Image.py in open(fp, mode)
2847 fp.seek(0)
2848 except (AttributeError, io.UnsupportedOperation):
-> 2849 fp = io.BytesIO(fp.read())
2850 exclusive_fp = True
2851
AttributeError: 'PosixPath' object has no attribute 'read'
Hi, I am a beginner in data science and wanted to try an image classification problem.
I am trying to use the following code in a Kaggle kernel:
learn = cnn_learner(data1, models.resnet34, metrics=error_rate)
But I get this error and the download does not happen:
<urlopen error [Errno -3] Temporary failure in name resolution>
Any resolution?
Hey everyone!
Is it worth starting the fast.ai v3 course now, given that the v4 MOOC will be released in July? I’m relatively new to deep learning, so I don’t have the best judgement to decide. Will the differences in the libraries be worth the wait?
Thanks!
Even though fastai v4 has new features, learning v3 will still be a bonus, since all the core concepts are the same. So it’s better to learn v3 now, and you can check dev.fast.ai to see what’s new in v4.
Hi all, I am new to the course and I want to build a model that identifies different musical instruments. However, some of the images I plan on getting from Google Images will contain multiple of the instruments I am trying to detect, and I am not sure how to label these images. Can anyone offer some tips/advice on how to go about handling this? Thanks!
Hey guys, I’m a newbie to this course. I just started using the Paperspace fastai machine template and I’m getting really stuck. I’ve spent three hours trying to get Jupyter Notebook to open in my browser, but all I get is an error message saying “This site can’t be reached”. Can someone please help me fix this error? Thanks!
Hey Khalil, were you able to download images into your virtual machine? If so, can you let me know what kind of virtual machine you’re using and the process you took to download those images?
Thank you, Adam
Hey guys, I’m really new to fast.ai, so I’m a bit reluctant to post my question as a topic since I’m sure it has been answered before.
I wanted to get started on course-v3, part one, and while I was setting up my Jupyter Notebook I tried to get access to the files of the version 3 course, but I was unable to. When I try to change my directory it says it can’t access it, although it does have access to course-v4. What I see in the terminal is shown in the screenshot below:
I’m able to pull course-v4 just fine, but it does not align with the notebooks in this lesson.
How would you guys recommend getting access to the course-v3 files? I also tried to download and upload them locally, but that didn’t work out either. Thank you!
What you might have done is pressed ‘cd’ and then pressed enter. Thus you are in the second-highest directory ‘/root’, and you can’t run the commands from there because you aren’t in the right directory. What I would do (and what I think is the easiest way) is just go back to Jupyter Notebook (top left-hand corner of the screen) and then click on a terminal again. After starting the terminal, type ‘pwd’ to see what directory you are in. It should say ‘/notebooks’. After that you shouldn’t have any problems.
Thank you for your reply and screenshot! I found my mistake: when I made my notebook I had selected it to be for fast.ai v4, hence the reason I could not get into the course-v3 directory. I had assumed them to be the same, but I found the v3 version and everything works well.
Thanks for your help, I appreciate it.
Hello, fellow coders!
I finished Lesson 1: Deep Learning 2019 - Image classification! Thank you, thank you, please hold your applause.
Postmortem
No surprise, fast.ai did the lesson well, and I appreciate Mr. Howard’s plain teaching style. I’ve had unnecessarily pedantic teachers who get in the way of learning. I think there’s a place for both teaching styles, but for an intro course Mr. Howard nails it. It’s refreshing to learn an advanced topic applications-first, then theory; it boosts my motivation. My only feedback is to maybe record with a better microphone for v4. Sometimes when Mr. Howard says the letter “s,” it spikes the volume and makes it harder to listen at loud volume, which is important to me because I’m a little deaf.
You do a great job of framing the rapid progress in the field! It’s incredible how we can beat the state-of-the-art breed classification for dogs and cats from less than a decade ago with a few lines of code. I’m excited about the further lessons!
For homework, I want to build my own classifier; any dataset suggestions?
Follow-up Questions
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
I’m running the code on Google Colab, with the latest version of
!curl -s https://course.fast.ai/setup/colab | bash
It says to set recompute_scale_factor=True to keep the old behavior, yet I can’t find that as a parameter of the fit_one_cycle() function.
Does anyone else have this warning, and is there any way to disable it? Please, my notebook is filled with these warnings!
r'/([^/]+)_\d+.jpg$'
I’ve studied the basics of regular expressions, but I can’t figure out how this code works. Individually, what does [^/]+ do? I know the + means one or more of whatever is in the brackets, so does it mean the subgroup has to start with /? I also don’t understand what the brackets do here. I tried designing my regex with the modifier /(.*) to grab any character after the /, but it predictably catches the whole path, not just the end. Thanks for the help!
You can remove the warnings by following @oo92’s instructions here. This means
adding a new cell after the colab setup statement and running the command
!pip install "torch==1.4" "torchvision==0.5.0"
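Alternatively, if you would rather keep the newer torch install, a warnings filter should hide just that message (my own workaround, not from the course setup, so treat it as an assumption):

import warnings

# silence only the interpolate/upsample scale_factor deprecation warning
warnings.filterwarnings(
    "ignore",
    message="The default behavior for interpolate/upsample with float scale_factor"
)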
[^/]
is saying match any character that is not a slash.
[^/]+
is saying match one or more characters that are not a slash.
([^/]+)
is saying group the set that matches one or more characters that are not a slash.
/([^/]+)_
is saying group the set that matches one or more characters that are not a slash, prefixed by a slash and followed by an underscore. This will filter out any matches on the directory path (which does have slashes) if they don’t have underscores. It will only match the filename, which has an underscore separating the digits from the name of the species.
/([^/]+)_\d+
is saying the same as above, plus it should be followed by one or more digits.
/([^/]+)_\d+.jpg
is saying the same as above, plus it should be followed by any character plus the sequence jpg.
/([^/]+)_\d+.jpg$
is saying the same as above, but followed by the end of the string (i.e. no more characters).
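To make that concrete, here is a quick check with Python’s re module on a hypothetical pets-style path (the path itself is just an example):

import re

fn = '/root/.fastai/data/oxford-iiit-pet/images/great_pyrenees_173.jpg'  # example path
pat = r'/([^/]+)_\d+.jpg$'

m = re.search(pat, fn)
print(m.group(1))   # -> great_pyrenees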
Regexes are complicated to grok and it took me a while to understand them too. There are websites that allow you to play with regexes and can help you debug them if you are still having problems.
Hope this helps.
Best regards,
Butch
Thanks Butch, this helped a lot!