Lesson 1 official topic

Thank you @bencoman for your quick answer. Yes, it is the first notebook from the book. I checked the dependencies, and they look right.

The public notebook link: https://nbviewer.org/github/fastai/fastbook/blob/master/01_intro.ipynb

The full exception:


UnsupportedOperation                      Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_19780\1814909663.py in <module>
      2
      3 path = untar_data(URLs.IMDB)
----> 4 dls = TextDataLoaders.from_folder(path)
      5 dls.show(max_n=3)
      6 learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)

~\anaconda3\envs\torchCudaEnv\lib\site-packages\fastai\text\data.py in from_folder(cls, path, train, valid, valid_pct, seed, vocab, text_vocab, is_lm, tok_tfm, seq_len, splitter, backwards, **kwargs)
    253     if splitter is None:
    254         splitter = GrandparentSplitter(train_name=train, valid_name=valid) if valid_pct is None else RandomSplitter(valid_pct, seed=seed)
--> 255     blocks = [TextBlock.from_folder(path, text_vocab, is_lm, seq_len, backwards, tok=tok_tfm)]
    256     if not is_lm: blocks.append(CategoryBlock(vocab=vocab))
    257     get_items = partial(get_text_files, folders=[train,valid]) if valid_pct is None else get_text_files

~\anaconda3\envs\torchCudaEnv\lib\site-packages\fastai\text\data.py in from_folder(cls, path, vocab, is_lm, seq_len, backwards, min_freq, max_vocab, **kwargs)
    240 def from_folder(cls, path, vocab=None, is_lm=False, seq_len=72, backwards=False, min_freq=3, max_vocab=60000, **kwargs):
    241     "Build a TextBlock from a path"
--> 242     return cls(Tokenizer.from_folder(path, **kwargs), vocab=vocab, is_lm=is_lm, seq_len=seq_len,
    243                backwards=backwards, min_freq=min_freq, max_vocab=max_vocab)
    244

~\anaconda3\envs\torchCudaEnv\lib\site-packages\fastai\text\core.py in from_folder(cls, path, tok, rules, **kwargs)
    278 def from_folder(cls, path, tok=None, rules=None, **kwargs):
    279     path = Path(path)
--> 280     if tok is None: tok = WordTokenizer()
    281     output_dir = tokenize_folder(path, tok=tok, rules=rules, **kwargs)
    282     res = cls(tok, counter=load_pickle(output_dir/fn_counter_pkl),

~\anaconda3\envs\torchCudaEnv\lib\site-packages\fastai\text\core.py in __init__(self, lang, special_toks, buf_sz)
    114     "Spacy tokenizer for lang"
    115     def __init__(self, lang='en', special_toks=None, buf_sz=5000):
--> 116         import spacy
    117         from spacy.symbols import ORTH
    118         self.special_toks = ifnone(special_toks, defaults.text_spec_tok)

~\anaconda3\envs\torchCudaEnv\lib\site-packages\spacy\__init__.py in <module>
      9
     10 # These are imported as part of the API
---> 11 from thinc.api import prefer_gpu, require_gpu, require_cpu  # noqa: F401
     12 from thinc.api import Config
     13

~\anaconda3\envs\torchCudaEnv\lib\site-packages\thinc\__init__.py in <module>
      3
      4 from .about import __version__
----> 5 from .config import registry

~\anaconda3\envs\torchCudaEnv\lib\site-packages\thinc\config.py in <module>
     11 from pydantic.main import ModelMetaclass
     12 from pydantic.fields import ModelField
---> 13 from wasabi import table
     14 import srsly
     15 import catalogue

~\anaconda3\envs\torchCudaEnv\lib\site-packages\wasabi\__init__.py in <module>
     10 from .about import __version__  # noqa
     11
---> 12 msg = Printer()

~\anaconda3\envs\torchCudaEnv\lib\site-packages\wasabi\printer.py in __init__(self, pretty, no_print, colors, icons, line_max, animation, animation_ascii, hide_animation, ignore_warnings, env_prefix, timestamp)
     54     self.pretty = pretty and not env_no_pretty
     55     self.no_print = no_print
---> 56     self.show_color = supports_ansi() and not env_log_friendly
     57     self.hide_animation = hide_animation or env_log_friendly
     58     self.ignore_warnings = ignore_warnings

~\anaconda3\envs\torchCudaEnv\lib\site-packages\wasabi\util.py in supports_ansi()
    262     if "ANSICON" in os.environ:
    263         return True
--> 264     return _windows_console_supports_ansi()
    265
    266     return True

~\anaconda3\envs\torchCudaEnv\lib\site-packages\wasabi\util.py in _windows_console_supports_ansi()
    234     raise ctypes.WinError()
    235
--> 236     console = msvcrt.get_osfhandle(sys.stdout.fileno())
    237     try:
    238         # Try to enable ANSI output support

~\anaconda3\envs\torchCudaEnv\lib\site-packages\ipykernel\iostream.py in fileno(self)
    309         return self._original_stdstream_copy
    310     else:
--> 311         raise io.UnsupportedOperation("fileno")
    312
    313     def _watch_pipe_fd(self):

UnsupportedOperation: fileno

Thank you very much in advance.

Can you report the value in path?

and also get a directory listing: !ls {path}
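For example, in a notebook cell (a minimal sketch; on Windows the !ls shell command may not be available, so fastai's path.ls() is an alternative):

print(path)       # shows where untar_data() put the IMDB data
print(path.ls())  # fastcore adds an .ls() method to Path objects, listing the folder's contents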

Sure, but I'm not sure what you mean. I'm working on Windows 10; do you refer to an environment variable?

Hi Tymek,
thanks for the reply. Your answer makes perfect sense and I understand it completely; I'm just a bit confused about the workbook itself, see below. How does what you present relate to what I have from the workbook?

thanks again!

[image]

No. Not referring to an environment variable.

Look at the error you posted…

3 path = untar_data(URLs.IMDB)
----> 4 dls = TextDataLoaders.from_folder(path)

The only things affecting the behaviour of TextDataLoaders.from_folder() on Line 4 are:

  • the path variable
  • the content of that path on disk.

Without understanding what those are, you are unable to help yourself,
and no-one else can provide effective advice.

A browser is not special. It's an application made up of code that makes function calls to retrieve URLs from web servers. In the same way, the code executed in the Jupyter notebook makes function calls to retrieve URLs from web servers.

The only difference is that the former renders the retrieved URL data to the display so you can read it directly, while the latter stores the retrieved URL data in a variable that you can process further.
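As a small illustration (a generic sketch using only the standard library, not the exact call the notebook makes):

from urllib.request import urlopen

data = urlopen('https://example.com').read()  # the retrieved bytes now sit in a variable
print(len(data))                              # which you can inspect or process further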


thanks!

Thank you @bencoman . I’m going to check it!

Hello, sorry for this very basic question. This is my first time running commands in a Kaggle notebook. I am trying to run the first few commands of lesson 1.

But when I try to execute these code lines:

import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')
if iskaggle:
    !pip install -Uqq fastai duckduckgo_search

I get the following error:

And when I try to execute the code line:
urls = search_images('bird photos', max_images=1)

I get the following error
"NameError: name 'search_images' is not defined"

as shown below:

How do I fix this issue?

Throughout the course, the first error ("pip's dependency resolver does not…") was generally ignored.

For the second error, can you link to both your whole notebook and the original notebook you replicated yours from?


No worries, everybody starts with basic questions :slight_smile:
search_images is defined in the code cell right above

urls = search_images('bird photos', max_images=1)

You should run that before you call the function.

If you don't see this (maybe you are copying the cell contents of the lesson's notebook into your own), make sure to click the "Copy & Edit" button at the top. Kaggle offers the ability to hide certain cells in the "presentation" mode to focus on the important parts (well, from the author's view), but you can see all cells in the "edit" mode, which you get with said button.
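For reference, the defining cell looks roughly like this (a sketch from memory; the exact helper in the lesson notebook may differ, and the duckduckgo_search API has changed across versions):

from duckduckgo_search import DDGS
from fastcore.all import L

def search_images(term, max_images=30):
    # ask DuckDuckGo for image results and keep only the image URLs
    print(f"Searching for '{term}'")
    return L(DDGS().images(term, max_results=max_images)).itemgot('image')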


Thank you @benkarr
Yes, I was copying cells one by one from the lesson's notebook and missed this cell :)
Is this a bad way to try out the lesson code?

Also, now when I try to execute the cell above, I get a module error as shown below:

[Edit]: Restarting the session somehow solved this issue. Not sure what was happening before.

You can try importing as below :slightly_smiling_face:

from fastbook import search_images_ddg
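For example (assuming fastbook is installed; search_images_ddg takes a search term and a max_images count):

from fastbook import search_images_ddg

urls = search_images_ddg('bird photos', max_images=1)
print(urls[0])  # the first image URL returned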

I'm totally new to all of this. I have read the first chapter of the book, watched the lesson 1 video about three times, and experimented with the code. I just can't understand how the parameters are given to a model. In the book I read that pretrained models already have the parameters, which are then tweaked and fine-tuned according to our requirements. My doubt is: what about models which were never trained, i.e. just the architecture? Sorry if it was a stupid question.

Hey @Voi_l8ight,
when using an untrained model, or when you create a model from scratch, the model's parameters have to be initialized. Usually that is done by picking random values (according to different distributions) or zeros, and usually fastai, or rather torch, does that for you automatically.
The book goes a bit into this in Chapter 4, "Stochastic Gradient Descent (SGD)".
Hope that helps :slight_smile:
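A minimal PyTorch sketch of the idea (not taken from the lesson; the layer and shapes are just illustrative):

from torch import nn

layer = nn.Linear(784, 10)             # an untrained layer: PyTorch initializes its weights randomly
print(layer.weight[0, :5])             # a few of those freshly initialized parameters

nn.init.zeros_(layer.bias)             # you can also initialize parameters explicitly, e.g. with zeros
nn.init.kaiming_normal_(layer.weight)  # or draw them from a chosen random distribution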


Thanks for that


My Lecture 1 summary:
Google Doc View


I think I found a minor issue in lesson 1, can someone review?

It doesn't look like I can post. At least I can't find this functionality on the forum. So I will ask here: which package/library does the verify_images method come from? Lesson 1, Jupyter notebook 1.

You can read up on what you "need to do" to receive more "privileges" in the forum here. :slight_smile:

To your question: You can:

  • run verify_images?? in the notebook. Beneath the function's definition there is
    File: /path/to/fastai/module/submodule.py, so you can import it with from fastai.module.submodule import verify_images;
  • search the fastai GitHub repo (or clone the repo and use your favorite editor); it should also point you towards the file that contains the function.

Specifically, the repo search returns fastai/vision/utils.py, so you can import it with from fastai.vision.utils import verify_images.
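For example, roughly as the lesson notebook uses it to drop images that fail to open (a short sketch; the folder name here is a hypothetical example):

from pathlib import Path
from fastai.vision.utils import verify_images
from fastai.data.transforms import get_image_files

path = Path('bird_or_not')                     # hypothetical folder of downloaded images
failed = verify_images(get_image_files(path))  # the image files that cannot be opened
failed.map(Path.unlink)                        # delete them, as the lesson notebook does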
