Part 2 Lesson 10 wiki


#498

Hi all. I’m using Google Colaboratory and I’m getting an AttributeError related to scipy.sparse: module ‘scipy’ has no attribute ‘sparse’. I’ve searched the forums and noticed others having the same issue.

Not a fast.ai problem, but I’m just wondering if anyone has found a patch…

I have installed scipy-1.1.0 and have also imported the module directly using from scipy import sparse as sp.

Any help would be greatly appreciated!


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 from fastai.text import *
      2 import html

/usr/local/lib/python3.6/dist-packages/fastai/text.py in <module>()
----> 1 from .core import *
      2 from .learner import *
      3 from .lm_rnn import *
      4 from torch.utils.data.sampler import Sampler
      5 import spacy

/usr/local/lib/python3.6/dist-packages/fastai/core.py in <module>()
----> 1 from .imports import *
      2 from .torch_imports import *
      3
      4 def sum_geom(a,r,n): return a*n if r==1 else math.ceil(a*(1-r**n)/(1-r))
      5

/usr/local/lib/python3.6/dist-packages/fastai/imports.py in <module>()
      3 import pandas as pd, pickle, sys, itertools, string, sys, re, datetime, time, shutil, copy
      4 import seaborn as sns, matplotlib
----> 5 import IPython, graphviz, sklearn_pandas, sklearn, warnings, pdb
      6 import contextlib
      7 from abc import abstractmethod

/usr/local/lib/python3.6/dist-packages/sklearn_pandas/__init__.py in <module>()
      1 __version__ = '1.6.0'
      2
----> 3 from .dataframe_mapper import DataFrameMapper  # NOQA
      4 from .cross_validation import cross_val_score, GridSearchCV, RandomizedSearchCV  # NOQA
      5 from .categorical_imputer import CategoricalImputer  # NOQA

/usr/local/lib/python3.6/dist-packages/sklearn_pandas/dataframe_mapper.py in <module>()
      5 import numpy as np
      6 from scipy import sparse
----> 7 from sklearn.base import BaseEstimator, TransformerMixin
      8
      9 from .cross_validation import DataWrapper

/usr/local/lib/python3.6/dist-packages/sklearn/__init__.py in <module>()
    132     else:
    133         from . import __check_build
--> 134         from .base import clone
    135         __check_build  # avoid flakes unused variable error
    136

/usr/local/lib/python3.6/dist-packages/sklearn/base.py in <module>()
     11 from scipy import sparse
     12 from .externals import six
---> 13 from .utils.fixes import signature
     14 from . import __version__
     15

/usr/local/lib/python3.6/dist-packages/sklearn/utils/__init__.py in <module>()
      9
     10 from .murmurhash import murmurhash3_32
---> 11 from .validation import (as_float_array,
     12                          assert_all_finite,
     13                          check_random_state, column_or_1d, check_array,

/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py in <module>()
     13
     14 import numpy as np
---> 15 import scipy.sparse as sp
     16
     17 from ..externals import six

AttributeError: module 'scipy' has no attribute 'sparse'
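
A workaround worth trying (a sketch only, assuming the root cause is a partially upgraded scipy in a still-running kernel, which is common on Colab):

# In a Colab cell: force a clean scipy install
!pip install --upgrade --force-reinstall scipy

# Then restart the runtime (Runtime -> Restart runtime) and verify
# that the submodule resolves before importing fastai:
import scipy.sparse as sp
print(sp.csr_matrix((3, 3)))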


(Erik Chan) #499

Is it just me, or are the files no longer available at files.fast.ai?


(Emil) #500

For me, the files are still there.


(Erik Chan) #501

Thanks, they’re back online now.


(WG) #502

Is there a definitive answer to this question anywhere?

Based on the ULMFiT paper, the recommendation is to fine-tune “only the last layer” (section 3.2) before unfreezing and applying discriminative learning rates to the other layers. Given that, why is there the line learner.unfreeze() immediately before model fitting begins? It seems it should be learner.freeze(-1), unless I’m missing something (which is typically the case :) )
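
For reference, the gradual approach from the paper would look roughly like this in the fastai library used in the lesson (a sketch only; the learner object, the lrs array of discriminative learning rates, wd, and the use_clr values are assumed to come from the lesson notebook):

# Fine-tune only the last layer group first, per section 3.2
learner.freeze_to(-1)
learner.fit(lrs/2, 1, wds=wd, cycle_len=1, use_clr=(32,2))

# Then unfreeze everything and fit with discriminative learning rates
learner.unfreeze()
learner.fit(lrs, 1, wds=wd, cycle_len=10, use_clr=(32,10))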


(Even Oldridge) #503

Hey Christine, I’m starting to take a look at something similar, and I’m curious what your results were here.


(adrian) #504

I thought it was a typo but didn’t get around to reporting it. I do the same, just unfreezing the final layer.


(Erik Chan) #505

Did you have any luck with this? Can you not simply add more classes?


(Christine) #506

Hi - Unfortunately I wasn’t able to get better results with the different losses. (I got everything up and running, but the accuracy was always worse.) I’m sure I didn’t exhaust every possibility though, so let me know if you have better success than I did!


(Gavin Francis) #507

Reading Universal Language Model Fine-tuning for Text Classification.
Not sure if anyone has commented on this before, but in @jeremy and Sebastian’s excellent paper there seems to be an error in the STLR formula (3) for p when t > cut. To match the figure, it should be something like (T-t)/(T-cut).
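
For reference, here is the schedule with the suggested correction written out (a reconstruction from the paper's definitions, not a quote of the paper: $T$ is the total number of training iterations, $cut = \lfloor T \cdot cut\_frac \rfloor$, and the second case is the suggested correction):

$$
p = \begin{cases} t/cut & \text{if } t < cut \\[4pt] \dfrac{T - t}{T - cut} & \text{otherwise,} \end{cases}
\qquad
\eta_t = \eta_{max} \cdot \frac{1 + p\,(ratio - 1)}{ratio}
$$

With this form, $p$ rises linearly from 0 to 1 over the first $cut$ iterations and then decays linearly back to 0 at $t = T$, matching the triangular shape in the figure.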


(gram) #509

I understand why this is much better than word embeddings, but with this model is there a way to use the words (in the context of the corpus you fed the model) like you would with word embeddings?
For example: the king - man + woman = queen equation?
What exactly does deploying the transfer-learned, trained model look like? What can it do besides classifying?
I know chatbots use LSTMs. Maybe someone can point me in the direction of how this model would work in a chatbot? How does one glean meaning from new text run through the trained model? How would it work with translation? With a GAN? With search? As Q&A? This would help me understand what exactly is happening with the whole process. Please?

(If someone just wants to answer one of these, that’d be appreciated. I know no one person will answer all of this.)
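
For the analogy arithmetic specifically, here is what it looks like with plain word vectors (a minimal numpy sketch with made-up toy vectors, purely to show the operation being asked about):

import numpy as np

# Toy 4-d "word vectors" (hypothetical values, purely illustrative)
vecs = {
    'king':  np.array([0.8, 0.9, 0.1, 0.2]),
    'man':   np.array([0.7, 0.1, 0.1, 0.1]),
    'woman': np.array([0.7, 0.1, 0.9, 0.1]),
    'queen': np.array([0.8, 0.9, 0.9, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land nearest to queen
target = vecs['king'] - vecs['man'] + vecs['woman']
print(max(vecs, key=lambda w: cosine(vecs[w], target)))  # -> queen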


(Thomas) #510

It’s a bit handwavy, but I’ve thought about this a bit (the second half of your question, actually).
One thing the LM keeps that word vectors don’t is quite a bit more context, since we are looking at LSTM states.
As such, I would expect the equivalent of king - man + woman = queen to be a relatively poor use of such a model.
For chatbots, QA, and MT, I think using the encoder (or, for the latter, encoder + decoder) will be beneficial, since for those tasks the history is natural.
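
To make the encoder idea concrete, reusing a pretrained language-model encoder in a downstream model looks roughly like this (a generic PyTorch sketch, not the fastai API; the sizes and the weights file name are hypothetical):

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # LSTM encoder whose weights would come from LM pretraining
    def __init__(self, vocab_sz, emb_sz=400, hid_sz=1150, n_layers=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, emb_sz)
        self.rnn = nn.LSTM(emb_sz, hid_sz, n_layers, batch_first=True)

    def forward(self, x):
        out, (h, c) = self.rnn(self.emb(x))
        return out, h  # per-token states and final hidden states

enc = Encoder(vocab_sz=60000)
# enc.load_state_dict(torch.load('lm_encoder.pth'))  # hypothetical weights file

# A downstream head (classification, QA, or a decoder for MT/chat)
# reads the encoder states instead of raw word vectors
head = nn.Linear(1150, 2)
tokens = torch.randint(0, 60000, (1, 20))  # a fake tokenized input
out, h = enc(tokens)
logits = head(h[-1])  # use the top layer's final hidden state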


(gram) #511

Thanks for the answer. I’d like to make a Q&A ‘bot’ with this, but I feel I don’t know which direction to go to learn how to make one.
Maybe I can just take a chatbot that uses word embeddings and modify it to use this lesson’s model instead? I’m too new at this. Sometimes the steps to progress are too high to climb.


(Shubham Gupta) #512

Hey guys, I’ve written a blog post on generating your own music using RNNs. Hope you enjoy it.

https://www.hackerearth.com/blog/machine-learning/jazz-music-using-deep-learning/


(gram) #513

Oh, now I see lesson 11 is a translator.
Lesson 11 used to be about a CNN with pictures of fish at the beginning.

(I ripped the videos to my hard drive to play on my other devices and hadn’t seen the switch-a-roo)


(Dusten) #514

The robots.txt file lives at the root of a site (or any sub-directory of the root). Its goal is to tell web crawlers whether or not to index the connected site.

I don’t think it’s used much anymore, but it’s still a holdover from an earlier era of the internet.
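
For illustration, a minimal robots.txt looks like this (the paths are made up):

User-agent: *          # rules below apply to every crawler
Disallow: /private/    # ask crawlers to stay out of this path
Allow: /               # everything else may be crawled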


(Francisco Rodes) #515

Hello everyone!

I have been reading the paper and investigating the ULMFiT model. Does anybody know what exactly the test set is? In the paper some tables make reference to test error and others to validation error. Are they the same?

As I understand it, in the IMDb model the validation set is the only test set used. Am I wrong, and is there another one? Thanks!


(Santhanam Elumalai) #516

Going through sentiment analysis on the Twitter dataset, I found the dataset contains lots of URLs and text emoji. What is the best way to handle these: remove them or leave them in? Also, I am seeing a lot of consecutive exclamation marks, like !!!. Is there a way to deal with those?
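
One common approach is sketched below (whether to strip or keep these is a judgment call, since emoji and repeated punctuation can carry sentiment signal; the placeholder token name is made up):

import re

def clean_tweet(text):
    # Replace URLs with a placeholder token rather than deleting them,
    # so the model still knows a link was present
    text = re.sub(r'https?://\S+', ' xurl ', text)
    # Collapse runs of repeated punctuation (!!! -> !)
    text = re.sub(r'([!?.])\1+', r'\1', text)
    # Collapse extra whitespace
    return re.sub(r'\s+', ' ', text).strip()

print(clean_tweet('Loved it!!! https://example.com so good!!!'))
# -> 'Loved it! xurl so good!'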


(Max Marion) #517

I’m getting an error when trying to tokenize the data. os.fork() is returning OSError: [Errno 22] Invalid argument, which seems pretty bizarre. I’m running on Windows Subsystem for Linux. One solution would be for someone to send me their saved file of the tokens for the model (I would love you so so so much). Otherwise, any help is appreciated.


OSError                                   Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 tok_trn, trn_labels = get_all(df_trn, 1)
      2 tok_val, val_labels = get_all(df_val, 1)

<ipython-input> in get_all(df, n_lbls)
      3     for i, r in enumerate(df):
      4         print(i)
----> 5         tok_, labels_ = get_texts(r, n_lbls)
      6         tok += tok_;
      7         labels += labels_

<ipython-input> in get_texts(df, n_lbls)
      5     texts = texts.apply(fixup).values.astype(str)
      6
----> 7     tok = Tokenizer().proc_all_mp(partition_by_cores(texts))
      8     return tok, list(labels)

~/fastai/courses/dl2/fastai/text.py in proc_all_mp(ss, lang, ncpus)
     99         ncpus = ncpus or num_cpus()//2
    100         with ProcessPoolExecutor(ncpus) as e:
--> 101             return sum(e.map(Tokenizer.proc_all, ss, [lang]*len(ss)), [])
    102
    103

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/process.py in map(self, fn, timeout, chunksize, *iterables)
    494         results = super().map(partial(_process_chunk, fn),
    495                               _get_chunks(*iterables, chunksize=chunksize),
--> 496                               timeout=timeout)
    497         return _chain_from_iterable_of_lists(results)
    498

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/_base.py in map(self, fn, timeout, chunksize, *iterables)
    573         end_time = timeout + time.time()
    574
--> 575         fs = [self.submit(fn, *args) for args in zip(*iterables)]
    576
    577         # Yield must be hidden in closure so that the futures are submitted

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/_base.py in <listcomp>(.0)
    573         end_time = timeout + time.time()
    574
--> 575         fs = [self.submit(fn, *args) for args in zip(*iterables)]
    576
    577         # Yield must be hidden in closure so that the futures are submitted

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/process.py in submit(self, fn, *args, **kwargs)
    464             self._result_queue.put(None)
    465
--> 466         self._start_queue_management_thread()
    467         return f
    468     submit.__doc__ = _base.Executor.submit.__doc__

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/process.py in _start_queue_management_thread(self)
    425         if self._queue_management_thread is None:
    426             # Start the processes so that their sentinels are known.
--> 427             self._adjust_process_count()
    428             self._queue_management_thread = threading.Thread(
    429                 target=_queue_management_worker,

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/process.py in _adjust_process_count(self)
    444                     args=(self._call_queue,
    445                           self._result_queue))
--> 446             p.start()
    447             self._processes[p.pid] = p
    448

~/anaconda3/envs/fastai/lib/python3.6/multiprocessing/process.py in start(self)
    103                'daemonic processes are not allowed to have children'
    104         _cleanup()
--> 105         self._popen = self._Popen(self)
    106         self._sentinel = self._popen.sentinel
    107         # Avoid a refcycle if the target function holds an indirect

~/anaconda3/envs/fastai/lib/python3.6/multiprocessing/context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224
    225 class DefaultContext(BaseContext):

~/anaconda3/envs/fastai/lib/python3.6/multiprocessing/context.py in _Popen(process_obj)
    275         def _Popen(process_obj):
    276             from .popen_fork import Popen
--> 277             return Popen(process_obj)
    278
    279 class SpawnProcess(process.BaseProcess):

~/anaconda3/envs/fastai/lib/python3.6/multiprocessing/popen_fork.py in __init__(self, process_obj)
     17         util._flush_std_streams()
     18         self.returncode = None
---> 19         self._launch(process_obj)
     20
     21     def duplicate_for_child(self, fd):

~/anaconda3/envs/fastai/lib/python3.6/multiprocessing/popen_fork.py in _launch(self, process_obj)
     64         code = 1
     65         parent_r, child_w = os.pipe()
---> 66         self.pid = os.fork()
     67         if self.pid == 0:
     68             try:

OSError: [Errno 22] Invalid argument
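
In case it helps while you wait for a proper fix: the traceback shows that proc_all_mp just farms chunks out to Tokenizer.proc_all via a process pool, so a sketch of a single-process workaround (slower, but it never calls os.fork(), which is what fails here on WSL) is to change line 7 of get_texts:

# inside get_texts, replace the multiprocessing call:
# tok = Tokenizer().proc_all_mp(partition_by_cores(texts))
tok = Tokenizer.proc_all(texts, 'en')  # tokenize in this process instead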


#518

I am using a Paperspace Gradient notebook for the course and would appreciate some help in downloading the IMDb data to the environment. It appears as though the Gradient notebook is a somewhat closed environment that only contains some fast.ai datasets (https://paperspace.zendesk.com/hc/en-us/articles/360003092514-Public-Datasets), but not IMDb!

Asked another way: can we use a Gradient notebook to run the Lesson 10 ULMFiT?

Thanks
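
In case it helps: if the Gradient notebook allows outbound downloads, the dataset can be fetched directly (a sketch; the Stanford URL below is the standard public source for the IMDb reviews, though I haven't verified it from inside Gradient):

# In a notebook cell: download and unpack the IMDb reviews
!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz -P ~/data/
!tar -xzf ~/data/aclImdb_v1.tar.gz -C ~/data/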