Misc issues

(Jeremy Howard (Admin)) #1

Various issues that currently don’t belong to any specific thread are posted here.

Some misplaced posts get moved here. If your post was moved here, you can wait for a follow-up in this thread, or delete your post and repost it in a more suitable topic (or start a new topic instead).

0 Likes

(Ralph) #2
from fastai.vision import *

path = './data'
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=224, num_workers=0).normalize(imagenet_stats)
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(1)

When I run the above code, I get the following error

NameError: name 'accuracy' is not defined

I’m running fastai v1.0.34, installed with Anaconda3.

0 Likes

(Stas Bekman) #3

Because you haven’t imported accuracy. Add:

from fastai.metrics import accuracy

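For reference, here is the snippet from the question with only the missing import added (untested; everything else is unchanged from the original post):

from fastai.vision import *
from fastai.metrics import accuracy  # makes the bare name `accuracy` available

path = './data'
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=224,
                                  num_workers=0).normalize(imagenet_stats)
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(1)
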
1 Like

#4

Hello, how do I start a new post? I can’t find a button anywhere to start my own post. I have a problem grouping column data to pass to the model. Please help, thanks!

0 Likes

(Stas Bekman) #5
  1. Go to https://forums.fast.ai/c/fastai-users
  2. Click [+New Topic] in the upper right corner

0 Likes

(Stephen Johnson) #6

Also, if you aren’t seeing the New Topic button, perhaps you haven’t yet met the requirement below.

New users can only create a topic after they first spend 10 minutes (total) reading at least 3 different posts on the forum.

1 Like

(Javier Abellán Abenza) #7

I would like a resize transform in the library. Currently I don’t know how to add a resize transform after my random crop transform:

tfms = [crop(size=420, row_pct=(0,1), col_pct=(0,1)),
        resize(size=140)]  # ????

I need this specific transform pipeline because I am replicating a paper.

I think it should be an instance of the TfmCoord class, but I’m not sure.

0 Likes

Documentation improvements
(Stas Bekman) #8

untested:

size = 140
tfms = [crop(size=420, row_pct=(0,1), col_pct=(0,1))]
tfms_kwargs = {'size':size, 'resize_method':ResizeMethod.SQUISH}
transform(tfms=tfms, **tfms_kwargs)

so it will perform the size transform after the crop.

0 Likes

(Javier Abellán Abenza) #9

Yes, I need the resize after the crop, but I have no idea how to do that.

Here is the output of your code:

0 Likes

#10

That’s what crop_pad does. Normally, using

tfms = [crop_pad(size=420, row_pct=(0,1), col_pct=(0,1))]
tfms_kwargs = {'size':140}

should work.
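
For reference, an untested sketch of wiring this into a DataBunch in fastai v1 (ds_tfms takes a (train, valid) pair of transform lists; the path, the from_folder layout, and the empty validation-transform list here are assumptions):

tfms = [crop_pad(size=420, row_pct=(0,1), col_pct=(0,1))]

# size=140 is applied after the listed transforms, so the crop to 420
# happens first and the resize to 140 afterwards.
data = ImageDataBunch.from_folder(path, ds_tfms=(tfms, []), size=140)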

0 Likes

(Javier Abellán Abenza) #11

Thanks a lot! Works perfectly. 🙂

0 Likes

(Imran) #12

Hi,

I have set up fastai and am running the lesson 1 notebook. I was able to download the tar files and unpack them, and I have also replaced the “pat” regex to suit the Windows path format. However, the notebook keeps running (without aborting or getting killed) when I execute the create_cnn step.

learn = create_cnn(data, models.resnet34, metrics=error_rate)

However, if I replace the learn line with the code below, I am able to proceed.

def get_model(pretrained=True, model_name='resnet34', **kwargs):
    arch = models.resnet34(pretrained, **kwargs)
    return arch

learn = Learner(data, get_model(), metrics=[accuracy])

I’m on Windows 10 with a GTX 1080, the latest NVIDIA (GeForce) drivers, and the latest available versions of PyTorch and the fastai library. Can someone suggest how to debug and resolve this issue?

Update:
I extracted the code for create_cnn, added a pdb breakpoint, and this is where it’s stuck now:

from fastai.vision import *
from fastai.metrics import error_rate
import torch

torch.cuda.set_device(0)
bs = 64
path = untar_data(URLs.PETS); path
# WindowsPath('C:/Users/Imran/.fastai/data/oxford-iiit-pet')
path_anno = path/'annotations'
path_img = path/'images'
fnames = get_image_files(path_img)
np.random.seed(2)
pat = re.compile(r'\\([^\\]+)_\d+.jpg$')
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs).normalize(imagenet_stats)

import pdb

def _default_split(m:nn.Module): return (m[1],)
def _resnet_split(m:nn.Module): return (m[0][6],m[1])

_default_meta = {'cut':-1, 'split':_default_split}
_resnet_meta  = {'cut':-2, 'split':_resnet_split}

model_meta = {
    models.resnet18: {**_resnet_meta}, models.resnet34: {**_resnet_meta},
    models.resnet50: {**_resnet_meta}, models.resnet101: {**_resnet_meta}}

def cnn_config(arch):
    "Get the metadata associated with arch."
    torch.backends.cudnn.benchmark = True
    return model_meta.get(arch, _default_meta)

class Hook():
    "Create a hook on m with hook_func."
    def __init__(self, m:nn.Module, hook_func:HookFunc, is_forward:bool=True, detach:bool=True):
        self.hook_func,self.detach,self.stored = hook_func,detach,None
        f = m.register_forward_hook if is_forward else m.register_backward_hook
        self.hook = f(self.hook_fn)
        self.removed = False
    def hook_fn(self, module:nn.Module, input:Tensors, output:Tensors):
        "Applies hook_func to module, input, output."
        if self.detach:
            input  = (o.detach() for o in input ) if is_listy(input ) else input.detach()
            output = (o.detach() for o in output) if is_listy(output) else output.detach()
        self.stored = self.hook_func(module, input, output)
    def remove(self):
        "Remove the hook from the model."
        if not self.removed:
            self.hook.remove()
            self.removed = True
    def __enter__(self, *args): return self
    def __exit__(self, *args): self.remove()

class Hooks():
    "Create several hooks on the modules in ms with hook_func."
    def __init__(self, ms:Collection[nn.Module], hook_func:HookFunc, is_forward:bool=True, detach:bool=True):
        self.hooks = [Hook(m, hook_func, is_forward, detach) for m in ms]
    def __getitem__(self, i:int)->Hook: return self.hooks[i]
    def __len__(self)->int: return len(self.hooks)
    def __iter__(self): return iter(self.hooks)
    @property
    def stored(self): return [o.stored for o in self]
    def remove(self):
        "Remove the hooks from the model."
        for h in self.hooks: h.remove()
    def __enter__(self, *args): return self
    def __exit__(self, *args): self.remove()

def dummy_batch(m:nn.Module, size:tuple=(64,64))->Tensor:
    "Create a dummy batch to go through m with size."
    ch_in = in_channels(m)
    pdb.set_trace()
    return one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1.,1.)

def dummy_eval(m:nn.Module, size:tuple=(64,64)):
    "Pass a dummy_batch in evaluation mode in m with size."
    return m.eval()(dummy_batch(m, size))

def _hook_inner(m,i,o): return o if isinstance(o,Tensor) else o if is_listy(o) else list(o)

def hook_outputs(modules:Collection[nn.Module], detach:bool=True, grad:bool=False)->Hooks:
    "Return Hooks that store activations of all modules in self.stored"
    return Hooks(modules, _hook_inner, detach=detach, is_forward=not grad)

def model_sizes(m:nn.Module, size:tuple=(64,64))->Tuple[Sizes,Tensor,Hooks]:
    "Pass a dummy input through the model m to get the various sizes of activations."
    with hook_outputs(m) as hooks:
        x = dummy_eval(m, size)
        return [o.stored.shape for o in hooks]

def num_features_model(m:nn.Module)->int:
    "Return the number of output features for model."
    sz = 64
    while True:
        try: return model_sizes(m, size=(sz,sz))[-1][1]
        except Exception as e:
            sz *= 2
            if sz > 2048: raise

def create_cnn1(data:DataBunch, arch:Callable, cut:Union[int,Callable]=None, pretrained:bool=True,
                lin_ftrs:Optional[Collection[int]]=None, ps:Floats=0.5,
                custom_head:Optional[nn.Module]=None, split_on:Optional[SplitFuncOrIdxList]=None,
                bn_final:bool=False, **learn_kwargs:Any)->Learner:
    meta = cnn_config(arch)
    body = create_body(arch, pretrained, cut)
    nf = num_features_model(body) * 2
    head = custom_head or create_head(nf, data.c, lin_ftrs, ps=ps, bn_final=bn_final)
    model = nn.Sequential(body, head)
    learn = Learner(data, model, **learn_kwargs)
    learn.split(ifnone(split_on, meta['split']))
    if pretrained: learn.freeze()
    apply_init(model[1], nn.init.kaiming_normal_)
    return learn

learn = create_cnn1(data, models.resnet34, metrics=error_rate)

(5)dummy_batch()
(Pdb) n
--Return--
(5)dummy_batch()->tensor([[[[ 3…5320e-01]]]])
(Pdb) n
--Call--
c:\users\anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py(483)__call__()
-> def __call__(self, *input, **kwargs):
(Pdb) n
c:\users\anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py(484)__call__()
-> for hook in self._forward_pre_hooks.values():
(Pdb) n
c:\users\anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py(486)__call__()
-> if torch._C._get_tracing_state():
(Pdb) n
c:\users\anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py(489)__call__()
-> result = self.forward(*input, **kwargs)
(Pdb) n

Execution does not proceed after this.
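
The trace above stops inside the dummy forward pass that num_features_model triggers. Here is a minimal, untested sketch for isolating just that step outside of fastai’s create_cnn, to check whether a plain forward pass through the pretrained body hangs on its own (the CPU-vs-GPU comparison is an assumption about where the problem might lie):

from fastai.vision import *   # create_body, models, etc., as in the code above
import torch

# Just the convolutional body that create_cnn1 builds internally.
body = create_body(models.resnet34, pretrained=True)

# Same shape as the dummy batch above: 1 image, 3 channels, 64x64.
x = torch.zeros(1, 3, 64, 64).uniform_(-1., 1.)

# Forward pass on CPU first; if this returns, repeat with body.cuda() and
# x.cuda() to see whether the hang only happens on the GPU path.
with torch.no_grad():
    out = body.eval()(x)
print(out.shape)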

Output of the install script:

=== Software ===
python        : 3.6.6
fastai        : 1.0.42
fastprogress  : 0.1.18
torch         : 1.0.1
torch cuda    : 9.0 / is available
torch cudnn   : 7005 / is enabled

=== Hardware ===
torch devices : 1
  - gpu0      : GeForce GTX 1080

=== Environment ===
platform      : Windows-10-10.0.17134-SP0
conda env     : fastai
python        : C:\Users\Imran\Anaconda3\envs\fastai\python.exe
sys.path      :
C:\Users\Imran\Anaconda3\envs\fastai\python36.zip
C:\Users\Imran\Anaconda3\envs\fastai\DLLs
C:\Users\Imran\Anaconda3\envs\fastai\lib
C:\Users\Imran\Anaconda3\envs\fastai
C:\Users\Imran\Anaconda3\envs\fastai\lib\site-packages
no nvidia-smi is found
0 Likes

(Terence Lim) #13

I’ve been trying to follow the fast.ai lessons using Google Colab notebooks. Recently I have been calling data.save() after the following text data block:

data = (TextList.from_csv(path, 'train.csv', cols='description')
        .split_from_df(col='is_valid')
        .label_from_df(cols='label')
        .databunch())

type(data) returns fastai.text.data.TextClasDataBunch.

However, when I then do data_lm = TextLMDataBunch.load(path, 'tmp', bs=bs), I get FileNotFoundError: [Errno 2] No such file or directory: 'data/tmp/itos.pkl'. Looking into the path directory, I only see a data_save.pkl being created and no tmp folder. Has anyone faced a similar problem before, or does anyone have an idea what is causing this funky behaviour?
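
For what it’s worth, data.save() writes a single pickle (data_save.pkl by default, which matches the file you see), and it is paired with load_data() rather than with TextLMDataBunch.load(), which looks for the old tmp/itos.pkl cache. An untested sketch, assuming a fastai v1 version that provides load_data:

# Save the DataBunch as one file and reload it the same way (untested sketch).
data.save('data_save.pkl')
data = load_data(path, 'data_save.pkl', bs=bs)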

0 Likes

(Stas Bekman) #14

This is not an installation issue; please try to find a more suitable thread for future posts. For now I have moved your post here; please feel free to repost it somewhere more suitable. Thank you.

0 Likes

(Sanjaya Gebrial) #15

I’m having an issue using fastai on Windows. I got everything installed and I’m working my way through lesson 1 to test. However, when I get to the line learn = create_cnn(data, models.resnet34, metrics=error_rate), it hangs after downloading the model and that line never stops running. When I tried changing the model to vgg16_bn, it downloaded and loaded fine. Has anyone seen this issue before?

1 Like

(Ralph) #17

I’m getting the following error when I try to get predictions on my test set. I created my data from a folder with a train directory (standard labeled directory structure) and a test directory (unlabeled images), like this: data = ImageDataBunch.from_folder(path, valid_pct=0.1, size=224). Here is the full error log:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-22-6be93b3089df> in <module>
      1 from fastai.basic_data import *
      2 
----> 3 preds, _ = learn.get_preds(DatasetType.Test)

~\AppData\Local\conda\conda\envs\machine_learning\lib\site-packages\fastai\basic_train.py in get_preds(self, ds_type, with_loss, n_batch, pbar)
    253         lf = self.loss_func if with_loss else None
    254         return get_preds(self.model, self.dl(ds_type), cb_handler=CallbackHandler(self.callbacks),
--> 255                          activ=_loss_func2activ(self.loss_func), loss_func=lf, n_batch=n_batch, pbar=pbar)
    256 
    257     def pred_batch(self, ds_type:DatasetType=DatasetType.Valid, batch:Tuple=None, reconstruct:bool=False) -> List[Tensor]:

~\AppData\Local\conda\conda\envs\machine_learning\lib\site-packages\fastai\basic_train.py in get_preds(model, dl, pbar, cb_handler, activ, loss_func, n_batch)
     38     "Tuple of predictions and targets, and optional losses (if `loss_func`) using `dl`, max batches `n_batch`."
     39     res = [torch.cat(o).cpu() for o in
---> 40            zip(*validate(model, dl, cb_handler=cb_handler, pbar=pbar, average=False, n_batch=n_batch))]
     41     if loss_func is not None: res.append(calc_loss(res[0], res[1], loss_func))
     42     if activ is not None: res[0] = activ(res[0])

~\AppData\Local\conda\conda\envs\machine_learning\lib\site-packages\fastai\basic_train.py in validate(model, dl, loss_func, cb_handler, pbar, average, n_batch)
     49     with torch.no_grad():
     50         val_losses,nums = [],[]
---> 51         if cb_handler: cb_handler.set_dl(dl)
     52         for xb,yb in progress_bar(dl, parent=pbar, leave=(pbar is not None)):
     53             if cb_handler: xb, yb = cb_handler.on_batch_begin(xb, yb, train=False)

~\AppData\Local\conda\conda\envs\machine_learning\lib\site-packages\fastai\callback.py in set_dl(self, dl)
    203         "Set the current `dl` used."
    204         if hasattr(self, 'cb_dl'): self.callbacks.remove(self.cb_dl)
--> 205         if isinstance(dl.dataset, Callback):
    206             self.callbacks.append(dl.dataset)
    207             self.cb_dl = dl.dataset

AttributeError: 'NoneType' object has no attribute 'dataset'

If it helps, I am running fastai v1.0.42 and pytorch v1.0.0 on Windows 10.
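
The 'NoneType' object has no attribute 'dataset' error suggests the DataBunch has no test DataLoader attached, so learn.dl(DatasetType.Test) returns None. An untested sketch of passing the test folder explicitly when building the data (the test= keyword is from the fastai v1 ImageDataBunch API; the folder name is an assumption based on the directory layout described above):

# Attach the unlabeled test folder so DatasetType.Test has a DataLoader behind it.
data = ImageDataBunch.from_folder(path, valid_pct=0.1, size=224, test='test')
# ... rebuild the learner on this data, then:
preds, _ = learn.get_preds(ds_type=DatasetType.Test)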

0 Likes

(Kamal Eldin) #18

Okay, I have the same issue: the download freezes at 87306240it [00:45, 1935425.68it/s] and the line never finishes… did you have any luck resolving this?

0 Likes