Welcome to forums.fast.ai

I am not sure, but I suggest sweeping from a lower learning rate up to a higher one before calling recorder.plot(). Or you could just use recorder.plot() with the defaults.
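In case it helps, here is a minimal sketch of the usual sweep (assuming a fastai v1 Learner named learn, which is my assumption about your setup):

learn.lr_find()        # sweeps the learning rate from a low value up to a high one
learn.recorder.plot()  # plots the recorded loss against the learning rate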

Your problem is too easy. Two of the categories in ImageNet are lion and tiger, so the pretrained weights have already been trained extensively to recognize lions and tigers. There is also the possibility that the exact images you are training with are in ImageNet.

http://image-net.org

I don't see any area where I can post about multi-class classifiers; I saw a multi-label section but not a multi-class one.

I suggest you post it in the Part 1 (2019) section rather than in this welcome section.


As you can see, it will classify more than one object in the satellite image. For more detail, you should read the docs.

Try using Kaggle or Google Colab, both of which are run by Google. I believe Kaggle gives 30 hours of TPU and GPU time, and Colab has unlimited GPU but no TPU.

Hello, I am new to time series classification and I want to apply this software:
https://github.com/fastai/fastai2. One question I had after testing the code is whether it is possible to do multivariate time series classification with it, since the examples use univariate data. If so, how could I format the data into a pandas DataFrame to pass as df_train?
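To make my question concrete, here is one hypothetical layout I could produce (the column names and shape are my own guesses, not anything from the fastai2 repo): one row per sample, one column per channel/timestep pair, plus a target column.

import numpy as np
import pandas as pd

# Hypothetical wide layout for multivariate series:
# one row per sample, one column per (channel, timestep) pair.
n_samples, n_channels, n_steps = 8, 3, 50
values = np.random.randn(n_samples, n_channels, n_steps)

cols = [f"ch{c}_t{t}" for c in range(n_channels) for t in range(n_steps)]
df_train = pd.DataFrame(values.reshape(n_samples, -1), columns=cols)
df_train["target"] = np.random.randint(0, 2, n_samples)  # dummy class labels

Would something like this work as df_train, or does the code expect a different shape?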

Gradient also has some free virtual machines, and you're able to hop into one pretty quickly. I'm unsure how they compare in speed to the other options as I'm just starting this course, but I imagine they will get the job done. You can see other options at https://course.fast.ai/ under 'Server setup'.

Hi Kaiyun. I suggest you post it in the 2019 Lesson 1 section of the forum. In addition, the Rossmann dataset video and the videos before it cover this using a simple linear model. Also, if you can, please share the dataset so people can help you with pandas.

Thanks!

Hi all, I’ve recently started the fast.ai course, and completed Part 1 Lesson 1, including training on my own dataset. I started going through some of the papers referenced in the lesson, and had a question regarding Leslie Smith’s paper, A Disciplined Approach to Neural Network Hyper-Parameters.

In this paper Smith references Liu et al., 2018, noting that since larger momentum can help escape saddle points but can hurt final convergence, decreasing momentum during training is a good idea.

My question is: in the fit_one_cycle training method, does the moms (momentum) argument default to decreasing momentum, or do we have to specify that ourselves?


Hello, Jordancoil :smile:

Congrats on completing Part 1, Lesson 1. I just finished it myself! :partying_face:

As to your question, the short answer, I think, is Yes.

Let’s check out the source code for fit_one_cycle:

def fit_one_cycle(learn:Learner, cyc_len:int, max_lr:Union[Floats,slice]=defaults.lr,
                  moms:Tuple[float,float]=(0.95,0.85), div_factor:float=25., pct_start:float=0.3, final_div:float=None,
                  wd:float=None, callbacks:Optional[CallbackList]=None, tot_epochs:int=None, start_epoch:int=None)->None:

The momentum (moms) argument is a tuple of two decreasing float values. fit_one_cycle hands these to the OneCycleScheduler callback. Let's check that out:

class OneCycleScheduler(LearnerCallback):
    "Manage 1-Cycle style training, as outlined in Leslie Smith's [paper](https://arxiv.org/pdf/1803.09820.pdf)."

That is Leslie Smith's paper, A Disciplined Approach to Neural Network Hyper-Parameters.

So, as far as I can tell, fit_one_cycle does default to decreasing momentum, though I'm not 100% sure.
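For example, a minimal sketch (assuming a fastai v1 Learner named learn, which is my assumption rather than anything from your post):

learn.fit_one_cycle(4)  # default moms=(0.95, 0.85): momentum anneals from 0.95 down to 0.85 as the learning rate rises, then back up

# The same thing with the decreasing momentum range spelled out:
learn.fit_one_cycle(4, max_lr=1e-3, moms=(0.95, 0.85))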

If you want to talk more about this, I suggest you post it in the 2019 lesson 1 section of the forum.
Also, I'm curious to hear about your dataset and image classifier: what did you use, and did it work well?

Hi Danny,

Thanks for your response. You did a really good job of breaking down the code for me.

As for my dataset/classifier, I used 30 images of cake and 30 images of things that were not cake (including dogs, cats, pie, and cupcakes), along with transfer learning, to achieve an error rate of 6%. That's a result I'm pretty happy with, considering the small number of images and the fact that cupcakes and cakes are very similar.


Hello! I have a problem running a SageMaker instance from a template.
Where should I post to get help on that?
My problem is that the start script seems to hang during the 'update fastai library' step.
I already tried modifying the script as described here:
https://aws.amazon.com/premiumsupport/knowledge-center/sagemaker-lifecycle-script-timeout/
What could be the problem?


Hi, I suggest you go to the Deep Learning section, or to the fastai v2 section if you are using fastai v2.

Hey, I just started the course and wanted to know: what is the purpose of the line np.random.seed(2) in lesson1-pets?

The explanation says: "Set the random seed to two to guarantee that the same validation set is chosen every time. This will give you consistent results with what you see in the lesson video."

My guess is that ImageDataBunch selects some pictures randomly as the validation set to test the model after training, and this line of code makes that selection reproducible. If so, why isn't the random seeding included inside the ImageDataBunch constructor?

I'm trying to find the mistake, but I can't.

I have the following code

import matplotlib.pyplot as plt
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.torch_imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
from random import sample
from itertools import chain
from sklearn.metrics import confusion_matrix

# use first GPU if you have many

PATH = "/Users/jorgemariomartinez/Documents/Tesis Maestria/Python/Codigos articulos/data/pics/"  # path to the downloaded pictures and the .csv files with val sets
sz = 224              # resize images to this px by px
arch = resnext101_64  # pre-trained network choice
bs = 200              # batch size for minibatches

def get_val_cv_byclass(label_csv):
    # sample 20% of each class's row range as validation indices
    label_df = pd.read_csv(label_csv)
    val_idxs = []
    for x in label_df['class'].unique():  # should be class but reversed column labels
        start = label_df.index[label_df['class'] == x].tolist()[0]
        end = start + len(label_df.index[label_df['class'] == x].tolist()) - 1
        n_sample = int(round((end - start) * 0.2, 0))
        val_idxs.append(sample(range(start, end), n_sample))
    val_idxs = list(chain.from_iterable(val_idxs))
    return val_idxs

def get_val_idx_fromfile(validx_csv):
    # read precomputed validation indices from a one-column csv
    validx_df = pd.read_csv(validx_csv, header=None)
    return validx_df[0].tolist()

def get_data(sz, bs, val_idxs, label_csv):  # sz: image size, bs: batch size
    tfms = tfms_from_model(arch, sz, aug_tfms=transforms_top_down, max_zoom=1.1)
    data = ImageClassifierData.from_csv(PATH, 'train2', label_csv,
                                        val_idxs=val_idxs, suffix='.png', tfms=tfms, bs=bs, num_workers=3)
    return data if sz > 300 else data.resize(340, 'tmp')

label_csv = f'{PATH}3cls_rmsaltol.csv'
#n = len(list(open(label_csv))) - 1
vacc = []
reps = 5
start = 0
bs = 200
valididx_base = '3cls_val_ids'

for rep in range(reps):
    print(rep + start)
    val_idxs = get_val_idx_fromfile(f'{PATH}{valididx_base}{rep + start}.csv')
    data = get_data(sz, bs, val_idxs, label_csv)
    learn = ConvLearner.pretrained(arch, data, precompute=True)
    val_loss, val_acc = learn.fit(1e-2, 10, cycle_len=1, cycle_mult=2)
    vacc.append(val_acc)
print('3 class average, cyclic learning')
print(np.mean(vacc))
print(np.std(vacc))

But when I try to run the code, it produces the following error:

0
HBox(children=(FloatProgress(value=0.0, max=6.0), HTML(value='')))

HBox(children=(FloatProgress(value=0.0, description='Epoch', max=1023.0, style=ProgressStyle(description_width...
epoch      trn_loss   val_loss   accuracy


AttributeError                            Traceback (most recent call last)
AttributeError: 'float' object has no attribute 'rint'

The above exception was the direct cause of the following exception:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     12 data = get_data(sz, bs, val_idxs, label_csv)
     13 learn = ConvLearner.pretrained(arch, data, precompute=True)
---> 14 val_loss, val_acc = learn.fit(1e-2, 10, cycle_len=1, cycle_mult=2)
     15 vacc.append(val_acc)
     16 print('3 class average, cyclic learning')

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/fastai/learner.py in fit(self, lrs, n_cycle, wds, **kwargs)
    285         self.sched = None
    286         layer_opt = self.get_layer_opt(lrs, wds)
--> 287         return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
    288
    289     def warm_up(self, lr, wds=None):

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
    232             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
    233             swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 234             swa_eval_freq=swa_eval_freq, **kwargs)
    235
    236     def get_layer_groups(self): return self.models.get_layer_groups()

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/fastai/model.py in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, **kwargs)
    158
    159         if epoch == 0: print(layout.format(*names))
--> 160         print_stats(epoch, [debias_loss] + vals)
    161         ep_vals = append_stats(ep_vals, epoch, [debias_loss] + vals)
    162         if stop: break

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/fastai/model.py in print_stats(epoch, values, decimals)
    171 def print_stats(epoch, values, decimals=6):
    172     layout = "{!s:^10}" + " {!s:10}" * len(values)
--> 173     values = [epoch] + list(np.round(values, decimals))
    174     print(layout.format(*values))
    175

<__array_function__ internals> in round_(*args, **kwargs)

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py in round_(a, decimals, out)
   3597     around : equivalent function; see for details.
   3598     """
-> 3599     return around(a, decimals=decimals, out=out)
   3600
   3601

<__array_function__ internals> in around(*args, **kwargs)

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py in around(a, decimals, out)
   3222
   3223     """
-> 3224     return _wrapfunc(a, 'round', decimals=decimals, out=out)
   3225
   3226

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     56     bound = getattr(obj, method, None)
     57     if bound is None:
---> 58         return _wrapit(obj, method, *args, **kwds)
     59
     60     try:

~/opt/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py in _wrapit(obj, method, *args, **kwds)
     45     except AttributeError:
     46         wrap = None
---> 47     result = getattr(asarray(obj), method)(*args, **kwds)
     48     if wrap:
     49         if not isinstance(result, mu.ndarray):

TypeError: loop of ufunc does not support argument 0 of type float which has no callable rint method

Thanks!

Hi, I suggest you post it in the Part 1 (2019) section. In addition, I suggest you share the notebook (on Colab if possible) so people can debug your issue.

np.random.seed(2) is set for you: the course wants you to get a reproducible result that matches the lesson video. But if you keep watching the videos, you will see that relying on it is not ideal, because you should be getting a mean of 0 and a std of 1 even though it is random.
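To make the reproducibility point concrete, here is a minimal sketch (plain NumPy; the permutation is just a stand-in for the random validation split that ImageDataBunch draws):

import numpy as np

np.random.seed(2)
split_a = np.random.permutation(100)[:20]  # stand-in for randomly chosen validation indices

np.random.seed(2)
split_b = np.random.permutation(100)[:20]  # re-seeding reproduces the exact same draw

assert (split_a == split_b).all()  # identical "validation set" on every run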

Hey there, first post here just want to introduce myself.

I graduated with an accounting degree and couldn't find a job in the field. I spent years working dead-end jobs, and since May 2019 I have been in Lambda School's data science and machine learning program. I'm hoping the second time is a charm, since I don't want to go back to minimum-wage work.

I just finished the first lesson video and it’s material I’ve seen before. I just have to run the code and tinker around with the variables.
