Developer chat

Released 1.0.22, which fixes learn.predict and also avoids importing submodules directly into the namespace.

That was a bug - the submodules weren’t meant to be imported directly. Fixed now - data won’t be clobbered. My fix is really ugly, so if any Python experts know how to make our __init__.py files less awful, please let me know :slight_smile:

Hi. Are you accepting PRs from non-core developers? I’ve been looking at the library for the past couple of days to find a way to integrate “observation weights” into the codebase. I think the change would be very minimal and completely confined to the fit() function and its dependencies, validate() and loss_batch(). The gist of the PR would be allowing yb to be a list where the last item is a tensor representing the observation weights.

Forgot to reply to your other post. There’s no tweak needed - a target can already be a list of tensors. You just have to handle it properly in your loss function, that’s all.
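
For example, something like this would work for a target list of [y, weights] - a rough sketch with made-up names, relying on the fact that (if I read loss_batch right) it calls loss_func(out, *yb), so the extra tensor simply arrives as another argument:

import torch.nn.functional as F

def weighted_loss(preds, y, weights):
    # per-sample cross-entropy, scaled by the observation weights
    losses = F.cross_entropy(preds, y, reduction='none')
    return (losses * weights).mean()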

For new features, it’s best to prepare a notebook showing how they work so we can help refactor your code, but otherwise we’re happy to accept PRs from anyone!

This post has been moved to Memory, stability & performance of fastai v1

You’re right! So long as the Dataset returns a list, the fit() and loss_batch() functions will work just fine. However, I would like to propose a one-line change to the validate() function.

I’ve created a notebook for this and bundled it alongside a PR.

I was reading through the fastai code and I came across the Stepper class.

My question is: why are we using a class when all we need is iteration? Can’t we use generators here? They would be beneficial - less code, less memory use, lazy execution.

I wrote a generator which does the same thing:

linear_anneal = lambda start, end, pct: start + (end - start) * pct

def stepper(start, end, n_iter):
    # lazily yield the linearly annealed value for each iteration
    n = 1
    while n <= n_iter:
        yield linear_anneal(start, end, n / n_iter)
        n += 1

step = stepper(1, 15, 100)  # initialize it like this
next(step)                  # use it like this

Moved the post to a separate thread as @stas suggested.

Thanks for taking the lead on starting a focused thread based on my earlier posts, @piotr.czapla. I felt that your title was much broader than the very specific intent of my posts, which was avoiding having to restart the kernel all the time. So I renamed it to something more specific: Getting the most out of your GPU RAM in jupyter notebook.

But please don’t let it prevent you from starting a much more important topic on stability and performance of fastai v1.

Thank you.

Just merged: a huge refactor of the data block API. If you were only using the databunch factory methods, this shouldn’t impact you.
If you were using the data block API, note that the calls to dataset, numericalize and tokenize don’t exist anymore, and that you now have to split your data before labeling it.
If you were using the internal datasets of fastai… learn how to use the data block API very quickly, because those don’t exist anymore.

The basic idea is that, to allow more flexibility, there is no dataset anymore: you explain what your xs and ys are with the data block API, and that’s it. That way, regression (or single classification, or multi-classification) for computer vision has the same underlying class as for text or tabular data.
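
For image classification, the new chain looks roughly like this (a sketch from memory - double-check the exact method names against the refactored code once the docs are updated):

from fastai.vision import *

data = (ImageItemList.from_folder(path)      # what your xs are
        .split_by_folder()                   # split before labeling
        .label_from_folder()                 # what your ys are
        .transform(get_transforms(), size=224)
        .databunch(bs=64))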

Updates to the docs will follow shortly. Lessons should run smoothly.

Does this mean we now have a way to solve all types of ML problems (classification, multi-classification, regression) for all types of data (vision, text, tabular)?

I’m observing that the suggested use of partial functions for metrics leads to misleading results, e.g. in the lesson3-planet nb:

acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = create_cnn(data, arch, metrics=[acc_02, f_score])
epoch  train_loss  valid_loss  accuracy_thresh  fbeta   

the metrics column names are misleading, because they name the default functions rather than the customized partials that were actually used (and the defaults have different thresholds).

There must be a better way to make the names displayed in the results header match the metrics actually used.

The relevant code is:

def on_train_begin(self, epochs:int, pbar:PBar, metrics:MetricFuncList)->None:
    "About to start learning."
    self.state_dict = _get_init_state()
    self.state_dict['n_epochs'],self.state_dict['pbar'],self.state_dict['metrics'] = epochs,pbar,metrics
    names = [(met.name if hasattr(met, 'name') else camel2snake(met.__class__.__name__)) for met in self.metrics]
    self('train_begin', metrics_names=names)

I see we already have an AverageMetric class, so this can now be fixed with a hack:

acc_02 = AverageMetric(partial(accuracy_thresh, thresh=0.2))
acc_02.name = "acc_02"
learn = create_cnn(data, arch, metrics=[acc_02])

Now the metric header is displayed correctly:

epoch  train_loss  valid_loss  acc_02

But perhaps we can add a new wrapper class?

acc_02 = MakeMetric(partial(accuracy_thresh, thresh=0.2), "acc_02")
learn = create_cnn(data, arch, metrics=[acc_02])
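
A minimal sketch of what that wrapper could look like, reusing the existing AverageMetric (MakeMetric is a made-up name):

class MakeMetric(AverageMetric):
    "An AverageMetric that carries an explicit display name."
    def __init__(self, func, name):
        super().__init__(func)
        self.name = name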

I also researched partial(): it’s possible to write a wrapper around partial to inject a name, say under partial_func.__name__, but it still won’t behave like a normal function, which also has __class__.__name__ set - and that can’t be set on a partial object. So this is probably not a good approach.
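
For reference, the idea was something like this (a sketch; it works because functools.partial instances accept arbitrary attributes, including __name__):

from functools import partial

def named_partial(func, name, *args, **kwargs):
    # partial objects have a __dict__, so we can attach a display name
    p = partial(func, *args, **kwargs)
    p.__name__ = name
    return p

acc_02 = named_partial(accuracy_thresh, 'acc_02', thresh=0.2)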

Just realized this major change while watching today’s lesson.

I think the possibility to easily inject our own Dataset classes via the data block API was kind of an important feature!? And it was largely compatible with regular pytorch, so you could reuse dataset classes others had written for pytorch with slight modifications.

So how do I do that now? And what do I do with my own modified Dataset classes?

Hi @sgugger, I believe that the line “self.create_func = open_image” overrides whatever you set as the argument for create_func?

class ImageItemList(ItemList):
    _bunch = ImageDataBunch

    def __post_init__(self):
        super().__post_init__()
        self.sizes = {}
        self.create_func = open_image

To make it use my own, I have to set:
vision.data.open_image = my_own_open_image

You can still use your own datasets and pass them to DataBunch.create; that hasn’t changed.

The data block API now separates the inputs and the outputs into two blocks, because it’s more flexible this way. One block of output (like classification) can be directly used with multiple blocks of inputs (images, texts, tabular rows, etc.).
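
For example, a minimal sketch, where my_train_ds and my_valid_ds stand in for your own pytorch Dataset instances (and assuming DataBunch is importable from fastai.basic_data in your version):

from fastai.basic_data import DataBunch

data = DataBunch.create(my_train_ds, my_valid_ds, bs=64)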

Looks like there is a mistake there, will dig into this at some point today.

Okay, thanks - so DataBunch.create will not be deprecated at some point? I had understood that all the old methods would eventually go away?!

No, the current factory methods will stay (as they are useful for beginners), and DataBunch.create is what we use behind the scenes whenever we build a databunch, so that one will stay too.
