Default to completely disable progress bar

I am curious if others have a need to disable all progress bar updates. I find that they can get in the way when I am trying to test out other features.

I found this link for switching to the “console” version, which is helpful, but I actually don’t want any updates at all. Starting from that link, I could get it to print just the headers for epoch, train_loss, valid_loss, and time, but even that seems like it should be suppressible. To get that far, I call this code block to disable/enable the bars.

import fastai.basic_train
import fastprogress

def disable_progress():
    # Suppress bar output and force the console (non-notebook) bar classes
    fastprogress.fastprogress.NO_BAR = True
    master_bar, progress_bar = fastprogress.force_console_behavior()
    fastai.basic_train.master_bar, fastai.basic_train.progress_bar = master_bar, progress_bar

def enable_progress():
    # Restore the original notebook bar classes
    fastprogress.fastprogress.NO_BAR = False
    fastai.basic_train.master_bar, fastai.basic_train.progress_bar = fastprogress.master_bar, fastprogress.progress_bar

Is it worth setting up a default and/or a flag in fastprogress that fully disables all output? I am happy to do a PR on this if others might find it useful.


I would go in the direction of a context manager.

So in the end it would look something like:

with progress_disabled():
    BLAHBLAHBLAH

I would definitely find it useful, and like the idea of making it work in a context.

I’d just add that there seem at this point to be quite a few references to these bars that need to be overridden. It took me a while to find the one I needed (fastai.text.data.[master|progress]_bar). Here are some I’ve found, though they may not all need to be overridden:

fastprogress.fastprogress.NO_BAR = True
master_bar, progress_bar = fastprogress.force_console_behavior()
fastai.basic_train.master_bar, fastai.basic_train.progress_bar = master_bar, progress_bar
fastai.basic_data.master_bar, fastai.basic_data.progress_bar = master_bar, progress_bar
dataclass.master_bar, dataclass.progress_bar = master_bar, progress_bar
fastai.text.master_bar, fastai.text.progress_bar = master_bar, progress_bar
fastai.text.data.master_bar, fastai.text.data.progress_bar = master_bar, progress_bar
fastai.core.master_bar, fastai.core.progress_bar = master_bar, progress_bar

Also, it would be good if this works without import fastai, since the recommended import is from fastai.something import * where something is text, tabular, image etc.


This is out in the dev version and will be part of the next release:

https://docs.fast.ai/utils.mod_display.html


Hi @bfarzin,

I use load_learner() in a jupyter notebook for text classification (after using learn.export()). I can see that it uses the progress bar.

This is not a problem in a notebook, but it becomes one when converting the notebook to a python script (using import fire): when I run the script in a terminal, it tries to launch the progress bar and I get the following error message:

/opt/anaconda3/lib/python3.7/site-packages/fastprogress/fastprogress.py:102: 
UserWarning: Your generator is empty.
warn("Your generator is empty.")
(....)
filled_len = int(self.length * val // self.total)
ZeroDivisionError: integer division or modulo by zero

I found the origin of the error in fastai/text/data.py at line 463: there is a call to the progress bar (see the code below).

def process(self, ds):
    ds.items = _join_texts(ds.items, self.mark_fields, self.include_bos, self.include_eos)
    ds.items = [apply_rules(t, pre_rules=self.pre_rules, post_rules=self.post_rules) 
                for t in progress_bar(ds.items, leave=False)]
    if self.sp_model is None or self.sp_vocab is None:
        cache_dir = self.train_func(ds.items, ds.path)
        self.sp_model,self.sp_vocab = cache_dir/'spm.model',cache_dir/'spm.vocab'
    if not getattr(self, 'vocab', False): 
        with open(self.sp_vocab, 'r', encoding=self.enc) as f: self.vocab = Vocab([line.split('\t')[0] for line in f.readlines()])
    if self.n_cpus <= 1: ds.items = self._encode_batch(ds.items)
    else:
        with ProcessPoolExecutor(self.n_cpus) as e:
            ds.items = np.array(sum(e.map(self._encode_batch, partition_by_cores(ds.items, self.n_cpus)), []))
    ds.vocab = self.vocab

How could I disable the call of the progress bar when using load_learner() for text?

I cc @sgugger as I think he may be interested in this issue.

Have you managed to find a solution to this issue? I have tried everything mentioned in this forum without any success.

Did you ever find a solution to your problem? We ran into a very similar issue. As you know, I love SentencePiece, but this problem is preventing us from using it in production. My spaCy models work just fine, but when we try the SPM models, we get the same error that you did.

Any tips you have would be a huge help.

Is the problem just that you cannot disable the progress bar? Why does that prevent you from using SP?

There appears to be a progress-bar call in both the SentencePiece and the Tokenize processors. Do both ignore the context manager? Do you have a replicating example I could experiment with?

Thank you for the response. I am not 100% sure what the exact issue was when our developer tried to put it into production. All I know is that it was a division-by-zero error somehow connected to the progress bar and SentencePiece. I was told the issue was the same as what @pierreguillou experienced. It works in the notebook, but when we tried to use it from the CLI it resulted in an error. I'll try to get you more information. Thank you for trying to help us, I appreciate it.
