You could also try the Ranger optimizer + fit_flat_cos.
…
I’m trying to use RandomErasing() and apply it only to the input images, but it gets applied to both the input and the target.
Someone tried to do the same here:
I found out that this augmentation is being called twice:
- once for all of the input images in a batch;
- and another time for all of the target images in the batch.
I need it to be called only once, for the input.
Here’s the source code:
# Cell
def cutout_gaussian(x, areas):
    "Replace all `areas` in `x` with N(0,1) noise"
    chan,img_h,img_w = x.shape[-3:]
    for rl,rh,cl,ch in areas: x[...,rl:rh, cl:ch].normal_()
    return x

# Cell
def _slice(area, sz):
    bound = int(round(math.sqrt(area)))
    loc = random.randint(0, max(sz-bound, 0))
    return loc,loc+bound

# Cell
class RandomErasing(RandTransform):
    "Randomly selects a rectangle region in an image and randomizes its pixels."
    order = 100 # After Normalize
    def __init__(self, p=0.5, sl=0., sh=0.3, min_aspect=0.3, max_count=1):
        store_attr()
        super().__init__(p=p)
        self.log_ratio = (math.log(min_aspect), math.log(1/min_aspect))
    def _bounds(self, area, img_h, img_w):
        r_area = random.uniform(self.sl,self.sh) * area
        aspect = math.exp(random.uniform(*self.log_ratio))
        return _slice(r_area*aspect, img_h) + _slice(r_area/aspect, img_w)
    def encodes(self,x:TensorRawImage):
        count = random.randint(1, self.max_count)
        _,img_h,img_w = x.shape[-3:]
        area = img_h*img_w/count
        areas = [self._bounds(area, img_h, img_w) for _ in range(count)]
        return cutout_gaussian(x, areas)
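As an aside on how _bounds picks the rectangle: _slice turns an "area" argument into a side of length sqrt(area), so the two calls together produce a box of roughly r_area pixels with a height/width ratio of aspect. A quick standalone check, reimplementing _slice with illustrative numbers:

```python
import math, random

# standalone reimplementation of _slice for the check
def _slice(area, sz):
    bound = int(round(math.sqrt(area)))          # side length is sqrt(area)
    loc = random.randint(0, max(sz - bound, 0))  # random location within bounds
    return loc, loc + bound

r_area, aspect, img_h, img_w = 5000.0, 1.5, 224, 224  # illustrative numbers
rl, rh = _slice(r_area * aspect, img_h)  # rows: height ~ sqrt(r_area*aspect)
cl, ch = _slice(r_area / aspect, img_w)  # cols: width  ~ sqrt(r_area/aspect)
erased = (rh - rl) * (ch - cl)
# sqrt(r_area*aspect) * sqrt(r_area/aspect) == r_area, up to integer rounding
print(rh - rl, ch - cl, erased)
```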
Maybe the solution is to change the source code somewhere so that it applies only to the input. Any idea?
I think the problem is that the transform is being type-dispatched, so since both x and y are images, it is being applied to both. One alternative is to have a separate type for y images so it doesn’t get applied there, but that may be too complicated. I think this solution is probably easier:
It works because it redefines __call__, which usually checks the type of the data and applies the transform based on type dispatch, but here it will be applied however you define it.
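The type-dispatch idea can be sketched outside fastai with functools.singledispatch; the class names below are hypothetical stand-ins, not the real fastai types:

```python
from functools import singledispatch

# hypothetical stand-ins for fastai's tensor types (not the real classes)
class TensorImage(list): pass    # plays the role of the input image
class TensorTarget(list): pass   # a distinct type for the target image

@singledispatch
def encodes(x):
    return x                     # default: leave other types untouched

@encodes.register
def _(x: TensorImage):
    return TensorImage([0] * len(x))  # "erase" inputs only

x, y = TensorImage([1, 2, 3]), TensorTarget([1, 2, 3])
print(encodes(x), encodes(y))    # only x is zeroed; y passes through
```

Because TensorTarget is its own type, dispatch falls through to the default and the target is never touched; this is the "separate type for y images" approach.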
Thanks!
If I use his solution (which I had also thought about doing), but I still want the RandomErasing mechanism, should I do something like this?
# Cell
def cutout_gaussian(x, areas):
    "Replace all `areas` in `x` with N(0,1) noise"
    chan,img_h,img_w = x.shape[-3:]
    for rl,rh,cl,ch in areas: x[...,rl:rh, cl:ch].normal_()
    return x

# Cell
def _slice(area, sz):
    bound = int(round(math.sqrt(area)))
    loc = random.randint(0, max(sz-bound, 0))
    return loc,loc+bound

# Cell
class RandomErasing(RandTransform):
    "Randomly selects a rectangle region in an image and randomizes its pixels."
    order = 100 # After Normalize
    def __init__(self, p=0.5, sl=0., sh=0.3, min_aspect=0.3, max_count=1):
        store_attr()
        super().__init__(p=p)
        self.log_ratio = (math.log(min_aspect), math.log(1/min_aspect))
    def _bounds(self, area, img_h, img_w):
        r_area = random.uniform(self.sl,self.sh) * area
        aspect = math.exp(random.uniform(*self.log_ratio))
        return _slice(r_area*aspect, img_h) + _slice(r_area/aspect, img_w)
    def __call__(self, b, **kwargs):
        x,y = b
        count = random.randint(1, self.max_count)
        _,img_h,img_w = x.shape[-3:]
        area = img_h*img_w/count
        areas = [self._bounds(area, img_h, img_w) for _ in range(count)]
        return cutout_gaussian(x, areas), y
Or am I still missing something critical here because of a lack of understanding of __call__ or encodes()? In other words, could I give up on either one of these?
I think that should work… Try it out and let us know…
Sadly it didn’t work… It seems like __call__ wasn’t even called.
I tried to get smarter with this tutorial:
It explains crucial things like how to apply a transform only on the train dataset, but not the valid dataset. However, it doesn’t really explain applying a transform only on the input.
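For reference, the train/valid gating the tutorial describes boils down to a split_idx check. Here is a minimal plain-Python sketch of that mechanism (not fastai's actual implementation):

```python
# split_idx=None means "run on every split"; 0 means train only, 1 valid only
class AddOne:
    split_idx = None
    def __call__(self, x): return x + 1

class TrainOnlyAddOne(AddOne):
    split_idx = 0  # only applied on the training split

def run_pipeline(tfms, x, split):
    # apply each transform unless its split_idx excludes the current split
    for t in tfms:
        if t.split_idx is None or t.split_idx == split:
            x = t(x)
    return x

tfms = [AddOne(), TrainOnlyAddOne()]
print(run_pipeline(tfms, 0, split=0))  # train: both run -> 2
print(run_pipeline(tfms, 0, split=1))  # valid: only the first runs -> 1
```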
I also found this advice:
Followed what you said:
I think the problem is that the transform is being type-dispatched, so since both x and y are images, it is being applied to both.
It seems you were right, and the other person found a way to solve it by building on that basis.
I also found this topic:
I think it inspired me toward a good solution.
My idea is to add a flag on my TensorRawImage that holds either "input" or "target". The transform would then check whether it's dealing with an input or a target, just like it checks whether it's running on the train or valid set (via the split_idx flag).
I will post a nice solution here if it works.
Edit: Wow, I almost made it. It was very challenging, but I ran into a problem I can't solve.
Huh, that’s interesting; I would have thought overriding __call__ would have worked.
But yeah the other solution does what I mentioned before: having a separate type for y images so it doesn’t get applied there.
That should work, I think I have done it myself in the past too. Let me know how your flag idea goes…
The thing with the flag is that I can’t pass it from the equivalent Image.Image object (mine is RawObj) to TensorImage (mine is TensorRawImage).
If I could, then inside RandomErasing() I’d only have to add an if clause that checks:
if x.flag=="input":
    < rest of the same code >
    return cutout_gaussian(x)
else:
    return x
I can make PILImage inherit something from Image.Image (and equivalently for my new objects), but I can’t make TensorImage inherit from PILImage (and accordingly for my objects).
It’s way beyond my knowledge of the PyTorch/fastai source code, but if there were a way to do so, I’d easily be able to pass that flag from Image.Image to TensorImage for each input and target accordingly.
Maybe @Jeremy could enlighten us here…
They’re not the same thing. Discriminative LR means different LRs for different layers; fit_one_cycle is the 1-cycle scheduler.
Please don’t at-mention me for non-admin things.
That’s what I’d do.
Yeah, it works. It's not the most elegant solution, but hey, it works.
My inner programmer likes to find scalable solutions for further use (other Transforms, other TensorTypes, fewer ad-hoc declarations, etc.).
Thanks though!
Please don’t at-mention me for non-admin things.
Thanks for letting me know. I only tagged you because my solution could be scalable, but anyway, I solved my case.
Hi Florian,
Thank you very much for sharing your experience. I agree that using CLIP would probably be a more practical approach to the problem, but I was looking for something that could be trained in an unsupervised manner on any dataset (not only ImageNet-like). I confirm that self_supervised is a great library – simple and easy to work with.
I agree with everything you wrote but I am wondering if there may be a better/simpler way. I found two interesting papers that fueled my interest in this:
- The Unreasonable Effectiveness of Deep Features as a Perceptual Metric – they show that all the (un-pooled) multi-scale features are very good predictors of perceptual similarity
- On the surprising tradeoff between ImageNet accuracy and perceptual similarity – the worse the model you train, the better its perceptual match to humans
I, just like you, tried several pooling methods and did not find any to be much better than the rest. I then tested the localized ResNet18 features from the 14x14x256 layer and they were surprisingly accurate at finding similar parts of other images. That’s why I suspect that the pooling methods do not make any real sense and in fact actively reduce the capability of the network to figure out the image content (unless we “train around” them by fine-tuning the whole network).
I remember Jeremy saying in one of the previous courses that you should strive to make the task “easy to learn” for the network – if calculating element-wise avg or max of the local features does not have sensible semantic meaning then I suspect we are making the task harder, not easier.
PS. I think NetVLAD is based on a similar observation but I have yet to dive deeper into this approach.
I want to learn more about the defaults in the fastai library: ideally, how they are set and why.
One topic of interest to me is the optimizer. Reading chapter 16 of the text ( https://colab.research.google.com/github/fastai/fastbook/blob/master/16_accel_sgd.ipynb#scrollTo=W8YIFHdHRGVf) it states: “In fastai, Adam is the default optimizer we use since it allows faster training”
Specific questions:
- Where in the fastai library can I see that this is the default optimizer?
- How do I see what other defaults are chosen for us in the fastai library?
I can see fastai code for the optimizers here: fastai/12_optimizer.ipynb at master · fastai/fastai · GitHub, but am not seeing or understanding how the defaults are set up.
I’m not sure if this will be covered in the 2022 version of the course, so this might not even be the right forum to post this type of question.
So, what I did to try to figure this out was a quick grep for the terms adam & Adam as a starting point. That led me to the top-level learner.py, where I found the default value for opt_func set to Adam:
https://github.com/fastai/fastai/blob/master/fastai/learner.py#L85
If you look at the top of the file, you’ll notice it’s generated from nbs/13a_learner.ipynb, which you can then browse further.
Since the learner is an entry point for a lot of "configuration" settings, it’s not a bad place to start looking into all the pre-defined arguments; those are the default settings for the learner object. Similarly, there are default settings (usually in the form of default argument values) for other functions/objects as well.
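One generic way to enumerate such defaults programmatically is inspect.signature; the same call on fastai's Learner (if installed) shows opt_func defaulting to Adam, as found above. A self-contained sketch on a stand-in function:

```python
import inspect

# stand-in function with some illustrative defaults; replace with any callable,
# e.g. fastai's Learner, to list its pre-defined argument values
def make_learner(opt_func="Adam", lr=1e-3, wd=None):
    pass

# collect every parameter that has a default value
defaults = {name: p.default
            for name, p in inspect.signature(make_learner).parameters.items()
            if p.default is not inspect.Parameter.empty}
print(defaults)  # {'opt_func': 'Adam', 'lr': 0.001, 'wd': None}
```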
Sometimes the best thing to do is just start searching/grepping in the codebase (given this is a non-beginner discussion). I can recommend tools such as grep, ag, or rg to help with this, or anything built into your editor's "Search in project" feature.
I hope this was somewhat helpful.
@suvash - Thanks! I was just using the github search feature and wasn’t able to find that on my own.
I personally think it would be good to identify at a high level all default decisions made by a library. Part of the cool benefit of the fastai library is the work of many people over many years identifying good and sensible defaults that work well for a wide range of problems. Knowing those defaults and the reasons behind them can provide knowledge that may transfer to other deep learning problems or libraries.
That would certainly be great to have. It would be thousands of hours of work though – so as folks in the community study the code and learn about fastai, it’s very helpful if they write posts about what they learn. There are already many posts about these topics out there. For instance, @sgugger wrote about why AdamW was chosen as the default here:
Hi Jakub,
you are right, the pooling layers don’t make sense if you want the best results. Basically you are losing the spatial information about where in the image a feature is located: you still know which features are present in the image, but not where. But after pooling you have only a 256-dim vector instead of a 50176-dim vector :D.
Did you try to just flatten the ResNet18 features and compute the similarities? I guess that should give pretty good results, but it won’t scale that well if you have to store a 50176-dim vector for each image. Probably the way to go is Vision Transformers: no pooling layers and (depending on the model) a 768-dim output vector. CLIP was trained with ResNet50 and ViT; afaik ViT gave the better results, so I’d suspect ViT gives better representations when trained correctly (contrastive).
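In case it's useful, the flatten-and-compare idea is just cosine similarity on the flattened feature map. A dependency-free sketch (in practice you'd use torch tensors and real features; the vector size here just mirrors the 14x14x256 layer and the data is synthetic):

```python
import math, random

def cosine_sim(a, b):
    # cosine similarity between two flat feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

random.seed(0)
n = 14 * 14 * 256                                  # size of a flattened 14x14x256 map
feat  = [random.random() for _ in range(n)]        # "features" of image A
close = [v + random.gauss(0, 0.05) for v in feat]  # a slightly perturbed copy of A
other = [random.random() for _ in range(n)]        # an unrelated "image"
print(cosine_sim(feat, close) > cosine_sim(feat, other))  # the near-duplicate wins
```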
Thanks for the link - started reading the post and so far seems to be well written and easy to understand at my current level of knowledge.
I didn’t realize that it would take so much effort to document and explain defaults, but after seeing the AdamW post, the amount of effort required seems very large - in many aspects including development, testing, documenting and sharing knowledge with others.
Each of us becoming advocates sounds like the best way to continue the great fastai work!
Hey folks,
I’m trying to implement a callback that visualizes my image predictions. Here’s my code:
class VisualisePredictions(Callback):
    "Visualize predictions"
    order = ProgressCallback.order+1
    def after_epoch(self, **kwargs):
        dl = self.dls[1]
        b = dl.one_batch()
        _,_,prd = self.get_preds(dl=[b], with_decoded=True)
        dec = self.dls.after_batch.decode((TensorRawImage(prd),))[0][0]
        show_raw_image(dec) # my own function: converts the tensor to an np.float32 array, rescales from 65536 to 255, then displays it as an image
It was based on Learner.show_results().
It works perfectly fine before/after training, but while fit_one_cycle() is running, Google Colab keeps getting stuck.
It looks like there is an endless loop or something.
Here’s the traceback. Any idea what I could do to solve this problem?
41 frames
/usr/local/lib/python3.7/dist-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
114 scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
115 'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 116 self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
117
118 # Cell
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
220 self.opt.set_hypers(lr=self.lr if lr is None else lr)
221 self.n_epoch = n_epoch
--> 222 self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
223
224 def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
162
163 def _with_events(self, f, event_type, ex, final=noop):
--> 164 try: self(f'before_{event_type}'); f()
165 except ex: self(f'after_cancel_{event_type}')
166 self(f'after_{event_type}'); final()
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_fit(self)
211 for epoch in range(self.n_epoch):
212 self.epoch=epoch
--> 213 self._with_events(self._do_epoch, 'epoch', CancelEpochException)
214
215 def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False):
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
164 try: self(f'before_{event_type}'); f()
165 except ex: self(f'after_cancel_{event_type}')
--> 166 self(f'after_{event_type}'); final()
167
168 def all_batches(self):
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in __call__(self, event_name)
140
141 def ordered_cbs(self, event): return [cb for cb in self.cbs.sorted('order') if hasattr(cb, event)]
--> 142 def __call__(self, event_name): L(event_name).map(self._call_one)
143
144 def _call_one(self, event_name):
/usr/local/lib/python3.7/dist-packages/fastcore/foundation.py in map(self, f, gen, *args, **kwargs)
153 def range(cls, a, b=None, step=None): return cls(range_of(a, b=b, step=step))
154
--> 155 def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
156 def argwhere(self, f, negate=False, **kwargs): return self._new(argwhere(self, f, negate, **kwargs))
157 def argfirst(self, f, negate=False): return first(i for i,o in self.enumerate() if f(o))
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in map_ex(iterable, f, gen, *args, **kwargs)
777 res = map(g, iterable)
778 if gen: return res
--> 779 return list(res)
780
781 # Cell
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in __call__(self, *args, **kwargs)
762 if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
763 fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 764 return self.func(*fargs, **kwargs)
765
766 # Cell
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _call_one(self, event_name)
144 def _call_one(self, event_name):
145 if not hasattr(event, event_name): raise Exception(f'missing {event_name}')
--> 146 for cb in self.cbs.sorted('order'): cb(event_name)
147
148 def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
/usr/local/lib/python3.7/dist-packages/fastai/callback/core.py in __call__(self, event_name)
55 res = None
56 if self.run and _run:
---> 57 try: res = getattr(self, event_name, noop)()
58 except (CancelBatchException, CancelEpochException, CancelFitException, CancelStepException, CancelTrainException, CancelValidException): raise
59 except Exception as e:
<ipython-input-23-a2093953dc27> in after_epoch(self, **kwargs)
6 b=dl.one_batch()
7 try:
----> 8 _,_,prd=self.get_preds(dl=[b],with_decoded=True)
9 except TypeError as e:
10 raise TypeError(f"problem with get_preds")
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
262 if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i]))
263 if reorder and hasattr(dl, 'get_idxs'): res = nested_reorder(res, tensor(idxs).argsort())
--> 264 return tuple(res)
265 self._end_cleanup()
266
/usr/local/lib/python3.7/dist-packages/fastcore/xtras.py in __exit__(self, *args, **kwargs)
493 def __init__(self, mgrs): self.default,self.stack = L(mgrs),ExitStack()
494 def __enter__(self): self.default.map(self.stack.enter_context)
--> 495 def __exit__(self, *args, **kwargs): self.stack.__exit__(*args, **kwargs)
496
497 # Cell
/usr/lib/python3.7/contextlib.py in __exit__(self, *exc_details)
522 # set-up context
523 fixed_ctx = exc_details[1].__context__
--> 524 raise exc_details[1]
525 except BaseException:
526 exc_details[1].__context__ = fixed_ctx
/usr/lib/python3.7/contextlib.py in __exit__(self, *exc_details)
507 assert is_sync
508 try:
--> 509 if cb(*exc_details):
510 suppressed_exc = True
511 pending_raise = False
/usr/lib/python3.7/contextlib.py in _exit_wrapper(exc_type, exc, tb)
375 def _create_exit_wrapper(cm, cm_exit):
376 def _exit_wrapper(exc_type, exc, tb):
--> 377 return cm_exit(cm, exc_type, exc, tb)
378 return _exit_wrapper
379
/usr/local/lib/python3.7/dist-packages/fastcore/xtras.py in __exit__(self, *args, **kwargs)
493 def __init__(self, mgrs): self.default,self.stack = L(mgrs),ExitStack()
494 def __enter__(self): self.default.map(self.stack.enter_context)
--> 495 def __exit__(self, *args, **kwargs): self.stack.__exit__(*args, **kwargs)
496
497 # Cell
/usr/lib/python3.7/contextlib.py in __exit__(self, *exc_details)
522 # set-up context
523 fixed_ctx = exc_details[1].__context__
--> 524 raise exc_details[1]
525 except BaseException:
526 exc_details[1].__context__ = fixed_ctx
/usr/lib/python3.7/contextlib.py in __exit__(self, type, value, traceback)
128 value = type()
129 try:
--> 130 self.gen.throw(type, value, traceback)
131 except StopIteration as exc:
132 # Suppress StopIteration *unless* it's the same exception that
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in replacing_yield(o, attr, val)
22 "Context manager to temporarily replace an attribute"
23 old = getattr(o,attr)
---> 24 try: yield setattr(o,attr,val)
25 finally: setattr(o,attr,old)
26
/usr/lib/python3.7/contextlib.py in __exit__(self, *exc_details)
507 assert is_sync
508 try:
--> 509 if cb(*exc_details):
510 suppressed_exc = True
511 pending_raise = False
/usr/lib/python3.7/contextlib.py in _exit_wrapper(exc_type, exc, tb)
375 def _create_exit_wrapper(cm, cm_exit):
376 def _exit_wrapper(exc_type, exc, tb):
--> 377 return cm_exit(cm, exc_type, exc, tb)
378 return _exit_wrapper
379
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in __exit__(self, exc_type, exc_value, tb)
224 def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None
225 def __enter__(self): self(_before_epoch); return self
--> 226 def __exit__(self, exc_type, exc_value, tb): self(_after_epoch)
227
228 def validation_context(self, cbs=None, inner=False):
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in __call__(self, event_name)
140
141 def ordered_cbs(self, event): return [cb for cb in self.cbs.sorted('order') if hasattr(cb, event)]
--> 142 def __call__(self, event_name): L(event_name).map(self._call_one)
143
144 def _call_one(self, event_name):
/usr/local/lib/python3.7/dist-packages/fastcore/foundation.py in map(self, f, gen, *args, **kwargs)
153 def range(cls, a, b=None, step=None): return cls(range_of(a, b=b, step=step))
154
--> 155 def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
156 def argwhere(self, f, negate=False, **kwargs): return self._new(argwhere(self, f, negate, **kwargs))
157 def argfirst(self, f, negate=False): return first(i for i,o in self.enumerate() if f(o))
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in map_ex(iterable, f, gen, *args, **kwargs)
777 res = map(g, iterable)
778 if gen: return res
--> 779 return list(res)
780
781 # Cell
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in __call__(self, *args, **kwargs)
762 if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
763 fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 764 return self.func(*fargs, **kwargs)
765
766 # Cell
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _call_one(self, event_name)
144 def _call_one(self, event_name):
145 if not hasattr(event, event_name): raise Exception(f'missing {event_name}')
--> 146 for cb in self.cbs.sorted('order'): cb(event_name)
147
148 def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
/usr/local/lib/python3.7/dist-packages/fastai/callback/core.py in __call__(self, event_name)
55 res = None
56 if self.run and _run:
---> 57 try: res = getattr(self, event_name, noop)()
58 except (CancelBatchException, CancelEpochException, CancelFitException, CancelStepException, CancelTrainException, CancelValidException): raise
59 except Exception as e:
<ipython-input-23-a2093953dc27> in after_epoch(self, **kwargs)
6 b=dl.one_batch()
7 try:
----> 8 _,_,prd=self.get_preds(dl=[b],with_decoded=True)
9 except TypeError as e:
10 raise TypeError(f"problem with get_preds")
/usr/local/lib/python3.7/dist-packages/fastai/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
253 ctx_mgrs = self.validation_context(cbs=L(cbs)+[cb], inner=inner)
254 if with_loss: ctx_mgrs.append(self.loss_not_reduced())
--> 255 with ContextManagers(ctx_mgrs):
256 self._do_epoch_validate(dl=dl)
257 if act is None: act = getattr(self.loss_func, 'activation', noop)
/usr/local/lib/python3.7/dist-packages/fastcore/xtras.py in __enter__(self)
492 "Wrapper for `contextlib.ExitStack` which enters a collection of context managers"
493 def __init__(self, mgrs): self.default,self.stack = L(mgrs),ExitStack()
--> 494 def __enter__(self): self.default.map(self.stack.enter_context)
495 def __exit__(self, *args, **kwargs): self.stack.__exit__(*args, **kwargs)
496
/usr/local/lib/python3.7/dist-packages/fastcore/foundation.py in map(self, f, gen, *args, **kwargs)
153 def range(cls, a, b=None, step=None): return cls(range_of(a, b=b, step=step))
154
--> 155 def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
156 def argwhere(self, f, negate=False, **kwargs): return self._new(argwhere(self, f, negate, **kwargs))
157 def argfirst(self, f, negate=False): return first(i for i,o in self.enumerate() if f(o))
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in map_ex(iterable, f, gen, *args, **kwargs)
777 res = map(g, iterable)
778 if gen: return res
--> 779 return list(res)
780
781 # Cell
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in __call__(self, *args, **kwargs)
762 if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
763 fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 764 return self.func(*fargs, **kwargs)
765
766 # Cell
/usr/lib/python3.7/contextlib.py in enter_context(self, cm)
425 _cm_type = type(cm)
426 _exit = _cm_type.__exit__
--> 427 result = _cm_type.__enter__(cm)
428 self._push_cm_exit(cm, _exit)
429 return result
/usr/local/lib/python3.7/dist-packages/fastcore/xtras.py in __enter__(self)
492 "Wrapper for `contextlib.ExitStack` which enters a collection of context managers"
493 def __init__(self, mgrs): self.default,self.stack = L(mgrs),ExitStack()
--> 494 def __enter__(self): self.default.map(self.stack.enter_context)
495 def __exit__(self, *args, **kwargs): self.stack.__exit__(*args, **kwargs)
496
/usr/local/lib/python3.7/dist-packages/fastcore/foundation.py in map(self, f, gen, *args, **kwargs)
153 def range(cls, a, b=None, step=None): return cls(range_of(a, b=b, step=step))
154
--> 155 def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
156 def argwhere(self, f, negate=False, **kwargs): return self._new(argwhere(self, f, negate, **kwargs))
157 def argfirst(self, f, negate=False): return first(i for i,o in self.enumerate() if f(o))
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in map_ex(iterable, f, gen, *args, **kwargs)
777 res = map(g, iterable)
778 if gen: return res
--> 779 return list(res)
780
781 # Cell
/usr/local/lib/python3.7/dist-packages/fastcore/basics.py in __call__(self, *args, **kwargs)
762 if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
763 fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 764 return self.func(*fargs, **kwargs)
765
766 # Cell
/usr/lib/python3.7/contextlib.py in enter_context(self, cm)
425 _cm_type = type(cm)
426 _exit = _cm_type.__exit__
--> 427 result = _cm_type.__enter__(cm)
428 self._push_cm_exit(cm, _exit)
429 return result
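From the traceback, after_epoch calls get_preds, which runs a validation pass that fires after_epoch again, hence the apparent endless loop. One possible fix (a sketch, not tested against fastai) is a re-entrancy guard in the callback; the mock below reproduces just the event flow in plain Python, not the actual fastai classes:

```python
# plain-Python mock of the event flow: after_epoch calls get_preds, and
# get_preds re-fires callback events including after_epoch. A re-entrancy
# guard makes the nested invocation a no-op instead of recursing forever.
class VisualisePredictions:
    def __init__(self):
        self._running = False
        self.visualised = 0

    def get_preds(self):
        self.after_epoch()        # stands in for the validation pass re-firing events

    def after_epoch(self):
        if self._running:         # nested call coming from get_preds: bail out
            return
        self._running = True
        try:
            self.visualised += 1  # stands in for the decode/show logic
            self.get_preds()
        finally:
            self._running = False

cb = VisualisePredictions()
cb.after_epoch()
print(cb.visualised)  # 1 -- the nested event was suppressed
```

In the real callback the same pattern would wrap the get_preds call; it may also be worth checking whether get_preds's inner argument is meant for exactly this nested-call situation.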