Hey, thanks for the idea, but it looks like you can't blindly pass a lambda to parallel: at some stage in the pipeline the function gets pickled, and Python refuses to pickle lambdas. There is a way to do it, but it didn't seem advantageous over just writing a named copy of the function, since this is mostly for preprocessing and not production code. I could be implementing it wrong, though.
Here's what I did. The code's a bit messy, but it zips all the args together into tuples, and instead of rewriting my function to accept a tuple (since parallel only takes functions that accept one arg), I make a lambda that unpacks the tuple and calls my function. If I run the lambda manually it works; if I pass it to parallel it blows up, and I get the error at the bottom for every worker.
arg_tuple_list = list(zip(fnames_100[0:100], [path_audio]*100, [path_spectrogram]*100))
func = lambda x: gen_spec_parallel(x[0], x[1], x[2])
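For what it's worth, the usual workaround is to replace the lambda with a module-level named function: multiprocessing pickles functions by qualified name, so a top-level def can be pickled where a lambda can't. Here's a minimal sketch with plain multiprocessing.Pool; gen_spec_parallel is a hypothetical stand-in (the real one writes spectrograms), and the paths are made up:

```python
from multiprocessing import Pool

# Hypothetical stand-in for the real gen_spec_parallel, just so this runs.
def gen_spec_parallel(fname, path_audio, path_spectrogram):
    return (fname, path_audio, path_spectrogram)

# Module-level wrapper: picklable by name, unlike a lambda.
# It takes the single tuple argument and unpacks it for the real function.
def gen_spec_from_tuple(args):
    return gen_spec_parallel(*args)

if __name__ == "__main__":
    arg_tuple_list = [("a.wav", "audio/", "spec/"),
                      ("b.wav", "audio/", "spec/")]
    with Pool(2) as p:
        # Pool.map pickles gen_spec_from_tuple and each tuple for the workers.
        results = p.map(gen_spec_from_tuple, arg_tuple_list)
```

Same idea applies with fastai's parallel: pass the named wrapper instead of the lambda.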
Traceback (most recent call last):
File "/opt/conda/envs/fastai/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/opt/conda/envs/fastai/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
_pickle.PicklingError: Can't pickle <function <lambda> at 0x7f300e427e18>: attribute lookup <lambda> on __main__ failed