[SOLVED] Feed multiple images to one model

Hi all,

I am trying to build a pipeline that feeds multiple multi-channel microscopy images into a model at once. I still need to figure out how to re-use the model to annotate features, but I haven’t even gotten that far because I’m stumbling on the image feed. The images have 6 channels, so I’m subclassing ImageList and trying to approximate the ImageTuple custom item list tutorial. The problem is that I’m getting a RecursionError (see below). I’ve copied my code into a Gist, and I’m feeding from a dataframe with 2 filepath columns and one label column.
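
In outline, the classes follow the tutorial's ImageTuple pattern, roughly like this (hypothetical names and column names, not the actual Gist code; the full code is in the Gist):

from fastai.vision import *

class MultiChannelImageTuple(ItemBase):
    "One training item: a pair of multi-channel images."
    def __init__(self, img1, img2):
        self.img1, self.img2 = img1, img2
        self.obj, self.data = (img1, img2), [img1.data, img2.data]

class MultiChannelImageTupleList(ImageList):
    @classmethod
    def from_dfs(cls, df, path, col_a='filepath_1', col_b='filepath_2', **kwargs):
        "Build one item per dataframe row from its two filepath columns."
        res = super().from_df(df, path, cols=col_a, **kwargs)
        res.paths_b = [Path(path)/f for f in df[col_b]]
        return res

    def get(self, i):
        img1 = super().get(i)               # opens the first image
        img2 = self.open(self.paths_b[i])   # open() needs overriding for 6-channel files (default assumes RGB)
        return MultiChannelImageTuple(img1, img2)

The error appears as soon as the list's repr is triggered: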

data = MultiChannelImageTupleList.from_dfs(proc_df, data_folder + '/train/')
data


RecursionError Traceback (most recent call last)
~\.conda\envs\pytorch\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()

~\.conda\envs\pytorch\lib\site-packages\IPython\lib\pretty.py in pretty(self, obj)
400 if cls is not object \
401 and callable(cls.__dict__.get('__repr__')):
--> 402 return _repr_pprint(obj, self, cycle)
403
404 return _default_pprint(obj, self, cycle)

~\.conda\envs\pytorch\lib\site-packages\IPython\lib\pretty.py in _repr_pprint(obj, p, cycle)
695 """A pprint that just redirects to the normal repr function."""
696 # Find newlines and replace them with p.break_()
--> 697 output = repr(obj)
698 for idx,output_line in enumerate(output.splitlines()):
699 if idx:

~\.conda\envs\pytorch\lib\site-packages\fastai\data_block.py in __repr__(self)
75 def __repr__(self)->str:
76 items = [self[i] for i in range(min(5,len(self.items)))]
---> 77 return f'{self.__class__.__name__} ({len(self.items)} items)\n{show_some(items)}\nPath: {self.path}'
78
79 def process(self, processor:PreProcessors=None):

~\.conda\envs\pytorch\lib\site-packages\fastai\core.py in show_some(items, n_max, sep)
368 "Return the representation of the first n_max elements in items."
369 if items is None or len(items) == 0: return ''
--> 370 res = sep.join([f'{o}' for o in items[:n_max]])
371 if len(items) > n_max: res += '...'
372 return res

~\.conda\envs\pytorch\lib\site-packages\fastai\core.py in <listcomp>(.0)
368 "Return the representation of the first n_max elements in items."
369 if items is None or len(items) == 0: return ''
--> 370 res = sep.join([f'{o}' for o in items[:n_max]])
371 if len(items) > n_max: res += '...'
372 return res

~\.conda\envs\pytorch\lib\site-packages\fastai\core.py in __repr__(self)
180 "Base item type in the fastai library."
181 def __init__(self, data:Any): self.data=self.obj=data
--> 182 def __repr__(self)->str: return f'{self.__class__.__name__} {str(self)}'
183 def show(self, ax:plt.Axes, **kwargs):
184 "Subclass this method if you want to customize the way this ItemBase is shown on ax."

... last 1 frames repeated, from the frame below ...

~\.conda\envs\pytorch\lib\site-packages\fastai\core.py in __repr__(self)
180 "Base item type in the fastai library."
181 def __init__(self, data:Any): self.data=self.obj=data
--> 182 def __repr__(self)->str: return f'{self.__class__.__name__} {str(self)}'
183 def show(self, ax:plt.Axes, **kwargs):
184 "Subclass this method if you want to customize the way this ItemBase is shown on ax."

RecursionError: maximum recursion depth exceeded while calling a Python object

You need to define __str__ for your ItemBase, otherwise it falls into this recursion loop.
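
(ItemBase.__repr__ calls str(self), and without a __str__ the str() call falls back to __repr__ again, hence the loop.) On a tuple-style item like the hypothetical MultiChannelImageTuple sketched earlier, any non-recursive __str__ is enough, for example:

from fastai.vision import *

class MultiChannelImageTuple(ItemBase):
    def __init__(self, img1, img2):
        self.img1, self.img2 = img1, img2
        self.obj, self.data = (img1, img2), [img1.data, img2.data]

    def __str__(self):
        # any representation that doesn't call repr(self) breaks the cycle
        return f'({self.img1.shape}, {self.img2.shape})'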

Hi @shein, please see my post over here where I am using the experimental MixedItemList feature from fastai to provide two images to my network. Unfortunately, MixedItemList isn’t exported/loaded correctly, so it appears it can’t be used for inference yet, but it may lead you to solve your issue (or mine, if you can figure out how to load it for inference).
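
For reference, the rough shape of that approach is below. This is an untested sketch: the column names are placeholders, and it assumes MixedItemList accepts a list of ItemLists plus a shared path and inner_df.

from fastai.vision import *
from fastai.data_block import MixedItemList

# note: ImageList's default open() assumes an RGB file, so 6-channel images
# would still need a subclass with a custom open(), as discussed below.
il_a = ImageList.from_df(proc_df, path=data_folder, cols='img_a')
il_b = ImageList.from_df(proc_df, path=data_folder, cols='img_b')

data = (MixedItemList([il_a, il_b], path=data_folder, inner_df=il_a.inner_df)
        .split_by_rand_pct(0.2)
        .label_from_df(cols='label')
        .databunch(bs=16))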

This worked! Thank you very much, apparently it was just the string representation of the items that was breaking. Next step: add the show methods, then figure out how to feed the data into a model :slight_smile:

I’ve updated the Gist for posterity in case anybody else has this issue

Hi Skumar, I had seen your post before and it was helpful for getting me as far as I got. But the problem is that I need to be able to run inference, since the whole point is to use the trained model for predictions.

I’ve been using the following to work with hyperspectral (400+ channels) images: gist. You should be able to use that after you modify how you open your images.
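
The key change is exactly that open step. As an illustrative sketch (not the linked gist; the class name, tifffile, and the [0, 1] scaling are assumptions), a custom ImageList can load an N-channel stack instead of a 3-channel RGB file:

import numpy as np
import torch
import tifffile
from fastai.vision import Image, ImageList

class HyperspectralImageList(ImageList):
    def open(self, fn):
        "Load an N-channel image as a float tensor wrapped in a fastai Image."
        arr = tifffile.imread(fn).astype(np.float32)  # assumed (channels, H, W); transpose if stored otherwise
        arr /= arr.max() if arr.max() > 0 else 1.0    # crude scaling to [0, 1]
        return Image(torch.from_numpy(arr))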

Can you describe your data a bit more? What do the two filepaths mean: are they different images of the same object, or something else, like filepaths to different image channels?

Just a quick update: I further modified the gist to include the functions needed to show_batch and normalize the data, in case anybody needs to see the whole data processing pipeline in the future.
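
Roughly, those two pieces amount to something like this (a sketch, not the gist code; it extends the hypothetical HyperspectralImageList from the earlier sketch, and the per-channel stats helper and first-three-channel preview are assumptions):

import math
import matplotlib.pyplot as plt
from fastai.vision import *

def channel_stats(data):
    "Estimate per-channel mean/std from one training batch, for data.normalize()."
    x, _ = data.one_batch(DatasetType.Train)
    return x.mean(dim=[0, 2, 3]), x.std(dim=[0, 2, 3])

class HyperspectralImageList(ImageList):
    def show_xys(self, xs, ys, imgsize=4, figsize=None, **kwargs):
        "show_batch helper: preview only the first 3 channels, since matplotlib can't draw more."
        rows = int(math.ceil(math.sqrt(len(xs))))
        fig, axs = plt.subplots(rows, rows, figsize=figsize or (imgsize*rows, imgsize*rows), squeeze=False)
        axs = axs.flatten()
        for x, y, ax in zip(xs, ys, axs):
            ax.imshow(x.data[:3].clamp(0, 1).permute(1, 2, 0).numpy())
            ax.set_title(str(y)); ax.axis('off')
        for ax in axs[len(xs):]: ax.axis('off')
        plt.tight_layout()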
