Hello from Brazil, this is my first post around here!

I’m investigating possible deep learning problems for my undergraduate thesis in computer vision. So far, satellite image segmentation seems fairly manageable for a beginner.

I also want my model to predict how the segmentation is likely to evolve in the future, so it can identify ahead of time which crops each area is going to be rotated to. My plan is to make this information available through my university for general competitive analysis in the industry.

So far I have identified one paper covering spatiotemporal sequences. However, it employs an architecture (ConvLSTM) that does not seem to be implemented in the fastai framework.

I plan on using Sentinel-2 data, which is freely available online.

Here are my questions:

Is this feasible at my skill level? The little deep learning I know came from this introductory course, which I managed to complete on time: https://www.udacity.com/facebook-pytorch-scholarship
I’m also watching ‘Deep Learning for Coders’, and my general Python skills are at a passable level.

What shortcomings should I be aware of before I start? Will I fail simply because of how big satellite imagery is?

Should I try another approach to this problem, such as a regular CNN instead of a ConvLSTM?

How many images do I need for my training and validation sets? Images sourced from Google all look the same; should I look somewhere else?

I’m open to suggestions.
Thank you very much for your time.

Sentinel-2 data can be a challenge because of its sheer volume. Up to 10 m spatial resolution every 5 days adds up quickly, depending on the extent of the region you are considering.
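To get a feel for the volume, here is a rough back-of-envelope estimate. The numbers are assumptions for illustration only: a hypothetical 100 km × 100 km study region, only the four 10 m bands, 16-bit pixels, and one acquisition every ~5 days for a year.

```python
# Back-of-envelope Sentinel-2 data volume estimate.
# All figures are illustrative assumptions, not product specs:
#   - 100 km x 100 km region at 10 m resolution
#   - 4 bands at 10 m, stored as 16-bit integers
#   - one scene every 5 days over a year (~73 scenes)

side_m = 100_000            # region side length in metres
res_m = 10                  # spatial resolution in metres
bands = 4                   # 10 m bands only
bytes_per_px = 2            # uint16
scenes_per_year = 365 // 5  # ~5-day revisit -> 73 scenes

px_per_band = (side_m // res_m) ** 2              # 10,000 x 10,000 pixels
bytes_per_scene = px_per_band * bands * bytes_per_px
gb_per_year = bytes_per_scene * scenes_per_year / 1e9

print(f"{gb_per_year:.0f} GB/year uncompressed")  # roughly 58 GB
```

Even under these conservative assumptions you end up with tens of gigabytes per year for a single region, which is why tiling and patch-based training are standard for satellite imagery.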

If this is too much for the time you have to finish your thesis, you can start with a simpler problem, such as coarser-resolution imagery (for example, VIIRS has daily images at 750 m, with some channels at 375 m).

Or you can start with regular segmentation (on monthly or yearly composites, for example) and, if time allows, move on to the spatiotemporal sequences. The idea is to make sure you have results along the way, so that even if you run out of time for the big problem, you can still write the thesis with the results you have.

I like your approach. If things go south I’ll still have something to show for my work.

I’m somewhat lost on the architecture I should use. Do I really need a ConvLSTM for spatiotemporal analysis? If so, how would I go about transfer learning, assuming I want to start from a pretrained network? My understanding is that since neither fastai nor PyTorch supports this out of the box, I’ll have to resort to some other framework, such as Keras. Is this correct?

You can implement anything in PyTorch, and thus in fastai. But again, if you feel lost, start with something you can do, like standard segmentation with the U-Net that fastai has ready to use. If you do that for every month using monthly composites, you can then think of easy ways to evaluate the changes over time, maybe with a simple tabular model, which is also ready to use in fastai.
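One hedged sketch of how the monthly masks could feed a tabular model: reduce each monthly segmentation mask to per-class pixel fractions, so every month becomes one row of features. The class labels, mask sizes, and random masks below are all hypothetical placeholders for whatever your U-Net actually predicts.

```python
import numpy as np

def class_fractions(mask, n_classes):
    """Fraction of pixels assigned to each crop class in one segmentation mask."""
    counts = np.bincount(mask.ravel(), minlength=n_classes)
    return counts / mask.size

# Hypothetical setup: 3 classes (0 = other, 1 = soy, 2 = maize),
# one predicted mask per monthly composite. Random masks stand in
# for real U-Net output here.
rng = np.random.default_rng(0)
monthly_masks = [rng.integers(0, 3, size=(128, 128)) for _ in range(12)]

# Each row is one month's class-fraction vector -> a tabular sample.
features = np.stack([class_fractions(m, 3) for m in monthly_masks])  # shape (12, 3)
```

From there you could append month-over-month deltas or the month index as extra columns and train a simple tabular model on top, instead of learning the temporal dynamics end to end.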

An end-to-end approach with a ConvLSTM may give better results, but you need a baseline to compare against. And by the time you have solved the simpler problems, you will have more knowledge to tackle architectures that require going beyond the standard fastai workflow.
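To show that the ConvLSTM itself is not a blocker in PyTorch, here is a minimal cell sketched from the standard formulation (all four gates computed by a single convolution over the concatenated input and hidden state). The channel counts and kernel size are illustrative choices, not values from any paper.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gates computed with a 2D convolution."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One conv produces all four gates (i, f, g, o) at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, state=None):
        if state is None:
            b, _, h, w = x.shape
            state = (x.new_zeros(b, self.hid_ch, h, w),
                     x.new_zeros(b, self.hid_ch, h, w))
        h_prev, c_prev = state
        gates = self.conv(torch.cat([x, h_prev], dim=1))
        i, f, g, o = gates.chunk(4, dim=1)
        c = f.sigmoid() * c_prev + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, (h, c)

# Step the cell over a short sequence of 4-band image patches.
cell = ConvLSTMCell(in_ch=4, hid_ch=16)
x = torch.randn(2, 4, 64, 64)  # batch of 2, 4 channels, 64x64 patch
state = None
for _ in range(3):
    h, state = cell(x, state)
```

Wrapping this cell in a loop over your monthly composites gives you the spatiotemporal model, and fastai's training loop can drive any `nn.Module` like this one.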