Just started the course a couple of weeks ago - I’m doing an internship in Ghana right now and finding myself with plenty of spare time. I completed Andrew Ng’s foundations course previously, but I’m really enjoying the top-down approach here, which is letting me apply deep learning to my own interests.
One of the malaria parasitology labs here would really benefit from a web app that could count uninfected red blood cells and those infected with Plasmodium parasites from images of blood smears. This is a problem that’s been solved many times with various approaches, but I thought it might be a nice project for learning deep learning while I complete the course.
Ideally the malaria researchers would be able to upload images from Android tablets connected to their microscope cameras to a web app that would report the proportion of infected to total blood cells. So far I’ve used the fastai library and transfer learning with the NLM malaria database (27,000 individually segmented cell images), and I have a model that predicts remarkably accurately whether cells are infected once I’ve segmented them from the researchers’ smear images.
Currently I’m performing the segmentation using classic image processing methods (adaptive thresholding, morphology, watershed transform), but I’m struggling to build a pipeline that works consistently across all images, especially where many cells overlap.
I was wondering whether it would be possible to take a deep learning approach to segmentation using the NLM database? From what I’ve read this would be an object detection task, and probably a bit ambitious at this stage of my learning? Any suggestions would be welcome. Datasets exist that have blood cells identified on images of microscopic smears, but they use bounding boxes rather than pixel-by-pixel segmentation - any pointers on how I could use these to identify the blood cells in my smears?
Thanks in advance!