After six weeks of the fast.ai DL1 course, I'd like to extend my thanks to the community.
It would be nice to hear about other 'real world' applications on the forum. Detecting cancer is amazing, but there is so much lower-end inefficiency waiting to be tackled. Here is an example.
In the UK we have active public reporting of street cleanliness issues. I took 2,000 publicly reported photos from a small part of London, each labelled as dog mess, trash, or graffiti, the three categories that make up the bulk of reports. Classifying these reports, which amount to several hundred thousand a year, takes real human effort paid for by taxes.
I found few examples of people attempting similar tasks with ML/DL: a university project that reached a 75% success rate classifying recyclable materials imaged against a whiteboard, sidewalk-scanning cameras mounted on street sweepers, and smart recycling bins.
On this real-world task, the fast.ai library with a resnext101 model achieved 95% accuracy across the three categories. Allowing for the questionable public labelling in the test set, the true figure is perhaps closer to 97%. From idea to result took about two hours, most of that spent gathering the images, and it needed only the material from the first three lessons. No doubt spending more time on this could yield better results.
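For anyone curious, the whole thing is little more than the lesson 1-3 recipe. Here is a minimal sketch of what I mean, assuming the images are sorted into train/valid folders by label; the path, size, and hyperparameters are illustrative, not my exact settings, and the API shown is the 2018-era fastai library used in the course:

```python
from fastai.conv_learner import *  # 2018-era fastai (v0.7), as taught in DL1

PATH = 'data/street_reports/'  # hypothetical layout: train/ and valid/, each with
sz = 224                       # dog_mess/, trash/, graffiti/ subfolders
arch = resnext101              # pretrained ResNeXt backbone from the course

# Lesson 1: build the data object and a pretrained learner, then a short fit
# on precomputed activations (only the new head is trained at this stage)
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 3)

# Lessons 2-3: unfreeze and fine-tune the whole network with
# differential learning rates (smaller for earlier layers)
learn.precompute = False
learn.unfreeze()
learn.fit([1e-4, 1e-3, 1e-2], 3, cycle_len=1)
```

Nothing clever going on: a pretrained backbone, standard augmentation, and a couple of fit calls get you most of the way on a three-class problem like this.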
All of this is only possible because people like Jeremy are removing barriers to entry with libraries like fastai, putting new capability into practical hands. Kudos!
I would love to hear how you've applied DL to other non-cutting-edge applications in the real world.