My son’s favorite activity since he started high school a few months ago is his after-school board game club, and his favorite game there is the Dungeons and Dragons campaign they play. I sometimes get to hear about funny and heroic acts or complex strategies when he returns home late at night. Although the storytelling is engaging, I don’t follow much of the subtle character detail: the races, their personalities, and their abilities. Until two weeks ago I couldn’t even tell an Orc from a Goblin most of the time (Orcs are bigger, and Goblins are sometimes green…), or a Dragonborn from a Tiefling; I didn’t even know what a Tiefling was.
The weekend after the second fastai lecture, I became good at telling Tieflings apart from the other races. Dragonborn, at advanced levels, can grow wings and start to look a lot like Dragons, but they are much smaller: they stand on two legs, wear human-like clothes, and use weapons the way humans do. Dragons, on the other hand, are huge and very wise; think of the dragon in The Hobbit. To make things more complicated, Halflings still really look like Humans to me, but they tend to be merry and small, and are reasonably easy to tell apart from Dwarves. Since all of these creatures were invented by creative minds, their exact characteristics vary a lot more in the wild than those of well-defined cat or dog breeds, so the weekend project became an adventure involving data collection, hacking minor tools (sha1sum, FileDeleter mods), and help from my expert son. Here is a neural net that is better than I am at telling a D&D character’s race from an image:
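For anyone curious about the “minor tools” part: one thing sha1sum is handy for is weeding out exact-duplicate downloads before training. Here is a rough sketch of that idea in Python — the function name and folder layout are my own inventions, not part of any fastai tooling:

```python
# Hypothetical sketch: remove exact-duplicate downloaded images by hashing
# file contents, which is essentially what running sha1sum by hand does.
import hashlib
from pathlib import Path

def dedupe_images(folder):
    """Delete files whose SHA-1 digest matches an earlier file; return how many were removed."""
    seen = set()
    removed = 0
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha1(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()   # exact duplicate of a file we already kept
            removed += 1
        else:
            seen.add(digest)
    return removed
```

This only catches byte-identical copies; near-duplicates (resized or re-encoded images) still need the manual FileDeleter pass.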
Lots of the feedback I got on the satellite app that recognizes the city from a satellite image revolved around the same question: “what is it looking at in the image of my neighborhood that makes it look like it is from my country?”
I tried to implement a couple of interpretation methods I found linked on the forums (see further discussion in this thread).
The results are a little mixed, but I went ahead and added the feature to the app; let me know what you think: yourcityfrom.space
Example output - you can see here that it avoids the parks and is very interested in the shape of the blocks:
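In case it helps anyone trying the same thing, one of the simpler interpretation methods in that family is occlusion sensitivity: slide a grey patch over the image and record how much the model’s confidence drops at each position. This is a minimal sketch of that idea (not necessarily the exact method I used in the app) — the `predict` callable stands in for a trained model returning a confidence score:

```python
# Occlusion-sensitivity sketch: regions whose occlusion causes a big
# confidence drop are the regions the model relies on.
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8, fill=0.5):
    """Return a heatmap of confidence drop when each region is occluded."""
    h, w = image.shape[:2]
    base = predict(image)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # grey out one region
            heat[i, j] = base - predict(occluded)      # big drop = important region
    return heat
```

It is slow compared to gradient-based methods (one forward pass per patch position), but it needs no access to the model internals, which makes it easy to bolt onto an existing app.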
I have been successfully converting 2D LIDAR data into images. I am using a rotating LIDAR inside pipes to look for defects in the pipe wall. The LIDAR provides one measurement every 1/4 of a degree over 270 degrees, so each “row” is 1080 points of data, and each point is the distance at that angle.
A single scan of the data looks like this:
I am stacking 200 scans on top of each other, normalizing, and getting out data that looks like this:
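The stacking step can be sketched like this, assuming each scan is already a 1080-point array of distances (one reading per quarter degree over 270 degrees) — the function name and the min-max normalization choice are just illustrations of the approach described above:

```python
# Stack consecutive 1080-point LIDAR scans into a 2D greyscale-style image:
# each scan becomes one row, and values are min-max normalized to [0, 1].
import numpy as np

def scans_to_image(scans):
    """Stack a list of equal-length scans into a normalized 2D image."""
    img = np.stack(scans)                 # shape: (n_scans, points_per_scan)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)  # normalize distances to [0, 1]
```

With 200 scans of 1080 points this yields a 200×1080 image, ready to save as a PNG and feed to an image classifier.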
Yes! The validation set came from a separate set of recordings conducted on a different date with the same experiment. The plants were kept in sealed chambers too, so there is very little likelihood that the ones labeled as infected are actually non-infected.
I’ll do a write-up as soon as possible and share it. It would be interesting to get some feedback.
@bjcmit I moved the pre-processing of the audio (the steps that download the files, generate the spectrograms, and store them in Google Drive) into a separate notebook; here is a blog post that goes through everything: link. Sorry, I should have done/shared this earlier.
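For a rough idea of the spectrogram step without opening the notebook, here is a minimal sketch using scipy (which may not be exactly what the notebook uses); the download and Google Drive parts are left out:

```python
# Turn a 1D audio signal into a log-magnitude spectrogram (frequency x time)
# suitable for saving as an image and feeding to an image classifier.
import numpy as np
from scipy.signal import spectrogram

def audio_to_logspec(samples, sample_rate=22050):
    """Compute a log-scaled spectrogram from raw audio samples."""
    freqs, times, spec = spectrogram(samples, fs=sample_rate, nperseg=512)
    return np.log1p(spec)  # log-compress the dynamic range
```

The log compression matters: raw power spectrograms are dominated by a few loud bins, and the classifier sees much more structure after compression.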
It’s currently trained on a very small dataset I collected from eight trees in the local park. As such, it only knows about London plane, Sweet chestnut, European oak, and Field maple, and its accuracy is ~70% (see update below).
Importantly, it also uploads the submitted photo to AWS S3 and then asks you whether the classification was correct. Based on your feedback it labels the uploaded image, which means the more people use it, the better it will get. It doesn’t yet retrain itself automatically, though.
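The feedback-labeling logic is simpler than it sounds. A sketch of the idea, with a plain dict standing in for the real storage (the actual app uploads to S3, e.g. via boto3’s `upload_file`, and the function and field names here are made up for illustration):

```python
# Record the user's yes/no answer as a training label for an uploaded photo.
def record_feedback(store, photo_key, predicted, was_correct, corrected=None):
    """Label an uploaded photo based on user feedback, for later retraining."""
    # If the user confirms, the prediction becomes the label; if they say no,
    # use their correction (or None when they don't supply one).
    label = predicted if was_correct else corrected
    store[photo_key] = {"predicted": predicted, "label": label}
    return store[photo_key]
```

Retraining then just means periodically pulling all entries with a non-empty label and adding them to the dataset, which is the part that isn’t automated yet.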