In an effort to actually absorb the class material, I’m interested in applying all this DL to a totally new project and trying to produce something worthy of a sweet blog post, or maybe even a publication. I imagine others are too, so this is a thread to start forming those groups. I don’t know what others want to do, but personally, I’m a musician and would love to work with audio data. I think audio is understudied compared to vision/NLP, and my love of music and audio would give me a bit of special knowledge here. So if any of the following project ideas interest you, just respond and let me know!
Making awesome music generation models
-> CycleGANs for transforming songs from one genre to another (e.g. create a reggae version of any song)
-> Something to generate lifelike piano pieces from sketches or from nothing
-> Build a text-to-singing model. Can we make it sound like Adele sang a Beatles song?
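To make the CycleGAN idea a bit more concrete: the trick that lets it work on *unpaired* data (we don’t have the same song in two genres) is the cycle-consistency loss, which says translating A -> B -> A should give you back what you started with. Here’s a toy numpy sketch of just that loss term. The “generators” are invertible linear maps I made up for illustration, standing in for real networks operating on spectrogram frames; everything here is hypothetical scaffolding, not an actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "generators": linear maps on 16-dim "spectrogram frames".
# In a real CycleGAN these would be learned conv nets.
W_ab = rng.normal(size=(16, 16))   # genre A -> genre B
W_ba = np.linalg.inv(W_ab)         # genre B -> genre A (exact inverse, so the cycle is perfect)

def G(x):  # translate A -> B
    return x @ W_ab

def F(x):  # translate B -> A
    return x @ W_ba

x = rng.normal(size=(8, 16))       # a batch of "genre A" frames

# Cycle-consistency loss: A -> B -> A should reconstruct the original.
cycle_loss = np.mean(np.abs(F(G(x)) - x))
print(cycle_loss)  # ~0, since F is G's exact inverse here
```

During actual training, this loss is minimized jointly with the two adversarial losses, so the generators are pushed toward genre-style changes that are still reversible.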
Automatically single out instrument parts in a song
-> I’d love to be able to pull just the vocals out of any tune. Or just the guitar part, etc. DJs would love this!
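For anyone curious what source separation even looks like in code: the classic starting point is masking in the frequency domain — transform the mix, keep the bins belonging to the part you want, transform back. Here’s a deliberately tiny numpy sketch where the “vocal” and “guitar” are just sine tones at different pitches, so a hand-made frequency mask can separate them perfectly. Real music obviously overlaps in frequency, which is exactly where a learned model would come in (predicting the mask instead of hard-coding it):

```python
import numpy as np

sr = 8000                               # sample rate (Hz)
t = np.arange(sr) / sr                  # one second of audio

vocal = np.sin(2 * np.pi * 440 * t)     # toy "vocal": 440 Hz tone
guitar = np.sin(2 * np.pi * 880 * t)    # toy "guitar": 880 Hz tone
mix = vocal + guitar

# Frequency-domain masking: keep only the bins near the vocal's pitch.
spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
mask = np.abs(freqs - 440) < 50         # hand-made mask around 440 Hz

recovered = np.fft.irfft(spec * mask, n=len(mix))

# The recovered signal should line up almost perfectly with the true vocal.
corr = np.corrcoef(recovered, vocal)[0, 1]
print(corr)  # ~1.0
```

A deep-learning version of this project would replace the hand-made `mask` with a network that predicts a mask per time-frequency bin of a spectrogram — same pipeline, learned mask.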
If you have your own project ideas, you should also post those here.