Hi, everyone calls me “Ducky”.
I first got into ML because I wanted to catch The Next Big Thing, and got more serious about it when I realized its dangerous potential.
I am very interested in Defense Against The Dark Arts in ML, e.g. adversarial images, but have a little bit of despair that the Forces of (Sometimes Unintentional) Evil are too strong.
The ML area that I am most intellectually curious about is word embeddings. There have been (at least) three different papers on how to merge two embeddings made from monolingual corpora into a bilingual dictionary (here, here, and here). I think making monolingual polysemantic embeddings (like they do in this paper) ought to work even better for making a bilingual dictionary; I also have a hunch that non-negative sparse embedding would improve results.
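For anyone curious what "merging two monolingual embeddings into a bilingual dictionary" looks like mechanically, here's a minimal sketch of the common linear-mapping approach: fit a matrix that maps a small seed dictionary of source-language vectors onto their target-language translations, then translate new words by mapping them and taking the nearest target vector. This is toy data (random vectors standing in for real word2vec-style embeddings), not code from any of the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two monolingual embedding matrices (rows = words).
# Real ones would come from e.g. word2vec trained separately per language.
dim = 4
src = rng.normal(size=(6, dim))          # "source-language" word vectors
true_map = rng.normal(size=(dim, dim))   # hidden ground-truth relation
tgt = src @ true_map                     # "target-language" word vectors

# Seed dictionary: pretend the first 5 word pairs are known translations,
# and fit a linear map W from source space to target space.
W, *_ = np.linalg.lstsq(src[:5], tgt[:5], rcond=None)

# Translate the held-out 6th word: map it, then find the nearest
# target vector under cosine similarity.
mapped = src[5] @ W
sims = (tgt @ mapped) / (np.linalg.norm(tgt, axis=1) * np.linalg.norm(mapped))
best = int(np.argmax(sims))
print(best)  # should recover index 5, the true translation
```

The interesting research questions start where this toy breaks down: real embedding spaces aren't exactly linearly related, and polysemous words (one vector for several senses) blur the mapping, which is why sense-split or sparse embeddings might help.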
My most interesting tech semi-hobby has been polygon-heavy maps. Outside of tech, I have a side interest in writing systems (which I blogged about for a while) and have done some glyph-based artwork, including the most recent Unicode Conference T-shirt. I have a useless superpower of being able to identify most writing systems given a paragraph of text.
I live in Vancouver, BC, and would be interested in finding other learners who are local or even semi-local. (Bellingham? Seattle? Whistler?)
I watched part1v1 lectures mostly by myself (which was okay), watched part2v1 with a small group of former strangers (which was awesome), and watched part1v2 by myself in a hurry. I got laid off on 15 Feb, so I thought I'd have plenty of time to do the part1v2 machine problems… but then a refugee family I'm helping to sponsor arrived suddenly, and… well… complications. I told email@example.com that things had changed and that I could not promise to do the part1v2 homework before part2v2 started, but they let me in anyway. ¯\_(ツ)_/¯ Thanks Jeremy and Rachel and whoever decided!
Edit: I forgot to mention, I’m a software developer by trade, mostly working on backend Python code recently.