In the last lecture, it was mentioned that Swift’s (Deep) Machine Learning ecosystem is still pretty small and that it’s worth making an effort to contribute new tools and libraries. Where do you think is the best place to start?
Is it a good idea to start with snippets for training models and datasets? I have some knowledge of Swift (mostly in relation to iOS development) and lots of experience with Python, but I’m not sure how to start building general-purpose stuff in Swift: read files, serialize data onto disk, process tabular or image data, etc. Also, I’ve never tried to use Swift in a Unix environment.
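(For concreteness, everyday tasks like these map onto Foundation APIs; here is a minimal sketch of reading/writing and serializing data, where the `Sample` struct and the `/tmp` path are just made-up examples:)

```swift
import Foundation

// A simple record to serialize; conforming to Codable gives us
// JSON encoding and decoding for free.
struct Sample: Codable {
    let label: String
    let values: [Double]
}

let sample = Sample(label: "cat", values: [0.1, 0.9])

// Serialize to disk as JSON (the path is just an example).
let url = URL(fileURLWithPath: "/tmp/sample.json")
let data = try JSONEncoder().encode(sample)
try data.write(to: url)

// Read the file back and decode it into a Swift value.
let loaded = try JSONDecoder().decode(Sample.self, from: Data(contentsOf: url))
print(loaded.label)  // cat
```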
Maybe someone has a Swift-based project where contributions are needed? I’m not sure I’m capable of committing anything useful to S4TF or Harebrain, but I would definitely like to take part in building a Machine Learning ecosystem for this great language.
I think the models are a good place to get going and make sure your setup is working properly.
> read files, serialize data onto disk, process tabular or image data, etc.
I think this is a lot of what the collaboration with the Google Brain team is all about. Short term, you can reuse your Python skills by importing Python libraries/code into your Swift scripts. Long term, these paradigms are going to be re-implemented in Swift, but hopefully with new and improved patterns!
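(To illustrate, this is roughly what that short-term reuse looks like, assuming a Swift for TensorFlow toolchain with its Python interoperability layer and NumPy installed on the system:)

```swift
import Python  // Swift for TensorFlow's Python interoperability module

// Import a familiar Python library straight into Swift.
let np = Python.import("numpy")

// Call NumPy much as you would from Python; results come back as
// PythonObject values that convert to native Swift types.
let a = np.array([1.0, 2.0, 3.0])
let total = Double(np.sum(a))!
print(total)  // 6.0
```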
> never tried to use Swift in a Unix environment.
I like the command line, but I’m biased! Having the compiler catch things makes debugging much easier. But honestly, if you’re willing to wait a little, one of the first things the team is going to figure out is a clean, simple way to get a working Jupyter setup that is easy to deploy. From there, it should be off to the races!
Ok, got it! I guess you’re right, and we can just go with `import Python` for now to bridge familiar tools into a new ecosystem.
I should probably try to deploy the Swift stack on my machine before I can do anything useful…