Got it. I'll edit my post to something more appropriate. Thanks for the reminder.
I've been making iOS apps with Swift since it first came out 5 years ago, and doing ML on iOS devices for the past 3 years. Swift has gone through many changes in that time and is only now becoming a stable language. I expect Swift for TF to be unstable for quite a while to come.
It's certainly possible to use Swift on the backend already and there are a number of frameworks for doing so (Vapor, Kitura). But it's still very much early days for Swift outside the macOS / iOS ecosystem.
I wouldn't expect Swift for TF to be usable for practical applications of ML on mobile / embedded devices for a few more years. Even TF Lite is only just now becoming a realistic choice and it's not even the best choice on iOS.
The idea of using just a single tool for the entire ML pipeline (i.e. Swift for TF), front-end and backend, is a nice dream but not even close to a reality right now.
If you're interested in the possibilities offered by Swift for TF, then by all means get involved. But realize that stuff is going to break a lot. Don't get into Swift for TF at this point if your goal is to get actual data science / machine learning work done.
Thank you for your answer! Very useful perspective. I think even the fact that there's an inkling of a possibility of a somewhat less fragmented research-to-deployment pipeline like the one you mention, compounded with the obvious benefits of the language, makes this a very exciting project.
P.S. I'm also enjoying your blog posts very much!
My one worry is that Swift has changed a lot over the years. I've got code written in Swift versions 1-3 that no longer runs and would require a massive rewrite on my part. I think we're mostly past that point now and the language will no longer see such big changes, but on the other hand a large portion of ML practitioners still haven't moved on from Python 2. I'd hate to see that happen too in the Swift for TF ecosystem, where some developers are stuck on Swift version N, some on version N+1, some on version N+2, and so on.
Here is a great blog post from my online friend and ML / graphics programming expert Brad Larson about using Swift for TensorFlow to train AlexNet: https://www.perceptuallabs.com/blog/2019/2/27/training-alexnet-using-swift-for-tensorflow
Hi
Thanks for that insight.
I would like to add that, as I understand it, this version of Swift relies on Xcode 10, which is only available on later versions of the OS X operating system; correct me if I am wrong.
So on my 2009 machine I can only get Xcode 7, since Xcode 10 requires a version of OS X later than 10.11.6, and those versions are incompatible with the 2009 hardware. Therefore I would have to use my newer Ubuntu 16.04 box; would I need to go to 18.04 or higher for Swift?
This raises the question: is there a specification of the architecture we would need for the upcoming course? If so, perhaps it could be published for those who want to follow along with the notebooks etc. I am sure it is in hand, so please forgive my panic.
It seems there are some ready-built packages for "Swift for TensorFlow", but they are only for >= OS X 10.13.5 and Ubuntu 18.04. Further, looking through the Swift GitHub README, it says there is no GPU support and that Ubuntu 14.04 or 16.04 is required to build the above image.
I guess I could build the image, but I have no idea how the dependencies would affect my environment. It's been a long time since I have done this sort of thing. From what I remember, a fresh install of the operating system and source is required, especially if the machine is shared with others. Perhaps this is best done in a virtual environment, for which I don't really have the resources.
My wish is to avoid Xcode as an IDE. Wasn't there a way to use notebooks? https://twitter.com/Pranjal_Yadav/status/1104819086050107394
If your old Mac has an SSD, try this: http://dosdude1.com/mojave/
But back up your data first.
I haven't looked in detail at Swift for TF yet, but I wouldn't think Xcode is the preferred tool for it. Using a notebook environment such as Google Colab seems more suitable.
That's a great thing, I would say. I started learning Swift a couple of years ago but wasn't able to figure out what to do with it except iOS/macOS development.
It's a great idea to include these topics in the course. Looking forward to starting to use Swift in high-performance computing domains. Does anybody want to start a study group?
By the way, I believe objc.io is a great resource for learning about intermediate-to-advanced Swift topics. Their Advanced Swift and Functional Swift books are great sources of knowledge.
Today I read deeper into the Swift language specification, and I would argue that therein lies a hidden opportunity for truly modular neural networks.
I used to study CS with a major in concurrency verification, especially with respect to programming language support and static verification at the compiler level.
An often overlooked pain point in programming language design is the deep integration of the type system into the way the language handles composition, i.e. inheritance. I think Java's incoherent collections, which ultimately led to the introduction of raw types to enable broken generics, remain the prime example of how badly that can turn out over time.
Modern functional programming languages support traits and mixins, but doing them safely and sanely in a statically typed language that yields high performance isn't exactly an easy feat. Scala did a reasonable job but still suffers from sluggish performance due to the JVM legacy.
Swift, however, nails performance, type safety, and protocol-based mixins. That already allows a lot of elegant composition, but it also allows easier concurrency due to simpler type checking against protocols that specify the required constraints.
For neural networks, the big deal is easy composition through programming against multiple interfaces, so as to plug together implementing traits in a type-safe way.
I can't wait to see trait-based programming converge with language-level support for deep learning in a statically typed functional language. It would change AI system design in the most profound way, allowing complex AI systems to be composed from smaller modules in a language that takes care of the rest during compilation.
TensorFlow tries to do that at the framework level, but boy is that a lot of overhead for something that should be done in the programming language.
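To make the composition point a bit more concrete, here is a minimal sketch of protocol-based composition in plain Swift. The `Layer`, `ReLU`, `Scale`, and `Sequential` names are hypothetical illustrations for this post, not the actual Swift for TensorFlow API:

```swift
// A tiny layer abstraction expressed as a protocol.
protocol Layer {
    func forward(_ input: [Float]) -> [Float]
}

// Two concrete layers conforming to the protocol.
struct ReLU: Layer {
    func forward(_ input: [Float]) -> [Float] {
        return input.map { max(0, $0) }
    }
}

struct Scale: Layer {
    var factor: Float
    func forward(_ input: [Float]) -> [Float] {
        return input.map { $0 * factor }
    }
}

// The composition of two layers is itself a Layer,
// and the compiler checks the whole pipeline's types.
struct Sequential<First: Layer, Second: Layer>: Layer {
    var first: First
    var second: Second
    func forward(_ input: [Float]) -> [Float] {
        return second.forward(first.forward(input))
    }
}

let model = Sequential(first: Scale(factor: 2), second: ReLU())
print(model.forward([-1, 0.5, 3]))  // [0.0, 1.0, 6.0]
```

Because `Sequential` itself conforms to `Layer`, compositions nest arbitrarily, which is the kind of modularity I mean above.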
Great news!
It is always exciting to learn a new language and get one more superpower in the developer's toolbox. Are there any plans regarding TensorFlow Lite on Swift?
https://groups.google.com/a/tensorflow.org/forum/#!topic/swift/s48m3F93wro
That said, our priorities are in the research and education spaces (e.g. our collaboration with fast.ai) for now. We want to focus the whole team on one domain to make it really good. (We don't want to spread ourselves too thin.) In due time, we absolutely will expand our scope to include running on mobile and IoT devices.
Right now you can use TF Lite from Swift but you have to write a simple C wrapper, since TF Lite is C++ and Swift can't talk to C++ directly. (But if you're doing ML on iOS or macOS, then TF Lite isn't necessarily the best choice.)
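In case it helps, here is a rough sketch of what that wrapper approach can look like from the Swift side. It assumes you have written a small C shim (exposed to Swift via a bridging header or module map) whose functions internally call into the TF Lite C++ code; the `tflite_create` / `tflite_invoke` / `tflite_destroy` names below are placeholders I made up, not a real API:

```swift
// Hypothetical Swift class wrapping an assumed C interface such as:
//
//   void* tflite_create(const char* model_path);
//   void  tflite_invoke(void* handle,
//                       const float* input, int input_len,
//                       float* output, int output_len);
//   void  tflite_destroy(void* handle);
//
// These declarations are illustrative only.
final class TFLiteModel {
    private let handle: UnsafeMutableRawPointer

    init?(modelPath: String) {
        // The C shim would load the model and set up the interpreter in C++.
        guard let h = tflite_create(modelPath) else { return nil }
        handle = h
    }

    func predict(_ input: [Float], outputSize: Int) -> [Float] {
        var output = [Float](repeating: 0, count: outputSize)
        input.withUnsafeBufferPointer { inBuf in
            output.withUnsafeMutableBufferPointer { outBuf in
                tflite_invoke(handle,
                              inBuf.baseAddress, Int32(input.count),
                              outBuf.baseAddress, Int32(outputSize))
            }
        }
        return output
    }

    deinit { tflite_destroy(handle) }
}
```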
Can you share with us the best choice for ML on iOS or macOS?
Apple's machine learning toolkit, perhaps… https://developer.apple.com/documentation/coreml
I guess we're getting a little off-topic here, but it depends on your model and the device you're targeting.
- Core ML is the best default choice because it allows you to use the Neural Engine in modern iPhones, which gives better performance than the GPU. It's usually possible to convert a frozen TF model into a Core ML model.
- On older devices, using a lower-level library such as MPS (Metal Performance Shaders) is often necessary for the best performance. This requires writing a lot of code by hand but you get complete control over the GPU. Often this is faster than using Core ML (except on the latest devices, which have the Neural Engine).
- At this point, GPU support for TF Lite is still in prerelease. I would only use TF Lite if your model was made using TensorFlow and Core ML does not support one or more operations from your graph. Both Core ML and TF Lite only support a limited number of operations, but it's possible TF Lite can do some operation that Core ML doesn't have.
- Roll your own. I have done this several times for things like LSTMs. You don't have the overhead of a library such as TF Lite and you get full control. Bonus: you get to write matrix math! (See the sketch after this list.)
- ML Kit. For things such as text detection, it's easiest to use one of the built-in models from Google's ML Kit.
So TF Lite is definitely an option, but it doesn't have any major benefits over the native solutions (Core ML / MPS).
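For the "roll your own" route mentioned above, here is a minimal sketch of what the matrix math can look like using Apple's Accelerate framework (Apple platforms only). The `denseForward` function and the toy numbers are just an illustration, not code from any of the libraries discussed here:

```swift
import Accelerate

/// A fully connected layer as a plain matrix-vector multiply plus bias:
/// output = weights (rows x cols) * input (cols) + bias (rows).
func denseForward(weights: [Float], bias: [Float], input: [Float],
                  rows: Int, cols: Int) -> [Float] {
    precondition(weights.count == rows * cols && input.count == cols && bias.count == rows)
    var output = [Float](repeating: 0, count: rows)
    // Matrix multiply: treat the input vector as a cols x 1 matrix.
    vDSP_mmul(weights, 1, input, 1, &output, 1,
              vDSP_Length(rows), 1, vDSP_Length(cols))
    // Add the bias element-wise.
    return zip(output, bias).map(+)
}

// Example: 2 outputs from 3 inputs, row-major weights.
let w: [Float] = [1, 0, 1,
                  0, 1, 0]
let y = denseForward(weights: w, bias: [0.5, -0.5], input: [1, 2, 3], rows: 2, cols: 3)
print(y)  // [4.5, 1.5]
```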
I really wish it were easier to work with Colab, but hey, I completed part 1 on it. It's tricky sometimes, but for a frugal miser like me living in the third world, it's worth it.
If Swift turns out to be powerful enough for models on embedded systems, I'm definitely switching from Python.
I think you missed my point; maybe that's my fault. Nobody is complaining about Colab here. I was skeptical about the ultimate goal behind this project, as I didn't fully understand it, but Jeremy clarified it well with these words:
That said, I love prototyping new ideas and I like to do it as fast as I can, so I am going to stay with Python for the next few years. Tomorrow never knows.