What kind of performance issues do we see when importing Python into Swift vs directly running the code in Python? And will S4TF eventually have enough functionality to bypass the Python import, or is that the goal?
A detail on that which @pcuenq helped me learn yesterday: the repo has to be arranged as a Swift package; if it's just some random Swift file then it won't work. Also, if there are no releases, instead of using a version number with the `from:` keyword, you can use a branch name, e.g. `master`, or even a git commit ID.
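For illustration, here is a minimal sketch of the dependency requirements SwiftPM accepts (the package names, URLs, and commit hash are placeholders, not real repos); swift-jupyter's `%install` directive takes the same `.package(...)` syntax:

```swift
// swift-tools-version:4.2
import PackageDescription

let package = Package(
    name: "MyNotebookDeps",
    dependencies: [
        // A tagged release, using the from: keyword:
        .package(url: "https://github.com/someone/SomePackage", from: "1.0.0"),
        // No releases? Track a branch instead:
        .package(url: "https://github.com/someone/OtherPackage", .branch("master")),
        // ...or pin an exact git commit:
        .package(url: "https://github.com/someone/ThirdPackage", .revision("<commit-hash>")),
    ],
    targets: []
)
```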
Nitty-gritty answer: we defined the `time` function in such a way that it is compiled without knowing what the `f` function is. Because the compiler doesn't know what `f` is, it can't optimize it away. There are ways to ask the compiler to "inline" the `f` function, which would allow it to optimize things away (including cross-module optimizations). For an example, check out the generated assembly of `reduce(0, +)` on Compiler Explorer and the implementation of `reduce` in the standard library (in particular the `@inlinable` attribute).
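Here's a rough sketch of the idea (the `time` and `square` functions are illustrative, not the exact notebook code): the closure passed to `time` is opaque to the compiler, whereas `@inlinable` exposes a function's body so callers can optimize through it.

```swift
import Dispatch

// Illustrative timing helper: `f` is an opaque closure here, so the compiler
// can't see its body while compiling `time` and can't optimize the work away.
func time(_ f: () -> ()) {
    let start = DispatchTime.now().uptimeNanoseconds
    f()
    let end = DispatchTime.now().uptimeNanoseconds
    print("elapsed: \(Double(end - start) / 1_000_000) ms")
}

// @inlinable exposes the body across module boundaries, letting the caller's
// optimizer inline it (this is what the standard library does for `reduce`).
@inlinable
public func square(_ x: Int) -> Int { x * x }

time { _ = square(21) }
```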
Ok, so Shift+Tab is still in the works, what about ? and ??
Nothing that helps you get docs exists in swift-jupyter now. But it is an area that I want to improve. Not sure exactly how it’ll work, because I haven’t thought much about it yet.
Maybe I’m not getting something obvious, but why was it “Unsafe” just now?
Doesn’t check the bounds of the array - you could end up referencing random memory.
It doesn’t check for array bounds and such. I suspect you’re on your own with respect to memory allocation/deallocation, etc., as well.
Swift is a memory-safe language with automatic reference counting. By default, you don’t ever need to think about allocating/freeing memory.
Manual memory management and pointer APIs are unsafe.
The names of pointer types start with `Unsafe`: `UnsafePointer`, `UnsafeMutablePointer`, etc.
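A minimal sketch of what working with those APIs looks like, and why they carry the label (this is just an illustration, not code from the lesson):

```swift
// With the unsafe pointer APIs you take over allocation, initialization, and
// deallocation yourself, and nothing bounds-checks the reads and writes.
let buffer = UnsafeMutablePointer<Int>.allocate(capacity: 4)
buffer.initialize(repeating: 0, count: 4)
buffer[2] = 42                  // fine: within the allocated capacity
// buffer[10] = 1               // compiles, but writes to random memory
print(buffer[2])
buffer.deinitialize(count: 4)
buffer.deallocate()             // forget this and the allocation leaks
```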
When you call Python APIs in Swift, it goes through the Python runtime and should have similar efficiency to interpreting the Python directly. It still has the limitations of the GIL, for example.
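A quick sketch of what that looks like (assuming the `Python` module that ships with the Swift for TensorFlow toolchain); every call here is dispatched to the CPython runtime, so it runs at normal interpreted-Python speed and under the GIL:

```swift
import Python

// Import NumPy through the Python interop layer.
let np = Python.import("numpy")

// These calls are forwarded to the Python runtime; NumPy does the work.
let a = np.random.rand(1000, 1000)
let b = np.matmul(a, a)
print(b.shape)
```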
We are definitely interested in expanding the numerics and data science library ecosystem in Swift! We hope that libraries for data loading, plotting, unzipping, etc. will eventually replace the Python APIs and provide a Swifty feel and better performance. We’d love to create those APIs together with the community!
OK, I would say that’s one of my favorite parts of Jupyter Notebook. Usually `??` will let you see the source code of the function or class you are `??`-ing, and Shift+Tab will display different information depending on how many times you press it. Pretty great functionality: it lets you stay in your notebook, keeps you from having to look up help docs, and you know that the doc you are looking at is for the version of the code you are running.
Vapor is great. And the SwiftNIO library (that Vapor uses) is fantastic.
Any way to make sure that matmul or other functions correctly use shared memory on the GPU? For example, using tiling to make sure you aren’t constantly busting the cache of shared memory on the GPU?
- will Swift be supported by Kaggle?
- is our way of writing code using fastai going to change with Swift for fastai in place?
are any scikit-learn-for-Swift projects in the works?
How many shipping ship shipping ships could XLA ship if XLA could ship ship shipping ship shipping shipping ships?
This is a great question! Currently, one can’t write GPU kernels directly in Swift and all our TensorFlow operations go through the TensorFlow runtime. Teams at Google Brain are actively developing the next-generation compiler technologies such as MLIR that will enable us to write kernels in Swift directly even in a Colab!
Check out our Google Summer of Code ideas and follow along on the swift@tensorflow.org mailing list.
Maybe 42?
how is Swift for probabilistic programming in comparison to, say, Julia?