Lesson 14 (2019) discussion and wiki

How widely do Objective-C libraries work on Ubuntu (and not rely on proprietary Apple APIs)?

Swift is very exciting for me, because I remember trying to contribute to PyTorch and getting overwhelmed by the fact that it dropped fairly quickly into C code in order to add anything. Swift, being a more modern language, is one it would be much easier for me to contribute to.


Please do contribute! We’d love to have your help building S4TF. :slight_smile:


+1! Please make sure to join the discussion on the Swift for TensorFlow mailing list, as well. :slightly_smiling_face:


Getting up to speed on the project and swift now, but I definitely will be contributing soon. Just joined the mailing list.

Why Swift for TensorFlow and not Swift for PyTorch? Dumb question probably, but I missed it.


Thanks for your reply!

I’d like to find the best tool for the job, to be sure. But my time and knowledge are already limited, even without access to C libraries and beyond.

When I start blazing a new trail, does using Swift distinctively support me “in the wilderness” in any way? Or do I need to do old-school social search and such to decide on the potential libraries to explore? I get that things will integrate nicely in Swift, if I find them, which is great, but am I on my own to find those valuable things?


I imagine mostly because Chris Lattner (who started Swift) works at Google (who develops TensorFlow). No reason Swift for PyTorch couldn’t exist, but there would need to be support from Facebook for such a project.


Whatever you do, don’t forget that all of this is very much the definition of bleeding edge. There’s no obligation to use it. Your current python workflows aren’t going to stop working. There’s no reason not to - and many reasons to - keep doing what you’re doing.

When this whole S4TF/Harebrain system is ready to use, you as a data scientist won’t really have to think about any of what Jeremy & Chris are teaching here. But, unlike in python, if you ever have to, you’ll be able to. That’s the difference.

FWIW, if I were still a manager of a DS/Eng/product team, I’d tell them to definitely not try doing anything in Swift. Maybe in a year or three, when Swift and MLIR/XLA have realised actual performance gains - or at LEAST the APIs have stabilised - there might be some reason to even begin to consider using it for a “real” system.


Protocol-oriented programming is a pretty cool feature of the Swift language. Very powerful and flexible, especially when combined with extensions.
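A minimal sketch of what this looks like in practice (the type and protocol names here are illustrative, not from S4TF): a protocol declares requirements, and a protocol extension supplies a default implementation that every conforming type gets for free.

```swift
// A protocol declares what conforming types must provide.
protocol Describable {
    var name: String { get }
}

// A protocol extension adds shared behavior to every conformer.
extension Describable {
    func describe() -> String { "This is \(name)" }
}

// A struct only needs to satisfy the requirement; it inherits
// describe() from the extension without any class hierarchy.
struct Layer: Describable {
    let name: String
}

let layer = Layer(name: "Dense")
print(layer.describe())  // "This is Dense"
```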

Out of curiosity, could you give an estimate of how long it could take someone to go from a fairly good level of knowledge in Python / TF / DL to becoming a competent contributor to S4TF?

I’m asking because it seems there’s a lot of stuff to learn, but maybe I’m feeling overwhelmed because Chris and Jeremy are talking about so many different things (MLIR, XLA, C/Python interoperability, compiler stuff, …).
Like how much could you contribute to S4TF with just some okay knowledge in Swift?


How does the Swift protocol approach avoid the inheritance-tree-hell problem in languages like C#, where you end up with enormous trees that are impossible to refactor?
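For context on the question above, the usual answer is composition: instead of one deep base-class hierarchy, capabilities are split into small protocols and a type opts into exactly the ones it needs. A hedged sketch with made-up names:

```swift
// Two small, independent capabilities instead of one base class.
protocol Trainable {
    mutating func step()
}

protocol Serializable {
    func serialized() -> String
}

// A type composes capabilities by conforming to several protocols.
struct Model: Trainable, Serializable {
    var iterations = 0
    mutating func step() { iterations += 1 }
    func serialized() -> String { "iterations=\(iterations)" }
}

// Generic code constrains on a protocol composition, not a superclass,
// so refactoring one capability doesn't ripple through a class tree.
func checkpoint<T: Trainable & Serializable>(_ model: inout T) -> String {
    model.step()
    return model.serialized()
}

var model = Model()
print(checkpoint(&model))  // "iterations=1"
```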


Just to clarify, Swift probably isn’t the best tool for the job right now. Part II is about “Impractical Deep Learning”, basically stuff that won’t necessarily help you with your job.

However, it’s the hope that in a year (or thereabouts) Swift will be the best tool for the job. This part was about giving us an opportunity to get on board with a brand new project well before it’s ready for mainstream use.


Similar to @ThomM’s response, it may be useful to revisit Jeremy’s post on Swift:

"The combination of Python, PyTorch, and fastai is working really well for us, and for our community. We have many ongoing projects using fastai for PyTorch, including a forthcoming new book, many new software features, and the majority of the content in the upcoming courses. This stack will remain the main focus of our teaching and development.

It is very early days for Swift for TensorFlow. We definitely don’t recommend anyone tries to switch all their deep learning projects over to Swift just yet! Right now, most things don’t work. Most plans haven’t even been started. For many, this is a good reason to skip the project entirely.

But for me, it’s a reason to jump in! I love getting involved in the earliest days of projects that I’m confident will be successful, and helping our community to get involved too. Indeed, that’s what we did with PyTorch, including it in our course within a few weeks of its first pre-release version. People who are involved early in a project like this can have a big influence on its development, and soon enough they find themselves the “insiders” in something that’s getting big and popular!

I’ve been looking for a truly great numerical programming language for over 20 years now, so for me the possibility that Swift could be that language is hugely exciting. There are many project opportunities for students to pick something that’s not yet implemented in Swift for TensorFlow, and submit a PR implementing and testing that functionality."


Similarly, what are the prevailing opinions on using the mixin pattern in Swift, which has been found to be an anti-pattern in other contexts?


Is a mixin an extension? Does that mean we are mixing functionality into existing types?

As with monkey patching, won’t extensions make code hard to read? Because once the functionality of a particular API (class) is extended in this way, you won’t know whether that functionality comes from the original class or from an extension defined somewhere else.
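To make the comparison concrete, here is a small illustrative extension on an existing standard-library type (the `mean()` helper is hypothetical, not part of Swift's standard library):

```swift
// Extending an existing type, similar in spirit to monkey patching --
// but resolved at compile time rather than mutated at runtime.
extension Array where Element == Double {
    // Hypothetical helper: the arithmetic mean of the elements.
    func mean() -> Double {
        isEmpty ? 0 : reduce(0, +) / Double(count)
    }
}

let xs = [1.0, 2.0, 3.0]
print(xs.mean())  // 2.0
```

One difference from monkey patching worth noting: a Swift extension cannot replace an existing method on the type, and the compiler always knows statically which declaration a call resolves to, so tooling (jump-to-definition) can show where `mean()` comes from.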


Swift is Haskell for Humans :wink:


Oh, I remember my attempts to deal with Haskell :smile: Also a very beautiful language, I would say, but a bit “impractical” for daily usage, at least for me. Though I would definitely advise learning some bits of this language if you’ve never worked with the functional programming paradigm. It could give you some new ideas and a new perspective.


Again, great questions. Because the Swift data science ecosystem is still relatively nascent, we as a community are still discovering, developing, and defining the best libraries. Over the next few months and years, I expect we as a community will establish and disseminate best practices - and I expect this to happen fast. And I expect this fast.ai community will be right at the center of this technology innovation.