Lesson 13 (2019) discussion and wiki

https://groups.google.com/a/tensorflow.org/forum/#!msg/swift/Bhp5uvHAkDQ/E1w-DW0bAwAJ

4 Likes

Any pointers about what these currently unexplored ideas look like?

1 Like

How do LLVM, MLIR, and XLA relate to each other?

6 Likes

I feel like Jeremy should’ve used towels instead of socks :wink:

3 Likes

So we’re talking about a lot of new things here that weren’t mentioned much in fastai before, AFAIK (like low-level CPU/GPU/language stuff). All of this is very interesting, and it would be great if some resources for learning more about this could be shared on this topic wiki (or in a separate post?) :slight_smile:

2 Likes

What type of deep learning architecture would benefit most from MLIR?

So will the XLA presented next week be a speed improvement over PyTorch?

With this Raw framework, can we access the various TF contrib.* modules/ops?

1 Like

Is Swift thread safe?

4 Likes

Every time you want to tweak a batchnorm or an RNN and don’t want to lose all performance :wink:

4 Likes

Chris gave a great explanation of how they relate to each other!

Dependency-wise, XLA depends on LLVM for its CPU and GPU backends, and MLIR is a meta-level IR that is capable of representing LLVM IR, SIL, the TensorFlow graph, and many other compiler IRs. MLIR is next-generation compiler technology that provides shared tooling among the different compilers implemented with it, and makes it significantly easier to convert between different representations.

8 Likes

I’m not sure we’ll see some dramatic improvement over PyTorch before we get to the MLIR step.

@rxwei gave a great response above!

LLVM is a general-purpose compiler infrastructure. It’s good at handling general-purpose programs.

XLA is a compiler for tensor operations. It’s good at making tensor operations fast, and uses LLVM for some things.

MLIR is a new meta compiler intermediate representation (IR) that can represent multiple other IRs, including LLVM IR and TensorFlow graphs. It unifies different compilers, each addressing different requirements, within a single framework.

5 Likes

Raw TensorFlow operators are imported directly from TensorFlow’s C++ implementation. These operators are “raw” as they are not carefully curated Swifty APIs. But you can use them to achieve pretty much everything TensorFlow operators can! Here’s a tutorial about it: https://www.tensorflow.org/swift/tutorials/using_raw_tensorflow_operators.
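For a concrete sense of the difference, here is a minimal sketch based on the linked tutorial. It assumes the 2019-era `Raw` namespace (later toolchains renamed it to `_Raw`):

```swift
import TensorFlow

let x = Tensor<Float>([[1, 2], [3, 4]])
let y = Tensor<Float>([[5, 6], [7, 8]])

// Curated, "Swifty" API:
let sum = x + y

// The same computation through the auto-generated raw operator bindings:
let rawSum = Raw.add(x, y)

// Both produce the same result.
print(sum)
print(rawSum)
```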

4 Likes

How is “&”/referencing different from passing a struct? I missed that bit of Jeremy’s explanation.

1 Like

Cool, thanks! I was poking at this today and couldn’t work out how to access C++ ops in the contrib branches, eg. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/ffmpeg - is that possible, or not yet?

1 Like

Is XLA available now or will it be available in 3 months time?

When you pass a struct or class to an inout argument, Swift requires you to add the & symbol. You can think of inout arguments as copy-in-copy-out, so your function returns a new struct that gets re-assigned to your variable. To learn more, check out the documentation for functions.
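If it helps to see it in code, here is a minimal sketch in plain Swift (no TensorFlow involved) of what `inout` and `&` do:

```swift
struct Counter {
    var value: Int = 0
}

// `inout` lets the function mutate the caller's variable. Conceptually the
// value is copied in, mutated, and copied back out when the function returns.
func incrementTwice(_ counter: inout Counter) {
    counter.value += 2
}

var c = Counter()
incrementTwice(&c)  // the & marks the argument that may be modified
print(c.value)      // prints 2
```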

3 Likes

It’s used when the function is allowed to change the argument. They’ll talk more about it in the next lesson.

XLA is available today, but relatively experimental. We have an example in one of our notebooks, but it sometimes segfaults. :frowning:

1 Like