In my view, no issue. From the tutorials I've seen, Swift will be easier to learn and read, e.g. thanks to its explicit types.
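For example, here is a tiny sketch (my own toy code, not anything from the S4TF API) of how explicit types read:

```swift
// Explicit parameter and return types state the contract up front,
// and the compiler checks it before the code ever runs.
func accuracy(predictions: [Int], labels: [Int]) -> Double {
    precondition(predictions.count == labels.count, "mismatched lengths")
    // zip pairs each prediction with its label; count how many match.
    let correct = zip(predictions, labels).filter { $0.0 == $0.1 }.count
    return Double(correct) / Double(labels.count)
}

print(accuracy(predictions: [1, 0, 1], labels: [1, 1, 1]))  // ~0.667
// accuracy(predictions: "oops", labels: [1])  // rejected at compile time, not at runtime
```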
Furthermore, the Windows environment does not cope too well with LLVM.
Web-based dev environments like Jupyter and Colab work great, and native Windows support is early but well underway.
If you are interested, I recommend following the windows tag.
I am hoping Swift could solve the following problems that I see with deep learning in practice:
- One language for experimentation and deployment. Today experimentation is mostly in Python and deployment in Python/Java/C++, which creates unnecessary handoffs between teams and makes the training -> deploy -> retraining cycle harder.
- A language designed for production use and for collaboration, long-term maintenance, etc. Many problems come with using Python as a primary language: multithreading limitations, the costs of a dynamically typed language, and so on.
- The hope that LLVM being a first-class citizen in the Swift world would mean moving towards a dream world of better interoperability across hardware.
- Since understanding the stack all the way down to the hardware acceleration layer matters in deep learning, one language for the entire stack could let people understand and modify things at a deeper level (a small sketch of this follows below).
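To make that last point concrete, here is a rough sketch of what "one language for the whole stack" can look like, based on the @differentiable support in early Swift for TensorFlow builds (the exact APIs may have changed since):

```swift
// Differentiable programming is integrated into the Swift compiler in S4TF,
// so the model code and its gradients live in the same language.
import TensorFlow

@differentiable
func squareLoss(_ x: Float) -> Float {
    return x * x
}

// gradient(of:) asks the compiler-integrated autodiff for the derivative;
// no second framework or language is involved.
let gradSquareLoss = gradient(of: squareLoss)
print(gradSquareLoss(3))  // 6.0
```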
Would love to hear your thoughts on the above. Also, I would love it if you could give a primer on LLVM and MLIR.
Will Swift 4 TensorFlow enable the practical use of DL on hardware besides Nvidia GPUs and Google TPUs? Nvidia's near-monopoly seems to be a huge barrier to the exponential growth of DL: their latest 2080 Ti is 2x faster than the 1080 Ti, but also 2x as expensive.
Will Swift 4 TensorFlow enable dynamic optimization of GPU core and memory allocation to advance hardware parallelization on GPUs? Will it improve on CUDA, or will it just use CUDA?
Will Swift 4 TensorFlow simplify the use of multiple GPUs in the training loop, or improve its performance?
I thought about that, too. Perhaps once MLIR is out it will be easier to create “backends” for AMD Vega and other cards.
The fastai Python library is well documented and has great support through the fast.ai community, forums, courses, etc. It is very easy to provision virtual resources to learn deep learning and apply it to a problem of interest.
How does the setup of Swift 4 TensorFlow compare to the fastai setup? What hosting options are available?
Fastai can only perform well on machines with Nvidia GPUs. Does S4TF have the same limitation? Others have asked about Windows compatibility - what about macOS?
Why not use Swift on top of PyTorch? Why use TensorFlow?