OpenCV in Swift

Hey Brad. As you know, I do a lot of ML work on iOS. Occasionally I recommend that clients use OpenCV instead (because ML isn’t always the correct solution), or that they use OpenCV for certain pre- and postprocessing tasks (old-school CV stuff). It turns out that a lot of people hate using it. I’m not sure if that’s an issue with OpenCV itself, the fact that it’s a C++ API, the fact that it doesn’t really fit into the iOS or Swift ecosystem, or perhaps that it’s not from Apple. So from that point of view, a Swift-based rewrite/rethinking of OpenCV would probably be welcome. :wink:


Agreed, the idea of re-implementing some OpenCV functionality in Swift sounds interesting. Would it be possible to develop a “front-end” solution with different “backends”? Like it was done for Keras, which supports different tensor computation libraries but exposes them via a single API? Then one could have a Swift solution that falls back to GPUImage on iOS/macOS/Metal platforms, to OpenCV on Linux/Windows, or to something entirely new in the future. Something similar to how game engines are built on top of “low-level” rendering APIs. Though I believe it would not be an easy task, of course.


Good thinking. If there’s a well-thought-out public API for the image library, it should be possible to update the low-level details (i.e. swap backends between OpenCV and native Swift) without much churn for library consumers.
@bradlarson do you think it’s possible to have a public API that would work for both OpenCV and native Swift implementations?


With an appropriate level of abstraction, I definitely think you could have a common image processing interface and different backends. For example, the Metal and OpenGL Swift implementations of GPUImage have a roughly identical external API, with external differences only in endpoints that touch the underlying implementation (passing or retrieving textures, interactions with OpenGL sharegroups, etc.). Inside the Metal implementation, we’re building the ability to switch between custom-written Metal shaders and Metal Performance Shader equivalents for many operations for cases where one or the other isn’t supported or is more performant on a given device.
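As a rough illustration of what such a common interface might look like in Swift (a hypothetical sketch; `ImageBackend`, `thumbnail`, and the operation names are invented for this example and are not an existing GPUImage or SwiftCV API):

```swift
// Hypothetical sketch of a backend-agnostic image-processing API.
// Pipeline code depends only on the protocol, not on OpenCV/Metal/GPUImage.
protocol ImageBackend {
    associatedtype Image
    func load(_ path: String) throws -> Image
    func gaussianBlur(_ image: Image, kernelSize: Int) -> Image
    func resize(_ image: Image, width: Int, height: Int) -> Image
}

// Written once against the protocol; runs on any conforming backend.
func thumbnail<B: ImageBackend>(_ backend: B, path: String) throws -> B.Image {
    let img = try backend.load(path)
    let blurred = backend.gaussianBlur(img, kernelSize: 5)
    return backend.resize(blurred, width: 128, height: 128)
}

// A trivial backend that just records the operations, to show the shape.
struct TracingBackend: ImageBackend {
    func load(_ path: String) throws -> String { "load(\(path))" }
    func gaussianBlur(_ image: String, kernelSize: Int) -> String {
        image + "|blur(\(kernelSize))"
    }
    func resize(_ image: String, width: Int, height: Int) -> String {
        image + "|resize(\(width)x\(height))"
    }
}
```

The `associatedtype` lets each backend keep its own native image representation (a `Mat`, a Metal texture, etc.) while pipeline code stays backend-agnostic.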

This is something that was recently discussed in regards to a proposed Google Summer of Code project for a Swift plotting library over in the Swift for TensorFlow group. I think many of the same things said there could apply here. The primary challenge would be in making sure things were still architected for performance, avoiding GPU<->CPU transfers at intermediate stages, planning for efficient memory use, and so on.


I hate OpenCV - installation is a pain, python multiproc is broken, docs are crappy, API is poor, etc… BUT it’s fast and reliable, which is most important for such a foundational thing!

So something with similar performance and reliability but less crappy on every other dimension would be much appreciated.

For GPU stuff all we need is a decent implementation of grid_sample - we can do everything else with tensor computations (see our recent GPU augmentation lesson). Personally I’m more interested in something that works well on single images on CPU.
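For readers unfamiliar with it, `grid_sample` (as in PyTorch) samples an image at a grid of normalized coordinates with bilinear interpolation. A minimal single-channel CPU sketch in Swift, assuming PyTorch-style coordinates in [-1, 1] with `align_corners=true` semantics (`gridSample` is a hypothetical helper, not SwiftCV API):

```swift
// Bilinear grid sampling of a single-channel image.
// Each grid point is a normalized (x, y) coordinate in [-1, 1].
func gridSample(image: [[Double]], grid: [(x: Double, y: Double)]) -> [Double] {
    let h = image.count, w = image[0].count
    return grid.map { pt in
        // Map normalized [-1, 1] coords to pixel coords (align_corners=true).
        let fx = (pt.x + 1) / 2 * Double(w - 1)
        let fy = (pt.y + 1) / 2 * Double(h - 1)
        // Clamp the four neighbouring pixel indices to the image bounds.
        let x0 = max(0, min(w - 1, Int(fx.rounded(.down))))
        let y0 = max(0, min(h - 1, Int(fy.rounded(.down))))
        let x1 = min(w - 1, x0 + 1), y1 = min(h - 1, y0 + 1)
        let dx = fx - Double(x0), dy = fy - Double(y0)
        // Bilinear blend of the four neighbours.
        return image[y0][x0] * (1 - dx) * (1 - dy)
             + image[y0][x1] * dx * (1 - dy)
             + image[y1][x0] * (1 - dx) * dy
             + image[y1][x1] * dx * dy
    }
}
```

A GPU version is the same arithmetic per grid point; once you have it, warps, rotations and elastic distortions are just different ways of generating the grid.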


New link for the install:


Thanks @Matthieu, fixed above now.

@jeremy did you have a chance to look at this PR?
I’ve made a simple function that does jpeg load/rotate/blur/crop/resize/etc. and timed it using OpenCV compiled with the original install script vs. the script in the PR. I’m seeing a 2x improvement in average speed for my particular function (~45ms vs ~85ms), and OpenCV compilation time is also 2x faster (~8m vs ~16m).

Timing code:

// Load a JPEG and run a chain of typical transforms
// (colour convert, blur, rotate, pad, flip, zoom, resize).
func test_perf(_ path:String) -> Mat {
    var cvImg = imread(path)
    cvImg = cvtColor(cvImg, nil, ColorConversionCode.COLOR_BGR2RGB)
    let rotMat = getRotationMatrix2D(Size(cvImg.cols / 2, cvImg.rows / 2), 20, 1)
    cvImg = GaussianBlur(cvImg, nil, Size(25, 25))
    cvImg = warpAffine(cvImg, nil, rotMat, Size(cvImg.cols, cvImg.rows))
    cvImg = copyMakeBorder(cvImg, nil, 40, 40, 40, 40, BorderType.BORDER_CONSTANT, RGBA(0, 127, 0, 0))
    cvImg = flip(cvImg, nil, FlipMode.HORIZONTAL)
    let zoomMat = getRotationMatrix2D(Size(cvImg.cols, cvImg.rows / 2), 0, 1)
    cvImg = warpAffine(cvImg, nil, zoomMat, Size(600, 600))
    cvImg = resize(cvImg, nil, Size(300, 200), 0, 0, InterpolationFlag.INTER_AREA)
    return cvImg
}

let imgpath = FileManager.default.currentDirectoryPath + "/SwiftCV/Tests/SwiftCVTests/fixtures/zoom.jpg"
time(repeating:30) {_ = test_perf(imgpath)}

Thanks for the PR! There are 2 install options in the original file - you removed both and replaced them with just one. The idea of having both is to allow people to easily switch between a more aggressively optimized version or not. The more aggressive one is commented out - but that’s the one you should compare to.

I already had FAST_MATH and IPP; do you know if CPU_BASELINE is needed? When I checked my config results it seemed to be using all my CPU features AFAICT.

You can remove $(nproc --all) entirely FYI since I believe -j defaults to that.

I don’t think we want TBB (or any other threading) since it seems likely to interfere with Swift threads. You should check that your tests run with SetNumThreads(0).

There was just one option when I made the PR :slight_smile: When merging the latest changes, I thought you’d just improved some options and commented out the older variant (I didn’t realize it was something people could choose between). Maybe it makes sense to support an argument in the install script (e.g. --aggressive) to switch between the two? Actually, I’d consider the uncommented one to be the “more aggressive” one because of fast_math.

Hmm, I think fast_math is only used in the uncommented line, and IPP is not used in either? Not sure if it’s enabled by default.

I was following the CPU build optimization doc, which says CPU_BASELINE sets the minimum set of CPU optimizations. Setting it to DETECT or NATIVE auto-detects your CPU features, compiles specifically for your CPU, and adds the -march=native flag for gcc. I have a consumer-level CPU (an old Intel i7 something), so perhaps OpenCV could use even more optimizations on Intel Xeon architectures (e.g. those used in AWS instances), but I’ve only checked on my PC, and it still seems to give some boost.

Not exactly: -j without an argument defaults to an unlimited number of jobs regardless of CPU cores, and may run more jobs than the machine can handle (e.g. memory-wise). But actually I don’t think anything more than 4 makes a huge difference. What makes a difference in compile time is the BUILD_LIST option, which compiles only the listed modules we need for loading and transforming images.
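To gather the flags under discussion in one place, a hedged sketch of what the configure step could look like (the paths and the module list are illustrative, not copied from the actual install script):

```
# CPU_BASELINE=NATIVE: auto-detect and target this machine's CPU features
# ENABLE_FAST_MATH=ON: faster but less strict floating-point math
# WITH_TBB=OFF: avoid a second threading runtime alongside Swift's threads
# BUILD_LIST: compile only the modules needed for loading/transforming images
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CPU_BASELINE=NATIVE \
      -D ENABLE_FAST_MATH=ON \
      -D WITH_IPP=ON \
      -D WITH_TBB=OFF \
      -D BUILD_LIST=core,imgproc,imgcodecs \
      ../opencv
make -j4
```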

I disabled TBB and re-checked with SetNumThreads(0) (but note that your original commented-out line has TBB enabled). The original commented and uncommented options have similar performance, ~85ms (hmm, except that the commented-out one took 35m to compile vs. 16m for the uncommented one). With the options from the PR, it’s still ~15% faster (~70ms), but the biggest gain is compilation time of course :slight_smile:

I’m trying to run 08b_data_block_opencv.ipynb in Google Colab. I set up the Swift environment by running the following code:
!curl -s https://course.fast.ai/setup/colab | bash
!git clone https://github.com/fastai/course-v3.git
!mv course-v3/nbs/swift/* /content

Then I run the following code:
%install-location $cwd/swift-install
%install '.package(path: "$cwd/FastaiNotebook_07_batchnorm")' FastaiNotebook_07_batchnorm
%install '.package(path: "$cwd/SwiftCV")' SwiftCV

I got the following messages:

Installing packages:
.package(path: "/content/FastaiNotebook_07_batchnorm")
FastaiNotebook_07_batchnorm
.package(path: "/content/SwiftCV")
SwiftCV
With SwiftPM flags: []
Working in: /tmp/tmparid2d7t/swift-install
'opencv4' opencv4.pc: warning: couldn't find pc file
/content/SwiftCV/Sources/COpenCV/imgproc.cpp:253:9: error: no member named ‘HoughLinesPointSet’ in namespace ‘cv’
cv::HoughLinesPointSet(*points, *lines, linesMax, threshold,
^
/content/SwiftCV/Sources/COpenCV/imgproc.cpp:356:35: error: no viable conversion from ‘cv::Mat’ to ‘int’
cv::applyColorMap(*src, *dst, *colormap);
^
~~~~
//usr/include/opencv2/core/mat.hpp:1541:28: note: candidate template ignored: could not match ‘vector<type-parameter-0-0, allocator >’ against ‘int’
template operator std::vector<_Tp>() const;
^
//usr/include/opencv2/core/mat.hpp:1542:35: note: candidate template ignored: could not match ‘Vec<type-parameter-0-0, cn>’ against ‘int’
template<typename _Tp, int n> operator Vec<_Tp, n>() const;
^
//usr/include/opencv2/core/mat.hpp:1543:42: note: candidate template ignored: could not match ‘Matx<type-parameter-0-0, m, n>’ against ‘int’
template<typename _Tp, int m, int n> operator Matx<_Tp, m, n>() const;
^
//usr/include/opencv2/imgproc.hpp:4101:70: note: passing argument to parameter ‘colormap’ here
CV_EXPORTS_W void applyColorMap(InputArray src, OutputArray dst, int colormap);
^
2 errors generated.
[0/5] Compiling COpenCV imgproc.cpp
[0/5] Compiling COpenCV version.cpp

Install Error: swift-build returned nonzero exit code 1.

Any idea how I can fix this error?

@jimlou SwiftCV expects OpenCV v4, but your Colab runtime seems to have v2 (not sure if that’s what the fastai script installs or the Colab default). Also, the SwiftCV code in the fastai repo needs some fixes to work with the latest S4TF in Colab.
Here’s a notebook with SwiftCV that works in Colab: https://colab.research.google.com/github/vvmnnnkv/SwiftCV/blob/master/Extra/Tests.ipynb
It installs pre-compiled OpenCV4 first.

@jeremy is the 08b_data_block_opencv.ipynb notebook still relevant/needed? I could try to fix it, but I’ve got the impression that OpenCV won’t be used in the future anyway.

Thank you for the information.

I think it is, at the very least, a useful learning resource. And we don’t really have an alternative for fast image loading in swift that I know of. So IMO it would be great to fix it…

There is the FreeImage wrapper (SFImage) that I wrote: SFImage: A Swift Wrapper for FreeImage. Not sure how FreeImage compares to OpenCV in terms of speed, but it seems pretty fast to me.


Hi @jimlou
The https://course.fast.ai/setup/colab script doesn’t install OpenCV4.
Here’s what worked for me:
In your Python notebook, where you set up the runtime environment (git clone & mv the swift folder to /content), also execute:

!/bin/bash SwiftCV/install/install_cv4.sh

This will compile and install OpenCV4. This is not fast and will take a while.
A faster alternative is to download pre-compiled OpenCV4:

!curl -sL https://github.com/vvmnnnkv/opencv-colab/raw/master/opencv4.tar.gz | tar zxf - -C / && ldconfig /opt/opencv-4.1.0/lib/ && ln -s /opt/opencv-4.1.0/lib/pkgconfig/opencv4.pc /usr/lib/pkgconfig/opencv4.pc

After that, open or restart (don’t reset!) the Swift notebook and it should be able to %install SwiftCV.
I guess you know that the Python & Swift runtime types should be the same (CPU/GPU); otherwise you’ll be using different VMs, and file changes from the Python runtime won’t affect the Swift runtime.

In the Swift notebook, if all the packages in the %install cell compile but there’s an import error at the end, restart (not reset :)) the runtime and try running it again; it will skip the build step, as it’s already done, and will import the final package just fine (this is probably a swift-jupyter bug).

Sorry, I was wrong, @jeremy has already made the fix :blush: Just installing OpenCV4 is enough for both OpenCV notebooks to work!

I think https://course.fast.ai/setup/colab could be improved to download OpenCV4 binaries pre-compiled for Colab (compiling from source each time would be annoying). But I couldn’t find that Colab script on GitHub to make a PR. @jeremy, if you think this would be useful, please put the Colab script somewhere so it can be updated.

There are lots of timings in those notebooks; you could try substituting SwiftCV with your library and comparing, it would be interesting!

Thanks for creating the precompiled binary - that’s great! How long does it take to download and unzip? If it’s not too long, then I’d be happy to add it to setup. The file is here:

It takes about 7 secs to download from GitHub and ~1 sec to unpack on Colab. The file is about 30MB; maybe if it were hosted somewhere else the download would be faster, not sure :slight_smile:

Hi @jeremy
Please see PR for colab: https://github.com/fastai/course-v3/pull/391 :slight_smile:

I tried to run your notebook https://colab.research.google.com/github/vvmnnnkv/SwiftCV/blob/master/Extra/Tests.ipynb. It produced a stack trace that covers 4 pages in markdown. Any idea why SwiftCV is having issues installing on Colab?
swiftCV_install_errorMsg.pdf (36.9 KB)