The fastest background subtraction is BackgroundSubtractorCNT
Background subtraction is a basic operation in computer vision. If you have a fast system, then choosing one of the implementations that come with OpenCV is fine. On the other hand, trying to use any of them on a low-spec system will kill your FPS. Read on to learn why BackgroundSubtractorCNT is the fastest background subtraction, and how to use it.
UPDATE: This project now has its own site.
Since my goal was to perform background subtraction on a low-spec system, I ended up implementing this new algorithm, which to the best of my knowledge is currently the fastest (tested against the OpenCV 3.1.0 background subtraction implementations).
BackgroundSubtractorCNT is going to be covered in several parts:
- Practical for your immediate use – getting and using the code – this post.
- The algorithm behind BackgroundSubtractorCNT (link pending).
- OpenCV optimization tricks used in BackgroundSubtractorCNT (link pending).
Why is BackgroundSubtractorCNT the fastest background subtraction?
The basic reason is that it is very simple, and was thoroughly profiled and optimized with Valgrind. Simple, because the algorithm was developed with simplicity in mind, trying to capture the essence of background subtraction as performed in human vision. The implementation of a simple algorithm is fast as is, but it was further optimized with practical software development methods.
Is BackgroundSubtractorCNT faster than BackgroundSubtractorMOG2?
The short answer is YES. On a high-end PC with 16 fast cores, the difference is very small. On cheap hardware with a low-end ARM processor (a Raspberry Pi 3), however, BackgroundSubtractorCNT is about 2.5 times faster than BackgroundSubtractorMOG2.
It could be the difference between project failure and success. Assuming your minimum acceptance requirement is 15 FPS:
- You drop below 10 FPS with other background subtraction methods – project failure.
- With BackgroundSubtractorCNT you'll easily be at about 20 FPS – project success.
BackgroundSubtractorCNT is outdoor light resilient
More often than you would like, the background lighting changes – especially outdoors under a partly clouded sky. The algorithm includes some field-tested hard-coded thresholds to account for these situations.
Where to get BackgroundSubtractorCNT, and is it free?
Simply download it from BackgroundSubtractorCNT on Github.
I licensed it under the same license type as OpenCV – to make it easy for you to use it in commercial and private projects.
What about detection quality?
When I wrote this class I intended for it to blend seamlessly into existing OpenCV code. Since it inherits from OpenCV's BackgroundSubtractor, it can be used as a drop-in replacement for any other OpenCV background subtractor implementation.
If for OpenCV you would normally do:
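The original snippet is not preserved here, but typical OpenCV usage looks something like the following sketch (using MOG2; the video source and variable names are mine):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // any video source
    cv::Ptr<cv::BackgroundSubtractor> pBgSub =
        cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        pBgSub->apply(frame, fgMask);      // compute the foreground mask
        cv::imshow("Foreground", fgMask);
        if (cv::waitKey(30) == 27)         // Esc quits
            break;
    }
    return 0;
}
```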
Then for this class the only difference is inserting declarations with a new include file and namespace:
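A sketch of those declarations (header and namespace names as used by the Github project – verify against your copy):

```cpp
#include "bgsubcnt.h"      // header from the BackgroundSubtractorCNT repo
using namespace bgsubcnt;  // the class lives in its own namespace
```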
And then you simply use it in place of any other OpenCV BackgroundSubtractor:
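For example, only the factory call changes relative to the plain OpenCV loop (a sketch; assumes the bgsubcnt header and library are available in your build):

```cpp
#include <opencv2/opencv.hpp>
#include "bgsubcnt.h"  // BackgroundSubtractorCNT header

int main() {
    cv::VideoCapture cap(0);
    // The only changed line: create the CNT subtractor instead of MOG2 etc.
    cv::Ptr<cv::BackgroundSubtractor> pBgSub =
        bgsubcnt::createBackgroundSubtractorCNT();
    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        pBgSub->apply(frame, fgMask);  // identical call: a drop-in replacement
        cv::imshow("Foreground", fgMask);
        if (cv::waitKey(30) == 27)
            break;
    }
    return 0;
}
```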
You can tune the behavior when the BackgroundSubtractor is created (or later with setters) –
Use your estimated FPS as the base for tuning, as explained below (it doesn’t have to be accurate).
- How long to wait before considering a pixel to be part of the background?
When you and I look at a scene, we wait for some time before we consider an item to be part of a background. The assumption here is that it takes about 1 second, but you can play with it. I recommend using your expected FPS as the value of minPixelStability when using createBackgroundSubtractorCNT(). The value represents the number of frames to wait when a pixel is not changing before marking it as background. The demo is doing exactly that in main.cpp.
- How long to wait before recognizing that the background changed?
Okay – so we've marked something as background, and things are passing in front of it. When something stays in front of it for a long time, it's time to treat that as the background instead of the previous one – but how long to wait before doing this replacement? The algorithm was tested with a 60-second value and gave good results. You can change that as you want, but I recommend setting maxPixelStability to minPixelStability * 60 in createBackgroundSubtractorCNT(). The demo does exactly that in main.cpp.
But what if you want to REACT VERY FAST TO SCENE CHANGES? If reducing maxPixelStability is not enough, you can pass false for useHistory in createBackgroundSubtractorCNT(). In this case maxPixelStability is ignored. Because the background distinction is weaker, you'll see small ghosts following your foreground objects, and some ghost images fading in the background image. Using minPixelStability = FPS/5 will reduce this phenomenon.
- To parallel or not to parallel?
In my experience, parallelizing everything automatically is a double-edged sword. On the one hand, you don't need to worry about optimizations if you have enough processing power. On the other hand, splitting your processing carefully can yield better optimization. I leave it to you to experiment and decide for your specific design.
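Putting the tuning advice together, creation might look like this sketch. The parameter order shown – minPixelStability, useHistory, maxPixelStability, isParallel – is my assumption based on the project's factory function; check it against the header in your copy:

```cpp
#include "bgsubcnt.h"

int fps = 15;  // your estimated frame rate (doesn't have to be accurate)

// Default tuning: ~1 second to become background, ~60 seconds to replace it.
cv::Ptr<cv::BackgroundSubtractor> pBgSub =
    bgsubcnt::createBackgroundSubtractorCNT(fps, true, fps * 60);

// Fast-reaction variant: useHistory = false, so maxPixelStability is ignored.
cv::Ptr<cv::BackgroundSubtractor> pFast =
    bgsubcnt::createBackgroundSubtractorCNT(fps / 5, false);

// Disable internal parallelization if you prefer to split the work yourself.
cv::Ptr<cv::BackgroundSubtractor> pSerial =
    bgsubcnt::createBackgroundSubtractorCNT(fps, true, fps * 60, false);
```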
Getting the source and using it
Simply copy the files into your project, or install or build a package – it's up to you. See how to do it on the project page.
If you're having trouble, then follow this tutorial:
That’s all for this time. Stay tuned 🙂
See you soon in an upcoming post.