B4A Library OpenCV 3.x

OpenCV (Open Source Computer Vision Library) is a huge, actively developed project/framework, mainly written in C++. It is released under a BSD license.
Read more here: http://opencv.org/
OpenCV versions: https://opencv.org/releases/


OpenCV library for B4A: wraps the official OpenCV 3.x release for Android (not all of it, but about 95%).
  • Feel free to test and use it.
  • License: you can use it in your projects, but you are not allowed to distribute or sell this library itself. Of course you can distribute apps that use it (remember that OpenCV itself has a BSD license, as stated before).
  • Supported architectures: armeabi-v7a and arm64-v8a
  • Versions
    • 1.04 (2020/05/17)
      • This version wraps the OpenCV 3.4.1 Android release and fixes some bugs of the previous version (mostly some instance methods and some classes that were not exposed).
      • One of the major additions is the DNN module and related classes.
      • Link to the B4A library files: HERE :)
    • 1.00 (2017/09/27)
      • First B4A library wrapper (OpenCv320forB4A V1.00), which replicates (about 95% of) the official OpenCV 3.2.0 Java API for Android.
      • (removed link. Use 1.04)

  • Please note that my support will be limited to issues with the wrapper itself, not to helping translate OpenCV code from other languages to B4A 🤷‍♂️


========================================================================

(There may be some inaccuracies about the OpenCV project in this post, since I am relatively new to it and don't know all of its internals. If you find any, please let me know and I will correct them.)

A bit of explanation
There exist 'official' OpenCV wrappers for different languages and platforms. Android is one of them.
The official OpenCV 3.2.0 for Android API includes a lot of classes, organized in modules. But it does not include "all" of the original OpenCV modules (there are 'experimental', non-free, or platform-specific modules which may be present on other platforms but not on Android). Also, there are build options to "tune" it...

I have played with it quite a lot this last year, in a huge project which I started with inline Java, and also by translating examples and testing features. But what I have used is just a small percentage of the exposed classes and methods, so there may be some (let's hope not too many) things to fix.

How to learn OpenCVforB4A
If you have worked with OpenCV before, the learning curve will be easy.
If it was with Java and OpenCV for Android, it will be immediate, since all the methods have exactly the same syntax (except for initializers, polymorphism, and some special cases where I simply did my best); see the short comparison after the list below.
Anyhow, the ways I can think of are (I will add links later; suggestions such as online tutorials are also welcome):
  • Attached examples.
  • B4A OpenCV tutorials. I will write a couple of them covering what I consider the most important building pieces (for instance the Mat class, which in B4A is OCVMat) and modules.
  • Internet examples: there are A LOT of examples out there, written in C++, Java, Python, JavaCV. I would look for examples in the language that is easiest for you to understand and then try to translate them. Some tips (based on my experience):
  • OpenCV syntax has changed across versions. There is an 'old' syntax in which nearly everything started with "cv...". Since version 3.x the organization is 'cleaner' (the project is written in C++ instead of C) and there were major syntax changes.
  • JavaCV: translating from JavaCV to OpenCV should be quite easy, but it is not always direct. JavaCV mixes the old OpenCV syntax with some of its own, which can be a bit confusing at the beginning, but then it is also easy.
  • Python: there is a lot of material...
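As a tiny illustration of the Java-to-B4A mapping mentioned above, here is how a single Java line translates. The B4A side is only a sketch based on the OCV-prefix rule; whether the Imgproc constants are exposed as properties and whether an Initialize call is needed are my assumptions, so check the attached examples for the exact form.

B4X:
' Java (OpenCV 3.2.0 Android API):  Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGB2GRAY)
' B4A with this wrapper (sketch):
Dim src As OCVMat      ' filled elsewhere, e.g. from a camera frame or a loaded image
Dim gray As OCVMat     ' destination Mat, filled by cvtColor
Dim ip As OCVImgProc   ' the static Imgproc methods are reached through an instance
ip.cvtColor(src, gray, ip.COLOR_RGB2GRAY)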

First steps. Prepare for some crashes...
  • In OpenCV nearly everything takes place in native code.
  • When we call a Sub/method/algorithm, it performs some internal checks to see whether all the input data is correct. These checks are performed on the native side. If something is not correct (wrong OCVMat dimensions, incoherent parameters, ...) it throws an exception and crashes. If we are lucky, the log may give some clues about the check that made it crash.
  • On the good side, it is very easy to achieve results with OpenCV (check the examples). The really difficult part, as with many other things, is fine-tuning: OpenCV has a collection of really powerful 'primitive' objects and operations, and really complex algorithms that can do many things, but it is the user who has to glue them all together to achieve the desired results (see the minimal sketch after this list).
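A minimal sketch of such a pipeline (grayscale conversion followed by the Canny operator), wrapped in Try/Catch so that a failed native check is at least logged instead of silently killing the Sub. Method and constant names follow the Java Imgproc API with the OCV prefix; the thresholds and any Initialize requirements are assumptions.

B4X:
Sub DetectEdges(src As OCVMat) As OCVMat
    Dim ip As OCVImgProc
    Dim gray As OCVMat
    Dim edges As OCVMat
    Try
        ip.cvtColor(src, gray, ip.COLOR_RGB2GRAY)  ' work on a single-channel image
        ip.Canny(gray, edges, 50, 150)             ' edge map; 50/150 are arbitrary thresholds
    Catch
        Log(LastException.Message)                 ' e.g. a CvException when sizes/types don't match
    End Try
    Return edges
End Sub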


(from the previous Beta announcement)
  • IMPORTANT: you must take this into account:
  • OpenCV (the included binary modules) is a free(*) project, but subject to license terms as described here: http://opencv.org/
    • (*): there are some modules in the OpenCV project which are non-free, but here I am referring to the ones included in the library.
  • My work (the B4A library) is free to test and use, but you can donate for it :). I'll keep donors updated with "advanced" material and examples.
  • If you are interested, please PM me with your mail address and I will send you a link with the library and some basic examples. (be patient if you don't receive it immediately, I'll do it as soon as possible).
  • There is no documentation. In short, the syntax is nearly exactly the same as the OpenCV 3.2.0 Java API, adding an "OCV" prefix and only the minimum modifications needed to adapt to B4A. For reference (taking into account the described syntax changes) you can look at http://docs.opencv.org/java/3.1.0/ (which is not the latest, but the API is nearly the same).
  • It is preferable if you have worked with OpenCV before and/or can translate examples from Java/C++ and/or are simply interested in it.
  • I recommend starting with the examples and trying to understand what is done. Just experimenting can lead to crash after crash of the native libraries with nearly no useful information, which can be very discouraging.
  • I forgot: the included binaries are for armeabi-v7a and arm64-v8a devices.
---------------------------------------------------------------------
Some screenshots taken from the examples
Canny operator - Features2D - Color space conversion
s1.png
2D-FFT
s2.png
Color Blob detection
s3.png
 

Attachments

  • JavaCameraView2.zip
  • CameraOpenCvTest7.zip
  • BlobDetector5.zip
  • FaceDetector8.zip

JordiCP

Start OffTopic:
Sub OffTopic_Resume :D

' Please make sure that topic has been previously loaded​
  • I imagine that they have a really huge database continuously trained with features of every object, so I also suppose that OpenCV or similar must be somewhere in the middle, with a lot of algorithms running on it.
  • Related to the offtopic, the other day someone told me about Pinterest Lens; I suppose the idea behind it is similar. Also, Google image search must use some kind of feature-based search....
  • There are so many things that can be done...if one had enough time for it :rolleyes:
End Sub
 

padvou

Maybe they are crawling the entire net for images and their descriptions, who knows?
 

JordiCP

The goal is to detect the position of the center of the circle in the panel, so that I can take the X and Y coordinates for further use.
I have added the "imageManipulations2" example with your picture and an option (the last checkbox) to detect circles using the Hough transform. The rest of the options do nothing when you enable the last one (check the code), but they are good for learning.
You will see that it works OK for your image, and that there are some "false positives" for the other two, which don't have "real" circles. That's how detection works: you tune the image and the parameters based on what you already know about it, in order to "help" the algorithm as much as possible.
Also, with some detectors (such as the Hough transform here), depending on the parameters used, the algorithm may not "come back" for many seconds.
(Take into account that the full example only expects 640x480 source images, so it may crash if you feed it other resolutions.)
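For orientation, the circle-detection step boils down to something like the sketch below. This is not the literal example code; the overload names, constants and all the numeric parameters are assumptions to be tuned as described above.

B4X:
Dim ip As OCVImgProc
Dim gray As OCVMat
Dim circles As OCVMat
ip.cvtColor(srcMat, gray, ip.COLOR_RGB2GRAY)   ' HOUGH_GRADIENT works on a grayscale image
ip.medianBlur(gray, gray, 5)                   ' light smoothing reduces spurious edges
' dp=1, minDist=50 px, Canny high threshold=100, accumulator threshold=40, radii 10..100 px
ip.HoughCircles(gray, circles, ip.HOUGH_GRADIENT, 1, 50, 100, 40, 10, 100)
For i = 0 To circles.cols - 1
    Dim c() As Double                          ' one detection: (x, y, radius)
    c = circles.get(0, i)
    Log("Circle at (" & c(0) & ", " & c(1) & "), radius " & c(2))
Next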
 

Attachments

  • imageManipulations2.zip

padvou


That's a great example!
However, how do I detect the circles when they exist in another image?
Here's an example image attached, where I would like to detect the position of the circles as you do in imageManipulations2 and log the circle coordinates.
Also, why only 640x480?
 

Attachments

  • Capture.PNG

JordiCP

Hi,

In fact the 640x480 limit was a limitation of the example, since the images were loaded and converted to a mutable bitmap. Now it does not have this limitation.

I have made it work with your other sample picture, but I had to tune the parameters of the HoughCircles function. The circle is drawn and its coordinates are written to the log.
Note that there is no "magic call" that will always detect what you want, because in the same image there are "sets of points" that an algorithm "could think" are circles. If you play with the parameters you will see that it detects other circles, even if they are not real ones, or doesn't detect anything. That is what I meant when I said that algorithms need 'clues'.
See THIS for a reference on the used function

Just a thought: if the circles in the image will always be that logo, even at different sizes and positions, perhaps you could use other approaches...
 

Attachments

  • imageManipulations3.zip

padvou


Thank you very much for the example. It really helped me, because it fits with what I have in mind, so it's easier for me to understand.
How do you deal with concentric circles? I find that this parameter cannot be set to 0: "minDist – Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed."
 

JordiCP

I would try minDist=1 for concentric circles; I don't know if it accepts 0, you can test it.
Anyway, the HoughCircles function uses the HOUGH_GRADIENT method (it is one of the parameters), since it is the only one implemented in the Android version. This means it works on Canny edges (convert to grayscale and then apply the Canny operator to detect edges), so it does not take color information into account.

Again, if you know particularities of the circles that you want to find (background color, circle color, maximum size, whether there is always more than one since they are concentric, ...) you can tune your algorithm to improve its "detection rate", not only through the function parameters but also by "preprocessing" your image according to this info (a sketch follows below).
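As an illustration of that kind of preprocessing, a rough sketch. Nothing here is tested; the constant names follow the Java API with the OCV prefix, and the threshold values are assumptions that depend entirely on your images.

B4X:
Dim ip As OCVImgProc
Dim gray As OCVMat
Dim mask As OCVMat
Dim circles As OCVMat
ip.cvtColor(srcMat, gray, ip.COLOR_RGB2GRAY)
ip.medianBlur(gray, gray, 5)                      ' remove noise before the edge stage
' If the logo is known to be bright on a darker background, a simple threshold
' removes most of the clutter before running the detector (128 is an arbitrary cut)
ip.threshold(gray, mask, 128, 255, ip.THRESH_BINARY)
' minDist = 1 so that concentric circles are not merged into a single detection
ip.HoughCircles(mask, circles, ip.HOUGH_GRADIENT, 1, 1, 100, 40, 0, 0)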
 

padvou


I have added some SeekBars to the project you uploaded, so I can try some tuning in real time instead of reloading the project over and over.
However, I find that tuning may work for a specific image, but since I will not know the exact size of the circles in my application, it will either not detect them at all, detect them partially, or produce false positives.
So if there is no better approach, I think I should move to image detection and not circle detection.
What do you think?
 

JordiCP

So if there is no better approach, I think I should move to image detection and not circle detection.
That was one of the reasons for my previous questions ;).
Circle detection may work directly (without tuning) only for 'lab' images. If all your images will contain the white logo with the concentric circles and this is what you are looking for, then you could try image detection, or even another approach: detect white blobs, check whether their bounding boxes are more or less square, and run the circle detection only on that subregion (or even better, assume that the circle center will be near the center of that square); a rough outline follows below.
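A rough, untested outline of that blob-first idea. The contour and bounding-box helpers are assumed to mirror the Java Imgproc API with the OCV prefix (OCVMatOfPoint is an assumed name), and the size and squareness tests are arbitrary.

B4X:
Dim ip As OCVImgProc
Dim mask As OCVMat
Dim hierarchy As OCVMat
Dim contours As List
contours.Initialize
' 1) Isolate white regions (the logo) with a high threshold on the grayscale image
ip.threshold(grayMat, mask, 200, 255, ip.THRESH_BINARY)
' 2) Find the blobs and keep those whose bounding box is roughly square
ip.findContours(mask, contours, hierarchy, ip.RETR_EXTERNAL, ip.CHAIN_APPROX_SIMPLE)
For i = 0 To contours.Size - 1
    Dim mp As OCVMatOfPoint = contours.Get(i)
    Dim r As OCVRect = ip.boundingRect(mp)
    If r.width > 20 And Abs(r.width - r.height) < 0.2 * r.width Then
        ' 3) Run the circle detector only on this candidate region,
        '    or simply take the center of the rectangle as the circle center
        Dim roi As OCVMat = grayMat.submat(r.y, r.y + r.height, r.x, r.x + r.width)
        ' ... HoughCircles on 'roi' as in the earlier sketch ...
    End If
Next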
 

padvou



Image detection is that thing with the XML files, right?
A fellow suggested a procedure some posts ago in order to create them.
Is one of this thread's examples related to this? Does this approach work even if the image to be detected is proportionally resized compared to the original?
 

JordiCP

If the image to be found is always the same, but scaled and in different positions, you could also try multiscale template matching (see the sketch below).

Is the image to be detected always the same?
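As a starting point, here is a sketch of plain (single-scale) template matching with this wrapper; for the multiscale variant you would repeat it over several resized copies of the template and keep the scale with the best score. The matchTemplate call is the one mentioned later in this thread; the OCVCore / minMaxLoc names and the result type are assumptions based on the Java API.

B4X:
Dim ip As OCVImgProc
Dim core As OCVCore                      ' assumed wrapper of the Core module
Dim result As OCVMat
' Slide the template over the scene; TM_CCOEFF_NORMED gives a normalized score
ip.matchTemplate(sceneMat, templateMat, result, ip.TM_CCOEFF_NORMED)
' The brightest point of 'result' is the best match position
Dim mm As OCVMinMaxLocResult = core.minMaxLoc(result)   ' assumed return type
Log("Best score " & mm.maxVal & " at (" & mm.maxLoc.x & ", " & mm.maxLoc.y & ")")
' Multiscale: loop over resized copies of templateMat (ip.resize) and keep the highest maxVal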
 

padvou

Well,
To be exact, I have to detect three images and find out the distances between them. But to answer your question: yes, basically they are always the same.
 

padvou


I've figured out that it must be this. Could you please help me with the B4A code?
 

biometrics

Hi Jordi,

I'm trying to port my Windows OpenCV app to Android. I need some help with the following two issues:

1. After using detectMultiScale to find faces, I want to access the Mat of a face. In C++ I do the following:

B4X:
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(iFaceWidth, iFaceHeight));
for( int i = 0; i < faces.size(); i++ ) {
  Mat faceROI = frame_gray( faces[i] );

In B4A I can't figure out how to access it, so far I've got:

B4X:
Dim frame_gray As OCVMat
Dim faces As OCVMatOfRect
Dim facesArray() As OCVRect
Dim faceROI As OCVMat

libOpenCvFaceCascade.detectMultiScale(frame_gray, faces, dScaleFactor, iMinimumNeighbours, iHaarScaleImage, minimumSize, maximumSize)  
facesArray = faces.toArray
  
For iFaceIndex = 0 To facesArray.Length - 1
  faceROI = facesArray(iFaceIndex).clone

The last line produces an error:

B4X:
faceROI = facesArray(iFaceIndex).clone
javac 1.8.0_131
src\isenzo\audiencemeasurement\main.java:517: error: incompatible types: Rect cannot be converted to Mat
mostCurrent._faceroi = (com.appiotic.ocv4b4a.core.Mat)(_facesarray[_ifaceindex].clone());

I can see it's the wrong type, but how do I access the detected faces' Mats?

2. I'm looking for the OpenCV FastMatchTemplate function. Where can I find it?

Thanks a lot, this is an awesome library.
 

biometrics

I've changed the clone line to:

B4X:
faceROI = faces.submat(facesArray(iFaceIndex).y, facesArray(iFaceIndex).y + facesArray(iFaceIndex).height, facesArray(iFaceIndex).x, facesArray(iFaceIndex).x + facesArray(iFaceIndex).width)

and I'm now getting this error:

B4X:
CvException [com.appiotic.ocv4b4a.core.CvException: cv::Exception: C:/Develop/git/opencv_320/modules/core/src/matrix.cpp:483: error: (-215) 0 <= _rowRange.start && _rowRange.start <= _rowRange.end && _rowRange.end <= m.rows in function cv::Mat::Mat(const cv::Mat&, const cv::Range&, const cv::Range&)
 

JordiCP

I suppose that you want to get a ROI of the original picture, that is, frame_gray. This is not what you are doing in your code.

'faces' is an OCVMat, more precisely an OCVMatOfRect. It contains the OCVRects that have been detected, so you can get the rectangles from it in order to extract the ROIs that you want from the original image:
B4X:
faceROI = frame_gray.submat(facesArray(iFaceIndex).y, facesArray(iFaceIndex).y + facesArray(iFaceIndex).height, facesArray(iFaceIndex).x, facesArray(iFaceIndex).x + facesArray(iFaceIndex).width)

faceROI will be a subMat of the original: whatever you do to it will be done to the original. So, if you need a copy, you must do (not tested):
B4X:
Dim faceROIcloned as OCVMat
faceROIcloned = faceROI.clone()
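Putting the pieces together, the whole loop would look roughly like this (assembled from the calls above; not tested):
B4X:
libOpenCvFaceCascade.detectMultiScale(frame_gray, faces, dScaleFactor, iMinimumNeighbours, iHaarScaleImage, minimumSize, maximumSize)
facesArray = faces.toArray
For iFaceIndex = 0 To facesArray.Length - 1
    Dim r As OCVRect = facesArray(iFaceIndex)
    ' View onto the original grayscale frame (no pixel copy)
    Dim faceROI As OCVMat = frame_gray.submat(r.y, r.y + r.height, r.x, r.x + r.width)
    ' Independent copy that can be modified without touching frame_gray
    Dim faceROIcloned As OCVMat = faceROI.clone
Next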
 

biometrics


Thanks Jordi.

As to my second question, where can I find the FastMatchTemplate function?
 

JordiCP

You won't believe it, but I don't know :D. On the one hand, the Android version of OpenCV does not include all the features available in C++. On the other, I have never used it (but would like to test it soon).

Anyway, you can try with
B4X:
Dim mImgProc as OCVImgProc
mImgProc.matchTemplate(.....)
...but I am not sure whether it only works when the template and the target region are the same size, in which case it would only be valid for still pictures.
If that does not fit your needs and you are working with the camera (so the object that you want to find will not always have the same dimensions), then you should look at the features2D and descriptor-derived classes. They implement matching methods based on invariant descriptors, which makes them suitable for 'real time' images.
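For orientation only, a very rough sketch of that descriptor-based route, written in the style of the 3.2.0 Java API. The OCV-prefixed class names below are not confirmed to exist in the wrapper; treat all of it as an assumption until tested.

B4X:
Dim detector As OCVFeatureDetector          ' assumed name (Java: FeatureDetector)
Dim extractor As OCVDescriptorExtractor     ' assumed name (Java: DescriptorExtractor)
Dim matcher As OCVDescriptorMatcher         ' assumed name (Java: DescriptorMatcher)
detector = detector.create(detector.ORB)    ' ORB is free and tolerant to scale/rotation
extractor = extractor.create(extractor.ORB)
matcher = matcher.create(matcher.BRUTEFORCE_HAMMING)

Dim kp1, kp2 As OCVMatOfKeyPoint
Dim desc1, desc2 As OCVMat
Dim matches As OCVMatOfDMatch
detector.detect(templateMat, kp1)           ' keypoints of the reference image
detector.detect(sceneMat, kp2)              ' keypoints of the camera frame
extractor.compute(templateMat, kp1, desc1)
extractor.compute(sceneMat, kp2, desc2)
matcher.match(desc1, desc2, matches)        ' good matches locate the object in the scene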


I am also willing to experiment with it, but won't have time until next week :(. Besides, I think that @padvou's question above could be solved using this.
 