Experimenting with Raspberry Pi + Android Things + OpenCVforB4A

JordiCP

I have been experimenting with the following ingredients, to check if they could be successfully mixed

Raspberry Pi 3B running Android Things
A CSI-2 camera
OpenCV library
nearly two full 12-hour working days :eek:
and of course, B4A

(The long story)
The first problem came with the camera: in order to control it from Android on a RPi3, only the camera2 API can be used. This API is complex and very different from the deprecated (but still present on mobile devices) camera(1) API.

Second, Android Things is in a 'developer preview' state. This means that it 'works', but there are known bugs, performance issues, and things that simply can't be done yet. There is not much information apart from the examples. But I like it, mostly because it can be used with B4A.

Once I had a minimal example for the camera2 API debugged and working on my mobile phone, I tested it on my RPi: nothing worked!!! By debugging, I found the first limitation: TextureViews (needed for the preview) can't be used because of an issue with hardware acceleration (I suppose it is a limitation of the current preview state of Android Things).

Also, the camera2 API is not only complex, but there are (again because of the Android Things preview state) some limitations on its use: limits on the number of output surfaces, some lag, ...

Once I was able to overcome these limitations and had access to the preview data, I just mixed it with my OpenCVforB4A library and... surprise!! It worked like a charm!!!

In the end I got all the ingredients mixed in a basic example!! The whole thing is quite slow, as the above limitations and the workarounds to overcome them have 'cumulative' effects, but I expect most of them to be transitory, and I will also look for optimizations in the OpenCV part.
Also, the good news is that it has been running beside me for a couple of hours and it seems quite robust :)

The video (not very good quality) shows the RPi 3 with its camera pointing at the screen where the output will be shown. The camera preview data is sent to a B4A Sub, converted to an OCVMat using the OpenCV library, a circle is drawn on it, and it is converted back to a bitmap that is shown in a View. The output is displayed on the same screen through HDMI.

 

JordiCP

Not yet (I used the same 'original' version for it). Instead of integrating the camera2 API into the same lib, I set it up separately (mixing some internet examples in a dirty manner) to ease things.

Regarding the library, at this moment there are some known bugs, but most of them are only syntax typos. I'll come back to it and hope to release a new version soon :)