Java Question: Include .so library in B4A library

canalrun

Well-Known Member
Licensed User
Longtime User
Hello,
I am creating a B4A library to interface with Microsoft Cognitive Services (MCS) speech recognition.

MCS requires the inclusion of a .so library file, libandroid_platform.so.

I have the library file. How do I include it in the B4A library project?

I'm still using Eclipse, together with the Simple Library Compiler (SLC).

On the SLC tutorial webpage I see the following:
- If you need to include any additional files, such as .so files, in the jar then you can create a folder named 'additional'. Any file or folder inside this folder will be added to the jar file.

I wasn't 100% sure where to put the "additional" directory, so I added it to my Eclipse project directly beneath the project directory, at the same level as the src and lib directories.

If I open the .jar file with WinZip I see that the .so library has been added.
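For anyone without WinZip, the same check can be scripted. This is just an illustrative sketch (the class name and the entry name passed in are examples, not anything from the SLC itself):

```java
import java.util.Collections;
import java.util.zip.ZipFile;

// Sanity-check sketch: confirm that the SLC packed the contents of the
// 'additional' folder into the compiled jar, without opening it by hand.
public class JarContentsCheck {
    // Returns true if any entry in the jar ends with the given suffix,
    // e.g. "libandroid_platform.so". Returns false for unreadable jars.
    public static boolean containsEntry(String jarPath, String suffix) {
        try (ZipFile jar = new ZipFile(jarPath)) {
            return Collections.list(jar.entries()).stream()
                              .anyMatch(e -> e.getName().endsWith(suffix));
        } catch (java.io.IOException e) {
            return false; // missing or corrupt jar
        }
    }
}
```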

The problem is that when I run the B4A program that accesses the library, the methods I wrote that return strings and send debug messages via RaiseEvent work, so I'm fairly confident the library itself is loaded. But as soon as I call a method that invokes a Microsoft Cognitive Services routine, I get an "Unfortunately, your app has stopped…" crash message on my phone.

The error message in the B4A IDE:
B4X:
java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/b4a.example-2/base.apk"],nativeLibraryDirectories=[/vendor/lib, /system/lib]]] couldn't find "libandroid_platform.so"
    at java.lang.Runtime.loadLibrary(Runtime.java:366)
    at java.lang.System.loadLibrary(System.java:989)
    at com.microsoft.bing.speech.Conversation.<clinit>(Conversation.java:68)
    at com.microsoft.cognitiveservices.speechrecognition.SpeechRecognitionServiceFactory.createMicrophoneClient(SpeechRecognitionServiceFactory.java:747)
    at canalrun.com.crbvr.CRBingVR.StartVR(CRBingVR.java:63)
    at b4a.example.main._bnstart_click(main.java:414)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.lang.reflect.Method.invoke(Method.java:372)
    at anywheresoftware.b4a.BA.raiseEvent2(BA.java:187)
    at anywheresoftware.b4a.BA.raiseEvent2(BA.java:175)
    at anywheresoftware.b4a.BA.raiseEvent(BA.java:171)
    at anywheresoftware.b4a.objects.ViewWrapper$1.onClick(ViewWrapper.java:77)
    at android.view.View.performClick(View.java:4764)
    at android.view.View$PerformClick.run(View.java:19844)
    at android.os.Handler.handleCallback(Handler.java:739)
    at android.os.Handler.dispatchMessage(Handler.java:95)
    at android.os.Looper.loop(Looper.java:135)
    at android.app.ActivityThread.main(ActivityThread.java:5349)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.lang.reflect.Method.invoke(Method.java:372)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:908)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:703)

It seems like it's not finding the libandroid_platform.so file. How should I include this file into Eclipse so that Simple Library Compiler will include it in my jar file?
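For what it's worth, the trace shows the load happening inside a static initializer (com.microsoft.bing.speech.Conversation.&lt;clinit&gt; calls System.loadLibrary when the class is first touched). Here is a hedged sketch of how a wrapper could pre-check the native library and report a readable message instead of crashing on first use; the class and method names are made up, not part of CRBingVR or the MCS SDK:

```java
// Illustrative only: mirrors the System.loadLibrary call the SDK performs
// internally. On Android, loadLibrary("android_platform") searches the
// process's nativeLibraryDirectories for libandroid_platform.so.
public class NativeLibCheck {
    // Returns true if the native library can be loaded; logs and returns
    // false instead of letting the raw UnsatisfiedLinkError kill the app.
    public static boolean canLoad(String libName) {
        try {
            System.loadLibrary(libName);
            return true;
        } catch (UnsatisfiedLinkError e) {
            System.err.println("lib" + libName + ".so not found: " + e.getMessage());
            return false;
        }
    }
}
```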

Thanks,
Barry.
 

DonManfred

Expert
Licensed User
Longtime User

Attachments

  • BasisB4A.zip
    3.7 KB · Views: 812

canalrun

Well-Known Member
Licensed User
Longtime User
Thanks. That worked.

I was able to put together a library that uses Microsoft Cognitive Services for speech recognition.

It works nicely – a few things MCS allows is longer speech input (about two minutes of speech) and it returns partial results during speech.

After I check it out a little bit more, I will upload my B4A example and Eclipse library project source code.

Barry.
 

canalrun

Well-Known Member
Licensed User
Longtime User
Thanks to everyone who helped get this working – especially Rusty and DonManfred.

I've uploaded an example B4A app along with its source code and Eclipse source code for the CRBingVR B4A library I developed.
https://www.dropbox.com/s/s9zuhayjhhgdw67/CRBingVR 3.zip?dl=0

I've also included the CRBingVR library .jar and .xml library files.

I placed everything in a zip archive uploaded to Dropbox - the zip file is likely too large to upload directly to the forum.

In order to compile and run the included B4A example, you must obtain a free (or paid, if you prefer) Bing Speech key from Microsoft Cognitive Services (MCS). The instructions for doing this are in the B4A file comments.

To build the CRBingVR B4A library I used the Simple Library Compiler (SLC) available elsewhere on these forums.

I used Eclipse mainly for its IDE and project structure; the library itself was built with the SLC.


upload_2016-12-21_15-42-14.png


When I run the SLC, a window similar to the one above opens. The field labeled A points to the Eclipse project structure for the CRBingVR library on my computer. Field B points to where the compiled .jar and .xml files will be placed. I don't know what C does or where the string in this field came from :D.

Unless you want to make changes and compile the library, you can just:
  1. Use the link above to download the zip file and extract the B4A code into a B4A project directory.
  2. Sign up for a Bing Speech Key on the MCS website. Instructions are in the B4A file comments.
  3. Copy the CRBingVR .jar and .xml files to your B4A additional libraries folder.
  4. Insert your Bing Speech key into the code as described. Compile the B4A app.
I'm still using version 4.30 of B4A so you might see a warning if you open the B4A example with a later version. Everything should work, however.

Here's a screenshot of the executing demo B4A app.
  • Click the Start button and wait for the green "recording" status message. The microphone captures audio, sends it to Bing Speech, and the recognized text is displayed.
  • Both interim partial results (prefixed with PR: ) and the final result (prefixed with FR: ) are shown. The microphone stops recording approximately 2 seconds after it detects silence.
  • Click the From File button to recognize speech contained in an embedded WAV file.

upload_2016-12-21_16-11-12.png



Barry.
 

touchsquid

Active Member
Licensed User
Longtime User
Your library works perfectly in your B4A example. But when I try to use it in my own app I get this error when it starts listening. Any ideas what I could be doing wrong?


main_bing_listen (java line: 375)
java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/tangleberry.net.saga-1/base.apk"],nativeLibraryDirectories=[/data/app/tangleberry.net.saga-1/lib/arm, /vendor/lib, /system/lib]]] couldn't find "libandroid_platform.so"
at java.lang.Runtime.loadLibrary(Runtime.java:366)
at java.lang.System.loadLibrary(System.java:989)
at com.microsoft.bing.speech.Conversation.<clinit>(Conversation.java:68)
at com.microsoft.cognitiveservices.speechrecognition.SpeechRecognitionServiceFactory.createMicrophoneClient(SpeechRecognitionServiceFactory.java:747)
at canalrun.com.crbvr.CRBingVR.StartVR(CRBingVR.java:75)
at tangleberry.net.saga.main._bing_listen(main.java:375)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at anywheresoftware.b4a.BA.raiseEvent2(BA.java:169)
at anywheresoftware.b4a.keywords.Common$5.run(Common.java:996)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:145)
at android.app.ActivityThread.main(ActivityThread.java:5942)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1399)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1194)
 

touchsquid

Active Member
Licensed User
Longtime User
I got it working. If I remove the PocketSphinx library, it works. Looks like I can't use both libraries at once.
 

canalrun

Well-Known Member
Licensed User
Longtime User
Your library works perfectly in your B4A example. But when I try to use it in my own app I get this error when it starts listening. Any ideas what I could be doing wrong?

main_bing_listen (java line: 375)
java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/tangleberry.net.saga-1/base.apk"],nativeLibraryDirectories=[/data/app/tangleberry.net.saga-1/lib/arm, /vendor/lib, /system/lib]]] couldn't find "libandroid_platform.so"
at java.lang.Runtime.loadLibrary(Runtime.java:366)

Edited:
I saw your reply after posting the reply below. I'm glad you got it working.

Hello,
I'll try my best, but I am definitely no expert library developer.

To me it looks like the error is that libandroid_platform.so cannot be found at run time.

I see I have this library along with the provided XML and jar files in my Additional Libraries folder.

I am guessing that if you extract this .so library from the downloaded "CRBingVR 3.zip" file and place it in your Additional Libraries folder, this will solve the problem.

If this works I will have to add the requirement to my instructions above.

Barry.
 

touchsquid

Active Member
Licensed User
Longtime User
Yesterday the library worked perfectly, today it fails with a login failure. I used a key supplied by Microsoft. Tried getting a new key but it made no difference.

Microsoft has changed the API endpoint from api.projectoxford.ai to
https://api.cognitive.microsoft.com/sts/v1.0/

I suspect that is the reason for the failure. Perhaps it would be good to be able to set the endpoint in the initialize function.
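A minimal sketch of what that suggestion could look like (the real CRBingVR Initialize signature may differ; the class and method names here are hypothetical):

```java
// Hypothetical configuration holder: lets the caller override the token
// endpoint instead of hard-wiring the SDK's built-in URL, falling back to
// the current Microsoft endpoint when none is supplied.
public class RecognizerConfig {
    public static final String DEFAULT_ENDPOINT =
            "https://api.cognitive.microsoft.com/sts/v1.0/";
    private final String endpoint;

    public RecognizerConfig(String endpointOrNull) {
        this.endpoint = (endpointOrNull == null || endpointOrNull.isEmpty())
                ? DEFAULT_ENDPOINT : endpointOrNull;
    }

    public String endpoint() { return endpoint; }
}
```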

 

canalrun

Well-Known Member
Licensed User
Longtime User
I just noticed it wasn't working today for me either – that's the reason.

I have a feeling the endpoint may have to be changed in the compiled library provided by Microsoft. It may not be changeable in my code. I will have to investigate.

Barry.
 

touchsquid

Active Member
Licensed User
Longtime User
It started working again. The server must have been down. But I guess the endpoint should still change.
 

canalrun

Well-Known Member
Licensed User
Longtime User
It's too bad that the Microsoft Cognitive Services (Bing) speech recognition does not support as many languages as the free Google solution.

I have not found a way to send raw audio microphone data or data from a wave file to the free Google speech recognition API.

The Google interface works using their own microphone audio data capture routine.

Google does have a cloud service, but it costs money. I have seen that it supports raw data, but have never used it.

I asked about the endpoints on the Microsoft Cognitive Services Stack Overflow support site. The endpoints are embedded in the speechSDK.jar code and were said to be up to date.

I'm sorry I forgot to mention a few details in my original source-code post. The speechSDK.jar is one. Another is that the path to my android.jar embedded in the Eclipse project (and probably other embedded paths) needs to be changed by anyone who wants to recompile the included library.

Barry.
 

Rusty

Well-Known Member
Licensed User
Longtime User
Hi Barry,
A couple of years ago, I found that Google will accept FLAC-encoded files… Steve posted this to allow FLAC file creation.
I haven't tried it.
Rusty
 

canalrun

Well-Known Member
Licensed User
Longtime User
Try it again in a couple days. It stopped working for a couple days once before, but then started up again.

Microsoft has been transitioning between calling this library Bing VR and Microsoft Cognitive Services since this library was written.

I have not figured out how to sign up for Microsoft Cognitive Services (MCS).

The MCS library allows you to send audio buffers for translation. This makes it easier to implement Continuous Voice Recognition (real-time recognition of an ongoing conversation).

The new paid Google VR service will supposedly also accept audio buffers. The last time I looked, it was still in its early stages and did not support certain things.

I would love to continue improving a library for Continuous Voice Recognition. Microsoft Cognitive Services or the paid Google service (which also has a limited free tier) may be the best way to go, since they accept audio buffers as input for translation.

This is actually a very tricky problem: the library must record audio, detect the start and end of speech, limit the length of continuous audio buffers, and work in possibly noisy environments.
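As one illustration of the endpoint-detection piece, here is a minimal energy-based silence detector. The thresholds, frame sizes, and class name are made-up values for the sketch, not anything taken from the CRBingVR library:

```java
// Minimal energy-based endpoint detector: speech is considered finished
// after `silenceFrames` consecutive low-energy frames (e.g. ~2 s worth of
// frames at the capture rate in use).
public class SilenceDetector {
    private final double threshold;   // RMS level below which a frame counts as silence
    private final int silenceFrames;  // how many quiet frames in a row end the utterance
    private int quietRun = 0;

    public SilenceDetector(double threshold, int silenceFrames) {
        this.threshold = threshold;
        this.silenceFrames = silenceFrames;
    }

    // Feed one frame of 16-bit PCM samples; returns true once end-of-speech
    // is detected. A loud frame resets the silence counter.
    public boolean feed(short[] frame) {
        double sum = 0;
        for (short s : frame) sum += (double) s * s;
        double rms = Math.sqrt(sum / frame.length);
        quietRun = (rms < threshold) ? quietRun + 1 : 0;
        return quietRun >= silenceFrames;
    }
}
```

A real implementation would also need an adaptive threshold for noisy environments, which is exactly the hard part described above.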

I hope to continue development. Even better would be a "group project" as someone mentioned in a different thread. A Continuous Voice Recognition library could be combined with a Continuous Keyword and Command detection project.

Barry.
 