Android Question Improve B4A speed using the video card?

yo3ggx

Active Member
Licensed User
Longtime User
Hello.
I was not able to find any post in the forum related to this question.
Can the B4X IDE benefit from a good video card to speed up compilation?
Currently I'm using an NVIDIA Quadro P600, a Ryzen 7 2700, and 96GB of RAM, but it takes ~35s for a full compilation of my B4A application (~18,000 lines of code).

Do you have any personal experience with better video cards? Any recommendations?
I intend to buy the new AMD Radeon Pro W7500 (or even the W7600) video card for video editing and some AI work, but I wonder whether I can expect some improvement in B4X IDE speed too.

Thank you.

Andrew (Digitwell)

Well-Known Member
Licensed User
Longtime User
As far as I know, the compilers (mostly Java) do not use the GPU. You can check this by opening Task Manager and watching the GPU graphs while performing a compile.

For a comparison of your GPU against others, go to https://www.techpowerup.com/gpu-specs/quadro-p600.c2933 and look at the relative performance box.

Is the 35 secs for the first compile, which is always slower, or for subsequent compiles?

Compilation will depend on CPU, memory, and hard disk. Again, Task Manager will help pinpoint any bottlenecks.

You can also use VisualVM to monitor the java engine.
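If you would rather log the numbers than eyeball Task Manager graphs, a small JVM-side probe also works. This is a minimal sketch (not part of B4A itself): it uses the `com.sun.management` extension of the standard OS bean, and `getCpuLoad()` requires Java 14 or newer. Run it in a second terminal while a compile is going.

```java
import java.lang.management.ManagementFactory;

public class CpuProbe {
    public static void main(String[] args) throws InterruptedException {
        // com.sun.management extends the standard bean with CPU-load getters.
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Logical processors: "
            + Runtime.getRuntime().availableProcessors());
        // Sample whole-system CPU load once a second; start a compile meanwhile.
        for (int i = 0; i < 10; i++) {
            double load = os.getCpuLoad(); // 0.0..1.0; may be negative at first
            if (load >= 0) {
                System.out.printf("system CPU load: %3.0f%%%n", load * 100);
            }
            Thread.sleep(1000);
        }
    }
}
```

If the logged load never approaches 100% while the compile runs, the CPU is probably not the bottleneck.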

yo3ggx

Active Member
Licensed User
Longtime User
Is the 35 secs for the first compile, which is always slower, or for subsequent compiles?
In Release mode it takes ~15s for subsequent compilations.
Compilation will depend on CPU, memory, and hard disk. Again, Task Manager will help pinpoint any bottlenecks.
During compilation, the CPU peaks at ~80% (only for very short periods; ~40% for the rest), memory peaks at ~30% (split across several processes), while the GPU stays at 2% and the SSD at 1% the whole time.
If the CPU is the bottleneck, I wonder why it does not go to 100%.

What is strange is that during compilation (only), a process named NVDisplay.Container.exe peaks at 50% of the 96GB of RAM.
This process is normally related to the NVIDIA tray icon. This does not happen with other applications.

During compilation, the most time consuming phases are:
Compiling resources (2.43s)
Linking resources (0.86s)
Compiling generated Java code. (5.30s)
Dex code (6.40s)
Dex merge (5.50s)
Copying libraries resources (1.34s)
Signing package file (private key). (1.42s)
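For what it's worth, ~40% average utilization across 16 logical processors is roughly what Amdahl's law predicts if about 90% of the build pipeline parallelizes. The 90% is an assumed figure for illustration, not a measurement; single-threaded stretches such as signing and resource linking cap the rest.

```java
public class Amdahl {
    // Amdahl's law: with a fraction p of the work parallelizable across
    // n cores, overall speedup = 1 / ((1 - p) + p / n).
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        int n = 16;       // logical processors on a Ryzen 7 2700
        double p = 0.90;  // assumed parallel fraction of the build
        double s = speedup(p, n);   // 6.4x over one core
        double utilization = s / n; // average core usage over the whole run
        System.out.printf("speedup: %.2fx, avg utilization: %.0f%%%n",
            s, utilization * 100);  // prints: speedup: 6.40x, avg utilization: 40%
    }
}
```

In other words, a CPU that averages well below 100% during a compile is consistent with serial phases in the pipeline, not proof of an external bottleneck.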

Alessandro71

Well-Known Member
Licensed User
Longtime User
I don't think the GPU can have any impact on compilation time, but as a benchmark, here is the compile log for my 2,575,643-line app on my old i5-3350P with 16GB RAM, a GTX 650, and an SSD. Not much different from your 35 sec:

B4X:
B4A Version: 12.50
Parsing code.    (2.11s)
    Java Version: 14
Building folders structure.    (0.03s)
Running custom action.    (0.05s)
Compiling code.    (1.25s)
    
ObfuscatorMap.txt file created in Objects folder.
Compiling layouts code.    (0.06s)
Organizing libraries.    (0.64s)
    (AndroidX SDK)
Compiling resources    (1.76s)
Linking resources    (0.75s)
Compiling generated Java code.    (13.69s)
Finding libraries that need to be dexed.    (0.03s)
Dex code    (16.45s)
Dex merge    (6.05s)
Copying libraries resources    (1.54s)
ZipAlign file.    (0.06s)
Signing package file (private key).    (0.71s)
Running custom action.    (0.06s)
Installing file to device.    (0.13s)
    Installing with B4A-Bridge.
Completed successfully.

BlueVision

Active Member
Licensed User
Longtime User
...but I wonder if I can expect some improvement in B4X IDE speed too.
I'm afraid the graphics card's processing power is simply ignored when compiling a program, and I suspect it would play only a very small part in the overall result anyway. Graphics processors specialise in rendering image data quickly (they are narrow specialists, so to speak), not in the general-purpose arithmetic that a "normal" CPU with its maths unit performs.

Offloading work to the GPU would also undermine the whole pipelining concept of today's multi-core processors, assuming the compiler uses multiple cores at all and the work is not ultimately done by a single core; I have not checked myself whether all cores are busy during a compiler run. Either way, if someone rings the doorbell, disturbs this well-coordinated team of specialists at work, and wants to bring in calculations from outside, the whole effort is interrupted, and the few milliseconds you might gain are immediately lost again in synchronising the overall result.

A baker should bake bread, a shoemaker should take care of shoes. That works better.

Ultimately, the question remains why a compiler run takes so "long". In my opinion, these times are completely normal. A compiler does not process code sequentially, line by line, like an interpreter. Of course it has to start somewhere, but as it works through the program the workload grows with each new subroutine: libraries must be integrated, interfaces for parameter passing must be created, and everything has to be checked again and again. It is a very complex process, and I'm always amazed at how quickly modern compilers produce working code from the simple "shopping list" that is the program's source.

Seen this way, you won't dramatically shorten a compiler run with a graphics card. Instead, help the compiler with a well-structured program and follow all the hints from the development environment; there should be no warnings left. One example is the correct typing of variables: you can assign a numeric value (even a floating-point number) to a string variable, but that means extra work for the compiler when those strings suddenly have to be used in calculations. (It won't kill it, though.)

yo3ggx

Active Member
Licensed User
Longtime User
Graphics processors specialise in rendering image data quickly (they are narrow specialists, so to speak), not in the general-purpose arithmetic that a "normal" CPU with its maths unit performs.
This is not true. Since the early 2010s, GPUs have also been used to accelerate computations over massive amounts of data. Remember that GPUs are used for crypto mining and AI precisely because of that, not to render images.

Something must be the bottleneck for the long compile time. The question is: what is it?
I assume the IDE does not insert artificial delays just to slow down compilation, no matter how powerful the hardware is.

Andrew (Digitwell)

Well-Known Member
Licensed User
Longtime User
Personally, I think this compilation speed is fine, but obviously that is totally subjective.

Compiling the Java code and dexing seem to take most of the time, and both are performed by external applications. Perhaps you can try different JDKs to see if that helps. I think the standard dexer from the Android SDK is used.
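Before switching the JDK path in B4A (Tools > Configure Paths), it can help to confirm exactly which build each candidate JDK is. A trivial sketch: run it with each JDK's own `java` binary and compare the output (the path in the comment is only a hypothetical example).

```java
public class JdkInfo {
    // Run this with each candidate JDK's java binary, e.g.
    //   C:\java\jdk-19\bin\java JdkInfo   (example path, adjust to yours)
    // to see exactly what you would be switching B4A's javac path to.
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}
```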

BlueVision

Active Member
Licensed User
Longtime User
Something must be the bottleneck for the long compile time. The question is: what is it?
You misunderstood.
Yes, of course today's graphics processors can also carry out complex mathematical work. But the software itself (in this case the compiler, usually integrated into a modern IDE) must be designed for it. I don't know of any computer whose software decides on its own to draw on the graphics processor's capacity. Most programs already find it hard enough to adapt to a 32-bit versus a 64-bit operating system; there are separate versions for that. If, on top of that, a "pipeline" to the graphics processor is to be built so that a second processor can support the main processor's computing work, the program must have been written explicitly for that purpose.

Filippo

Expert
Licensed User
Longtime User
B4X is not a graphics application, so a good graphics card is totally unnecessary. What you really need is a PC with a fast CPU. Please do not confuse this with many cores; many cores do not make a PC fast.

yo3ggx

Active Member
Licensed User
Longtime User
B4X is not a graphics application, so a good graphics card is totally unnecessary. What you really need is a PC with a fast CPU. Please do not confuse this with many cores; many cores do not make a PC fast.
My Ryzen 7 CPU with 96GB of RAM does not seem to be used at 100% during compilation (more like ~40%), so... how can performance be improved then?

Filippo

Expert
Licensed User
Longtime User
My Ryzen 7 CPU with 96GB of RAM does not seem to be used at 100% during compilation (more like ~40%), so... how can performance be improved then?
How many GHz does your Ryzen CPU run at?

I have a Dell with 4 cores, but they run at about 4 GHz, and compilation is very fast.

yo3ggx

Active Member
Licensed User
Longtime User
How many GHz does your Ryzen CPU run at?
8 cores, 3200MHz.
Is your CPU maxed out at 100% during compilation?
This is how it looks across all 16 logical processors. Some processors are saturated only for very short periods of time.


Andrew (Digitwell)

Well-Known Member
Licensed User
Longtime User
Did you see my comments? Most of the time is spent in the Java compiler and the dexer. Which version of Java are you using?

yo3ggx

Active Member
Licensed User
Longtime User
See here:
As you can see from the picture, most of the time all 16 logical processors are used almost identically.
I may upgrade the CPU from the Ryzen 7 2700 to a Ryzen 9 5950X, the maximum supported by the motherboard. I will then have 16 cores (32 threads) instead of 8.

Filippo

Expert
Licensed User
Longtime User
8 cores, 3200MHz.
The number of cores does not matter. The clock speed of the individual cores is what matters because, as far as I know, B4X only uses one core for compilation.

Only certain applications (graphics applications, for example) can use several cores at the same time.
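As a counterpoint, nothing restricts multi-core use to graphics software: any JVM process can fan ordinary arithmetic out across every core with a single call. A minimal sketch, independent of B4A's actual build tools:

```java
import java.util.stream.LongStream;

public class MultiCore {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors visible to the JVM: " + cores);
        // .parallel() splits the range across the common ForkJoinPool,
        // which defaults to (cores - 1) workers plus the calling thread.
        long sum = LongStream.rangeClosed(1, 1_000_000_000L).parallel().sum();
        System.out.println("sum = " + sum); // 500000000500000000
    }
}
```

Watching Task Manager while this runs shows every logical processor busy, which matches the screenshot discussed above: whether a given build phase is single-threaded depends on how that particular tool was written, not on the JVM.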

yo3ggx

Active Member
Licensed User
Longtime User
The number of cores does not matter. The clock speed of the individual cores is what matters because, as far as I know, B4X only uses one core for compilation.
This is not true. See the picture above: all 16 logical CPUs are used during compilation.