This is what I have. The error only seems to occur while transferring many files (25 GB in 10 concurrent threads in FileZilla):
java.lang.OutOfMemoryError: Java heap space
at anywheresoftware.b4a.randomaccessfile.AsyncStreams$AIN.run(AsyncStreams.java:225)
at java.lang.Thread.run(Thread.java:748)
java.lang.OutOfMemoryError: Java heap space
at anywheresoftware.b4a.randomaccessfile.AsyncStreams$AIN.run(AsyncStreams.java:225)
at java.lang.Thread.run(Thread.java:748)
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
client: PWD
The FTP server is not using prefix mode. If I look at the source code, the error is raised on the allocation of "data = new byte[]": nothing strange there ...
UPDATE: I did some more profiling: the problem seems to occur when the FTP server has to serve about 10 concurrent connections for large files (3-5 GB).
It seems as if, when a client is STORing (and thus the server reads from the AsyncStreams input stream), Java keeps the entire files in memory, which for 10 connections with 3-5 GB each is just too much to handle ... Is this correct? And can this be solved?
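As a side note, this is how I captured the heap state while profiling (standard HotSpot flags; "ftpserver.jar" is just a placeholder for your own jar):

java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar ftpserver.jar

Opening the resulting .hprof in a tool such as Eclipse MAT shows what is keeping all the byte arrays reachable.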
UPDATE2: I think I found the reason: AsyncStreams reads from the input stream and then copies the buffer into a new byte array data (which is then passed to the NewData event). Each of these data arrays remains in memory until the end of the file.
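In simplified form (this is my own minimal model of the pattern, not the literal library source; the Queue stands in for B4A's internal message queue), the reader loop effectively does this:

import java.io.IOException;
import java.io.InputStream;
import java.util.Queue;

// My own sketch of the pattern I see in AsyncStreams: a fresh byte[] per
// read, pushed onto an unbounded queue that the main thread drains later
// when the NewData event actually fires.
class ReadLoopModel {
    static void readLoop(InputStream in, Queue<byte[]> eventQueue) throws IOException {
        byte[] buffer = new byte[8192];
        int count;
        while ((count = in.read(buffer)) > 0) {
            byte[] data = new byte[count];            // one new allocation per read
            System.arraycopy(buffer, 0, data, 0, count);
            eventQueue.offer(data);                   // reachable until the event is handled
        }
    }
}

If the main thread drains the queue more slowly than the sockets deliver data, every queued chunk of every in-flight file stays reachable at once, which would match the heap filling up at 10 x 3-5 GB.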
I created my own version of AsyncStreams in which I pass the buffer directly to the NewData event (with an extra parameter indicating the length): this does not work, probably because raiseEventFromDifferentThread is asynchronous? If this is the case, how would I be able to handle this? This is the relevant part of the read loop with my change:
while (working) {
    if (!prefix) {
        int count = in.read(buffer);
        if (count == 0)
            continue;
        if (count < 0) {
            closeUnexpected();
            break;
        }
        if (!working)
            break;
        // my change: pass the shared buffer directly, plus its length,
        // instead of copying it into a fresh byte[] for every read
        ba.raiseEventFromDifferentThread(AsyncStreams.this, null, AsyncStreams.this, ev, true,
                new Object[] {buffer, count});
    }
}
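If passing the shared buffer directly indeed races with the next read (because raiseEventFromDifferentThread only queues the event and returns immediately), one way out would be to make the reader wait until the consumer has finished with the buffer before overwriting it. A minimal sketch with a Semaphore (class and method names and the buffer size are mine; in the real AsyncStreams the release would have to happen at the end of the NewData handling):

import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Semaphore;

// Blocking hand-off: the reader may only overwrite the shared buffer once
// the consumer has signalled that it is done with the previous chunk.
class BlockingHandoff {
    private final Semaphore bufferFree = new Semaphore(1); // 1 permit = buffer is free
    private final byte[] buffer = new byte[8192];

    // Runs on the reading thread (the equivalent of AIN.run()).
    void readLoop(InputStream in) throws IOException, InterruptedException {
        while (true) {
            bufferFree.acquire();              // wait until the consumer is done
            int count = in.read(buffer);       // only now is it safe to overwrite
            if (count < 0)
                break;
            if (count == 0) {
                bufferFree.release();          // nothing dispatched, give the permit back
                continue;
            }
            dispatch(buffer, count);           // e.g. raiseEventFromDifferentThread(...)
        }
    }

    // The consumer (the NewData handler) must call this once it has fully
    // processed or copied the chunk.
    void chunkConsumed() {
        bufferFree.release();
    }

    void dispatch(byte[] data, int count) {
        // placeholder: queue the event to the main thread here
    }
}

This bounds memory to one buffer per connection, at the cost of throttling the socket reader to the pace of the main thread; rotating through a small pool of buffers (acquire from the pool before reading, release in the handler) would soften that.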