Android Question Vacuum SQLite DB with SQLCipher

RB Smissaert

Well-Known Member
Licensed User
Longtime User
I have a large SQLite DB; the file size is 4.9 GB according to B4A File.Size (in Windows it shows as about 4.5 GB).
The file is in File.DirInternal.
I copied the file to external storage (after closing the DB connection) and then copied it to my Windows PC.
I then did a vacuum in DB Browser for SQLite; this took a bit over 3 minutes and reduced the file size to 3.8 GB.

I then did a vacuum in the B4A app (connecting with SQLCipher) and this ran smoothly with no errors.
It took 5 minutes.
The file size remains the same though, at 4.9 GB (B4A File.Size; 4.5 GB in Windows).

I closed and restarted the app, and also shut down and restarted the phone, but the sizes for that DB remain the same.

I am guessing that the file has indeed been vacuumed OK, but that for some reason Android (or B4A) sticks to the old file size.
Any suggestions as to what might be going on here?

RBS
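
For reference, the vacuum in the app is presumably run along these lines (a minimal sketch; SQL1 is a placeholder name for the open SQLCipher connection and the file name is made up):
B4X:
'VACUUM rebuilds the whole database file and cannot run inside a transaction,
'so make sure no transaction is open on this connection.
SQL1.ExecNonQuery("VACUUM")
Log("File size after vacuum: " & File.Size(File.DirInternal, "mydata.db"))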
 

BlueVision

Active Member
Licensed User
Longtime User
Agree. But I am sure you are able to write your own "personal defragmentation routine" within 30 minutes. Advantage: no need for a PC and all that data transferring.
Additionally, you can test the suspected problem with the VACUUM command depending on the "database environment".
 

RB Smissaert

OK, will think about that option, thanks.

RBS
 

RB Smissaert

Will leave this for now and just vacuum those large files on the PC.
As to writing my own vacuum code:
1. I guess it will take me a lot more than 30 minutes.
2. I guess it will be a lot slower than running the optimized C code Dr Hipp wrote.
3. It will likely run into the same problem.

RBS
 

Mahares

Expert
Licensed User
Longtime User
I have no way to test it on a large database, but SQLite has 3 auto_vacuum modes: NONE, FULL and INCREMENTAL. I do not know if this applies to your db, but since I did not see it mentioned in any of the posts, I am noting it here:
B4X:
SQL.ExecNonQuery("PRAGMA auto_vacuum = 2")   'To set incremental. Must be set before tables are created
B4X:
SQL.ExecQuery("PRAGMA incremental_vacuum")  'To execute the vacuum
B4X:
'Below displays the auto_vacuum mode of the database
    Dim MyVac As String = SQL.ExecQuerySingleResult("PRAGMA auto_vacuum")
    Log("auto vacuum: " & MyVac)  'displays the auto vacuum mode of your database
See the official SQLite documentation for detailed information.
 

RB Smissaert

Thanks, will look into that and report back.

RBS
 

BlueVision

I agree with everything you wrote, except point 3.
Somehow I have in mind that you noticed, when transferring the "deflated" database back to Android, that the file size increased again to its original value. So keeping it on the same system could shed some light on that.
Sure, it depends on the complexity of that database: presumably you have hundreds of thousands of entries in it, very "long" records, or large data stored in a single record (blob content or something like that).
The vacuum function implemented in SQLite is light years faster, no discussion. It is just the suspicion of a possible malfunction in it when handling really large amounts of data. So doing it record by record with the regular commands could check this on your large database in both modes, encrypted and unencrypted.
 

RB Smissaert

I had 2 tables in the encrypted file (file size 4.5 GB) with a column holding small blobs (serialized types). I dropped these tables and tried the vacuum again, but it failed again, this time with a silent crash. There are no rows holding large amounts of data; they are all simple strings and numbers. I also ran PRAGMA integrity_check and that showed OK.
I ran the vacuum with an unfiltered log, which was running during that silent crash, and attached that log.
I am not sure what to look for in this file, but maybe somebody knows.

RBS
 

Attachments

  • Vacuum_UnfilteredLog.zip
    61.6 KB · Views: 129

RB Smissaert

Slight progress by looking at the result of PRAGMA freelist_count.

From the book Using SQLite by Jay A. Kreibich:
"The freelist_count pragma returns a single integer indicating how many database pages are currently marked as free and available (contain no valid data). These pages can be recovered by vacuuming the database."

I dropped a table and ran that pragma and it returned 7.
I then ran the vacuum and it finished fine. I missed the message about the reduction in bytes, but when I ran that pragma again it returned 0, indicating that the vacuum had at least done something.
I then copied the database to the PC and ran that pragma in DB Browser, and it showed the same 0.
As the size of the DB was still 4.5 GB, rather than the 3.8 GB I got after running the vacuum in DB Browser a few days ago, I ran a vacuum there (although maybe, with that pragma returning 0, it was to be expected that it wouldn't work) and it returned with an error:

Execution finished with errors: Database or disc is full.

Not sure what to make of all this, but at least I have run one vacuum in the phone app that has done something, and without a crash.
The other thing is that maybe it makes sense to run PRAGMA freelist_count before doing a vacuum, and if it returns 0, skip the vacuum.

RBS
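
The freelist check suggested above could be sketched like this (SQL1 is a placeholder name for the open connection; ExecQuerySingleResult is the same method used elsewhere in this thread):
B4X:
'Only run VACUUM when there are actually free pages to recover.
Dim FreePages As Int = SQL1.ExecQuerySingleResult("PRAGMA freelist_count")
If FreePages > 0 Then
    SQL1.ExecNonQuery("VACUUM")
Else
    Log("freelist_count is 0 - skipping vacuum")
End If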
 

RB Smissaert

One more thing to mention:
Although I have 2 DB files (both from the same phone app DB) on the PC with different file sizes (3,857,912 KB for the one vacuumed in DB Browser and 4,746,160 KB for the one not vacuumed in DB Browser), when I run this SQL in DB Browser:

SELECT (CAST(page_count - freelist_count AS REAL)) * page_size / (1024 * 1024 * 1024) AS Gb
FROM pragma_page_count(), pragma_freelist_count(), pragma_page_size()

they both give very similar figures of about 3.67 GB.

In both files PRAGMA freelist_count gives zero.
The reason the larger file (as shown in Windows) gives zero as well is that this is the file copied from the phone after running the successful vacuum (which made this figure go from 7 to 0).

Sorry to pass on all this very confusing information, but I thought it might give somebody some more insight.

RBS
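
The same calculation can be run from the app with ExecQuerySingleResult (sketch; SQL1 is a placeholder name for the open connection):
B4X:
'Actual data size in GB = (page_count - freelist_count) * page_size
Dim qry As String = "SELECT (CAST(page_count - freelist_count AS REAL)) * page_size " & _
    "/ (1024 * 1024 * 1024) FROM pragma_page_count(), pragma_freelist_count(), pragma_page_size()"
Log("Data size (GB): " & SQL1.ExecQuerySingleResult(qry))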
 

RB Smissaert

I think I have solved this now; that is, I can compact the DB on the phone without errors, at good speed, and also on very large files (e.g. 5 GB).
This is with the help of a nice SQLCipher function called sqlcipher_export.
All explained here:

I checked the result of this by running it on a large DB (3.7 GB) after dropping a large table.
It reduced the DB size nicely. I then copied the DB to the PC and did a vacuum in DB Browser (for SQLCipher), and the file size remained exactly the same.
Using sqlcipher_export is actually a lot faster than doing a vacuum: in the above test it was 30 seconds versus 3 minutes.

I will do some further testing to make sure it is all fine, and will report back if I see any problems.

RBS
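
For anyone finding this thread later, sqlcipher_export follows the pattern from the SQLCipher documentation: attach a new, empty database and export everything into it, producing a compacted copy. A minimal sketch (the file name, key and SQL1 connection name are placeholders; check the SQLCipher docs for the exact form for your version):
B4X:
'Attach a new empty encrypted database and copy the schema and data into it.
'The exported copy is written compactly, like a vacuumed file.
SQL1.ExecNonQuery("ATTACH DATABASE '" & File.Combine(File.DirInternal, "compacted.db") & "' AS compacted KEY 'MySecretKey'")
SQL1.ExecQuerySingleResult("SELECT sqlcipher_export('compacted')")
SQL1.ExecNonQuery("DETACH DATABASE compacted")
'Close the connection, then replace the old file with compacted.db.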
 

RB Smissaert

I think the reason for this failing (without any error message) is that the app is being killed by the Android OS due to (perceived) inactivity of the app.
I tried running a non-stop service that keeps track of satellites, but that just doesn't run while the vacuum is running.
Running the SQL with Wait For seems to be no option, as that would put the query in a transaction, and VACUUM can't work inside a transaction.
I think I have a good alternative with the SQLCipher sqlcipher_export function, but I am still interested in finding a solution to make the vacuum work.
Note it works fine with a small DB.

RBS
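
One direction that might stop Android from killing the process mid-vacuum is to run it while a foreground service is active. A sketch of that idea in a B4A service module (untested against this particular crash; the icon, notification ID and Starter.SQL1 connection are placeholders):
B4X:
Sub Service_Start (StartingIntent As Intent)
    'Promote the service to the foreground so the OS is less likely
    'to kill the process during the long-running vacuum.
    Dim n As Notification
    n.Initialize
    n.Icon = "icon"
    n.SetInfo("Database maintenance", "Vacuum in progress", Main)
    Service.StartForeground(1, n)
    Starter.SQL1.ExecNonQuery("VACUUM")
    Service.StopForeground(1)
End Sub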
 