B4J Question Query Cancel

Harris

Expert
Licensed User
Longtime User
Is there a way to cancel a long-running query?

Example:
res = SQL.ExecQuery("Select * from loaddet where mastid >= " & SActMap.GetKeyAt(i))
Then stop this long-running query later, if desired.
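(For reference, a minimal sketch of the asynchronous form of the same call, using the standard jSQL ExecQueryAsync / Wait For pattern. It does not cancel anything either, but the app stays responsive while the query runs, and the parameterized query avoids building SQL by concatenation:)

Sub RunQuery (i As Int)
    Dim SenderFilter As Object = SQL.ExecQueryAsync("SQL", _
        "SELECT * FROM loaddet WHERE mastid >= ?", Array As String(SActMap.GetKeyAt(i)))
    Wait For (SenderFilter) SQL_QueryComplete (Success As Boolean, rs As ResultSet)
    If Success Then
        Do While rs.NextRow
            ' process each row here...
        Loop
        rs.Close
    Else
        Log(LastException)
    End If
End Sub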

Thanks
 

Harris

Expert
Licensed User
Longtime User
You cannot cancel the query but you can ignore the result.
Ok, thanks... Sounds useful. I usually pause and resume the page (ABM) while executing, to give a visual indication that something is happening.

However, it is hard to ignore unwanted query results that take 2 hours to finally complete... Hence the desire to stop the query while it is executing.

I (may) simply close or refresh the browser page to stop it on the client...
God knows what actually happens to the running query on the server at that point - with a dropped connection...
 
Upvote 0

Harris

Expert
Licensed User
Longtime User
No, it is a simple query. It is just what it has to do!

It looks at each record in a table and gets the "filename" of the associated CSV (actually something quite similar) that resides on disk.
It must then parse each line in that file to get results, based on what was wanted (speed > 70 kph, for example).
There may be many files (1000s) for each query result row.

I warned my customer to use smaller date ranges (small query result) - but oh no, he wouldn't listen to sound advice.
I guess I will have to force a max date range to prevent long-running queries in the first place... keep them from shooting their foot off...
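(A minimal sketch of such a guard - fromDate/toDate as ticks and the MaxDays limit are assumptions, not existing code:)

Private Const MaxDays As Int = 31                  ' assumed maximum span allowed per query
If (toDate - fromDate) / DateTime.TicksPerDay > MaxDays Then
    Log("Date range too large - maximum is " & MaxDays & " days")
    Return                                         ' refuse the request before any parsing starts
End If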

Thanks
 
Upvote 0

udg

Expert
Licensed User
Longtime User
It looks at each record in a table and gets the "filename" of the associated CSV (actually something quite similar) that resides on disk.
Surely you had your reasons to code it this way, but why didn't they export those files to a DB and run subsequent queries on it?
 
Upvote 0

Harris

Expert
Licensed User
Longtime User
Surely you had your reasons to code it this way, but why didn't they export those files to a DB and run subsequent queries on it?
I knew that question would be asked... So here is your (rather complex) answer.

This data has no business in the DB. It is not well suited, since the structure (fields) may grow and contract as required...

The data is sourced from the vehicles (Android tablets) in which the app runs. Every second of each work shift (typically 14 - 16 hours) is captured.
For each second, up to 60 parameters are recorded: vehicle (ECM) data, driver, GPS, datetime, what was had for lunch (not really)...
I tried stuffing this into SQLite tables - but it was way too much (overkill - size - management - RDC transfer, etc.).
A simple text file works well, where I can freely define (or re-define) its structure, which IS stored for parsing reference (one table record per file / hour).
These text files are created every hour - on the fly, 24 hours per day - so if a disaster occurs, ALL is not lost.
These text files are zipped to get the air out and end up quite small - for OTA transfer to the server...

On the server, each file is stored in a folder named for where it came from - the vehicle unit number.
The table (1 record per hour) knows what file to fetch when needed...
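(A rough sketch of that lookup - the table and field names here are illustrative, not the actual schema:)

' Hypothetical schema: one filelog record per file/hour, one folder per vehicle unit
Dim rs As ResultSet = SQL.ExecQuery2( _
    "SELECT filename, structure FROM filelog WHERE unitno = ? AND hourstamp = ?", _
    Array As String(unitNo, hourStamp))
If rs.NextRow Then
    Dim folder As String = File.Combine(DataRoot, unitNo)     ' folder per vehicle unit number
    Dim lines As List = File.ReadList(folder, rs.GetString("filename"))   ' unzip step omitted
    ' parse each line using the stored "structure" definition...
End If
rs.Close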

This data is used on occasion - like when some incident was reported / occurred - and we need to view the "fine" details.
This bloat can be put on a backup drive for storage and future reference.

At this point, the DB for this project is 93 MB in size - after running for almost 2 years.
The text files in question, however, total 56 GB in size...

So, when is some data TOO MUCH data? Apparently - according to my client and others - NEVER! Just get a bigger disk.

So as they say, bigger is better...

Now, back to my dummies lesson I was working on...
 
Upvote 0

udg

Expert
Licensed User
Longtime User
what was had for lunch
salmon, salmon, salmon.. eheh
So as they say, bigger is better...
better refrain from any comment..eh eh

On the server, each file is stored in a folder
My original question was about the data storage on the server. I mean, once the small data files arrive at the server, couldn't they be parsed and stored in MySQL or a similar DBMS? Millions of records pose no problem to that kind of DBMS.
Again, I'm sure you evaluated all the approaches before choosing the one you're using, so it was just curiosity to understand what led you to that solution.

udg
 
Upvote 0

Harris

Expert
Licensed User
Longtime User
couldn't they be parsed and stored in a MySql
Yes, of course. But then I (they) have a 56 GB size database to maintain (backup, etc.) - and growing fast daily.

The key to this explanation was the statement - "This data is used on occasion". And only a slight portion thereof...
I suppose I could store each line of the text file in a single record of a table - but why? It still needs to be parsed.

Then, how do you store the BIG table of a DB on a different drive (one that has capacity) from the rest of the tables?
It's the same reason I don't store image (binary) files in a table as BLOBs. It only serves to bloat the table and potentially corrupt it.
Experience in devastation is a great teacher. I am not one who tends to forget the past (bad things) only to repeat it...
 
  • Like
Reactions: udg
Upvote 0

OliverA

Expert
Licensed User
Longtime User
Top of my head:

It looks like the query itself is quick, since all you are getting is a list of CSV files to parse. It is the parsing that takes long. I'm guessing this parsing takes place in a loop. Before processing each CSV file, you could check a global variable and see if it is set to True (for example). If it is False, stop processing. Create another means to access this variable and toggle it to False when you want to stop. Then, when your loop sees the False (before processing the next CSV), exit the loop and stop processing.
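(A minimal sketch of that flag approach - sub and variable names are made up; in an ABM/server app the flag would need to live in the session class rather than be truly global:)

Private KeepRunning As Boolean = True

Sub ProcessFiles (files As List)
    For Each f As String In files
        If Not(KeepRunning) Then Exit        ' user asked to stop - bail out before the next file
        ParseCsvFile(f)                      ' hypothetical parsing sub
        Sleep(0)                             ' yield so the cancel event has a chance to fire
    Next
End Sub

Sub btnCancel_Click                          ' wired to a "Cancel" button / ABM event
    KeepRunning = False
End Sub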
 
Upvote 0