PG-244: Move pending queries' text to new bucket after bucket expiration.
Added code to move all pending queries' text from an expired bucket's query
buffer to the next, active query buffer.
The previous implementation was not very efficient; as soon as a query was
processed and a bucket expired, it did the following:
1. Allocate memory to save the contents of the next query buffer.
2. Clear the next query buffer.
3. Iterate over pgss_query_hash; for each entry:
- If the entry's bucket id is equal to the next bucket then:
-- If the query for this entry has finished or ended in error, then
remove it from the hash table.
-- Else, if the query is not yet finished, copy the query from the
backup query buffer to the new query buffer.
Now, this copy was really expensive because it was implemented using
read_query() / SaveQueryText(), and read_query() scans the whole query
buffer looking for a given query ID. Since we did this extra lookup
loop for each pending query, we ended up with an O(n^2) algorithm
(see the sketch after this list).
4. Release the backup query buffer.
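For illustration, here is a minimal, self-contained model of the old copy
path. The [queryid][length][text] record layout and all helper names are
assumptions made for this sketch, not the extension's exact API:

#include <stdint.h>
#include <string.h>

/* Scan the whole buffer for one queryid: O(n) per call. */
static const unsigned char *
read_query(const unsigned char *buf, size_t used, uint64_t queryid,
           uint64_t *len_out)
{
    size_t pos = 0;

    while (pos + 16 <= used)
    {
        uint64_t id;
        uint64_t len;

        memcpy(&id, buf + pos, sizeof(id));
        memcpy(&len, buf + pos + 8, sizeof(len));
        if (id == queryid)
        {
            *len_out = len;
            return buf + pos + 16;      /* start of the query text */
        }
        pos += 16 + (size_t) len;       /* jump over this record */
    }
    return NULL;                        /* queryid not in this buffer */
}

/* Append one [queryid][length][text] record to the destination buffer. */
static void
save_query(unsigned char *dst, size_t *dst_used, uint64_t queryid,
           const unsigned char *text, uint64_t len)
{
    memcpy(dst + *dst_used, &queryid, sizeof(queryid));
    memcpy(dst + *dst_used + 8, &len, sizeof(len));
    memcpy(dst + *dst_used + 16, text, (size_t) len);
    *dst_used += 16 + (size_t) len;
}

/*
 * The old flow: one full read_query() scan per pending query, so copying
 * m pending queries out of an n-byte buffer costs O(n * m), i.e. quadratic
 * when most queries in the buffer are pending.
 */
static void
copy_pending_old(const unsigned char *src, size_t src_used,
                 unsigned char *dst, size_t *dst_used,
                 const uint64_t *pending, size_t n_pending)
{
    size_t i;

    for (i = 0; i < n_pending; i++)
    {
        uint64_t len;
        const unsigned char *text = read_query(src, src_used,
                                               pending[i], &len);

        if (text != NULL)
            save_query(dst, dst_used, pending[i], text, len);
    }
}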
Since we now always move pending queries from an expired bucket to the
next one, there is no need to scan the next query buffer for pending
queries (pending queries are always in the current bucket, and when it
expires we move them to the next one).
Taking that into consideration, the new implementation works as follows
whenever a bucket expires:
1. Clear the next query buffer (all entries).
2. Define an array to store pending query IDs from the expired bucket;
we use this array later in the algorithm.
3. Iterate over pgss_query_hash; for each entry:
- If the entry's bucket id is equal to the next bucket id, then:
-- If the query for this entry has finished or ended in error, remove
it from the hash table. This is the same as in the previous
implementation.
- Else, if the entry's bucket id is equal to the just-expired bucket
id (the old bucket id) and the query state is pending (not yet
finished), add this query ID to the array of pending query IDs.
Note: We define the array to hold up to 128 pending entries. If there
are more entries than that, we try to allocate memory on the heap to
store them; if the allocation fails, we manually copy every pending
query past the 128th to the next query buffer, using the previous
algorithm (read_query() / SaveQueryText()). This should be a very rare
situation. (A sketch of this collection step follows the list.)
4. Finally, if there are pending queries, copy them from the previous
query buffer to the next one using copy_queries().
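A condensed sketch of the pending-ID collection in steps 2 and 3. The
QueryEntry struct, the doubling growth strategy, and all names here are
assumptions for illustration; only the 128-slot array and the heap
fallback come from the description above:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PENDING_QUERIES 128         /* fixed array size from the note */

typedef struct QueryEntry
{
    uint64_t    queryid;
    uint64_t    bucket_id;
    bool        pending;                /* stands in for "not finished/error" */
} QueryEntry;

/*
 * Collect the IDs of pending queries that live in the expired bucket.
 * The caller passes a 128-slot stack array; on overflow the IDs move to
 * the heap.  If that allocation fails we stop early, and the caller
 * copies the remaining queries with the old read_query() /
 * SaveQueryText() method, as the note above describes.  Returns the
 * number of IDs collected; the caller frees *ids_out when it is not the
 * stack array.
 */
static size_t
collect_pending_ids(const QueryEntry *entries, size_t n_entries,
                    uint64_t expired_bucket,
                    uint64_t *stack_ids, size_t stack_cap,
                    uint64_t **ids_out)
{
    uint64_t   *ids = stack_ids;
    size_t      cap = stack_cap;
    size_t      n = 0;
    size_t      i;

    for (i = 0; i < n_entries; i++)
    {
        if (entries[i].bucket_id != expired_bucket || !entries[i].pending)
            continue;

        if (n == cap)
        {
            uint64_t   *heap = malloc(2 * cap * sizeof(uint64_t));

            if (heap == NULL)
                break;                  /* fall back to the old copy method */
            memcpy(heap, ids, n * sizeof(uint64_t));
            if (ids != stack_ids)
                free(ids);
            ids = heap;
            cap *= 2;
        }
        ids[n++] = entries[i].queryid;
    }
    *ids_out = ids;
    return n;
}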
Now, copy_queries() is better than looping through each query entry and
calling read_query() / SaveQueryText() to copy each of them to the new
buffer: as explained, read_query() scans the whole query buffer on
every call. copy_queries(), instead, scans the query buffer only once
and, for every element, checks whether the current query ID is in the
list of queries to be copied; that list is an array of uint64 small
enough to fit in the L1 cache.
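Here is what such a single-pass copy can look like, reusing the
illustrative [queryid][length][text] record layout from the first sketch
(again, the names and the layout are assumptions, not the actual
copy_queries() signature):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/*
 * Linear membership probe over a small uint64 array; for around 128 IDs
 * the whole array stays within the L1 cache, so the check is cheap.
 */
static bool
id_is_pending(uint64_t queryid, const uint64_t *ids, size_t n_ids)
{
    size_t i;

    for (i = 0; i < n_ids; i++)
        if (ids[i] == queryid)
            return true;
    return false;
}

/*
 * Single pass over the source buffer: every [queryid][length][text]
 * record is visited exactly once, and only records whose ID is in the
 * pending list are appended to the destination buffer.  That is O(n)
 * buffer work plus cheap membership checks, instead of O(n * m).
 */
static void
copy_queries(const unsigned char *src, size_t src_used,
             unsigned char *dst, size_t *dst_used,
             const uint64_t *ids, size_t n_ids)
{
    size_t pos = 0;

    while (pos + 16 <= src_used)
    {
        uint64_t id;
        uint64_t len;

        memcpy(&id, src + pos, sizeof(id));
        memcpy(&len, src + pos + 8, sizeof(len));
        if (id_is_pending(id, ids, n_ids))
        {
            memcpy(dst + *dst_used, src + pos, 16 + (size_t) len);
            *dst_used += 16 + (size_t) len;
        }
        pos += 16 + (size_t) len;
    }
}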
Another important fix in this commit is the addition of line 1548 in
pg_stat_monitor.c / pgss_store():
query_entry->state = kind;
Before this line was added, entries in the pgss_query_hash hash table
never had their state updated when a query entered execution, finished,
or ended in error, which effectively left every entry marked as pending;
thus, whenever a bucket expired, all entries were copied from the
expired bucket to the next one.
@@ -1545,6 +1545,7 @@ pgss_store(uint64 queryid,
 				out_of_memory = true;
 				break;
 			}
+			query_entry->state = kind;
 			entry = pgss_get_entry(bucketid, userid, dbid, queryid, ip, planid, appid);
 			if (entry == NULL)
 			{
@@ -2030,7 +2031,6 @@ get_next_wbucket(pgssSharedState *pgss)
 
 	if (update_bucket)
 	{
-		unsigned char *buf;
 		char		file_name[1024];
 		int			sec = 0;
 
@@ -2040,9 +2040,8 @@ get_next_wbucket(pgssSharedState *pgss)
 	prev_bucket_id = pg_atomic_exchange_u64(&pgss->current_wbucket, new_bucket_id);
 
 	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
-	buf = pgss_qbuf[new_bucket_id];
 	hash_entry_dealloc(new_bucket_id, prev_bucket_id);
-	hash_query_entry_dealloc(new_bucket_id, buf);
+	hash_query_entry_dealloc(new_bucket_id, prev_bucket_id, pgss_qbuf);
 
 	snprintf(file_name, 1024, "%s.%d", PGSM_TEXT_FILE, (int)new_bucket_id);
 	unlink(file_name);