Mirror of https://github.com/percona/pg_stat_monitor.git, synced 2026-02-04 14:06:20 +00:00
PG-230: Fix duplicated query entries.
A problem during bucket management was allowing some queries to be duplicated: old entries would sit around without having their statistics updated.

The problem would arise during the following chain of events:

1. A query goes through pgss_post_parse_analyze. In this stage (PGSS_PARSE) we only save the query into the query buffer and create an entry in the query hash table.
2. The query then goes through pgss_ExecutorStart (PGSS_EXEC). In this stage we create an entry for the query's statistic counters with default values: all time stats equal zero, etc.
3. The query then goes through pgss_ExecutorEnd (PGSS_FINISH). In this stage we update the query statistics: number of calls, total time taken, min_time, etc.

The problem is that between steps 2 and 3 the current bucket ID timer may have expired. For example, during steps 1 and 2 the query may have been stored in bucket ID 1, but by the time the query finishes (pgss_ExecutorEnd) the current bucket ID may have advanced to 2. This leaves an entry for the query in bucket ID 1 with state ACTIVE and its time statistics never updated, while also creating an entry for the query in bucket ID 2 with all statistics (time and others) updated.

To solve this problem, during the transition to a new bucket ID we scan all pending queries in the previous bucket and move them to the new bucket. This way, finished queries are always associated with the bucket ID that was active at the time they finished.
@@ -1591,7 +1591,7 @@ pg_stat_monitor_reset(PG_FUNCTION_ARGS)
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				 errmsg("pg_stat_monitor: must be loaded via shared_preload_libraries")));
 	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
-	hash_entry_dealloc(-1);
+	hash_entry_dealloc(-1, -1);
+	hash_query_entryies_reset();
 #ifdef BENCHMARK
 	for (int i = STATS_START; i < STATS_END; ++i) {
@@ -2007,7 +2007,7 @@ get_next_wbucket(pgssSharedState *pgss)
 	bucket_id = (tv.tv_sec / PGSM_BUCKET_TIME) % PGSM_MAX_BUCKETS;
 	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
 	buf = pgss_qbuf[bucket_id];
-	hash_entry_dealloc(bucket_id);
+	hash_entry_dealloc(bucket_id, pgss->current_wbucket);
+	hash_query_entry_dealloc(bucket_id, buf);

 	snprintf(file_name, 1024, "%s.%d", PGSM_TEXT_FILE, (int)bucket_id);