The memory area reserved for query text (pgsm_query_shared_buffer) was
divided evenly among buckets, which allowed the same query,
e.g. "SELECT 1", to be duplicated in different buckets, wasting space.
This commit fixes the query text duplication by adding a new hash table
whose only purpose is to verify, using the queryID, whether a given
query has already been added to the buffer.
This allows different buckets that share the same query to point to a
single entry in the query buffer (pgss_qbuf).
It also saves some CPU time when pg_stat_monitor moves to a new bucket
id, since queries that already exist in the buffer are not added again.
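A minimal sketch of the idea, assuming a dynahash keyed by queryID; the
struct, field, and helper names (QueryTextEntry, append_to_query_buffer)
are illustrative assumptions, not the exact pg_stat_monitor definitions:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct QueryTextEntry
    {
        uint64 queryid;      /* hash key */
        size_t query_pos;    /* offset of the query text inside pgss_qbuf */
    } QueryTextEntry;

    static HTAB *query_text_hash = NULL;    /* created at shmem startup */

    /* Return the buffer offset for queryid, appending the text only if unseen. */
    static size_t
    save_query_once(uint64 queryid, const char *query, size_t query_len)
    {
        bool            found;
        QueryTextEntry *entry;

        entry = (QueryTextEntry *) hash_search(query_text_hash, &queryid,
                                               HASH_ENTER, &found);
        if (!found)
            entry->query_pos = append_to_query_buffer(query, query_len); /* assumed helper */

        /* Buckets that share this query all reuse the same offset. */
        return entry->query_pos;
    }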
There were a couple of issues to handle. The main one was that our log
hook (pgsm_emit_log_hook) was being called after the shared memory hook
(pgss_shmem_startup) completed but before PostgreSQL's bootstrapping
code finished, triggering the following assertion during a call to
LWLockAcquire():
Assert(!(proc == NULL && IsUnderPostmaster));
proc is a pointer to MyProc, a PostgreSQL shared global variable that
had not yet been initialized by PostgreSQL.
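A minimal sketch of the kind of guard this implies, assuming the hook
simply bails out until the backend's shared state is ready (the actual
pg_stat_monitor code may differ):

    #include "postgres.h"
    #include "storage/proc.h"

    static void
    pgsm_emit_log_hook(ErrorData *edata)
    {
        /*
         * MyProc is not set up during early bootstrap; taking an LWLock at
         * that point would trip Assert(!(proc == NULL && IsUnderPostmaster)).
         */
        if (MyProc == NULL)
            return;

        /* ... normal error accounting under the module's lock goes here ... */
    }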
We must also check for a NULL pointer return in pg_get_backend_status():
the pgstat_fetch_stat_local_beentry() function may return a NULL pointer
during initialization, in which case we use "127.0.0.1" for the client
address and "postmaster" for the application name.
If both modules are loaded, pg_stat_monitor detects that and avoids
calling standard_ProcessUtility() in its ProcessUtility_hook, as calling
it twice is an error and triggers an assertion in PostgreSQL.
On PostgreSQL 13, pg_stat_monitor must be loaded after
pg_stat_statements, because pg_stat_statements doesn't perform such
verification: it ends up calling standard_ProcessUtility() and other
functions even when another module is registered, which is an error.
This problem in pg_stat_statements was fixed in PostgreSQL 14 and onward.
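The usual hook-chaining pattern that avoids the double call looks
roughly like the sketch below (shown with the PostgreSQL 14+ hook
signature; the real pgss_ProcessUtility does more bookkeeping):

    #include "postgres.h"
    #include "tcop/utility.h"

    static ProcessUtility_hook_type prev_ProcessUtility = NULL;

    static void
    pgsm_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
                        bool readOnlyTree, ProcessUtilityContext context,
                        ParamListInfo params, QueryEnvironment *queryEnv,
                        DestReceiver *dest, QueryCompletion *qc)
    {
        /* ... per-statement bookkeeping ... */

        if (prev_ProcessUtility)
            /* another module (e.g. pg_stat_statements) runs the utility */
            prev_ProcessUtility(pstmt, queryString, readOnlyTree,
                                context, params, queryEnv, dest, qc);
        else
            standard_ProcessUtility(pstmt, queryString, readOnlyTree,
                                    context, params, queryEnv, dest, qc);

        /* ... more bookkeeping ... */
    }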
The loop that resets the query buffers was incorrectly using
MAX_BUCKETS, which defaults to 10, as the number of buckets to clear. If
a user lowers the number of buckets, the loop would access pointers
beyond the number of query buffers actually allocated.
Fix the problem by using the correct PGSM_MAX_BUCKETS GUC as the loop
limit.
This commit fixes some issues when the query buffer overflows and
pg_stat_monitor attempts to dump its contents to a file.
The dump process is now as follows:
1. The dump is always a full dump of the current query buffer, meaning
pg_stat_monitor writes MAX_QUERY_BUFFER_BUCKET bytes to the dump file.
2. When scanning the dump file, read chunks of size
MAX_QUERY_BUFFER_BUCKET, then look for the query ID using that chunk
and the query position metadata; this allows pg_stat_monitor to avoid
scanning the whole chunk when looking for a query ID.
The code in charge of reading from and writing to the dump file now
takes into account that read() and write() may return fewer bytes than
requested; it now ensures that the required amount of bytes
(MAX_QUERY_BUFFER_BUCKET) is actually read or written, and it handles
rare but possible interrupts during those operations.
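A sketch of the write side of that pattern (the read side is
symmetrical); the function name write_all is illustrative, not the
actual pg_stat_monitor helper:

    #include <errno.h>
    #include <unistd.h>

    static int
    write_all(int fd, const char *buf, size_t nbytes)
    {
        size_t written = 0;

        while (written < nbytes)
        {
            ssize_t rc = write(fd, buf + written, nbytes - written);

            if (rc < 0)
            {
                if (errno == EINTR)
                    continue;            /* interrupted: retry the write */
                return -1;               /* real error */
            }
            written += (size_t) rc;      /* write() may return a short count */
        }
        return 0;
    }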
In SaveQueryText() we check for a possible overflow of the query
buffer; if an overflow would happen and pgsm_overflow_target is 1 (the
default), we dump the query buffer to a temporary file and reset the
buffer (start saving queries from the beginning of the buffer). The
problem was that after resetting the buffer we did not check whether the
current query length would still exceed the buffer size of
MAX_QUERY_BUFFER_BUCKET; if it did, the buffer would overflow and
probably crash the process, or in the worst case become an attack vector
for exploitation.
This commit fixes the problem by adding an additional overflow check
after resetting the query buffer.
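A simplified sketch of the check; the buffer layout, names, and the
dump_buffer_to_file helper are assumptions for illustration, not the
exact SaveQueryText() code:

    #include "postgres.h"

    static bool
    save_query_text_sketch(unsigned char *buf, uint64 *buf_len,
                           const char *query, uint64 query_len,
                           uint64 buf_size /* MAX_QUERY_BUFFER_BUCKET */)
    {
        if (*buf_len + query_len + 1 > buf_size)
        {
            dump_buffer_to_file(buf, buf_size);   /* hypothetical helper */
            *buf_len = 0;                         /* restart from the beginning */

            /*
             * The added check: a single query larger than the whole buffer
             * must be rejected instead of overflowing it.
             */
            if (query_len + 1 > buf_size)
                return false;
        }

        memcpy(buf + *buf_len, query, query_len);
        buf[*buf_len + query_len] = '\0';
        *buf_len += query_len + 1;
        return true;
    }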
The regular expression was updated to properly capture SQL comments of
the form /* */.
The previous regex captured everything from the first /* to the last */
because regexes are greedy, which caused problems if an input query
contained something like:
SELECT /* comment 1 */ field /* comment 2 */ from FOO;
With the previous regex, everything between /* comment 1 and
comment 2 */ was captured, so the result would be:
/* comment 1 field comment 2 */.
Multiline comments are also captured.
Multiple comments in one query are now stored in the pg_stat_monitor
comments field in the form: /* Comment 1 */, ... , /* Comment N */.
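For illustration, a small standalone program using the classic POSIX ERE
for C-style comments, which matches each comment separately; the exact
pattern adopted by pg_stat_monitor may differ:

    #include <regex.h>
    #include <stdio.h>

    int
    main(void)
    {
        regex_t     preg;
        regmatch_t  m;
        const char *sql = "SELECT /* comment 1 */ field /* comment 2 */ from FOO;";
        const char *p = sql;

        /* The body alternates "any non-star char" with "stars not followed
           by a slash", so a match can never run past the first closing
           delimiter. */
        regcomp(&preg, "/\\*([^*]|\\*+[^*/])*\\*+/", REG_EXTENDED);

        while (regexec(&preg, p, 1, &m, 0) == 0)
        {
            printf("%.*s\n", (int) (m.rm_eo - m.rm_so), p + m.rm_so);
            p += m.rm_eo;           /* continue searching after this match */
        }
        regfree(&preg);
        return 0;
    }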
The code added in pgss_store() to handle an assertion failure when
GetUserId() was being called introduced a problem with regression tests
in some builds, specifically our PG11 and PG12 distributions for Ubuntu.
The problem was detected when calling some JSON functions that trigger
parallel workers. To solve it, we now check whether our hooks are being
called by a parallel worker, in which case we can skip the work. This
also fixes the issue mentioned above, as we no longer call GetUserId()
in an invalid context.
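A sketch of the guard, using one of the executor hooks as an example
(simplified; the real hook bodies do more work and the function name
here is illustrative):

    #include "postgres.h"
    #include "access/parallel.h"
    #include "executor/executor.h"

    static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;

    static void
    pgsm_ExecutorEnd(QueryDesc *queryDesc)
    {
        /*
         * In a parallel worker the leader already accounts for this query;
         * skipping here also avoids calling GetUserId() in an invalid context.
         */
        if (!IsParallelWorker())
        {
            /* ... collect statistics for the finished query here ... */
        }

        if (prev_ExecutorEnd)
            prev_ExecutorEnd(queryDesc);
        else
            standard_ExecutorEnd(queryDesc);
    }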
Don't display queries in PARSE or PLAN state, to keep consistency with
previous behavior. This avoids showing intermediate fragments of a
procedure, such as:
$1
$1 := $3
Whenever a new query is added to a query buffer, record the position at
which the query was inserted; we can then use this information to locate
the query faster later on.
This allowed simplifying the logic in hash_entry_dealloc(): after
creating the list of pending queries, as the list is scanned we can copy
each query from the previous query buffer to the new one using the query
position (query_pos), which avoids scanning the whole query buffer when
looking up the queryid.
Also, when moving a query to a new buffer (copy_query), we update the
query_pos in the hash table entry so it points to the right place in the
new query buffer.
The read_query() function was updated to accept the query position as an
argument; if pos != 0 it is used to locate the query directly, otherwise
the function falls back to the previous mode of scanning the whole
buffer.
SaveQueryText() was updated to take a reference to the query position as
an argument; when the function finishes, this value holds the position
at which the query was stored in the buffer.
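A sketch of the updated lookup, assuming an illustrative buffer layout
of [uint64 queryid][uint64 length][text] entries and that pos records
where the text starts; the actual pg_stat_monitor layout and signatures
may differ:

    #include "postgres.h"

    static bool
    read_query_sketch(const unsigned char *buf, uint64 buf_len,
                      uint64 queryid, char *dest, size_t pos)
    {
        uint64 off = 0;

        if (pos != 0 && pos < buf_len)
        {
            /* Fast path: jump straight to the recorded position. */
            strcpy(dest, (const char *) buf + pos);
            return true;
        }

        /* Slow path: scan the whole buffer looking for queryid. */
        while (off < buf_len)
        {
            uint64 id;
            uint64 len;

            memcpy(&id, buf + off, sizeof(uint64));
            memcpy(&len, buf + off + sizeof(uint64), sizeof(uint64));
            if (id == queryid)
            {
                memcpy(dest, buf + off + 2 * sizeof(uint64), len);
                dest[len] = '\0';
                return true;
            }
            off += 2 * sizeof(uint64) + len;
        }
        return false;
    }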
There was no need for a separate hash table to keep track of queries,
as all the necessary information is already available in the pgss_hash
table.
The code that moves pending queries' text in hash_query_entry_dealloc
was merged into hash_entry_dealloc.
A couple of functions were no longer necessary and were removed:
- hash_create_query_entry
- pgss_store_query_info
- pgss_get_entry (this logic was added directly into pgss_store).
We simplified the logic in pgss_store; it basically works as follows
(see the sketch after this list):
1. Create a key (bucketid, queryid, etc...) for the query event.
2. Look up the key in pgss_hash; if no entry exists for the key, create
a new one and save the query text into the current query buffer.
3. If jstate == NULL, update the stats counters for the entry.
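A minimal sketch of that flow; the struct fields and helper names below
are assumptions for illustration, not the exact pg_stat_monitor
definitions:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct pgssSketchKey
    {
        uint64 bucket_id;
        uint64 queryid;
        Oid    userid;
        Oid    dbid;
    } pgssSketchKey;

    typedef struct pgssSketchEntry
    {
        pgssSketchKey key;
        size_t        query_pos;    /* where the text lives in the query buffer */
        int64         calls;
        double        total_time;
    } pgssSketchEntry;

    static void
    pgss_store_sketch(HTAB *pgss_hash, pgssSketchKey *key,
                      const char *query, bool have_jstate, double elapsed_ms)
    {
        bool             found;
        pgssSketchEntry *entry;

        /* Steps 1-2: look the key up, creating the entry on first sight. */
        entry = (pgssSketchEntry *) hash_search(pgss_hash, key,
                                                HASH_ENTER_NULL, &found);
        if (entry == NULL)
            return;                                      /* out of memory */
        if (!found)
            entry->query_pos = save_query_text(query);   /* assumed helper */

        /* Step 3: update counters only outside the parse/jumble stage. */
        if (!have_jstate)
        {
            entry->calls++;
            entry->total_time += elapsed_ms;
        }
    }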
Many variables were being initialized in pgss_store() before checking
whether the module was actually active, wasting CPU cycles when the
module is disabled.
To fix that, variables are now declared and initialized only after
checking that pg_stat_monitor is active.
Added code to move all pending queries' text from an expired bucket's
query buffer to the next, active query buffer.
The previous implementation was not very efficient; as soon as a query
was processed and a bucket expired, it worked as follows:
1. Allocate memory to save the contents of the next query buffer.
2. Clear the next query buffer.
3. Iterate over pgss_query_hash, then, for each entry:
- If the entry's bucket id is equal to the next bucket then:
-- If the query for this entry has finished or ended in error, then
remove it from the hash table.
-- Else, if the query is not yet finished, copy the query from the
backup query buffer to the new query buffer.
Now, this copy was really expensive: it was implemented using
read_query() / SaveQueryText(), and read_query() scans the whole
query buffer looking for a given query ID. Since this extra lookup
loop runs for each pending query, we end up with an O(n^2)
algorithm.
4. Release the backup query buffer.
Since we now always move pending queries from an expired bucket to the
next one, there is no need to scan the next query buffer for pending
queries (the pending queries are always in the current bucket, and when
it expires we move them to the next one).
Taking that into consideration, the new implementation works as follows
whenever a bucket expires:
1. Clear the next query buffer (all entries).
2. Define an array to store pending query IDs from the expired bucket;
we use this array later on in the algorithm.
3. Iterate over pgss_query_hash, then, for each entry:
- If the entry's bucket id is equal to the next bucket then:
-- If the query for this entry has finished or ended in error, then
remove it from the hash table. This is the same as in the previous
implementation.
- Else, if the entry's bucket id is equal to the just-expired bucket
id (old bucket id) and the query state is pending (not yet finished),
then add this query ID to the array of pending query IDs.
Note: the array holds up to 128 pending entries. If there are more
entries than that, we try to allocate heap memory to store them; if the
allocation fails, we manually copy every pending query past the 128th to
the next query buffer using the previous algorithm (read_query() /
SaveQueryText()). This should be a very rare situation.
4. Finally, if there are pending queries, copy them from the previous
query buffer to the next one using copy_queries.
Now, copy_queries() is better than looping through each query entry and
calling read_query() / SaveQueryText() to copy each of them to the new
buffer because, as explained, read_query() scans the whole query buffer
on every call.
copy_queries(), instead, scans the query buffer only once and, for every
element, checks whether the current query id is in the list of queries
to be copied; this list is an array of uint64 that is small enough to
fit in the L1 cache.
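A sketch of that single pass, reusing the same illustrative
[queryid][length][text] buffer layout as in the read_query() sketch
above (the real copy_queries() may differ):

    #include "postgres.h"

    static void
    copy_queries_sketch(const unsigned char *from_buf, uint64 from_len,
                        unsigned char *to_buf, uint64 *to_len,
                        const uint64 *pending_ids, size_t n_pending)
    {
        uint64 off = 0;

        while (off < from_len)
        {
            uint64 id;
            uint64 len;
            size_t i;

            memcpy(&id, from_buf + off, sizeof(uint64));
            memcpy(&len, from_buf + off + sizeof(uint64), sizeof(uint64));

            /*
             * pending_ids is small (up to 128 entries), so this inner check
             * is cheap and cache friendly, unlike rescanning a whole query
             * buffer per pending query.
             */
            for (i = 0; i < n_pending; i++)
            {
                if (pending_ids[i] == id)
                {
                    uint64 entry_sz = 2 * sizeof(uint64) + len;

                    memcpy(to_buf + *to_len, from_buf + off, entry_sz);
                    *to_len += entry_sz;
                    break;
                }
            }
            off += 2 * sizeof(uint64) + len;
        }
    }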
Another important fix in this commit is the addition of line 1548 in
pg_stat_monitor.c / pgss_store():
query_entry->state = kind;
Before this line was added, entries in the pgss_query_hash hash table
never had their state updated when a query entered execution, finished,
or ended in error, effectively leaving all entries as pending; thus,
whenever a bucket expired, all entries were copied from the expired
bucket to the next one.
The if condition below in get_next_wbucket() was subject to a race
condition:
if ((current_usec - pgss->prev_bucket_usec) > (PGSM_BUCKET_TIME * 1000 *
1000))
Two or more processes could easily evaluate this condition to true, thus
executing more than once the block that calculates a new bucket id and
clears/moves old entries in the pgss_query_hash and pgss_hash hash
tables.
To avoid this problem, we define the prev_bucket_usec and
current_wbucket variables as atomic and execute a loop to check whether
another process has updated prev_bucket_usec before the current one.
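A sketch of the compare-and-swap loop this describes; the field names
follow the text above, while the bucket-rotation details are simplified
assumptions:

    #include "postgres.h"
    #include "port/atomics.h"

    static uint64
    get_next_wbucket_sketch(pg_atomic_uint64 *prev_bucket_usec,
                            pg_atomic_uint64 *current_wbucket,
                            uint64 current_usec, uint64 bucket_time_usec,
                            uint64 max_buckets)
    {
        uint64 prev = pg_atomic_read_u64(prev_bucket_usec);

        while (current_usec - prev > bucket_time_usec)
        {
            /*
             * Only the process that wins this CAS rotates the bucket; losers
             * get prev refreshed to the new timestamp and drop out of the loop.
             */
            if (pg_atomic_compare_exchange_u64(prev_bucket_usec, &prev,
                                               current_usec))
            {
                uint64 new_bucket =
                    (pg_atomic_read_u64(current_wbucket) + 1) % max_buckets;

                pg_atomic_write_u64(current_wbucket, new_bucket);
                /* ... clear/move entries for the new bucket here ... */
                break;
            }
        }

        return pg_atomic_read_u64(current_wbucket);
    }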
A problem during bucket management was allowing some queries to be
duplicated; old entries would sit around without having their statistics
updated.
The problem would arise during the following chain of events:
1. A query goes through pgss_post_parse_analyze; in this stage (PGSS_PARSE)
we only save the query into the query buffer and create an entry in the
query hash table.
2. The query then goes through pgss_ExecutorStart (PGSS_EXEC); in this stage
we create an entry for the query statistic counters with default values,
all time stats equal to zero, etc.
3. The query then goes through pgss_ExecutorEnd (PGSS_FINISH); in this stage
we update the query statistics: number of calls, total time taken,
min_time, etc.
The problem is that between steps 2 and 3, the current bucket ID timer
may have expired.
For example, during steps 1 and 2 the query may have been stored in
bucket ID 1, but by the time the query finishes (pgss_ExecutorEnd) the
current bucket ID may have been updated to 2.
This leaves an entry for the query in bucket ID 1 with state ACTIVE and
its time statistics not yet updated, while also creating an entry for
the query in bucket ID 2 with all statistics (time and others) updated.
To solve this problem, during the transition to a new bucket id, we scan
all pending queries in the previous bucket id and move them to the new
bucket id.
This way, finished queries will always be associated with the bucket id
that was active at the time they finished.
The deadlock scenario is described below:
1. pgss_store is called and acquires the lock pgss->lock.
2. An error occurs, typically out of memory when accessing the internal
hash tables in the functions pgss_store_query_info and pgss_get_entry.
3. elog() is called to report the out of memory error.
4. Our pgsm_emit_log_hook is called; it calls pgss_store_error, which in
turn calls pgss_store.
5. pgss_store tries to acquire the already acquired lock pgss->lock, and
a deadlock happens.
To fix the problem, two modifications worth mentioning were made by this
commit (see the sketch after this list):
1. We now pass the HASH_ENTER_NULL flag to hash_search instead of
HASH_ENTER. As seen in the PostgreSQL sources, this prevents the
function from reporting an error when out of memory; with
HASH_ENTER_NULL it simply returns NULL, so we can handle the error
ourselves.
2. In pgss_store, if an error happens after pgss->lock is acquired, we
only set a flag; then, after releasing the lock, we check whether the
flag is set and report the error accordingly.
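A sketch of the resulting deferred-error pattern (variable and function
names are illustrative):

    #include "postgres.h"
    #include "storage/lwlock.h"
    #include "utils/hsearch.h"

    static void
    store_with_deferred_error(HTAB *pgss_hash, LWLock *lock, const void *key)
    {
        bool  found;
        void *entry;
        bool  out_of_memory = false;

        LWLockAcquire(lock, LW_EXCLUSIVE);

        /*
         * HASH_ENTER_NULL makes hash_search return NULL on out-of-memory
         * instead of reporting the error itself while we hold the lock.
         */
        entry = hash_search(pgss_hash, key, HASH_ENTER_NULL, &found);
        if (entry == NULL)
            out_of_memory = true;
        /* ... otherwise fill or update the entry here ... */

        LWLockRelease(lock);

        /*
         * Report only after releasing the lock, so pgsm_emit_log_hook can
         * re-enter pgss_store without deadlocking on the same lock.
         */
        if (out_of_memory)
            elog(WARNING, "pg_stat_monitor: out of shared memory");
    }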
There were two objects leaking memory in this function: the comments
variable of type char *, allocated using palloc0, and the regular
expression object preg.
If the regcomp function failed, the function returned without releasing
the previously allocated comments variable.
If the regexec function failed, the function returned without releasing
the preg object and the comments variable.
This commit makes two changes. First, it turns comments into an argument
of extract_query_comments; in pgss_store we declare the comments
variable on the stack, so it cleans up after itself.
The second change moves the regular expression object to global scope,
so that it is compiled only once, during module initialization.
With these two changes we fix the memory leak and avoid
allocating/releasing memory on every call to this function.
We redefine the _snprintf macro to use memcpy, which performs better,
and update the call sites of this macro to add the null terminator '\0'
to the source string length, so that memcpy also correctly copies the
null terminator to the destination string.
We update the _snprintf2 macro to use strlcpy. The reason we don't use
memcpy here is that at the place where this macro is called,
pgss_update_entry, only the maximum string length REL_LEN=1000 is
available as an upper bound for copying the relations string vector into
the destination counters; since this data is a string, we don't need to
copy 1000 bytes for every entry. With strlcpy, the copy ends as soon as
the null terminator '\0' is found in the source string.
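An illustrative comparison of the two copy strategies (not the actual
macro definitions):

    #include "postgres.h"
    #include <string.h>

    #define REL_LEN 1000            /* upper bound from the description above */

    static void
    copy_examples(char *dst, const char *src)
    {
        /* memcpy-based copy: the terminator must be included in the length. */
        memcpy(dst, src, strlen(src) + 1);

        /*
         * strlcpy stops at the source terminator and always terminates dst,
         * so a short string never costs a full 1000-byte copy.
         */
        strlcpy(dst, src, REL_LEN);
    }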
These variables can't live in shared state, as the following problem
was taking place:
1. Process 1 calls pgss_ExecutorCheckPerms(), acquires the lock, updates
relations and num_relations, and releases the lock.
2. Process 2 calls pgss_ExecutorCheckPerms(), acquires the lock, and
updates num_relations = 0.
3. Process 1 reads num_relations = 0 in pgss_update_entry; this value is
wrong, as it was just updated by Process 2.
Even if we acquired the lock in pgss_update_entry to read the
num_relations and relations variables, Process 1 may end up acquiring
the lock after Process 2 has overwritten the values, leading to
Process 1 reading wrong data.
By defining relations and num_relations as static globals in
pg_stat_monitor.c, we take advantage of the fact that each individual
PostgreSQL backend has its own copy of this data, which allows us to
remove the locking in pgss_ExecutorCheckPerms when updating these
variables, improving pg_stat_monitor's overall performance.
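A minimal sketch of the per-backend declarations; REL_LEN follows the
earlier description, while REL_LST and the element type are assumptions:

    /*
     * Every PostgreSQL backend is a separate process, so file-scope static
     * variables are private to that backend and need no locking.
     */
    #define REL_LST 10
    #define REL_LEN 1000

    static char relations[REL_LST][REL_LEN];
    static int  num_relations = 0;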
Added a new view 'pg_stat_monitor_hook_stats' that provides execution
time statistics for all hooks installed by the module. Following is a
description of the fields:
- hook: The hook function name.
- min_time: The fastest execution time recorded for the given hook.
- max_time: The slowest execution time recorded for the given hook.
- total_time: Total execution time taken by all calls to the hook.
- avg_time: Average execution time of a call to the hook.
- ncalls: Total number of calls to the hook.
- load_comparison: The percentage of time taken by an individual hook
compared to all other hooks.
To enable benchmarking, the code must be compiled with the -DBENCHMARK
flag; this replaces each hook function with a function of the same name
plus a '_benchmark' suffix, e.g. hook_function_benchmark.
The hook_function_benchmark calls the original function, calculates the
amount of time it took to execute, and then updates the statistics for
that hook.
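A sketch of that wrapping pattern for one hook; pgss_ExecutorEnd,
update_hook_stats, and HOOK_EXECUTOREND are assumed names, not the exact
pg_stat_monitor symbols:

    #include "postgres.h"
    #include "executor/executor.h"
    #include "portability/instr_time.h"

    #ifdef BENCHMARK
    static void
    pgss_ExecutorEnd_benchmark(QueryDesc *queryDesc)
    {
        instr_time start;
        instr_time duration;

        INSTR_TIME_SET_CURRENT(start);

        pgss_ExecutorEnd(queryDesc);        /* call the original hook (assumed name) */

        INSTR_TIME_SET_CURRENT(duration);
        INSTR_TIME_SUBTRACT(duration, start);

        /* Accumulate min/max/total/ncalls for this hook (assumed helper). */
        update_hook_stats(HOOK_EXECUTOREND, INSTR_TIME_GET_DOUBLE(duration));
    }
    #endif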
The query_txt variable is allocated at the beginning of the
pg_stat_monitor_internal() function and released at the end, but an
extra malloc call to allocate it had been added inside an internal loop
of the function, allocating memory on every loop iteration without
releasing it within the loop.
The query_txt variable can be reused inside the loop body, so this
commit removes the redundant declaration of query_txt from inside the
loop, which also fixes the leak.
Add the application name to the key used to identify queries in the
hash table; this allows different applications to have separate entries
in the pg_stat_monitor view when they issue the same query.
If pg_stat_monitor is loaded after pg_stat_statements, it ends up
calling the standard_planner function twice in its pgss_planner_hook()
function. This triggers an assertion failure in PostgreSQL, as
standard_planner expects an untouched Query* object, and the first call
to standard_planner() done by pg_stat_statements modifies the object.
To address the problem, we avoid calling standard_planner twice in
pg_stat_monitor: if a previous handler is installed for planner_hook, we
assume that the previous hook has already called standard_planner and we
don't call it again.
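A sketch of the guard, using the PostgreSQL 13+ planner_hook signature
(earlier versions lack the query_string parameter); the real
pgss_planner_hook does more bookkeeping:

    #include "postgres.h"
    #include "optimizer/planner.h"

    static planner_hook_type prev_planner_hook = NULL;

    static PlannedStmt *
    pgsm_planner_hook(Query *parse, const char *query_string,
                      int cursorOptions, ParamListInfo boundParams)
    {
        /* ... timing/bookkeeping around planning ... */

        if (prev_planner_hook)
            /*
             * Another module (e.g. pg_stat_statements) will run the standard
             * planner; calling it again here would re-plan an already
             * modified Query and trip PostgreSQL's assertions.
             */
            return prev_planner_hook(parse, query_string, cursorOptions,
                                     boundParams);

        return standard_planner(parse, query_string, cursorOptions,
                                boundParams);
    }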