Regardless of the database or the user, the same query will yield the
same query ID. As part of this, a new column, 'pgsm_query_id', is added.
* pgsm_query_id:
pgsm_query_id has the same data type of int8 as the queryid column. If
the incoming SQL command includes any constants, pg_stat_monitor internally
normalizes the query, replacing those constant values with placeholders.
Otherwise, it uses the query text directly to generate the query hash.
Since we no longer depend on the server's parse tree mechanism, we can
generate the same hash for the same query text for all server versions.
Also, it is important to note that the hash being calculated is database,
schema, and user independent, so the same query text in different databases
will generate the same hash.
This column is not part of the key; it is there for observability purposes
only (see the sketch below).
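A minimal sketch of how such a version-independent hash could be computed,
assuming a hypothetical normalize_query_text() helper and the existing
pgss_hash_string() routine; the actual implementation may differ:

    static uint64
    pgsm_query_id_for(const char *query, bool has_constants)
    {
        const char *text = query;
        char       *normalized = NULL;

        if (has_constants)
        {
            /* replace constant values with placeholders ($1, $2, ...) */
            normalized = normalize_query_text(query);  /* hypothetical helper */
            text = normalized;
        }

        /* same text always yields the same hash, regardless of server
         * version, database, schema or user */
        return pgss_hash_string(text, strlen(text));
    }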
* Regression
SQL test case pgsm_query_id.sql is added to the SQL regression.
Reimplement the storage mechanism of buckets and query texts
using dynamic shared memory. Since dynamic shared memory
can grow into a swap area, we get overflow handling out of the box.
Moreover, the new design saves the query pointer inside the bucket,
so the query text eventually gets evicted when the bucket is recycled.
Finally, the dynamic shared memory hash has a built-in locking
mechanism, so we can revisit the whole locking scheme in pg_stat_monitor;
this has potential for a lot of performance improvements.
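A minimal sketch of a lookup against such a DSM-backed hash, assuming
PostgreSQL's dshash API; the table and entry names are illustrative only:

    bool        found;
    PGSM_ENTRY *entry;

    /* dshash takes a per-partition lock on the entry internally */
    entry = dshash_find_or_insert(pgsm_hash, &key, &found);
    if (!found)
        entry->query_pos = save_query_text(query);  /* hypothetical helper */
    dshash_release_lock(pgsm_hash, entry);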
Removing the view for 2.0. Updating the required SQL files to manage
the upgrade. Downgrade from 2.x to 1.x is not supported.
Also part of this fix is the SQL regression update. This does not update the
TAP test cases.
In line with pg_stat_statements for PG 15, this commit adds eight new cumulative
counters for JIT operations, making it easier to diagnose how JIT is used in an
installation. Two new columns, temp_blk_read_time and temp_blk_write_time,
respectively show the time spent reading and writing temporary file blocks
on disk.
Moreover, the commit also contains a few indentation and API adjustments.
A new column is added to indicate whether a bucket is active or done. Some
timing-based adjustments were required for that as well.
Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
Creating the infrastructure that'll allow using newer versions
of the loadable module with old SQL declarations.
Also updating the build version to 2.0.0-dev
There was a typo in the PostgreSQL version check in the SQL file. This commit
fixes the typo, so only the necessary columns will be visible in the view.
The query status monitoring code was used to track the current query state, for example
parsing, executing, and finishing. After careful review, we determined that it does
not add much value, since much of the time the same query is running, while the
tracking itself consumes resources. This commit removes that feature. The upgrade
SQL from 1.0 to 2.0 is also updated.
PG 15 requires additional shared memory and LWLocks requests to be made from the
newly introduced shmem_request_hook and disallows the requests initiated
from outside the hook.
The commit moves the additional shared memory and LWLock requests
from _PG_init to shmem_request_hook for PG 15.
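A minimal sketch of that pattern, assuming a pgsm_memsize() helper for the
shared memory size; the exact names in pg_stat_monitor may differ:

    #if PG_VERSION_NUM >= 150000
        prev_shmem_request_hook = shmem_request_hook;
        shmem_request_hook = pgsm_shmem_request;
    #else
        RequestAddinShmemSpace(pgsm_memsize());
        RequestNamedLWLockTranche("pg_stat_monitor", 1);
    #endif

    #if PG_VERSION_NUM >= 150000
    static void
    pgsm_shmem_request(void)
    {
        if (prev_shmem_request_hook)
            prev_shmem_request_hook();

        RequestAddinShmemSpace(pgsm_memsize());
        RequestNamedLWLockTranche("pg_stat_monitor", 1);
    }
    #endif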
pg_stat_monitor has grown quite a bit; therefore, it requires some code cleanup.
I decided to split these tasks into multiple commits and PRs to avoid piling
various changes into one PR. This will ease the review and QA process.
In this commit, I have done these tasks:
1 - Fix a compilation issue, caused by the previous commit that removed
the benchmarking code. It was causing a problem for PostgreSQL 12.
pg_stat_monitor has grown quite a bit; therefore, it requires some code cleanup.
I decided to split these tasks into multiple commits and PRs to avoid piling
various changes into one PR. This will ease the review and QA process.
In this commit, I have done these tasks:
1 - Remove all benchmarking and debugging code.
* PG-475: Inconsistent behaviour of PGSM
Reverting the bucket locking mechanism to previous behavior. This has
a lot of room for improvement that needs to be part of a major refactoring
in the 2.x release.
* PG-481 Release notes 1.1.0 (#294)
modified: RELEASE_NOTES.md
* PG-500: Bump the version of pg_stat_monitor to 1.1.0 (#297)
* PG-501: Missing Buckets and incorrect calls count. (#298)
prev_bucket_sec holds the actual time at which the previous bucket was created,
and it is used to compute whether the previous bucket time has elapsed and it is
time to create a new one. But since the bucket start time is rounded down
to the logical time window start, prev_bucket_sec and the bucket start
time end up out of sync with each other, and depending on query arrival times
there is a high probability that a bucket gets missed, especially when the last
bucket was created around the end of the bucket time window.
The solution is to keep prev_bucket_sec and the bucket start time in sync.
Moreover, we are using uint64 for storing prev_bucket_sec, which is overkill;
a simple uint should be good enough for the purpose. But that
change can be taken up as part of the create-bucket function refactoring task.
* PG-501: Missing Buckets and incorrect calls count.
Ensuring the outer bound of the bucket is an exclusive boundary, as it
belongs to the next bucket. To explain the point further, a set of
five-second buckets would be:
Bucket 1: 00:00:00.00 -> 00:00:04.99...
Bucket 2: 00:00:05.00 -> 00:00:09.99...
Bucket 3: 00:00:10.00 -> 00:00:14.99...
...
Co-authored-by: Ibrar Ahmed <ibrar.ahmed@percona.com>
Co-authored-by: Anastasia Alexandrova <anastasia.alexandrova@percona.com>
Co-authored-by: Muhammad Usama <m.usama@gmail.com>
PG-456: Running pgindent to make source indentation/spacing PostgreSQL compatible.
PostgreSQL uses pgindent from time to time to make the source code compatible with
the PostgreSQL style guide; it has been a very long time since we last did that.
The commit fixes a lot of indentation and spacing issues.
Co-authored-by: Hamid Akhtar <hamid.akhtar@gmail.com>
PG-453: Normalize query does not work with INSERT statements.
The commit fixes the issue in a way similar to pg_stat_statements. Query jumbling
skips INSERT statements to avoid duplicate queryids, but we had already
solved that problem another way in an earlier commit.
The buckets are now created with the start time a modulus of the bucket time size.
So if we have a 10 second bucket, the start times would reflect that:
- Bucket1: 00:00:00
- Bucket2: 00:00:10
- Bucket3: 00:00:20
...
Previously, the start time of the bucket was aligned with the first query that
arrives in that bucket. However, now the behaviour is changed. So, even if the
first query for bucket 2 arrives at 00:00:13, the start time would still be set
to 00:00:10.
This change separates the buckets into fixed time windows so that
external applications can easily consume the data and chart it.
Also, as part of this change, the locking of pgss is updated and extended
to cover the bucket-related changes.
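A minimal sketch of the rounding, with illustrative variable names; the same
value can then be used to keep prev_bucket_sec in sync with the window start:

    uint64 current_sec  = GetCurrentTimestamp() / USECS_PER_SEC;
    uint64 window_start = current_sec - (current_sec % PGSM_BUCKET_TIME);

    pgss->prev_bucket_sec = window_start;   /* bucket start == window start */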
There was no maximum limit set for the number of histogram buckets,
which could lead to a crash in case of a very high value.
PG-382: Adjust the maximum value for histogram buckets.
Fix the regression issue for PostgreSQL-12.
PG-382: Adjust the maximum value for histogram buckets.
Fix the TAP test cases.
Similar to pg_stat_statements, pg_stat_monitor tracks WAL data metrics
since PostgreSQL 13. The problem was that for PostgreSQL versions 11 and
12 we left the WalUsage variable declared on the stack uninitialized,
thus leading to garbage values.
Fixed the problem by ignoring the WalUsage variable value for PG <= 12.
To check if a bucket has expired, a comparison of the time elapsed
since the last bucket change was being done in the get_next_wbucket() function
using the following line:
while ((current_usec - current_bucket_usec) > (PGSM_BUCKET_TIME
* 1000 * 1000))
The problem is that the expression compares a uint64 (current_usec)
with an int32 (PGSM_BUCKET_TIME). If a user configures a value for the
pgsm_bucket_time GUC (let's call it T) large enough that T*1000*1000
overflows the int32 range (exceeds 2**31-1), the result is a negative
integer cast to uint64, producing a huge uint64 value that makes the
expression evaluate to false, so the bucket number is never updated.
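A minimal sketch of one overflow-safe form of that comparison; promoting the
GUC to uint64 before the multiplication avoids the int32 overflow:

    while ((current_usec - current_bucket_usec) >
           ((uint64) PGSM_BUCKET_TIME * 1000 * 1000))
        ...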
When querying pg_stat_monitor view, for every entry it's verified if
the entry has not yet expired by calling IsBucketValid(bucket_number).
Using the entry's bucket number, the function checks whether the time
elapsed since the bucket started, tracked in the shared global variable
pgss->bucket_start_time[bucket_id], is still within the valid window.
Since pgss->bucket_start_time is not properly initialized in
get_next_wbucket(), the function IsBucketValid() will always
return false, thus not listing any entries in the view.
There was a missing increment/decrement to exec_nested_level in
pgss_ProcessUtility hook, due to this, some utility statements could
end up being processed more than once, as PostgreSQL may recurse into
this hook for sub-statements or when processing a query string
containing multiple semicolon-separated statements.
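A minimal sketch of the increment/decrement guard around the utility hook,
using the portable PG_TRY/PG_CATCH pattern; argument lists are elided:

    exec_nested_level++;
    PG_TRY();
    {
        if (prev_ProcessUtility)
            prev_ProcessUtility(pstmt, queryString, ...);
        else
            standard_ProcessUtility(pstmt, queryString, ...);
    }
    PG_CATCH();
    {
        exec_nested_level--;
        PG_RE_THROW();
    }
    PG_END_TRY();
    exec_nested_level--;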
If a query exceeds pg_stat_monitor.pgsm_query_max_len, then it's
truncated before we save it into the query buffer (SaveQueryText).
When reading the query back, on pg_stat_monitor_internal, we allocate a
buffer for the query with length = pg_stat_monitor.pgsm_query_max_len,
the problem is that the read_query function adds a '\0' to the end of
the buffer when reading a query, thus if a query has been truncated, for
example, to 1024, when reading it back read_query will store the '\0' at
the position 1025, an out of array bounds position.
Then, when we call pfree to release the buffer, PostgreSQL notices the
buffer overrun and triggers an error assertion, the assertion calls our
error hook which attempts to acquire the shared pgss->lock before
pg_stat_monitor_internal has released it, leading to a deadlock.
To avoid the problem, we allocate 1 more byte for the extra '\0' in the
palloc call for query_text and parent_query_text.
Also, we release the lock before calling pfree, just in case PostgreSQL
finds a problem in pfree we won't deadlock again and get the error
reported correctly.
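A minimal sketch of the allocation change mentioned above; the variable names
are illustrative:

    /* one extra byte for the '\0' that read_query() appends */
    query_text        = palloc(pgsm_query_max_len + 1);
    parent_query_text = palloc(pgsm_query_max_len + 1);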
If a backend would change the application name during execution,
pg_stat_monitor would then fail to read the updated value, as it caches
the result in order to avoid calling the expensive functions
pgstat_fetch_stat_numbackends() and pgstat_fetch_stat_local_beentry().
A workaround was found: we can just read an exported GUC from the
PostgreSQL backend itself, namely application_name from utils/guc.h,
saving us from having to call those expensive functions.
Updated sql files (pg_stat_monitor_settings view).
Using the right variable name and level check in pgss_store:
key.toplevel = ((exec_nested_level + plan_nested_level) == 0);
This commit introduces several improvements:
1. Removal of pgss_store_query and pgss_store_utility functions: To
store a query, we just use pgss_store(), this makes the code more
uniform.
2. Always pass the query length to the pgss_store function using parse
state from PostgreSQL to avoid calculating query length again.
3. Always clean the query (extra spaces, update query location) in
pgss_store.
4. Normalize queries right before adding them to the query buffer, but
only if user asked for query normalization.
5. Correctly handle utility queries among different PostgreSQL versions:
- A word about how utility statements are handled across PostgreSQL
  versions:
- On PostgreSQL <= 13, we have to compute a query ID, on later
versions we can call EnableQueryId() to inform Postmaster we
want to enable query ID computation.
- On PostgreSQL <= 13, post_parse hook is called after process
utility hook, on PostgreSQL >= 14, post_parse hook is called
before process utility functions.
- Based on that information, on PostgreSQL <= 13 / process utility,
we pass 0 as queryid to the pgss_store function, then we calculate a
queryid after cleaning the query (CleanQueryText) using
pgss_hash_string.
- On PostgreSQL 14 onward, post_parse() is called before
pgss_ProcessUtility; we clear the queryId for prepared-statement
related utilities. In the process utility hook, we save the query ID for
passing it to the pgss_store function, but mark the statement's query ID
as zero to avoid instrumenting it again in the executor hooks.
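A minimal sketch of the version split described in point 5, with illustrative
variable names:

    #if PG_VERSION_NUM < 140000
        /* post_parse runs after us: pass 0 now and compute the queryid
         * ourselves from the cleaned query text */
        queryid = pgss_hash_string(norm_query, strlen(norm_query));
    #else
        /* post_parse already ran: reuse its queryid, then zero it on the
         * statement so the executor hooks do not instrument it again */
        queryid = pstmt->queryId;
        pstmt->queryId = UINT64CONST(0);
    #endif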
After a couple of CPU profiling sessions with perf, it was detected that the
function pgstat_fetch_stat_numbackends() is very expensive; reading the
implementation in PostgreSQL's backend_status.c confirmed that.
We use that function on pg_stat_monitor to retrieve the application name
and IP address of the client, we now cache the results in order to avoid
calling it for every query being processed.
If pgsm_overflow_target is ON (default, 1) and the query buffer
overflows, we now dump the buffer and keep track of how many times
pg_stat_monitor has changed bucket since then.
If an overflow happens again before pg_stat_monitor cycles through
pgsm_max_buckets buckets (default 10), then we don't dump the buffer
again, but instead report an error. This ensures that only one dump file
of size pgsm_query_shared_buffer will be on disk at any time, avoiding
slowing down queries to the pg_stat_monitor view.
As soon as pg_stat_monitor cycles through all buckets, we remove the
dump file and reset the counter (pgss->n_bucket_cycles).
pgss_ExecutorEnd: Avoid unnecessary memset(plan_info, 0, ...).
We only use this object if the condition below is true, in which case we
already initialize all the fields in the object, also we now store the
plan string length (plan_info.plan_len) to avoid calling strlen on it
again later:
if (queryDesc->operation == CMD_SELECT && PGSM_QUERY_PLAN) {
... here we initialize plan_info
If the condition is false, then we pass a NULL PlanInfo* to the
pgss_store to avoid more unnecessary processing.
pgss_planner_hook: Similar, avoid memset(plan_info, 0, ...) this object
is not used here, so we pass NULL to pgss_store.
pg_get_application_name: Remove the call to strlen; snprintf already gives us
the calculated string length, so we just return it.
pg_get_client_addr: Cache localhost, avoid calling
ntohl(inet_addr("127.0.0.1")) all the time.
pgss_update_entry: Make use of PlanInfo->plan_len, avoiding a call to
strlen again.
intarray_get_datum: Init the string by setting the first byte to '\0'.
The memory area reserved for query text (pgsm_query_shared_buffer) was
divided evenly among buckets, which allowed the same query,
e.g. "SELECT 1", to be duplicated in different buckets, thus wasting space.
This commit fixes the query text duplication by adding a new hash table
whose only purpose is to verify whether a given query has already been added
to the buffer (by using the queryID).
This allows different buckets that share the same query to point to a
unique entry in the query buffer (pgss_qbuf).
When pg_stat_monitor moves to a new bucket id, by avoiding adding a
query that already exists in the buffer it can also save some CPU time.
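A minimal sketch of that duplicate check, assuming a hash table keyed by
queryID and the SaveQueryText() helper (argument list illustrative):

    bool found = false;

    hash_search(pgss_query_hash, &queryid, HASH_FIND, &found);
    if (!found)
    {
        /* first time we see this queryID: store the text once */
        query_pos = SaveQueryText(queryid, query_buf, query_text);
        hash_search(pgss_query_hash, &queryid, HASH_ENTER_NULL, &found);
    }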
There were a couple of issues to handle. The main one was that our log hook
(pgsm_emit_log_hook) was being called after the shared memory hook
completed (pgss_shmem_startup) but before PostgreSQL's bootstrapping code
finished, thus triggering the following assertion during a call to
LWLockAcquire():
Assert(!(proc == NULL && IsUnderPostmaster));
proc is a pointer to MyProc, a PostgreSQL shared global variable that
was not yet initialized by PostgreSQL.
We must also check for a NULL pointer return in pg_get_backend_status();
the pgstat_fetch_stat_local_beentry() function may return a NULL pointer
during initialization, in which case we use "127.0.0.1" for the client
address and "postmaster" for the application name.
If both modules are loaded, then pg_stat_monitor detects that and avoids
calling standard_ProcessUtility() in the ProcessUtility_hook hook, as
calling it twice is an error and triggers an assertion in PostgreSQL.
On PostgreSQL 13, pg_stat_monitor must be loaded after
pg_stat_statements, as pg_stat_statements doesn't do such verifications;
it ends up calling standard_ProcessUtility() and other functions even if
another module is registered, which is an error.
This problem was fixed in pg_stat_statements in PostgreSQL 14 and onward.
The loop that resets the query buffers was incorrectly using MAX_BUCKETS
to indicate the number of buckets to clear, which defaults to 10. If a
user lowers this value, the loop would access a pointer beyond the number
of query buffers allocated.
Fix the problem by using the correct PGSM_MAX_BUCKETS GUC as the limit
to the loop.
This commit fixes some issues when the query buffer overflows and
pg_stat_monitor attempts to dump its contents to a file.
The dump process is now as follows:
1. The dump will always be a full dump of the current query buffer,
meaning pg_stat_monitor will dump MAX_QUERY_BUFFER_BUCKET bytes to
the dump file.
2. When scanning the dump file, read chunks of size
MAX_QUERY_BUFFER_BUCKET, then look for the query ID using that chunk
and the query position metadata, this allows pg_stat_monitor to avoid
scanning the whole chunk when looking for a query ID.
The code in charge of reading from/writing to the dump file now takes into
account that read() and write() may return fewer bytes than requested;
the code now ensures that we actually read or write the required
number of bytes (MAX_QUERY_BUFFER_BUCKET), and it also handles
rare but possible interrupts during those operations.
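A minimal sketch of such a retrying read loop; error reporting is left out and
the buffer/file-descriptor names are illustrative:

    size_t done = 0;

    while (done < MAX_QUERY_BUFFER_BUCKET)
    {
        ssize_t n = read(fd, buf + done, MAX_QUERY_BUFFER_BUCKET - done);

        if (n < 0)
        {
            if (errno == EINTR)
                continue;          /* interrupted, just retry */
            break;                 /* real error, handled by the caller */
        }
        if (n == 0)
            break;                 /* unexpected end of file */
        done += n;
    }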
In SaveQueryText() we check for a possible overflow in the query buffer;
if an overflow would happen and the pgsm_overflow_target value is 1 (the
default), then we dump the query buffer to a temporary file and reset
the buffer (start saving queries from the start of the buffer). The
problem is that after resetting the buffer we don't check whether the current
query length would still exceed the buffer size of MAX_QUERY_BUFFER_BUCKET;
if that is the case, the buffer would overflow and probably crash the
process, or in the worst case become an attack vector for exploitation.
This commit fixes the problem by adding an additional overflow check
after resetting the query buffer.
The regular expression was updated to properly capture comments in SQL
in the form /* */.
The previous regex was capturing everything from the first /* until the last */
because regexes are greedy; this presented problems if an input query
had something like:
SELECT /* comment 1 */ field /* comment 2 */ from FOO;
With the previous regex, everything between /* comment 1
and comment 2 */ would be captured, and the result would be:
/* comment 1 field comment 2*/.
Multiline comments are also captured.
Multiple comments in one query are now stored in the pg_stat_monitor
comments field in the form: /* Comment 1 */, ... , /* Comment N */.
The code added in pgss_store() to handle an assertion failure when
GetUserId() was being called introduced a problem with regression tests
in some builds, specifically our PG11 and PG12 distributions for Ubuntu.
The problem was detected when calling some JSON functions that trigger
parallel workers. To solve the problem, we now check whether our hooks
are being called by parallel workers, in which case we can avoid doing
any work; this also fixes the issue mentioned above, as we no longer call
GetUserId() in an invalid context.
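A minimal sketch of that guard at the top of the hooks; the exact placement in
pg_stat_monitor may differ:

    /* do nothing when running inside a parallel worker */
    if (IsParallelWorker())
        return;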
Don't display queries in PARSE or PLAN state, to keep it consistent
with previous behavior; this avoids showing intermediate fragments of
a procedure, such as:
$1
$1 := $3
Whenever a new query is added to a query buffer, record the position
in which the query was inserted, we can then use this information to
locate the query in a faster way later on when required.
This allowed us to simplify the logic in hash_entry_dealloc(): after
creating the list of pending queries, as the list is scanned we can copy
each query from the previous query buffer to the new one by using the
query position (query_pos), which avoids scanning the whole query buffer
when looking up a queryid.
Also, when moving a query to a new buffer (copy_query), we update
the query_pos for the hash table entry, so it points to the right place
in the new query buffer.
The read_query() function was updated to accept the query position as an
argument; if pos != 0, it is used to locate the query directly, otherwise
the function falls back to the previous mode of scanning the whole buffer.
SaveQueryText() was updated to take a reference to the query position as an
argument; this value is updated when the function finishes with the position
at which the query was stored in the buffer.
There was no necessity for using a separate hash table to keep track
of queries as all the necessary information is already available in
the pgss_hash table.
The code that moves pending queries' text in hash_query_entry_dealloc
was merged into hash_entry_dealloc.
A couple of functions were no longer necessary and were removed:
- hash_create_query_entry
- pgss_store_query_info
- pgss_get_entry (this logic was added directly into pgss_store).
We simplified the logic in pgss_store, it basically works as follows:
1. Create a key (bucketid, queryid, etc...) for the query event.
2. Lookup the key in pgss_hash, if no entry exists for the key, create a
new one, save the query text into the current query buffer.
3. If jstate == NULL, then update stats counters for the entry.
There were many variables being initialized in pgss_store() before
checking whether the module was actually active, which wastes CPU cycles
when the module is disabled.
To fix that, declare the variables and initialize them only after checking
that pg_stat_monitor is active.
Added code to move all pending queries text from an expired bucket's query
buffer to the next, active query buffer.
The previous implementation was not very efficient, it worked like this,
as soon as a query is processed and a bucket expires:
1. Allocate memory to save the contents of the next query buffer.
2. Clear the next query buffer.
3. Iterate over pgss_query_hash, then, for each entry:
- If the entry's bucket id is equal to the next bucket then:
-- If the query for this entry has finished or ended in error, then
remove it from the hash table.
-- Else, if the query is not yet finished, copy the query from the
backup query buffer to the new query buffer.
Now, this copy was really expensive, because it was implemented
using read_query() / SaveQueryText(), and read_query() scans the
whole query buffer looking for some query ID, since we do this
extra lookup loop for each pending query we end up with a O(n^2)
algorithm.
4. Release the backup query buffer.
Since now we always move pending queries from an expired bucket to the
next one, there is no need to scan the next query buffer for pending
queries (the pending queries are always in the current bucket, and when
it expires we move them to the next one).
Taking that into consideration, the new implementation works as follows,
whenever a bucket expires:
1. Clear the next query buffer (all entries).
2. Define an array to store pending query ids from the expired bucket,
we use this array later on in the algorithm.
3. Iterate over pgss_query_hash, then, for each entry:
- If the entry's bucket id is equal to the next bucket then:
-- If the query for this entry has finished or ended in error, then
remove it from the hash table. This is equal to the previous
implementation.
- Else, if the entry's bucket id is equal to the just expired bucket
id (old bucket id) and the query state is pending (not yet finished),
then add this query ID to the array of pending query IDs.
Note: We define the array to hold up to 128 pending entries. If there
are more entries than that, we try to allocate memory on the heap to
store them; if the allocation fails, we manually copy every
pending query past the 128th to the next query buffer, using the
previous algorithm (read_query() / SaveQueryText()), which should be a
very rare situation.
4. Finally, if there are pending queries, copy them from the previous
query buffer to the next one using copy_queries.
Now, copy_queries() is better than looping through each query entry and
calling read_query() / SaveQueryText() to copy each of them to the new
buffer, as explained, read_query() scans the whole query buffer for
every call.
copy_queries(), instead, scans the query buffer only once, and for every
element it checks if the current query id is in the list of queries to
be copied, this is an array of uint64 that is small enough to fit in L1
cache.
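A minimal sketch of that single-pass copy, with hypothetical helpers for
walking the buffer entries:

    /* one scan over the expired buffer; pending_ids is the small array of
     * query ids collected in step 3 */
    for (off = 0; off < buf_len; off = next_entry_offset(buf, off))
    {
        uint64 qid = entry_query_id(buf, off);      /* hypothetical helpers */
        int    i;

        for (i = 0; i < n_pending; i++)
        {
            if (pending_ids[i] == qid)
            {
                append_entry(new_buf, buf, off);    /* hypothetical */
                break;
            }
        }
    }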
Another important fix in this commit is the addition of the line 1548 in
pg_stat_monitor.c / pgss_store():
query_entry->state = kind;
Before the addition of this line, entries in the pgss_query_hash
hash table were not having their status updated when the query entered
execution, finished, or ended in error, effectively leaving all entries
as pending; thus, whenever a bucket expired, all entries were being copied
from the expired bucket to the next one.
The if condition below in get_next_wbucket() was subject to a race
condition:
if ((current_usec - pgss->prev_bucket_usec) > (PGSM_BUCKET_TIME * 1000 *
1000))
Two or more threads/processes could easily evaluate this condition to
true, thus executing more than once the block that calculates a
new bucket id and clears/moves old entries in the pgss_query_hash and
pgss_hash hash tables.
To avoid this problem, we define prev_bucket_usec and current_wbucket
variables as atomic and execute a loop to check if another thread has
updated prev_bucket_usec before the current one.
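A minimal sketch of such a check-and-update using PostgreSQL's atomics;
variable names and the surrounding logic are illustrative:

    uint64 prev = pg_atomic_read_u64(&pgss->prev_bucket_usec);

    while (current_usec - prev > (uint64) PGSM_BUCKET_TIME * 1000 * 1000)
    {
        if (pg_atomic_compare_exchange_u64(&pgss->prev_bucket_usec,
                                           &prev, current_usec))
        {
            /* this backend won the race and advances the bucket id */
            bucket_id = pg_atomic_add_fetch_u64(&pgss->current_wbucket, 1)
                        % PGSM_MAX_BUCKETS;
            break;
        }
        /* another backend updated prev_bucket_usec first; prev now holds
         * its new value, so loop and re-check */
    }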
A problem during bucket management was allowing some queries to be
duplicated, old entries would sit around without having their statistics
updated.
The problem would arise during the following chain of events:
1. A query goes through pgss_post_parse_analyze, in this stage (PGSS_PARSE)
we only save the query into the query buffer and create an entry in the
query hash table.
2. The query then goes through pgss_ExecutorStart (PGSS_EXEC), in this stage
we create an entry for query statistic counters with default values,
all time stats equal zero, etc.
3. The query then goes through pgss_ExecutorEnd (PGSS_FINISH); in this stage
   we update the query statistics: number of calls, total time taken, min_time, etc.
The problem is that between steps 2 and 3, the current bucket ID timer may
have expired.
For example, during steps 1 and 2 the query may have been stored in
bucket ID 1, but when the query is finished (pgss_ExecutorEnd) the
current bucket ID may have been updated to 2.
This leaves an entry for the query in bucket ID 1 with state ACTIVE,
whose time statistics were never updated.
It also creates an entry for the query in bucket ID 2, with all
statistics (time and others) being updated for that entry.
To solve this problem, during transition to a new bucket id, we scan all
pending queries in the previous bucket id and move them to the new
bucket id.
This way finished queries will always be associated with the bucket id
that was active at the time they've finished.
The deadlock scenario is described below:
1. pgss_store is called; it acquires the lock pgss->lock.
2. An error occurs, mostly out of memory, when accessing the internal hash
   tables used to store internal data, in the functions
   pgss_store_query_info and pgss_get_entry.
3. elog() is called to report the out-of-memory error.
4. Our pgsm_emit_log_hook is called; it calls pgss_store_error, which in
   turn calls pgss_store.
5. pgss_store tries to acquire the already-acquired lock pgss->lock, and a
   deadlock happens.
To fix the problem, there are two modifications worth mentioning done by
this commit:
1. We are now passing the HASH_ENTER_NULL flag to hash_search instead of
   HASH_ENTER; as seen in the PostgreSQL sources, this prevents the
   function from reporting an error when out of memory; with
   HASH_ENTER_NULL it instead returns NULL, so we can handle the
   error ourselves.
2. In pgss_store, if an error happens after the pgss->lock is acquired,
we only set a flag, then, after releasing the lock, we check if the
flag is set and report the error accordingly.
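A minimal sketch of both modifications together; the flag variable is
illustrative:

    bool  found;
    bool  out_of_memory = false;
    void *entry;

    LWLockAcquire(pgss->lock, LW_EXCLUSIVE);

    /* HASH_ENTER_NULL returns NULL on OOM instead of raising an error
     * while we still hold pgss->lock */
    entry = hash_search(pgss_hash, &key, HASH_ENTER_NULL, &found);
    if (entry == NULL)
        out_of_memory = true;

    LWLockRelease(pgss->lock);

    if (out_of_memory)
        elog(WARNING, "pg_stat_monitor: out of memory");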
There were two objects leaking memory in this function, the comments
variable of type char * allocated using palloc0, and the regular
expression object preg.
If the regcomp function failed, the function was returning without
releasing the comments variable allocated previously.
If the regexec function failed, the function was returning without
releasing the preg object and the comments variable.
This commit makes two changes. First, it turns the comments buffer in
extract_query_comments into an argument to the function; in pgss_store we
declare the comments variable on the stack, so it cleans up after
itself.
The second change is to move the regular expression object to global
scope; this way we compile the object only once, during module
initialization.
With these two changes we fix the memory leak and avoid
allocating/releasing memory for every call to this function.
We redefine the macro _snprintf to use memcpy, which performs better. We
also update call sites using this macro to add the null terminator
'\0' to the source string length, so that memcpy also correctly
copies the null terminator to the destination string.
We update the _snprintf2 macro to use strlcpy. The reason we don't use
memcpy here is that in the place where this macro is called,
pgss_update_entry, only the maximum string length of REL_LEN=1000 is
specified as an upper bound for copying the relations string vector to the
destination counters; since this data is a string, we don't need to
copy 1k bytes for every entry, and with strlcpy the copy ends as soon as
the null terminator '\0' is found in the source string.
These variables can't be in shared state, as the following problem was
taking place:
1. Process 1 calls pgss_ExecutorCheckPerms(), acquires the lock, updates
   relations and num_relations, releases the lock.
2. Process 2 calls pgss_ExecutorCheckPerms(), acquires the lock, updates
   num_relations = 0;
3. Process 1 reads num_relations = 0 in pgss_update_entry; this value is
   wrong, as it was updated by Process 2.
Even if we acquire the lock in pgss_update_entry to read the num_relations
and relations variables, Process 1 may end up acquiring the lock after
Process 2 has overwritten the variable values, leading to Process 1
reading wrong data.
By defining relations and num_relations to be static and global in
pg_stat_monitor.c we take advantage that each individual PostgreSQL
backend will have its own copy of this data, which allows us to remove
the locking in pgss_ExecutorCheckPerms to update these variables,
improving pg_stat_monitor overall performance.
Added a new view 'pg_stat_monitor_hook_stats' that provide execution
time statistics for all hooks installed by the module, following is a
description of the fields:
- hook: The hook function name.
- min_time: The fastest execution time recorded for the given hook.
- max_time: The slowest execution time recorded for the given hook.
- total_time: Total execution time taken by all calls to the hook.
- avg_time: Average execution time of a call to the hook.
- ncalls: Total number of calls to the hook.
- load_comparison: The percentage of time taken by an individual hook
  compared to every other hook.
To enable benchmarking, the code must be compiled with the -DBENCHMARK flag;
this makes each hook function be replaced by a function with the same
name plus a '_benchmark' suffix, e.g. hook_function_benchmark.
The hook_function_benchmark calls the original function and
calculates the amount of time it took to execute, then updates the
statistics for that hook.
The query_txt variable is allocated at the beginning of the
pg_stat_monitor_internal() function and released at the end, but an
extra malloc call to allocate it was added within an internal loop in
the function, thus allocating memory on every loop iteration without
releasing it inside the loop.
The query_txt variable can be reused inside the loop body, so this
commit removes the redundant declaration of query_txt from inside the
loop, which also fixes the leak.
Add application name to the key used to identify queries in the hash
table, this allows different applications to have separate entries in
pg_stat_monitor view if they issued the same query.
If pg_stat_monitor is loaded after pg_stat_statements, then it will end
up calling the standard_planner function twice in the pgss_planner_hook()
function. This triggers an assertion failure from PostgreSQL, as
standard_planner expects an untouched Query* object, and the first call to
standard_planner() done by pg_stat_statements modifies the object.
To address the problem, we avoid calling the standard_planner function twice
in pg_stat_monitor: if a previous handler is installed for the
planner_hook, then we assume that this previous hook has already called
standard_planner and we don't call it again.
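A minimal sketch of that chaining for PostgreSQL 13 and later (older versions
take fewer arguments); the previous-hook pointer name is illustrative:

    static PlannedStmt *
    pgss_planner_hook(Query *parse, const char *query_string,
                      int cursorOptions, ParamListInfo boundParams)
    {
        PlannedStmt *result;

        if (prev_planner_hook)
            /* another module (e.g. pg_stat_statements) will call
             * standard_planner for us */
            result = prev_planner_hook(parse, query_string,
                                       cursorOptions, boundParams);
        else
            result = standard_planner(parse, query_string,
                                      cursorOptions, boundParams);

        /* ... record planning statistics here ... */
        return result;
    }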