Ensure the outer bound for the bucket is an exclusive boundary, as it
belongs to the next bucket. To explain the point further, a set of
five-second buckets would be:
Bucket 1: 00:00:00.00 -> 00:00:04.99...
Bucket 2: 00:00:05.00 -> 00:00:09.99...
Bucket 3: 00:00:10.00 -> 00:00:14.99...
...
prev_bucket_sec holds the actual time at which the previous bucket was
created, and it is used to determine whether the previous bucket's time
has elapsed and a new one needs to be created. But since the bucket start
time is rounded down to the start of the logical time window,
prev_bucket_sec and the bucket start time fall out of sync with each
other; depending on the query arrival time there is a high probability
that a bucket gets missed, especially when the last bucket was created
near the end of the bucket time window.
The solution is to keep prev_bucket_sec and the bucket start time in
sync.
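A minimal sketch of that alignment, assuming it happens inside
get_next_wbucket() (the timestamp handling and the prev_bucket_sec field
access are illustrative, not the actual implementation):

/* Round the current time down to the start of the logical time window
 * and record the same value in prev_bucket_sec so both stay in sync. */
uint64  current_sec = (uint64) (GetCurrentTimestamp() / USECS_PER_SEC);
uint64  bucket_start_sec = current_sec - (current_sec % PGSM_BUCKET_TIME);

pgss->prev_bucket_sec = bucket_start_sec;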
Moreover, we are using uint64 for storing prev_bucket_sec, which is
overkill; a simple uint should be good enough for the purpose. That
change can be taken up as part of the create-bucket function refactoring
task.
Revert the bucket locking mechanism to its previous behavior. There is
a lot of room for improvement here that needs to be part of a major
refactoring in the 2.x release.
PG-456: Running pgindent to make source indentation/spacing PostgreSQL
compatible.
PostgreSQL runs pgindent from time to time to keep the source code
compliant with its style guide; it has been a long time since we last
did that. This commit fixes a lot of indentation and spacing issues.
Co-authored-by: Hamid Akhtar <hamid.akhtar@gmail.com>
PG-453: Normalize query does not work with INSERT statements.
This commit fixes the issue in a way similar to pg_stat_statements:
query jumbling is skipped for INSERT statements to avoid duplicate
query IDs, but we had already solved that problem in another way in a
different commit.
The buckets are now created with their start time aligned to a multiple
of the bucket time size. So if we have a 10-second bucket, the start
times would reflect that:
- Bucket1: 00:00:00
- Bucket2: 00:00:10
- Bucket3: 00:00:20
...
Previously, the start time of the bucket was aligned with the first
query that arrived in that bucket. This behaviour has now changed: even
if the first query for bucket 2 arrives at 00:00:13, the start time is
still set to 00:00:10.
This change separates the buckets into fixed time windows so that
external applications can easily consume the data and chart it.
Also, as part of this change, the locking of pgss is updated and
extended to cover the bucket-related changes.
There was no maximum limit set for the number of histogram buckets,
which could lead to a crash in case of a higher value.
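A hedged sketch of the kind of bound that addresses this, using
PostgreSQL's GUC API (the GUC name, the backing variable, and the
MAX_RESPONSE_BUCKET limit are assumptions, not the exact code):

/* Cap the histogram bucket count with an explicit maximum so an
 * oversized setting is rejected instead of crashing the server. */
DefineCustomIntVariable("pg_stat_monitor.pgsm_histogram_buckets",
                        "Maximum number of histogram buckets.",
                        NULL,
                        &pgsm_histogram_buckets,    /* assumed variable */
                        10,                         /* boot value */
                        2,                          /* minimum */
                        MAX_RESPONSE_BUCKET,        /* assumed maximum */
                        PGC_POSTMASTER,
                        0,
                        NULL, NULL, NULL);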
PG-382: Adjust the maximum value for histogram buckets.
Fix the regression issue for PostgreSQL-12.
PG-382: Adjust the maximum value for histogram buckets.
Fix the TAP test cases.
Similar to pg_stat_statements, pg_stat_monitor tracks WAL data metrics
since PostgreSQL 13. The problem was that for PostgreSQL versions 11 and
12 we left the WalUsage variable declared on the stack uninitialized,
thus leading to garbage values.
Fixed the problem by ignoring the WalUsage variable value for PG <= 12.
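A sketch of the version guard in the executor-end path (walusage_start
is an assumed snapshot taken when execution started):

#if PG_VERSION_NUM >= 130000
    WalUsage    walusage;

    /* WAL usage counters only exist from PostgreSQL 13 onward. */
    memset(&walusage, 0, sizeof(WalUsage));
    WalUsageAccumDiff(&walusage, &pgWalUsage, &walusage_start);
#endif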
To check if a bucket has expired, a comparison of the time elapsed
since last bucket change was being done in get_next_wbucket() function
using the following line:
while ((current_usec - current_bucket_usec) > (PGSM_BUCKET_TIME * 1000 * 1000))
The problem is that the expression compares a uint64 (current_usec)
with an int32 (PGSM_BUCKET_TIME). If a user configures a value T for the
pgsm_bucket_time GUC such that T * 1000 * 1000 overflows the int32 range
(T * 1000 * 1000 > 2**31 - 1), the result is a negative integer cast to
uint64, i.e. a huge uint64 value that makes the expression always
evaluate to false, so the bucket number is never updated.
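A minimal sketch of the corrected comparison; widening PGSM_BUCKET_TIME
before the multiplication is one way to avoid the 32-bit overflow:

/* Promote PGSM_BUCKET_TIME to 64 bits before multiplying so that
 * T * 1000 * 1000 can no longer wrap around in int32 arithmetic. */
while ((current_usec - current_bucket_usec) >
       ((uint64) PGSM_BUCKET_TIME * 1000 * 1000))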
When querying the pg_stat_monitor view, every entry is checked to verify
that it has not yet expired by calling IsBucketValid(bucket_number).
Using the entry's bucket number, the function checks whether the time
elapsed since the bucket started, taken from the shared global variable
pgss->bucket_start_time[bucket_id], is still within the valid window.
Since pgss->bucket_start_time is not properly initialized in
get_next_wbucket(), IsBucketValid() always returns false, so no entries
are listed in the view.
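A hedged sketch of the missing initialization in get_next_wbucket()
(the field layout and variable names are assumptions):

/* Record when the new bucket starts so IsBucketValid() has a valid
 * reference point to compare against. */
pgss->bucket_start_time[new_bucket_id] = GetCurrentTimestamp();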
There was a missing increment/decrement of exec_nested_level in the
pgss_ProcessUtility hook. Due to this, some utility statements could
end up being processed more than once, as PostgreSQL may recurse into
this hook for sub-statements or when processing a query string
containing multiple semicolon-separated statements.
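A minimal sketch of the usual guard around the recursive call, using
the PG_FINALLY() pattern available since PostgreSQL 13 (the hook body is
elided into a comment):

exec_nested_level++;
PG_TRY();
{
    /* ... call standard_ProcessUtility()/the previous hook as before ... */
}
PG_FINALLY();
{
    exec_nested_level--;
}
PG_END_TRY();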
If a query exceeds pg_stat_monitor.pgsm_query_max_len, then it's
truncated before we save it into the query buffer (SaveQueryText).
When reading the query back, in pg_stat_monitor_internal, we allocate a
buffer for the query with length = pg_stat_monitor.pgsm_query_max_len.
The problem is that the read_query function adds a '\0' to the end of
the buffer when reading a query, so if a query has been truncated, for
example, to 1024, when reading it back read_query will store the '\0'
at position 1025, which is out of the array bounds.
Then, when we call pfree to release the buffer, PostgreSQL notices the
buffer overrun and triggers an error assertion; the assertion calls our
error hook, which attempts to acquire the shared pgss->lock before
pg_stat_monitor_internal has released it, leading to a deadlock.
To avoid the problem we allocate 1 more byte for the extra '\0' in the
palloc call for query_text and parent_query_text.
Also, we release the lock before calling pfree, so that if PostgreSQL
finds a problem in pfree we won't deadlock again and the error gets
reported correctly.
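A hedged sketch of the two changes in pg_stat_monitor_internal (the
variable names and exact sizes are illustrative):

/* Extra byte so read_query() has room for the '\0' it appends even
 * when the stored query was truncated to the maximum length. */
query_text = (char *) palloc(pgsm_query_max_len + 1);
parent_query_text = (char *) palloc(pgsm_query_max_len + 1);

/* Release the shared lock before pfree so a failure inside pfree
 * cannot re-enter the error hook while the lock is still held. */
LWLockRelease(pgss->lock);
pfree(query_text);
pfree(parent_query_text);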
If a backend would change the application name during execution,
pg_stat_monitor would then fail to read the updated value, as it caches
the result in order to avoid calling the expensive functions
pgstat_fetch_stat_numbackends() and pgstat_fetch_stat_local_beentry().
A workaround was found: we can just read an exported GUC from the
PostgreSQL backend itself, namely application_name from utils/guc.h,
thus saving us from having to call those expensive functions.
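A minimal sketch of the workaround (the wrapper function is
illustrative, not the actual code):

#include "utils/guc.h"          /* exports the application_name GUC */

/* Return the backend's current application_name without touching the
 * backend-status machinery. */
static const char *
pgsm_get_application_name(void)
{
    return application_name ? application_name : "unknown";
}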
Updated sql files (pg_stat_monitor_settings view).
Using the right variable name and level checking in pgss_store:
key.toplevel = ((exec_nested_level + plan_nested_level) == 0);
This commit introduces several improvements:
1. Removal of pgss_store_query and pgss_store_utility functions: to
store a query, we just use pgss_store(); this makes the code more
uniform.
2. Always pass the query length to the pgss_store function using parse
state from PostgreSQL to avoid calculating query length again.
3. Always clean the query (extra spaces, update query location) in
pgss_store.
4. Normalize queries right before adding them to the query buffer, but
only if the user asked for query normalization.
5. Correctly handle utility queries across different PostgreSQL versions:
- A word about how utility statements are handled across PostgreSQL
versions:
- On PostgreSQL <= 13, we have to compute a query ID ourselves; on
later versions we can call EnableQueryId() to inform the Postmaster
that we want query ID computation enabled (see the sketch after this
list).
- On PostgreSQL <= 13, the post_parse hook is called after the process
utility hook; on PostgreSQL >= 14, the post_parse hook is called
before the process utility functions.
- Based on that, on PostgreSQL <= 13 / process utility, we pass 0 as
the queryid to the pgss_store function, then calculate a queryid after
cleaning the query (CleanQueryText) using pgss_hash_string.
- On PostgreSQL 14 onward, post_parse() is called before
pgss_ProcessUtility; we clear the queryId for prepared-statement
related utilities, and in the process utility hook we save the query
ID for passing to the pgss_store function, but mark it as zero to
avoid instrumenting the query again in the executor hooks.
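A sketch of the version split described in point 5 (the CleanQueryText
and pgss_hash_string signatures are assumed; EnableQueryId() is the core
function mentioned above):

/* In _PG_init(), on PG >= 14, ask the core to compute query IDs. */
#if PG_VERSION_NUM >= 140000
EnableQueryId();
#endif

/* In pgss_ProcessUtility() on PG <= 13, no query ID exists yet, so one
 * is derived from the cleaned query text. */
#if PG_VERSION_NUM < 140000
query_text = CleanQueryText(query_text, &query_location, &query_len);
queryid = pgss_hash_string(query_text, query_len);
#endif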
After a couple of CPU profiling sessions with perf, it was detected
that the function pgstat_fetch_stat_numbackends() is very expensive;
reading the implementation in PostgreSQL's backend_status.c just
confirmed that.
We use that function in pg_stat_monitor to retrieve the application
name and IP address of the client; we now cache the results in order to
avoid calling it for every query being processed.
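A hedged sketch of the caching pattern, shown for the client address
(the helper names and types are illustrative):

/* Remember the client address after the first lookup so the expensive
 * backend-status scan runs only once per backend. */
static uint32  cached_client_addr;
static bool    client_addr_valid = false;

static uint32
pgsm_cached_client_addr(void)
{
    if (!client_addr_valid)
    {
        cached_client_addr = pg_get_client_addr();  /* expensive path */
        client_addr_valid = true;
    }
    return cached_client_addr;
}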
If pgsm_overflow_target is ON (default, 1) and the query buffer
overflows, we now dump the buffer and keep track of how many times
pg_stat_monitor has changed buckets since then.
If an overflow happens again before pg_stat_monitor cycles through
pgsm_max_buckets buckets (default 10), then we don't dump the buffer
again but instead report an error. This ensures that only one dump file
of size pgsm_query_shared_buffer is on disk at any time, avoiding
slowing down queries to the pg_stat_monitor view.
As soon as pg_stat_monitor cycles through all buckets, we remove the
dump file and reset the counter (pgss->n_bucket_cycles).
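A sketch of the bucket-change side of that logic (the dump file macro
is an assumed name):

/* On every bucket change, age the dump file: once all buckets have
 * been recycled since the dump was taken, drop it and start over. */
pgss->n_bucket_cycles++;
if (pgss->n_bucket_cycles >= PGSM_MAX_BUCKETS)
{
    unlink(DUMP_FILE_PATH);         /* assumed dump file name macro */
    pgss->n_bucket_cycles = 0;
}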
pgss_ExecutorEnd: Avoid unnecessary memset(plan_info, 0, ...).
We only use this object if the condition below is true, in which case
we already initialize all of its fields; we also now store the plan
string length (plan_info.plan_len) to avoid calling strlen on it again
later:
if (queryDesc->operation == CMD_SELECT && PGSM_QUERY_PLAN)
{
    /* ... here we initialize plan_info ... */
}
If the condition is false, then we pass a NULL PlanInfo* to pgss_store
to avoid more unnecessary processing.
pgss_planner_hook: Similarly, avoid memset(plan_info, 0, ...); this
object is not used here, so we pass NULL to pgss_store.
pg_get_application_name: Remove the call to strlen; snprintf already
gives us the calculated string length, so we just return it.
pg_get_client_addr: Cache localhost, avoid calling
ntohl(inet_addr("127.0.0.1")) all the time.
pgss_update_entry: Make use of PlanInfo->plan_len, avoiding a call to
strlen again.
intarray_get_datum: Init the string by setting the first byte to '\0'.
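A sketch of the cached loopback lookup mentioned in the
pg_get_client_addr item above (a fragment of that function; ntohl and
inet_addr come from <arpa/inet.h>):

/* Compute the loopback address once and reuse it afterwards. */
static uint32 localhost_addr = 0;

if (localhost_addr == 0)
    localhost_addr = ntohl(inet_addr("127.0.0.1"));
return localhost_addr;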
The memory area reserved for query text (pgsm_query_shared_buffer) was
divided evenly for each bucket, which allowed the same query, e.g.
"SELECT 1", to be duplicated in different buckets, thus wasting space.
This commit fixes the query text duplication by adding a new hash table
whose only purpose is to verify whether a given query has already been
added to the buffer (by using the queryID).
This allows different buckets that share the same query to point to a
unique entry in the query buffer (pgss_qbuf).
When pg_stat_monitor moves to a new bucket id, avoiding re-adding a
query that already exists in the buffer also saves some CPU time.
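A hedged sketch of the lookup, using PostgreSQL's dynahash API (the
hash table name, entry struct, and SaveQueryText arguments are
assumptions):

/* Look the query up by queryid first; append the text to pgss_qbuf
 * only when it is not already there. */
bool            found;
QueryTextEntry *entry;

entry = (QueryTextEntry *) hash_search(pgss_query_hash, &queryid,
                                       HASH_ENTER, &found);
if (!found)
    entry->query_pos = SaveQueryText(queryid, pgss_qbuf, query_text);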
There were a couple of issues to handle; the main one was that our log
hook (pgsm_emit_log_hook) was being called after the shared memory hook
completed (pgss_shmem_startup) but before PostgreSQL's bootstrapping
code finished, thus triggering the following assertion during a call to
LWLockAcquire():
Assert(!(proc == NULL && IsUnderPostmaster));
proc is a pointer to MyProc, a PostgreSQL shared global variable that
was not yet initialized by PostgreSQL.
We must also check for a NULL pointer return in pg_get_backend_status();
the pgstat_fetch_stat_local_beentry() function may return a NULL pointer
during initialization, in which case we use "127.0.0.1" for the client
address and "postmaster" for the application name.
If both modules are loaded, pg_stat_monitor detects that and avoids
calling standard_ProcessUtility() in the ProcessUtility_hook hook, as
calling it twice is an error and triggers an assertion in PostgreSQL.
On PostgreSQL 13, pg_stat_monitor must be loaded after
pg_stat_statements, as pg_stat_statements doesn't do such verification;
it ends up calling standard_ProcessUtility() and other functions even
if another module is registered, which is an error.
This problem was fixed in pg_stat_statements in PostgreSQL 14 and
onward.
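A sketch of the standard hook-chaining pattern that avoids the double
call, shown with the PostgreSQL 13 ProcessUtility signature (the
argument list differs slightly on other versions; prev_ProcessUtility is
the saved previous hook):

if (prev_ProcessUtility)
    /* Another module (e.g. pg_stat_statements) is installed: let it
     * drive the chain instead of calling the standard routine again. */
    prev_ProcessUtility(pstmt, queryString, context, params,
                        queryEnv, dest, qc);
else
    standard_ProcessUtility(pstmt, queryString, context, params,
                            queryEnv, dest, qc);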
The loop that resets the query buffers was incorrectly using
MAX_BUCKETS to indicate the number of buckets to clear, which defaults
to 10. If a user lowers this value, the loop would access a pointer
beyond the number of query buffers allocated.
Fix the problem by using the correct PGSM_MAX_BUCKETS GUC as the limit
to the loop.
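A hedged sketch of the corrected loop (the buffer layout is an
assumption):

/* Iterate over the configured number of buckets, not the compile-time
 * maximum. */
for (int i = 0; i < PGSM_MAX_BUCKETS; i++)      /* was: i < MAX_BUCKETS */
    memset(pgss_qbuf[i], 0, MAX_QUERY_BUFFER_BUCKET);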
This commit fixes some issues seen when the query buffer overflows and
pg_stat_monitor attempts to dump its contents to a file.
The dump process is now as follows:
1. The dump will always be a full dump of the current query buffer,
meaning pg_stat_monitor will dump MAX_QUERY_BUFFER_BUCKET bytes to
the dump file.
2. When scanning the dump file, read chunks of size
MAX_QUERY_BUFFER_BUCKET, then look for the query ID using that chunk
and the query position metadata; this allows pg_stat_monitor to avoid
scanning the whole chunk when looking for a query ID.
The code in charge of reading from / writing to the dump file now takes
into account that read() and write() may return fewer bytes than
requested; the code now ensures that we actually read or write the
required amount of bytes (MAX_QUERY_BUFFER_BUCKET), and it also handles
rare but possible interrupts during those operations.
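A sketch of the write side of that loop (the helper name is
illustrative; write() and errno come from <unistd.h> and <errno.h>):

/* Keep writing until the whole chunk is on disk, retrying after short
 * writes and EINTR. */
static bool
write_all(int fd, const char *buf, size_t nbytes)
{
    while (nbytes > 0)
    {
        ssize_t     rc = write(fd, buf, nbytes);

        if (rc < 0)
        {
            if (errno == EINTR)
                continue;           /* interrupted, just retry */
            return false;           /* real I/O error */
        }
        buf += rc;
        nbytes -= (size_t) rc;
    }
    return true;
}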
In SaveQueryText() we check for a possible overflow in the query
buffer, but if an overflow would happen and the pgsm_overflow_target
value is 1 (the default), then we dump the query buffer to a temporary
file and reset the buffer (start saving queries from the start of the
buffer). The problem is that after resetting the buffer we don't check
whether the current query length would exceed the buffer size of
MAX_QUERY_BUFFER_BUCKET; if that is the case, the buffer would overflow
and probably crash the process, or in the worst case become an attack
vector for exploitation.
This commit fixes the problem by adding an additional overflow check
after resetting the query buffer.
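A minimal sketch of that extra check (the exact bound depends on the
buffer layout, which is assumed here):

/* After dumping and resetting the buffer, re-check that the incoming
 * query still fits before copying it in. */
if (query_len >= MAX_QUERY_BUFFER_BUCKET)
    return 0;               /* cannot fit even in an empty buffer */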