We can't call elog() from inside pgsm_log because the pgss_hash lock
may already be held in exclusive mode: elog() triggers the
pgsm_emit_log_hook, which calls pgss_store(), which then tries to
acquire the pgss_hash lock again, leading to a deadlock.
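A hedged sketch of the re-entrant chain (pgsm_log is this module's
logging helper; the call sites and signatures below are simplified
assumptions, not the actual code):

static void pgss_store(const char *query);

static void
pgsm_emit_log_hook(ErrorData *edata)
{
    /* Runs for every elog()/ereport() and stores the logged message. */
    pgss_store(edata->message);
}

static void
pgss_store(const char *query)
{
    LWLockAcquire(pgss->lock, LW_EXCLUSIVE);    /* the pgss_hash lock */

    /* Any elog()/pgsm_log() call made while the lock is held fires
     * pgsm_emit_log_hook, which re-enters pgss_store(); the second
     * LWLockAcquire() on the same lock then self-deadlocks. */

    LWLockRelease(pgss->lock);
}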
As query normalization and query cleaning are now always done in the
right place (pgss_store), parsed queries no longer have a trailing
semicolon ';' at the end.
Also, in the error regression test, after fixing some problems with
utility-related queries, we now have two entries for the RAISE WARNING
case: the first entry is the utility query itself, the second is the
error message logged by emit_log_hook.
Some queries have their order adjusted due to the fixes introduced by
the previous commits.
This commit introduces several improvements:
1. Removal of the pgss_store_query and pgss_store_utility functions: to
store a query, we now just use pgss_store(), which makes the code more
uniform.
2. Always pass the query length to pgss_store using the parse state
from PostgreSQL, to avoid calculating the query length again.
3. Always clean the query (strip extra spaces, update the query
location) in pgss_store.
4. Normalize queries right before adding them to the query buffer, but
only if the user asked for query normalization.
5. Correctly handle utility queries across different PostgreSQL versions
(see the sketch after this list):
  - A word about how utility statements are handled on PG 13 and later
    versions:
    - On PostgreSQL <= 13 we have to compute a query ID ourselves; on
      later versions we can call EnableQueryId() to inform the
      postmaster that we want query ID computation enabled.
    - On PostgreSQL <= 13 the post_parse hook is called after the
      process utility hook; on PostgreSQL >= 14 the post_parse hook is
      called before the process utility functions.
  - Based on that, on PostgreSQL <= 13 / process utility we pass 0 as
    the queryid to pgss_store, then calculate a queryid after cleaning
    the query (CleanQueryText) using pgss_hash_string.
  - On PostgreSQL 14 onward, post_parse() is called before
    pgss_ProcessUtility; we clear the queryId for prepared-statement
    related utility commands, and in the process utility hook we save
    the query ID so it can be passed to pgss_store, but mark the query
    ID as zero to avoid instrumenting the query again in the executor
    hooks.
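A minimal sketch of that version split, using the names mentioned in
this commit (EnableQueryId, CleanQueryText, pgss_hash_string); the
helper below and saved_utility_queryid are assumptions, not the
module's actual code:

void
_PG_init(void)
{
#if PG_VERSION_NUM >= 140000
    /* PG 14+: ask the core to compute query IDs for every statement. */
    EnableQueryId();
#endif
}

static uint64
pgsm_utility_queryid(const char *query, int *query_location, int *query_len)
{
#if PG_VERSION_NUM < 140000
    /* PG <= 13: process utility passes queryid = 0 to pgss_store, which
     * derives one from the cleaned query text. */
    const char *clean = CleanQueryText(query, query_location, query_len);

    return pgss_hash_string(clean, *query_len);
#else
    /* PG 14+: reuse the ID the core computed; it was saved in the
     * process-utility hook and zeroed afterwards so the executor hooks
     * don't instrument the statement twice. */
    return saved_utility_queryid;
#endif
}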
After a couple of CPU profiling sessions with perf, we found that the
function pgstat_fetch_stat_numbackends() is very expensive; reading its
implementation in PostgreSQL's backend_status.c just confirmed that.
We use that function in pg_stat_monitor to retrieve the application name
and IP address of the client; we now cache the results to avoid calling
it for every query being processed.
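A minimal sketch of the caching, assuming a hypothetical helper
lookup_backend_info() that performs the expensive lookup:

static bool   backend_info_cached = false;
static char   cached_app_name[NAMEDATALEN];
static uint32 cached_client_ip;

static void
pgsm_backend_info(const char **app_name, uint32 *client_ip)
{
    if (!backend_info_cached)
    {
        /* Expensive path: internally walks the whole backend status
         * array via pgstat_fetch_stat_numbackends(); done once per
         * backend instead of once per query. */
        lookup_backend_info(cached_app_name, &cached_client_ip);
        backend_info_cached = true;
    }
    *app_name  = cached_app_name;
    *client_ip = cached_client_ip;
}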
If pgsm_overflow_target is ON (the default, 1) and the query buffer
overflows, we now dump the buffer and keep track of how many times
pg_stat_monitor has changed bucket since then.
If an overflow happens again before pg_stat_monitor cycles through
pgsm_max_buckets buckets (default 10), then we don't dump the buffer
again but instead report an error; this ensures that only one dump file
of size pgsm_query_shared_buffer is on disk at any time, avoiding
slowing down queries to the pg_stat_monitor view.
As soon as pg_stat_monitor cycles through all buckets, we remove the
dump file and reset the counter (pgss->n_bucket_cycles).
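A sketch of the policy (the GUC and counter names follow this commit;
pgsm_log_error, dump_query_buffer, and dump_file_path are hypothetical
helpers):

/* On query buffer overflow: */
if (PGSM_OVERFLOW_TARGET)
{
    if (pgss->n_bucket_cycles > 0)
    {
        /* A dump file already exists; report an error rather than
         * stacking another pgsm_query_shared_buffer-sized file on disk. */
        pgsm_log_error("query buffer overflowed while a dump file exists");
    }
    else
    {
        dump_query_buffer();       /* one file, size pgsm_query_shared_buffer */
        pgss->n_bucket_cycles = 1; /* start counting bucket changes */
    }
}

/* On each bucket change: */
if (pgss->n_bucket_cycles > 0 && ++pgss->n_bucket_cycles > PGSM_MAX_BUCKETS)
{
    unlink(dump_file_path);        /* cycled through all buckets */
    pgss->n_bucket_cycles = 0;     /* reset; a future overflow may dump again */
}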
pgss_ExecutorEnd: Avoid an unnecessary memset(plan_info, 0, ...).
We only use this object if the condition below is true, in which case we
already initialize all of its fields; we also now store the plan string
length (plan_info.plan_len) to avoid calling strlen on it again later:
if (queryDesc->operation == CMD_SELECT && PGSM_QUERY_PLAN)
{
    /* ... here we initialize all plan_info fields, including plan_len */
}
If the condition is false, we pass a NULL PlanInfo* to pgss_store to
avoid further unnecessary processing.
pgss_planner_hook: Similarly, avoid memset(plan_info, 0, ...); this
object is not used here, so we pass NULL to pgss_store. Both call sites
are sketched below.
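A compact sketch, assuming a PlanInfo struct with plan and plan_len
fields and a simplified pgss_store parameter list (plan_text is a
hypothetical source string):

/* pgss_ExecutorEnd: only pay for plan_info when a plan is captured. */
PlanInfo  plan_info;
PlanInfo *plan_ptr = NULL;

if (queryDesc->operation == CMD_SELECT && PGSM_QUERY_PLAN)
{
    plan_info.plan_len = snprintf(plan_info.plan, sizeof(plan_info.plan),
                                  "%s", plan_text);
    if (plan_info.plan_len >= (int) sizeof(plan_info.plan))
        plan_info.plan_len = sizeof(plan_info.plan) - 1;  /* truncated */
    plan_ptr = &plan_info;
}
pgss_store(query, queryid, plan_ptr /* , ... */);

/* pgss_planner_hook: the plan is never used here, pass NULL directly. */
pgss_store(query, queryid, NULL /* , ... */);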
pg_get_application_name: Remove the call to strlen; snprintf already
gives us the calculated string length, so we just return it.
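A minimal sketch; application_name is PostgreSQL's GUC-backed global,
and the signature below is an assumption:

static int
pg_get_application_name(char *name, int buf_len)
{
    /* snprintf returns the length it wrote (or would have written on
     * truncation), excluding the NUL, so no separate strlen is needed. */
    return snprintf(name, buf_len, "%s", application_name);
}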
pg_get_client_addr: Cache the localhost address to avoid calling
ntohl(inet_addr("127.0.0.1")) every time.
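A sketch of the cached constant (requires <arpa/inet.h>; the function
name is an assumption):

static uint32
pgsm_localhost_ip(void)
{
    static uint32 localhost = 0;   /* 127.0.0.1 never changes; compute once */

    if (localhost == 0)
        localhost = ntohl(inet_addr("127.0.0.1"));
    return localhost;
}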
pgss_update_entry: Make use of PlanInfo->plan_len, avoiding another
call to strlen.
intarray_get_datum: Init the string by setting the first byte to '\0'.
The memory area reserved for query text (pgsm_query_shared_buffer) was
divided evenly among buckets, which allowed the same query, e.g.
"SELECT 1", to be duplicated in different buckets, wasting space.
This commit fixes the query text duplication by adding a new hash table
whose only purpose is to verify whether a given query has already been
added to the buffer (by using the queryID).
This allows different buckets that share the same query to point to a
unique entry in the query buffer (pgss_qbuf).
It also saves some CPU time: when pg_stat_monitor moves to a new bucket
id, it avoids re-adding a query that already exists in the buffer.
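A sketch of the dedup check using PostgreSQL's dynahash API;
pgss_query_hash and append_query_text are assumed names:

bool found;

/* HASH_ENTER inserts the queryid if absent and reports whether it
 * already existed. */
hash_search(pgss_query_hash, &queryid, HASH_ENTER, &found);
if (!found)
{
    /* First sighting of this query: copy its text into pgss_qbuf once.
     * Entries in every bucket then reference this single copy. */
    append_query_text(pgss_qbuf, queryid, query_text, query_len);
}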
There were a couple of issues to handle; the main one was that our log
hook (pgsm_emit_log_hook) was being called after the shared memory hook
completed (pgss_shmem_startup) but before PostgreSQL's bootstrapping
code had finished, thus triggering the following assertion during a
call to LWLockAcquire():
Assert(!(proc == NULL && IsUnderPostmaster));
proc is a pointer to MyProc, a PostgreSQL shared global variable that
had not yet been initialized by PostgreSQL.
We must also check for a NULL pointer return in pg_get_backend_status():
the pgstat_fetch_stat_local_beentry() function may return a NULL pointer
during initialization, in which case we use "127.0.0.1" for the client
address and "postmaster" for the application name. Both guards are
sketched below.
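A sketch of both guards (MyProc, IsUnderPostmaster, and
pgstat_fetch_stat_local_beentry() are core PostgreSQL symbols; the
surrounding code is simplified):

static void
pgsm_emit_log_hook(ErrorData *edata)
{
    /* Too early in startup: MyProc is not set yet, and LWLockAcquire()
     * would hit Assert(!(proc == NULL && IsUnderPostmaster)). */
    if (MyProc == NULL)
        return;

    /* ... safe to take LWLocks and store the message ... */
}

/* In pg_get_backend_status(): */
LocalPgBackendStatus *local = pgstat_fetch_stat_local_beentry(backend_id);

if (local == NULL)
{
    /* Still initializing: fall back to defaults. */
    app_name    = "postmaster";
    client_addr = "127.0.0.1";
}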
If both modules are loaded, pg_stat_monitor detects that and avoids
calling standard_ProcessUtility() in its ProcessUtility_hook, as calling
it twice is an error and triggers an assertion in PostgreSQL.
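The detection follows the usual hook-chaining pattern; the exact
ProcessUtility_hook signature varies across PostgreSQL versions, so the
parameter list is abbreviated here:

static void
pgsm_ProcessUtility(PlannedStmt *pstmt, const char *queryString /* , ... */)
{
    /* ... instrumentation ... */

    if (prev_ProcessUtility)
        /* Another module (e.g. pg_stat_statements) is installed below
         * us; let it decide whether to call standard_ProcessUtility(). */
        prev_ProcessUtility(pstmt, queryString /* , ... */);
    else
        standard_ProcessUtility(pstmt, queryString /* , ... */);
}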
On PostgreSQL 13, pg_stat_monitor must be loaded after
pg_stat_statements, because pg_stat_statements doesn't do such
verification; it ends up calling standard_ProcessUtility() and other
functions even if another module is registered, which is an error.
This problem in pg_stat_statements was fixed in PostgreSQL 14 onward.