Instead of trying to fix every case where we could throw an error and
handle it properly, we just make sure to disable the hook's error
capture while our backend holds the lock.
We keep the IsSystemOOM() check in the hook even though it may no
longer be relevant, because if we are in an OOM situation there is no
point in logging the error anyway.
This is done via a global variable, similar to the
__pgsm_do_not_capture_error variable that we are replacing, which we
also use in one place to disable recursive calls to the log hook
where we do not hold the lock.
A potential future improvement would be to make this variable a counter,
or have two separate globals, so that we could guard against recursive
calls to the hook running us out of stack and not just prevent the
deadlocks.
* Denormalize prepared statement queries
Added support for extracting query arguments for prepared statements
when `pg_stat_monitor.pgsm_normalized_query` is off.
Previously pg_stat_monitor was unable to extract the arguments for
prepared statements, thus leaving queries with placeholders $1
.. $N instead of the actual arguments.
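A minimal sketch of the new behavior (object names and the session-level
SET are illustrative assumptions):
```
-- Illustrative only; assumes pg_stat_monitor is loaded and that the GUC
-- can be set at session level (otherwise set it in postgresql.conf).
SET pg_stat_monitor.pgsm_normalized_query = off;
PREPARE stmt(int, text) AS
    SELECT relname FROM pg_class WHERE relpages > $1 AND relname = $2;
EXECUTE stmt(100, 'pg_proc');
-- The stored text now contains the actual arguments, e.g.
-- "... WHERE relpages > 100 AND relname = 'pg_proc'",
-- instead of the $1/$2 placeholders.
SELECT query FROM pg_stat_monitor WHERE query LIKE '%relpages%';
```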
* Optimize query denormalization
Instead of copying the original query text byte by byte, copy the data
between query placeholders in chunks. For example:
`INSERT INTO foo(a, b, c) VALUES('test', 100, 'test again')`
would result in the normalized query:
`INSERT INTO foo(a, b, c) VALUES($1, $2, $3)`
The original patch would copy the parts between placeholders byte by
byte, e.g. `INSERT INTO foo(a, b, c) VALUES(`; instead we can copy each
such block at once: one function call and at most one buffer
reallocation per chunk.
Also make use of `appendBinaryStringInfo` to avoid calculating the
string length, as we already have this information.
* Optimize query denormalization (2)
Avoid allocating an array of strings for extracting query argument
values; instead, append the current parameter value directly to the
buffer used to store the denormalized query.
This avoids not only unnecessary memory allocations, but also copying
data between temporary memory and the buffer.
* Store denormalized query only under certain constraints
This commit introduces a small optimization along with a feature: it
stores the query in denormalized form only under the circumstances
below:
- The pgsm_normalized_query GUC is disabled (off).
- The query is seen for the first time, or the query's total
  execution time exceeds the mean execution time calculated for
  the previous queries.
Having the query that took the most execution time along with its
arguments could help users in further investigating performance issues.
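A sketch of the rule, assuming pgsm_normalized_query is off (names and
timings are illustrative):
```
-- Illustrative only; assumes pgsm_normalized_query = off.
SET pg_stat_monitor.pgsm_normalized_query = off;
PREPARE p(float8) AS SELECT pg_sleep($1);
EXECUTE p(0.5);  -- first sighting: stored denormalized, with 0.5
EXECUTE p(0.0);  -- cheaper than the running mean: stored text unchanged
-- The entry should still show the expensive invocation's argument.
SELECT query FROM pg_stat_monitor WHERE query LIKE '%pg_sleep%';
```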
* Fix regression tests
When query normalization is disabled, utility queries like SELECT 10+20
are now stored as is, instead of SELECT $1+$2.
Also, when functions or subqueries are created, the arguments used
internally by the function or subquery will be replaced by NULL instead
of $1..$N. The actual arguments will be displayed when the function or
subquery is actually invoked.
* Add query denormalization regression test for prepared statements
Ensures that the denormalization of prepared statements is working, and
also that a query which takes more time to execute replaces the
previously stored denormalized query.
* Updated pgsm_query_id regression tests
With the query denormalization feature, integer literals such as 1 or 2
used in SQL could create some confusion as to whether they are
placeholders or constant values, so this commit updates the
pgsm_query_id regression test to use different integer literals to
avoid confusion.
* Improve query denormalization regression test
Add a new test case:
1. Execute a prepared statement with a larger execution time first.
2. Execute the same prepared statement with a cheap execution time.
3. Ensure that the denormalized heavy query is not replaced by the
   cheaper one.
* Format source using pgindent
* Fix top query regression tests on PG 12,13
On PG 12, 13, the internal return instruction in the following function:
```
CREATE OR REPLACE FUNCTION add(int, int) RETURNS INTEGER AS
$$
BEGIN
return (select $1 + $2);
END; $$ language plpgsql;
```
is stored as `SELECT (select expr1 + expr2)`.
On PG 14 onward it is stored just as `SELECT (expr1 + expr2)`.
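For reference, a sketch of how the difference can be observed through
the view (the output comments paraphrase the note above; column names
follow the pg_stat_monitor view):
```
SELECT add(1, 2);
-- Inspect how the inner statement and its parent were recorded:
SELECT query, top_query FROM pg_stat_monitor WHERE top_query IS NOT NULL;
-- PG 12/13: the inner entry reads "SELECT (select expr1 + expr2)"
-- PG 14+:   it reads "SELECT (expr1 + expr2)"
```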
* PG-592: Treat queries with different parent queries as separate entries
1. Previously pg_stat_monitor had a `topquery` and `topqueryid` field, but it was only a sample:
it showed one of the top queries executing the specific query.
With this change, the same entry executed by two different functions will result in two entries in the statistics table.
2. This also fixes a bug where the content of these fields disappeared for every second query executed:
previously the update function changed topqueryid to `0` if it was non-zero, and changed it to the proper id when it was 0.
This resulted in an alternating behavior, where for every second executed query the top query disappeared.
After these changes, the top query is always shown.
3. The previous implementation also leaked the DSA memory used to store the parent queries. This is now also fixed.
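A sketch of the new behavior with hypothetical caller functions (column
names follow the pg_stat_monitor view):
```
-- Hypothetical functions running the same inner statement.
CREATE FUNCTION caller_a() RETURNS bigint AS
$$ BEGIN RETURN (SELECT count(*) FROM pg_class); END; $$ LANGUAGE plpgsql;
CREATE FUNCTION caller_b() RETURNS bigint AS
$$ BEGIN RETURN (SELECT count(*) FROM pg_class); END; $$ LANGUAGE plpgsql;
SELECT caller_a();
SELECT caller_b();
-- Expect one row per distinct parent, each with its top_query populated.
SELECT query, top_query FROM pg_stat_monitor
 WHERE query LIKE '%count(*)%' AND top_query IS NOT NULL;
```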
* PG-502: Fixing review comments
* dsa_free changed to an assert, as that case can never happen
* restructured the ifs to be cleaner
Note: kept the two-level ifs, as that makes more sense with the assert
Note: didn't convert the nested_level checks to a macro, as they are used differently in different parts of the code
* PG-502: Fixing review comments
* PG-592 Add regression test
* Make test compatible with PG12
* Remove redundant line
---------
Co-authored-by: Artem Gavrilov <artem.gavrilov@percona.com>
* Cache application name for every backend instance
* Improve pg_get_backend_status performance for PG16 and PG17
* Fix
* Make application_name tracking disabled by default
* Make app name tracking opt-out
* Format newly added code with pgindent
* Fix build for PG17
* Fix
* Temporary disable all workflows
* Add build workflow with PG17
* Fix incompatibilities
* Fix 007_settings_pgsm_query_shared_buffer.pl test
* Fix 018_column_names.pl
* Fix 025_compare_pgss.pl
* Remove tuplestore_donestoring usage at all
* Rename I/O timing statistics columns to shared_blk_{read|write}_time
* Fix comments with field numbers
* Fix format
* Revert "Temporary disable all workflows"
This reverts commit 12e75beb63.
* Disable all workflows except check and build for PG 15, 16 and 17
* Fix
* Fix comments
* Fix migration
* Use REL_17_BETA1 in CI
* Add timers tests to 028_temp_block.pl
* Add local blocks timing statistics columns local_blk_{write|read}_time
* Fix t/027_local_blocks.pl test for older PG versions
* Fix
* Add jit_deform_{count|time} metrics
* Fix
* Add stats_since and minmax_stats_since fields
* Revert "Disable all workflows except check and build for PG 15, 16 and 17"
This reverts commit 73febf3aee.
* Fix t/028_temp_block.pl for PG14 and below
* Fix build for PG12
* Add pgdg workflow for PG17
* Try to fix PG pgdg workflow
* Fixes and formatting
* Format code
* Add level tracking regression test
* Fix nesting level tracking
* Format code
* Add level tracking test expected result for PG13
* Fix for PG12
* Skip level tracking regression test for PG version less than 14
* Fix toplevel calculation for older PG version
* Fix level tracking test results
* Fix nesting level counting for older PG version
* Revert "Fix nesting level counting for older PG version"
This reverts commit 3e91da8010.
* Fix level tracking for older PG versions once again
* Set REL_17_BETA2 tag for PG
* Add CI badge for PG17
* Use PG17 for examples in readme
* Fix MAX_BUCKETS_MEM overflow
* Fix MAX_QUERY_BUF overflow
* Fix int overflow in IsBucketValid function
* Add missing newline
* Remove test for max value of pgsm_query_shared_buffer parameter
* Tune tests
* Cleanup
* Use int64 type instead of long long
Potential lock contention could be caused when an OOM warning was
being emitted by the pgsm_store function. This could lead to the
pg_store_error function calling the pgsm_store function and thereby
trying to acquire an exclusive lock when a shared lock was already held
by the same process. This warning is now guarded by a protection block.
PG-624: pg_stat_monitor: Possible server crash when running pgbench
with pg_stat_monitor loaded
It appears that this issue was being caused by improper handling of a
dynamic number of buckets. This commit resolves the issue.
Also, as part of a larger cleanup, the memory context has been moved
from shared storage to local space, and some unwanted, no-longer-needed
variables have been removed.
Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
The return value of snprintf was incorrectly being recorded as the plan
length. That's been resolved.
As part of this fix, we've also eliminated the possibility of a
potential memory leak when the plan text was being saved.
Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
* PG-607: Allow histogram to track queries in sub-ms time brackets
Updated the GUC configuration and the relevant histogram functionality
to track queries at sub-millisecond granularity. This is done by saving
the GUC values for the histogram min and max as real (double) values.
All test cases except for the 030 tap test are passing. The test case
needs an update.
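A configuration sketch with sub-millisecond bounds (values are
examples, not from the patch):
```
-- Values are examples; depending on version these GUCs may require a
-- server restart rather than a reload.
ALTER SYSTEM SET pg_stat_monitor.pgsm_histogram_min = 0.1;   -- 0.1 ms
ALTER SYSTEM SET pg_stat_monitor.pgsm_histogram_max = 100.0; -- 100 ms
ALTER SYSTEM SET pg_stat_monitor.pgsm_histogram_buckets = 20;
SELECT pg_reload_conf();
```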
* Fixing regression issues for v12 and below because of histogram changes.
* PG-606: New GUC required for enabling/disabling of pgsm_query_id calculation
Adds a new GUC, pg_stat_monitor.pgsm_enable_pgsm_query_id, to
enable/disable pgsm query id calculation. Apart from that, the patch
also refactors the GUC-related code to match PostgreSQL conventions.
Moreover, the commit also changes the pgsm_enable_overflow GUC to a
boolean instead of an enum.
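A quick sketch using the GUC named above (assumes it is settable per
session; the expected effect is described in comments):
```
-- Assumes the GUC is settable per session.
SET pg_stat_monitor.pgsm_enable_pgsm_query_id = off;
SELECT 1 + 1;
-- Entries recorded while the GUC is off should carry no pgsm_query_id.
SELECT query, pgsm_query_id FROM pg_stat_monitor LIMIT 5;
```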
Saving the client IP address once per the lifetime of a backend. This
avoids performing the expensive operation multiple times, and hence
improves performance significantly.
Performance-related changes: some calculations are moved out of the
spinlock in the pgsm_update_entry function. This should improve the
performance a bit.
Also, moved the histogram calculation function to init. The update
function now only searches an array rather than recalculating the
histogram bucket timings.
Updated conditional statement to update parent query only when
required.
This bug uncovered serious issues with how the data was being stored by
PGSM, so it required a complete redesign.
pg_stat_monitor now stores the data locally within the backend process's local
memory. The data is only stored when the query completes. This reduces the
number of lock acquisitions that were previously needed during various stages
of the execution. Also, this avoids data loss in case the current bucket
changes during execution. Also, the unavailability of jumble state during later
stages of executions was causing pg_stat_monitor to save non-normalized query.
This was a major problem as well.
A pg_stat_monitor-specific memory context is implemented. It is used for saving
data locally. The context memory callback helps us clear the locally saved data
so that we do not store it multiple times in the shared hash.
As part of this major rewrite, pgss reference in function and variable names
is changed to pgsm. Memory footprint for the entries is reduced, data types
are corrected where needed, and we've removed unused variables, functions and
macros.
This patch was jointly created by:
Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
Disallow the V1 API from being used with the V2.0 library, and remove
pg_stat_monitor--1.0.sql as part of that. A few adjustments to the 1.x
to 2.0 upgrade script are also part of the commit.
Resolved the issue with histogram outlier buckets. Also updated the
printing of bucket ranges to use correct set notation for the brackets.
The lower bound of each bucket is exclusive except for the first
bucket, and the upper bound is always inclusive:
( or ) => exclusive
{ or } => inclusive
For example: {0.0 - 0.1}, (0.1 - 0.5}, (0.5 - 1.0}.
The entire range is enclosed within the {} brackets.
Capture resource usage for utility statements as well.
Added the necessary capture of resource usage in the process
utility function. We are now storing CPU and user timings for a
utility statement.
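A way to observe this through the view's cpu_user_time and cpu_sys_time
columns (a sketch, with an illustrative table name):
```
-- Run a utility statement, then inspect its CPU timings in the view.
CREATE TABLE util_demo(i int);
SELECT query, cpu_user_time, cpu_sys_time
  FROM pg_stat_monitor
 WHERE query LIKE 'CREATE TABLE util_demo%';
DROP TABLE util_demo;
```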
As an observability tool that serves data to other tools,
pg_stat_monitor must output data without any loss; rounding causes data
loss and rounding errors when comparing different columns.
Therefore, it was decided to eliminate rounding when outputting values.
Any consumer of this data should round it to whatever precision it
prefers.
This behaviour is also consistent with pg_stat_statements.
There is no specific test case where I can either reproduce or validate
the fix, though one of the suspects is this condition in pgss_store.
It has therefore been removed, and the fix requires verification.
Replaced the error on server start with a warning. The functionality
now handles "pgsm_histogram_buckets" as the maximum number of histogram
buckets to be created. On init, pg_stat_monitor calculates the max
number of buckets that can be created within the given min/max time
range. If the number is below the user configuration, it emits a
warning in the log file stating the number of max buckets set.
Added a bucket for queries that take less than the minimum histogram
time and one for those taking more than the specified max value.
Also, in case the buckets end up overlapping, on server start, an
error will be thrown informing the user of this issue and requesting
a rectification.
Refactored the code to consolidate the calculations in a single
function.
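A sketch of a configuration that should now trigger the warning path
rather than an error (values are illustrative; the actual limit depends
on the computed bucket count):
```
-- With a narrow min/max range, requesting more buckets than can fit
-- should now yield a startup WARNING with the adjusted bucket count,
-- instead of refusing to start.
ALTER SYSTEM SET pg_stat_monitor.pgsm_histogram_min = 0;
ALTER SYSTEM SET pg_stat_monitor.pgsm_histogram_max = 2;
ALTER SYSTEM SET pg_stat_monitor.pgsm_histogram_buckets = 50;
```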
* PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view.
The view now carries all the columns of pg_stat_statements. This
required fixing the data types of some of the columns, renaming a few,
as well as the inclusion of new columns to make the view fully
compatible with pg_stat_statements.
* PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view.
Updating the 1.0 to 2.0 upgrade SQL file in line with the changes for
this issue.
* PG-543: pg_stat_monitor: PostgreSQL's pg_stat_statements compatible view.
Updating datum calls to use UInt64 rather than Int64.
* PG-582: blk_read_time and blk_write_time are not being rounded.
Added the round off within the internal function so that the values for
blk_read_time and blk_write_time are rounded off to 4 decimal places.
Additionally, added rounding off for the PG15+ columns
temp_blk_read_time and temp_blk_write_time.
* PG-582: blk_read_time and blk_write_time are not being rounded.
Added rounding off for the four JIT-related columns introduced for PG15.
The bucket start time reported by pg_stat_monitor does not match the PG time and
timezone. The fix is to use TimestampTz for recording the bucket start time.
pgsm_get_ss() must only be called when pg_stat_monitor.so is loaded.
The fix is to move the pgsm_get_ss() call after checking whether the
pg_stat_monitor library is loaded or not.
* PG-488: pg_stat_monitor: Overflow management.
Reimplement the storage mechanism of buckets (for PG 15 onward) and
query texts using dynamic shared memory. Since dynamic shared memory
can grow into the swap area, we get overflow handling out of the box.
As PostgreSQL versions prior to 15 do not support sequential scans on
dynamic shared memory hashes, older versions have to live with the
classic shared memory hash for storing the buckets.
Another noteworthy change with the new design is that it saves the
query pointer inside the bucket, and eventually the query text gets
evicted along with the recycled bucket.
Finally, the dynamic shared memory hash has a built-in locking
mechanism, so revisiting the whole locking in pg_stat_monitor has the
potential for lots of performance improvements.
* Fixing issues reported by the TAP tests and also disabling the dynamic hash for all versions
* Updating the expected out file for top_query test case
Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
queryid creation mechanism.
Resolving the crash identified by regression and reported by Naeem.
This fix resolves the issue with an incorrect query length in the case
of a normalized query when the query length exceeds PGSM_QUERY_MAX_LEN.
Regardless of the database or the user, the same query will yield the
same query ID. As part of this, a new column, 'pgsm_query_id', is added.
* pgsm_query_id:
pgsm_query_id has the same int8 data type as the queryid column. If
the incoming SQL command includes any constants, it internally
normalizes the query, replacing those constant values with
placeholders. Otherwise, it uses the query text directly to generate
the query hash.
Since we no longer depend on the server's parse tree mechanism, we can
generate the same hash for the same query text for all server versions.
Also, it is important to note that the calculated hash is database-,
schema- and user-independent, so the same query text in different
databases will generate the same hash.
This column is not part of the key; it is for observability purposes only.
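An illustrative cross-database check (table and database names are
hypothetical):
```
-- Run in database db1 (assumes t1(a int, b int, c int) exists in both):
SELECT a + b FROM t1 WHERE c = 10;
-- \c db2 and run the identical statement there, then from either side:
SELECT DISTINCT datname, pgsm_query_id, queryid
  FROM pg_stat_monitor
 WHERE query LIKE '%FROM t1%';
-- Expect the same pgsm_query_id across databases, while queryid may differ.
```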
* Regression
SQL test case pgsm_query_id.sql is added to the SQL regression.