Commit Graph

85 Commits (8bbb49e409b0ffbe74257764b4ccb020cfaa6aa0)

Author SHA1 Message Date
Andreas Karlsson 4ebb3d1f36 PG-1349 Prevent LWLock deadlocks from happening
Instead of trying to fix every case where we could throw an error and
handle it properly, we just make sure to disable the error capture
of the hook while our backend holds the lock.

We keep the check for IsSystemOOM() in the hook even though it might
no longer be relevant, because if we are in an OOM situation there would
be no point in logging the error anyway.

This is done via a global variable, similar to the
__pgsm_do_not_capture_error variable that we are replacing, which we
also use in one place to disable recursive calls to the log hook
where we do not hold the lock.

A potential future improvement would be to make this variable a counter,
or have two separate globals, so that we could guard against recursive
calls to the hook running us out of stack and not just prevent the
deadlocks.
2025-02-20 11:38:37 +01:00
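
A minimal sketch of the guard pattern described in PG-1349 above; the flag,
lock and helper names are illustrative, not the actual pg_stat_monitor source:

#include "postgres.h"
#include "storage/lwlock.h"
#include "utils/elog.h"

static bool pgsm_capture_disabled = false;  /* hypothetical guard flag */
static LWLock *pgsm_lock;                   /* hypothetical shared lock */

static void
pgsm_store_error(ErrorData *edata)
{
    (void) edata;   /* sketch: real code copies edata->message */

    /*
     * Disable error capture while this backend holds the lock: an ERROR
     * raised here would re-enter the hook and self-deadlock on the LWLock.
     */
    pgsm_capture_disabled = true;
    LWLockAcquire(pgsm_lock, LW_EXCLUSIVE);
    /* ... write the error entry to shared memory ... */
    LWLockRelease(pgsm_lock);
    pgsm_capture_disabled = false;
}

static void
pgsm_emit_log_hook(ErrorData *edata)
{
    /* Skip capture while the lock is held or the system is in OOM. */
    if (pgsm_capture_disabled || IsSystemOOM())
        return;
    pgsm_store_error(edata);
}
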
Artem Gavrilov 3bb65798fd
Format sources (#475)
* Temporary disable workflows

* Add indent target to makefile

* Add CI workflow to check if sources formatted

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* Format sources

* Add comments

* Revert "Temporary disable workflows"

This reverts commit 7e11cf6154.

* Revert "Format sources"

This reverts commit 6ef992d9f0.

* Use PG17 for code formatting

* Format sources

* Revert "Format sources"

This reverts commit 34061e1f82.

* Format sources
2024-08-07 15:12:24 +02:00
Zsolt Parragi 130d6b5fce
PG-592: Treat queries with different parent queries as separate entries (#403)
* PG-592: Treat queries with different parent queries as separate entries

1. Previously pg_stat_monitor had `topquery` and `topqueryid` fields, but they only held a sample:
they showed one of the top queries executing the specific query.

With this change, the same entry executed by two different functions will result in two entries in the statistics table.

2. This also fixes a bug where the content of these fields disappeared for every second query executed:
previously the update function changed topqueryid to `0` if it was non-zero, and changed it to the proper id when it was 0.
This resulted in an alternating behavior, where for every second executed query the top query disappeared.

After these changes, the top query is always shown.

3. The previous implementation also leaked dsa memory used to store the parent queries. This is now also fixed.

* PG-502: Fixing review comments

* dsa_free changed to assert as it can never happen
* restructured the ifs to be cleaner
  Note: kept the two-level ifs, as that makes more sense with the assert
  Note: didn't convert nested_level checks to macro, as it is used differently at different parts of the code

* PG-502: Fixing review comments

* PG-592 Add regression test

* Make test compatible with PG12

* Remove redundant line

---------

Co-authored-by: Artem Gavrilov <artem.gavrilov@percona.com>
2024-08-06 23:43:48 +02:00
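
A sketch of the key change PG-592 describes: with the parent query id folded
into the hash key, the same statement executed by two different functions
produces two separate entries. The field names here are illustrative:

#include "postgres.h"

typedef struct pgsmHashKey
{
    uint64 bucket_id;       /* current bucket */
    uint64 queryid;         /* hash of the query itself */
    uint64 parent_queryid;  /* 0 for top-level queries */
    Oid    userid;
    Oid    dbid;
} pgsmHashKey;
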
Artem Gavrilov d7999f1acf
[PG-644] Add option to disable application name tracking (#469)
* Cache application name for every backend instance

* Improve pg_get_backend_status performance for PG16 and PG17

* Fix

* Make application_name tracking disabled by default

* Make app name tracking opt-out

* Format newly added code with pgindent

* Fix build for PG17

* Fix
2024-07-23 18:49:33 +02:00
Artem Gavrilov dacb41f9e4
[PG-810] PG-17 Support (#463)
* Temporary disable all workflows

* Add build workflow with PG17

* Fix incompatibilities

* Fix 007_settings_pgsm_query_shared_buffer.pl test

* Fix 018_column_names.pl

* Fix 025_compare_pgss.pl

* Remove tuplestore_donestoring usage at all

* Rename I/O timing statistics columns to shared_blk_{read|write}_time

* Fix comments with field numbers

* Fix format

* Revert "Temporary disable all workflows"

This reverts commit 12e75beb63.

* Disable all workflows except check and build for PG 15, 16 and 17

* Fix

* Fix comments

* Fix migration

* Use REL_17_BETA1 in CI

* Add timers tests to 028_temp_block.pl

* Add local blocks timing statistics columns local_blk_{write|read}_time

* Fix t/027_local_blocks.pl test for older PG versions

* Fix

* Add jit_deform_{count|time} metrics

* Fix

* Add stats_since and minmax_stats_since fields

* Revert "Disable all workflows except check and build for PG 15, 16 and 17"

This reverts commit 73febf3aee.

* Fix t/028_temp_block.pl for PG14 and below

* Fix build for PG12

* Add pgdg workflow for PG17

* Try to fix PG pgdg workflow

* Fixes and formatting

* Format code

* Add level tracking regression test

* Fix nesting level tracking

* Format code

* Add level tracking test expected result for PG13

* Fix for PG12

* Skip level tracking regression test for PG version less than 14

* Fix toplevel calculation for older PG version

* Fix level tracking test results

* Fix nesting level counting for older PG version

* Revert "Fix nesting level counting for older PG version"

This reverts commit 3e91da8010.

* Fix level tracking for older PG versions once again

* Set REL_17_BETA2 tag for PG

* Add CI badge for PG17

* Use PG17 for examples in readme
2024-07-18 14:59:57 +02:00
Artem Gavrilov 288ec6325f
Add license headers validation (#458)
* Add .licenserc.yaml file

* Fix license headers

* Add github action to check license headers

* Fix workflow

* Fix checkout path

* Rename workflow

* Add debug info

* Disable workflows

* Try fix

* Split check workflow in two jobs

* Try invalid license header

* Comment of failure

* Disable cppcheck job

* Fix licenserc file

* Enable debug logging

* Prevent comments from licence-eye

* Revert "Disable cppcheck job"

This reverts commit 10f55373ea.

* Revert "Disable workflows"

This reverts commit 2e2ead2fa5.

* Fix typo

* Revert "Try invalid license header"

This reverts commit 0cc0c883d2.

* Update year in license headers

* Cleanup

* Fix indention in license header
2024-04-26 10:55:50 +02:00
Artem Gavrilov 64c71f98de
Fix integer overflow (#435)
* Fix MAX_BUCKETS_MEM overflow

* Fix MAX_QUERY_BUF overflow

* Fix int overflow in IsBucketValid function

* Add missing newline

* Remove test for max value of pgsm_query_shared_buffer parameter

* Tune tests

* Cleanup

* Use int64 type instead of long long
2024-04-05 14:34:30 +02:00
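
The overflow class fixed above follows a common C pattern: the product of two
int values wraps before the assignment widens it. A minimal illustration (the
function is hypothetical, not the actual pg_stat_monitor macro):

#include <stdint.h>

int64_t
buffer_size_bytes(int buckets, int bucket_size_mb)
{
    /* BAD:  buckets * bucket_size_mb * 1024 * 1024 is computed in int
     * and can wrap for large settings.
     * GOOD: widen one operand first so the whole product is int64. */
    return (int64_t) buckets * bucket_size_mb * 1024 * 1024;
}
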
Hamid Akhtar 39d9419bd0 PG-624: pg_stat_monitor: Possible server crash when running pgbench with pg_stat_monitor loaded (#396)
PG-624: pg_stat_monitor: Possible server crash when running pgbench
with pg_stat_monitor loaded

It appears that this issue was being caused by improper handling of
a dynamic number of buckets. This commit resolves the issue.

Also, as part of a larger cleanup, the memory context has been moved to
local space from shared storage, and some unwanted and no-longer-needed
variables are removed.

Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
2023-05-24 12:27:40 +05:00
Hamid Akhtar 9ecd2ccbb7 Updating formatting of source code 2023-03-01 19:40:24 +05:00
Hamid Akhtar faa938b8f1 Fixing code indentation with pgindent 2023-02-27 14:47:27 +05:00
Muhammad Usama fe23d31bf9
PG-607: Allow histogram to track queries in sub-ms time brackets (#384)
* PG-607: Allow histogram to track queries in sub-ms time brackets

Updated the GUC configuration and the relevant histogram functionality
to track queries at a finer granularity than milliseconds. This is done by
saving the GUC values for histogram min and max in real (double) type.

All test cases except for the 030 tap test are passing. The test case
needs an update.

* Fixing regression issues for v12 and below because of histogram changes.
2023-02-23 21:24:40 +05:00
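
Storing the histogram bounds in real (double) GUCs is what makes sub-ms
brackets expressible. A sketch using PostgreSQL's stock
DefineCustomRealVariable API; the variable name and limits are illustrative:

#include "postgres.h"
#include "utils/guc.h"

static double pgsm_histogram_min = 1.0;     /* ms; may now be < 1.0 */

static void
define_histogram_gucs(void)
{
    DefineCustomRealVariable("pg_stat_monitor.pgsm_histogram_min",
                             "Minimum histogram time in milliseconds.",
                             NULL,
                             &pgsm_histogram_min,
                             1.0,            /* boot value */
                             0.0,            /* min: sub-ms now allowed */
                             50000.0,        /* max (illustrative) */
                             PGC_POSTMASTER,
                             0,
                             NULL, NULL, NULL);
}
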
Muhammad Usama 05ffcac2fa
PG-606: New GUC required for enabling/disabling of pgsm_query_id calculation… (#383)
* PG-606: New GUC required for enabling/disabling of pgsm_query_id calculation

Adds a new GUC pg_stat_monitor.pgsm_enable_pgsm_query_id to enable/disable
pgsm query id calculation. Apart from that, the patch also refactors the
GUC-related code to match PostgreSQL conventions.

Moreover, the commit also changes the pgsm_enable_overflow GUC to boolean
instead of enum.
2023-02-23 19:08:09 +05:00
Hamid Akhtar 5f6425a6f8 PG-542: Performance improvement of pg_stat_monitor.
Performance-related changes where some calculations are moved out
of the spinlock in the pgsm_update_entry function. This should
improve performance a bit.

Also, moved the histogram calculation function to init. The update
function now only searches an array rather than recalculating the
histogram bucket timings.

Updated conditional statement to update parent query only when
required.
2023-02-23 01:53:30 +05:00
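
A sketch of the histogram change described above: the bucket upper bounds are
computed once at init, so the per-query update is a plain array scan instead
of recalculating bucket timings under the spinlock. Names and sizes are
illustrative:

#define PGSM_HISTOGRAM_MAX 50                   /* illustrative capacity */

static double hist_bound[PGSM_HISTOGRAM_MAX];   /* upper bound per bucket, ms */

static int
hist_bucket_for(double elapsed_ms, int nbuckets)
{
    for (int i = 0; i < nbuckets - 1; i++)
        if (elapsed_ms <= hist_bound[i])
            return i;
    return nbuckets - 1;                        /* overflow bucket */
}
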
Hamid Akhtar de66ef0fce PG-588: Some queries are not being normalised.
This bug uncovered serious issues with how the data was being stored by
PGSM, so it required a complete redesign.

pg_stat_monitor now stores the data locally within the backend process's
local memory. The data is only stored when the query completes. This reduces
the number of lock acquisitions that were previously needed during various
stages of execution. It also avoids data loss in case the current bucket
changes during execution. Furthermore, the unavailability of the jumble
state during later stages of execution was causing pg_stat_monitor to save
a non-normalized query, which was a major problem as well.

pg_stat_monitor specific memory context is implemented. It is used for saving
data locally. The context memory callback helps us clear the locally saved data
so that we do not store it multiple times in the shared hash.

As part of this major rewrite, pgss reference in function and variable names
is changed to pgsm. Memory footprint for the entries is reduced, data types
are corrected where needed, and we've removed unused variables, functions and
macros.

This patch was mutually created by:
Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
Co-authored-by: Muhammad Usama <muhammad.usama@percona.com>
2023-02-22 19:31:52 +05:00
Muhammad Usama 5648b99eee
PG-585: pg_stat_monitor: Add code comments to the DSA related funcs.. (#360)
Adding code comments for the DSA related functionality.
2023-01-23 14:36:23 +05:00
Hamid Akhtar 1662e9efa1 PG-562: Histogram Ranges/Buckets are not correct.
Replaced the error on server start with a warning. The functionality
now handles "pgsm_histogram_buckets" as the maximum number of histogram
buckets to be created. On init, pg_stat_monitor calculates the max
number of buckets that can be created within the given min/max time
range. If the number is below the user configuration, it emits a
warning in the log file stating the number of max buckets set.
2023-01-23 12:37:51 +05:00
Hamid Akhtar 209f370cef PG-562: Histogram Ranges/Buckets are not correct.
Added a bucket for queries that take less than the minimum histogram time
and one for those taking more than the specified maximum.

Also, in case the buckets end up overlapping, on server start, an
error will be thrown informing the user of this issue and requesting
a rectification.

Refactored the code to consolidate the calculations in a single
function.
2023-01-23 12:37:51 +05:00
Muhammad Usama a75e47add9 PG-400: pg_stat_monitor: Timezone in msgtime column.
The bucket start time reported by pg_stat_monitor does not match the PG time
and timezone. The fix is to use TimestampTz for recording the bucket start time.
2023-01-18 16:38:23 +05:00
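
A sketch of the fix: recording the bucket start with PostgreSQL's stock
GetCurrentTimestamp() yields a TimestampTz, which carries the server epoch
and renders in the session timezone, so it matches PG time. The variable and
function are illustrative:

#include "postgres.h"
#include "utils/timestamp.h"

static TimestampTz bucket_start_time;

static void
start_new_bucket(void)
{
    bucket_start_time = GetCurrentTimestamp();
}
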
Muhammad Usama 2c5e12af0a
PG-488: pg_stat_monitor: Overflow management. (#342)
* PG-488: pg_stat_monitor: Overflow management.

Reimplement the storage mechanism of buckets (for PG-15 onward) and query
texts using dynamic shared memory. Since the dynamic shared memory can grow
into a swap area, we get the overflow handling out of the box.

As PostgreSQL versions prior to v15 do not support sequential scans on
dynamic shared memory hashes, older versions have to live with the classic
shared memory hash for storing the buckets.

Another noteworthy change with the new design is that it saves the query
pointer inside the bucket, and eventually the query text gets evicted with
the bucket recycle.

Finally, the dynamic shared memory hash has a built-in locking mechanism,
so revisiting the whole locking in pg_stat_monitor has the potential for
lots of performance improvements.

* Fixing tap test reported issues and also disabling dynamic hash for all versions

* Updating the expected out file for top_query test case

Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
2023-01-10 17:54:17 +05:00
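
A sketch of the DSA-backed query text storage described above, using
PostgreSQL's stock dsa_* API (the function itself is illustrative). Because
a dsa_area can grow on demand, overflow handling comes for free:

#include "postgres.h"
#include "utils/dsa.h"

static dsa_pointer
save_query_text(dsa_area *area, const char *query, Size len)
{
    dsa_pointer dp = dsa_allocate(area, len + 1);
    char       *buf = dsa_get_address(area, dp);

    memcpy(buf, query, len);
    buf[len] = '\0';

    /* The pointer is kept in the bucket entry, so the text is freed
     * (dsa_free) when the bucket is recycled. */
    return dp;
}
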
Hamid Akhtar b20eda7066 PG-545: pg_stat_monitor: Same query text should generate same queryid
Regardless of the database or the user, the same query will yield the
same query ID. As part of this, a new column, 'pgsm_query_id', is added.

* pgsm_query_id:
pgsm_query_id has the same int8 data type as the queryid column. If
the incoming SQL command includes any constants, it internally normalizes
the query to replace those constant values with placeholders. Otherwise,
it uses the query directly to generate the query hash.

Since we no longer depend on the server's parse tree mechanism, we can
generate the same hash for the same query text for all server versions.

Also, it is important to note that the calculated hash is database,
schema and user independent, so the same query text in different databases
will generate the same hash.

This column is not part of the key; rather, it is for observability
purposes only.

* Regression
SQL test case pgsm_query_id.sql is added to the SQL regression.
2022-12-28 14:24:19 +05:00
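
Because pgsm_query_id hashes only the normalized query text, identical text
yields the same id on every server version, database, schema and user. A
sketch of the idea; FNV-1a here is a stand-in for whatever 64-bit hash
pg_stat_monitor actually uses:

#include <stdint.h>

uint64_t
pgsm_hash_query_text(const char *normalized_query)
{
    uint64_t h = 14695981039346656037ULL;       /* FNV-1a offset basis */

    for (const unsigned char *p = (const unsigned char *) normalized_query;
         *p != '\0'; p++)
    {
        h ^= *p;                                /* mix in one byte */
        h *= 1099511628211ULL;                  /* FNV-1a prime */
    }
    return h;
}
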
Ibrar Ahmed 802774a2a7
PG-488: Revert pg_stat_monitor: Overflow management. (#338)
PG-488: Revert pg_stat_monitor: Overflow management.

This patch does not work for PostgreSQL versions below 15. More work is
required.
2022-12-22 19:15:14 +05:00
Muhammad Usama df0580b741 PG-488: pg_stat_monitor: Overflow management.
Reimplement the storage mechanism of buckets and query texts
using dynamic shared memory. Since the dynamic shared memory
can grow into a swap area, we get the overflow handling out of the box.

Moreover, the new design saves the query pointer inside the bucket
and eventually, the query text gets evicted with the bucket recycle.

Finally, the dynamic shared memory hash has a built-in locking
mechanism, so revisiting the whole locking in pg_stat_monitor
has potential for lots of performance improvements.
2022-12-20 17:29:15 +05:00
Muhammad Usama 913064b68d
PG-435: Adding new counters that are available in PG15 (#329)
In line with pg_stat_statements for PG15, this commit adds eight new
cumulative counters for JIT operations, making it easier to diagnose how JIT
is used in an installation, and two new columns, temp_blk_read_time and
temp_blk_write_time, which respectively show the time spent reading and
writing temporary file blocks on disk.
Moreover, the commit also contains a few indentation and API adjustments.
2022-12-07 15:40:13 +05:00
Ibrar Ahmed f7860b472f
PG-310: Bucket is “Done” vs still being current/last. (#321)
A new column is added to indicate whether the bucket is active or done. Some
timing-based adjustment was required for that too.

Co-authored-by: Hamid Akhtar <hamid.akhtar@percona.com>
2022-11-23 02:23:28 +05:00
Ibrar Ahmed 0af6295513 PG-174: Code cleanup.
pg_stat_monitor has grown quite large; therefore, it requires some code
cleanup. I decided to split these tasks into multiple commits and PRs to
avoid too many changes in one PR. This will ease the review and Q/A process.
In this commit, I have done these tasks:

1 - Remove all benchmarking and debugging code.
2022-10-24 17:42:38 +00:00
Ibrar Ahmed a9187117f9
PG-456: Running pgindent to make source PostgreSQL compatible. (#269)
PG-456: Running pgindent to make source indentation/spacing PostgreSQL
compatible.

PostgreSQL uses pgindent from time to time to make source code PostgreSQL
style guide compatible; it has been a very long time since we last did that.
This commit fixes a lot of indentation and spacing issues.

Co-authored-by: Hamid Akhtar <hamid.akhtar@gmail.com>
2022-06-29 00:42:40 +05:00
Hamid Akhtar 7efc3fa50e [PG-159] pg_stat_monitor: Bucket start time should be aligned with fix number of seconds
The buckets are now created with the start time aligned to a multiple of the
bucket time size. So if we have a 10 second bucket, the start times would
reflect that:
- Bucket1: 00:00:00
- Bucket2: 00:00:10
- Bucket3: 00:00:20
...

Previously, the start time of the bucket was aligned with the first query that
arrives in that bucket. However, now the behaviour is changed. So, even if the
first query for bucket 2 arrives at 00:00:13, the start time would still be set
to 00:00:10.

This change separates the buckets into fixed time windows so that
external applications can easily consume that data and chart it.

Also, as part of this change, the locking of pgss is updated and extended
to last through the bucket-related changes.
2022-06-20 17:53:43 +05:00
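
The alignment described above reduces to one line of arithmetic: snap the
current time down to a multiple of the bucket size. A minimal sketch (names
are illustrative):

#include <stdint.h>

int64_t
bucket_start_usec(int64_t now_usec, int64_t bucket_time_usec)
{
    /* With a 10 s bucket, a query arriving at 00:00:13 still lands in
     * the bucket starting at 00:00:10. */
    return now_usec - (now_usec % bucket_time_usec);
}
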
Ibrar Ahmed 13eedd579e PG-382: Adjust the maximum value for histogram buckets.
There was no maximum limit set for the number of histogram buckets,
which could lead to a crash in case of a higher value.

PG-382: Adjust the maximum value for histogram buckets.

Fix the regression issue related to GUC and set the maximum
buckets value correctly.

PG-382: Adjust the maximum value for histogram buckets.

Fix the TAP test cases.
2022-05-24 17:50:07 +00:00
Ibrar Ahmed 153f8d2e87 PG-338: Calls count is not correct in PG-13.
Cherry-picked patch (b6838049b6) by Diego
and did some refactoring.
2022-03-14 18:14:11 +00:00
Diego Fronza 8c61e24f95 PG-286: Fix query buffer overflow management.
If pgsm_overflow_target is ON (default, 1) and the query buffer
overflows, we now dump the buffer and keep track of how many times
pg_stat_monitor has changed buckets since then.

If an overflow happens again before pg_stat_monitor cycles through
pgsm_max_buckets buckets (default 10), then we don't dump the buffer
again, but instead report an error. This ensures that only one dump file
of size pgsm_query_shared_buffer will be on disk at any time, avoiding
slowing down queries to the pg_stat_monitor view.

As soon as pg_stat_monitor cycles through all buckets, we remove the
dump file and reset the counter (pgss->n_bucket_cycles).
2022-02-17 19:47:07 +05:00
Diego Fronza df89c3f4a3 PG-286: Small performance improvements.
pgss_ExecutorEnd: Avoid an unnecessary memset(plan_info, 0, ...).
We only use this object if the condition below is true, in which case we
already initialize all the fields in the object. Also, we now store the
plan string length (plan_info.plan_len) to avoid calling strlen on it
again later:
if (queryDesc->operation == CMD_SELECT && PGSM_QUERY_PLAN) {
... here we initialize plan_info

If the condition is false, then we pass a NULL PlanInfo* to
pgss_store to avoid more unnecessary processing.

pgss_planner_hook: Similarly, avoid memset(plan_info, 0, ...); this object
is not used here, so we pass NULL to pgss_store.

pg_get_application_name: Remove the call to strlen; snprintf already gives
us the calculated string length, so we just return it.

pg_get_client_addr: Cache localhost, avoid calling
ntohl(inet_addr("127.0.0.1")) all the time.

pgss_update_entry: Make use of PlanInfo->plan_len, avoiding a call to
strlen again.

intarray_get_datum: Init the string by setting the first byte to '\0'.
2022-02-17 19:46:39 +05:00
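
A sketch of the pg_get_application_name change mentioned above: snprintf
already returns the formatted length, so the follow-up strlen is redundant.
The signature is illustrative:

#include <stdio.h>

static int
pg_get_application_name_sketch(char *dst, int dstlen, const char *appname)
{
    int len = snprintf(dst, (size_t) dstlen, "%s", appname);

    /* snprintf reports the untruncated length; clamp it to what actually
     * fits in the buffer. No extra strlen needed. */
    return (len < dstlen) ? len : dstlen - 1;
}
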
Diego Fronza c21a3de00d PG-286: Avoid duplicate queries in text buffer.
The memory area reserved for query text (pgsm_query_shared_buffer) was
divided evenly among buckets; this allowed the same query,
e.g. "SELECT 1", to be duplicated in different buckets, thus wasting space.

This commit fixes the query text duplication by adding a new hash table
whose only purpose is to verify whether a given query has already been
added to the buffer (by using the queryID).

This allows different buckets that share the same query to point to a
unique entry in the query buffer (pgss_qbuf).

When pg_stat_monitor moves to a new bucket id, it can also save some CPU
time by avoiding adding a query that already exists in the buffer.
2022-02-17 19:46:35 +05:00
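
A sketch of the dedup check described above, keyed by queryID and using
PostgreSQL's stock dynahash API (the table variable and helper are
illustrative):

#include "postgres.h"
#include "utils/hsearch.h"

static HTAB *pgsm_query_texts;      /* keysize = sizeof(uint64) queryid */

static bool
query_text_needs_save(uint64 queryid)
{
    bool found;

    /* HASH_ENTER inserts the id if absent; 'found' tells us whether the
     * text is already in the shared query buffer. */
    (void) hash_search(pgsm_query_texts, &queryid, HASH_ENTER, &found);
    return !found;      /* true => caller appends the text once */
}
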
Ibrar Ahmed 680c7fda42 PG-210: Add new column toplevel. 2021-11-16 11:23:59 +00:00
Ibrar Ahmed 5f6177daa3 PG-210: Add new column toplevel. 2021-11-16 10:48:11 +00:00
Diego Fronza f66b45afd6 PG-220: Fix read/write of dumped query buffer to files.
This commit fixes some issues that occur when the query buffer overflows and
pg_stat_monitor attempts to dump its contents to a file.

The dump process is now as follows:

1. The dump will always be a full dump of the current query buffer,
   meaning pg_stat_monitor will dump MAX_QUERY_BUFFER_BUCKET bytes to
   the dump file.
2. When scanning the dump file, read chunks of size
   MAX_QUERY_BUFFER_BUCKET, then look for the query ID using that chunk
   and the query position metadata; this allows pg_stat_monitor to avoid
   scanning the whole chunk when looking for a query ID.

The code in charge of reading from/writing to the dump file now takes into
account that read() and write() may return fewer bytes than requested; the
code now ensures that we actually read or write the required amount of
bytes (MAX_QUERY_BUFFER_BUCKET), and it also handles rare but possible
interrupts during those operations.
2021-10-25 16:55:30 -03:00
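
A sketch of the robust write loop described above: keep writing until all
MAX_QUERY_BUFFER_BUCKET bytes are on disk, retrying short writes and EINTR
(the read side mirrors this with read()). Plain POSIX; names illustrative:

#include <errno.h>
#include <unistd.h>

static ssize_t
write_all(int fd, const char *buf, size_t count)
{
    size_t done = 0;

    while (done < count)
    {
        ssize_t n = write(fd, buf + done, count - done);

        if (n < 0)
        {
            if (errno == EINTR)
                continue;           /* interrupted: retry, not an error */
            return -1;
        }
        done += (size_t) n;         /* short write: loop for the rest */
    }
    return (ssize_t) done;
}
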
Diego Fronza fa55eb333d PG-220: Fix pgsm_overflow_target defaults.
The GUC variable pgsm_overflow_target was pointing to the wrong index
(12, pgsm_track_planning) in the "GucVariable conf[MAX_SETTINGS]" array.

Adjusted it to the right index, 11.
2021-10-22 15:40:27 -03:00
Diego Fronza 8fdf0946fe PG-254: Add query location to hash table entries.
Whenever a new query is added to a query buffer, record the position
at which the query was inserted; we can then use this information to
locate the query faster later on when required.

This allowed us to simplify the logic in hash_entry_dealloc(): after
creating the list of pending queries, as the list is scanned we can copy
each query from the previous query buffer to the new one by using the
query position (query_pos), which avoids scanning the whole query buffer
when looking up the queryid.

Also, when moving a query to a new buffer (copy_query), we update
the query_pos for the hash table entry, so it points to the right place
in the new query buffer.

The read_query() function was updated to allow the query position to be
passed as an argument; if pos != 0, use it to locate the query directly,
otherwise fall back to the previous mode of scanning the whole buffer.

SaveQueryText() was updated to take a reference to the query position as
an argument; after the function finishes, this value is updated with the
position at which the query was stored in the buffer.
2021-10-07 10:06:20 -03:00
Diego Fronza fcb70ffed1 PG-254: Removal of pgss_query_hash hash table.
There was no necessity for using a separate hash table to keep track
of queries as all the necessary information is already available in
the pgss_hash table.

The code that moves pending queries' text in hash_query_entry_dealloc
was merged into hash_entry_dealloc.

A couple of functions were not necessary anymore and thus were removed:
- hash_create_query_entry
- pgss_store_query_info
- pgss_get_entry (this logic was added directly into pgss_store).

We simplified the logic in pgss_store; it basically works as follows:
1. Create a key (bucketid, queryid, etc...) for the query event.
2. Look up the key in pgss_hash; if no entry exists for the key, create a
   new one and save the query text into the current query buffer.
3. If jstate == NULL, then update stats counters for the entry.
2021-10-06 14:57:15 -03:00
Diego Fronza a959acb3d5 PG-244: Move pending queries' text to new bucket after bucket expiration.
Added code to move all pending queries' text from an expired bucket's query
buffer to the next, active query buffer.

The previous implementation was not very efficient; it worked like this
as soon as a query was processed and a bucket expired:

1. Allocate memory to save the contents of the next query buffer.
2. Clear the next query buffer.
3. Iterate over pgss_query_hash, then, for each entry:
   - If the entry's bucket id is equal to the next bucket then:
     -- If the query for this entry has finished or ended in error, then
        remove it from the hash table.
     -- Else, if the query is not yet finished, copy the query from the
        backup query buffer to the new query buffer.
        Now, this copy was really expensive, because it was implemented
        using read_query() / SaveQueryText(), and read_query() scans the
        whole query buffer looking for some query ID. Since we do this
        extra lookup loop for each pending query, we end up with an O(n^2)
        algorithm.
4. Release the backup query buffer.

Since now we always move pending queries from an expired bucket to the
next one, there is no need to scan the next query buffer for pending
queries (the pending queries are always in the current bucket, and when
it expires we move them to the next one).

Taking that into consideration, the new implementation works as follows,
whenever a bucket expires:

1. Clear the next query buffer (all entries).
2. Define an array to store pending query ids from the expired bucket;
   we use this array later on in the algorithm.
3. Iterate over pgss_query_hash, then, for each entry:
   - If the entry's bucket id is equal to the next bucket then:
     -- If the query for this entry has finished or ended in error, then
        remove it from the hash table. This is the same as the previous
        implementation.
   - Else, if the entry's bucket id is equal to the just expired bucket
     id (old bucket id) and the query state is pending (not yet finished),
     then add this query ID to the array of pending query IDs.

     Note: We define the array to hold up to 128 pending entries. If there
     are more entries than this, we try to allocate memory on the heap to
     store them; if the allocation fails, we manually copy every
     pending query past the 128th to the next query buffer, using the
     previous algorithm (read_query() / SaveQueryText). This should be a
     very rare situation.
4. Finally, if there are pending queries, copy them from the previous
   query buffer to the next one using copy_queries.

Now, copy_queries() is better than looping through each query entry and
calling read_query() / SaveQueryText() to copy each of them to the new
buffer because, as explained, read_query() scans the whole query buffer on
every call.

copy_queries(), instead, scans the query buffer only once, and for every
element it checks if the current query id is in the list of queries to
be copied; this list is an array of uint64 that is small enough to fit in
L1 cache.

Another important fix in this commit is the addition of line 1548 in
pg_stat_monitor.c / pgss_store():
query_entry->state = kind;

Before the addition of this line, entries in the pgss_query_hash
hash table were not having their status updated when the query entered
execution, finished, or ended in error, effectively leaving all entries
as pending. Thus, whenever a bucket expired, all entries were being copied
from the expired bucket to the next one.
2021-10-01 15:27:45 -03:00
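
A sketch of the copy_queries() idea described above: scan the expired buffer
once and test each entry against the small array of pending ids (small
enough for L1). The [id][len][text] entry layout is an assumption for
illustration, not the exact buffer format:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static bool
is_pending(uint64_t id, const uint64_t *ids, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (ids[i] == id)
            return true;
    return false;
}

static void
copy_queries_sketch(const unsigned char *src, size_t used,
                    unsigned char *dst, size_t *dst_used,
                    const uint64_t *pending, size_t npending)
{
    size_t pos = 0;

    while (pos < used)              /* single pass over the old buffer */
    {
        uint64_t id, len;

        memcpy(&id, src + pos, sizeof(id));
        memcpy(&len, src + pos + sizeof(id), sizeof(len));

        if (is_pending(id, pending, npending))
        {
            size_t entry = sizeof(id) + sizeof(len) + (size_t) len;

            memcpy(dst + *dst_used, src + pos, entry);
            *dst_used += entry;
        }
        pos += sizeof(id) + sizeof(len) + (size_t) len;
    }
}
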
Diego Fronza 89743e9243 PG-244: Fix race condition in get_next_wbucket().
The if condition below in get_next_wbucket() was subject to a race
condition:
if ((current_usec - pgss->prev_bucket_usec) > (PGSM_BUCKET_TIME * 1000 *
1000))

Two or more threads/processes could easily evaluate this condition to
true, thus executing more than once the block that calculates a new
bucket id and clears/moves old entries in the pgss_query_hash and
pgss_hash hash tables.

To avoid this problem, we define prev_bucket_usec and current_wbucket
variables as atomic and execute a loop to check if another thread has
updated prev_bucket_usec before the current one.
2021-09-30 17:13:27 -03:00
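
A sketch of the compare-and-swap loop described above, using PostgreSQL's
stock atomics API; the struct layout and helper are illustrative. Only the
backend whose CAS succeeds performs the bucket switch:

#include "postgres.h"
#include "port/atomics.h"

typedef struct pgsmSharedState
{
    pg_atomic_uint64 prev_bucket_usec;
    pg_atomic_uint64 current_wbucket;
} pgsmSharedState;

static bool
try_advance_bucket(pgsmSharedState *pgsm, uint64 now_usec, uint64 bucket_usec)
{
    uint64 prev = pg_atomic_read_u64(&pgsm->prev_bucket_usec);

    while (now_usec - prev > bucket_usec)
    {
        /* One backend wins the CAS; losers see the updated value in
         * 'prev' and fall out of the loop. */
        if (pg_atomic_compare_exchange_u64(&pgsm->prev_bucket_usec,
                                           &prev, now_usec))
        {
            pg_atomic_fetch_add_u64(&pgsm->current_wbucket, 1);
            return true;        /* this backend does the cleanup */
        }
    }
    return false;
}
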
Diego Fronza 273f23b161 PG-230: Fix duplicated query entries.
A problem during bucket management was allowing some queries to be
duplicated; old entries would sit around without having their statistics
updated.

The problem would arise during the following chain of events:
1. A query goes through pgss_post_parse_analyze, in this stage (PGSS_PARSE)
   we only save the query into the query buffer and create an entry in the
   query hash table.
2. The query then goes through pgss_ExecutorStart (PGSS_EXEC), in this stage
   we create an entry for query statistic counters with default values,
   all time stats equal zero, etc.
3. The query then goes through pgss_ExecutorEnd (PGSS_FINISH), in this stage
   we update the query statistics: number of calls, total time taken, min_time, etc.

The problem is that between steps 2 and 3, the current bucket ID timer may
have expired.

For example, during steps 1 and 2 the query may have been stored in
bucket ID 1, but when the query is finished (pgss_ExecutorEnd) the
current bucket ID may have been updated to 2.

This leaves an entry for the query in bucket ID 1 with state ACTIVE,
with time statistics not updated yet.

It also creates an entry for the query in bucket ID 2, with all
statistics (time and others) being updated for this entry.

To solve this problem, during transition to a new bucket id, we scan all
pending queries in the previous bucket id and move them to the new
bucket id.

This way, finished queries will always be associated with the bucket id
that was active at the time they finished.
2021-09-28 16:12:34 -03:00
Andrew Pogrebnoy aecfb7a5cd PG-232: remove unused function
and fix a couple of typos
2021-09-21 10:37:10 +03:00
Diego Fronza a847ed95de PG-223: Remove relations and num_relations from pgssSharedState.
These variables can't be in shared state, as the following problem was
taking place:
1. Process 1 calls pgss_ExecutorCheckPerms(), acquires the lock, updates
   relations and num_relations, releases the lock.
2. Process 2 calls pgss_ExecutorCheckPerms(), acquires the lock, updates
   num_relations = 0.
3. Process 1 reads num_relations = 0 in pgss_update_entry; this value is
   wrong as it was updated by Process 2.

Even if we acquired the lock in pgss_update_entry to read the num_relations
and relations variables, Process 1 may end up acquiring the lock after
Process 2 has overwritten the variable values, leading to Process 1
reading wrong data.

By defining relations and num_relations to be static and global in
pg_stat_monitor.c, we take advantage of the fact that each individual
PostgreSQL backend has its own copy of this data, which allows us to remove
the locking in pgss_ExecutorCheckPerms when updating these variables,
improving pg_stat_monitor's overall performance.
2021-08-27 15:53:11 -04:00
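
The whole fix reduces to where the variables live. A sketch (sizes are
illustrative): as backend-local statics, each PostgreSQL backend gets its
own copy, so no lock is needed in the permission-check hook:

#include "postgres.h"

#define REL_LST_SIZE 16                 /* illustrative capacity */

static Oid relations[REL_LST_SIZE];     /* private to this backend */
static int num_relations = 0;           /* updated without locking */
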
Diego Fronza 775c087fb2 PG-222: Add benchmark support for measuring hook execution time.
Added a new view 'pg_stat_monitor_hook_stats' that provides execution
time statistics for all hooks installed by the module. Following is a
description of the fields:
  -       hook: The hook function name.
  -   min_time: The fastest execution time recorded for the given hook.
  -   max_time: The slowest execution time recorded for the given hook.
  - total_time: Total execution time taken by all calls to the hook.
  -   avg_time: Average execution time of a call to the hook.
  -     ncalls: Total number of calls to the hook.
  - load_comparison: The percentage of time taken by an individual hook
                     compared to every other hook.

To enable benchmarking, the code must be compiled with the -DBENCHMARK flag;
this replaces each hook function with a function of the same name plus a
'_benchmark' suffix, e.g. hook_function_benchmark.

The hook_function_benchmark will call the original function and
calculate the amount of time it took to execute, then it will update
statistics for that hook.
2021-08-23 11:50:56 -04:00
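
A sketch of one such '_benchmark' wrapper, timed with PostgreSQL's stock
instr_time macros; update_hook_stats() is hypothetical bookkeeping and the
wrapped hook follows the naming scheme described above:

#include "postgres.h"
#include "executor/executor.h"
#include "portability/instr_time.h"

extern void pgss_ExecutorEnd(QueryDesc *queryDesc);
extern void update_hook_stats(const char *hook, uint64 usec); /* hypothetical */

static void
pgss_ExecutorEnd_benchmark(QueryDesc *queryDesc)
{
    instr_time  start, duration;

    INSTR_TIME_SET_CURRENT(start);
    pgss_ExecutorEnd(queryDesc);            /* call the original hook */
    INSTR_TIME_SET_CURRENT(duration);
    INSTR_TIME_SUBTRACT(duration, start);   /* duration -= start */

    update_hook_stats("pgss_ExecutorEnd",
                      INSTR_TIME_GET_MICROSEC(duration));
}
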
Diego Fronza 99f01d37e3 PG-200: Add application name to the bucket ID.
Add application name to the key used to identify queries in the hash
table; this allows different applications to have separate entries in the
pg_stat_monitor view if they issue the same query.
2021-07-29 15:05:08 -04:00
Ibrar Ahmed f470ba591a PG-194: PostgreSQL-14 support added. 2021-05-21 21:29:58 +05:00
Ibrar Ahmed 67b3d961ca PG-193: Comment based tags to identify different parameters. 2021-05-19 22:15:37 +05:00
Ibrar Ahmed 89614e442b PG-190: Does not show query, if query elapsed time is greater than bucket time. 2021-04-08 15:29:42 +05:00
Ibrar Ahmed f8ed33a92a PG-188: Added a new column to monitor the query state. 2021-03-21 00:04:39 +05:00
Ibrar Ahmed 96a7603aae PG-187: Compilation Error for PostgreSQL 11 and PostgreSQL 12. 2021-03-17 20:42:58 +05:00