Commit Graph

432 Commits (f09643f592fcf1b0b7e74f0944018a0bf13d45b8)

Author SHA1 Message Date
Ibrar Ahmed 3d9273e9b1
Merge pull request #118 from Naeem-Akhter/naeem
PG-258: Re-enable Coveralls code coverage for pg_stat_monitor.
2021-10-13 23:09:37 +05:00
Naeem Akhter 32a13f8462 PG-258: Re-enable Coveralls code coverage for pg_stat_monitor.
As codecov is not approved by IT due to some known vulnerabilities in the past, reverting to coveralls.io to make sure that at least code coverage is available.
2021-10-13 22:00:41 +05:00
Diego Fronza 8fdf0946fe PG-254: Add query location to hash table entries.
Whenever a new query is added to a query buffer, record the position
at which the query was inserted; we can then use this information to
locate the query faster later on when required.

This allowed us to simplify the logic in hash_entry_dealloc() after
creating the list of pending queries: as the list is scanned, we can copy
each query from the previous query buffer to the new one by using the
query position (query_pos), which avoids scanning the whole query buffer
when looking up the queryid.

Also, when moving a query to a new buffer (copy_query), we update
the query_pos for the hash table entry, so it points to the right place
in the new query buffer.

The read_query() function was updated to allow the query position to be
passed as an argument; if pos != 0 it is used to locate the query directly,
otherwise we fall back to the previous mode of scanning the whole buffer.

SaveQueryText() was updated to take a reference to the query position as
an argument; when the function finishes, this value holds the position at
which the query was stored in the buffer.
2021-10-07 10:06:20 -03:00
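To illustrate the positional lookup described in the commit above, here is a minimal C sketch; the record layout (queryid, length, text) and all names are assumptions for illustration, not the extension's actual buffer format.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Assumed record layout: [uint64 queryid][uint64 len][len bytes of text]. */
    static bool
    read_query_sketch(const unsigned char *buf, size_t buf_size,
                      uint64_t queryid, char *dest, size_t pos)
    {
        size_t off = (pos != 0 && pos < buf_size) ? pos : 0;   /* fast path */

        while (off + 2 * sizeof(uint64_t) <= buf_size)
        {
            uint64_t id, len;

            memcpy(&id, buf + off, sizeof(uint64_t));
            memcpy(&len, buf + off + sizeof(uint64_t), sizeof(uint64_t));
            if (off + 2 * sizeof(uint64_t) + len > buf_size)
                return false;                     /* truncated record */

            if (id == queryid)
            {
                memcpy(dest, buf + off + 2 * sizeof(uint64_t), len);
                dest[len] = '\0';
                return true;
            }
            if (pos != 0)
                return false;                     /* direct lookup missed */
            off += 2 * sizeof(uint64_t) + len;    /* fall back to full scan */
        }
        return false;
    }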
Anastasia Alexadrova f986ac0096 PG-255 Doc - Fix README title and TOC
modified:   README.md
2021-10-07 15:11:25 +03:00
Diego Fronza fcb70ffed1 PG-254: Removal of pgss_query_hash hash table.
There was no need for a separate hash table to keep track
of queries, as all the necessary information is already available in
the pgss_hash table.

The code that moves pending queries' text in hash_query_entry_dealloc
was merged into hash_entry_dealloc.

A couple of functions were no longer necessary and were removed:
- hash_create_query_entry
- pgss_store_query_info
- pgss_get_entry (this logic was added directly into pgss_store).

We simplified the logic in pgss_store; it basically works as follows (a sketch follows this entry):
1. Create a key (bucketid, queryid, etc...) for the query event.
2. Look up the key in pgss_hash; if no entry exists for the key, create a
   new one and save the query text into the current query buffer.
3. If jstate == NULL, then update stats counters for the entry.
2021-10-06 14:57:15 -03:00
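A rough sketch of that three-step flow. The key/entry types and the helpers get_current_bucket(), update_counters() and SaveQueryText()'s signature are assumptions, as are the pgss and pgss_hash globals; only the overall shape follows the commit above.

    /* Assumed, simplified types. */
    typedef struct { uint64 bucket_id; uint64 queryid; Oid userid; Oid dbid; } pgssHashKey;
    typedef struct { pgssHashKey key; Counters counters; size_t query_pos; } pgssEntry;

    static void
    pgss_store_sketch(uint64 queryid, const char *query, JumbleState *jstate)
    {
        pgssHashKey key;
        pgssEntry  *entry;
        bool        found;

        /* 1. Build the key for this query event. */
        memset(&key, 0, sizeof(key));
        key.bucket_id = get_current_bucket();
        key.queryid   = queryid;
        key.userid    = GetUserId();
        key.dbid      = MyDatabaseId;

        LWLockAcquire(pgss->lock, LW_EXCLUSIVE);

        /* 2. Look up the key; on a miss, create the entry and save the text
         *    into the current query buffer. */
        entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_ENTER_NULL, &found);
        if (entry && !found)
            SaveQueryText(key.bucket_id, queryid, query, &entry->query_pos);

        /* 3. Outside the parse stage (jstate == NULL), update the counters. */
        if (entry && jstate == NULL)
            update_counters(&entry->counters);

        LWLockRelease(pgss->lock);
    }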
Diego Fronza fc9e630497 PG-254: Postpone variable initialization.
Many variables were being initialized in pgss_store() before checking
whether the module was actually active, which wastes CPU cycles when the
module is disabled.

To fix that, variables are declared and initialized only after checking
that pg_stat_monitor is active.
2021-10-06 14:44:57 -03:00
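The change boils down to an early-exit pattern; a sketch, with pgsm_enabled(), compute_queryid() and normalize_query() standing in for the real checks and setup work:

    static void
    pgss_store_sketch(const char *query)
    {
        uint64  queryid;
        char   *norm_query;

        /* Check first: skip all setup work when the module is disabled. */
        if (!pgsm_enabled())
            return;

        /* Pay the initialization cost only for active captures. */
        queryid    = compute_queryid(query);
        norm_query = normalize_query(query);
        /* ... rest of the storing logic ... */
    }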
Ibrar Ahmed a93ba37ac3 PG-246, PG-211, PG-147: Fix performance issue while querying pg_stat_monitor.
This performance fix resolves two more issues (PG-211, PG-147).
2021-10-05 20:33:34 +00:00
Ibrar Ahmed 8e39b02bce
Merge pull request #114 from darkfronza/PG-244_move_pending_queries_to_next_query_buffer
Pg 244 move pending queries to next query buffer
2021-10-04 23:59:36 +05:00
Diego Fronza a959acb3d5 PG-244: Move pending queries' text to new bucket after bucket expiration.
Added code to move all pending queries text from an expired bucket's query
buffer to the next, active query buffer.

The previous implementation was not very efficient; it worked like this
whenever a query was processed and a bucket expired:

1. Allocate memory to save the contents of the next query buffer.
2. Clear the next query buffer.
3. Iterate over pgss_query_hash, then, for each entry:
   - If the entry's bucket id is equal to the next bucket then:
     -- If the query for this entry has finished or ended in error, then
        remove it from the hash table.
     -- Else, if the query is not yet finished, copy the query from the
        backup query buffer to the new query buffer.
        Now, this copy was really expensive because it was implemented
        using read_query() / SaveQueryText(), and read_query() scans the
        whole query buffer looking for a given query ID; since we do this
        extra lookup loop for each pending query, we end up with an O(n^2)
        algorithm.
4. Release the backup query buffer.

Since we now always move pending queries from an expired bucket to the
next one, there is no need to scan the next query buffer for pending
queries (the pending queries are always in the current bucket, and when
it expires we move them to the next one).

Taking that into consideration, the new implementation works as follows
whenever a bucket expires:

1. Clear the next query buffer (all entries).
2. Define an array to store pending query IDs from the expired bucket;
   we use this array later in the algorithm.
3. Iterate over pgss_query_hash, then, for each entry:
   - If the entry's bucket id is equal to the next bucket then:
     -- If the query for this entry has finished or ended in error, then
        remove it from the hash table. This is the same as in the previous
        implementation.
   - Else, if the entry's bucket id is equal to the just expired bucket
     id (old bucket id) and the query state is pending (not yet finished),
     then add this query ID to the array of pending query IDs.

     Note: We define the array to hold up to 128 pending entries; if there
     are more entries than this, we try to allocate memory on the heap to
     store them. If that allocation fails, we manually copy every pending
     query past the 128th to the next query buffer using the previous
     algorithm (read_query() / SaveQueryText); this should be a very rare
     situation.
4. Finally, if there are pending queries, copy them from the previous
   query buffer to the next one using copy_queries.

Now, copy_queries() is better than looping through each query entry and
calling read_query() / SaveQueryText() to copy each of them to the new
buffer because, as explained, read_query() scans the whole query buffer
on every call.

copy_queries(), instead, scans the query buffer only once; for every
element it checks whether the current query ID is in the list of queries
to be copied, an array of uint64 values small enough to fit in the L1
cache.

Another important fix in this commit is the addition of line 1548 in
pg_stat_monitor.c / pgss_store():
query_entry->state = kind;

Before this line was added, entries in the pgss_query_hash
hash table were not having their status updated when a query entered
execution, finished, or ended in error, effectively leaving all entries
as pending; thus, whenever a bucket expired, all entries were copied
from the expired bucket to the next one.
2021-10-01 15:27:45 -03:00
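A minimal C sketch of a copy_queries-style single pass, using the same assumed record layout as in the earlier sketch (queryid, length, text); pending_ids plays the role of the small array of pending query IDs the commit describes.

    #include <stdint.h>
    #include <string.h>

    static void
    copy_queries_sketch(const unsigned char *src, size_t src_size,
                        unsigned char *dst, size_t *dst_used,
                        const uint64_t *pending_ids, size_t n_pending)
    {
        size_t off = 0;

        while (off + 2 * sizeof(uint64_t) <= src_size)
        {
            uint64_t id, len;
            size_t   rec_len;

            memcpy(&id, src + off, sizeof(uint64_t));
            memcpy(&len, src + off + sizeof(uint64_t), sizeof(uint64_t));
            rec_len = 2 * sizeof(uint64_t) + len;
            if (off + rec_len > src_size)
                break;                              /* truncated record */

            /* The pending-id array is tiny (<= 128 entries by default), so
             * this membership test stays cache friendly. */
            for (size_t i = 0; i < n_pending; i++)
            {
                if (pending_ids[i] == id)
                {
                    memcpy(dst + *dst_used, src + off, rec_len);
                    *dst_used += rec_len;
                    break;
                }
            }
            off += rec_len;
        }
    }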
Ibrar Ahmed 87cfcc7a06
Merge pull request #112 from nastena1606/PG-207-Readme-updates
PG-207 Readme updates
2021-10-01 22:26:53 +05:00
Diego Fronza 89743e9243 PG-244: Fix race condition in get_next_wbucket().
The if condition below in get_next_wbucket() was subject to a race
condition:
if ((current_usec - pgss->prev_bucket_usec) > (PGSM_BUCKET_TIME * 1000 * 1000))

Two or more threads/processes could easily evaluate this condition to
true, thus executing more than once the block that calculates a new
bucket id and clears/moves old entries in the pgss_query_hash and
pgss_hash hash tables.

To avoid this problem, we define the prev_bucket_usec and current_wbucket
variables as atomic and execute a loop to check whether another thread has
updated prev_bucket_usec before the current one.
2021-09-30 17:13:27 -03:00
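A sketch of that check using PostgreSQL's atomics API (port/atomics.h); max_buckets and the surrounding plumbing are assumptions, only the compare-and-swap pattern reflects the commit above.

    #include "postgres.h"
    #include "port/atomics.h"

    static uint64
    get_next_wbucket_sketch(pg_atomic_uint64 *prev_bucket_usec,
                            pg_atomic_uint64 *current_wbucket,
                            uint64 current_usec, uint64 bucket_time_usec,
                            uint64 max_buckets)
    {
        uint64 prev = pg_atomic_read_u64(prev_bucket_usec);

        while (current_usec - prev > bucket_time_usec)
        {
            /* Only the backend that wins this CAS rotates the bucket; losers
             * get prev refreshed by the failed CAS and re-check the condition. */
            if (pg_atomic_compare_exchange_u64(prev_bucket_usec, &prev, current_usec))
            {
                uint64 next = (pg_atomic_read_u64(current_wbucket) + 1) % max_buckets;

                pg_atomic_write_u64(current_wbucket, next);
                /* ...clear/move old entries for the new bucket here... */
                return next;
            }
        }
        return pg_atomic_read_u64(current_wbucket);
    }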
Ibrar Ahmed 361c7d370a PG-211: Fix an issue while generating the histogram. 2021-09-30 18:47:12 +00:00
Anastasia Alexadrova d1e01aeaab PG-207 Readme updates
modified:   README.md
2021-09-30 19:37:23 +03:00
Ibrar Ahmed f269af3da2
Merge pull request #111 from darkfronza/PG-230_fix_duplicate_query_entries
PG-230: Fix duplicated query entries.
2021-09-29 14:24:52 +05:00
Ibrar Ahmed 619a929dfa
Merge pull request #110 from Naeem-Akhter/naeem
PG-227: Enable PGSM master branch regression/test with latest PG releases.
2021-09-29 00:43:38 +05:00
Diego Fronza 273f23b161 PG-230: Fix duplicated query entries.
A problem during bucket management was allowing some queries to be
duplicated; old entries would sit around without having their statistics
updated.

The problem would arise during the following chain of events:
1. A query goes through pgss_post_parse_analyze, in this stage (PGSS_PARSE)
   we only save the query into the query buffer and create an entry in the
   query hash table.
2. The query then goes through pgss_ExecutorStart (PGSS_EXEC), in this stage
   we create an entry for query statistic counters with default values,
   all time stats equal zero, etc.
3. The query then goes through pgss_ExecutorEnd (PGSS_FINISH); in this stage
   we update the query statistics: number of calls, total time taken, min_time, etc.

The problem is that between steps 2 and 3, the current bucket ID timer may
have expired.

For example, during steps 1 and 2 the query may have been stored in
bucket ID 1, but when the query is finished (pgss_ExecutorEnd) the
current bucket ID may have been updated to 2.

This leaves an entry for the query in bucket ID 1 with state ACTIVE and
with its time statistics not yet updated.

This also creates an entry for the query in bucket ID 2, with all
statistics (time and others) updated for this entry.

To solve this problem, during the transition to a new bucket ID we scan all
pending queries in the previous bucket and move them to the new bucket
(a sketch follows this entry).

This way, finished queries will always be associated with the bucket ID
that was active at the time they finished.
2021-09-28 16:12:34 -03:00
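The move itself might look roughly like the following two-pass sketch: collect the keys of still-pending entries during a sequential scan, then re-insert them under the new bucket. The types (pgssEntry, pgssHashKey), the PGSS_PENDING state, the fixed 128-entry bound and the counters copy are assumptions; only the dynahash calls (hash_seq_init, hash_seq_search, hash_search) are real PostgreSQL APIs.

    static void
    move_pending_entries_sketch(HTAB *hash, uint64 old_bucket, uint64 new_bucket)
    {
        HASH_SEQ_STATUS status;
        pgssEntry      *entry;
        pgssHashKey     pending[128];           /* assumed small upper bound */
        int             n = 0;

        /* Pass 1: remember which entries are still pending in the old bucket. */
        hash_seq_init(&status, hash);
        while ((entry = (pgssEntry *) hash_seq_search(&status)) != NULL)
        {
            if (entry->key.bucket_id == old_bucket &&
                entry->state == PGSS_PENDING && n < 128)
                pending[n++] = entry->key;
        }

        /* Pass 2: re-insert each one under the new bucket id and drop the
         * stale entry, so the query finishes in the active bucket. */
        for (int i = 0; i < n; i++)
        {
            pgssHashKey key = pending[i];
            pgssEntry  *old = (pgssEntry *) hash_search(hash, &pending[i], HASH_FIND, NULL);
            pgssEntry  *moved;
            bool        found;

            key.bucket_id = new_bucket;
            moved = (pgssEntry *) hash_search(hash, &key, HASH_ENTER_NULL, &found);
            if (old && moved && !found)
            {
                moved->counters = old->counters;
                moved->state    = old->state;
            }
            hash_search(hash, &pending[i], HASH_REMOVE, NULL);
        }
    }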
Naeem Akhter a9bd805322 PG-227: Enable PGSM master branch regression/test with latest PG releases.
Enable PGSM master branch regression/testing with the latest Postgres releases (actual
released packages) for supported server versions. This makes sure that PGSM
compatibility has already been tested well before PPG releases are made.
2021-09-27 15:24:17 +05:00
Hamid Akhtar 0403ca951c
Create code-of-conduct.md 2021-09-27 13:10:39 +05:00
Ibrar Ahmed d42fd93bbb
Merge pull request #109 from Naeem-Akhter/master
PG-226: Enable installcheck-world regression.
2021-09-23 22:07:03 +05:00
Ibrar Ahmed c1863b2a72
Merge pull request #108 from dAdAbird/unused_func
PG-232: Remove unused function
2021-09-23 03:13:31 +05:00
Naeem Akhter f7588532be PG-226: Enable installcheck-world regression.
1) Enabled configure and build with proper flags and environment to make sure that
the built server is aligned with the pg and ppg package distributions in terms of features
and configuration. The earlier build (configure) was not aligned with the pg community
standard configuration used for community builds and distribution.

2) Enabled the installcheck-world regression test suites of the pg server to verify
the stability and compatibility of the pg server after loading pg_stat_monitor into
the server.

3) Changed the expected files error.out and error_1.out for the error test case.

4) Disabled and removed coverage using coveralls.io, as it was not serving the purpose.

(Note: installcheck-world was failing on pg-14 due to some changes made in pg14
or upstream PGSS that cause an additional line (Query Identifier) in the output of some
of the installcheck-world test cases, so it is not enabled in this commit. The problem with
the pg14 server installcheck regression is the output of an extra line (Query identifier ****)
in some of the test cases after loading the PGSM extension, which causes the regression to fail
for the server after the library is loaded.)
2021-09-23 03:10:06 +05:00
Andrew Pogrebnoy b4d4dae29f PG-232: remove unused declarations 2021-09-22 17:10:52 +03:00
Andrew Pogrebnoy aecfb7a5cd PG-232: remove unused function
and fix a couple of typos
2021-09-21 10:37:10 +03:00
Ibrar Ahmed 4a30cccdac
Merge pull request #104 from LenzGr/doc-fixes
Documentation: Minor textual/formatting improvements
2021-09-20 10:41:15 +05:00
Ibrar Ahmed 0d1bf6b237
Merge pull request #103 from nastena1606/PG-208-Contributing-guide
PG-208: Doc Added Contributing guide
2021-09-20 10:40:56 +05:00
Mikhail Samoylov 4a2a042955 Test pg_stat_monitor with pg from packages 2021-09-11 10:12:01 +03:00
Anastasia Alexadrova 88492e5b9b PG-208 Doc Added Contributing guide
new file: CONTRIBUTING.md
2021-09-09 15:50:30 +03:00
Lenz Grimmer 40e6dff0f9 Documentation: Minor textual/formatting improvements
Updated `README.md` and `USER_GUIDE.md`, improved wording
and formatting. Slightly updated description in `META.json`.

Signed-off-by: Lenz Grimmer <lenz.grimmer@percona.com>
2021-09-09 11:05:31 +02:00
Ibrar Ahmed 0ceef74071
Bumping version for upcoming 0.9.2-beta1 release 2021-09-06 15:40:53 +05:00
Ibrar Ahmed b48522961b
Merge pull request #102 from EngineeredVirus/master
Bumping version for upcoming 0.9.2-beta1 release
2021-09-06 15:30:48 +05:00
Hamid Akhtar d633126a2d Bumping version for upcoming 0.9.2-beta1 release 2021-09-06 15:21:55 +05:00
Ibrar Ahmed bd4ae3ec37
Merge pull request #101 from darkfronza/PG-225_fix_deadlock
PG-225: Fix deadlock in pgss_store.
2021-09-06 15:08:19 +05:00
Diego Fronza eafb2e89a8 PG-225: Fix deadlock in pgss_store.
The deadlock scenario is described below:
1. pgss_store is called, it acquires the lock pgss->lock.
2. An error occurs, mostly out of memory, when accessing the internal hash
   tables used to store internal data, in the functions
   pgss_store_query_info and pgss_get_entry.
3. elog() is called to report the out-of-memory error.
4. Our pgsm_emit_log_hook is called; it calls pgss_store_error, which in
   turn calls pgss_store.
5. It tries to acquire the already-acquired lock pgss->lock, and a deadlock happens.

To fix the problem, this commit makes two modifications worth mentioning
(sketched after this entry):
1. We now pass the HASH_ENTER_NULL flag to hash_search instead of
   HASH_ENTER; as seen in the PostgreSQL sources, this prevents the
   function from reporting an error when out of memory and makes it
   return NULL instead, so we can handle the error ourselves.
2. In pgss_store, if an error happens after pgss->lock is acquired, we only
   set a flag; then, after releasing the lock, we check whether the flag is
   set and report the error accordingly.
2021-09-03 14:43:32 -04:00
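A sketch of the two changes together; pgss, pgss_hash and the entry types are assumed names, while HASH_ENTER_NULL, LWLockAcquire/LWLockRelease and elog are real PostgreSQL APIs.

    static void
    pgss_store_sketch(pgssHashKey *key)
    {
        pgssEntry *entry;
        bool       out_of_memory = false;

        LWLockAcquire(pgss->lock, LW_EXCLUSIVE);

        /* HASH_ENTER_NULL returns NULL on allocation failure instead of
         * reporting an error, so no log hook can re-enter pgss_store while
         * we still hold pgss->lock. */
        entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER_NULL, NULL);
        if (entry == NULL)
            out_of_memory = true;
        else
        {
            /* ...update the entry's counters here... */
        }

        LWLockRelease(pgss->lock);

        /* Report the failure only after the lock is released. */
        if (out_of_memory)
            elog(WARNING, "pg_stat_monitor: out of memory");
    }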
Ibrar Ahmed e541633670
Merge pull request #100 from darkfronza/PG-204_save_previous_emit_log_hook
PG-204: Fix save previous emit_log_hook.
2021-09-01 23:43:36 +05:00
Diego Fronza b8198278cd PG-204: Fix save previous emit_log_hook.
pg_stat_monitor was not saving the pointer to the previous emit_log_hook;
this commit fixes the issue.
2021-09-01 11:19:48 -04:00
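The fix amounts to the standard hook-chaining pattern, sketched below with the pgsm_emit_log_hook body elided; emit_log_hook, emit_log_hook_type and ErrorData are PostgreSQL's real symbols.

    #include "postgres.h"
    #include "utils/elog.h"

    static emit_log_hook_type prev_emit_log_hook = NULL;
    static void pgsm_emit_log_hook(ErrorData *edata);

    void
    _PG_init(void)
    {
        /* Remember whatever hook was installed before us, then install ours. */
        prev_emit_log_hook = emit_log_hook;
        emit_log_hook = pgsm_emit_log_hook;
    }

    static void
    pgsm_emit_log_hook(ErrorData *edata)
    {
        /* ...pg_stat_monitor's own error accounting... */

        /* Chain to the previously installed hook, if any. */
        if (prev_emit_log_hook)
            prev_emit_log_hook(edata);
    }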
Ibrar Ahmed 81a2c705b6 Removing commit (4d1c2e6) because it belongs in its own branch. 2021-09-01 11:27:19 +00:00
Ibrar Ahmed bd2ebf2a5b Regression output fix. 2021-09-01 10:25:42 +00:00
Ibrar Ahmed 6695670d73
Merge pull request #99 from darkfronza/PG-224_fix_memory_leak
PG-224: Fix memory leak on extract_query_comments.
2021-09-01 14:57:11 +05:00
Ibrar Ahmed 7a543bafb0 Regression output fix. 2021-09-01 09:55:35 +00:00
Diego Fronza b5aa300e82 PG-224: Fix memory leak on extract_query_comments.
There were two objects leaking memory in this function: the comments
variable of type char *, allocated using palloc0, and the regular
expression object preg.

If the regcomp function failed, the function was returning without
releasing the comments variable allocated previously.

If the regexec function failed, the function was returning without
releasing the preg object and the comments variable.

This commit makes two changes. First, it turns comments in
extract_query_comments into an argument to the function; in pgss_store we
declare the comments variable on the stack, so it cleans up after
itself.

The second change was to move the regular expression object to global
scope; this way we compile the object only once, during module
initialization.

With these two changes we fix the memory leak and avoid
allocating/releasing memory for every call to this function.
2021-08-31 18:29:22 -04:00
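A sketch of the resulting shape: the regex is compiled once at module load, and the caller-supplied buffer removes the need to palloc0/pfree inside the function, so nothing can leak on an early return. The comment pattern and names are illustrative only.

    #include <regex.h>
    #include <string.h>

    static regex_t comments_regex;

    /* Compile once, e.g. from _PG_init(); the pattern is only an example. */
    static void
    init_comments_regex(void)
    {
        regcomp(&comments_regex, "/\\*.*\\*/", REG_EXTENDED);
    }

    /* 'comments' is owned by the caller (a stack buffer in pgss_store), so
     * there is nothing to free on any early-return path. */
    static void
    extract_query_comments_sketch(const char *query, char *comments, size_t max_len)
    {
        regmatch_t m;
        size_t     len;

        comments[0] = '\0';
        if (regexec(&comments_regex, query, 1, &m, 0) != 0)
            return;                             /* no match, nothing leaked */

        len = (size_t) (m.rm_eo - m.rm_so);
        if (len >= max_len)
            len = max_len - 1;
        memcpy(comments, query + m.rm_so, len);
        comments[len] = '\0';
    }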
Ibrar Ahmed 3d3ece2f99 PG-221: Use alternate of GetUserID function in error hook. 2021-08-31 14:55:53 +00:00
Ibrar Ahmed aee45ebe52 PG-221: Use alternate of GetUserID function in error hook. 2021-08-31 12:10:11 +00:00
Ibrar Ahmed bdab22e7cc
Merge pull request #98 from mu-samoylov/master
Add PPG job
2021-08-28 23:07:50 +05:00
Ibrar Ahmed f7f38e363f
Merge pull request #97 from darkfronza/PG-223_improve_performance
PG-223: improve performance
2021-08-28 23:07:32 +05:00
Diego Fronza 549347025d PG-223: Use memcpy and strlcpy to copy relations to counters.
We redefine the _snprintf macro to use memcpy, which performs better. We
also update call sites using this macro to add the null terminator
'\0' to the source string length; this way memcpy also correctly
copies the null terminator to the destination string.

We update the _snprintf2 macro to use strlcpy. The reason we don't use
memcpy here is that at the place where this macro is called,
pgss_update_entry, only the maximum string length of REL_LEN=1000 is
given as an upper bound for copying the relations string vector to the
destination counters; since this data is a string, we don't need to
copy 1k bytes for every entry, and with strlcpy the copy ends as soon as
the null terminator '\0' is found in the source string.
2021-08-27 17:07:34 -04:00
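Hypothetical shapes of the two macros (the real argument lists in pg_stat_monitor may differ); they only illustrate the memcpy-with-terminator and strlcpy choices explained above. strlcpy here is the one provided by PostgreSQL's port library.

    #include <string.h>

    /* Call sites pass strlen(src) + 1 as len so the '\0' is copied too. */
    #define _snprintf(dst, src, len, maxlen) \
        memcpy((dst), (src), (len) < (maxlen) ? (len) : (maxlen))

    /* strlcpy stops at the source's '\0', so short relation lists never pay
     * for a full REL_LEN-sized copy. */
    #define _snprintf2(dst, src, maxlen) \
        strlcpy((dst), (src), (maxlen))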
Mikhail Samoylov 58f707c6d9 Add PPG job 2021-08-28 00:05:52 +03:00
Diego Fronza a847ed95de PG-223: Remove relations and num_relations from pgssSharedState.
These variables can't be in shared state, as the following problem was
taking place:
1. Process 1 calls pgss_ExecutorCheckPerms(), acquires the lock, updates
   relations and num_relations, releases the lock.
2. Process 2 calls pgss_ExecutorCheckPerms(), acquires the lock, sets
   num_relations = 0.
3. Process 1 reads num_relations = 0 in pgss_update_entry; this value is
   wrong, as it was updated by Process 2.

Even if we acquire the lock in pgss_update_entry to read the num_relations
and relations variables, Process 1 may end up acquiring the lock after
Process 2 has overwritten the variable values, leading to Process 1
reading wrong data.

By defining relations and num_relations as static and global in
pg_stat_monitor.c, we take advantage of the fact that each individual
PostgreSQL backend has its own copy of this data, which allows us to remove
the locking in pgss_ExecutorCheckPerms when updating these variables,
improving pg_stat_monitor's overall performance.
2021-08-27 15:53:11 -04:00
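The fix leans on PostgreSQL's process-per-backend model: a file-scope static in pg_stat_monitor.c is private to each backend process, so no lock is needed to update it. A minimal sketch, with MAX_RELS and the element type as assumptions:

    #define MAX_RELS 64                     /* assumed capacity */

    /* One private copy per backend process; pgss_ExecutorCheckPerms() can
     * update these without touching pgss->lock. */
    static char relations[MAX_RELS][NAMEDATALEN];
    static int  num_relations = 0;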
Ibrar Ahmed f3962509fb
Merge pull request #94 from darkfronza/PG-203_possible_memory_leak
PG-203: Fix memory leak.
2021-08-23 21:30:54 +05:00
Ibrar Ahmed 8202900f3a
Merge pull request #96 from darkfronza/PG-222_add_hook_runtime_execution_benchmark
PG-222: Add benchmark support for measuring hook execution time.
2021-08-23 21:30:31 +05:00
Ibrar Ahmed a78dddd790
Merge pull request #95 from Naeem-Akhter/master
PG-195: Change pgsm_bucket_time value from 300 to 60.
2021-08-23 21:30:17 +05:00