There was no maximum limit set for the number of histogram buckets,
which could lead to a crash when a higher value was configured.
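A hard upper bound can be enforced where the GUC is defined. Below is a
minimal sketch using PostgreSQL's DefineCustomIntVariable(); the variable
name, limits, and boot value are illustrative stand-ins, not the
extension's actual identifiers or numbers.

#include "postgres.h"
#include "utils/guc.h"

#define MAX_HISTOGRAM_BUCKETS 50            /* assumed upper bound */

static int pgsm_histogram_buckets = 10;     /* illustrative GUC variable */

static void
init_histogram_guc(void)
{
    DefineCustomIntVariable("pg_stat_monitor.pgsm_histogram_buckets",
                            "Maximum number of histogram buckets.",
                            NULL,
                            &pgsm_histogram_buckets,
                            10,                     /* boot value */
                            2,                      /* minimum */
                            MAX_HISTOGRAM_BUCKETS,  /* enforced maximum */
                            PGC_POSTMASTER,
                            0,
                            NULL, NULL, NULL);
}

With an explicit maxValue, an out-of-range setting is rejected at
configuration time instead of crashing at runtime.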
PG-382: Adjust the maximum value for histogram buckets.
Fix the regression issue for PostgreSQL-12.
PG-382: Adjust the maximum value for histogram buckets.
Fix the TAP test cases.
While analyzing the pg_stat_monitor installation scripts I found several
vulnerabilities. pg_stat_monitor uses CREATE OR REPLACE to install its
functions, which is a security hazard: an attacker can pre-create the
functions, have a superuser install the extension, and after installation
swap a function out for a malicious version, since the attacker would
still own it. Instead of CREATE OR REPLACE, the installation script
should use plain CREATE to prevent this attack.
For reference:
https://www.postgresql.org/docs/current/extend-extensions.html#EXTEND-EXTENSIONS-SECURITY
https://github.com/timescale/pgspot
Similar to pg_stat_statements, pg_stat_monitor tracks WAL data metrics
since PostgreSQL 13. The problem was that for PostgreSQL versions 11 and
12 we left the WalUsage variable declared on the stack uninitialized,
thus leading to garbage values.
Fixed the problem by ignoring the WalUsage variable value for PG <= 12.
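A minimal sketch of such a version guard is shown below; WalUsage and
pgWalUsage are the real PostgreSQL 13+ symbols, while the holder struct
and function name are illustrative assumptions, not the extension's code.

#include "postgres.h"
#if PG_VERSION_NUM >= 130000
#include "executor/instrument.h"    /* WalUsage, pgWalUsage (PG 13+) */
#endif

typedef struct PgsmWalCounters      /* hypothetical holder for the view */
{
    int64  wal_records;
    int64  wal_fpi;
    uint64 wal_bytes;
} PgsmWalCounters;

static void
pgsm_collect_wal(PgsmWalCounters *dst)
{
#if PG_VERSION_NUM >= 130000
    /* pgWalUsage is the backend-global counter maintained by core. */
    dst->wal_records = pgWalUsage.wal_records;
    dst->wal_fpi = pgWalUsage.wal_fpi;
    dst->wal_bytes = pgWalUsage.wal_bytes;
#else
    /* No WAL usage tracking before PG 13: report zeros instead of
     * reading an uninitialized stack variable. */
    dst->wal_records = 0;
    dst->wal_fpi = 0;
    dst->wal_bytes = 0;
#endif
}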
This commit brings the following changes to this branch:
1) Port TAP testing changes/additions from the main branch to this branch, under PG-292.
2) Changes to test cases due to GUC changes, under PG-331.
3) Call count verifications, under PG-338.
4) Changes to GitHub workflows to accommodate automation for TAP testing, under PG-343.
To check if a bucket has expired, a comparison of the time elapsed
since the last bucket change was being done in the get_next_wbucket()
function using the following line:
while ((current_usec - current_bucket_usec) > (PGSM_BUCKET_TIME * 1000 * 1000))
The problem is that the expression compares a uint64 (current_usec)
with an int32 (PGSM_BUCKET_TIME). If a user configures a value T for the
pgsm_bucket_time GUC large enough that T * 1000 * 1000 exceeds the int32
range (2**31 - 1), the multiplication wraps to a negative integer, which
is then cast to uint64 and becomes a huge value. The comparison then
always evaluates to false, so the bucket number is never updated.
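The standalone program below demonstrates the failure mode and the fix
(promoting to 64 bits before multiplying); the variable names and values
are illustrative only, and the wrap-around of the signed multiplication
is technically undefined behavior that merely happens in practice on
common platforms.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t  pgsm_bucket_time = 3000;              /* seconds; anything > 2147 overflows */
    uint64_t elapsed_usec = 5000ULL * 1000 * 1000; /* 5000 seconds elapsed */

    /* Buggy form: the multiplication is done in 32-bit signed arithmetic,
     * wraps to a negative value, and is converted to a huge uint64, so
     * the comparison is false even though the bucket has expired. */
    int buggy = elapsed_usec > (uint64_t) (pgsm_bucket_time * 1000 * 1000);

    /* Fixed form: promote to 64 bits before multiplying. */
    int fixed = elapsed_usec > (uint64_t) pgsm_bucket_time * 1000 * 1000;

    printf("buggy: %d, fixed: %d\n", buggy, fixed);  /* prints "buggy: 0, fixed: 1" */
    return 0;
}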
When querying the pg_stat_monitor view, every entry is checked for
expiry by calling IsBucketValid(bucket_number). Using the entry's bucket
number, the function computes the time elapsed since the bucket started,
read from the shared global variable pgss->bucket_start_time[bucket_id],
and decides whether the bucket is still valid.
Since pgss->bucket_start_time is not properly initialized in
get_next_wbucket(), IsBucketValid() always returns false, so no entries
are listed in the view.
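A hedged sketch of such a validity check is shown below, assuming
bucket_start_time holds TimestampTz values; the struct layout, array
size, and the PGSM_BUCKET_TIME stand-in are illustrative, only the
identifiers quoted above come from the commit.

#include "postgres.h"
#include "utils/timestamp.h"

#define PGSM_BUCKET_TIME 60         /* seconds; stand-in for the GUC */
#define MAX_BUCKETS      10         /* illustrative */

typedef struct pgsmSharedState      /* illustrative slice of shared state */
{
    TimestampTz bucket_start_time[MAX_BUCKETS];
} pgsmSharedState;

static pgsmSharedState *pgss;       /* attached to shared memory elsewhere */

static bool
IsBucketValid(uint64 bucket_id)
{
    long secs;
    int  microsecs;

    TimestampDifference(pgss->bucket_start_time[bucket_id],
                        GetCurrentTimestamp(),
                        &secs, &microsecs);

    /* Valid only while the bucket is younger than the bucket time.  If
     * bucket_start_time[bucket_id] was never set in get_next_wbucket(),
     * the elapsed time is computed from garbage and every bucket looks
     * expired, so the view shows no entries. */
    return secs < PGSM_BUCKET_TIME;
}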
After fixing the problem with utility statements, this whole block:
do $$
declare
    n integer := 1;
begin
    loop
        PERFORM a, b, c, d FROM t1, t2, t3, t4
            WHERE t1.a = t2.b AND t3.c = t4.d ORDER BY a;
        exit when n = 1000;
        n := n + 1;
    end loop;
end $$;
is only processed once, as those are nested statements. In order to
match the 1000 statements, the GUC pg_stat_monitor.track must be set to
'all', and then back to the default of 'top' when done testing it.
There was a missing increment/decrement of exec_nested_level in the
pgss_ProcessUtility hook. Because of this, some utility statements could
end up being processed more than once, as PostgreSQL may recurse into
this hook for sub-statements or when processing a query string
containing multiple semicolon-separated statements.
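The conventional fix follows the pg_stat_statements pattern of bumping
the nesting level around the call to the next hook, so recursive
invocations are recognized as nested and handled accordingly. The sketch
below uses the PG 14+ ProcessUtility signature (older versions take
slightly different arguments) and is not the extension's exact code.

#include "postgres.h"
#include "nodes/params.h"
#include "tcop/utility.h"

static int exec_nested_level = 0;
static ProcessUtility_hook_type prev_ProcessUtility = NULL;  /* saved in _PG_init() */

static void
pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
                    bool readOnlyTree, ProcessUtilityContext context,
                    ParamListInfo params, QueryEnvironment *queryEnv,
                    DestReceiver *dest, QueryCompletion *qc)
{
    /* Statistics for the top-level statement would be collected when
     * exec_nested_level == 0 (omitted in this sketch). */
    exec_nested_level++;
    PG_TRY();
    {
        if (prev_ProcessUtility)
            prev_ProcessUtility(pstmt, queryString, readOnlyTree,
                                context, params, queryEnv, dest, qc);
        else
            standard_ProcessUtility(pstmt, queryString, readOnlyTree,
                                    context, params, queryEnv, dest, qc);
    }
    PG_FINALLY();
    {
        exec_nested_level--;
    }
    PG_END_TRY();
}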