The deadlock scenario is described below:
1. pgss_store is called and acquires the pgss->lock lock.
2. An error occurs, typically out of memory, while accessing the internal
hash tables used to store data, in the functions pgss_store_query_info
and pgss_get_entry.
3. elog() is called to report the out of memory error.
4. Our pgsm_emit_log_hook is invoked; it calls pgss_store_error, which in
turn calls pgss_store.
5. pgss_store tries to acquire the already held pgss->lock, and the
deadlock happens.
To fix the problem, this commit makes two modifications worth
mentioning:
1. We now pass the HASH_ENTER_NULL flag to hash_search instead of
HASH_ENTER. As described in the PostgreSQL sources, this prevents the
function from reporting an error when out of memory; with
HASH_ENTER_NULL it simply returns NULL, so we can handle the error
ourselves.
2. In pgss_store, if an error happens after pgss->lock is acquired, we
only set a flag; then, after releasing the lock, we check whether the
flag is set and report the error accordingly, as sketched below.
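A minimal sketch of the deferred error-reporting pattern, assuming
simplified, illustrative types and globals (pgssHashKey, pgssEntry,
pgss, pgss_hash) and an illustrative flag name; it is not the module's
actual code:

    static void
    pgss_store(uint64 queryid)          /* illustrative signature */
    {
        pgssHashKey key = {0};
        pgssEntry  *entry;
        bool        found;
        bool        out_of_memory = false;

        key.queryid = queryid;

        LWLockAcquire(pgss->lock, LW_EXCLUSIVE);

        /* HASH_ENTER_NULL makes hash_search return NULL instead of
         * raising an error when the shared hash table is out of memory. */
        entry = hash_search(pgss_hash, &key, HASH_ENTER_NULL, &found);
        if (entry == NULL)
            out_of_memory = true;
        else
        {
            /* ... update the entry while holding the lock ... */
        }

        LWLockRelease(pgss->lock);

        /* Report only after releasing the lock, so pgsm_emit_log_hook
         * cannot re-enter pgss_store while pgss->lock is still held. */
        if (out_of_memory)
            elog(WARNING, "pg_stat_monitor: out of memory");
    }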
Two objects were leaking memory in this function: the comments
variable of type char *, allocated with palloc0, and the regular
expression object preg.
If the regcomp function failed, the function returned without
releasing the previously allocated comments variable.
If the regexec function failed, the function returned without
releasing either the preg object or the comments variable.
This commit makes two changes. First, it turns the comments buffer in
extract_query_comments into an argument to the function; in pgss_store
we declare the comments variable on the stack, so it is cleaned up
automatically.
The second change moves the regular expression object to global scope,
so the object is compiled only once, during module initialization.
With these two changes we fix the memory leak and avoid
allocating/releasing memory on every call to this function.
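A minimal sketch of the resulting shape, assuming POSIX regex.h and
illustrative names (module_init, the buffer size, and the pattern are
placeholders; the real code may differ):

    #include <regex.h>
    #include <stdio.h>

    static regex_t preg;                 /* compiled once, reused */

    static void
    module_init(void)
    {
        /* Compile the pattern a single time during module load; a real
         * implementation should check the return value of regcomp. */
        regcomp(&preg, "/\\*.*\\*/", REG_EXTENDED);
    }

    static void
    extract_query_comments(const char *query, char *comments, size_t max_len)
    {
        regmatch_t m;

        if (regexec(&preg, query, 1, &m, 0) == 0)
            snprintf(comments, max_len, "%.*s",
                     (int) (m.rm_eo - m.rm_so), query + m.rm_so);
        else
            comments[0] = '\0';
    }

    /* In pgss_store the buffer lives on the stack, so there is nothing
     * for the callee to free:
     *
     *     char comments[512];
     *     extract_query_comments(query, comments, sizeof(comments));
     */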
We redefine the _snprintf macro to use memcpy, which performs better.
We also update the call sites of this macro to add the null terminator
'\0' to the source string length, so that memcpy also copies the null
terminator to the destination string.
We update the _snprintf2 macro to use strlcpy. We don't use memcpy
here because at its call site, pgss_update_entry, only the maximum
string length REL_LEN=1000 is given as an upper bound for copying the
relations string vector into the destination counters. Since this data
is a string, we don't need to copy 1k bytes for every entry; with
strlcpy the copy stops as soon as the null terminator '\0' is found in
the source string.
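A hedged sketch of what the redefined macros might look like; the
argument lists and names are assumptions, not the module's exact
definitions:

    /* Call sites pass strlen(src) + 1 so the '\0' is copied as well. */
    #define _snprintf(dst, src, len, maxlen)  memcpy((dst), (src), (len))

    /* strlcpy stops at the source '\0', so a full REL_LEN-sized block is
     * never copied when the string is shorter. */
    #define _snprintf2(dst, src, len, maxlen) strlcpy((dst), (src), (maxlen))

    /* Illustrative call site: */
    /*   _snprintf(dst_buf, src_str, strlen(src_str) + 1, DST_BUF_LEN); */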
These variables can't live in shared state, as the following problem
was taking place:
1. Process 1 calls pgss_ExecutorCheckPerms(), acquires the lock,
updates relations and num_relations, and releases the lock.
2. Process 2 calls pgss_ExecutorCheckPerms(), acquires the lock, and
sets num_relations = 0.
3. Process 1 reads num_relations = 0 in pgss_update_entry; this value
is wrong, as it was overwritten by Process 2.
Even if we acquired the lock in pgss_update_entry to read the
num_relations and relations variables, Process 1 could still end up
acquiring the lock after Process 2 has overwritten the values, leading
Process 1 to read wrong data.
By defining relations and num_relations as static globals in
pg_stat_monitor.c we take advantage of the fact that each individual
PostgreSQL backend has its own copy of this data, which allows us to
remove the locking in pgss_ExecutorCheckPerms when updating these
variables, improving pg_stat_monitor's overall performance.
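A minimal sketch of the backend-local variables, with an illustrative
list capacity (REL_LEN comes from the description above); since each
PostgreSQL backend is a separate process, these statics are private to
the backend and need no locking:

    #define REL_LST 10                   /* illustrative capacity       */
    #define REL_LEN 1000                 /* per-relation string length  */

    /* Filled in pgss_ExecutorCheckPerms, read in pgss_update_entry,
     * always within the same backend process. */
    static char relations[REL_LST][REL_LEN];
    static int  num_relations;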
Added a new view 'pg_stat_monitor_hook_stats' that provides execution
time statistics for all hooks installed by the module. The fields are
described below:
- hook: The hook function name.
- min_time: The fastest execution time recorded for the given hook.
- max_time: The slowest execution time recorded for the given hook.
- total_time: Total execution time taken by all calls to the hook.
- avg_time: Average execution time of a call to the hook.
- ncalls: Total number of calls to the hook.
- load_comparison: The percentage of time taken by an individual hook
relative to all other hooks.
To enable benchmarking, the code must be compiled with the -DBENCHMARK
flag. This causes each hook function to be replaced by a function with
the same name plus a '_benchmark' suffix, e.g. hook_function_benchmark.
The hook_function_benchmark calls the original function, measures how
long it took to execute, and then updates the statistics for that hook.
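A hedged sketch of the wrapper pattern, using an ExecutorEnd-style hook
as an example and PostgreSQL's instr_time.h helpers; the hook id and
the update_hook_stats helper are illustrative, not the module's actual
API:

    static void
    pgss_ExecutorEnd_benchmark(QueryDesc *queryDesc)
    {
        instr_time start;
        instr_time duration;

        INSTR_TIME_SET_CURRENT(start);
        pgss_ExecutorEnd(queryDesc);        /* call the original hook */
        INSTR_TIME_SET_CURRENT(duration);
        INSTR_TIME_SUBTRACT(duration, start);

        /* Accumulate min/max/total/ncalls for this hook (illustrative). */
        update_hook_stats(HOOK_EXECUTOR_END,
                          INSTR_TIME_GET_MICROSEC(duration));
    }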
Changed the default value of pgsm_bucket_time from 300 to 60. With
this default, PMM users no longer need to adjust the value, and no
PostgreSQL server restart is required for this purpose.
The query_txt variable is allocated at the beginning of the
pg_stat_monitor_internal() function and released at the end, but an
extra malloc call to allocate it had been added within an internal
loop in the function, allocating memory on every loop iteration
without releasing it inside the loop.
The query_txt variable can be reused inside the loop body, so this
commit removes the redundant declaration of query_txt from inside the
loop, which also fixes the leak.
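A hedged sketch of the resulting loop shape (the buffer size, helper
name, and surrounding variables are illustrative):

    char *query_txt = malloc(MAX_QUERY_BUF);    /* allocated once        */

    for (i = 0; i < num_entries; i++)
    {
        /* Previously a second malloc() lived here and leaked on every
         * iteration; the same buffer is now simply reused. */
        read_query_text(&entries[i], query_txt, MAX_QUERY_BUF);
        /* ... build the result tuple for this entry ... */
    }

    free(query_txt);                            /* released once at end  */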
Added 'Coveralls' (https://coveralls.io) code coverage to the
pg_stat_monitor repo so that in the future we don't have to perform
this activity manually. It will also make sure that the detailed code
coverage report link and status are readily available on the repo's
GitHub main page for each check-in (PR & push), ultimately helping
development velocity.
Add the application name to the key used to identify queries in the
hash table; this allows different applications to have separate
entries in the pg_stat_monitor view even if they issued the same
query.
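An illustrative sketch of the idea; the field names and layout are
assumptions, not the module's actual key definition:

    typedef struct pgssHashKey
    {
        uint64 queryid;                       /* query identifier */
        Oid    userid;                        /* user OID         */
        Oid    dbid;                          /* database OID     */
        char   application_name[NAMEDATALEN]; /* new: same query from
                                               * different applications now
                                               * hashes to different entries */
    } pgssHashKey;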
If pg_stat_monitor is loaded after pg_stat_statements, it ends up
calling the standard_planner function twice in the pgss_planner_hook()
function. This triggers an assertion failure in PostgreSQL, because
standard_planner expects an untouched Query* object, and the first
call to standard_planner() made by pg_stat_statements modifies the
object.
To address the problem, we avoid calling the standard_planner function
twice in pg_stat_monitor: if a previous handler is installed for the
planner_hook hook, we assume that this previous hook has already
called standard_planner and don't call it again.
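A sketch of the guard described above, modeled on the usual
prev_planner_hook pattern and assuming the PostgreSQL 13+ planner_hook
signature; the statistics-gathering part is omitted:

    static PlannedStmt *
    pgss_planner_hook(Query *parse, const char *query_string,
                      int cursorOptions, ParamListInfo boundParams)
    {
        PlannedStmt *result;

        if (prev_planner_hook)
            /* Another extension (e.g. pg_stat_statements) is installed
             * ahead of us and will call standard_planner itself, so we
             * must not call it again. */
            result = prev_planner_hook(parse, query_string,
                                       cursorOptions, boundParams);
        else
            result = standard_planner(parse, query_string,
                                      cursorOptions, boundParams);

        /* ... gather planning statistics here ... */

        return result;
    }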