Mirror of https://github.com/percona/pg_stat_monitor.git
PG-225: Fix deadlock in pgss_store.
The deadlock scenario is described below:
1. pgss_store is called; it acquires the lock pgss->lock.
2. An error occurs, usually out of memory, while accessing the internal hash tables used to store data in pgss_store_query_info and pgss_get_entry.
3. elog() is called to report the out-of-memory error.
4. Our pgsm_emit_log_hook is invoked; it calls pgss_store_error, which in turn calls pgss_store.
5. pgss_store tries to acquire the already-held lock pgss->lock, and a deadlock occurs.

To fix the problem, this commit makes two modifications worth mentioning:
1. We now pass the HASH_ENTER_NULL flag to hash_search instead of HASH_ENTER. As can be read in the PostgreSQL sources, with HASH_ENTER_NULL the function does not report an error when out of memory; it simply returns NULL, so we can handle the error ourselves.
2. In pgss_store, if an error happens after pgss->lock is acquired, we only set a flag; then, after releasing the lock, we check whether the flag is set and report the error accordingly.
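To illustrate the two modifications, here is a minimal sketch of the pattern, not the actual pgss_store implementation: the function name pgss_store_sketch, the entry-update placeholder and the WARNING message are invented for the example, while pgss->lock, hash_search, HASH_ENTER_NULL, LWLockAcquire/LWLockRelease and elog are the real symbols referred to above.

#include "postgres.h"
#include "storage/lwlock.h"
#include "utils/hsearch.h"

/* pgssSharedState, pgssEntry and pgssHashKey are pg_stat_monitor types. */

/*
 * Sketch only: remember an out-of-memory failure while pgss->lock is held
 * and call elog() only after the lock is released, so that pgsm_emit_log_hook
 * (which eventually calls pgss_store again) cannot try to re-acquire a lock
 * we still hold.
 */
static void
pgss_store_sketch(pgssSharedState *pgss, HTAB *pgss_hash, pgssHashKey *key)
{
	pgssEntry  *entry;
	bool		found;
	bool		out_of_memory = false;

	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);

	/* HASH_ENTER_NULL returns NULL on out-of-memory instead of erroring. */
	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER_NULL, &found);
	if (entry == NULL)
		out_of_memory = true;	/* do not elog() while the lock is held */
	else
	{
		/* ... update the entry as usual ... */
	}

	LWLockRelease(pgss->lock);

	/* Safe to report now: the log hook may re-enter pgss_store. */
	if (out_of_memory)
		elog(WARNING, "pg_stat_monitor: out of shared memory");
}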
@@ -147,7 +147,7 @@ hash_entry_alloc(pgssSharedState *pgss, pgssHashKey *key,int encoding)
 		return NULL;
 	}
 	/* Find or create an entry with desired hash code */
-	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
+	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER_NULL, &found);
 	if (!found)
 	{
 		pgss->bucket_entry[pgss->current_wbucket]++;
@@ -285,7 +285,7 @@ hash_create_query_entry(uint64 bucket_id, uint64 queryid, uint64 dbid, uint64 us
 	key.ip = ip;
 	key.appid = appid;
 
-	entry = (pgssQueryEntry *) hash_search(pgss_query_hash, &key, HASH_ENTER, &found);
+	entry = (pgssQueryEntry *) hash_search(pgss_query_hash, &key, HASH_ENTER_NULL, &found);
 	return entry;
 }
 
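One consequence of the flag change, sketched below under the assumption that callers follow the pattern from the commit message (the out_of_memory flag, the encoding argument name and the call site are illustrative, not the actual pg_stat_monitor code): hash_entry_alloc() and hash_create_query_entry() can now return NULL on out-of-memory, so their callers must turn that into a deferred error instead of relying on hash_search() to report it.

	/* Illustrative caller-side check; the real call site may differ.
	 * A NULL return now means "out of shared memory". */
	entry = hash_entry_alloc(pgss, &key, encoding);
	if (entry == NULL)
		out_of_memory = true;	/* elog() only after LWLockRelease(pgss->lock) */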