DESCRIPTION: Fix a segfault caused by use after free in ConnectionPlacementHash
Fix a segfault caused by retaining data in any of the hashmaps making up the Placement Connection Management.
We have seen production systems segfault due to random data referenced from ConnectionPlacementHash.
On investigation we found that the backends that segfaulted had hit OOM errors shortly before the segfault.
It turned out there are at least 15 places where an allocation can fail with OOM in a way that causes ConnectionPlacementHash to retain pointers into memory contexts that are subsequently freed. Reproducing such a failure reproduces the segfault we have observed in production.
Conditions for these allocations are:
- allocated after first call to `AssociatePlacementWithShard`: https://github.com/citusdata/citus/blob/v10.0.3/src/backend/distributed/connection/placement_connection.c#L880-L881
- allocated before `StartNodeUserDatabaseConnection`: https://github.com/citusdata/citus/blob/v10.0.3/src/backend/distributed/connection/connection_management.c#L291
In a primary key lookup on a reference table alone, there are at least 15 points of memory allocation (each of which could fail) between the call sites of both; this is exactly the path where we have seen an OOM cause a segfault moments later.
Instead of leaving any references in ConnectionPlacementHash, ConnectionShardHash and ColocatedPlacementsHash that could retain pointers freed when the TopTransactionContext is reset, we clear all these hashes regardless of the state of CurrentCoordinatedTransactionState.
The downside is that on any transaction abort we now iterate through 4 hashmaps and clear their contents. Given that they are either already empty, which makes for a quick iteration, or non-empty, which would cause segfaults in subsequent executions, this overhead seems reasonable.
A better solution would be to move the creation of these hashmaps so that they live in the TopTransactionContext themselves, assuming their contents never outlive a transaction. That needs more investigation and is an involved refactor, hence fixing this quickly here.
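As a rough illustration of the clearing step, here is a minimal sketch using Postgres' dynahash API (the helper name is hypothetical; the real fix clears the hashes named above at transaction end):

```c
#include "postgres.h"
#include "utils/hsearch.h"

/*
 * Hypothetical helper: remove every entry from a dynahash so that no
 * entry keeps pointing into a memory context that gets reset when the
 * transaction ends. dynahash explicitly allows deleting the entry most
 * recently returned by hash_seq_search(), so this pattern is safe.
 */
static void
ClearHashEntries(HTAB *hash)
{
	HASH_SEQ_STATUS status;
	void *entry = NULL;

	hash_seq_init(&status, hash);
	while ((entry = hash_seq_search(&status)) != NULL)
	{
		/* by dynahash convention, the key is at the start of the entry */
		hash_search(hash, entry, HASH_REMOVE, NULL);
	}
}
```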
All the callers except columnar_relation_copy_for_cluster were already
switching to the right memory context when creating ColumnarReadState.
With this commit, we embed that logic into init_columnar_read_state
to avoid further such bugs.
That way, columnar_relation_copy_for_cluster starts using the right
memory context too.
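The underlying pattern is the usual Postgres memory context switch; here is a minimal sketch with an illustrative struct and signature rather than the exact Citus ones:

```c
#include "postgres.h"
#include "utils/memutils.h"

/* stand-in for the real ColumnarReadState struct */
typedef struct ColumnarReadState
{
	/* stripe iterators, projected column list, ... */
	int currentStripe;
} ColumnarReadState;

/*
 * Allocate the read state in the caller-provided context, so that its
 * lifetime no longer depends on whichever context the caller happens to
 * be in when calling us; this is what columnar_relation_copy_for_cluster
 * was getting wrong.
 */
static ColumnarReadState *
init_columnar_read_state_sketch(MemoryContext scanContext)
{
	MemoryContext oldContext = MemoryContextSwitchTo(scanContext);
	ColumnarReadState *readState = palloc0(sizeof(ColumnarReadState));

	/* ... build stripe metadata, iterators, etc. in scanContext ... */

	MemoryContextSwitchTo(oldContext);
	return readState;
}
```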
- Add support for CREATE INDEX ... ON ONLY: Before this commit we were not sending the "ONLY" option to the worker nodes at all. With this commit, the "ONLY" parameter is sent to the worker nodes when necessary. (#4938)
- Add support for ALTER INDEX ... ATTACH PARTITION: Attach child_index to parent_index by creating the same inheritance on the shard level in addition to the table level. (#4980)
* Synchronize hasmetadata flag on MX workers
* Switch to sequential execution
* Add test
* Use SetWorkerColumn
* Add test for stop_sync
* Remove usage of UpdateHasmetadataOnWorkersWithMetadata
* Remove MarkNodeMetadataSynced
* Fix test for metadatasynced
* Remove MarkNodeMetadataSynced
* Style
* Remove MarkNodeHasMetadata
* Remove UpdateDistNodeBoolAttr
* Refactor SetWorkerColumn
* Use SetWorkerColumnLocalOnly when setting up dependencies
* Use SetWorkerColumnLocalOnly in TriggerSyncMetadataToPrimaryNodes
* Style
* Make update command generator functions static
* Set metadatasynced before syncing
* Call SetWorkerColumn only if the sync is successful
* Try to sync all nodes
* Fix indexno
* Update metadatasynced locally first
* Break if a node fails to sync metadata
* Make sending worker commands optional
* Style & Rebase
* Add raiseOnError param to SetWorkerColumn
* Style
* Set metadatasynced for all metadata nodes
* Style
* Introduce SetWorkerColumnOptional
* Polish
* Style
* Don't send set command to non-synced metadata nodes
* Style
* Polish
* Add test for stop_sync
* Add test for shouldhaveshards
* Add test for isactive flag
* Sort by placementid in the function verify_metadata
* Cover edge cases for failing nodes
* Add comments
* Add nodeport to isactive test
* Add warning if metadata out of sync
* Update warning message
In short, add wrappers around Postgres' AddWaitEventToSet() and
ModifyWaitEvent().
AddWaitEventToSet()/ModifyWaitEvent() may throw hard errors. For
example, the underlying socket for a connection may have been closed
by the remote server and already reflected by the OS, while Citus
hasn't had a chance to learn this yet. In that case, if the
replication factor is >1, Citus can fail over to other nodes for
executing the query. Even if the replication factor is 1, Citus can
give much nicer errors.
So CitusAddWaitEventSetToSet()/CitusModifyWaitEvent() simply wrap
AddWaitEventToSet()/ModifyWaitEvent() in a PG_TRY/PG_CATCH block in
order to catch any hard errors, and return this information to the
caller.
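A minimal sketch of the wrapper idea (the exact Citus signature may differ, and the sentinel value is illustrative):

```c
#include "postgres.h"
#include "storage/latch.h"

/* illustrative sentinel meaning "could not add the wait event" */
#define WAIT_EVENT_SET_INDEX_FAILED -1

/*
 * Wrap AddWaitEventToSet() in PG_TRY/PG_CATCH so that a hard error
 * (e.g. a socket already closed by the remote server) is reported back
 * to the caller, which can then fail over to another placement or
 * produce a nicer error message.
 */
static int
CitusAddWaitEventSetToSetSketch(WaitEventSet *set, uint32 events, pgsocket fd,
								Latch *latch, void *user_data)
{
	volatile int waitEventSetIndex = WAIT_EVENT_SET_INDEX_FAILED;
	MemoryContext savedContext = CurrentMemoryContext;

	PG_TRY();
	{
		waitEventSetIndex = AddWaitEventToSet(set, events, fd, latch, user_data);
	}
	PG_CATCH();
	{
		/* leave ErrorContext and discard the error; caller handles failure */
		MemoryContextSwitchTo(savedContext);
		FlushErrorState();
	}
	PG_END_TRY();

	return waitEventSetIndex;
}
```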
As we use the current user to sync the metadata to the nodes
with #5105 (and many other PRs), there is no reason that
prevents us from using the coordinated transaction for metadata
syncing.
This commit also renames a few functions to reflect their actual
implementation.
Before this commit, creating a partition after a DROP column
on the parent (when the dropped column's position was before the
distribution key) was leading the partition to have the wrong
distribution column.
update_distributed_table_colocation can be called by the relation
owner, and internally it updates pg_dist_partition. With this
commit, update_distributed_table_colocation uses an internal
UDF to access pg_dist_partition.
As a result, this operation can now be done by regular users
on MX.
Re-cost columnar table sequential scan paths
With the changes in this PR, we adjust the cost estimates done by Postgres for sequential scan paths on columnar tables.
We want to make better decisions when columnar custom scan is disabled too: there are cases where an index scan is preferable over a sequential scan for heapAM but not for columnarAM.
For this reason, we want to make better decisions about whether to choose an index scan or a sequential scan when columnar custom scan is **disabled**.
So with this PR, we re-estimate costs for sequential scan paths in a way that is quite similar to what we do for columnar custom scan.
The idea is that columnar custom scan uses projection pushdown, so its cost is directly proportional to column selectivity. However, for sequential scan, we re-estimate the cost considering **all** the columns, since projection pushdown is not supported for plain sequential scan.
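Concretely, using the same notation as the index scan formulas further below, the re-estimated sequential scan cost looks like:

```math
X <- cost for reading single column of single stripe
cost = X * (number of columns that relation has) // all columns: no projection pushdown
cost = cost * (number of stripes that relation has)
```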
One thing to note here is that we still don't consider chunk group filtering when estimating the cost for columnar custom scan. For this reason, we calculate the same costs for sequential scan and columnar custom scan if the query reads all columns, regardless of the filters in the `where` clause.
To avoid mistakenly choosing sequential scan in such cases, we still remove non-`IndexPath`s when columnar custom scan is enabled.
That way, even when we calculate the same cost for sequential scan and columnar custom scan, we remove the sequential one anyway and guarantee that we choose either columnar custom scan or index scan.
Re-cost columnar table index scan paths
With the changes in this PR, we adjust the cost estimate done by indexAM for an `IndexPath` when the index is on a columnar table.
This is because the way indexAM estimates the cost is not appropriate for indexes on columnar tables.
The most basic reason is that indexAM assumes we only need to read a single page to access a single tuple of the table.
On the other hand, for columnar tables, we read a whole stripe from disk even for a single tuple, regardless of the optimization done in #5058.
Note that we don't simply assign startup / total costs; instead, we add the cost estimated by us to the cost estimated by indexAM.
This is because we need to take "the cost due to index data-structure traversal" into account too.
Before explaining the logic that we follow for `IndexPath`, let's first summarize what we were / are doing for `ColumnarCustomScan`:
```math
X <- cost for reading single column of single stripe // 1
cost = X * (number of columns after projection pushdown) // 2
cost = cost * (number of stripes that relation has) // 3
```
The logic that we follow to calculate the additional cost for index scan is as follows:
```math
X <- cost for reading single column of single stripe // same as 1 above
cost = X * (number of columns that relation has) // index scan cannot do projection pushdown, so different than 2 above
cost = cost * (estimated number of stripes that we need to read)
```
where we calculate `estimated number of stripes that we need to read` as follows:
```math
indexCorrelation, indexSelectivity <- calculated by using amcostestimate_function
estimatedReadRows = (relation row count) * indexSelectivity
minEstimateStripeReads = estimatedReadRows / (average stripe row count) // full correlation, we will not do any redundant stripe reads
maxEstimateStripeReads = estimatedReadRows // no correlation, we will read a different stripe for each tuple
complementCorrelation = 1 - abs(indexCorrelation)
estimatedStripeCount = minEstimateStripeReads +
                       complementCorrelation * (maxEstimateStripeReads - minEstimateStripeReads)
```
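For example, plugging in hypothetical numbers (relation row count = 1,000,000, indexSelectivity = 0.01, average stripe row count = 150,000, indexCorrelation = 0.9), the interpolation gives:

```math
estimatedReadRows = 1,000,000 * 0.01 = 10,000
minEstimateStripeReads = 10,000 / 150,000 ≈ 0.07 // almost free if fully correlated
maxEstimateStripeReads = 10,000 // one stripe read per tuple if uncorrelated
complementCorrelation = 1 - abs(0.9) = 0.1
estimatedStripeCount ≈ 0.07 + 0.1 * (10,000 - 0.07) ≈ 1,000
```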