All the callers except columnar_relation_copy_for_cluster were already
switching to the right memory context when creating a ColumnarReadState.
With this commit, we embed that logic into init_columnar_read_state
to avoid such bugs in the future.
That way, columnar_relation_copy_for_cluster starts using the right
memory context too.
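A minimal sketch of the idea (the parameter list and the ColumnarBeginRead() call are illustrative, not the exact Citus signatures):

```c
#include "postgres.h"
#include "utils/memutils.h"

/*
 * Illustrative sketch: init_columnar_read_state performs the memory
 * context switch itself, so callers such as
 * columnar_relation_copy_for_cluster cannot forget to do it.
 */
static ColumnarReadState *
init_columnar_read_state(Relation relation, TupleDesc tupdesc,
                         List *projectedColumnList, MemoryContext scanContext)
{
    /* allocate the read state in the context that owns the scan ... */
    MemoryContext oldContext = MemoryContextSwitchTo(scanContext);

    ColumnarReadState *readState =
        ColumnarBeginRead(relation, tupdesc, projectedColumnList, NULL);

    /* ... and restore whatever context the caller was in */
    MemoryContextSwitchTo(oldContext);

    return readState;
}
```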
- Add support for CREATE INDEX ... ON ONLY: Before this commit, we were not sending the "ONLY" option to the worker nodes at all. With this commit, the "ONLY" parameter is sent to the worker nodes when necessary. (#4938)
- Add support for ALTER INDEX ... ATTACH PARTITION: Attach child_index to parent_index by creating the same inheritance relationship at the shard level in addition to the table level. (#4980)
* Synchronize hasmetadata flag on mx workers
* Switch to sequential execution
* Add test
* Use SetWorkerColumn
* Add test for stop_sync
* Remove usage of UpdateHasmetadataOnWorkersWithMetadata
* Remove MarkNodeMetadataSynced
* Fix test for metadatasynced
* Remove MarkNodeMetadataSynced
* Style
* Remove MarkNodeHasMetadata
* Remove UpdateDistNodeBoolAttr
* Refactor SetWorkerColumn
* Use SetWorkerColumnLocalOnly when setting up dependencies
* Use SetWorkerColumnLocalOnly in TriggerSyncMetadataToPrimaryNodes
* Style
* Make update command generator functions static
* Set metadatasynced before syncing
* Call SetWorkerColumn only if the sync is successful
* Try to sync all nodes
* Fix indexno
* Update metadatasynced locally first
* Break if a node fails to sync metadata
* Send worker commands optional
* Style & Rebase
* Add raiseOnError param to SetWorkerColumn
* Style
* Set metadatasynced for all metadata nodes
* Style
* Introduce SetWorkerColumnOptional
* Polish
* Style
* Don't send set command to not-yet-synced metadata nodes
* Style
* Polish
* Add test for stop_sync
* Add test for shouldhaveshards
* Add test for isactive flag
* Sort by placementid in the function verify_metadata
* Cover edge cases for failing nodes
* Add comments
* Add nodeport to isactive test
* Add warning if metadata out of sync
* Update warning message
In short, add wrappers around Postgres' AddWaitEventToSet() and
ModifyWaitEvent().
AddWaitEventToSet()/ModifyWaitEvent*() may throw hard errors, for
example when the underlying socket for a connection has been closed by
the remote server and the OS already reflects that, but Citus has not
yet had a chance to learn about it. In that case, if the replication
factor is >1, Citus can fail over to other nodes to execute the query.
Even with replication factor = 1, Citus can give much nicer errors.
So CitusAddWaitEventSetToSet()/CitusModifyWaitEvent() simply wrap
AddWaitEventToSet()/ModifyWaitEvent() in a PG_TRY/PG_CATCH block in
order to catch any hard errors, and return this information to the
caller.
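A sketch of what such a wrapper can look like (the failure sentinel and the exact shape are illustrative, not necessarily the actual Citus implementation):

```c
#include "postgres.h"
#include "storage/latch.h"

#define WAIT_EVENT_SET_INDEX_FAILED -1    /* illustrative sentinel */

/*
 * Like AddWaitEventToSet(), but instead of throwing a hard error it
 * reports the failure back to the caller, who can then fail over to
 * another node or emit a nicer error.
 */
static int
CitusAddWaitEventSetToSet(WaitEventSet *set, uint32 events, pgsocket fd,
                          Latch *latch, void *user_data)
{
    volatile int waitEventSetIndex = WAIT_EVENT_SET_INDEX_FAILED;
    MemoryContext savedContext = CurrentMemoryContext;

    PG_TRY();
    {
        waitEventSetIndex = AddWaitEventToSet(set, events, fd, latch, user_data);
    }
    PG_CATCH();
    {
        /* swallow the hard error and let the caller decide how to react */
        MemoryContextSwitchTo(savedContext);
        FlushErrorState();
        waitEventSetIndex = WAIT_EVENT_SET_INDEX_FAILED;
    }
    PG_END_TRY();

    return waitEventSetIndex;
}
```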
As we use the current user to sync the metadata to the nodes
with #5105 (and many other PRs), there is no longer anything that
prevents us from using the coordinated transaction for metadata syncing.
This commit also renames a few functions to reflect their actual
implementation.
Before this commit, creating a partition after dropping a column
on the parent (a column positioned before the distribution key) was
causing the partition to end up with the wrong distribution column.
update_distributed_table_colocation can be called by the relation
owner, and internally it updates pg_dist_partition. With this
commit, update_distributed_table_colocation uses an internal
UDF to access pg_dist_partition.
As a result, this operation can now be done by regular users
on MX.
Re-cost columnar table sequential scan paths
With the changes in this PR, we adjust the cost estimates that Postgres makes for sequential scan paths on columnar tables.
We want to make better decisions when the columnar custom scan is disabled too: there are cases where an index scan is preferable to a sequential scan for heapAM but not for columnarAM.
For this reason, we want to choose more sensibly between index scan and sequential scan when the columnar custom scan is **disabled**.
So with this PR, we re-estimate the costs of sequential scan paths in a way that is quite similar to what we do for the columnar custom scan.
The idea is that the columnar custom scan uses projection pushdown, so its cost is directly proportional to column selectivity. For sequential scan, however, we re-estimate the cost considering **all** the columns, since projection pushdown is not supported for a plain sequential scan.
One thing to note here is that we still don't consider chunk group filtering when estimating the cost of the columnar custom scan. For this reason, we calculate the same cost for sequential scan & columnar custom scan if the query reads all columns, regardless of the filters in the `WHERE` clause.
To avoid mistakenly choosing a sequential scan in such cases, we still remove non-`IndexPath`s when the columnar custom scan is enabled.
That way, even when we calculate the same cost for sequential scan and columnar custom scan, the sequential one is removed anyway, guaranteeing that we choose either the columnar custom scan or an index scan.
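As a hedged sketch of the re-estimation described above (names and parameters are illustrative, not the exact Citus code), the sequential scan cost boils down to charging the per-stripe per-column read cost for every column of every stripe:

```c
#include "postgres.h"
#include "nodes/nodes.h"    /* for Cost */

/*
 * Illustrative sketch: a plain sequential scan cannot do projection
 * pushdown, so unlike the columnar custom scan it pays the per-stripe
 * per-column read cost for all columns of the relation.
 */
static Cost
ColumnarSeqScanCostSketch(Cost perStripeColumnReadCost,
                          int relationColumnCount, uint64 stripeCount)
{
    return perStripeColumnReadCost * relationColumnCount * stripeCount;
}
```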
Re-cost columnar table index scan paths
With the changes in this PR, we adjust the cost estimate made by indexAM for an `IndexPath` when the index is on a columnar table.
This is because the way indexAM estimates the cost is not appropriate for indexes on columnar tables.
The most basic reason is that indexAM assumes we only need to read a single page to access a single tuple of the table.
For columnar tables, on the other hand, we read a whole stripe from disk even for a single tuple, regardless of the optimization done in #5058.
Note that we don't simply overwrite the startup / total costs; we add our own cost estimate to the cost estimated by indexAM.
This is because we also need to take the cost of traversing the index data structure into account.
Before explaining the logic that we follow for `IndexPath`, let's first summarize what we were / are doing for `ColumnarCustomScan`:
```math
X <- cost for reading single column of single stripe // 1
cost = X * (number of columns after projection pushdown) // 2
cost = cost * (number of stripes that relation has) // 3
```
The logic that we follow to calculate the additional cost for index scan is as follows:
```math
X <- cost for reading single column of single stripe // same as 1 above
cost = X * (number of columns that relation has) // index scan cannot do projection pushdown, so different than 2 above
cost = cost * (estimated number of stripes that we need to read)
```
where, we calculate `estimated number of stripes that we need to read` as follows:
```math
indexCorrelation, indexSelectivity <- calculated by using amcostestimate_function
estimatedReadRows = (relation row count) * indexSelectivity
minEstimateStripeReads = estimatedReadRows / (average stripe row count) // full correlation, we will not do any redundant stripe reads
maxEstimateStripeReads = estimatedReadRows // no correlation, we will read a different stripe for each tuple
complementCorrelation = 1 - abs(indexCorrelation)
estimatedStripeCount = minEstimateStripeReads +
complementCorrelation * (maxEstimateStripeReads - minEstimateStripeReads)
```
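As a hedged sketch of that interpolation (variable names mirror the pseudocode; the function name is illustrative), for example 1,000,000 rows with ~150,000 rows per stripe, selectivity 0.01 and correlation 0.5 yields roughly 5,000 estimated stripe reads:

```c
#include <math.h>

/*
 * Illustrative sketch of the formula above: with full correlation
 * (|indexCorrelation| = 1) stripe reads approach estimatedReadRows divided
 * by the stripe size; with no correlation every matching row may land in a
 * different stripe, so we linearly interpolate between the two extremes.
 */
static double
EstimatedStripeReadCount(double relationRowCount, double averageStripeRowCount,
                         double indexSelectivity, double indexCorrelation)
{
    double estimatedReadRows = relationRowCount * indexSelectivity;

    double minEstimateStripeReads = estimatedReadRows / averageStripeRowCount;
    double maxEstimateStripeReads = estimatedReadRows;

    double complementCorrelation = 1.0 - fabs(indexCorrelation);

    return minEstimateStripeReads +
           complementCorrelation *
           (maxEstimateStripeReads - minEstimateStripeReads);
}
```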
Instead of setting stripeReadState to NULL, call ColumnarResetRead
before re-scanning a columnar table, since that function is already
designed to do the necessary cleanup when finishing a stripe read.
Note that this change shouldn't have a great effect on memory usage,
since AdvanceStripe was already doing the cleanup for all stripes
except the last one.
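A minimal sketch of the rescan path under these assumptions (the callback name and the state layout are illustrative):

```c
/*
 * Illustrative sketch: on rescan, reuse ColumnarResetRead(), the same
 * cleanup used when finishing a stripe read, instead of just nulling out
 * stripeReadState and leaking the per-stripe resources.
 */
static void
columnar_rescan_sketch(ColumnarReadState *readState)
{
    /* previously (roughly): readState->stripeReadState = NULL; */
    ColumnarResetRead(readState);
}
```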
Previously, we were only using the chunk group reader for sequential scan.
However, to support index scans on columnar tables, we now use the very
same low-level functions for index scan too.
Since those low-level functions were only used for sequential scan, it
was guaranteed that we would never read the same chunk group more than
once, so we were freeing chunk buffers after deserializing them into a
separate buffer.
Now that we use those low level functions for index scan, we cannot
free chunk buffers since it's possible to read the same chunk group
again, such that:
- read chunk group 1 of stripe 5
- read chunk group 2 of stripe 5
- read chunk group 1 of stripe 5 again
Here, when we decide to read chunk group 1 a second time, chunk group 1
is not cached. And before this commit, we had already freed the chunk
buffers for chunk group 1 after the first read, so we would get a
segfault or errors from the low-level decompression APIs.
Keep supported indexes when converting table to columnar.
Previously, as indexes were not supported by columnar tables, we were ignoring
all the indexes & index-based constraints of a table when converting it to a
columnar table.
Now that we support the `btree` & `hash` indexAMs for columnar tables, we only
ignore the indexAMs other than those two.
However, the way we ignore the unsupported indexes is now a bit different
from before.
Previously we were just _not creating_ any indexes after converting a table
to columnar, since we didn't support any of the index types.
Now that we support the `btree` & `hash` indexAMs for columnar tables, we
really drop the unsupported index types, since re-creating the remaining ones
is easier than adding code that creates only the supported indexes.