In short, add wrappers around Postgres' AddWaitEventToSet() and
ModifyWaitEvent().
AddWaitEventToSet()/ModifyWaitEvent*() may throw hard errors. For
example, the underlying socket for a connection may have been closed
by the remote server and this may already be reflected by the OS,
while Citus hasn't had a chance to learn about it yet. In that case,
if the replication factor is > 1, Citus can fail over to other nodes
to execute the query. Even if the replication factor is 1, Citus
can give much nicer errors.
So CitusAddWaitEventSetToSet()/CitusModifyWaitEvent() simply wrap
AddWaitEventToSet()/ModifyWaitEvent() in a PG_TRY/PG_CATCH block
in order to catch any hard errors, and return this information to
the caller.
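Below is a minimal sketch of that wrapper idea for AddWaitEventToSet(), assuming a -1 return value to signal failure (the exact return convention used by Citus may differ):
```c
#include "postgres.h"
#include "storage/latch.h"

/*
 * Sketch of CitusAddWaitEventSetToSet: wrap AddWaitEventToSet() in a
 * PG_TRY/PG_CATCH block so that a hard error (e.g. a socket already
 * closed by the remote server) is reported to the caller instead of
 * aborting the whole operation.
 */
static int
CitusAddWaitEventSetToSet(WaitEventSet *set, uint32 events, pgsocket fd,
						  Latch *latch, void *user_data)
{
	volatile int waitEventSetIndex = -1;

	PG_TRY();
	{
		waitEventSetIndex = AddWaitEventToSet(set, events, fd, latch, user_data);
	}
	PG_CATCH();
	{
		/* swallow the hard error and let the caller decide what to do */
		FlushErrorState();
		waitEventSetIndex = -1;
	}
	PG_END_TRY();

	return waitEventSetIndex;
}
```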
As we use the current user to sync the metadata to the nodes
with #5105 (and many other PRs), there is no reason that
prevents us from using the coordinated transaction for metadata syncing.
This commit also renames a few functions to reflect their actual
implementation.
Before this commit, creating a partition after a DROP column
on the parent (at a position before the distribution key) caused
the partition to have the wrong distribution column.
update_distributed_table_colocation can be called by the relation
owner, and internally it updates pg_dist_partition. With this
commit, update_distributed_table_colocation uses an internal
UDF to access pg_dist_partition.
As a result, this operation can now be done by regular users
on MX.
Re-cost columnar table sequential scan paths
With the changes in this PR, we adjust the cost estimates that Postgres makes for sequential scan paths on columnar tables.
We want to make better decisions even when the columnar custom scan is disabled. There are cases where an index scan is preferable to a sequential scan for heapAM but not for columnarAM.
For this reason, we want to make better decisions about choosing an index scan versus a sequential scan when the columnar custom scan is **disabled**.
So with this PR, we re-estimate the costs for sequential scan paths in a way that is quite similar to what we do for the columnar custom scan.
The idea is that the columnar custom scan uses projection pushdown, so its cost is directly proportional to column selectivity. However, for the sequential scan, we re-estimate the cost considering **all** the columns, since projection pushdown is not supported for a plain sequential scan.
One thing to note here is that we still don't consider chunk group filtering when estimating the cost for the columnar custom scan. For this reason, we calculate the same costs for the sequential scan & the columnar custom scan if the query reads all columns, regardless of the filters in the `where` clause.
To avoid mistakenly choosing the sequential scan in such cases, we still remove non-`IndexPath`s if the columnar custom scan is enabled.
That way, even when we calculate the same cost for the sequential scan and the columnar custom scan, the sequential one is removed anyway, guaranteeing that we choose either the columnar custom scan or an index scan; a rough sketch of this filtering is shown below.
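The filtering mentioned above can be sketched roughly as follows (the function name and exact list handling are illustrative, not the actual Citus code):
```c
#include "postgres.h"
#include "nodes/pathnodes.h"

/*
 * Illustrative sketch: keep only IndexPath entries in rel->pathlist so
 * that, when the columnar custom scan is enabled, the planner ends up
 * choosing either the columnar custom scan or an index scan.
 */
static void
RemoveNonIndexPaths(RelOptInfo *rel)
{
	List *indexPathList = NIL;
	ListCell *pathCell = NULL;

	foreach(pathCell, rel->pathlist)
	{
		Path *path = (Path *) lfirst(pathCell);

		if (IsA(path, IndexPath))
		{
			indexPathList = lappend(indexPathList, path);
		}
	}

	rel->pathlist = indexPathList;
}
```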
Re-cost columnar table index scan paths
With the changes in this PR, we adjust the cost estimate that the indexAM computes for an `IndexPath` when the index is on a columnar table.
This is because the way the indexAM estimates the cost is not appropriate for indexes on columnar tables.
The most basic reason is that the indexAM assumes we only need to read a single page to access a single tuple of the table.
For columnar tables, on the other hand, we read the whole stripe from disk even for a single tuple, regardless of the optimization done in #5058.
Note that we don't simply assign the startup / total costs; we add the cost estimated by us to the cost estimated by the indexAM.
This is because we need to take "the cost due to index data-structure traversal" into account too.
Before explaining the logic that we follow for `IndexPath`, let's first summarize what we were / are doing for `ColumnarCustomScan`:
```math
X <- cost for reading single column of single stripe // 1
cost = X * (number of columns after projection pushdown) // 2
cost = cost * (number of stripes that relation has) // 3
```
The logic that we follow to calculate the additional cost for index scan is as follows:
```math
X <- cost for reading single column of single stripe // same as 1 above
cost = X * (number of columns that relation has) // index scan cannot do projection pushdown, so different than 2 above
cost = cost * (estimated number of stripes that we need to read)
```
where we calculate `estimated number of stripes that we need to read` as follows:
```math
indexCorrelation, indexSelectivity <- calculated by using amcostestimate_function
estimatedReadRows = (relation row count) * indexSelectivity
minEstimateStripeReads = estimatedReadRows / (average stripe row count) // full correlation, we will not do any redundant stripe reads
maxEstimateStripeReads = estimatedReadRows // no correlation, we will read a different stripe for each tuple
complementCorrelation = 1 - abs(indexCorrelation)
estimatedStripeCount = minEstimateStripeReads +
complementCorrelation * (maxEstimateStripeReads - minEstimateStripeReads)
```
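As a self-contained illustration of the stripe-count estimate above (variable names mirror the formula and are not necessarily the exact Citus identifiers):
```c
#include <math.h>

/*
 * Estimate how many stripes an index scan needs to read, interpolating
 * between the fully-correlated case (no redundant stripe reads) and the
 * uncorrelated case (a different stripe read for every tuple).
 */
static double
EstimatedStripeReadCount(double relationRowCount, double averageStripeRowCount,
						 double indexSelectivity, double indexCorrelation)
{
	double estimatedReadRows = relationRowCount * indexSelectivity;

	/* full correlation: no redundant stripe reads */
	double minEstimateStripeReads = estimatedReadRows / averageStripeRowCount;

	/* no correlation: a different stripe read for each tuple */
	double maxEstimateStripeReads = estimatedReadRows;

	double complementCorrelation = 1.0 - fabs(indexCorrelation);

	return minEstimateStripeReads +
		   complementCorrelation * (maxEstimateStripeReads - minEstimateStripeReads);
}
```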
Instead of setting stripeReadState to NULL, call ColumnarResetRead
before re-scanning a columnar table since this function is already
designed for doing the necessary clean up when finishing a stripe
read.
Note that this change shouldn't have a great effect on memory usage
since AdvanceStripe was already doing the clean-up for all the
stripes except the last one.
Previously, we were only using the chunk group reader for sequential scan.
However, to support index scans on columnar tables, we now use the very
same low-level functions for index scan too.
Since those low-level functions were only used for sequential scan, it
was guaranteed that we would never read the same chunk group more than
once, so we were freeing chunk buffers after deserializing them into a
separate buffer.
Now that we use those low-level functions for index scan, we cannot
free chunk buffers since it's possible to read the same chunk group
again, for example:
- read chunk group 1 of stripe 5
- read chunk group 2 of stripe 5
- read chunk group 1 of stripe 5 again
Here, when we decide to read chunk group 1 for a second time,
chunk group 1 is not cached. Plus, before this commit, we were
freeing the chunk buffers for chunk group 1 after the first
read, and then we were getting segfaults or errors from the
low-level decompression APIs.
Keep supported indexes when converting table to columnar.
Previously, as indexes were not supported by columnar tables, we were ignoring
all the indexes & index-based constraints of the table when converting it to a
columnar table.
However, now that we support the `btree` & `hash` indexAMs for columnar tables,
we only ignore indexAMs other than those two.
However, the way we ignore the unsupported indexes is now a bit different
than before.
Previously, we were just _not creating_ any index types after converting the
table to columnar, as we didn't support any of the index types.
Now that we support the `btree` & `hash` indexAMs for columnar tables, we
actually drop the unsupported index types, since re-creating the remaining ones
is easier than adding code that creates only the supported indexes.
* Fix UNION not being pushed down
Postgres optimizes away column fields that are not needed in the output. We
were relying on these fields to understand whether it is safe to push down a
union query.
This fix looks at the parse query, which still has the original column fields,
to detect whether it is safe to push down a union query.
* Add more tests
* Simplify code and make it more robust
* Process varlevelsup > 0 in FindReferencedTableColumn
* Only look for outer vars in union path
* Add more comments
* Remove UNION ALL specific logic for pulling up childvars
The progress monitor wouldn't actually update the size of the shard on
the target node when using "block_writes" as the `shard_transfer_mode`.
The reason for this is that the CREATE TABLE part of the shard creation
would only be committed once all data was moved as well. This caused
our size calculation to always return 0, since the table did not exist
yet in the session that the progress monitor used.
This is fixed by first committing the creation of the table, and only
then starting the actual data copy.
The test output changes slightly. Apparently, splitting this up into two
transactions instead of one increases the table size after the copy by
about 40kB. The additional size used doesn't increase when the
amount of data in the table is larger (it stays ~40kB per shard). So
this small change in test output is not considered an actual problem.
These two options were not included when creating the sequences on the
workers as part of metadata syncing.
The missing `data_type` part of the definition made finding the cause
of #5126 harder than necessary, because of confusing errors.