- Drop PRIMARY KEY for Citus 10 compatibility
- Drop columnar for PG 12
- Do not start/stop metadata sync as stop is not implemented in 10.1
- PG 11 parallel query changes explain outputs
Before this commit, creating a partition after a DROP COLUMN
on the parent (at a position before the distribution key) caused
the partition to have the wrong distribution column.
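For illustration, a minimal C sketch of the idea behind the fix (the
helper name is hypothetical, not Citus's actual code): resolve the
parent's distribution column by name, so that dropped columns on the
parent cannot shift the attribute number used for the partition.

    #include "postgres.h"
    #include "utils/lsyscache.h"

    /*
     * Hypothetical sketch: map the parent's distribution column to the
     * partition by name, since dropped columns still occupy attribute
     * numbers and can make the raw attnums differ between the two.
     */
    static AttrNumber
    PartitionDistributionAttrSketch(Oid parentRelationId,
                                    AttrNumber parentAttrNumber,
                                    Oid partitionRelationId)
    {
        /* translate the parent's attribute number to a column name */
        char *columnName = get_attname(parentRelationId, parentAttrNumber,
                                       false);

        /* look the name up on the partition instead of reusing the attnum */
        return get_attnum(partitionRelationId, columnName);
    }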
(cherry picked from commit 32124efd83)
This happens only when we have a "<" or "<=" filter on the
distribution column of a range-distributed table and that filter
falls in between two shards.
When the filter falls in between two shards:
If the filter is ">" or ">=", then UpperShardBoundary was
returning "upperBoundIndex - 1", where upperBoundIndex is the
exclusive shard index used during the binary search.
This is expected, since upperBoundIndex is an exclusive
index.
If the filter is "<" or "<=", then LowerShardBoundary was
returning "lowerBoundIndex + 1", where lowerBoundIndex is the
inclusive shard index used during the binary search.
However, since lowerBoundIndex is an inclusive index, we
should just return lowerBoundIndex instead of doing "+ 1".
Before this commit, we were missing the leftmost shard in
such queries.
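To make the off-by-one concrete, here is a simplified C sketch of the
lower-boundary search (plain int ranges instead of Datums and a
comparator function; this is not the actual Citus function):

    typedef struct ShardRangeSketch
    {
        int minValue;   /* inclusive */
        int maxValue;   /* inclusive */
    } ShardRangeSketch;

    /*
     * lowerBoundIndex stays inclusive and upperBoundIndex stays
     * exclusive throughout the search, so when the value falls into a
     * gap between two shards the lower boundary must be returned
     * as-is; the old "+ 1" skipped the leftmost matching shard.
     */
    static int
    LowerShardBoundarySketch(int value, ShardRangeSketch *shards,
                             int shardCount)
    {
        int lowerBoundIndex = 0;            /* inclusive */
        int upperBoundIndex = shardCount;   /* exclusive */

        while (lowerBoundIndex < upperBoundIndex)
        {
            int middleIndex =
                lowerBoundIndex + (upperBoundIndex - lowerBoundIndex) / 2;

            if (value > shards[middleIndex].maxValue)
            {
                lowerBoundIndex = middleIndex + 1;  /* right of this shard */
            }
            else if (value < shards[middleIndex].minValue)
            {
                upperBoundIndex = middleIndex;      /* left of this shard */
            }
            else
            {
                return middleIndex;     /* value lies inside this shard */
            }
        }

        /*
         * The value fell into a gap: lowerBoundIndex already points at
         * the first shard entirely greater than the value, so return
         * it without the erroneous "+ 1".
         */
        return lowerBoundIndex;
    }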
* Remove useless conditional branches
The branch we deleted from UpperShardBoundary was obviously useless.
The other one, in LowerShardBoundary, became useless after we removed
the "+ 1" there.
This is further evidence of what this PR fixes, and how.
* Improve comments and add more
* Add some tests for upper bound calculation too
(cherry picked from commit b118d4188e)
* Also fix a debug message diff for 9.5
/*
 * Colocated intermediate results are just files and are not required to
 * use the same connections as their co-located shards. So, we are free
 * to use any connection we can get.
 *
 * Also, the current connection re-use logic does not know how to handle
 * intermediate results, as intermediate results always truncate the
 * existing files. That's why we use one connection per intermediate
 * result.
 */
(cherry picked from commit 5d5a357487)
Logical replication status can take wal_receiver_status_interval
seconds to get updated. The default is 10s, which means tests that
use logical replication can take a long time to finish. We reduce
it to 1 second to speed these tests up.
The logical replication apply launcher launches workers every
wal_retrieve_retry_interval, so if we run many consecutive shard
moves with logical replication, they will be throttled by this
parameter. The default is 5s; we reduce it to 1s so the tests
finish faster.
(cherry picked from commit 0e0fd6599a)
Before this commit, we let AdaptiveExecutorPreExecutorRun()
take effect multiple times, on every FETCH on a cursor.
That does not affect the correctness of the query results,
but it adds significant overhead.
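As a rough illustration (the field and function names below are
hypothetical, not the actual Citus fix), the pre-executor work can be
guarded by a flag in the scan state so later FETCHes skip it:

    #include <stdbool.h>

    /* hypothetical subset of the executor's scan state */
    typedef struct CitusScanStateSketch
    {
        bool finishedPreExecutorRun;    /* set after the first run */
    } CitusScanStateSketch;

    static void
    PreExecutorRunOnceSketch(CitusScanStateSketch *scanState)
    {
        if (scanState->finishedPreExecutorRun)
        {
            /* a later FETCH on the same cursor: work is already done */
            return;
        }

        /* ... execute subplans / prepare intermediate results here ... */

        scanState->finishedPreExecutorRun = true;
    }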
(cherry picked from commit c433c66f2b)
The CitusTableTypeIdList() function iterates over all the entries of
pg_dist_partition and loads all the metadata into the cache. This can be
quite memory intensive, especially when there are lots of distributed
tables.
When partitioned tables are used, it is common to have many distributed
tables, given that each partition also becomes a distributed table.
CitusTableTypeIdList() is used on every CREATE TABLE .. PARTITION OF ..
command as well. This means that any time a partition is created, Citus
loads all the metadata into the cache. Note that Citus typically only
loads the accessed table's metadata into the cache.
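A minimal C sketch of the direction such a fix can take (assuming a
caller that passes pg_dist_partition's relation OID; this is not the
actual patch): scan the catalog tuples directly instead of building a
full metadata cache entry per table.

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/table.h"
    #include "utils/rel.h"

    static void
    ScanDistPartitionSketch(Oid distPartitionRelationId)
    {
        Relation rel = table_open(distPartitionRelationId, AccessShareLock);
        SysScanDesc scan = systable_beginscan(rel, InvalidOid, false,
                                              NULL, 0, NULL);
        HeapTuple tuple = NULL;

        while (HeapTupleIsValid(tuple = systable_getnext(scan)))
        {
            /*
             * Read partmethod / repmodel from the raw tuple (e.g. via
             * heap_getattr) instead of loading the table's full
             * metadata into the cache.
             */
        }

        systable_endscan(scan);
        table_close(rel, AccessShareLock);
    }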
(cherry picked from commit 7accbff3f6)
Conflicts:
src/test/regress/bin/normalize.sed
When a relation is used in an OUTER JOIN with FALSE filters,
set_rel_pathlist_hook may not be called for the table.
There might be other cases as well, so do not rely on the hook
for classifying the tables.
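A minimal C sketch of classifying relations from the range table itself
rather than from hook invocations (IsCitusTable() is assumed from
Citus's metadata cache; recursion into subqueries is omitted):

    #include "postgres.h"
    #include "nodes/parsenodes.h"
    #include "nodes/pg_list.h"

    extern bool IsCitusTable(Oid relationId);   /* Citus metadata check */

    /*
     * Walk the query's range table instead of relying on
     * set_rel_pathlist_hook being called once per relation.
     */
    static bool
    ContainsCitusTableSketch(Query *query)
    {
        ListCell *rteCell = NULL;

        foreach(rteCell, query->rtable)
        {
            RangeTblEntry *rangeTableEntry =
                (RangeTblEntry *) lfirst(rteCell);

            if (rangeTableEntry->rtekind == RTE_RELATION &&
                IsCitusTable(rangeTableEntry->relid))
            {
                return true;
            }
        }

        return false;
    }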
The aliases that Postgres chooses for partitioned tables in explain
output might change across PG versions, so normalize them and remove
the alternative test output.
* Fix incorrect join related fields
Ruleutils expects to be given the original index of join columns, hence
we should consider the dropped columns while setting the fields in
SetJoinRelatedFieldsCompat (see the sketch after this list).
* Add some more tests for joins
* Move tests to join.sql and create a utility function
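A minimal C sketch of the underlying rule (simplified, not the actual
SetJoinRelatedFieldsCompat change): dropped columns must keep their
slots in the column-name list so join varattnos still index the
original attribute positions.

    #include "postgres.h"
    #include "access/tupdesc.h"
    #include "nodes/pg_list.h"
    #include "nodes/value.h"

    static List *
    ColumnNameListSketch(TupleDesc tupleDesc)
    {
        List *columnNameList = NIL;

        for (int attrIndex = 0; attrIndex < tupleDesc->natts; attrIndex++)
        {
            Form_pg_attribute attr = TupleDescAttr(tupleDesc, attrIndex);

            if (attr->attisdropped)
            {
                /* keep an empty placeholder so later attnums match */
                columnNameList = lappend(columnNameList,
                                         makeString(pstrdup("")));
            }
            else
            {
                columnNameList =
                    lappend(columnNameList,
                            makeString(pstrdup(NameStr(attr->attname))));
            }
        }

        return columnNameList;
    }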
Disallow `ON TRUE` outer joins with reference & distributed tables
when the reference table is the outer relation, by fixing the logic
bug made when calling the `LeftListIsSubset` function.
Also, be more defensive when removing duplicate join restrictions
whose join clause is empty for non-inner joins, as they might still
contain useful information for such joins.
It seems like Postgres can call set_rel_pathlist() for
the same relation multiple times. This breaks the logic
where we assume relationCount equals the number of
entries in relationRestrictionList.
In summary, relationRestrictionList may contain duplicate
entries.
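A minimal C sketch of counting relations robustly in the presence of
such duplicates (the restriction struct here is a hypothetical stand-in
for Citus's RelationRestriction):

    #include "postgres.h"
    #include "nodes/bitmapset.h"
    #include "nodes/pg_list.h"

    /* hypothetical stand-in for the real restriction entry */
    typedef struct RelationRestrictionSketch
    {
        int rteIndex;   /* range-table index of the relation */
    } RelationRestrictionSketch;

    static int
    DistinctRelationCountSketch(List *relationRestrictionList)
    {
        Bitmapset *seenRTEIndexes = NULL;
        ListCell *restrictionCell = NULL;

        foreach(restrictionCell, relationRestrictionList)
        {
            RelationRestrictionSketch *restriction =
                (RelationRestrictionSketch *) lfirst(restrictionCell);

            /* duplicates collapse into the same bitmapset member */
            seenRTEIndexes = bms_add_member(seenRTEIndexes,
                                            restriction->rteIndex);
        }

        return bms_num_members(seenRTEIndexes);
    }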
With this commit, we make sure that local execution adds the
intermediate result size just as distributed execution does. Plus,
it enforces the citus.max_intermediate_result_size value.
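For illustration, a minimal C sketch of the enforcement (the counter
and GUC variable here are hypothetical stand-ins; the real
citus.max_intermediate_result_size is a KB-valued setting where -1
disables the limit). Both the local and the distributed writers would
report their bytes through the same path:

    #include "postgres.h"

    static int64 TotalIntermediateResultSize = 0;     /* bytes so far */
    static int MaxIntermediateResultSizeKB = 1048576; /* GUC stand-in */

    static void
    RecordIntermediateResultBytesSketch(int64 bytesWritten)
    {
        TotalIntermediateResultSize += bytesWritten;

        /* -1 disables the limit; otherwise compare to the KB budget */
        if (MaxIntermediateResultSizeKB >= 0 &&
            TotalIntermediateResultSize >
            (int64) MaxIntermediateResultSizeKB * 1024)
        {
            ereport(ERROR,
                    (errmsg("the intermediate result size exceeds "
                            "citus.max_intermediate_result_size "
                            "(currently %d kB)",
                            MaxIntermediateResultSizeKB)));
        }
    }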
Before this commit, the logic was:
- As long as the outer side of the JOIN is not itself a JOIN (e.g., a
relation or subquery etc.), we check for the existence of any recurring
tuples. There were two implications of this decision.
First, even if a subquery on the outer side contains a distributed
table JOINed with a reference table, Citus would unnecessarily throw
an error. Note that the JOIN inside the subquery is already
going to be tested recursively. But, as long as that check
passes, there is no reason for the upper JOIN to fail. An example, which
used to fail and now works:
SELECT * FROM (SELECT * FROM dist JOIN ref) as foo LEFT JOIN dist;
Second, certain JOINs, especially those with ON (true) conditions, were
not represented in the format that
DeferredErrorIfUnsupportedRecurringTuplesJoin() expects the JOINs to be in.
Use a short-lived per-tuple context in citus_evaluate_expr, like
(pg) evaluate_expr does.
We should not use planState->ExprContext when evaluating expressions,
as it might lead to freeing the same executor state twice (the first
time in citus_evaluate_expr itself and the second when postgres does
clean-up for the top-level executor state), which in turn might
cause segfaults.
However, as we then no longer have the planState info necessary to
evaluate prepared statements, we also add planState->es_param_list_info
to the per-tuple ExprContext.
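A minimal C sketch of the pattern, modeled on PostgreSQL's
evaluate_expr() (simplified; not the actual Citus patch): evaluate in a
standalone EState's per-tuple context, attach the caller's
ParamListInfo so PARAM_EXTERN params of prepared statements still
resolve, copy the result out, and free the state exactly once.

    #include "postgres.h"
    #include "executor/executor.h"
    #include "nodes/params.h"
    #include "utils/datum.h"

    static Datum
    EvaluateExprSketch(Expr *expr, int resultTypLen, bool resultTypByVal,
                       ParamListInfo boundParams, bool *isNull)
    {
        /* a standalone executor state owned (and freed) only here */
        EState *estate = CreateExecutorState();

        /* make extern params visible in the per-tuple ExprContext */
        estate->es_param_list_info = boundParams;

        ExprState *exprState = ExecInitExprWithParams(expr, boundParams);
        ExprContext *econtext = GetPerTupleExprContext(estate);

        /* evaluate inside the short-lived per-tuple memory context */
        Datum result = ExecEvalExprSwitchContext(exprState, econtext,
                                                 isNull);

        /* copy the result out before its memory context goes away */
        if (!*isNull)
        {
            result = datumCopy(result, resultTypByVal, resultTypLen);
        }

        FreeExecutorState(estate);

        return result;
    }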