mirror of https://github.com/citusdata/citus.git
3342 Commits (be6668e4400deb89cf0fd0198d157b649c10b09d)
| Author | SHA1 | Message | Date |
|---|---|---|---|
|
|
be6668e440
|
Snapshot-Based Node Split – Foundation and Core Implementation (#8122)
**DESCRIPTION:**
This pull request introduces the foundation and core logic for the
snapshot-based node split feature in Citus. This feature enables
promoting a streaming replica (referred to as a clone in this feature
and UI) to a primary node and rebalancing shards between the original
and the newly promoted node without requiring a full data copy.
This significantly reduces rebalance times for scale-out operations
where the new node already contains a full copy of the data via
streaming replication.
Key Highlights:
**1. Replica (Clone) Registration & Management Infrastructure**
Introduces a new set of UDFs to register and manage clone nodes:
- citus_add_clone_node()
- citus_add_clone_node_with_nodeid()
- citus_remove_clone_node()
- citus_remove_clone_node_with_nodeid()
These functions allow administrators to register a streaming replica of
an existing worker node as a clone, making it eligible for later
promotion via snapshot-based split.
**2. Snapshot-Based Node Split (Core Implementation)**
New core UDF:
- citus_promote_clone_and_rebalance()
This function implements the full workflow to promote a clone and
rebalance shards between the old and new primaries. Steps include:
1. Ensuring Exclusivity – Blocks any concurrent placement-changing
operations.
2. Blocking Writes – Temporarily blocks writes on the primary to ensure
consistency.
3. Replica Catch-up – Waits for the replica to be fully in sync.
4. Promotion – Promotes the replica to a primary using pg_promote.
5. Metadata Update – Updates metadata to reflect the newly promoted
primary node.
6. Shard Rebalancing – Redistributes shards between the old and new
primary nodes.
**3. Split Plan Preview**
A new helper UDF get_snapshot_based_node_split_plan() provides a preview
of the shard distribution post-split, without executing the promotion.
**Example:**
```
reb 63796> select * from pg_catalog.get_snapshot_based_node_split_plan('127.0.0.1',5433,'127.0.0.1',5453);
table_name | shardid | shard_size | placement_node
--------------+---------+------------+----------------
companies | 102008 | 0 | Primary Node
campaigns | 102010 | 0 | Primary Node
ads | 102012 | 0 | Primary Node
mscompanies | 102014 | 0 | Primary Node
mscampaigns | 102016 | 0 | Primary Node
msads | 102018 | 0 | Primary Node
mscompanies2 | 102020 | 0 | Primary Node
mscampaigns2 | 102022 | 0 | Primary Node
msads2 | 102024 | 0 | Primary Node
companies | 102009 | 0 | Clone Node
campaigns | 102011 | 0 | Clone Node
ads | 102013 | 0 | Clone Node
mscompanies | 102015 | 0 | Clone Node
mscampaigns | 102017 | 0 | Clone Node
msads | 102019 | 0 | Clone Node
mscompanies2 | 102021 | 0 | Clone Node
mscampaigns2 | 102023 | 0 | Clone Node
msads2 | 102025 | 0 | Clone Node
(18 rows)
```
**4. Test Infrastructure Enhancements**
- Added a new test case scheduler for snapshot-based split scenarios.
- Enhanced pg_regress_multi.pl to support creating node backups with
slightly modified options to simulate real-world backup-based clone
creation.
**5. Usage Guide**
The snapshot-based node split can be performed using the following
workflow:
**- Take a Backup of the Worker Node**
Run pg_basebackup (or an equivalent tool) against the existing worker
node to create a physical backup.
`pg_basebackup -h <primary_worker_host> -p <port> -D /path/to/replica/data --write-recovery-conf`
**- Start the Replica Node**
Start PostgreSQL on the replica using the backup data directory,
ensuring it is configured as a streaming replica of the original worker
node.
**- Register the Backup Node as a Clone**
Register the running replica as a clone of its original worker node:
`SELECT * FROM citus_add_clone_node('<clone_host>', <clone_port>, '<primary_host>', <primary_port>);`
**- Promote and Rebalance the Clone**
Promote the clone to a primary and rebalance shards between it and the
original worker:
`SELECT * FROM citus_promote_clone_and_rebalance('clone_node_id');`
**- Drop Any Replication Slots from the Original Worker**
After promotion, clean up any unused replication slots from the original
worker:
`SELECT pg_drop_replication_slot('<slot_name>');`
|
|
|
|
f743b35fc2
|
Parallelize Shard Rebalancing & Unlock Concurrent Logical Shard Moves (#7983)
DESCRIPTION: Parallelizes shard rebalancing and removes the bottlenecks
that previously blocked concurrent logical-replication moves.
These improvements reduce rebalance windows, particularly for clusters
with large reference tables, and enable multiple shard transfers to run in parallel.
Motivation:
Citus’ shard rebalancer has some key performance bottlenecks:
**Sequential Movement of Reference Tables:**
Reference tables are often assumed to be small, but in real-world
deployments, they can grow very large. Previously, reference
table shards were transferred as a single unit, making the process
monolithic and time-consuming.
**No Parallelism Within a Colocation Group:**
Although Citus distributes data using colocated shards, shard
movements within the same colocation group were serialized. In
environments with hundreds of distributed tables colocated
together, this serialization significantly slowed down rebalance
operations.
**Excessive Locking:**
Rebalancer used restrictive locks and redundant logical replication
guards, further limiting concurrency.
The goal of this commit is to eliminate these inefficiencies and enable
maximum parallelism during shard rebalancing, reducing rebalance time
without compromising correctness or compatibility.
Feature Summary:
**1. Parallel Reference Table Rebalancing**
Each reference-table shard is now copied in its own background task.
Foreign key and other constraints are deferred until all shards are
copied.
For single-shard movement that does not consider colocation, a new
internal-only UDF `citus_internal_copy_single_shard_placement` is
introduced to allow single-shard copy/move operations.
Since this function is internal, users are not allowed to call it
directly.
**Temporary Hack to Set Background Task Context** Background tasks
cannot currently set custom GUCs like application_name before executing
internal-only functions. Instead, a 'citus_rebalancer ...' statement is added
as a prefix to the task command. This is a temporary hack to label internal
tasks until proper GUC injection support is added to the background task executor.
**2. Changes in Locking Strategy**
- Drop the leftover replication lock that previously serialized shard
moves performed via logical replication. This lock was only needed when
we used to drop and recreate the subscriptions/publications before each
move. Since Citus now removes those objects later as part of the “unused
distributed objects” cleanup, shard moves via logical replication can
safely run in parallel without additional locking.
- Introduced a per-shard advisory lock to prevent concurrent operations
on the same shard while allowing maximum parallelism elsewhere.
- Change the lock mode in AcquirePlacementColocationLock from
ExclusiveLock to RowExclusiveLock to allow concurrent updates within the
same colocation group, while still preventing concurrent DDL operations.
**3. citus_rebalance_start() enhancements**
The citus_rebalance_start() function now accepts two new optional
parameters:
```
- parallel_transfer_colocated_shards BOOLEAN DEFAULT false,
- parallel_transfer_reference_tables BOOLEAN DEFAULT false
```
Both parameters default to false, which preserves the existing behavior
and avoids any disruption to user expectations; when both are set
to true, the rebalancer operates with full parallelism.
**Previous Rebalancer Behavior:**
`SELECT citus_rebalance_start(shard_transfer_mode := 'force_logical');`
This would:
- Start a single background task for replicating all reference tables.
- Then move all shards serially, one at a time.
```
Task 1: replicate_reference_tables()
↓
Task 2: move_shard_1()
↓
Task 3: move_shard_2()
↓
Task 4: move_shard_3()
```
Slow and sequential. Reference table copy is a bottleneck. Colocated
shards must wait for each other.
**New Parallel Rebalancer:**
```
SELECT citus_rebalance_start(
shard_transfer_mode := 'force_logical',
parallel_transfer_colocated_shards := true,
parallel_transfer_reference_tables := true
);
```
This would:
- Schedule independent background tasks for each reference-table shard.
- Move colocated shards in parallel, while still maintaining dependency
order.
- Defer constraint application until all reference shards are in place.
```
Task 1: copy_ref_shard_1()
Task 2: copy_ref_shard_2()
Task 3: copy_ref_shard_3()
→ Task 4: apply_constraints()
↓
Task 5: copy_shard_1()
Task 6: copy_shard_2()
Task 7: copy_shard_3()
↓
Task 8-10: move_shard_1..3()
```
Each operation is scheduled independently and can run as soon as
dependencies are satisfied.
|
|
|
|
2095679dc8
|
Fix memory corruptions around pg_dist_object accessors after a Citus downgrade is followed by an upgrade (#8120)
DESCRIPTION: Fixes potential memory corruptions that could happen when accessing pg_dist_object after a Citus downgrade is followed by a Citus upgrade. When a Citus downgrade is followed by an upgrade, undefined behavior may be encountered. The reason is that Citus hardcoded the number of columns in the extension's tables, but after a downgrade and a subsequent upgrade some of these tables can have more columns, and some of those columns can be marked as dropped. This PR fixes all such tables using the approach introduced in #7950, which solved the problem for the pg_dist_partition table. See #7515 for a more thorough explanation. --------- Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com> Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> |
|
|
|
badaa21cb1
|
Fix memory corruptions around pg_dist_transaction accessors after a Citus downgrade is followed by an upgrade (#8121)
DESCRIPTION: Fixes potential memory corruptions that could happen when accessing pg_dist_transaction after a Citus downgrade is followed by a Citus upgrade. When a Citus downgrade is followed by an upgrade, undefined behavior may be encountered. The reason is that Citus hardcoded the number of columns in the extension's tables, but after a downgrade and a subsequent upgrade some of these tables can have more columns, and some of those columns can be marked as dropped. This PR fixes all such tables using the approach introduced in #7950, which solved the problem for the pg_dist_partition table. See #7515 for a more thorough explanation. Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com> |
|
|
|
8d929d3bf8
|
Push down recurring outer joins when possible (#7973)
DESCRIPTION: Adds support for pushing down LEFT/RIGHT outer joins having a reference table on the outer side and a distributed table on the inner side (e.g., <reference table> LEFT JOIN <distributed table>). Partially addresses #6546.
1) `<outer:reference>` LEFT JOIN `<inner:distributed>`
2) `<inner:distributed>` RIGHT JOIN `<outer:reference>`
Previously, for outer joins of types (1) and (2), the distributed side was computed recursively. This was necessary because, when the inner side of a recurring outer join is a distributed table, it is not possible to directly distribute the join; the preserved (outer and recurring) side may generate rows with join keys that hash to different shards. To implement distributed planning while maintaining consistency with global execution semantics, this PR restricts the outer side only to those partition key values that route to the selected shard during distributed shard query computation. This method is employed when the following criteria are met (recursive planning is applied otherwise):
- The join type is (1) or (2) (lateral joins are not supported).
- The outer side is a reference table.
- The outer join qualifications include an equality condition between the partition column of a distributed table and the recurring table.
- The join is not part of a chained join.
- The “enable_recurring_outer_join_pushdown” GUC is enabled (default is on).
--------- Co-authored-by: ebruaydingol <ebruaydingol@microsoft.com> Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> |
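For illustration, a minimal sketch of the join shape that can now be pushed down, assuming a hypothetical reference table `countries` and a distributed table `events` distributed on `country_id`:
```sql
-- hypothetical schema: countries is a reference table, events is distributed on country_id
SELECT c.name, count(e.event_id)
FROM countries c
LEFT JOIN events e ON c.country_id = e.country_id  -- equality on the distribution column
GROUP BY c.name;
```
With `enable_recurring_outer_join_pushdown` on, each shard query restricts the reference-table side to the partition key values that route to that shard, instead of recursively planning the distributed side.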
|
|
|
87a1b631e8
|
Not automatically create citus_columnar when creating citus extension (#8081)
DESCRIPTION: Do not automatically create citus_columnar when there are no relations using it.
Previously, we always created citus_columnar when creating citus with version >= 11.1, and we did so as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from citus, but do not actually drop them from the database.
  * "Old columnar" is the one that we had before Citus 11.1 as part of citus, i.e., before splitting the access method and its catalog into citus_columnar.
* Create citus_columnar and attach the SQL objects left over from old columnar to it so that we can continue supporting the columnar tables that the user had before Citus 11.1 with citus_columnar.
The first part is unchanged; however, we no longer create citus_columnar automatically if the user didn't have any relations using columnar. For this reason, as of Citus 13.2, when these SQL objects are not owned by an extension and there are no relations using the columnar access method, we drop these SQL objects when updating Citus to 13.2. The net effect is still the same as if we had automatically created citus_columnar and the user dropped citus_columnar later, so we should not have any issues with dropping them. (**Update:** It seems we've made some assumptions in citus, e.g., citus_finish_pg_upgrade() still assumes columnar metadata exists and tries to apply some fixes for it, so this PR fixes them as well. See the last section of this PR description.)
Also, ideally I was hoping to just remove some lines of code from extension.c, where we decide on automatically creating citus_columnar when creating citus; however, this didn't happen to be the case for two reasons:
* We still need to automatically create it for the servers using the columnar access method.
* We need to clean up the leftover SQL objects from old columnar when the above is not the case, otherwise we would have leftover SQL objects from old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we should clean them up and let the user decide whether they want to create citus_columnar when they really need it later.
--- Also made several changes in the test suite because, similarly, we don't always want to have citus_columnar created in citus tests anymore:
* Columnar-specific test targets, which cover **41** test sql files, now always install columnar by default by using "--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus-specific test targets because by default we don't want to have citus_columnar created during citus tests.
* Excluding citus_columnar-specific tests, we have **601** sql files as citus tests, and in **27** of them we manually create citus_columnar at the very beginning of the test because these tests exercise some functionality of citus together with columnar tables.
Also, before and after schedules for PG upgrade tests are now duplicated so we have two versions of each: one with columnar tests and one without. To choose between them, check-pg-upgrade now supports a "test-with-columnar" option, which can be set to "true" or anything else to logically indicate "false". In CI, we run the check-pg-upgrade test target with both options. The purpose is to ensure we can test PG upgrades where citus_columnar is not created in the cluster before the upgrade as well. Finally, added more tests to multi_extension.sql to test Citus upgrade scenarios with / without columnar tables / the citus_columnar extension.
--- Also, it seems citus_finish_pg_upgrade() was assuming that citus_columnar is always created, but we should have never made such an assumption. To fix that, columnar-specific post-PG-upgrade work is moved from citus to a new columnar UDF, columnar_finish_pg_upgrade. But to avoid breaking existing customer / managed service scripts, we continue to automatically perform post-PG-upgrade work for columnar within citus_finish_pg_upgrade, but only if the columnar access method exists this time. |
|
|
|
f73da1ed40
|
Refactor background worker setup for security improvements (#8078)
Enhance security by addressing a code scanning alert and refactoring the background worker setup code for better maintainability and clarity. --------- Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com> |
|
|
|
a6161f5a21
|
Fix CTE traversal for outer Vars in FindReferencedTableColumn (remove assert; correct parentQueryList handling) (#8106)
fixes #8105
This change lets `FindReferencedTableColumn()` correctly resolve columns through a CTE even when the expression comes from an outer query level (`varlevelsup > 0`, `skipOuterVars = false`). Before, we hit an `Assert(skipOuterVars)` in this path.
**Problem**
* Hitting a CTE after walking outer Vars triggered `Assert(skipOuterVars)`.
* Cause: we modified `parentQueryList` in place and didn’t rebuild the correct parent chain before recursing into the CTE, so the path was considered unsafe.
**Fix**
* Remove the `Assert(skipOuterVars)` in the `RTE_CTE` branch.
* Find the CTE’s owning level via `ctelevelsup` and compute `cteParentListIndex`.
* Rebuild a private parent list for recursion: `list_copy` → `list_truncate` → `lappend(current query)`.
* Add a bounds check before indexing the CTE’s `targetList`.
**Why it works**
```diff
-parentQueryList = lappend(parentQueryList, query);
-FindReferencedTableColumn(targetEntry->expr, parentQueryList,
-                          cteQuery, column, rteContainingReferencedColumn,
-                          skipOuterVars);
+/* hand a private, bounded parent list to the recursion */
+List *newParent = list_copy(parentQueryList);
+newParent = list_truncate(newParent, cteParentListIndex + 1);
+newParent = lappend(newParent, query);
+
+FindReferencedTableColumn(targetEntry->expr,
+                          newParent,
+                          cteQuery,
+                          column,
+                          rteContainingReferencedColumn,
+                          skipOuterVars);
+}
```
**Before:** We changed `parentQueryList` in place (`parentQueryList = lappend(...)`) and didn’t trim it to the CTE’s owner level.
**After:** We copy the list, trim it to the CTE’s owner level, then append the current query. This keeps the parent list accurate for the current recursion and safe when following outer Vars.
**Example: Nested subquery referencing the CTE (two levels down)**
```
WITH c AS MATERIALIZED (SELECT user_id FROM raw_events_first)
SELECT 1
FROM raw_events_first t
WHERE EXISTS (
  SELECT 1
  FROM (SELECT user_id FROM c) c2
  WHERE c2.user_id = t.user_id
);
```
Levels:
Q0 = top SELECT
Q1 = EXISTS subquery
Q2 = inner (SELECT user_id FROM c)
When resolving c2.user_id inside Q2:
- parentQueryList is [Q0, Q1, Q2].
- `ctelevelsup` is 2, so `cteParentListIndex = length(parentQueryList) - ctelevelsup - 1`.
- Recurse into the CTE’s query with [Q0, Q2].
**Tests (added in `multi_insert_select`)**
* **T1:** Correlated subquery that references a CTE (one level down). Verifies that resolving through `RTE_CTE` after following an outer `Var` succeeds; the row count matches the source table.
* **T2:** Nested subquery that references a CTE (two levels down). Exercises deeper recursion and confirms results identical to T1.
* **T3:** Scalar subquery in a target list that reads from the outer CTE. Checks the expected row count and that no NULLs are inserted.
These tests cover the cases that previously hit `Assert(skipOuterVars)` and confirm that CTE references resolve correctly while following outer Vars. |
|
|
|
71d6328378
|
Fix memory corruptions around pg_dist_background_task accessors after a Citus downgrade is followed by an upgrade (#8114)
DESCRIPTION: Fixes potential memory corruptions that could happen when accessing pg_dist_background_task after a Citus downgrade is followed by a Citus upgrade. When a Citus downgrade is followed by an upgrade, undefined behavior may be encountered. The reason is that Citus hardcoded the number of columns in the extension's tables, but after a downgrade and a subsequent upgrade some of these tables can have more columns, and some of those columns can be marked as dropped. This PR fixes all such tables using the approach introduced in #7950, which solved the problem for the pg_dist_partition table. See #7515 for a more thorough explanation. --------- Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com> Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> |
|
|
|
3d8fd337e5
|
Check outer table partition column (#8092)
DESCRIPTION: Introduce a new check to push down a query including a union and an outer join, to fix #8091. In "SafeToPushdownUnionSubquery", we check whether the distribution column of the outer relation is in the target list. |
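For illustration, a hedged sketch of the query shape this check concerns, assuming hypothetical colocated distributed tables `dist_a`, `dist_b`, and `dist_c` distributed on `id`:
```sql
-- each UNION branch contains an outer join and keeps the outer relation's
-- distribution column (id) in its target list, which the new check requires
SELECT d.id FROM dist_a d LEFT JOIN dist_b b ON d.id = b.id
UNION
SELECT d.id FROM dist_a d LEFT JOIN dist_c c ON d.id = c.id;
```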
|
|
|
f0789bd388
|
Fix memory corruptions that could happen when a Citus downgrade is followed by an upgrade (#7950)
DESCRIPTION: Fixes potential memory corruptions that could happen when a Citus downgrade is followed by a Citus upgrade. When a citus downgrade is followed by an upgrade, citus can crash with a core dump. The reason is that citus hardcoded the number of columns in the pg_dist_partition table, but after a downgrade and a subsequent upgrade the table can have more columns, and some of them can be marked as dropped. The patch addresses this problem by using tupleDescriptor->natts (the Postgres-internal approach). Fixes #7933. --------- Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> |
|
|
|
c183634207
|
Move "DROP FUNCTION" for older version of UDF to correct file (#8085)
We never update an older version of a SQL object for consistency across release tags, so this commit moves "DROP FUNCTION .." for the older version of "pg_catalog.worker_last_saved_explain_analyze();" to the appropriate migration script. See https://github.com/citusdata/citus/pull/8017. |
|
|
|
889aa92ac0
|
EXPLAIN ANALYZE - Prevent execution of the plan during the plan-print (#8017)
DESCRIPTION: Fixed a bug in EXPLAIN ANALYZE to prevent unintended (duplicate) execution of the (sub)plans during the explain phase. Fixes #4212
### 🐞 Bug #4212: Redundant (Subplan) Execution in `EXPLAIN ANALYZE` codepath
#### 🔍 Background
In the standard PostgreSQL execution path, `ExplainOnePlan()` is responsible for two distinct operations depending on whether `EXPLAIN ANALYZE` is requested:
1. **Execute the plan**
```c
if (es->analyze)
    ExecutorRun(queryDesc, direction, 0L, true);
```
2. **Print the plan tree**
```c
ExplainPrintPlan(es, queryDesc);
```
When printing the plan, the executor should **not run the plan again**. Execution is only expected to happen once—at the top level when `es->analyze = true`.

---
#### ⚠️ Issue in Citus
The Citus implementation of `CustomScanMethods.ExplainCustomScan = CitusExplainScan`, a custom scan explain callback function used to print explain information of a Citus plan, incorrectly performs **redundant execution** inside the explain path of `ExplainPrintPlan()`:
```c
ExplainOnePlan()
  ExplainPrintPlan()
    ExplainNode()
      CitusExplainScan()
        if (distributedPlan->subPlanList != NIL)
        {
            ExplainSubPlans(distributedPlan, es);
            {
                PlannedStmt *plan = subPlan->plan;
                ExplainOnePlan(plan, ...); // ⚠️ May re-execute subplan if es->analyze is true
            }
        }
```
This causes the subplans to be **executed again**, even though they have already been executed during the top-level plan execution. This behavior violates the expectation in PostgreSQL that `EXPLAIN ANALYZE` should **execute each node exactly once** for analysis.

---
#### ✅ Fix (proposed)
Save the output of the subplans during `ExecuteSubPlans()`, and later use it in `ExplainSubPlans()` |
|
|
|
f31bcb4219
|
PG18 - Assert("HaveRegisteredOrActiveSnapshot() fix for cluster creation (#8073)
fixes #8072
fixes #8055
|
|
|
|
6b9962c0c0
|
[doc] wrong code comments for function PopUnassignedPlacementExecution (#8079)
Fixes #7621 DESCRIPTION: function comment correction |
|
|
|
f1160b0892
|
Fix assert failure introduced in 245a62df3e
The assert on the number of shards incorrectly used the value of citus.shard_replication_factor; it should check the table's metadata to determine the replication factor of its data, and not assume it is the current GUC value. |
|
|
|
9327df8446
|
Add PG 18Beta2 Build compatibility (#8060)
Fixes #8061. Adds PG 18Beta2 build compatibility. Reverts "Don't lock partitions pruned by initial pruning". Relevant PG commit: 1722d5eb05d8e5d2e064cd1798abcae4f296ca9d https://github.com/postgres/postgres/commit/1722d5e |
|
|
|
9ccf758bb8
|
Fix PG15 compiler error introduced in commit 245a62df3e (#8069)
Commit
|
|
|
|
0c1b31cdb5
|
Fix UPDATE stmts with indirection & array/jsonb subscripting with more than 1 field (#7675)
DESCRIPTION: Fixes problematic UPDATE statements with indirection and array/jsonb subscripting with more than one field. Fixes #4092, #7674 and #5621. Issues #7674 and #4092 involve an UPDATE with out of order columns and a sublink (SELECT) in the source, e.g. `UPDATE T SET (col3, col1, col4) = (SELECT 3, 1, 4)` where an incorrect value could get written to a column because query deparsing generated an incorrect SQL statement. To address this the fix adds an additional check to `ruleutils` to ensure that the target list of an UPDATE statement is in an order so that deparsing can be done safely. It is needed when the source of the UPDATE has a sublink, because Postgres `rewrite` will have put the target list in attribute order, but for deparsing to produce a correct SQL text the target list needs to be in order of the references (or `paramids`) to the target list of the sublink(s). Issue #5621 involves an UPDATE with array/jsonb subscripting that can behave incorrectly with more than one field, again because Citus query deparsing is receiving a post-`rewrite` query tree. The fix also adds a check to `ruleutils` to enable correct query deparsing of the UPDATE. --------- Co-authored-by: Ibrahim Halatci <ihalatci@gmail.com> Co-authored-by: Colm McHugh <colm.mchugh@gmail.com> |
|
|
|
245a62df3e
|
Avoid query deparse and planning of shard query in local execution. (#8035)
DESCRIPTION: Avoid query deparse and planning of shard query in local execution. Adds citus.enable_local_execution_local_plan GUC to allow avoiding unnecessary query deparsing to improve performance of fast-path queries targeting local shards. If a fast path query resolves to a shard that is local to the node planning the query, a shortcut can be taken so that the OID of the shard is plugged into the parse tree, which is then planned by Postgres. In `local_executor.c` the task uses that plan instead of parsing and planning a shard query. How this is done: The fast path planner identifies if the shortcut is possible, and then the distributed planner checks, using `CheckAndBuildDelayedFastPathPlan()`, if a local plan can be generated or if the shard query should be generated. This optimization is controlled by a GUC `citus.enable_local_execution_local_plan` which is on by default. A new regress test `local_execution_local_plan` tests both row-sharding and schema sharding. Negative tests are added to `local_shard_execution_dropped_column` to verify that the optimization is not taken when the shard is local but there is a difference between the shard and distributed table because of a dropped column. |
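For illustration, a minimal usage sketch; the GUC name is the one introduced by this PR, while the table and filter are hypothetical:
```sql
-- on by default; shown explicitly for clarity
SET citus.enable_local_execution_local_plan TO on;

-- hypothetical fast-path query: a single-shard lookup whose shard placement is local,
-- so the local shard's plan is used directly instead of deparsing and re-planning a shard query
SELECT * FROM orders WHERE order_id = 42;
```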
|
|
|
743c9bbf87
|
fix #7715 - add assign hook for CDC library path adjustment (#8025)
DESCRIPTION: Automatically updates dynamic_library_path when CDC is enabled fix : #7715 According to the documentation and `pg_settings`, the context of the `citus.enable_change_data_capture` parameter is user. However, changing this parameter — even as a superuser — doesn't work as expected: while the initial copy phase works correctly, subsequent change events are not propagated. This appears to be due to the fact that `dynamic_library_path` is only updated to `$libdir/citus_decoders:$libdir` when the server is restarted and the `_PG_init` function is invoked. To address this, I added an `EnableChangeDataCaptureAssignHook` that automatically updates `dynamic_library_path` at runtime when `citus.enable_change_data_capture` is enabled, ensuring that the CDC decoder libraries are properly loaded. Note that `dynamic_library_path` is already a `superuser`-context parameter in base PostgreSQL, so updating it from within the assign hook should be safe and consistent with PostgreSQL’s configuration model. If there’s any reason this approach might be problematic or if there’s a preferred alternative, I’d appreciate any feedback. cc. @jy-min --------- Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com> Co-authored-by: ibrahim halatci <ihalatci@gmail.com> |
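A brief sketch of the runtime behavior described above, assuming a session with sufficient privileges; the decoder path in the comment is the one mentioned in the description:
```sql
-- enabling CDC now fires the assign hook, which updates dynamic_library_path at runtime
SET citus.enable_change_data_capture TO on;
SHOW dynamic_library_path;  -- expected to include $libdir/citus_decoders
```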
|
|
|
da24ede835
|
Support PostgreSQL 18’s new RTE kinds in Citus deparser (#8023)
Fixes #8020
PostgreSQL 18 introduces two new, *pseudo* rangetable‐entry kinds that
Citus’ downstream deparser must recognize:
1. **Pulled-up shard RTE clones** (`CITUS_RTE_SHARD` with `relid ==
InvalidOid`)
2. **Grouping-step RTE** (`RTE_GROUP`, alias `*GROUP*`, not actually in
the FROM clause)
Without special handling, Citus crashes or emits invalid SQL when
running against PG 18beta1:
* **`ERROR: could not open relation with OID 0`**
Citus was unconditionally calling `relation_open(rte->relid,…)` on
entries whose `relid` is 0.
* **`ERROR: missing FROM-clause entry for table "*GROUP*"`**
Citus’ `set_rtable_names()` assigned the synthetic `*GROUP*` alias but
never printed a matching FROM item.
This PR teaches Citus’ `ruleutils_18.c` to skip catalog lookups for RTEs
without valid OIDs and to suppress the grouping-RTE alias, restoring
compatibility with both PG 17 and PG 18.
---
## Background
* **Upstream commit
[[247dea8](
|
|
|
|
5005be31e6
|
PG18 - Handle PG18’s synthetic `RTE_GROUP` in `FindReferencedTableColumn` for correct GROUP BY pushdown (#8034)
Fixes #8032 PostgreSQL 18 introduces a dedicated “grouping-step” range table entry (`RTE_GROUP`) whose target columns are exactly the expressions in our `GROUP BY` clause, rather than hiding them as `resjunk` items. In Citus’s distributed planner, the function `FindReferencedTableColumn` must be able to map from a `Var` referencing a grouped column back to the underlying table column. Without special handling for `RTE_GROUP`, queries that rely on pushdown of `GROUP BY` expressions can fail or mis-identify their target columns. This PR adds support for `RTE_GROUP` in Citus when built against PG 18 or later, ensuring that: * Each grouped expression is correctly resolved. * The pushdown planner can trace a `Var`’s `varattno` into the corresponding `groupexprs` list. * Existing behavior on PG < 18 is unchanged. --- ## What’s Changed In **`src/backend/distributed/planner/multi_logical_optimizer.c`**, inside `FindReferencedTableColumn`: * **Under** `#if PG_VERSION_NUM >= PG_VERSION_18` Introduce an `else if` branch for ```c rangeTableEntry->rtekind == RTE_GROUP ``` * **Extraction of grouped expressions:** ```c List *groupexprs = rangeTableEntry->groupexprs; AttrNumber groupIndex = candidateColumn->varattno - 1; ``` * **Safety check** to guard against malformed `Var` numbers: ```c if (groupIndex < 0 || groupIndex >= list_length(groupexprs)) return; /* malformed Var */ ``` * **Recursive descent:** Fetch the corresponding expression from `groupexprs` and call ```c FindReferencedTableColumn(groupExpr, parentQueryList, query, column, rteContainingReferencedColumn, skipOuterVars); ``` so that the normal resolution logic applies to the underlying expression. * **Unchanged code path** for PG < 18 and for other `rtekind` values. --- |
|
|
|
9e42f3f2c4
|
Add PG 18Beta1 compatibility (Build + RuleUtils) (#7981)
This PR provides successful build against PG18Beta1. RuleUtils PR was reviewed separately: #8010 ## PG 18Beta1–related changes for building Citus ### TupleDesc / Attr layout **What changed in PG:** Postgres consolidated the `TupleDescData.attrs[]` array into a more compact representation. Direct field access (tupdesc->attrs[i]) was replaced by the new `TupleDescAttr()` API. **Citus adaptation:** Everywhere we previously used `tupdesc->attrs[...]`, we now call `TupleDescAttr(tupdesc, idx)` (or our own `Attr()` macro) under a compatibility guard. * |
|
|
|
4cd8bb1b67 | Bump Citus version to 13.2devel | |
|
|
55a0d1f730
|
Add skip_qualify_public param to shard_name() to allow qualifying for "public" schema (#8014)
DESCRIPTION: Adds skip_qualify_public param to `shard_name()` UDF to allow qualifying for "public" schema when needed. |
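For illustration, a hedged example assuming `skip_qualify_public` is an optional boolean argument of `shard_name()` and that a distributed table `public.my_table` with shard id 102008 exists; presumably, passing true skips the "public" schema qualification:
```sql
-- default behavior
SELECT shard_name('public.my_table'::regclass, 102008);
-- opt out of qualifying the shard name with the "public" schema (hypothetical usage)
SELECT shard_name('public.my_table'::regclass, 102008, skip_qualify_public := true);
```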
|
|
|
c98341e4ed
|
Bump PG versions to 17.5, 16.9, 15.13 (#7986)
Nontrivial bump because of the following PG15.3 commit 317aba70e https://github.com/postgres/postgres/commit/317aba70e Previously, when views were converted to RTE_SUBQUERY the relid would be cleared in PG15. In this patch of PG15, relid is retained. Therefore, we add a check with the "relkind and rtekind" to identify the converted views in 15.13 Sister PR https://github.com/citusdata/the-process/pull/164 Using dev image sha because I encountered the libpq symlink issue again with "-v219b87c" |
|
|
|
8d2fbca8ef
|
Fix unsafe memory access in citus_unmark_object_distributed() (#7985)
_Since we've never released a Citus release that contains the commit that introduced this bug (see #7461), we don't need to have a DESCRIPTION line that shows up in release changelog._ From 8 valgrind test targets run for release-13.1 with PG 17.5, we got 1344 stack traces and except one of them, they were all about below unsafe memory access because this is a very hot code-path that we execute via our drop trigger. On main, even `make -C src/test/regress/ check-base-vg` dumps this stack trace with PG 16/17 to src/test/regress/citus_valgrind_test_log.txt when executing "multi_cluster_management", and this is not the case with this PR anymore. ```c ==27337== VALGRINDERROR-BEGIN ==27337== Conditional jump or move depends on uninitialised value(s) ==27337== at 0x7E26B68: citus_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:113) ==27337== by 0x7E26CC7: master_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:153) ==27337== by 0x4BD852: ExecInterpExpr (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:758) ==27337== by 0x4BFD00: ExecInterpExprStillValid (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:1870) ==27337== by 0x51D82C: ExecEvalExprSwitchContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:355) ==27337== by 0x51D8A4: ExecProject (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:389) ==27337== by 0x51DADB: ExecResult (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeResult.c:136) ==27337== by 0x4D72ED: ExecProcNodeFirst (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:464) ==27337== by 0x4CA394: ExecProcNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:273) ==27337== by 0x4CD34C: ExecutePlan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:1670) ==27337== by 0x4CAA7C: standard_ExecutorRun (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:365) ==27337== by 0x7E1E475: CitusExecutorRun (home/onurctirtir/citus/src/backend/distributed/executor/multi_executor.c:238) ==27337== Uninitialised value was created by a heap allocation ==27337== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==27337== by 0x9AB1F7: AllocSetContextCreateInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/utils/mmgr/aset.c:438) ==27337== by 0x4E0D56: CreateExprContextInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:261) ==27337== by 0x4E0E3E: CreateExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:311) ==27337== by 0x4E10D9: ExecAssignExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:490) ==27337== by 0x51EE09: ExecInitSeqScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSeqscan.c:147) ==27337== by 0x4D6CE1: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:210) ==27337== by 0x5243C7: ExecInitSubqueryScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSubqueryscan.c:126) ==27337== by 0x4D6DD9: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:250) ==27337== by 0x4F05B2: 
ExecInitAppend (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeAppend.c:223) ==27337== by 0x4D6C46: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:182) ==27337== by 0x52003D: ExecInitSetOp (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSetOp.c:530) ==27337== ==27337== VALGRINDERROR-END ``` |
|
|
|
088ba75057
|
Add citus_nodes view (#7968)
DESCRIPTION: Adds `citus_nodes` view that displays the node name, port, role, and "active" for nodes in the cluster. This PR adds the `citus_nodes` view to the `pg_catalog` schema. The `citus_nodes` view is created in the `citus` schema and is used to display the node name, port, role, and active status of each node in the `pg_dist_node` table. The view is granted `SELECT` permission to the `PUBLIC` role and is set to the `pg_catalog` schema. Test cases were added to the `multi_cluster_management` tests. structs.py was modified to add whitespace as `citus_indent` requires. --------- Co-authored-by: Alper Kocatas <alperkocatas@microsoft.com> |
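For illustration, a minimal query against the new view; per the description it reports each node's name, port, role, and active status (exact column names are not spelled out above):
```sql
-- lists every node in pg_dist_node with its name, port, role, and whether it is active
SELECT * FROM citus_nodes;
```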
|
|
|
a18040869a
|
Error out for queries with outer joins and pseudoconstant quals in PG<17 (#7937)
PG15 commit d1ef5631e620f9a5b6480a32bb70124c857af4f1 and PG16 commit 695f5deb7902865901eb2d50a70523af655c3a00 disallow replacing joins with scans in queries with pseudoconstant quals. This commit prevents the set_join_pathlist_hook from being called if any of the join restrictions is a pseudo-constant. So in these cases, citus has no info on the join, never sees that the query has an outer join, and ends up producing an incorrect plan. PG17 fixes this by commit 9e9931d2bf40e2fea447d779c2e133c2c1256ef3 Therefore, we take this extra measure here for PG versions less than 17. hasOuterJoin can never be true when set_join_pathlist_hook is absent. |
|
|
|
a4040ba5da
|
Planner: lift volatile target‑list items in `WrapSubquery` to coordinator (prevents sequence‑leap in distributed `INSERT … SELECT`) (#7976)
This PR fixes #7784 and refactors the `WrapSubquery(Query *subquery)` function to improve clarity and correctness when handling volatile expressions in subqueries during Citus insert-select rewriting. ### Background The `WrapSubquery` function rewrites a query of the form: ```sql INSERT INTO target_table SELECT ... FROM ... ``` ...by wrapping the `SELECT` in a subquery: ```sql SELECT <outer-TL> FROM ( <subquery with volatile expressions replaced with NULL> ) citus_insert_select_subquery ``` This transformation allows: * **Volatile expressions** (e.g., `nextval`, `now`) **not used in `GROUP BY` or `ORDER BY`** to be evaluated **exactly once on the coordinator**. * **Stable/immutable or sort-relevant expressions** to remain in the worker-executed subquery. * Placeholder `NULL`s to maintain column alignment in the inner subquery. ### Fix Details * Restructured the code into labeled logical sections: 1. Build wrapper query (`SELECT … FROM (subquery)`) 2. Rewrite target lists with volatility analysis 3. Assign and return updated query trees * Preserved existing behavior, focusing on clarity and maintainability. ### How the new code handles volatile items stage | what we look for | what we do | why -- | -- | -- | -- scan target list once | 1. `expr_is_volatile(te->expr)` 2. `te->ressortgroupref != 0` (is the column used in GROUP BY / ORDER BY?) | decide whether to hoist or keep | we must not hoist an expression the inner query still needs for sorting/grouping, otherwise its `SortGroupClause` breaks volatile & not used in sort/group | deep‑copy the expression into the outer target list | executes once on the coordinator | | leave a typed `NULL `placeholder (visible, not `resjunk`) in the inner target list | keeps column numbering stable for helpers that already ran (reorder, cast); the worker sends a cheap constant | stable / immutable, or volatile but used in sort/group | keep the original expression in the inner list; outer list references it via a `Var `| workers can evaluate it safely and, if needed, the inner ORDER BY still works | ### Example Given this query: ```sql INSERT INTO t SELECT nextval('s'), 42 FROM generate_series(1, 2); ``` The planner rewrites it as: ```sql SELECT nextval('s'), col2 FROM (SELECT NULL::bigint AS col1, 42 AS col2 FROM generate_series(1, 2)) citus_insert_select_subquery; ``` This ensures `nextval('s')` is evaluated only once per row on the **coordinator**, not on each worker node, preserving correct sequence semantics. #### **Outer‑Var guard (`FindReferencedTableColumn`)** Because `WrapSubquery` adds an extra query level, lots of Vars that the old code never expected become “outer” Vars; without teaching `FindReferencedTableColumn` to climb that extra level reliably, Citus would intermittently reject valid foreign keys and even hit asserts. * Re‑implemented the outer‑Var guard so that the function: * **Walks deterministically up the query stack** when `skipOuterVars = false` (default for FK / UNION checks). A new while‑loop copies — rather than truncates — `parentQueryList` on each hop, eliminating list‑aliasing that made *issue 5248* fail intermittently in parallel regressions. * Handles multi‑level `varlevelsup` in a single loop; never mutates the caller’s list in place. |
|
|
|
d4dd44e715
|
Propagate SECURITY LABEL on tables and columns. (#7956)
Issue #7709 asks for security labels on columns to be propagated, to support the `anon` extension. Previously, Citus supported security labels on roles (#7735); this PR adds support for propagating security labels on tables and columns. All scenarios that involve propagating metadata for a Citus table now include the security labels on the table and on the columns of the table. These scenarios are:
- When a table becomes distributed using `create_distributed_table()` or `create_reference_table()`, its security labels (if any) are propagated.
- When a security label is defined on a distributed table, or one of its columns, the label is propagated.
- When a node is added to a Citus cluster, all distributed tables have their security labels propagated.
- When a column of a distributed table is dropped, any security labels on the column are also dropped.
- When a column is added to a distributed table, security labels can be defined on the column and are propagated.
- Security labels on a distributed table or its columns are not propagated when `citus.enable_metadata_sync` is enabled.
Regress test `seclabel` is extended with tests to cover these scenarios. The implementation is somewhat involved because it impacts DDL propagation of Citus tables, but can be broken down as follows:
- distributed_object_ops has `Role_SecLabel`, `Table_SecLabel` and `Column_SecLabel` to take care of security labels on roles, tables and columns. `Any_SecLabel` is used for all other security labels and is essentially a nop.
- Deparser support: `DeparseRoleSecLabelStmt()`, `DeparseTableSecLabelStmt()` and `DeparseColumnSecLabelStmt()` take care of deparsing security label statements on roles, tables and columns respectively.
- When reconstructing the DDL for a citus table, security labels on the table or its columns are included by having `GetPreLoadTableCreationCommands()` call a new function `CreateSecurityLabelCommands()` to take care of any security labels on the table or its columns.
- When changing a distributed table name to a shard name before running a command locally on a worker, the function `RelayEventExtendNames()` checks for security labels on a table or its columns. |
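For illustration, a hedged sketch in the spirit of the `anon` use case mentioned above; the table, column, and label provider are hypothetical, and the point is only that such labels are now included when the table's DDL is propagated:
```sql
-- a column-level security label on a distributed table is now propagated to workers and shards
SECURITY LABEL FOR anon ON COLUMN customers.email
    IS 'MASKED WITH FUNCTION anon.fake_email()';
```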
|
|
|
ea7aa6712d
|
Move stat view implementations into a submodule (#7975)
Also move serialize_distributed_ddls into the commands submodule; this seems like an oversight from last year (by me). |
|
|
|
d2e6cf1de0
|
Fix dev documentation for stat counters (#7974)
Minor updates to the relevant portion of the tech readme and to a code comment in stat_counters.c |
|
|
|
3d61c4dc71
|
Add citus_stat_counters view and citus_stat_counters_reset() function to reset it (#7917)
DESCRIPTION: Adds citus_stat_counters view that can be used to query stat counters that Citus collects while the feature is enabled, which is controlled by citus.enable_stat_counters. citus_stat_counters() can be used to query the stat counters for the provided database oid and citus_stat_counters_reset() can be used to reset them for the provided database oid or for the current database if nothing or 0 is provided. Today we don't persist stat counters on server shutdown. In other words, stat counters are automatically reset in case of a server restart. Details on the underlying design can be found in header comment of stat_counters.c and in the technical readme. ------- Here are the details about what we track as of this PR: For connection management, we have three statistics about the inter-node connections initiated by the node itself: * **connection_establishment_succeeded** * **connection_establishment_failed** * **connection_reused** While the first two are relatively easier to understand, the third one covers the case where a connection is reused. This can happen when a connection was already established to the desired node, Citus decided to cache it for some time (see citus.max_cached_conns_per_worker & citus.max_cached_connection_lifetime), and then reused it for a new remote operation. Here are the other important details about these connection statistics: 1. connection_establishment_failed doesn't care about the connections that we could establish but are lost later in the transaction. Plus, we cannot guarantee that the connections that are counted in connection_establishment_succeeded were not lost later. 2. connection_establishment_failed doesn't care about the optional connections (see OPTIONAL_CONNECTION flag) that we gave up establishing because of the connection throttling rules we follow (see citus.max_shared_pool_size & citus.local_shared_pool_size). The reaason for this is that we didn't even try to establish these connections. 3. For the rest of the cases where a connection failed for some reason, we always increment connection_establishment_failed even if the caller was okay with the failure and know how to recover from it (e.g., the adaptive executor knows how to fall back local execution when the target node is the local node and if it cannot establish a connection to the local node). The reason is that even if it's likely that we can still serve the operation, we still failed to establish the connection and we want to track this. 4. Finally, the connection failures that we count in connection_establishment_failed might be caused by any of the following reasons and for now we prefer to _not_ further distinguish them for simplicity: a. remote node is down or cannot accept any more connections, or overloaded such that citus.node_connection_timeout is not enough to establish a connection b. any internal Citus error that might result in preparing a bad connection string so that libpq fails when parsing the connection string even before actually trying to establish a connection via connect() call c. broken citus.node_conninfo or such Citus configuration that was incorrectly set by the user can also result in similar outcomes as in b d. 
internal waitevent set / poll errors or OOM in local node We also track two more statistics for query execution: * **query_execution_single_shard** * **query_execution_multi_shard** And more importantly, both query_execution_single_shard and query_execution_multi_shard are not only tracked for the top-level queries but also for the subplans etc. The reason is that for some queries, e.g., the ones that go through recursive planning, after Citus performs the heavy work as part of subplans, the work that needs to be done for the top-level query becomes quite straightforward. And for such query types, it would be deceiving if we only incremented the query stat counters for the top-level query. Similarly, for non-pushable INSERT .. SELECT and MERGE queries, we perform separate counter increments for the SELECT / source part of the query besides the final INSERT / MERGE query. |
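For illustration, a minimal usage sketch with the objects named in the description:
```sql
-- counters are collected only while this GUC is enabled
SET citus.enable_stat_counters TO on;

-- inspect the counters (the connection_* and query_execution_* statistics described above)
SELECT * FROM citus_stat_counters;

-- reset counters for the current database (a database oid can be passed instead)
SELECT citus_stat_counters_reset();
```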
|
|
|
37e23f44b4
|
Add Support for CASCADE/RESTRICT in REVOKE statements (#7958)
Fixes #7105. DESCRIPTION: Fixes a bug that caused the CASCADE clause to be omitted from the commands sent to workers for REVOKE commands on tables. --------- Co-authored-by: ThomasC02 <thomascantrell02@gmail.com> Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> Co-authored-by: Tiago Silva <tiagos3373@gmail.com> |
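For illustration, a hedged example with hypothetical objects; the point of the fix is that the CASCADE clause below is now forwarded to the workers:
```sql
-- revoke a previously granted privilege, cascading to privileges the grantee passed on
REVOKE SELECT ON dist_table FROM app_user CASCADE;
```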
|
|
|
48d89c9c1b
|
Adjust max_prepared_transactions only when it is default (#7712)
DESCRIPTION: Adjusts max_prepared_transactions only when it's set to its default on PG >= 16. Fixes #7711. Change AdjustMaxPreparedTransactions to really check whether max_prepared_transactions is explicitly set by the user, and only adjust max_prepared_transactions when it is at its default. This fixes the 021_twophase test failure with the Citus library loaded, after postgres/postgres@b39c5272. Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com> |
|
|
|
0e6127c4f6
|
AddressSanitizer: stack-use-after-scope on distributed_planner:HasUnresolvedExternParamsWalker (#7948)
The variable externParamPlaceholder is created on the stack, and its address is used for paramFetch. Postgres code returns the address of the externParamPlaceholder variable into externParam; the code flow then leaves the scope and dereferences a stack pointer that is out of scope. Fixes https://github.com/citusdata/citus/issues/7941. --------- Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> |
|
|
|
f084b79a4b
|
AddressSanitizer: stack-use-after-scope on address in CreateBackgroundJob (#7949)
The variable jobTypeName is created on the stack and its value is used via a pointer in heap_form_tuple, so we have a stack use out of scope. The issue was detected with AddressSanitizer. Fixes #7943. |
|
|
|
1dc60e38bb
|
Propagates GRANT/REVOKE rights on table columns (#7918)
This commit adds support for GRANT/REVOKE on table columns. It extends propagated DDL according to this logic: https://github.com/citusdata/citus/tree/main/src/backend/distributed#ddl
* Unchanged pre-existing behavior related to splitting DDL per relation during propagation.
* Changed the way ACLs are checked in some cases (see `EnsureTablePermissions()` and associated commits).
* Rewrote `pg_get_table_grants` to include column grants as well.
* Added a missing `pfree()` in `pg_get_table_grants()`.
Fixes https://github.com/citusdata/citus/issues/7287. Also checks a box in https://github.com/citusdata/citus/issues/4812. |
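For illustration, a hedged sketch with hypothetical objects showing the kind of column-level statements that are now propagated:
```sql
-- column-level privileges on a distributed table are now included in propagated DDL
GRANT SELECT (customer_id, email) ON customers TO reporting_role;
REVOKE UPDATE (email) ON customers FROM reporting_role;
```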
|
|
|
a7e686c106
|
Make sure to prevent INSERT INTO ... SELECT queries involving subfield or sublink (#7912)
DESCRIPTION: Makes sure to prevent `INSERT INTO ... SELECT` queries involving subfield or sublink, to avoid crashes.
The following query was crashing the backend:
```
INSERT INTO field_indirection_test_1 ( int_col, ct1_col.int_1,ct1_col.int_2 ) SELECT 0, 1, 2; -- crash
```
En passant, added more tests with sublink in distributed_types and found another query with wrong behavior:
```
INSERT INTO domain_indirection_test (f1,f3.if1) SELECT 0, 1;
ERROR: could not find a conversion path from type 23 to 17619 -- not the expected ERROR
```
Fixed them by using `strip_implicit_coercions()` on the target entry expression before checking for the presence of a subscript or fieldstore; otherwise we fail to find the existing ones and wrongly accept to execute an unsafe query. |
|
|
|
4b4fa22b64
|
Fix mis-deparsing of shard query in "output-table column" name conflict (#7932)
DESCRIPTION: Fixes a bug in deparsing of shard query in case of
"output-table column" name conflict
If an `ORDER BY` item in `SELECT` is a bare identifier, the parser
_first seeks it as an output column name_ of the `SELECT` (for SQL92
compatibility). However, ruleutils.c is expecting the SQL99
interpretation _where such a name is an input column name_. So it's
possible to produce an incorrect display of a view in the (admittedly
pretty ill-advised) case where some other column is renamed in the
`SELECT` output list to match an `ORDER BY` column.
The `DISTINCT ON` expressions are interpreted using the same rules as
for `ORDER BY`.
We had an issue reported that actually uses `DISTINCT ON`: #7684
Since Citus uses ruleutils deparsing logic to create the shard queries,
it would not
table-qualify the column names as needed.
PG17 fixed this https://github.com/postgres/postgres/commit/a7eb633563c
by table-qualifying such names in the dumped view text. Therefore,
Citus doesn't reproduce the issue in PG17, since PG17 table-qualifies
the column names when needed, and the produced shard queries are
correct.
This PR applies the PG17 patch to `ruleutils_15.c` and `ruleutils_16.c`.
Even though we generally try to avoid modifying the ruleutils files, in
this case
we are applying a Postgres patch that `ruleutils_17.c` already has:
|
|
|
|
1c09469dd2
|
Adds a method to determine if current node is primary (#7720)
DESCRIPTION: Adds citus_is_primary_node() UDF to determine if the current node is a primary node in the cluster. --------- Co-authored-by: German Eichberger <geeichbe@microsoft.com> Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com> |
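A minimal usage sketch; the UDF name is taken from the description and it reports a boolean result:
```sql
-- returns whether the node it runs on is registered as a primary node in the cluster
SELECT citus_is_primary_node();
```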
|
|
|
1d0bdbd749 | Bump Citus into 13.1devel | |
|
|
caceb35eba | Some cleanup from dropping pg14 | |
|
|
08913e27d7 | PG17 renamed Anum_pg_database_daticulocale to Anum_pg_database_datlocale | |
|
|
17b4122e84 | Rename some more foreach_ptr to foreach_declared_ptr | |
|
|
ed40a0ad02 |
fix issue #7676: wrong handler around MULTIEXPR (#7914)
DESCRIPTION: Fixes a bug with `UPDATE SET (...) = (SELECT some_func(),... )` (#7676). Citus was checking for the presence of a sublink, but forgot to handle MULTIEXPR while evaluating clauses during planning. At this stage (the citus planner), it's not always possible to call PostgreSQL code because the tree is not yet ready for the pure PostgreSQL executor. Fixes https://github.com/citusdata/citus/issues/7676. Fixed by adding a new function that checks for a sublink or multiexpr in the tree. --------- Co-authored-by: Colm <colmmchugh@microsoft.com> |
|
|
|
e50563fbd8 |
Issue 7887 Enhance AddInsertSelectCasts for Identity Columns (#7920)
## Enhance `AddInsertSelectCasts` for Identity Columns This PR fixes #7887 and improves the behavior of partial inserts into **identity columns** by modifying the **`AddInsertSelectCasts`** function. Specifically, we introduce **special-case handling** for `nextval(...)` calls (represented in the parse tree as `NextValueExpr`) to ensure that if the identity column’s declared type differs from `nextval`’s default return type (`int8`), we **cast** the expression properly. This prevents mismatches like `int8` → `int4` from causing “invalid string enlargement” errors or other type-related failures. When `INSERT ... SELECT` is processed, `AddInsertSelectCasts` reconciles each target column’s type with the corresponding SELECT expression’s type. Historically, for identity columns that rely on `nextval(...)`, we can end up with a mismatch: - `nextval` returns **`int8`**, - The identity column might be **`int4`**, **`bigint`**, or another integer type. Without a correct cast, Postgres or Citus can produce plan-time or runtime errors. By **detecting** `NextValueExpr` and applying a cast to the column’s type, the final plan ensures consistent insertion without errors. ## What Changed 1. **Check for `NextValueExpr`**: In `AddInsertSelectCasts`, we now have a code block: ```c if (IsA(selectEntry->expr, NextValueExpr)) { Oid nextvalType = GetNextvalReturnTypeCatalog(); ... // If (targetType != nextvalType), build a cast from int8 -> targetType } else { // fallback to generic mismatch logic } ``` This short-circuits any expression that’s a `nextval(...)` call, letting us explicitly cast to the correct type. 2. **Fallback Generic Logic**: If it isn’t a `NextValueExpr` (i.e. a normal column or expression mismatch), we still rely on the existing path that compares `sourceType` vs. `targetType` and calls `CastExpr(...)` if they differ. 3. **`GetNextvalReturnTypeCatalog`**: We added or refined a helper function to confirm that `nextval` returns `int8`, or do a `LookupFuncName("nextval", ...)` to discover the function’s return type from `pg_proc`—making it robust if future changes happen. ## Benefits - **Partial inserts** into identity columns no longer fail with type mismatches. - When `nextval` yields `int8` but the identity column is `int4` (or another type), we properly cast to the column’s type in the plan. - Preserves the **existing** approach for other columns—only identity calls get the specialized `NextValueExpr` logic. ## Testing - Extended `generatedidentity.sql` test scenario to cover partial inserts into both `GENERATED ALWAYS` and `GENERATED BY DEFAULT` identity columns, including tests for the `OVERRIDING SYSTEM VALUE` clause and partial inserts referencing foreign-key columns. |
|
|
|
95da74c47f |
Fix Deadlock with transaction recovery is possible during Citus upgrades (#7910)
DESCRIPTION: Fixes deadlock with transaction recovery that is possible during Citus upgrades. Fixes #7875. This commit addresses two interrelated deadlock issues uncovered during Citus upgrades: 1. Local Deadlock: - **Problem:** In `RecoverWorkerTransactions()`, a new connection is created for each worker node to perform transaction recovery by locking the `pg_dist_transaction` catalog table until the end of the transaction. When `RecoverTwoPhaseCommits()` calls this function for each worker node, the order of acquiring locks on `pg_dist_authinfo` and `pg_dist_transaction` can alternate. This reversal can lead to a deadlock if any concurrent process requires locks on these tables. - **Fix:** Pre-establish all worker node connections upfront so that `RecoverWorkerTransactions()` operates with a single, consistent connection. This ensures that locks on `pg_dist_authinfo` and `pg_dist_transaction` are always acquired in the correct order, thereby preventing the local deadlock. 2. Distributed Deadlock: - **Problem:** After resolving the local deadlock, a distributed deadlock issue emerges. The maintenance daemon calls `RecoverWorkerTransactions()` on each worker node— including the local node—which leads to a complex locking sequence: - A RowExclusiveLock is taken on the `pg_dist_transaction` table in `RecoverWorkerTransactions()`. - An update extension then tries to acquire an AccessExclusiveLock on the same table, getting blocked by the RowExclusiveLock. - A subsequent query (e.g., a SELECT on `pg_prepared_xacts`) issued using a separate connection on the local node gets blocked due to locks held during a call to `BuildCitusTableCacheEntry()`. - The maintenance daemon waits for this query, resulting in a circular wait and stalling the entire cluster. - **Fix:** Avoid cache lookups for internal PostgreSQL tables by implementing an early bailout for relation IDs below `FirstNormalObjectId` (system objects). This eliminates unnecessary calls to `BuildCitusTableCache`, reducing lock contention and mitigating the distributed deadlock. Furthermore, this optimization improves performance in fast connect→query_catalog→disconnect cycles by eliminating redundant cache creation and lookups. 3. Also reverts the commit that disabled the relevant test cases. |