**DESCRIPTION:**
This pull request introduces the foundation and core logic for the
snapshot-based node split feature in Citus. This feature enables
promoting a streaming replica (referred to as a clone in this feature
and UI) to a primary node and rebalancing shards between the original
and the newly promoted node without requiring a full data copy.
This significantly reduces rebalance times for scale-out operations
where the new node already contains a full copy of the data via
streaming replication.
Key Highlights:
**1. Replica (Clone) Registration & Management Infrastructure**
Introduces a new set of UDFs to register and manage clone nodes:
- citus_add_clone_node()
- citus_add_clone_node_with_nodeid()
- citus_remove_clone_node()
- citus_remove_clone_node_with_nodeid()
These functions allow administrators to register a streaming replica of
an existing worker node as a clone, making it eligible for later
promotion via snapshot-based split.
**2. Snapshot-Based Node Split (Core Implementation)**
New core UDF:
- citus_promote_clone_and_rebalance()
This function implements the full workflow to promote a clone and
rebalance shards between the old and new primaries. Steps include:
1. Ensuring Exclusivity – Blocks any concurrent placement-changing
operations.
2. Blocking Writes – Temporarily blocks writes on the primary to ensure
consistency.
3. Replica Catch-up – Waits for the replica to be fully in sync.
4. Promotion – Promotes the replica to a primary using pg_promote.
5. Metadata Update – Updates metadata to reflect the newly promoted
primary node.
6. Shard Rebalancing – Redistributes shards between the old and new
primary nodes.
**3. Split Plan Preview**
A new helper UDF get_snapshot_based_node_split_plan() provides a preview
of the shard distribution post-split, without executing the promotion.
**Example:**
```
reb 63796> select * from pg_catalog.get_snapshot_based_node_split_plan('127.0.0.1',5433,'127.0.0.1',5453);
table_name | shardid | shard_size | placement_node
--------------+---------+------------+----------------
companies | 102008 | 0 | Primary Node
campaigns | 102010 | 0 | Primary Node
ads | 102012 | 0 | Primary Node
mscompanies | 102014 | 0 | Primary Node
mscampaigns | 102016 | 0 | Primary Node
msads | 102018 | 0 | Primary Node
mscompanies2 | 102020 | 0 | Primary Node
mscampaigns2 | 102022 | 0 | Primary Node
msads2 | 102024 | 0 | Primary Node
companies | 102009 | 0 | Clone Node
campaigns | 102011 | 0 | Clone Node
ads | 102013 | 0 | Clone Node
mscompanies | 102015 | 0 | Clone Node
mscampaigns | 102017 | 0 | Clone Node
msads | 102019 | 0 | Clone Node
mscompanies2 | 102021 | 0 | Clone Node
mscampaigns2 | 102023 | 0 | Clone Node
msads2 | 102025 | 0 | Clone Node
(18 rows)
```
**4. Test Infrastructure Enhancements**
- Added a new test case scheduler for snapshot-based split scenarios.
- Enhanced pg_regress_multi.pl to support creating node backups with
slightly modified options to simulate real-world backup-based clone
creation.
**5. Usage Guide**
The snapshot-based node split can be performed using the following
workflow:
**- Take a Backup of the Worker Node**
Run pg_basebackup (or an equivalent tool) against the existing worker
node to create a physical backup.
`pg_basebackup -h <primary_worker_host> -p <port> -D /path/to/replica/data --write-recovery-conf`
**- Start the Replica Node**
Start PostgreSQL on the replica using the backup data directory,
ensuring it is configured as a streaming replica of the original worker
node.
**- Register the Backup Node as a Clone**
Register the replica as a clone of its original worker node:
`SELECT * FROM citus_add_clone_node('<clone_host>', <clone_port>, '<primary_host>', <primary_port>);`
**- Promote and Rebalance the Clone**
Promote the clone to a primary and rebalance shards between it and the
original worker:
`SELECT * FROM citus_promote_clone_and_rebalance(<clone_node_id>);`
**- Drop Any Replication Slots from the Original Worker**
After promotion, clean up any unused replication slots from the original
worker:
`SELECT pg_drop_replication_slot('<slot_name>');`
DESCRIPTION: Fixes potential memory corruptions that could happen when
accessing pg_dist_object after a Citus downgrade is followed by a Citus
upgrade.
If a Citus downgrade is followed by an upgrade, undefined behavior may be
encountered. The reason is that Citus hardcoded the number of columns in
the extension's tables, but after a downgrade and a subsequent upgrade
some of these tables can have more columns than expected, and some of them
can be marked as dropped.
This PR fixes all such tables using the approach introduced in #7950,
which solved the problem for the pg_dist_partition table.
See #7515 for a more thorough explanation.
---------
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for pushing down LEFT/RIGHT outer joins having
a reference table on the outer side and a distributed table on the inner
side (e.g., <reference table> LEFT JOIN <distributed table>)
Partially addresses #6546
1) `<outer:reference>` LEFT JOIN `<inner:distributed>`
2) `<inner:distributed>` RIGHT JOIN `<outer:reference>`
Previously, for outer joins of types (1) and (2), the distributed side
was computed recursively. This was necessary because, when the inner
side of a recurring outer join is a distributed table, it is not
possible to directly distribute the join; the preserved (outer and
recurring) side may generate rows with join keys that hash to different
shards.
To implement distributed planning while maintaining consistency with
global execution semantics, this PR restricts the outer side to only
those partition key values that route to the selected shard during
distributed shard query computation. This method is employed when the
following criteria are met (otherwise recursive planning is applied):
- The join type is (1) or (2) (lateral joins are not supported).
- The outer side is a reference table.
- The outer join qualifications include an equality condition between
the partition column of a distributed table and the recurring table.
- The join is not part of a chained join.
- The “enable_recurring_outer_join_pushdown” GUC is enabled (default is
on).
---------
Co-authored-by: ebruaydingol <ebruaydingol@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Do not automatically create citus_columnar when there are no
relations using it.
Previously, we always created citus_columnar when creating citus with
version >= 11.1, and we did so as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from
citus, but not actually drop them from the database.
* "Old columnar" is the one that we had before Citus 11.1 as part of
citus, i.e., before splitting the access method and its catalog into
citus_columnar.
* Create citus_columnar and attach the SQL objects leftover from old
columnar to it so that we can continue supporting the columnar tables
that the user had before Citus 11.1 with citus_columnar.
The first part is unchanged; however, we no longer create citus_columnar
automatically if the user didn't have any relations using columnar. For
this reason, as of Citus 13.2, when these SQL objects are not owned by an
extension and there are no relations using the columnar access method, we
drop these SQL objects when updating Citus to 13.2.
The net effect is still the same as if we had automatically created
citus_columnar and the user dropped citus_columnar later, so we should not
have any issues with dropping them.
(**Update:** Seems we've made some assumptions in citus, e.g.,
citus_finish_pg_upgrade() still assumes columnar metadata exists and
tries to apply some fixes for it, so this PR fixes them as well. See the
last section of this PR description.)
Also, ideally I was hoping to just remove some lines of code from
extension.c, where we decide whether to automatically create citus_columnar
when creating citus; however, that wasn't possible, for the following
reasons:
* We still need to automatically create it for the servers using the
columnar access method.
* We need to clean up the leftover SQL objects from old columnar when
the above is not the case; otherwise we would keep leftover SQL objects
from old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we
should clean them up and let the user decide whether they want to create
citus_columnar when they really need it later.
---
Also made several changes in the test suite because similarly, we don't
always want to have citus_columnar created in citus tests anymore:
* Now, columnar specific test targets, which cover **41** test sql
files, always install columnar by default, by using
"--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus specific test
targets because by default we don't want to have citus_columnar created
during citus tests.
* Excluding citus_columnar specific tests, we have **601** sql files as
citus tests, and in **27** of them we manually create citus_columnar at
the very beginning of the test because these tests exercise some
functionalities of citus together with columnar tables.
Also, before and after schedules for PG upgrade tests are now duplicated
so we have two versions of each: one with columnar tests and one
without. To choose between them, check-pg-upgrade now supports a
"test-with-columnar" option, which can be set to "true" or anything else
to logically indicate "false". In CI, we run the check-pg-upgrade test
target with both options. The purpose is to ensure we can test PG
upgrades where citus_columnar is not created in the cluster before the
upgrade as well.
Finally, added more tests to multi_extension.sql to test Citus upgrade
scenarios with / without columnar tables / citus_columnar extension.
---
Also, it seems citus_finish_pg_upgrade was assuming that citus_columnar is
always created, but we should have never made such an assumption. To fix
that, the columnar specific post-PG-upgrade work moved from citus to a new
columnar UDF, columnar_finish_pg_upgrade. But to avoid breaking existing
customer / managed service scripts, we continue to automatically perform
post-PG-upgrade work for columnar within citus_finish_pg_upgrade, but only
if the columnar access method exists this time.
Enhance security by addressing a code scanning alert and refactoring the
background worker setup code for better maintainability and clarity.
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
DESCRIPTION: Fixes potential memory corruptions that could happen when
accessing pg_dist_background_task after a Citus downgrade is followed by
a Citus upgrade.
If a Citus downgrade is followed by an upgrade, undefined behavior may be
encountered. The reason is that Citus hardcoded the number of columns in
the extension's tables, but after a downgrade and a subsequent upgrade
some of these tables can have more columns than expected, and some of them
can be marked as dropped.
This PR fixes all such tables using the approach introduced in #7950,
which solved the problem for the pg_dist_partition table.
See #7515 for a more thorough explanation.
---------
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Fixes potential memory corruptions that could happen when a
Citus downgrade is followed by a Citus upgrade.
If a Citus downgrade is followed by an upgrade, Citus can crash with a core
dump. The reason is that Citus hardcoded the number of columns in the
pg_dist_partition table, but after a downgrade and a subsequent upgrade the
table can have more columns, and some of them can be marked as dropped.
The patch fixes this by using tupleDescriptor->natts (the PostgreSQL-internal
approach) instead of a hardcoded column count.
Fixes #7933.
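A minimal sketch of the tupleDescriptor->natts approach, with hypothetical
call-site variables (`pgDistPartition`, `heapTuple`):
```c
TupleDesc tupleDescriptor = RelationGetDescr(pgDistPartition);
int columnCount = tupleDescriptor->natts;

/*
 * Size the arrays from the live tuple descriptor rather than a
 * compile-time constant, so extra (possibly dropped) columns left behind
 * by a downgrade + upgrade cycle are still accounted for.
 */
Datum *datumArray = (Datum *) palloc0(columnCount * sizeof(Datum));
bool *isNullArray = (bool *) palloc0(columnCount * sizeof(bool));

heap_deform_tuple(heapTuple, tupleDescriptor, datumArray, isNullArray);
```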
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Fixes #8072, fixes #8055 (706054b11b).
Before the fix, when trying to create a cluster with asserts enabled via
`citus_dev make test1 --destroy`, the following assertion failure occurred:
```
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "heapam.c", Line: 232, PID: 75572
postgres: citus citus [local] SELECT(ExceptionalCondition+0x6e)[0x5585e16123e6]
postgres: citus citus [local] SELECT(heap_insert+0x220)[0x5585e10709af]
postgres: citus citus [local] SELECT(simple_heap_insert+0x33)[0x5585e1071a20]
postgres: citus citus [local] SELECT(CatalogTupleInsert+0x32)[0x5585e1135843]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11e0aa)[0x7fa26f1ca0aa]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11b607)[0x7fa26f1c7607]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11bf25)[0x7fa26f1c7f25]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11d4e2)[0x7fa26f1c94e2]
postgres: citus citus [local] SELECT(+0x1c267d)[0x5585e10e967d]
postgres: citus citus [local] SELECT(+0x1c6ba0)[0x5585e10edba0]
postgres: citus citus [local] SELECT(+0x1c7b80)[0x5585e10eeb80]
postgres: citus citus [local] SELECT(CommitTransactionCommand+0xd)[0x5585e10eef0a]
postgres: citus citus [local] SELECT(+0x575b3d)[0x5585e149cb3d]
postgres: citus citus [local] SELECT(+0x5788ce)[0x5585e149f8ce]
postgres: citus citus [local] SELECT(PostgresMain+0xae7)[0x5585e14a2088]
postgres: citus citus [local] SELECT(BackendMain+0x51)[0x5585e149ab36]
postgres: citus citus [local] SELECT(postmaster_child_launch+0x101)[0x5585e13d6b32]
postgres: citus citus [local] SELECT(+0x4b273f)[0x5585e13d973f]
postgres: citus citus [local] SELECT(+0x4b49f3)[0x5585e13db9f3]
postgres: citus citus [local] SELECT(PostmasterMain+0x1089)[0x5585e13dcee2]
postgres: citus citus [local] SELECT(main+0x1d7)[0x5585e12e3428]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7fa271421d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7fa271421e40]
```
This PR provides a successful build against PG18Beta1. The RuleUtils PR was
reviewed separately: #8010
## PG 18Beta1–related changes for building Citus
### TupleDesc / Attr layout
**What changed in PG:** Postgres consolidated the
`TupleDescData.attrs[]` array into a more compact representation. Direct
field access (`tupdesc->attrs[i]`) was replaced by the new
`TupleDescAttr()` API.
**Citus adaptation:** Everywhere we previously used
`tupdesc->attrs[...]`, we now call `TupleDescAttr(tupdesc, idx)` (or our
own `Attr()` macro) under a compatibility guard.
* 5983a4cffc
General Logic:
* Use `Attr(...)` in places where `columnar_version_compat.h` is
included. This avoids the need to sprinkle `#if PG_VERSION_NUM` guards
around each attribute access.
* Use `TupleDescAttr(tupdesc, i)` when the relevant PostgreSQL header is
already included and the additional macro indirection is unnecessary.
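A minimal sketch of the resulting access pattern (illustrative variable
names, not an exact Citus call site):
```c
/* tupleDescriptor->attrs[columnIndex].atttypid no longer compiles on PG 18 */
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, columnIndex);
Oid columnTypeId = attributeForm->atttypid;
```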
### Collation‐aware `LIKE`
**What changed in PG:** The `textlike` operator now requires an explicit
collation, to avoid ambiguous‐collation errors. Core code switched from
`DirectFunctionCall2(textlike, ...)` to
`DirectFunctionCall2Coll(textlike, DEFAULT_COLLATION_OID, ...)`.
**Citus adaptation:** In `remote_commands.c` and any other LIKE call, we
now use `DirectFunctionCall2Coll(textlike, DEFAULT_COLLATION_OID, ...)`
and `#include <utils/pg_collation.h>`.
* 85b7efa1cd
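For illustration, a sketch of the adapted call (variable names are
illustrative):
```c
/* pass an explicit collation so the LIKE comparison also works on PG 18 */
Datum matchesDatum = DirectFunctionCall2Coll(textlike, DEFAULT_COLLATION_OID,
											 leftTextDatum, rightTextDatum);
bool matches = DatumGetBool(matchesDatum);
```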
### Columnar storage API
* Adapt `columnar_relation_set_new_filelocator` (and related init
routines) for PG 18’s revised SMGR and storage-initialization hooks.
* Pull in the new headers (`explain_format.h`,
`columnar_version_compat.h`) so the columnar module compiles cleanly
against PG 18.
- The `heap_modify_tuple()` + `heap_inplace_update()` pattern only exists on
PG < 18; on PG 18 the in-place update helper was removed upstream.
- a07e03fd8f
### OpenSSL / TLS integration
**What changed in PG:** Moved from the legacy `SSL_library_init()` to
`OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL)`, updated certificate
API calls (`X509_getm_notBefore`, `X509_getm_notAfter`), and
standardized on `TLS_method()`.
**Citus adaptation:** We now `#include <openssl/opensslv.h>` and use
`#if OPENSSL_VERSION_NUMBER >= 0x10100000L` to choose between
`OPENSSL_init_ssl()` and `SSL_library_init()`, and wrap
`X509_gmtime_adj()` calls around the new accessor functions.
* 6c66b7443c
### Adapt `ExtractColumns()` to the new PG-18 `expandRTE()` signature
PostgreSQL 18
80feb727c8
added a fourth argument of type `VarReturningType` to `expandRTE()`, so
calls that used the old 7-parameter form no longer compile. This patch:
* Wraps the `expandRTE(...)` call in a `#if PG_VERSION_NUM >= 180000`
guard.
* On PG 18+ passes the new `VAR_RETURNING_DEFAULT` argument before
`location`.
* On PG 15–17 continues to call the original 7-arg form.
* Adds the necessary includes (`parser/parse_relation.h` for `expandRTE`
and `VarReturningType`, and `pg_version_constants.h` for
`PG_VERSION_NUM`).
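A sketch of the guarded call described above (illustrative variable names;
`-1` stands for an unknown parse location):
```c
List *columnNameList = NIL;
List *columnVarList = NIL;

#if PG_VERSION_NUM >= PG_VERSION_18
/* PG 18+: the new VarReturningType argument goes right before "location" */
expandRTE(rangeTableEntry, rangeTableIndex, 0, VAR_RETURNING_DEFAULT, -1,
		  false, &columnNameList, &columnVarList);
#else
/* PG 15-17: original seven-argument form */
expandRTE(rangeTableEntry, rangeTableIndex, 0, -1,
		  false, &columnNameList, &columnVarList);
#endif
```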
### Adapt `ExecutorStart`/`ExecutorRun` hooks to PG-18’s new signatures
PostgreSQL 18
525392d572
changed the signatures of the executor hooks:
* `ExecutorStart_hook` now returns `bool` instead of `void`, and
* `ExecutorRun_hook` drops its old `run_once` argument.
This patch preserves Citus’s existing hook logic by:
1. **Adding two adapter functions** under `#if PG_VERSION_NUM >=
PG_VERSION_18`:
* `citus_executor_start_adapter(QueryDesc *queryDesc, int eflags)`
Calls the old `CitusExecutorStart(queryDesc, eflags)` and then returns
`true` to satisfy the new hook’s `bool` return type.
* `citus_executor_run_adapter(QueryDesc *queryDesc, ScanDirection
direction, uint64 count)`
Calls the old `CitusExecutorRun(queryDesc, direction, count, true)`
(passing `true` for the dropped `run_once` argument), and returns
`void`.
2. **Installing the adapters** in `_PG_init()` instead of the original
hooks when building against PG 18+:
```c
#if PG_VERSION_NUM >= PG_VERSION_18
ExecutorStart_hook = citus_executor_start_adapter;
ExecutorRun_hook = citus_executor_run_adapter;
#else
ExecutorStart_hook = CitusExecutorStart;
ExecutorRun_hook = CitusExecutorRun;
#endif
```
### Adapt to PG-18’s removal of the “run_once” flag from
ExecutorRun/PortalRun
PostgreSQL commit
3eea7a0c97
rationalized the executor’s parallelism logic by moving the “execute a
plan only once” check into `ExecutePlan()` itself and dropping the old
`bool run_once` argument from the public APIs:
```diff
- void ExecutorRun(QueryDesc *queryDesc,
- ScanDirection direction,
- uint64 count,
- bool run_once);
+ void ExecutorRun(QueryDesc *queryDesc,
+ ScanDirection direction,
+ uint64 count);
```
(and similarly for `PortalRun()`).
To stay compatible across PG 15–18, Citus now:
1. **Updates all internal calls** to `ExecutorRun(...)` and
`PortalRun(...)`:
* On PG 18+, use the new three-argument form (`ExecutorRun(qd, dir,
count)`).
* On PG 15–17, keep the old four-arg form (`ExecutorRun(qd, dir, count,
true)`) under a `#if PG_VERSION_NUM < 180000` guard.
2. **Guards the dispatcher hooks** via the adapter functions (from the
earlier patch) so that Citus’s executor hooks continue to work under
both the old and new signatures.
### Adapt to PG-18’s shortened PortalRun signature
PostgreSQL 18’s refactoring (see commit
3eea7a0c97)
also removed the old run_once argument from the
public PortalRun() API. The signature changed from:
```diff
- bool PortalRun(Portal portal,
- long count,
- bool isTopLevel,
- bool run_once,
- DestReceiver *dest,
- DestReceiver *altdest,
- QueryCompletion *qc);
+ bool PortalRun(Portal portal,
+ long count,
+ bool isTopLevel,
+ DestReceiver *dest,
+ DestReceiver *altdest,
+ QueryCompletion *qc);
```
To support both versions in Citus, we:
1. **Version-guard each call** to `PortalRun()`:
* **On PG 18+** invoke the new 6-argument form.
* **On PG 15–17** fall back to the legacy 7-argument form, passing
`true` for `run_once`.
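A sketch of such a guarded call (illustrative variable names; `FETCH_ALL`
as the row count):
```c
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG 18+: six-argument form, run_once dropped */
bool completed = PortalRun(portal, FETCH_ALL, isTopLevel,
						   destReceiver, destReceiver, &queryCompletion);
#else
/* PG 15-17: legacy seven-argument form, pass true for run_once */
bool completed = PortalRun(portal, FETCH_ALL, isTopLevel, true,
						   destReceiver, destReceiver, &queryCompletion);
#endif
```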
### Add support for PG-18’s new `plansource` argument in
`PortalDefineQuery`
PostgreSQL 18 extended the `PortalDefineQuery` API to carry a
`CachedPlanSource *plansource` pointer so that the portal machinery can
track cached-plan invalidation (as introduced alongside deferred locking
in commit
525392d572).
To remain compatible across PG 15–18, Citus now wraps its calls under a
version guard:
```diff
- PortalDefineQuery(portal, NULL, sql, commandTag, plantree_list, NULL);
+#if PG_VERSION_NUM >= 180000
+ /* PG 18+: seven-arg signature (adds plansource) */
+ PortalDefineQuery(
+ portal,
+ NULL, /* no prepared-stmt name */
+ sql, /* the query text */
+ commandTag, /* the CommandTag */
+ plantree_list, /* List of PlannedStmt* */
+ NULL, /* no CachedPlan */
+ NULL /* no CachedPlanSource */
+ );
+#else
+ /* PG 15–17: six-arg signature */
+ PortalDefineQuery(
+ portal,
+ NULL, /* no prepared-stmt name */
+ sql, /* the query text */
+ commandTag, /* the CommandTag */
+ plantree_list, /* List of PlannedStmt* */
+ NULL /* no CachedPlan */
+ );
+#endif
```
### Adapt ExecInitRangeTable() calls to PG-18’s new signature
PostgreSQL commit
cbc127917e04a978a788b8bc9d35a70244396d5b
overhauled the planner API for range‐table initialization:
**PG 18+**: added a fourth `Bitmapset *unpruned_relids` argument to
support deferred partition pruning
In Citus’s `create_estate_for_relation()` (in `columnar_metadata.c`), we
now wrap the call in a compile‐time guard so that the code compiles
correctly on all supported PostgreSQL versions:
```
/* Prepare permission info on PG 16+ */
#if PG_VERSION_NUM >= PG_VERSION_16
List *perminfos = NIL;
addRTEPermissionInfo(&perminfos, rte);
#else
List *perminfos = NIL; /* unused on PG 15 */
#endif
/* Initialize the range table, with the right signature for each PG version */
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG 18+: four‐arg signature (adds unpruned_relids) */
ExecInitRangeTable(
estate,
list_make1(rte),
perminfos,
NULL /* unpruned_relids: not used by columnar */
);
#elif PG_VERSION_NUM >= PG_VERSION_16
/* PG 16–17: three‐arg signature (permInfos) */
ExecInitRangeTable(
estate,
list_make1(rte),
perminfos
);
#else
/* PG 15: two‐arg signature */
ExecInitRangeTable(
estate,
list_make1(rte)
);
#endif
estate->es_output_cid = GetCurrentCommandId(true);
```
### Adapt `pgstat_report_vacuum()` to PG-18’s new timestamp argument
PostgreSQL commit
30a6ed0ce4bb18212ec38cdb537ea4b43bc99b83
extended the `pgstat_report_vacuum()` API by adding a `TimestampTz
start_time` parameter at the end so that the VACUUM statistics collector
can record when the operation began:
```diff
/* PG ≤17: four-arg signature */
- void pgstat_report_vacuum(Oid tableoid,
- bool shared,
- double num_live_tuples,
- double num_dead_tuples);
+/* PG ≥18: five-arg signature adds a start_time */
+ void pgstat_report_vacuum(Oid tableoid,
+ bool shared,
+ double num_live_tuples,
+ double num_dead_tuples,
+ TimestampTz start_time);
```
To support both versions, we now wrap the call in `columnar_tableam.c`
with a version guard, supplying `GetCurrentTimestamp()` for PG-18+:
```c
#if PG_VERSION_NUM >= 180000
/* PG 18+: include start_timestamp */
pgstat_report_vacuum(
RelationGetRelid(rel),
rel->rd_rel->relisshared,
Max(new_live_tuples, 0), /* live tuples */
0, /* dead tuples */
GetCurrentTimestamp() /* start time */
);
#else
/* PG 15–17: original signature */
pgstat_report_vacuum(
RelationGetRelid(rel),
rel->rd_rel->relisshared,
Max(new_live_tuples, 0), /* live tuples */
0 /* dead tuples */
);
#endif
```
### Adapt `ExecuteTaskPlan()` to PG-18’s expanded `CreateQueryDesc()`
signature
PostgreSQL 18 changed `CreateQueryDesc()` from an eight-argument to a
nine-argument call by inserting a `CachedPlan *cplan` parameter
immediately after the `PlannedStmt *plannedstmt` argument (see commit
525392d572).
To remain compatible with PG 15–17, Citus now wraps its invocation in
`local_executor.c` with a version guard:
```diff
- /* PG15–17: eight-arg CreateQueryDesc without cached plan */
- QueryDesc *queryDesc = CreateQueryDesc(
- taskPlan, /* PlannedStmt *plannedstmt */
- queryString, /* const char *sourceText */
- GetActiveSnapshot(),/* Snapshot snapshot */
- InvalidSnapshot, /* Snapshot crosscheck_snapshot */
- destReceiver, /* DestReceiver *dest */
- paramListInfo, /* ParamListInfo params */
- queryEnv, /* QueryEnvironment *queryEnv */
- 0 /* int instrument_options */
- );
+#if PG_VERSION_NUM >= 180000
+ /* PG18+: nine-arg CreateQueryDesc with a CachedPlan slot */
+ QueryDesc *queryDesc = CreateQueryDesc(
+ taskPlan, /* PlannedStmt *plannedstmt */
+ NULL, /* CachedPlan *cplan (none) */
+ queryString, /* const char *sourceText */
+ GetActiveSnapshot(),/* Snapshot snapshot */
+ InvalidSnapshot, /* Snapshot crosscheck_snapshot */
+ destReceiver, /* DestReceiver *dest */
+ paramListInfo, /* ParamListInfo params */
+ queryEnv, /* QueryEnvironment *queryEnv */
+ 0 /* int instrument_options */
+ );
+#else
+ /* PG15–17: eight-arg CreateQueryDesc without cached plan */
+ QueryDesc *queryDesc = CreateQueryDesc(
+ taskPlan, /* PlannedStmt *plannedstmt */
+ queryString, /* const char *sourceText */
+ GetActiveSnapshot(),/* Snapshot snapshot */
+ InvalidSnapshot, /* Snapshot crosscheck_snapshot */
+ destReceiver, /* DestReceiver *dest */
+ paramListInfo, /* ParamListInfo params */
+ queryEnv, /* QueryEnvironment *queryEnv */
+ 0 /* int instrument_options */
+ );
+#endif
```
### Adapt `RelationGetPrimaryKeyIndex()` to PG-18’s new “deferrable_ok”
flag
PostgreSQL commit
14e87ffa5c
added a new Boolean `deferrable_ok` parameter to
`RelationGetPrimaryKeyIndex()` so that the lock manager can defer
unique‐constraint locks when requested. The API changed from:
```c
RelationGetPrimaryKeyIndex(Relation relation)
```
to:
```c
RelationGetPrimaryKeyIndex(Relation relation, bool deferrable_ok)
```
```diff
diff --git a/src/backend/distributed/metadata/node_metadata.c
b/src/backend/distributed/metadata/node_metadata.c
index e3a1b2c..f4d5e6f 100644
--- a/src/backend/distributed/metadata/node_metadata.c
+++ b/src/backend/distributed/metadata/node_metadata.c
@@ -2965,8 +2965,18 @@
*/
- Relation replicaIndex =
- index_open(RelationGetPrimaryKeyIndex(pgDistNode),
- AccessShareLock);
+ #if PG_VERSION_NUM >= PG_VERSION_18
+ /* PG 18+ adds a bool "deferrable_ok" parameter */
+ Relation replicaIndex =
+ index_open(
+ RelationGetPrimaryKeyIndex(pgDistNode, false),
+ AccessShareLock);
+ #else
+ Relation replicaIndex =
+ index_open(
+ RelationGetPrimaryKeyIndex(pgDistNode),
+ AccessShareLock);
+ #endif
ScanKeyInit(&scanKey[0], Anum_pg_dist_node_nodename,
BTEqualStrategyNumber, F_TEXTEQ, CStringGetTextDatum(nodeName));
```
```diff
diff --git a/src/backend/distributed/operations/node_protocol.c b/src/backend/distributed/operations/node_protocol.c
index e3a1b2c..f4d5e6f 100644
--- a/src/backend/distributed/operations/node_protocol.c
+++ b/src/backend/distributed/operations/node_protocol.c
@@ -746,7 +746,12 @@
if (!OidIsValid(idxoid))
{
- idxoid = RelationGetPrimaryKeyIndex(rel);
+ /* Determine the index OID of the primary key (PG18 adds a second parameter) */
+#if PG_VERSION_NUM >= PG_VERSION_18
+ idxoid = RelationGetPrimaryKeyIndex(rel, false);
+#else
+ idxoid = RelationGetPrimaryKeyIndex(rel);
+#endif
}
return idxoid;
```
Because Citus has always taken the lock immediately, just as the old
one-argument call did, we pass `false` to keep that same immediate-lock
behavior. Passing `true` would switch to deferred locking, which we
don't want.
### Adapt `ExplainOnePlan()` to PG-18’s expanded API
PostgreSQL 18 (commit
525392d572)
extended the `ExplainOnePlan()` function to carry the `CachedPlan *` and
`CachedPlanSource *` pointers plus an explicit `query_index`, letting
the EXPLAIN machinery track plan-source invalidation. The old signature:
```c
/* PG ≤17 */
void
ExplainOnePlan(PlannedStmt *plannedstmt,
IntoClause *into,
struct ExplainState *es,
const char *queryString,
ParamListInfo params,
QueryEnvironment *queryEnv,
const instr_time *planduration,
const BufferUsage *bufusage);
```
became, in PG 18:
```c
/* PG ≥18 */
void
ExplainOnePlan(PlannedStmt *plannedstmt,
CachedPlan *cplan,
CachedPlanSource *plansource,
int query_index,
IntoClause *into,
struct ExplainState *es,
const char *queryString,
ParamListInfo params,
QueryEnvironment *queryEnv,
const instr_time *planduration,
const BufferUsage *bufusage,
const MemoryContextCounters *mem_counters);
```
To compile under both versions, Citus now wraps each call in
`multi_explain.c` with:
```c
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG 18+: pass NULL for the new cached‐plan fields and zero for query_index */
ExplainOnePlan(
plan, /* PlannedStmt *plannedstmt */
NULL, /* CachedPlan *cplan */
NULL, /* CachedPlanSource *plansource */
0, /* query_index */
into, /* IntoClause *into */
es, /* ExplainState *es */
queryString, /* const char *queryString */
params, /* ParamListInfo params */
NULL, /* QueryEnvironment *queryEnv */
&planduration,/* const instr_time *planduration */
(es->buffers ? &bufusage : NULL),
(es->memory ? &mem_counters : NULL)
);
#elif PG_VERSION_NUM >= PG_VERSION_17
/* PG 17: same as before, plus passing mem_counters if enabled */
ExplainOnePlan(
plan,
into,
es,
queryString,
params,
queryEnv,
&planduration,
(es->buffers ? &bufusage : NULL),
(es->memory ? &mem_counters : NULL)
);
#else
/* PG 15–16: original seven-arg form */
ExplainOnePlan(
plan,
into,
es,
queryString,
params,
queryEnv,
&planduration,
(es->buffers ? &bufusage : NULL)
);
#endif
```
### Adapt to the unified “index interpretation” API in PG 18 (commit
a8025f544854)
PostgreSQL commit
a8025f5448
generalized the old btree‐specific operator‐interpretation API into a
single “index interpretation” interface:
* **Renamed type**:
`OpBtreeInterpretation` → `OpIndexInterpretation`
* **Renamed function**:
`get_op_btree_interpretation(opno)` →
`get_op_index_interpretation(opno)`
* **Unified field**:
Each interpretation now carries `cmptype` instead of `strategy`.
To build cleanly on PG 18 while still supporting PG 15–17, Citus’s
shard‐pruning code now wraps these changes:
```c
#include "pg_version_constants.h"
#if PG_VERSION_NUM >= PG_VERSION_18
/* On PG 18+ the btree‐only APIs vanished; alias them to the new generic versions */
typedef OpIndexInterpretation OpBtreeInterpretation;
#define get_op_btree_interpretation(opno) get_op_index_interpretation(opno)
#define ROWCOMPARE_NE COMPARE_NE
#endif
/* … later, when checking an interpretation … */
OpBtreeInterpretation *interp =
(OpBtreeInterpretation *) lfirst(cell);
#if PG_VERSION_NUM >= PG_VERSION_18
/* use cmptype on PG 18+ */
if (interp->cmptype == ROWCOMPARE_NE)
#else
/* use strategy on PG 15–17 */
if (interp->strategy == ROWCOMPARE_NE)
#endif
{
/* … */
}
```
### Adapt `create_foreignscan_path()` for PG-18’s revised signature
PostgreSQL commit
e222534679
reordered and removed a couple of parameters in the FDW‐path builder:
* **PG 15–17 signature (11 args)**
```c
create_foreignscan_path(PlannerInfo *root,
RelOptInfo *rel,
PathTarget *target,
double rows,
Cost startup_cost,
Cost total_cost,
List *pathkeys,
Relids required_outer,
Path *fdw_outerpath,
List *fdw_restrictinfo,
List *fdw_private);
```
* **PG 18+ signature (10 args)**
```c
create_foreignscan_path(PlannerInfo *root,
RelOptInfo *rel,
PathTarget *target,
double rows,
int disabled_nodes,
Cost startup_cost,
Cost total_cost,
Relids required_outer,
Path *fdw_outerpath,
List *fdw_private);
```
To support both, Citus now defines a compatibility macro in
`pg_version_compat.h`:
```c
#include "nodes/bitmapset.h" /* for Relids */
#include "nodes/pg_list.h" /* for List */
#include "optimizer/pathnode.h" /* for create_foreignscan_path() */
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG18+: drop pathkeys & fdw_restrictinfo, add disabled_nodes */
#define create_foreignscan_path_compat(a, b, c, d, e, f, g, h, i, j, k) \
create_foreignscan_path( \
(a), /* root */ \
(b), /* rel */ \
(c), /* target */ \
(d), /* rows */ \
(0), /* disabled_nodes (unused by Citus) */ \
(e), /* startup_cost */ \
(f), /* total_cost */ \
(g), /* required_outer */ \
(h), /* fdw_outerpath */ \
(k) /* fdw_private */ \
)
#else
/* PG15–17: original signature */
#define create_foreignscan_path_compat(a, b, c, d, e, f, g, h, i, j, k) \
create_foreignscan_path( \
(a), (b), (c), (d), \
(e), (f), \
(g), (h), (i), (j), (k) \
)
#endif
```
Now every call to `create_foreignscan_path_compat(...)`—even in tests
like `fake_fdw.c`—automatically picks the correct argument list for
PG 15 through PG 18.
### Drop the obsolete bitmap‐scan hooks on PG 18+
PostgreSQL commit
c3953226a0
cleaned up the `TableAmRoutine` API by removing the two bitmap‐scan
callback slots:
* `scan_bitmap_next_block`
* `scan_bitmap_next_tuple`
Since those hook‐slots no longer exist in PG 18, Citus now wraps their
NULL‐initialization in a `#if PG_VERSION_NUM < PG_VERSION_18` guard. On
PG 15–17 we still explicitly set them to `NULL` (to satisfy the old
struct layout), and on PG 18+ we omit them entirely:
```c
#if PG_VERSION_NUM < PG_VERSION_18
/* PG 15–17 only: these fields were removed upstream in PG 18 */
.scan_bitmap_next_block = NULL,
.scan_bitmap_next_tuple = NULL,
#endif
```
### Adapt `vac_update_relstats()` invocation to PG-18’s new
“all_frozen” argument
PostgreSQL commit
99f8f3fbbc
extended the `vac_update_relstats()` API by inserting a
`num_all_frozen_pages` parameter between the existing
`num_all_visible_pages` and `hasindex` arguments:
```diff
- /* PG ≤17: */
- void
- vac_update_relstats(Relation relation,
- BlockNumber num_pages,
- double num_tuples,
- BlockNumber num_all_visible_pages,
- bool hasindex,
- TransactionId frozenxid,
- MultiXactId minmulti,
- bool *frozenxid_updated,
- bool *minmulti_updated,
- bool in_outer_xact);
+ /* PG ≥18: adds num_all_frozen_pages */
+ void
+ vac_update_relstats(Relation relation,
+ BlockNumber num_pages,
+ double num_tuples,
+ BlockNumber num_all_visible_pages,
+ BlockNumber num_all_frozen_pages,
+ bool hasindex,
+ TransactionId frozenxid,
+ MultiXactId minmulti,
+ bool *frozenxid_updated,
+ bool *minmulti_updated,
+ bool in_outer_xact);
```
To compile cleanly on both PG 15–17 and PG 18+, Citus wraps its call in
a version guard and supplies a zero placeholder for the new field:
```c
#if PG_VERSION_NUM >= 180000
/* PG 18+: supply explicit “all_frozen” count */
vac_update_relstats(
rel,
new_rel_pages,
new_live_tuples,
new_rel_allvisible, /* allvisible */
0, /* all_frozen */
nindexes > 0,
newRelFrozenXid,
newRelminMxid,
&frozenxid_updated,
&minmulti_updated,
false /* in_outer_xact */
);
#else
/* PG 15–17: original signature */
vac_update_relstats(
rel,
new_rel_pages,
new_live_tuples,
new_rel_allvisible,
nindexes > 0,
newRelFrozenXid,
newRelminMxid,
&frozenxid_updated,
&minmulti_updated,
false /* in_outer_xact */
);
#endif
```
**Why all_frozen = 0?**
Columnar storage never embeds transaction IDs in its pages, so it never
needs to track “all‐frozen” pages the way a heap does. Setting both
allvisible and allfrozen to zero simply tells Postgres “there are no
pages with the visibility or frozen‐status bits set,” matching our
existing behavior.
This change ensures Citus’s VACUUM‐statistic updates work unmodified
across all supported Postgres versions.
_Since we've never released a Citus release that contains the commit
that introduced this bug (see #7461), we don't need to have a
DESCRIPTION line that shows up in release changelog._
From 8 valgrind test targets run for release-13.1 with PG 17.5, we got
1344 stack traces, and all but one of them were about the unsafe memory
access below, because this is a very hot code path that we execute via our
drop trigger.
On main, even `make -C src/test/regress/ check-base-vg` dumps this stack
trace with PG 16/17 to src/test/regress/citus_valgrind_test_log.txt when
executing "multi_cluster_management", and this is not the case with this
PR anymore.
```c
==27337== VALGRINDERROR-BEGIN
==27337== Conditional jump or move depends on uninitialised value(s)
==27337== at 0x7E26B68: citus_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:113)
==27337== by 0x7E26CC7: master_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:153)
==27337== by 0x4BD852: ExecInterpExpr (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:758)
==27337== by 0x4BFD00: ExecInterpExprStillValid (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:1870)
==27337== by 0x51D82C: ExecEvalExprSwitchContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:355)
==27337== by 0x51D8A4: ExecProject (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:389)
==27337== by 0x51DADB: ExecResult (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeResult.c:136)
==27337== by 0x4D72ED: ExecProcNodeFirst (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:464)
==27337== by 0x4CA394: ExecProcNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:273)
==27337== by 0x4CD34C: ExecutePlan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:1670)
==27337== by 0x4CAA7C: standard_ExecutorRun (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:365)
==27337== by 0x7E1E475: CitusExecutorRun (home/onurctirtir/citus/src/backend/distributed/executor/multi_executor.c:238)
==27337== Uninitialised value was created by a heap allocation
==27337== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==27337== by 0x9AB1F7: AllocSetContextCreateInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/utils/mmgr/aset.c:438)
==27337== by 0x4E0D56: CreateExprContextInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:261)
==27337== by 0x4E0E3E: CreateExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:311)
==27337== by 0x4E10D9: ExecAssignExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:490)
==27337== by 0x51EE09: ExecInitSeqScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSeqscan.c:147)
==27337== by 0x4D6CE1: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:210)
==27337== by 0x5243C7: ExecInitSubqueryScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSubqueryscan.c:126)
==27337== by 0x4D6DD9: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:250)
==27337== by 0x4F05B2: ExecInitAppend (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeAppend.c:223)
==27337== by 0x4D6C46: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:182)
==27337== by 0x52003D: ExecInitSetOp (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSetOp.c:530)
==27337==
==27337== VALGRINDERROR-END
```
Issue #7709 asks for security labels on columns to be propagated, to
support the `anon` extension. Before, Citus supported security labels
on roles (#7735) and this PR adds support for propagating security
labels on tables and columns.
All scenarios that involve propagating metadata for a Citus table now
include the security labels on the table and on the columns of the
table. These scenarios are:
- When a table becomes distributed using `create_distributed_table()` or
`create_reference_table()`, its security labels (if any) are propagated.
- When a security label is defined on a distributed table, or one of its
columns, the label is propagated.
- When a node is added to a Citus cluster, all distributed tables have
their security labels propagated.
- When a column of a distributed table is dropped, any security labels
on the column are also dropped.
- When a column is added to a distributed table, security labels can be
defined on the column and are propagated.
- Security labels on a distributed table or its columns are not
propagated when `citus.enable_metadata_sync` is disabled.
Regress test `seclabel` is extended with tests to cover these scenarios.
The implementation is somewhat involved because it impacts DDL
propagation of Citus tables, but can be broken down as follows:
- distributed_object_ops has `Role_SecLabel`, `Table_SecLabel` and
`Column_SecLabel` to take care of security labels on roles, tables and
columns. `Any_SecLabel` is used for all other security labels and is
essentially a nop.
- Deparser support - `DeparseRoleSecLabelStmt()`,
`DeparseTableSecLabelStmt()` and `DeparseColumnSecLabelStmt()` take care
of deparsing security label statements on roles, tables and columns
respectively.
- When reconstructing the DDL for a citus table, security labels on the
table or its columns are included by having
`GetPreLoadTableCreationCommands()` call a new function
`CreateSecurityLabelCommands()` to take care of any security labels on
the table or its columns.
- When changing a distributed table name to a shard name before running
a command locally on a worker, function `RelayEventExtendNames()` checks
for security labels on a table or its columns.
The variable jobTypeName is created on the stack and a pointer to its
value is later used in heap_form_tuple, so we have a stack use out of
scope.
The issue was detected with AddressSanitizer.
Fixes #7943.
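A minimal sketch of the bug pattern (hypothetical names, not the exact
Citus code); the fix is to declare the variable in the enclosing scope, or
palloc it, so it is still alive when `heap_form_tuple()` reads it:
```c
Datum values[1];
bool isNulls[1] = { false };

{
	NameData jobTypeName;                    /* stack variable of this inner block */
	namestrcpy(&jobTypeName, "cleanup");
	values[0] = NameGetDatum(&jobTypeName);  /* BUG: Datum points at stack memory */
}

/* jobTypeName is out of scope here, so this reads dangling stack memory */
HeapTuple heapTuple = heap_form_tuple(tupleDescriptor, values, isNulls);
```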
DESCRIPTION: Adds citus_is_primary_node() UDF to determine if the
current node is a primary node in the cluster.
---------
Co-authored-by: German Eichberger <geeichbe@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Fixes deadlock with transaction recovery that is possible
during Citus upgrades.
Fixes #7875.
This commit addresses two interrelated deadlock issues uncovered during Citus
upgrades:
1. Local Deadlock:
- **Problem:**
In `RecoverWorkerTransactions()`, a new connection is created for each worker
node to perform transaction recovery by locking the
`pg_dist_transaction` catalog table until the end of the transaction. When
`RecoverTwoPhaseCommits()` calls this function for each worker node, the order
of acquiring locks on `pg_dist_authinfo` and `pg_dist_transaction` can alternate.
This reversal can lead to a deadlock if any concurrent process requires locks on
these tables.
- **Fix:**
Pre-establish all worker node connections upfront so that
`RecoverWorkerTransactions()` operates with a single, consistent connection.
This ensures that locks on `pg_dist_authinfo` and `pg_dist_transaction` are always
acquired in the correct order, thereby preventing the local deadlock.
2. Distributed Deadlock:
- **Problem:**
After resolving the local deadlock, a distributed deadlock issue emerges. The
maintenance daemon calls `RecoverWorkerTransactions()` on each worker node—
including the local node—which leads to a complex locking sequence:
- A RowExclusiveLock is taken on the `pg_dist_transaction` table in
`RecoverWorkerTransactions()`.
- An update extension then tries to acquire an AccessExclusiveLock on the same
table, getting blocked by the RowExclusiveLock.
- A subsequent query (e.g., a SELECT on `pg_prepared_xacts`) issued using a
separate connection on the local node gets blocked due to locks held during a
call to `BuildCitusTableCacheEntry()`.
- The maintenance daemon waits for this query, resulting in a circular wait and
stalling the entire cluster.
- **Fix:**
Avoid cache lookups for internal PostgreSQL tables by implementing an early bailout
for relation IDs below `FirstNormalObjectId` (system objects), as sketched after this
list. This eliminates unnecessary calls to `BuildCitusTableCacheEntry()`, reducing lock
contention and mitigating the distributed deadlock.
Furthermore, this optimization improves performance in fast
connect→query_catalog→disconnect cycles by eliminating redundant
cache creation and lookups.
3. Also reverts the commit that disabled the relevant test cases.
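A sketch of the early bailout mentioned in the fix above (hypothetical
function name; only the guard itself is shown, the real check sits in
Citus's metadata cache code):
```c
#include "access/transam.h"		/* FirstNormalObjectId */

static bool
MightBeCitusTable(Oid relationId)
{
	/*
	 * Built-in objects (system catalogs, etc.) can never be Citus tables,
	 * so skip the lock-taking cache lookup entirely for them.
	 */
	if (relationId < FirstNormalObjectId)
	{
		return false;
	}

	return true;	/* fall through to the regular cache lookup */
}
```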
DESCRIPTION: Drops PG14 support
1. Remove "$version_num" != 'xx' from the configure file
2. Delete all PG_VERSION_NUM = PG_VERSION_XX references in the code
3. Look at the pg_version_compat.h file, remove all _compat functions etc.
defined specifically for PGXX differences
4. Delete all PG_VERSION_NUM >= PG_VERSION_(XX+1), PG_VERSION_NUM <
PG_VERSION_(XX+1) ifs in the codebase
5. Delete the ruleutils_xx.c file
6. Clean up pg14-specific lines from the normalize.sed file
7. Delete all alternative output files for that particular PG version;
the server_version_ge variable helps here
This is prep work for successful compilation with PG17
PG17 added foreach_ptr, foreach_int and foreach_oid macros.
Relevant PG commit:
14dd0f27d7cd56ffae9ecdbe324965073d01a9ff
We already have these macros, but they are different from the
PG17 ones because our macros take a DECLARED variable, whereas
the PG17 macros declare a locally-scoped loop variable themselves.
Hence I am renaming our macros to foreach_declared_*.
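A sketch of the difference (illustrative list and variable names):
```c
List *workerNodeList = NIL;		/* assume populated elsewhere */

/* Citus macro after the rename: iterates over a variable the caller declared */
WorkerNode *workerNode = NULL;
foreach_declared_ptr(workerNode, workerNodeList)
{
	/* use workerNode */
}

/* PG 17+ macro with the old Citus name: declares its own loop-scoped variable */
foreach_ptr(WorkerNode, node, workerNodeList)
{
	/* use node */
}
```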
I am separating this into its own PR since it touches many files. The
main compilation PR is https://github.com/citusdata/citus/pull/7699
When multiple sessions concurrently attempt to add the same coordinator
node using `citus_set_coordinator_host`, there is a potential race
condition. Both sessions may pass the initial metadata check
(`isCoordinatorInMetadata`), but only one will succeed in adding the
node. The other session will fail with an assertion error
(`Assert(!nodeAlreadyExists)`), causing the server to crash. Even though
the `AddNodeMetadata` function takes an exclusive lock, it appears that
the lock is not preventing the race condition before the initial
metadata check.
- **Issue**: The current logic allows concurrent sessions to pass the
check for existing coordinators, leading to an attempt to insert
duplicate nodes, which triggers the assertion failure.
- **Impact**: This race condition leads to crashes during operations
that involve concurrent coordinator additions, as seen in
https://github.com/citusdata/citus/issues/7646.
**Test Plan:**
- Isolation Test Limitation: An isolation test was added to simulate
concurrent additions of the same coordinator node, but due to the
behavior of PostgreSQL locking mechanisms, the test does not trigger the
edge case. The lock applied within the function serializes the
operations, preventing the race condition from occurring in the
isolation test environment.
While the edge case is difficult to reproduce in an isolation test, the
fix addresses the core issue by ensuring concurrency control through
proper locking.
- Existing Tests: All existing tests related to node metadata and
coordinator management have been run to ensure that no regressions were
introduced.
**After the Fix:**
- Concurrent attempts to add the same coordinator node will be
serialized. One session will succeed in adding the node, while the
others will skip the operation without crashing the server.
Co-authored-by: Mehmet YILMAZ <mehmet.yilmaz@microsoft.com>
DESCRIPTION: Fix performance issue when creating distributed tables if
many already exist
This builds on the work to speed up EnsureSequenceTypeSupported, and now
does something similar for SequenceUsedInDistributedTable.
SequenceUsedInDistributedTable had a similar O(number of citus tables)
operation. This fixes that and speeds up creation of distributed tables
significantly when many distributed tables already exist.
Fixes #7022
DESCRIPTION: Fix performance issue when creating distributed tables and many already exist
EnsureSequenceTypeSupported was doing an O(number of distributed tables)
operation. This can become very slow with lots of Citus tables, which
now happens much more frequently in practice due to schema based sharding.
Partially addresses #7022
DESCRIPTION: Adds support for distributed `ALTER/DROP ROLE` commands
from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
This fixes #7454: master_disable_node() has only two arguments, but
calls citus_disable_node(), which tries to read three arguments.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Moves the following functions to the Citus internal schema:
citus_internal_local_blocked_processes
citus_internal_global_blocked_processes
citus_internal_mark_node_not_synced
citus_internal_unregister_tenant_schema_globally
citus_internal_update_none_dist_table_metadata
citus_internal_update_placement_metadata
citus_internal_update_relation_colocation
citus_internal_start_replication_origin_tracking
citus_internal_stop_replication_origin_tracking
citus_internal_is_replication_origin_tracking_active
#7405
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Moves the following functions:
citus_internal_delete_colocation_metadata
citus_internal_delete_partition_metadata
citus_internal_delete_placement_metadata
citus_internal_delete_shard_metadata
citus_internal_delete_tenant_schema
Move more functions to citus_internal schema, the list:
citus_internal_add_placement_metadata
citus_internal_add_shard_metadata
citus_internal_add_tenant_schema
citus_internal_adjust_local_clock_to_remote
citus_internal_database_command
#7405
Move citus_internal_acquire_citus_advisory_object_class_lock and
citus_internal_add_colocation_metadata functions from pg_catalog to
citus_internal.
#7405
Fixes a bug that breaks queries from non-main databases when
citus.local_hostname is set to a value different than "localhost".
This is a very old bug that doesn't cause a problem as long as the Citus
catalog is available to FindWorkerNode(). And the catalog is always
available unless we're in a non-main database, which might be the case on
main but not on older releases, hence not adding a `DESCRIPTION`. For this
reason, I don't see a reason to backport this.
Maybe we should totally refrain from using LOCAL_HOST_NAME in all
code-paths, but I'm not doing that in this PR as the other paths don't
seem to be breaking anything that is user-facing.
```c
char *
GetAuthinfo(char *hostname, int32 port, char *user)
{
char *authinfo = NULL;
bool isLoopback = (strncmp(LOCAL_HOST_NAME, hostname, MAX_NODE_LENGTH) == 0 &&
PostPortNumber == port);
if (IsTransactionState())
{
int64 nodeId = WILDCARD_NODE_ID;
/* -1 is a special value for loopback connections (task tracker) */
if (isLoopback)
{
nodeId = LOCALHOST_NODE_ID;
}
else
{
WorkerNode *worker = FindWorkerNode(hostname, port);
if (worker != NULL)
{
nodeId = worker->nodeId;
}
}
authinfo = GetAuthinfoViaCatalog(user, nodeId);
}
return (authinfo != NULL) ? authinfo : "";
}
```
This PR makes the connections to other nodes for
`mark_object_distributed` use the same user as
`execute_command_on_remote_nodes_as_user` so they'll use the same
connection.
Running a query from a Citus non-main database that inserts to
pg_dist_object requires a new connection to the main database itself.
This PR adds that connection to the main database.
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
LoadShardList is called twice, which is not necessary, and there is no
need to sort the shard placement list since we only want to know the list
length.
DESCRIPTION: Adds support for issuing `CREATE`/`DROP` DATABASE commands
from worker nodes
With this commit, we allow issuing CREATE / DROP DATABASE commands from
worker nodes too.
As in #7278, this is not allowed when the coordinator is not added to
metadata because we don't ever sync metadata changes to coordinator
when adding coordinator to the metadata via
`SELECT citus_set_coordinator_host('<hostname>')`, or equivalently, via
`SELECT citus_add_node(<coordinator_node_name>, <coordinator_node_port>, 0)`.
We serialize database management commands by acquiring a Citus-specific
advisory lock on the first primary worker node if there are any workers in the
cluster. As opposed to what we've done in https://github.com/citusdata/citus/pull/7278
for role management commands, we try to avoid running into distributed deadlocks
as much as possible. This is because, while distributed deadlocks that can happen around
role management commands can be detected by Citus, this is not the case for database
management commands, because most of them cannot be run inside a transaction block.
In that case, Citus cannot even detect the distributed deadlock because the command is not
part of a distributed transaction at all, and the command execution might not return
control back to the user for an indefinite amount of time.
DESCRIPTION: Adds support for 2PC from non-Citus main databases
This PR only adds support for `CREATE USER` queries, other queries need
to be added. But it should be simple because this PR creates the
underlying structure.
Citus main database is the database where the Citus extension is
created. A non-main database is all the other databases that are in the
same node with a Citus main database.
When a `CREATE USER` query is run on a non-main database we:
1. Run `start_management_transaction` on the main database. This
function saves the outer transaction's xid (the non-main database
query's transaction id) and marks the current query as main db command.
2. Run `execute_command_on_remote_nodes_as_user("CREATE USER
<username>", <username to run the command>)` on the main database. This
function creates the users in the rest of the cluster by running the
query on the other nodes. The user on the current node is created by the
query on the outer, non-main db, query to make sure consequent commands
in the same transaction can see this user.
3. Run `mark_object_distributed` on the main database. This function
adds the user to `pg_dist_object` in all of the nodes, including the
current one.
This PR also implements transaction recovery for the queries from
non-main databases.
This change adds a script to programmatically group all includes in a
specific order. The script was used as a one-time invocation to group
and sort all includes throughout our formatted code. The grouping is as
follows:
- System includes (eg. `#include<...>`)
- Postgres.h (eg. `#include "postgres.h"`)
- Toplevel imports from postgres, not contained in a directory (eg.
`#include "miscadmin.h"`)
- General postgres includes (eg . `#include "nodes/..."`)
- Toplevel citus includes, not contained in a directory (eg. `#include
"citus_version.h"`)
- Columnar includes (eg. `#include "columnar/..."`)
- Distributed includes (eg. `#include "distributed/..."`)
Because it is quite hard to understand the difference between toplevel
citus includes and toplevel postgres includes it hardcodes the list of
toplevel citus includes. In the same manner it assumes anything not
prefixed with `columnar/` or `distributed/` as a postgres include.
The sorting/grouping is enforced by CI. Since we do so with our own
script, there are no changes required in our uncrustify configuration.
DESCRIPTION: Adds support for propagating `CREATE`/`DROP` database
In this PR, create and drop database support is added.
For CREATE DATABASE:
* "oid" option is not supported
* specifying "strategy" to be different than "wal_log" is not supported
* specifying "template" to be different than "template1" is not
supported
The last two are because those are not saved in `pg_database` and when
activating a node, we cannot assume what parameters were provided when
creating the database.
And "oid" is not supported because whether user specified an arbitrary
oid when creating the database is not saved in pg_database and we want
to avoid from oid collisions that might arise from attempting to use an
auto-assigned oid on workers.
Finally, in case of node activation, GRANTs for the database are also
propagated.
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for issuing role management commands from worker nodes
It's unlikely to get into a distributed deadlock with role commands, so we
don't care much about them at the moment.
There were several attempts to reduce the chances of a deadlock, but we
haven't merged any of them into the main branch yet; see:
#7325, #7016, #7009
I just enhanced the existing code to check if the relation is an index
belonging to a distributed table.
If so, the shardId is appended to the relation (index) name and the *_size
functions are executed as before.
There is a change in an extern function:
`extern StringInfo GenerateSizeQueryOnMultiplePlacements(...)`
It's possible to create a new function and deprecate this one later if
compatibility is an issue.
Fixes https://github.com/citusdata/citus/issues/6496.
DESCRIPTION: Allows using Citus size functions on distributed table
indexes.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
HasDistributionKey & HasDistributionKeyCacheEntry returns true when the
corresponding table has a distribution key, the comments state the
opposite,
which should be fixed.
Signed-off-by: Zhao Junwang <zhjwpku@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
For a database that does not create the citus extension by running
`CREATE EXTENSION citus;`, the `CitusHasBeenLoaded` function ends up
querying the `pg_extension` table every time it is invoked. This is not
an ideal situation for such a database.
The idea in this PR is as follows:
### A new field in MetadataCache
Add a new variable `extensionCreatedState` of the following type:
```
typedef enum ExtensionCreatedState
{
UNKNOWN = 0,
CREATED = 1,
NOTCREATED = 2,
} ExtensionCreatedState;
```
When the MetadataCache is invalidated, `ExtensionCreatedState` will be
set to UNKNOWN.
### Invalidate MetadataCache when CREATE/DROP/ALTER EXTENSION citus
commands are run.
- Register a callback function, named
`InvalidateDistRelationCacheCallback`, for relcache invalidation during
the shared library initialization for `citus.so`. This callback function
is invoked in all the backends whenever the relcache is invalidated in
one of the backends. (This can be caused by many DDL operations.)
- In the cache invalidation callback,
`InvalidateDistRelationCacheCallback`, invalidate the `MetadataCache` by
zeroing it out.
- In `CitusHasBeenLoaded`, perform the costly "citus is loaded" check only
if the `MetadataCache` is not valid, as sketched below.
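A rough sketch of that check (assuming a module-level cached state and the
enum above; not the exact Citus code):
```c
#include "commands/extension.h"		/* get_extension_oid() */

static ExtensionCreatedState CachedExtensionCreatedState = UNKNOWN;

static bool
CitusExtensionCreated(void)
{
	/* the relcache invalidation callback resets this to UNKNOWN */
	if (CachedExtensionCreatedState == UNKNOWN)
	{
		bool missingOk = true;
		Oid citusExtensionOid = get_extension_oid("citus", missingOk);

		CachedExtensionCreatedState =
			OidIsValid(citusExtensionOid) ? CREATED : NOTCREATED;
	}

	return CachedExtensionCreatedState == CREATED;
}
```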
### Downsides
Any relcache invalidation (caused by various DDL operations) will cause
the Citus MetadataCache to get invalidated. Most of the time this will be
unnecessary, but we rely on DDL operations on relations not being too
frequent.
DESCRIPTION: Presents the citus_pause_node UDF, enabling pausing by
node_id.
citus_pause_node takes a node_id parameter, fetches all the shards on that
node, and takes an AccessExclusiveLock on each of those shards. With this
lock, inserts are disabled until the citus_pause_node transaction is
closed.
---------
Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
PG16 compatibility - part 11
Check out part 1 42d956888d
part 2 0d503dd5ac
part 3 907d72e60d
part 4 7c6b4ce103
part 5 6056cb2c29
part 6 b36c431abb
part 7 ee3153fe50
part 8 2c50b5f7ff
part 9 b2291374b4
part 10 a2315fdc67
This commit is in the series of PG16 compatibility commits. It fixes
AM dependencies and the grant's admin option:
- Fix WITH ADMIN OPTION in grants:
grantstmt->admin_opt no longer exists in PG16;
instead, grantstmt has a list of options, one of which is the admin option.
Relevant PG commit:
e3ce2de09d814f8770b2e3b3c152b7671bcdb83f
- Fix pg_depend entry to AMs after ALTER TABLE .. SET ACCESS METHOD
Relevant PG commit:
97d89101045fac8cb36f4ef6c08526ea0841a596
More PG16 compatibility commits are coming soon:
We are very close to merging "PG16Beta3 Support - Regression tests sanity"