fixes #8364
PostgreSQL 18 changes VACUUM/ANALYZE to recurse into inheritance
children by default, and introduces `ONLY` to limit processing to the
parent. Upstream change:
[https://github.com/postgres/postgres/commit/62ddf7ee9](https://github.com/postgres/postgres/commit/62ddf7ee9)
For Citus tables, we should treat shard placements as “children” and
avoid propagating `VACUUM/ANALYZE` to shards when the user explicitly
asks for `ONLY`.
This PR adjusts the Citus VACUUM handling to align with PG18 semantics,
and adds regression coverage on both regular distributed tables and
partitioned distributed tables.
---
### Behavior changes
* Introduce a per-relation helper struct:
```c
typedef struct CitusVacuumRelation
{
VacuumRelation *vacuumRelation;
Oid relationId;
} CitusVacuumRelation;
```
This lets us keep both:
* the resolved relation OID (for `IsCitusTable`, task building), and
* the original `VacuumRelation` node (for column list and ONLY/inh
flag).
* Replace the old `VacuumRelationIdList` / `ExtractVacuumTargetRels`
flow with:
```c
static List *VacuumRelationList(VacuumStmt *vacuumStmt,
CitusVacuumParams vacuumParams);
```
`VacuumRelationList` now:
* Iterates over `vacuumStmt->rels`.
* Resolves `relid` via `RangeVarGetRelidExtended` when `relation` is
present.
* Falls back to locking `VacuumRelation->oid` when only an OID is
available.
* Respects `VACOPT_FULL` for lock mode and `VACOPT_SKIP_LOCKED` for
locking behavior.
* Builds a `List *` of `CitusVacuumRelation` entries.
* Update:
```c
IsDistributedVacuumStmt(List *vacuumRelationList);
ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt,
List *vacuumRelationList,
CitusVacuumParams vacuumParams);
```
to operate on `CitusVacuumRelation` instead of bare OIDs.
* Implement `ONLY` semantics in `ExecuteVacuumOnDistributedTables`:
```c
RangeVar *relation = vacuumRelation->relation;
if (relation != NULL && !relation->inh)
{
/* ONLY specified, so don't recurse to shard placements */
continue;
}
```
Effect:
* `VACUUM / ANALYZE` (no `ONLY`) on a Citus table: behavior unchanged,
Citus creates tasks and propagates to shard placements.
* `VACUUM ONLY <citus_table>` / `ANALYZE ONLY <citus_table>`:
* Core still processes the coordinator relation as usual.
* Citus **skips** building tasks for shard placements, so we do not
recurse into distributed children.
* The code compiles and behaves as before on pre-PG18; the new behavior
becomes observable only when the core planner starts setting `inh =
false` for `ONLY` (PG18).
* Unqualified `VACUUM` / `ANALYZE` (no rels) is unchanged and still
handled via `ExecuteUnqualifiedVacuumTasks`.
* Remove now-redundant helpers:
* `VacuumColumnList`
* `ExtractVacuumTargetRels`
Column lists are now taken directly from `vacuumRelation->va_cols` via
`CitusVacuumRelation`.
---
### Testing
Extend `src/test/regress/sql/pg18.sql` and `expected/pg18.out` with two
PG18-only blocks that verify we do not recurse into shard placements
when `ONLY` is used:
1. **Simple distributed table (`pg18_vacuum_part`)**
* Create and distribute a regular table:
```sql
CREATE SCHEMA pg18_vacuum_part;
SET search_path TO pg18_vacuum_part;
CREATE TABLE vac_analyze_only (a int);
SELECT create_distributed_table('vac_analyze_only', 'a');
INSERT INTO vac_analyze_only VALUES (1), (2), (3);
```
* On the coordinator:
* Run `ANALYZE vac_analyze_only;` and later `ANALYZE ONLY
vac_analyze_only;`.
* Run `VACUUM vac_analyze_only;` and later `VACUUM ONLY
vac_analyze_only;`.
* On `worker_1`:
* Capture `coalesce(max(last_analyze), 'epoch')` from
`pg_stat_user_tables` for `vac_analyze_only_%` into
`:analyze_before_only` (see the sketch after this list), then assert:
```sql
SELECT max(last_analyze) = :'analyze_before_only'::timestamptz AS
analyze_only_skipped;
```
* Capture `coalesce(max(last_vacuum), 'epoch')` into
`:vacuum_before_only`, then assert:
```sql
SELECT max(last_vacuum) = :'vacuum_before_only'::timestamptz AS
vacuum_only_skipped;
```
Both checks return `t`, confirming `ONLY` does not change `last_analyze`
/ `last_vacuum` on shard tables.
2. **Partitioned distributed table (`pg18_vacuum_part_dist`)**
* Create a partitioned table whose parent is distributed:
```sql
CREATE SCHEMA pg18_vacuum_part_dist;
SET search_path TO pg18_vacuum_part_dist;
SET citus.shard_count = 2;
SET citus.shard_replication_factor = 1;
CREATE TABLE part_dist (id int, v int) PARTITION BY RANGE (id);
CREATE TABLE part_dist_1 PARTITION OF part_dist FOR VALUES FROM (1) TO
(100);
CREATE TABLE part_dist_2 PARTITION OF part_dist FOR VALUES FROM (100) TO
(200);
SELECT create_distributed_table('part_dist', 'id');
INSERT INTO part_dist SELECT g, g FROM generate_series(1, 199) g;
```
* On the coordinator:
* Run `ANALYZE part_dist;` then `ANALYZE ONLY part_dist;`.
* Run `VACUUM part_dist;` then `VACUUM ONLY part_dist;` (PG18 emits the
expected warning: `VACUUM ONLY of partitioned table "part_dist" has no
effect`).
* On `worker_1`:
* Capture `coalesce(max(last_analyze), 'epoch')` for `part_dist_%` into
`:analyze_before_only`, then assert:
```sql
SELECT max(last_analyze) = :'analyze_before_only'::timestamptz
AS analyze_only_partitioned_skipped;
```
* Capture `coalesce(max(last_vacuum), 'epoch')` into
`:vacuum_before_only`, then assert:
```sql
SELECT max(last_vacuum) = :'vacuum_before_only'::timestamptz
AS vacuum_only_partitioned_skipped;
```
Both checks return `t`, confirming that even for a partitioned
distributed parent, `VACUUM/ANALYZE ONLY` does not recurse into shard
placements, and Citus behavior matches PG18’s “ONLY = parent only”
semantics.
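A minimal sketch of the capture/assert pattern used in both blocks, assuming psql's `\gset` (the exact test text may differ):
```sql
-- capture on worker_1, before the ONLY variants run on the coordinator
SELECT coalesce(max(last_analyze), 'epoch')::timestamptz AS analyze_before_only
FROM pg_stat_user_tables
WHERE relname LIKE 'vac_analyze_only_%' \gset
-- assert on worker_1, after ANALYZE ONLY ran on the coordinator
SELECT max(last_analyze) = :'analyze_before_only'::timestamptz AS analyze_only_skipped
FROM pg_stat_user_tables
WHERE relname LIKE 'vac_analyze_only_%';
```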
Generated columns can be virtual (not stored), and virtual is now the default. Supporting this PG18 feature in Citus requires tweaking citus_ruleutils and table deparsing. Relevant PG commit: 83ea6c540.
DESCRIPTION: Adds propagation of ENFORCED / NOT ENFORCED on CHECK
constraints.
Add propagation support to Citus ruleutils and appropriate regress
tests. Relevant PG commit: ca87c41.
Test that FOREIGN KEY .. NOT ENFORCED is propagated when applied to Citus tables. No C code changes are required; ruleutils handles it. Relevant PG commit: eec0040c4. Given that Citus does not yet support ALTER TABLE .. ALTER CONSTRAINT, its usefulness is questionable, but we at least propagate its definition.
PG18 introduces `Disabled: <bool>` and planning/memory details that
cause unstable diffs, especially with `FORMAT JSON` when keys are
removed. This PR:
* Switches EXPLAIN assertions to `FORMAT YAML` wherever JSON structure
is not required.
* Stops normalizing/removing `"Disabled"` in JSON to avoid creating
invalid trailing commas.
* Adds alternative expected files for tests that must remain in JSON and
include `Disabled` lines.
### Rationale
* YAML is line-oriented and avoids JSON’s trailing-comma pitfalls; it
yields more stable diffs across PG15–PG18.
* Some tests intentionally assert JSON structure; those stay JSON but
use dedicated `.out` alternates rather than regex-based key removal.
* A proper “parse → modify → re-emit JSON” helper is being explored but
is non-trivial; this PR adopts the lowest-risk path now.
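For example (statement illustrative), a converted assertion changes only the format:
```sql
-- before: EXPLAIN (COSTS OFF, FORMAT JSON) SELECT count(*) FROM dist_table;
EXPLAIN (COSTS OFF, FORMAT YAML)
SELECT count(*) FROM dist_table;
```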
### What changed
1. **Test format**
* Convert `EXPLAIN (..., FORMAT JSON)` to `FORMAT YAML` in tests that
only care about plan shape/fields.
* Keep JSON only where the test explicitly validates JSON; introduce alt
expected files for those.
2. **Normalization**
* Remove JSON-side filtering of `"Disabled"` (no deletion, no regex
rewrite).
* Retain existing normalization for text/YAML (numbers, `Index
Searches`, `Window: ...`, buffers, etc.).
3. **Expected files**
* Update YAML-based expected outputs.
* Add `.out` alternates for JSON cases that include `Disabled`.
https://github.com/postgres/postgres/commit/7054186c4
fixes #8358
This PR wires up PostgreSQL 18’s `publish_generated_columns` publication
option in Citus and adds regression coverage to ensure it behaves
correctly for distributed tables, without changing existing DDL output
for publications that rely on the default.
---
### 1. Preserve `publish_generated_columns` when rebuilding publications
In `BuildCreatePublicationStmt`:
* On PG18+ we now read the new `pubgencols` field from `pg_publication`
and map it as follows:
* `'n'` → default (`none`)
* `'s'` → `stored`
* For `pubgencols == 's'` we append a `publish_generated_columns`
defelem to the reconstructed statement:
```c
#if PG_VERSION_NUM >= PG_VERSION_18
if (publicationForm->pubgencols == 's') /* stored */
{
DefElem *pubGenColsOption =
makeDefElem("publish_generated_columns",
(Node *) makeString("stored"),
-1);
createPubStmt->options =
lappend(createPubStmt->options, pubGenColsOption);
}
else if (publicationForm->pubgencols != 'n') /* 'n' = none (default) */
{
ereport(ERROR,
(errmsg("unexpected pubgencols value '%c' for publication %u",
publicationForm->pubgencols, publicationId)));
}
#endif
```
* For `pubgencols == 'n'` we do **not** emit an option and rely on
PostgreSQL’s default.
* Any value other than `'n'` or `'s'` raises an error rather than
silently producing incorrect DDL.
This ensures:
* Publications that explicitly use `publish_generated_columns = stored`
are reconstructed with that option on workers, so workers get
`pubgencols = 's'`.
* Publications that use the default (`none`) continue to produce the
same `CREATE PUBLICATION ... WITH (...)` text as before (no extra
`publish_generated_columns = 'none'` noise), fixing the unintended diffs
in existing publication tests.
---
### 2. New PG18 regression coverage for distributed publications
In `src/test/regress/sql/pg18.sql`:
* Create a table with a stored generated column and make it distributed
so the publication goes through Citus DDL propagation:
```sql
CREATE TABLE gen_pub_tab (
id int primary key,
a int,
b int GENERATED ALWAYS AS (a * 10) STORED
);
SELECT create_distributed_table('gen_pub_tab', 'id', colocate_with :=
'none');
```
* Create two publications that exercise both `pubgencols` values:
```sql
CREATE PUBLICATION pub_gen_cols_stored
FOR TABLE gen_pub_tab
WITH (publish = 'insert, update', publish_generated_columns = stored);
CREATE PUBLICATION pub_gen_cols_none
FOR TABLE gen_pub_tab
WITH (publish = 'insert, update', publish_generated_columns = none);
```
* On coordinator and both workers, assert the catalog contents:
```sql
SELECT pubname, pubgencols
FROM pg_publication
WHERE pubname IN ('pub_gen_cols_stored', 'pub_gen_cols_none')
ORDER BY pubname;
```
Expected on all three nodes:
* `pub_gen_cols_stored | s`
* `pub_gen_cols_none | n`
This test verifies that:
* `pubgencols` is correctly set on the coordinator for both `stored` and
`none`.
* Citus propagates the setting unchanged to all workers for a
distributed table.
fixes #8156
8b1b342544
* Extend `normalize.sed` to:
* Rewrite auto-generated window names like `OVER w1` back to `OVER (?)`
on:
* `Sort Key: …`
* `Group Key: …`
* `Output: …`
* Leave functional window specs like `OVER (PARTITION BY …)` untouched.
* Use `public.explain_filter(...)` around EXPLAINs in window-related
tests to:
* Avoid plan text churn from PG18 planner/EXPLAIN changes while still
checking that we use the Citus executor and expected node types.
* Update expected outputs in:
* `mixed_relkind_tests.out`
* `multi_explain*.out`
* `multi_outer_join_columns*.out`
* `multi_subquery_window_functions.out`
* `multi_test_helpers.out`
* `window_functions.out`
to match the filtered EXPLAIN output on PG18 while remaining compatible
with older PG versions.
PG18: syntax & semantics behavior in Citus, part 1.
Includes PG18 tests for:
- OLD/NEW support in RETURNING clause of DML queries (PG commit
80feb727c)
- WITHOUT OVERLAPS in PRIMARY KEY and UNIQUE constraints (PG commit
fc0438b4e)
- COLUMNS clause in JSON_TABLE (PG commit bb766cd)
- Foreign tables created with LIKE <table> clause (PG commit 302cf1575)
- Foreign Key constraint with PERIOD clause (PG commit 89f908a6d)
- COPY command REJECT_LIMIT option (PG commit 4ac2a9bec)
- COPY TABLE TO on a materialized view (PG commit 534874fac)
Partially addresses issue #8250
fixes #8279
PostgreSQL 18 planner changes (probably AIO and updated cost model) make
sequential scans cheaper, so the psql `\d table`-style query that uses a
regex on `pg_class.relname` no longer chooses an index scan. This causes
a plan difference in the `mx_hide_shard_names` regression test.
This patch adds an alternative out for the multi_mx_hide_shard_names
test.
This PR introduces infrastructure and validation to detect breaking
changes during Citus minor version upgrades, designed to run in release
branches only.
**Breaking change detection:**
- [GUCs] Detects removed GUCs and changes to default values
- [UDFs] Detects removed functions and function signature changes (see
the sketch after this list)
  - Supports backward-compatible function overloading (new optional
parameters allowed)
- [types] Detects removed data types
- [tables/views] Detects removed tables/views and removed/changed
columns
- New make targets for minor version upgrade tests
- Follow-up PRs will add test schedules with different upgrade scenarios
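One way the UDF check can gather its input is a catalog snapshot taken on both versions and diffed; a minimal sketch (hypothetical query, not the actual harness):
```sql
-- capture function signatures plus return types for later comparison
SELECT p.oid::regprocedure::text AS signature,
       pg_get_function_result(p.oid) AS result_type
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'pg_catalog'
  AND p.proname LIKE 'citus%'
ORDER BY 1;
```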
The test will be enabled in release branches (e.g., release-13) via the
new test-citus-minor-upgrade job shown below. It will not run on the
main branch.
Testing
Verified locally with sample breaking changes:
`make check-citus-minor-upgrade-local citus-old-version=v13.2.0`
**Test case 1:** Backward-compatible signature change (allowed)
```
-- Old: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer)
-- New: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer, pBlockedByPid integer DEFAULT NULL)
```
No breaking change detected (new parameter has DEFAULT)
**Test case 2:** Incompatible signature change (breaking)
```
-- Old: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer)
-- New: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer, pBlockedByPid integer)
```
Breaking change detected:
`UDF signature removed: pg_catalog.citus_blocking_pids(pblockedpid
integer) RETURNS integer[]`
**Test case 3:** GUC changes (breaking)
- Removed `citus.max_worker_nodes_tracked`
- Changed default value of `citus.max_shared_pool_size` from 0 to 4
Breaking change detected:
```
The default value of GUC citus.max_shared_pool_size was changed from 0 to 4
GUC citus.max_worker_nodes_tracked was removed
```
**Test case 4:** Table/view changes
- Dropped `pg_catalog.pg_dist_rebalance_strategy` and removed a column
from `pg_catalog.citus_lock_waits`
```
- Column blocking_nodeid in table/view pg_catalog.citus_lock_waits was removed
- Table/view pg_catalog.pg_dist_rebalance_strategy was removed
```
**Test case 5:** Remove a custom type
- Dropped `cluster_clock` and the objects that depend on it. In addition
to the dependent objects, the test shows:
```
- Type pg_catalog.cluster_clock was removed
```
Sample new job for build and test workflow (for release branches):
```
test-citus-minor-upgrade:
name: PG17 - check-citus-minor-upgrade
runs-on: ubuntu-latest
container:
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
env:
citus_version: 13.2
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
with:
skip_installation: true
- name: Install and test citus minor version upgrade
run: |-
gosu circleci \
make -C src/test/regress \
check-citus-minor-upgrade \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar;
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ env.PG_MAJOR }}_citus_minor_upgrade
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_citus_minor_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
```
fixes #8264
PostgreSQL 18 introduced a planner improvement (commit `ae4569161`) that
rewrites simple `OR` equality clauses into `= ANY(...)` forms, allowing
the use of a single index scan instead of multiple scans or a custom
scan.
This change affects the columnar path tests where queries like `a=0 OR
a=5` previously chose a Columnar or Seq Scan plan.
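For illustration (assuming an index on `a`), the PG18 rewrite looks roughly like:
```sql
EXPLAIN (COSTS OFF)
SELECT * FROM columnar_table WHERE a = 0 OR a = 5;
-- PG18 may now produce a single index scan with:
--   Index Cond: (a = ANY ('{0,5}'::integer[]))
```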
In this PR:
* Updated test expectations for `uses_custom_scan` and `uses_seq_scan`
to reflect the new index scan plan.
This keeps the test output consistent with PostgreSQL 18’s updated
planner behavior.
fixes #8317
0d8bd0a72e
PG18 changed the wording of connection failures during `CREATE
SUBSCRIPTION` to include a subscription prefix and a verbose “connection
to server … failed:” preamble. This breaks one regression output
(`multi_move_mx`). This PR adds normalization rules to map PG18 output
back to the prior form so results are stable across PG15–PG18.
**What changes**
Add two rules in `src/test/regress/bin/normalize.sed`:
```sed
# PG18: drop 'subscription "<name>"' prefix
# remove when PG18 is the minimum supported version
s/^[[:space:]]*ERROR:[[:space:]]+subscription "[^"]+" could not connect to the publisher:[[:space:]]*/ERROR: could not connect to the publisher: /I
# PG18: drop verbose 'connection to server … failed:' preamble
s/^[[:space:]]*ERROR:[[:space:]]+could not connect to the publisher:[[:space:]]*connection to server .* failed:[[:space:]]*/ERROR: could not connect to the publisher: /I
```
**Before (PG18)**
```
ERROR: subscription "subs_01" could not connect to the publisher:
connection to server at "localhost" (::1), port 57637 failed:
root certificate file "/non/existing/certificate.crt" does not exist
```
**After normalization**
```
ERROR: could not connect to the publisher:
root certificate file "/non/existing/certificate.crt" does not exist
```
**Why**
Maintain identical regression outputs across supported PG versions while
Citus still supports PG<18.
PostgreSQL 18 adds a new line to text EXPLAIN with ANALYZE (`Index
Searches: N`). That extra line both creates noise and bumps psql’s `(N
rows)` footer. This PR keeps ANALYZE (so statements still execute) while
removing the version-specific churn in our regress outputs.
### What changed
* **Use `explain_filter(...)` instead of raw text EXPLAIN**
* In `local_shard_execution.sql` and
`local_shard_execution_replicated.sql`, replace direct:
```sql
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF)
<stmt>;
```
with:
```sql
\pset footer off
SELECT public.explain_filter('EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF,
TIMING OFF, BUFFERS OFF) <stmt>');
\pset footer on
```
* Expected files updated accordingly to show the `explain_filter` output
block instead of raw EXPLAIN text.
* **Extend `explain_filter` to drop the PG18 line**
* Filter now removes any `Index Searches: <number>` line before
normalizing numeric fields, preventing the “N” version of the same line
from sneaking in.
* **Keep suite-wide normalizer intact**
Fixes https://github.com/citusdata/citus/issues/8235
PG18 and PG latest minors ignore temporary relations in
`RelidByRelfilenumber` (`RelidByRelfilenode` in PG15)
Relevant PG commit:
https://github.com/postgres/postgres/commit/86831952
Here we keep temp reloids instead of resolving them with
RelidByRelfilenumber: in some cases we can get the reloid directly from
the relation, and in other cases we store it in the relevant structures.
Note: there is still an outstanding issue with columnar temp tables in
concurrent sessions, that will be fixed in PR
https://github.com/citusdata/citus/pull/8252
The `merge` regress test uses SQL functions which can be cached in PG18+
since commit
[0dca5d68d](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0dca5d68d7bebf2c1036fd84875533afef6df992).
Distributed plan's copy function did not include the
`sourceResultRepartitionColumnIndex` field, which is critical for MERGE
queries, and for cached distributed plans this field was always 0
leading to the problem (#8285). Ensuring it is copied fixes it. This was
an oversight in Citus, and not specific to PG18.
fixes #8318
PostgreSQL 18 started printing an extra line in `EXPLAIN (ANALYZE …)`
for index scans:
```
Index Searches: N
```
This makes our test output flap (extra line, footer row count changes)
while the intent of the test is simply to **prove we plan an index
scan**, not to assert runtime counters.
### What this PR does
Switches the test's EXPLAIN from:
```sql
EXPLAIN (... analyze on, ...)
```
to:
```sql
EXPLAIN (... analyze off, ...)
```
### Why this approach
* **Minimal change**: keeps the test’s purpose (“we exercise an index
scan”) without introducing normalization rules
* **Version-stable**: avoids PG18’s new text while remaining valid on
PG15–PG18.
* **No behavioral impact**: we still validate the plan node is an Index
Scan; we just don’t request execution.
### Before → After (essence)
Before (PG18):
```
Aggregate
-> Index Scan ... (actual rows=1 loops=1)
Index Cond: ...
Index Searches: 1
```
After:
```
Aggregate
-> Index Scan ...
Index Cond: ...
```
fixes #8278
Please check issue:
https://github.com/citusdata/citus/issues/8278#issuecomment-3431707484
f4e7756ef9
### What PG18 changed
The SELECT that produces the diff has a **named join**:
```sql
(...) AS unsupported_join (x,y,z,t,e,f,q)
```
On PG17, `COUNT(unsupported_join.*)` stayed as a single whole-row Var
that referenced the **join alias**.
On PG18, the parser expands that whole-row Var **early** into a
`ROW(...)` of **base** columns:
```
ROW(a.user_id, a.item_id, a.buy_count,
b.id, b.it_name, b.k_no,
c.id, c.it_name, c.k_no)
```
But since the join is *named*, inner aliases `a/b/c` are hidden.
Referencing them later blows up with
“invalid reference to FROM-clause entry for table ‘a’”.
### What this PR changes
1. **Retarget at `RowExpr` deparse (not in `get_variable`)**
* In `get_rule_expr()`’s `T_RowExpr` branch, each element `e` of
`ROW(...)` is examined.
* If `e` unwraps to a simple, same-level `Var` (`varlevelsup == 0`,
`varattno > 0`) and there is a **named `RTE_JOIN`** with
`joinaliasvars`, we **do not** change `varno/varattno`.
* Instead, we build a copy of the Var and set **`varnosyn/varattnosyn`**
to the matching join alias column (from `joinaliasvars`).
* Then we deparse that Var via `get_rule_expr_toplevel(...)`, which
naturally prints `join_alias.colname`.
* Scope is limited to **query deparsing** (`dpns->plan == NULL`),
exactly where PG18 expands whole-row vars into `ROW(...)` of base Vars.
2. **Helpers (PG18-only file)**
* `unwrap_simple_var(Node*)`: strips trivial wrappers (`RelabelType`,
`CoerceToDomain`, `CollateExpr`) to reveal a `Var`.
* `var_matches_base(const Var*, int varno, AttrNumber attno)`: matches
canonical or synonym identity.
* `dpns_has_named_join(const deparse_namespace*)`: fast precheck for any
named join with `joinaliasvars`.
* `map_var_through_join_alias(...)`: scans `joinaliasvars` to locate the
**JOIN RTE index + attno** for a 1:1 alias; the caller uses these to set
`varnosyn/varattnosyn`.
3. **Safety and non-goals**
* **No effect on plan deparsing** (`dpns->plan != NULL`).
* **No change to semantic identity**: we leave `varno/varattno`
untouched; only set `varnosyn/varattnosyn`.
* Skip whole-row/system columns (`attno <= 0`) and non-simple join
columns (computed expressions).
* Works with named joins **with or without** an explicit column list (we
rely on `joinaliasvars`, not the alias collist).
### Reproducer
```sql
CREATE TABLE distributed_table(user_id int, item_id int, buy_count int);
CREATE TABLE reference_table(id int, it_name varchar(25), k_no int);
SELECT create_distributed_table('distributed_table', 'user_id');
SELECT COUNT(unsupported_join.*)
FROM (distributed_table a
LEFT JOIN reference_table b ON true
RIGHT JOIN reference_table c ON true)
AS unsupported_join (x,y,z,t,e,f,q)
JOIN (reference_table d JOIN reference_table e ON true) ON true;
```
**Before (PG18):** deparser emitted `ROW(a.user_id, …)` → `ERROR:
invalid reference to FROM-clause entry for table "a"`
**After:** deparser emits
`ROW(unsupported_join.x, ..., unsupported_join.k_no)` → runs
successfully.
With PG18's GROUP RTE, queries that should have been eligible for fast
path planning were skipped, because the fast path planner allows exactly
one range table. This fix extends that check to account for a GROUP RTE.
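A hypothetical example of an affected query: router-eligible via the filter on the distribution column, but carrying a second (GROUP) range table entry on PG18 because of the GROUP BY:
```sql
SELECT a, count(*)
FROM dist_table
WHERE a = 5
GROUP BY a;
```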
Fixes #8275 by printing the names in order, so that in every message
`DETAIL: x and y are not co-located`, x precedes (i.e., is
lexicographically less than) y.
fixes #8267
* Extend `src/test/regress/bin/normalize.sed` to drop the new PG18
EXPLAIN instrumentation line:
```
Storage: <Memory|Disk|Memory and Disk> Maximum Storage: <size>
```
which appears under `Materialize`, some `CTE Scan`s, etc. when `ANALYZE`
is on.
**Why**
* PG18 added storage usage reporting for materialization/tuplestore
nodes. It’s useful for humans but creates noisy, non-semantic diffs in
regression output. There’s no EXPLAIN flag to suppress it, so we
normalize in tests instead. This PR wires that normalization into our
sed pipeline.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1eff8279d
**How**
* Add a narrowly scoped sed rule that matches only lines starting with
`Storage:` (keeping `Sort Method`, `Hash`, `Buffers`, etc. intact). Use
ERE compatible with `sed -Ef` and Python `re` (no POSIX character
classes), e.g.:
```
/^[ \t]*Storage:[ \t].*$/d
```
fixes #8263
* Introduces `columnar_test_helpers._plan_json(q text) -> jsonb`, which
runs `EXPLAIN (FORMAT JSON, COSTS OFF, ANALYZE OFF)` and returns the
plan as `jsonb`. This lets us assert on plan structure instead of text
matching.
* Updates `columnar_indexes.sql` tests to detect whether any index-based
scan is used by searching the JSON plan’s `"Node Type"` (e.g., `Index
Scan`, `Index Only Scan`, `Bitmap Index Scan`).
## Notable changes
* New helper in `src/test/regress/sql/columnar_test_helpers.sql`:
* `EXECUTE format('EXPLAIN (FORMAT JSON, COSTS OFF, ANALYZE OFF) %s', q)
INTO j;`
* Returns the `jsonb` plan for downstream assertions.
```sql
SELECT NOT jsonb_path_exists(
columnar_test_helpers._plan_json('SELECT b FROM columnar_table WHERE b =
30000'),
'$[*].Plan.** ? (@."Node Type" like_regex "^(Index|Bitmap
Index).*Scan$")'
) AS uses_no_index_scan;
```
to verify no index scan occurs at the partial index boundary (`b =
30000`) and that an index scan is used where expected (`b = 30001`).
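Presumably the complementary assertion, mirroring the helper call above for a row covered by the partial index, looks like:
```sql
SELECT jsonb_path_exists(
  columnar_test_helpers._plan_json('SELECT b FROM columnar_table WHERE b = 30001'),
  '$[*].Plan.** ? (@."Node Type" like_regex "^(Index|Bitmap Index).*Scan$")'
) AS uses_index_scan;
```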
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
fixes #8277
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=45188c2ea
PostgreSQL 18 + newer OpenSSL builds surface `ssl_ciphers` as a **rule
string** (e.g., `HIGH:MEDIUM:+3DES:!aNULL`) instead of an expanded
cipher list. Our tests hard-pinned the literal list and started failing
on PG18. Also, with TLS 1.3 in the picture, we need to assert that
cipher configuration is sane without coupling to OpenSSL’s expansion.
**What changed**
* **sql/ssl_by_default.sql**
* Replace brittle `SHOW ssl_ciphers` string matching with invariant
checks (sketched after this list):
* non-empty ciphers: `current_setting('ssl_ciphers') <> ''`
* looks like a rule/list: `position(':' in
current_setting('ssl_ciphers')) > 0`
* Run the same checks on **workers** via `run_command_on_workers`.
* Keep existing validations for `ssl=on`, `sslmode=require` in
`citus.node_conninfo`, and `pg_stat_ssl.ssl = true`.
* **expected/ssl_by_default.out**
* Update expected output to booleans for the new checks (less diff-prone
across PG/SSL variants).
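The invariant checks, roughly as they might appear in the test (aliases illustrative):
```sql
SELECT current_setting('ssl_ciphers') <> '' AS ssl_ciphers_nonempty,
       position(':' in current_setting('ssl_ciphers')) > 0 AS ssl_ciphers_rule_like;
-- and the same probe on the workers
SELECT run_command_on_workers($$SELECT position(':' in current_setting('ssl_ciphers')) > 0$$);
```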
Fixes #8268
PG18 added core async I/O infrastructure. Relevant PG commit:
https://github.com/postgres/postgres/commit/da72269
In the `pg16.sql` test, we tested the `REINDEX SYSTEM` command, which
would later cause IO debug lines in a command of the
`multi_insert_select.sql` test, which runs _after_ `pg16.sql`.
"This is expected because the I/O subsystem is repopulating its internal
callback pool after heavy catalog I/O churn caused by REINDEX SYSTEM." -
chatgpt (seems right 😄)
This PR removes the `REINDEX SYSTEM` test as it's not crucial anyway.
`REINDEX DATABASE` is sufficient for the purpose presented in the
pg16.sql test.
PR [8142](https://github.com/citusdata/citus/pull/8142) added
`PushActiveSnapshot`.
This commit moves it outside a for loop, since taking a snapshot is a
resource-heavy operation.
The GUC configuration for SkipAdvisoryLockPermissionChecks was
misconfigured, passing GUC_SUPERUSER_ONLY where the PGC_SUSET context
belongs - when PostgreSQL is running with ASAN, querying pg_settings
fails because the value exceeds the bounds of the GucContext_Names
array. Fix up this GUC declaration so it does not crash with ASAN.
The failing queries all have a GROUP BY, and the fix teaches the Citus recursive planner how to handle a PG18 GROUP range table in the outer query:
- In recursive query planning, don't recurse into subquery expressions in a GROUP BY clause
- Flatten references to a GROUP rte before creating the worker subquery in pushdown planning
- If a PARAM node points to a GROUP rte then tunnel through to the underlying expression
Fixes#8296.
fixes #8246
PostgreSQL 18 introduced stricter NUMA page-inquiry permissions for the
`pg_shmem_allocations_numa` view.
Without the required kernel capabilities, the test fails with:
```
ERROR: failed NUMA pages inquiry status: Operation not permitted
```
This PR updates our test containers to include the necessary privileges:
* Adds `--cap-add=SYS_NICE` and `--security-opt seccomp=unconfined`
When PostgreSQL’s new NUMA views (`pg_shmem_allocations_numa`,
`pg_buffercache_numa`) run, they call `move_pages()` to ask the kernel
which NUMA node holds each shared memory page.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8cc139bec
That syscall (`move_pages()`) requires `CAP_SYS_NICE` when inspecting
another process.
So: `--cap-add=SYS_NICE` grants the container permission to perform that
NUMA page query.
https://man7.org/linux/man-pages/man2/move_pages.2.html
`--security-opt seccomp=unconfined`
Docker containers still run under a seccomp filter, a kernel-level
sandbox that blocks many system calls entirely for safety.
The default Docker seccomp profile blocks `move_pages()` outright,
because it can expose kernel memory layout information.
https://docs.docker.com/engine/security/seccomp/
**In combination**
Both flags are required for NUMA introspection inside a container:
- `SYS_NICE` → permission
- `seccomp=unconfined` → ability
fixes #8265
0fbceae841
PostgreSQL 18 started printing an extra line in `EXPLAIN (ANALYZE …)`
for index scans:
```
Index Searches: N
```
**normalize.sed**: add a rule to remove the PG18-only line
```
/^\s*Index Searches:\s*[0-9]+\s*$/d
```
0dca5d68d7
fixes #8153
```diff
/citus/src/test/regress/expected/multi_subquery_misc.out
-- should error out
SELECT sql_subquery_test(1,1);
-ERROR: could not create distributed plan
-DETAIL: Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
-HINT: Consider using PL/pgSQL functions instead.
-CONTEXT: SQL function "sql_subquery_test" statement 1
+ sql_subquery_test
+-------------------
+ 307
+(1 row)
+
```
PostgreSQL 18 changes planner behavior for inlining/parameter handling
in SQL functions (pg18 commit `0dca5d68d`). As a result, a query in
`multi_subquery_misc` that previously failed to create a distributed
plan now succeeds. This PR updates the regression **expected** file to
reflect the new outcome on PG18+.
### What changed
* Updated `src/test/regress/expected/multi_subquery_misc.out`:
* Replaced the previous error block:
```
ERROR: could not create distributed plan
DETAIL: Possibly this is caused by the use of parameters in SQL
functions, which is not supported in Citus.
HINT: Consider using PL/pgSQL functions instead.
CONTEXT: SQL function "sql_subquery_test" statement 1
```
* with the actual successful result on PG18+:
```
sql_subquery_test
--------------------------------------------
307
(1 row)
```
* **PG < 18:** Behavior remains unchanged; the test still errors on
older versions.
* **PG ≥ 18:** The call succeeds; the updated expected output matches
actual results.
similar PR: https://github.com/citusdata/citus/pull/8184
fixes #8093
c2a4078eba
- PostgreSQL 18 and above enable buffer-usage reporting by default in
`EXPLAIN ANALYZE`.
- Introduce the explicit `BUFFERS OFF` option in every existing
regression test to maintain pre-PG18 output consistency.
- This appends `BUFFERS OFF` to all `EXPLAIN (...)` calls in
src/test/regress/sql and the corresponding .out files.
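For example (statement illustrative), a rewritten call looks like:
```sql
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF, SUMMARY OFF, BUFFERS OFF)
SELECT count(*) FROM lineitem;
```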
DESCRIPTION: pin the PostgreSQL server development package version to 17
rather than the full dev package, which now pulls in 18; Citus does not
yet support pg18
The error `Unrecognized range table id` seen in regress test
`insert_select_into_local_tables` is a consequence of the INSERT ..
SELECT planner getting confused by a SELECT query with a GROUP BY and
hence a Group RTE, introduced in PG18 (commit 247dea89f). The solution
is to flatten the relevant parts of the SELECT query before preparing
the INSERT .. SELECT query tree for use by Citus.
PG18 has removed heap_inplace_update(), which is crucial for
citus_columnar extension because we always want to update
stripe entries for columnar in-place.
Relevant PG18 commit:
https://github.com/postgres/postgres/commit/a07e03f
heap_inplace_update() has been replaced by
heap_inplace_update_and_unlock, which is used inside
systable_inplace_update_finish, which is used together with
systable_inplace_update_begin. This change has been back-patched
up to v12, which is enough for us since the oldest version
Citus supports is v15.
In PG<18, a deprecated heap_inplace_update() is retained,
however, let's start using the new functions because they are
better, and such that we don't need to wrap these changes in
PG18 version conditionals.
Basically, in this commit we replace the following:
SysScanDesc scanDescriptor = systable_beginscan(columnarStripes,
indexId, indexOk, &dirtySnapshot, 2, scanKey);
heap_inplace_update(columnarStripes, modifiedTuple);
with the following:
systable_inplace_update_begin(columnarStripes, indexId, indexOk,
NULL, 2, scanKey, &tuple, &state);
systable_inplace_update_finish(state, tuple);
For more understanding, it's best to refer to an example:
REL_18_0/src/backend/catalog/toasting.c#L349-L371
of how systable_inplace_update_begin and
systable_inplace_update_finish are used in PG18, because they
mirror the need of citus columnar.
Fixes #8207
This reverts commit 5d805eb10b.
heap_inplace_update was incorrectly replaced by
CatalogTupleUpdate in 5d805eb. In Citus, we assume a stripe
entry with some columns set to null means that a write
is in progress, because otherwise we wouldn't see such a row.
But this breaks when we use CatalogTupleUpdate because it
inserts a new version of the row, which leaves the
in-progress version behind. Among other things, this was
causing various issues in PG18 - check-columnar test.
PG18 changed the names generated for child foreign key constraints.
https://github.com/postgres/postgres/commit/3db61db48
The test failures in the Citus regression suite all change the name of
a constraint from `'sensors%'` to `'%to_parent%_1'`; the naming is apt
here because `to_parent` means that we have a foreign key to a parent
table.
To fix the diff, we exclude those constraints from the output. To verify
correctness, we still count the problematic constraints to make sure
they are there - we simply remove them from the first output (we add
this count query right after the previous one).
Fixes #8126
Co-authored-by: Mehmet YILMAZ <mehmety87@gmail.com>
Qualify create domain stmt after local execution, to avoid such diffs in
PG vanilla tests:
```diff
create domain d_fail as anyelement;
-ERROR: "anyelement" is not a valid base type for a domain
+ERROR: "pg_catalog.anyelement" is not a valid base type for a domain
```
These tests were newly added in PG18, however this is not new PG18
behavior, just some added tests.
https://github.com/postgres/postgres/commit/0172b4c94
Fixes #8042
PG18 changed the visibility of various Explain Serialize functions and
structs to `extern`. Previously, for PG17 support, these were `static`,
so we had to copy paste their definitions from `explain.c` to Citus's
`multi_explain.c`.
Relevant PG18 commits:
https://github.com/postgres/postgres/commit/555960a0
https://github.com/postgres/postgres/commit/77cb08be
Now we don't need to define the following anymore in Citus, since they
are extern in PG18:
- typedef struct SerializeMetrics
- void ExplainIndentText(ExplainState *es);
- SerializeMetrics GetSerializationMetrics(DestReceiver *dest);
- typedef struct SerializeDestReceiver (this is not extern, however it
is only used by GetSerializationMetrics function)
This was incorrectly handled in
https://github.com/citusdata/citus/commit/9e42f3f2c
by wrapping these definitions and usages in PG17-only conditionals,
causing such diffs in PG18 (serialization not visible at all):
```diff
citus/src/test/regress/expected/pg17.out
select public.explain_filter('explain (analyze,
serialize binary,buffers,timing) select * from int8_tbl i8');
...
Planning Time: N.N ms
- Serialization: time=N.N ms output=NkB format=binary
Execution Time: N.N ms
Planning Time: N.N ms
Serialization: time=N.N ms output=NkB format=binary
Execution Time: N.N ms
-(14 rows)
+(13 rows)
```
This PR solves the following diffs, originating from the addition of
`varreturningtype` field to the `Var` struct in PG18:
https://github.com/postgres/postgres/commit/80feb727c
Previously we didn't account for this new field (as it's new), so this
wouldn't allow the parser to correctly reconstruct the `Var` node
structure, but rather it would error out with `did not find '}' at end
of input node`:
```diff
SELECT column_to_column_name(logicalrelid, partkey)
FROM pg_dist_partition WHERE partkey IS NOT NULL ORDER BY 1 LIMIT 1;
- column_to_column_name
----------------------------------------------------------------------
- a
-(1 row)
-
+ERROR: did not find '}' at end of input node
```
Solution follows precedent https://github.com/citusdata/citus/pull/7107,
when varnullingrels field was added to the `Var` struct in PG16.
The solution includes:
- Taking care of the `partkey` in `pg_dist_partition` table because it's
coming from the `Var` struct. This mainly includes fixing the upgrade
script to PG18, by saving all the `partkey` infos before upgrading to
PG18 (in `citus_prepare_pg_upgrade`), and then re-generating `partkey`
columns in `pg_dist_partition` (using `UPDATE`) after upgrading to PG18
(in `citus_finish_pg_upgrade`); see the sketch after this list.
- Adding a normalize rule to fix output differences among PG versions.
Note that we need two normalize lines: one for PG15 since it doesn't
have `varnullingrels`, and one for PG16/PG17.
- Small trick on `metadata_sync_helpers` to use different text when
generating the `partkey`, based on the PG version.
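A minimal sketch of the regeneration step, assuming a hypothetical backup table written by `citus_prepare_pg_upgrade` and the existing `column_name_to_column` helper:
```sql
-- re-generate partkey node trees from the saved column names
UPDATE pg_catalog.pg_dist_partition p
SET partkey = column_name_to_column(b.logicalrelid, b.partkey_colname)
FROM public.pg_dist_partition_backup b
WHERE p.logicalrelid = b.logicalrelid
  AND b.partkey_colname IS NOT NULL;
```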
Fixes #8189
3 minor changes to reduce some noise from the regression diffs.
1 - Reduce verbosity when ALTER EXTENSION fails
PG18 has improved reporting of errors in extension script files
Relevant PG commit:
https://github.com/postgres/postgres/commit/774171c4f
There was more context in PG18, so reducing verbosity
```
ALTER EXTENSION citus UPDATE TO '11.0-1';
ERROR: cstore_fdw tables are deprecated as of Citus 11.0
HINT: Install Citus 10.2 and convert your cstore_fdw tables to the
columnar access method before upgrading further
CONTEXT: PL/pgSQL function inline_code_block line 4 at RAISE
+SQL statement "DO LANGUAGE plpgsql
+$$
+BEGIN
+ IF EXISTS (SELECT 1 FROM pg_dist_shard where shardstorage = 'c') THEN
+ RAISE EXCEPTION 'cstore_fdw tables are deprecated as of Citus 11.0'
+ USING HINT = 'Install Citus 10.2 and convert your cstore_fdw tables
to the columnar access method before upgrading further';
+ END IF;
+END;
+$$"
+extension script file "citus--10.2-5--11.0-1.sql", near line 532
```
2 - Fix backend type order in tests for PG18
PG18 added another backend type, which messed up the order
in this test.
Adding a separate IF condition for PG18.
Relevant PG commit:
https://github.com/postgres/postgres/commit/18d67a8d7d
3 - Ignore "DEBUG: find_in_path" lines in output
Relevant PG commit:
https://github.com/postgres/postgres/commit/4f7f7b0375
The new GUC extension_control_path specifies a path to look for
extension control files.
6 minor changes to reduce some noise from the regression diffs.
1 - Add ORDER BY to fix subquery_in_where diff
2 - Disable buffers in explain analyze calls
Leftover work from
https://github.com/citusdata/citus/commit/f1f0b09f7
3 - Reduce verbosity to avoid diffs between PG versions
Relevant PG commit:
https://github.com/postgres/postgres/commit/0dca5d68d7
diff was:
```
CALL test_procedure_commit(2,5);
ERROR: COMMIT is not allowed in an SQL function
-CONTEXT: SQL function "test_procedure_commit" during startup
+CONTEXT: SQL function "test_procedure_commit" statement 2
```
4 - Rename array_sort to array_sort_citus since PG18 added array_sort
Relevant PG commit:
https://github.com/postgres/postgres/commit/6c12ae09f5a
Diff we were seeing in multi_array_agg, because the PG18 test was using
PG18's array_sort function instead:
```
-- Check that we return NULL in case there are no input rows to array_agg()
SELECT array_sort(array_agg(l_orderkey))
FROM lineitem WHERE l_orderkey < 0;
array_sort
------------
- {}
+
(1 row)
```
5 - Exclude not-null constraints from output to avoid diffs
PG18 has added pg_constraint rows for not-null constraints
Relevant PG commit
https://github.com/postgres/postgres/commit/14e87ffa5c
Remove them with the condition `contype <> 'n'`.
6 - Reduce verbosity to avoid md5 pwd deprecation warning in PG18
PG18 has deprecated MD5 passwords
Relevant PG commit:
https://github.com/postgres/postgres/commit/db6a4a985
Fixes #8154
Fixes #8157
Checksums are now on by default in PG18: 04bec894a
Upgrade to PG18 fails with the following error:
`old cluster does not use data checksums but the new one does`
To overcome this error, we add --data-checksums option such that
clusters with PG less than 18 also use data checksums.
Fixes #8229
DESCRIPTION: Normalize \d+ output in PG18 by filtering “Not-null
constraints” blocks
fixes #8095
**PR Description**
Postgres 18 started representing column `NOT NULL` as named constraints
in `pg_constraint`, and `psql \d+` now prints them under a `Not-null
constraints:` section. This caused extra diffs in our regression tests.
14e87ffa5c
This PR updates the normalization rules to strip those sections during
diff filtering by adding two regex rules:
* remove the `Not-null constraints:` header
* remove any indented constraint lines ending in `_not_null`
0dca5d68d7
fixes #8153
```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_sql_function.out /__w/citus/citus/src/test/regress/results/multi_sql_function.out
--- /__w/citus/citus/src/test/regress/expected/multi_sql_function.out.modified 2025-08-25 12:43:24.373634581 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_sql_function.out.modified 2025-08-25 12:43:24.383634533 +0000
@@ -317,24 +317,25 @@
$$ LANGUAGE SQL STABLE;
INSERT INTO test_parameterized_sql VALUES(1, 1);
-- all of them should fail
SELECT * FROM test_parameterized_sql_function(1);
ERROR: cannot perform distributed planning on this query because parameterized queries for SQL functions referencing distributed tables are not supported
HINT: Consider using PL/pgSQL functions instead.
SELECT (SELECT 1 FROM test_parameterized_sql limit 1) FROM test_parameterized_sql_function(1);
ERROR: cannot perform distributed planning on this query because parameterized queries for SQL functions referencing distributed tables are not supported
HINT: Consider using PL/pgSQL functions instead.
SELECT test_parameterized_sql_function_in_subquery_where(1);
-ERROR: could not create distributed plan
-DETAIL: Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
-HINT: Consider using PL/pgSQL functions instead.
-CONTEXT: SQL function "test_parameterized_sql_function_in_subquery_where" statement 1
+ test_parameterized_sql_function_in_subquery_where
+---------------------------------------------------
+ 1
+(1 row)
+
```
PG18 allows custom vs. generic plans for SQL functions; arguments can be
folded to consts, enabling more rewrites/optimizations (and, in this
case, routable Citus plans).
PG18 rewrote how LANGUAGE SQL functions are planned/executed:
they now go through the plan cache (like PL/pgSQL does) and can produce
custom plans with the function arguments substituted as constants. That
means the call
SELECT test_parameterized_sql_function_in_subquery_where(1);
is planned with org_id_val = 1 baked in, so Citus no longer sees an
unresolved Param inside the function body and is able to build a
distributed plan instead of tripping the old “params in SQL functions”
error path.
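A sketch of that behavior change, with hypothetical function and table names:
```sql
CREATE FUNCTION fetch_count(org_id_val int) RETURNS bigint
LANGUAGE sql STABLE AS $$
  SELECT count(*) FROM dist_table WHERE org_id = org_id_val;
$$;
-- PG18 plans the body through the plan cache and can substitute
-- org_id_val = 1 as a constant, yielding a routable query for Citus:
SELECT fetch_count(1);
```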
**What’s in here**
- Update `expected/multi_sql_function.out` to reflect PG18 behavior
- Add `expected/multi_sql_function_0.out` as an alternate expected file
that retains the pre-PG18 error output for the same test
This crash has been there for a while but wasn't exercised before PG18.
PG18 added this test:
CREATE STATISTICS tst ON a FROM (VALUES (x)) AS foo;
which tries to create statistics on a derived-on-the-fly table (which is
not allowed). However, Citus assumed we always have a valid table when
intercepting the CREATE STATISTICS command to check for Citus tables.
Added a check to return early if needed.
pg18 commit: https://github.com/postgres/postgres/commit/3eea4dc2c
Fixes #8212
DESCRIPTION: Fixes a bug that allowed UPDATE / MERGE queries
that may change the distribution column value.
Fixes: #8087.
Probably as of #769, we were not properly checking if UPDATE
may change the distribution column.
In #769, we had these checks:
```c
if (targetEntry->resno != column->varattno)
{
/* target entry of the form SET some_other_col = <x> */
isColumnValueChanged = false;
}
else if (IsA(setExpr, Var))
{
Var *newValue = (Var *) setExpr;
if (newValue->varattno == column->varattno)
{
/* target entry of the form SET col = table.col */
isColumnValueChanged = false;
}
}
```
However, what we check in "if" and in the "else if" are not so
different in the sense they both attempt to verify if SET expr
of the target entry points to the attno of given column. So, in
#5220, we even removed the first check because it was redundant.
Also see this PR comment from #5220:
https://github.com/citusdata/citus/pull/5220#discussion_r699230597.
In #769, probably we actually wanted to first check whether both
SET expr of the target entry and given variable are pointing to the
same range var entry, but this wasn't what the "if" was checking,
so it was removed.
As a result, in the cases that are mentioned in the linked issue,
we were incorrectly concluding that the SET expr of the target
entry won't change given column just because it's pointing to the
same attno as given variable, regardless of what range var entries
the column and the SET expr are pointing to. Then we also started
using the same function to check for such cases for update action
of MERGE, so we have the same bug there as well.
So with this PR, we properly check for such cases by comparing
varno as well in TargetEntryChangesValue(). However, then some of
the existing tests started failing where the SET expr doesn't
directly assign the column to itself but the "where" clause could
actually imply that the distribution column won't change. Even before
we were not attempting to verify if "where" clause quals could imply a
no-op assignment for the SET expr in such cases but that was not a
problem. This is because, for most cases, we were always qualifying
such SET expressions as a no-op update as long as the SET expr's
attno is the same as given column's. For this reason, to prevent
regressions, this PR also adds some extra logic as well to understand
if the "where" clause quals could imply that SET expr for the
distribution key is a no-op.
Ideally, we should instead use "relation restriction equivalence"
mechanism to understand if the "where" clause implies a no-op
update. This is because, for instance, right now we're not able to
deduce that the update is a no-op when the "where" clause transitively
implies a no-op update, as in the case where we're setting "column a"
to "column c" and where clause looks like:
"column a = column b AND column b = column c".
If this means a regression for some users, we can consider doing it
that way. Until then, as a workaround, we can suggest adding additional
quals to "where" clause that would directly imply equivalence.
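For illustration (hypothetical columns), the transitive case that is not deduced, and the workaround of adding the direct equality:
```sql
-- not deduced as a no-op: a = c is only implied transitively
UPDATE dist_1 SET a = c WHERE a = b AND b = c;
-- workaround: state the direct equality so the no-op is visible
UPDATE dist_1 SET a = c WHERE a = b AND b = c AND a = c;
```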
Also, after fixing TargetEntryChangesValue(), we started successfully
deducing that the update action is a no-op for such MERGE queries:
```sql
MERGE INTO dist_1
USING dist_1 src
ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.b;
```
However, we then started seeing below error for above query even
though now the update is qualified as a no-op update:
```
ERROR: Unexpected column index of the source list
```
This was because of #8180, and #8201 fixed that.
In summary, with this PR:
* We disallow such queries,
```sql
-- attno for dist_1.a, dist_1.b: 1, 2
-- attno for dist_different_order_1.a, dist_different_order_1.b: 2, 1
UPDATE dist_1 SET a = dist_different_order_1.b
FROM dist_different_order_1
WHERE dist_1.a = dist_different_order_1.a;
-- attno for dist_1.a, dist_1.b: 1, 2
-- but ON (..) doesn't imply a no-op update for SET expr
MERGE INTO dist_1
USING dist_1 src
ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.a;
```
* .. and allow such queries,
```sql
MERGE INTO dist_1
USING dist_1 src
ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.b;
```
The range table entry array created by the Postgres planner for each
SELECT in a query may have NULL entries as of PG18. Add a NULL check
to skip over these when looking for matches in rte identities.
Fix deparsing of UPDATE statements with indirection (#7675) involved
changing ruleutils of our supported Postgres versions. It means that
when integrating a new Postgres version we need to update its ruleutils
with the relevant parts of #7675; basically PG ruleutils needs to call
the `citus_ruleutils.c` functions added by #7675.
DESCRIPTION: Fixes a bug that causes an unexpected error when executing
repartitioned merge.
Fixes #8180.
This was happening because of a bug in
SourceResultPartitionColumnIndex(). And to fix it, this PR avoids
using DistributionColumnIndex() in SourceResultPartitionColumnIndex().
Instead, invents FindTargetListEntryWithVarExprAttno(), which finds
the index of the target entry in the source query's target list that
can be used to repartition the source for a repartitioned merge. In
short, to find the source target entry that refences the Var used in
ON (..) clause and that references the source rte, we should check the
varattno of the underlying expr, which presumably is always a Var for
repartitioned merge as we always wrap the source rte with a subquery,
where all target entries point to the columns of the original source
relation.
Using DistributionColumnIndex() prior to 13.0 wasn't causing such an
issue because prior to 13.0, the varattno of the underlying expr of
the source target entries was almost (*1) always equal to resno of the
target entry as we were including all target entries of the source
relation. However, starting with #7659, which is merged to main before
13.0, we started using CreateFilteredTargetListForRelation() instead of
CreateAllTargetListForRelation() to compute the target entry list for
the source rte to fix another bug. So we cannot revert to using
CreateAllTargetListForRelation() because otherwise we would re-introduce
the bug it helped fix, so we instead had to find a way to properly
deal with the "filtered target list"s, as in this commit. Plus (*1),
even before #7659, probably we would still fail when the source relation
has dropped attributes or such because that would probably also cause
such a mismatch between the varattno of the underlying expr of the
target entry and its resno.
The change in `merge_planner.c` fixes _unrecognized range table entry_
diffs in merge regress tests (category 2 diffs in #7992), the change in
`multi_router_planner.c` fixes _column reference ... is ambiguous_ diffs
in `multi_insert_select` and `multi_insert_select_window` (category 3
diffs in #7992). The edit to `common.py` enables standalone regress
tests with pg18 (e.g. `citus_tests/run_test.py merge`).
DESCRIPTION: Fix 'column does not exist' errors in grouping regress
tests.
Postgres 18's GROUP RTE was being ignored by query pushdown planning
when constructing the query tree for the worker subquery. The solution
is straightforward - ensure the worker subquery tree has the same
groupRTE property as the original query. Postgres ruleutils then does
the right thing when generating the pushed down query. Fixes category 1
in #7992.
DESCRIPTION: Stabilize table_checks across PG15–PG18: switch to
pg_constraint, remove dupes, exclude NOT NULL
fixes #8138, fixes #8131
**Problem**
```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_create_table_constraints.out /__w/citus/citus/src/test/regress/results/multi_create_table_constraints.out
--- /__w/citus/citus/src/test/regress/expected/multi_create_table_constraints.out.modified 2025-08-18 12:26:51.991598284 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_create_table_constraints.out.modified 2025-08-18 12:26:52.004598519 +0000
@@ -403,22 +403,30 @@
relid = 'check_example_partition_col_key_365068'::regclass;
Column | Type | Definition
---------------+---------+---------------
partition_col | integer | partition_col
(1 row)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.check_example_365068'::regclass;
Constraint | Definition
-------------------------------------+-----------------------------------
check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
-(2 rows)
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+(10 rows)
```
On PostgreSQL 18, `NOT NULL` is represented as a cataloged constraint
and surfaces through `information_schema.check_constraints`.
14e87ffa5c
Our helper view `table_checks` (built on
`information_schema.check_constraints` + `constraint_column_usage`)
started returning:
* Extra `…_not_null` rows (noise for our tests)
* Duplicate rows for real CHECKs due to the one-to-many join via
`constraint_column_usage`
* Occasional literal formatting differences (e.g., dates) coming from
the information\_schema deparser
### What changed
1. **Rewrite `table_checks` to use system catalogs directly**
We now select only expression-based, table-level constraints—excluding
NOT NULL—by filtering on `contype <> 'n'` and requiring `conbin IS NOT
NULL`. This yields the same effective set as real CHECKs while remaining
future-proof against non-CHECK constraint types.
```sql
CREATE OR REPLACE VIEW table_checks AS
SELECT
c.conname AS "Constraint",
'CHECK ' ||
-- drop a single pair of outer parens if the deparser adds them
regexp_replace(pg_get_expr(c.conbin, c.conrelid, true), '^\((.*)\)$', '\1')
AS "Definition",
c.conrelid AS relid
FROM pg_catalog.pg_constraint AS c
WHERE c.contype <> 'n' -- drop NOT NULL (PG18)
AND c.conbin IS NOT NULL -- only expression-bearing constraints (i.e., CHECKs)
AND c.conrelid <> 0 -- table-level only (exclude domains)
ORDER BY "Constraint", "Definition";
```
Why this filter?
* `contype <> 'n'` excludes PG18’s NOT NULL rows.
* `conbin IS NOT NULL` restricts to expression-backed constraints
(CHECKs); PK/UNIQUE/FK/EXCLUSION don’t have `conbin`.
* `conrelid <> 0` removes domain constraints.
2. **Add a PG18-specific regression test for `contype = 'n'`**
New test (`pg18_not_null_constraints`) verifies:
* Coordinator tables have `n` rows for NOT NULL (columns `a`, `c`),
* A worker shard has matching `n` rows,
* Dropping a NOT NULL on the coordinator propagates to shards (count
goes from 2 → 1),
* `table_checks` *never* reports NOT NULL, but does report a real CHECK
added for the test.
---
### Why this works (PG15–PG18)
* **Stable source of truth:** Directly reads `pg_constraint` instead of
`information_schema`.
* **No duplicates:** Eliminates the `constraint_column_usage` join,
removing multiplicity.
* **No NOT NULL noise:** PG18’s `contype = 'n'` is filtered out by
design.
* **Deterministic text:** Uses `pg_get_expr` and strips a single outer
set of parentheses for consistent output.
---
### Impact on tests
* Removes spurious `…_not_null` entries and duplicate `checky_…` rows
(e.g., in `multi_name_lengths` and similar).
* Existing expected files stabilize without adding brittle
normalizations.
* New PG18 test asserts correct catalog behavior and Citus propagation
while remaining a no-op on earlier PG versions.
---
14e87ffa5c
PostgreSQL 18 now records column `NOT NULL` constraints in
`pg_constraint` (`contype = 'n'`). That means queries that previously
listed “all constraints” for a relation now return extra rows, causing
noisy diffs in Citus regression tests. This PR narrows each catalog
probe to the specific constraint type under test
(PK/UNIQUE/EXCLUDE/CHECK), keeping results stable across PG15–PG18.
## What changed
* Update
`src/test/regress/sql/multi_alter_table_add_constraints_without_name.sql`
to:
* Add `AND con.contype IN ('p'|'u'|'x'|'c')` in each query, matching the
constraint just created.
* Join namespace via `rel.relnamespace` for robustness.
* Refresh
`src/test/regress/expected/multi_alter_table_add_constraints_without_name.out`
to reflect the filtered results.
## Why
* PG18 adds named `NOT NULL` entries to `pg_constraint`, which
previously lived only in `pg_attribute`. Tests that select from
`pg_constraint` without filtering now see extra rows (e.g.,
`*_not_null`), breaking expectations. Filtering by `contype` validates
exactly what the test intends (PK/UNIQUE/EXCLUDE/CHECK
naming/propagation) and ignores unrelated `NOT NULL` rows.
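For illustration, the narrowed probe for the PRIMARY KEY case takes roughly this shape (a sketch, not the literal test query):
```sql
SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = rel.relnamespace
WHERE rel.relname = 'products'
  AND con.contype IN ('p');
```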
```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_alter_table_add_constraints_without_name.out /__w/citus/citus/src/test/regress/results/multi_alter_table_add_constraints_without_name.out
--- /__w/citus/citus/src/test/regress/expected/multi_alter_table_add_constraints_without_name.out.modified 2025-09-11 14:36:52.521254512 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_alter_table_add_constraints_without_name.out.modified 2025-09-11 14:36:52.549254440 +0000
@@ -20,34 +20,36 @@
ALTER TABLE AT_AddConstNoName.products ADD PRIMARY KEY(product_no);
SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'products';
conname
------------------------------
products_pkey
-(1 row)
+ products_product_no_not_null
+(2 rows)
-- Check that the primary key name created on the coordinator is sent to workers and
-- the constraints created for the shard tables conform to the <conname>_shardid naming scheme.
\c - - :public_worker_1_host :worker_1_port
SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'products_5410000';
conname
--------------------------------------
+ products_5410000_product_no_not_null
products_pkey_5410000
-(1 row)
+(2 rows)
```
After the PR:
https://github.com/citusdata/citus/actions/runs/17697415668/job/50298622183#step:5:265
fixes #8186
086c84b23d
PG18 emits a more specific message for foreign-key violations when
the action is `RESTRICT` (SQLSTATE 23001), e.g.
`violates RESTRICT setting of foreign key constraint ...` and `Key (...)
is referenced from table ...`.
Older versions printed the generic FK text (SQLSTATE 23503), e.g.
`violates foreign key constraint ...` and `Key (...) is still referenced
from table ...`.
This change was causing noisy diffs in our regression tests (e.g.,
`multi_foreign_key.out`).
To keep a single set of expected files across PG15–PG18, this PR adds
two normalization rules to the test filter:
```sed
# PG18 FK wording -> legacy generic form
s/violates RESTRICT setting of foreign key constraint/violates foreign key constraint/g
# DETAIL line: "is referenced" -> "is still referenced"
s/\<is referenced from table\>/is still referenced from table/g
```
**Scope / impact**
* Test-only change; runtime behavior is unaffected.
* Keeps outputs stable across PG15–PG18 without version-splitting
expected files.
* Rules are narrowly targeted to the FK wording introduced in PG18.
With the PR:
https://github.com/citusdata/citus/actions/runs/17698469722/job/50300960878#step:5:252
DESCRIPTION: Remove Code Climate coverage upload steps from GitHub
Actions workflow
CI: remove Code Climate coverage reporting (cc-test-reporter) and
related jobs; keep Codecov as source of truth
* **Why**
Code Climate’s test-reporter has been archived; their download/API path
is no longer served, which breaks our CC upload step (`cc-test-reporter
…` ends up downloading HTML/404).
* **What changed**
* Drop the Code Climate formatting/artifact steps from the composite
action `.github/actions/upload_coverage/action.yml`.
* Delete the `upload-coverage` job that aggregated and pushed to Code
Climate (`cc-test-reporter sum-coverage` / `upload-coverage`).
* **Impact**
* Codecov uploads remain; coverage stays visible via Codecov.
* No test/build behavior change—only removes a failing reporter path.
Added detailed explanation of delayed fast path planning in Citus 13.2,
including conditions and processes involved.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Fixes #5808.
DESCRIPTION: Fixes an assertion failure in Citus maintenance daemon that
can happen in very slow systems.
Try running `make -C src/test/regress/ check-multi-1-vg`: on main, the
tests exit with code 2 and produce a core dump at least 50% of the time
in the very early stages of the test suite, while on this branch they
don't, at least based on my trials :)
DESCRIPTION: Fixes an undefined behavior that could happen when
computing tenant score for citus_stat_tenants
Add a check on the shift size, and reset the score to zero when the
shift would overflow.
Fixes #7953.
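A minimal sketch of the guard, with hypothetical names (the actual fix lives in the citus_stat_tenants scoring code):
```c
#include <limits.h>
#include <stdint.h>

/*
 * Hypothetical sketch, not the actual Citus code: shifting an N-bit
 * integer by >= N bits is undefined behavior in C, so treat an
 * oversized shift as a fully decayed score and return zero instead.
 */
static uint64_t
DecayTenantScore(uint64_t score, int shiftCount)
{
	if (shiftCount < 0 || shiftCount >= (int) (sizeof(score) * CHAR_BIT))
	{
		return 0;
	}
	return score >> shiftCount;
}
```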
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Also use pid in valgrind logfile name to avoid overwriting the valgrind
logs due to the memory errors that can happen in different processes
concurrently:
(from https://valgrind.org/docs/manual/manual-core.html)
```
--log-file=<filename>
Specifies that Valgrind should send all of its messages to the specified file. If the file name is empty, it causes an abort. There are three special format specifiers that can be used in the file name.
%p is replaced with the current process ID. This is very useful for program that invoke multiple processes. WARNING: If you use --trace-children=yes and your program invokes multiple processes OR your program forks without calling exec afterwards, and you don't use this specifier (or the %q specifier below), the Valgrind output from all those processes will go into one file, possibly jumbled up, and possibly incomplete.
```
With this change, we'll start having lots of valgrind output files
generated under "src/test/regress" with the same prefix,
citus_valgrind_test_log.txt, by default, during valgrind tests, so it'll
look a bit ugly; but one can use `cat
src/test/regress/citus_valgrind_test_log.txt.[0-9]*` or such to combine
them into a single valgrind log file later.
Need to also check Postgres plan's rangetables for relations used in Initplans.
DESCRIPTION: Fix a bug in redundant WHERE clause detection; we need to
additionally check the Postgres plan's range tables for the presence of
citus tables, to account for relations that are referenced from scalar
subqueries.
There is a fundamental flaw in 4139370: the assumption that, after
Postgres planning has completed, all tables used in a query can be
obtained by walking the query tree. This is not the case for scalar
subqueries, which are referenced by `PARAM` nodes. The fix adds an
additional check of the Postgres plan's range tables; if there is at
least one Citus table in there, we do not need to change the
needs-distributed-planning flag.
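A sketch of what the added range-table check amounts to, assuming the usual Citus metadata helper `IsCitusTable()`; illustrative, not the literal patch:
```c
#include "postgres.h"
#include "nodes/plannodes.h"
#include "nodes/parsenodes.h"
#include "nodes/pg_list.h"
#include "distributed/metadata_cache.h" /* IsCitusTable() */

/*
 * Walk the finished Postgres plan's range table, which also covers
 * relations that are only reachable through scalar-subquery PARAMs
 * and are therefore invisible to a query-tree walk.
 */
static bool
PlanRangeTableContainsCitusTable(PlannedStmt *plan)
{
	ListCell *lc = NULL;
	foreach(lc, plan->rtable)
	{
		RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);
		if (rte->rtekind == RTE_RELATION && IsCitusTable(rte->relid))
		{
			return true;
		}
	}
	return false;
}
```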
Fixes #8159
DESCRIPTION: Checking first for the presence of subscript ops avoids a
shallow copy of the target list for target lists where there are no
array or json subscripts.
Commit 0c1b31c fixed a bug in UPDATE statements with array or json
subscripting in the target list. This commit modifies that to first
check that the target list has a subscript and avoid a shallow copy of
the target list for UPDATE statements with no array/json subscripting.
- Downgrade replication lag reporting from NOTICE to DEBUG to reduce
noise and improve regression test stability.
- Add hints to certain replication status messages for better clarity.
- Update expected output files accordingly.
The test was not cleaning up after itself, so consecutive runs failed.
Test locally with:
make check-columnar-minimal \
  EXTRA_TESTS='columnar_chunk_filtering columnar_chunk_filtering'
In #7950, #8120, #8124, #8121 and #8114, TupleDescSize() was used to
check whether the tuple length is `Natts_<catalog_table_name>`. However,
this was wrong because TupleDescSize() returns the size of the
tupledesc, not its length (i.e., the number of attributes).
Actually, `TupleDescSize(tupleDesc) == Natts_<catalog_table_name>` was
always returning false, but this didn't cause any problems because using
`tupleDesc->natts - 1` when `tupleDesc->natts ==
Natts_<catalog_table_name>` had the same effect as using
`Anum_<column_added_later> - 1` in that case.
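To make the distinction concrete, an illustrative contrast (the helper name is hypothetical; callers would pass `Natts_pg_dist_node` or any other `Natts_<catalog_table_name>`):
```c
#include "postgres.h"
#include "access/tupdesc.h"

static bool
TupleDescHasExpectedNatts(TupleDesc tupleDesc, int expectedNatts)
{
	/*
	 * Wrong: TupleDescSize() returns the byte size of the TupleDesc
	 * structure, so comparing it to an attribute count is always false:
	 *
	 *     TupleDescSize(tupleDesc) == expectedNatts
	 */

	/* Right: natts is the number of attributes in the descriptor. */
	return tupleDesc->natts == expectedNatts;
}
```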
So this also makes me think of always returning `tupleDesc->natts -
1` (or `tupleDesc->natts - 2` if it's the second-to-last attribute), but
being more explicit seems more useful.
Even more, in the future we should probably switch to a different
implementation if / when we think of adding more columns to those
tables. We should probably scan non-dropped attributes of the relation,
enumerate them, and return the attribute number of the one that we're
looking for, but it seems this is not needed right now.
Unlike what has been fixed in #7950, #8120, #8124, #8121 and #8114, this
was not an issue in older releases, but it is a potential issue to be
introduced by the current (13.2) release, because one of the recent
commits (#8122) added two columns to pg_dist_node. In other words, none
of the older releases since we started supporting downgrades added new
columns to pg_dist_node.
The mentioned PR actually attempted to avoid this kind of issue in one
of the code paths, but not in some others.
So, this PR avoids memory corruptions around pg_dist_node accessors in
a standardized way (as implemented in the other example PRs) and in all
code paths.
DESCRIPTION: Normalize Actual Rows output in regression tests for PG18
compatibility
PostgreSQL 18 changed `EXPLAIN ANALYZE` to always print fractional row
counts (e.g. `1.00` instead of `1`).
95dbd827f2
This caused diffs across multiple output formats in Citus regression
tests:
* Text EXPLAIN: `actual rows=50.00` vs `actual rows=50`
* YAML: `Actual Rows: 1.00` vs `Actual Rows: 1`
* XML: `<Actual-Rows>1.00</Actual-Rows>` vs
`<Actual-Rows>1</Actual-Rows>`
* JSON: `"Actual Rows": 1.00` vs `"Actual Rows": 1`
* Placeholders: `rows=N.N` vs `rows=N`
This patch extends `normalize.sed` to strip trailing `.0…` from `Actual
Rows` in all supported formats and collapses placeholder values back to
`N`. With these changes, regression tests produce stable output across
PG15–PG18.
No functional changes to Citus itself — only test normalization was
updated.
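The added rules take roughly this shape (illustrative; the actual expressions in normalize.sed also cover the JSON/XML/YAML variants):
```sed
# text-format EXPLAIN: "actual rows=50.00" -> "actual rows=50"
s/actual rows=\([0-9]\+\)\.[0-9]\+/actual rows=\1/g
# collapse fractional placeholders back to N
s/rows=N\.N/rows=N/g
```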
Relevant PG18 commit:
c2a4078eba
- Enable buffer-usage reporting by default in `EXPLAIN ANALYZE` on
PostgreSQL 18 and above.
Solution:
- Introduce the explicit `BUFFERS OFF` option in every existing
regression test to maintain pre-PG18 output consistency.
- This appends `BUFFERS OFF` to all `EXPLAIN ANALYZE (...)` calls in
src/test/regress/sql and the corresponding .out files, as shown below.
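For example (table name illustrative):
```sql
-- PG18 prints buffer usage by default; preserve pre-PG18 output by
-- disabling it explicitly:
EXPLAIN (ANALYZE, BUFFERS OFF) SELECT count(*) FROM my_dist_table;
```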
fixes #8093
DESCRIPTION: Add `citus_stats` UDF
This UDF acts on a Citus table, and provides `null_frac`,
`most_common_vals` and `most_common_freqs` for each column in the table,
based on the definitions of these columns in the Postgres view
`pg_stats`.
**Aggregated Views: pg_stats > citus_stats**
citus_stats is a **view** intended for use in **Citus**, a distributed
extension of PostgreSQL. It collects and returns **column-level
statistics** for a distributed table: specifically, the **most common
values**, their **frequencies**, and the **fraction of null values**,
like the pg_stats view does for regular Postgres tables.
**Use Case**
This view is useful when:
- You need **column-level insights** on a distributed table.
- You're performing **query optimization**, **cardinality estimation**,
or **data profiling** across shards.
**What It Returns**
A **table** with:
| Column Name | Data Type | Description |
|---|---|---|
| schemaname | text | Name of the schema containing the distributed table |
| tablename | text | Name of the distributed table |
| attname | text | Name of the column (attribute) |
| null_frac | float4 | Estimated fraction of NULLs in the column across all shards |
| most_common_vals | text[] | Array of most common values for the column |
| most_common_freqs | float4[] | Array of corresponding frequencies (as fractions) of the most common values |
**Caveats**
- The function assumes that the array of the most common values is the
same across shards, therefore it just adds everything up.
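A minimal usage sketch, assuming a distributed table named `orders` in schema `public`:
```sql
SELECT attname, null_frac, most_common_vals, most_common_freqs
FROM citus_stats
WHERE schemaname = 'public' AND tablename = 'orders'
ORDER BY attname;
```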
DESCRIPTION: Remove an assertion from Postgres ruleutils that was rendered meaningless by a previous Citus commit.
Fixes #8123. This has been present since 00068e0, which changed the code preceding the assert as follows:
```
#ifdef USE_ASSERT_CHECKING
- while (i < colinfo->num_cols && colinfo->colnames[i] == NULL)
- i++;
+ for (int col_index = 0; col_index < colinfo->num_cols; col_index++)
+ {
+ /*
+ * In the above processing-loops, "i" advances only if
+ * the column is not new, check if this is a new column.
+ */
+ if (colinfo->is_new_col[col_index])
+ i++;
+ }
Assert(i == colinfo->num_cols);
Assert(j == nnewcolumns);
#endif
```
This commit altered both the loop condition and the incrementing of `i`. After analysis, the assert no longer makes sense.
**DESCRIPTION:**
This pull request introduces the foundation and core logic for the
snapshot-based node split feature in Citus. This feature enables
promoting a streaming replica (referred to as a clone in this feature
and UI) to a primary node and rebalancing shards between the original
and the newly promoted node without requiring a full data copy.
This significantly reduces rebalance times for scale-out operations
where the new node already contains a full copy of the data via
streaming replication.
Key Highlights:
**1. Replica (Clone) Registration & Management Infrastructure**
Introduces a new set of UDFs to register and manage clone nodes:
- citus_add_clone_node()
- citus_add_clone_node_with_nodeid()
- citus_remove_clone_node()
- citus_remove_clone_node_with_nodeid()
These functions allow administrators to register a streaming replica of
an existing worker node as a clone, making it eligible for later
promotion via snapshot-based split.
**2. Snapshot-Based Node Split (Core Implementation)**
New core UDF:
- citus_promote_clone_and_rebalance()
This function implements the full workflow to promote a clone and
rebalance shards between the old and new primaries. Steps include:
1. Ensuring Exclusivity – Blocks any concurrent placement-changing
operations.
2. Blocking Writes – Temporarily blocks writes on the primary to ensure
consistency.
3. Replica Catch-up – Waits for the replica to be fully in sync.
4. Promotion – Promotes the replica to a primary using pg_promote.
5. Metadata Update – Updates metadata to reflect the newly promoted
primary node.
6. Shard Rebalancing – Redistributes shards between the old and new
primary nodes.
**3. Split Plan Preview**
A new helper UDF get_snapshot_based_node_split_plan() provides a preview
of the shard distribution post-split, without executing the promotion.
**Example:**
```
reb 63796> select * from pg_catalog.get_snapshot_based_node_split_plan('127.0.0.1',5433,'127.0.0.1',5453);
table_name | shardid | shard_size | placement_node
--------------+---------+------------+----------------
companies | 102008 | 0 | Primary Node
campaigns | 102010 | 0 | Primary Node
ads | 102012 | 0 | Primary Node
mscompanies | 102014 | 0 | Primary Node
mscampaigns | 102016 | 0 | Primary Node
msads | 102018 | 0 | Primary Node
mscompanies2 | 102020 | 0 | Primary Node
mscampaigns2 | 102022 | 0 | Primary Node
msads2 | 102024 | 0 | Primary Node
companies | 102009 | 0 | Clone Node
campaigns | 102011 | 0 | Clone Node
ads | 102013 | 0 | Clone Node
mscompanies | 102015 | 0 | Clone Node
mscampaigns | 102017 | 0 | Clone Node
msads | 102019 | 0 | Clone Node
mscompanies2 | 102021 | 0 | Clone Node
mscampaigns2 | 102023 | 0 | Clone Node
msads2 | 102025 | 0 | Clone Node
(18 rows)
```
**4. Test Infrastructure Enhancements**
- Added a new test case scheduler for snapshot-based split scenarios.
- Enhanced pg_regress_multi.pl to support creating node backups with
slightly modified options to simulate real-world backup-based clone
creation.
**5. Usage Guide**
The snapshot-based node split can be performed using the following
workflow:
**- Take a Backup of the Worker Node**
Run pg_basebackup (or an equivalent tool) against the existing worker
node to create a physical backup.
`pg_basebackup -h <primary_worker_host> -p <port> -D /path/to/replica/data --write-recovery-conf`
**- Start the Replica Node**
Start PostgreSQL on the replica using the backup data directory,
ensuring it is configured as a streaming replica of the original worker
node.
**- Register the Backup Node as a Clone**
Register the replica as a clone of its original worker node:
`SELECT * FROM citus_add_clone_node('<clone_host>', <clone_port>, '<primary_host>', <primary_port>);`
**- Promote and Rebalance the Clone**
Promote the clone to a primary and rebalance shards between it and the
original worker:
`SELECT * FROM citus_promote_clone_and_rebalance('clone_node_id');`
**- Drop Any Replication Slots from the Original Worker**
After promotion, clean up any unused replication slots from the original
worker:
`SELECT pg_drop_replication_slot('<slot_name>');`
DESCRIPTION: Parallelizes shard rebalancing and removes the bottlenecks
that previously blocked concurrent logical-replication moves.
These improvements reduce rebalance windows, particularly for clusters
with large reference tables, and enable multiple shard transfers to run in parallel.
Motivation:
Citus’ shard rebalancer has some key performance bottlenecks:
**Sequential Movement of Reference Tables:**
Reference tables are often assumed to be small, but in real-world
deployments they can grow quite large. Previously, reference
table shards were transferred as a single unit, making the process
monolithic and time-consuming.
**No Parallelism Within a Colocation Group:**
Although Citus distributes data using colocated shards, shard
movements within the same colocation group were serialized. In
environments with hundreds of distributed tables colocated
together, this serialization significantly slowed down rebalance
operations.
**Excessive Locking:**
Rebalancer used restrictive locks and redundant logical replication
guards, further limiting concurrency.
The goal of this commit is to eliminate these inefficiencies and enable
maximum parallelism during rebalance, without compromising correctness
or compatibility.
Feature Summary:
**1. Parallel Reference Table Rebalancing**
Each reference-table shard is now copied in its own background task.
Foreign key and other constraints are deferred until all shards are
copied.
For single-shard movement without considering colocation, a new
internal-only UDF `citus_internal_copy_single_shard_placement` is
introduced to allow single-shard copy/move operations.
Since this function is internal, we do not allow users to call it
directly.
**Temporary Hack to Set Background Task Context:** Background tasks
cannot currently set custom GUCs like application_name before executing
internal-only functions, so we prepend a 'citus_rebalancer ...'
statement as a prefix in the task command. This is a temporary hack to
label internal tasks until proper GUC injection support is added to the
background task executor.
**2. Changes in Locking Strategy**
- Drop the leftover replication lock that previously serialized shard
moves performed via logical replication. This lock was only needed when
we used to drop and recreate the subscriptions/publications before each
move. Since Citus now removes those objects later as part of the “unused
distributed objects” cleanup, shard moves via logical replication can
safely run in parallel without additional locking.
- Introduced a per-shard advisory lock to prevent concurrent operations
on the same shard while allowing maximum parallelism elsewhere.
- Change the lock mode in AcquirePlacementColocationLock from
ExclusiveLock to RowExclusiveLock to allow concurrent updates within the
same colocation group, while still preventing concurrent DDL operations.
**3. citus_rebalance_start() enhancements**
The citus_rebalance_start() function now accepts two new optional
parameters:
```
- parallel_transfer_colocated_shards BOOLEAN DEFAULT false,
- parallel_transfer_reference_tables BOOLEAN DEFAULT false
```
Both parameters default to false, which preserves the existing behavior
and avoids any disruption to user expectations; when both are set to
true, the rebalancer operates with full parallelism.
**Previous Rebalancer Behavior:**
`SELECT citus_rebalance_start(shard_transfer_mode := 'force_logical');`
This would:
Start a single background task that replicates all reference tables,
then move all shards serially, one at a time.
```
Task 1: replicate_reference_tables()
↓
Task 2: move_shard_1()
↓
Task 3: move_shard_2()
↓
Task 4: move_shard_3()
```
Slow and sequential. Reference table copy is a bottleneck. Colocated
shards must wait for each other.
**New Parallel Rebalancer:**
```
SELECT citus_rebalance_start(
shard_transfer_mode := 'force_logical',
parallel_transfer_colocated_shards := true,
parallel_transfer_reference_tables := true
);
```
This would:
- Schedule independent background tasks for each reference-table shard.
- Move colocated shards in parallel, while still maintaining dependency
order.
- Defer constraint application until all reference shards are in place.
```
Task 1: copy_ref_shard_1()
Task 2: copy_ref_shard_2()
Task 3: copy_ref_shard_3()
→ Task 4: apply_constraints()
↓
Task 5: copy_shard_1()
Task 6: copy_shard_2()
Task 7: copy_shard_3()
↓
Task 8-10: move_shard_1..3()
```
Each operation is scheduled independently and can run as soon as
dependencies are satisfied.
DESCRIPTION: Fixes potential memory corruptions that could happen when
accessing pg_dist_object after a Citus downgrade is followed by a Citus
upgrade.
In case of a Citus downgrade followed by an upgrade, undefined behavior
may be encountered. The reason is that Citus hardcoded the number of
columns in the extension's tables, but after a downgrade and a following
upgrade some of these tables can have more columns, and some of them can
be marked as dropped.
This PR fixes all such tables using the approach introduced in #7950,
which solved the problem for the pg_dist_partition table.
See #7515 for a more thorough explanation.
---------
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Fixes potential memory corruptions that could happen when
accessing columnar.stripe after a Citus downgrade is followed by a Citus
upgrade.
In case of a Citus downgrade followed by an upgrade, undefined behavior
may be encountered. The reason is that Citus hardcoded the number of
columns in the extension's tables, but after a downgrade and a following
upgrade some of these tables can have more columns, and some of them can
be marked as dropped.
This PR fixes all such tables using the approach introduced in
https://github.com/citusdata/citus/pull/7950, which solved the problem
for the pg_dist_partition table.
See https://github.com/citusdata/citus/issues/7515 for a more thorough
explanation.
---------
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Fixes potential memory corruptions that could happen when
accessing pg_dist_transaction after a Citus downgrade is followed by a
Citus upgrade.
In case of a Citus downgrade followed by an upgrade, undefined behavior
may be encountered. The reason is that Citus hardcoded the number of
columns in the extension's tables, but after a downgrade and a following
upgrade some of these tables can have more columns, and some of them can
be marked as dropped.
This PR fixes all such tables using the approach introduced in #7950,
which solved the problem for the pg_dist_partition table.
See #7515 for a more thorough explanation.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
DESCRIPTION: Adds support for pushing down LEFT/RIGHT outer joins having
a reference table in the outer side and a distributed table on the inner
side (e.g., <reference table> LEFT JOIN <distributed table>)
Partially addresses #6546
1) `<outer:reference>` LEFT JOIN `<inner:distributed>`
2) `<inner:distributed>` RIGHT JOIN `<outer:reference>`
Previously, for outer joins of types (1) and (2), the distributed side
was computed recursively. This was necessary because, when the inner
side of a recurring outer join is a distributed table, it is not
possible to directly distribute the join; the preserved (outer and
recurring) side may generate rows with join keys that hash to different
shards.
To implement distributed planning while maintaining consistency with
global execution semantics, this PR restricts the outer side only to
those partition key values that route to the selected shard during
distributed shard query computation. This method is employed when the
following criteria are met (recursive planning is applied otherwise; an
illustrative query follows the list):
- The join type is (1) or (2) (lateral joins are not supported).
- The outer side is a reference table.
- The outer join qualifications include an equality condition between
the partition column of a distributed table and the recurring table.
- The join is not part of a chained join.
- The “enable_recurring_outer_join_pushdown” GUC is enabled (default is
on).
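For illustration (names assumed: `ref` is a reference table, `dist` is distributed on `dist_key`), a query of shape (1) that is now pushed down when the criteria hold:
```sql
SELECT r.id, count(d.value)
FROM ref r
LEFT JOIN dist d ON r.id = d.dist_key
GROUP BY r.id;
```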
---------
Co-authored-by: ebruaydingol <ebruaydingol@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Do not automatically create citus_columnar when there are
no relations using it.
Previously, we were always creating citus_columnar when creating citus
with version >= 11.1, and we were doing it as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from
citus, but not actually drop them from the database.
* "old columnar" is the one that we had before Citus 11.1 as part of
citus, i.e., before splitting the access method and its catalog into
citus_columnar.
* Create citus_columnar and attach the SQL objects left over from old
columnar to it, so that we can continue supporting the columnar tables
that the user had before Citus 11.1 with citus_columnar.
The first part is unchanged; however, we no longer create citus_columnar
automatically if the user didn't have any relations using
columnar. For this reason, as of Citus 13.2, when these SQL objects are
not owned by an extension and there are no relations using the columnar
access method, we drop these SQL objects when updating Citus to 13.2.
The net effect is still the same as if we automatically created
citus_columnar and the user dropped citus_columnar later, so we should
not have any issues with dropping them.
(**Update:** Seems we've made some assumptions in citus, e.g.,
citus_finish_pg_upgrade() still assumes columnar metadata exists and
tries to apply some fixes for it, so this PR fixes them as well. See the
last section of this PR description.)
Also, ideally I was hoping to just remove some lines of code from
extension.c, where we decide to automatically create citus_columnar when
creating citus; however, this didn't happen to be the case, for two
reasons:
* We still need to automatically create it for the servers using
columnar access method.
* We need to clean up the leftover SQL objects from old columnar when
the above is not the case; otherwise we would have leftover SQL objects
from old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we
should clean them up and let the user decide whether they want to create
citus_columnar when they really need it later.
---
Also made several changes in the test suite because similarly, we don't
always want to have citus_columnar created in citus tests anymore:
* Now, columnar specific test targets, which cover **41** test sql
files, always install columnar by default, by using
"--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus specific test
targets because by default we don't want to have citus_columnar created
during citus tests.
* Excluding citus_columnar specific tests, we have **601** sql files
that we have as citus tests and in **27** of them we manually create
citus_columnar at the very beginning of the test because these tests do
test some functionalities of citus together with columnar tables.
Also, before and after schedules for PG upgrade tests are now duplicated
so we have two versions of each: one with columnar tests and one
without. To choose between them, check-pg-upgrade now supports a
"test-with-columnar" option, which can be set to "true" or anything else
to logically indicate "false". In CI, we run the check-pg-upgrade test
target with both options. The purpose is to ensure we can test PG
upgrades where citus_columnar is not created in the cluster before the
upgrade as well.
Finally, added more tests to multi_extension.sql to test Citus upgrade
scenarios with / without columnar tables / citus_columnar extension.
---
Also, it seems citus_finish_pg_upgrade was assuming that citus_columnar
is always created, but actually we should never have made such an
assumption. To fix that, we moved columnar-specific post-PG-upgrade work
from citus to a new columnar UDF, which is columnar_finish_pg_upgrade.
But to avoid breaking existing customer / managed service scripts, we
continue to automatically perform post PG-upgrade work for columnar
within citus_finish_pg_upgrade, but only if columnar access method
exists this time.
List of extensions that are verified to be working with Citus, and some
special cases that need attention. Thanks to the efforts of @emelsimsek,
@m3hm3t, @alperkocatas, @eaydingol
Enhance security by addressing a code scanning alert and refactoring the
background worker setup code for better maintainability and clarity.
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
fixes #8110
This patch updates the `normalize.sed` script used in pg18 psql
regression tests:
- Replaces the headings “List of tables”, “List of indexes”, and “List
of sequences” with a single, uniform heading: “List of relations”.
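The replacement takes roughly this form (illustrative, not the exact rule):
```sed
# unify psql heading variants into a single heading
s/List of \(tables\|indexes\|sequences\)/List of relations/
```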
fixes #8105
This change lets `FindReferencedTableColumn()` correctly resolve columns
through a CTE even when the expression comes from an outer query level
(`varlevelsup > 0`, `skipOuterVars = false`). Before, we hit an
`Assert(skipOuterVars)` in this path.
**Problem**
* Hitting a CTE after walking outer Vars triggered
`Assert(skipOuterVars)`.
* Cause: we modified `parentQueryList` in place and didn’t rebuild the
correct parent chain before recursing into the CTE, so the path was
considered unsafe.
**Fix**
* Remove the `Assert(skipOuterVars)` in the `RTE_CTE` branch.
* Find the CTE’s owning level via `ctelevelsup` and compute
`cteParentListIndex`.
* Rebuild a private parent list for recursion: `list_copy` →
`list_truncate` → `lappend(current query)`.
* Add a bounds check before indexing the CTE’s `targetList`.
**Why it works**
```diff
-parentQueryList = lappend(parentQueryList, query);
-FindReferencedTableColumn(targetEntry->expr, parentQueryList,
- cteQuery, column, rteContainingReferencedColumn,
- skipOuterVars);
+ /* hand a private, bounded parent list to the recursion */
+ List *newParent = list_copy(parentQueryList);
+ newParent = list_truncate(newParent, cteParentListIndex + 1);
+ newParent = lappend(newParent, query);
+
+ FindReferencedTableColumn(targetEntry->expr,
+ newParent,
+ cteQuery,
+ column,
+ rteContainingReferencedColumn,
+ skipOuterVars);
+}
```
**Before:** We changed `parentQueryList` in place (`parentQueryList =
lappend(...)`) and didn’t trim it to the CTE’s owner level.
**After:** We copy the list, trim it to the CTE’s owner level, then
append the current query. This keeps the parent list accurate for the
current recursion and safe when following outer Vars.
**Example: Nested subquery referencing the CTE (two levels down)**
```
WITH c AS MATERIALIZED (SELECT user_id FROM raw_events_first)
SELECT 1
FROM raw_events_first t
WHERE EXISTS (
SELECT 1
FROM (SELECT user_id FROM c) c2
WHERE c2.user_id = t.user_id
);
```
Levels:
Q0 = top SELECT
Q1 = EXISTS subquery
Q2 = inner (SELECT user_id FROM c)
When resolving c2.user_id inside Q2:
- `parentQueryList` is [Q0, Q1, Q2].
- `ctelevelsup` = 2, so
`cteParentListIndex = length(parentQueryList) - ctelevelsup - 1 = 0`.
- Truncate the copied list to [Q0], append the current query, and
recurse into the CTE's query with [Q0, Q2].
**Tests (added in `multi_insert_select`)**
* **T1:** Correlated subquery that references a CTE (one level down)
Verifies that resolving through `RTE_CTE` after following an outer `Var`
succeeds, row count matches source table.
* **T2:** Nested subquery that references a CTE (two levels down)
Exercises deeper recursion and confirms behavior identical to T1.
* **T3:** Scalar subquery in a target list that reads from the outer CTE
Checks expected row count and that no NULLs are inserted.
These tests cover the cases that previously hit `Assert(skipOuterVars)`
and confirm that CTE references resolve correctly while following outer Vars.
DESCRIPTION: Fixes potential memory corruptions that could happen when
accessing pg_dist_background_task after a Citus downgrade is followed by
a Citus upgrade.
In case of a Citus downgrade followed by an upgrade, undefined behavior
may be encountered. The reason is that Citus hardcoded the number of
columns in the extension's tables, but after a downgrade and a following
upgrade some of these tables can have more columns, and some of them can
be marked as dropped.
This PR fixes all such tables using the approach introduced in #7950,
which solved the problem for the pg_dist_partition table.
See #7515 for a more thorough explanation.
---------
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Introduce a new check to push down a query including union
and outer join, to fix #8091.
In "SafeToPushdownUnionSubquery", we check if the distribution column of
the outer relation is in the target list.
DESCRIPTION: Fixes potential memory corruptions that could happen when a
Citus downgrade is followed by a Citus upgrade.
In case of a Citus downgrade followed by an upgrade, Citus could crash
with a core dump. The reason is that Citus hardcoded the number of
columns in the pg_dist_partition table, but after a downgrade and a
following upgrade the table can have more columns, and some of them can
be marked as dropped.
The patch addresses this by using tupleDescriptor->natts (the
Postgres-internal approach).
Fixes #7933.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
We never update an older version of a SQL object for consistency across
release tags, so this commit moves "DROP FUNCTION .." for the older
version of "pg_catalog.worker_last_saved_explain_analyze();" to the
appropriate migration script.
See https://github.com/citusdata/citus/pull/8017.
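Concretely, the migration script now carries the drop along these lines (illustrative):
```sql
DROP FUNCTION pg_catalog.worker_last_saved_explain_analyze();
```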
DESCRIPTION: Fixed a bug in EXPLAIN ANALYZE to prevent unintended (duplicate) execution of the (sub)plans during the explain phase.
Fixes #4212
### 🐞 Bug #4212: Redundant (Subplan) Execution in the `EXPLAIN ANALYZE`
codepath
#### 🔍 Background
In the standard PostgreSQL execution path, `ExplainOnePlan()` is
responsible for two distinct operations depending on whether `EXPLAIN
ANALYZE` is requested:
1. **Execute the plan**
```c
if (es->analyze)
ExecutorRun(queryDesc, direction, 0L, true);
```
2. **Print the plan tree**
```c
ExplainPrintPlan(es, queryDesc);
```
When printing the plan, the executor should **not run the plan again**.
Execution is only expected to happen once—at the top level when
`es->analyze = true`.
---
#### ⚠️ Issue in Citus
In the Citus implementation, `CustomScanMethods.ExplainCustomScan =
CitusExplainScan`, the custom scan explain callback used to print
explain information for a Citus plan, incorrectly performs **redundant
execution** inside the explain path of `ExplainPrintPlan()`:
```c
ExplainOnePlan()
ExplainPrintPlan()
ExplainNode()
CitusExplainScan()
if (distributedPlan->subPlanList != NIL)
{
ExplainSubPlans(distributedPlan, es);
{
PlannedStmt *plan = subPlan->plan;
ExplainOnePlan(plan, ...); // ⚠️ May re-execute subplan if es->analyze is true
}
}
```
This causes the subplans to be **executed again**, even though they have
already been executed during the top-level plan execution. This behavior
violates the expectation in PostgreSQL where `EXPLAIN ANALYZE` should
**execute each node exactly once** for analysis.
---
#### ✅ Fix (proposed)
Save the output of the subplans during `ExecuteSubPlans()`, and reuse it
later in `ExplainSubPlans()`.
fixes #8072, fixes #8055
706054b11b
Before the fix, when trying to create a cluster with asserts enabled:
`citus_dev make test1 --destroy`
```
TRAP: failed Assert("HaveRegisteredOrActiveSnapshot()"), File: "heapam.c", Line: 232, PID: 75572
postgres: citus citus [local] SELECT(ExceptionalCondition+0x6e)[0x5585e16123e6]
postgres: citus citus [local] SELECT(heap_insert+0x220)[0x5585e10709af]
postgres: citus citus [local] SELECT(simple_heap_insert+0x33)[0x5585e1071a20]
postgres: citus citus [local] SELECT(CatalogTupleInsert+0x32)[0x5585e1135843]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11e0aa)[0x7fa26f1ca0aa]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11b607)[0x7fa26f1c7607]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11bf25)[0x7fa26f1c7f25]
/home/citus/.pgenv/pgsql-18beta2/lib/citus.so(+0x11d4e2)[0x7fa26f1c94e2]
postgres: citus citus [local] SELECT(+0x1c267d)[0x5585e10e967d]
postgres: citus citus [local] SELECT(+0x1c6ba0)[0x5585e10edba0]
postgres: citus citus [local] SELECT(+0x1c7b80)[0x5585e10eeb80]
postgres: citus citus [local] SELECT(CommitTransactionCommand+0xd)[0x5585e10eef0a]
postgres: citus citus [local] SELECT(+0x575b3d)[0x5585e149cb3d]
postgres: citus citus [local] SELECT(+0x5788ce)[0x5585e149f8ce]
postgres: citus citus [local] SELECT(PostgresMain+0xae7)[0x5585e14a2088]
postgres: citus citus [local] SELECT(BackendMain+0x51)[0x5585e149ab36]
postgres: citus citus [local] SELECT(postmaster_child_launch+0x101)[0x5585e13d6b32]
postgres: citus citus [local] SELECT(+0x4b273f)[0x5585e13d973f]
postgres: citus citus [local] SELECT(+0x4b49f3)[0x5585e13db9f3]
postgres: citus citus [local] SELECT(PostmasterMain+0x1089)[0x5585e13dcee2]
postgres: citus citus [local] SELECT(main+0x1d7)[0x5585e12e3428]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7fa271421d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7fa271421e40]
```
Bumps [black](https://github.com/psf/black) from 23.11.0 to 24.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/psf/black/releases">black's
releases</a>.</em></p>
<blockquote>
<h2>24.3.0</h2>
<h3>Highlights</h3>
<p>This release is a milestone: it fixes Black's first CVE security
vulnerability. If you
run Black on untrusted input, or if you habitually put thousands of
leading tab
characters in your docstrings, you are strongly encouraged to upgrade
immediately to fix
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.</p>
<p>This release also fixes a bug in Black's AST safety check that
allowed Black to make
incorrect changes to certain f-strings that are valid in Python 3.12 and
higher.</p>
<h3>Stable style</h3>
<ul>
<li>Don't move comments along with delimiters, which could cause crashes
(<a
href="https://redirect.github.com/psf/black/issues/4248">#4248</a>)</li>
<li>Strengthen AST safety check to catch more unsafe changes to strings.
Previous versions
of Black would incorrectly format the contents of certain unusual
f-strings containing
nested strings with the same quote type. Now, Black will crash on such
strings until
support for the new f-string syntax is implemented. (<a
href="https://redirect.github.com/psf/black/issues/4270">#4270</a>)</li>
<li>Fix a bug where line-ranges exceeding the last code line would not
work as expected
(<a
href="https://redirect.github.com/psf/black/issues/4273">#4273</a>)</li>
</ul>
<h3>Performance</h3>
<ul>
<li>Fix catastrophic performance on docstrings that contain large
numbers of leading tab
characters. This fixes
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.
(<a
href="https://redirect.github.com/psf/black/issues/4278">#4278</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Note what happens when <code>--check</code> is used with
<code>--quiet</code> (<a
href="https://redirect.github.com/psf/black/issues/4236">#4236</a>)</li>
</ul>
<h2>24.2.0</h2>
<h3>Stable style</h3>
<ul>
<li>Fixed a bug where comments where mistakenly removed along with
redundant parentheses
(<a
href="https://redirect.github.com/psf/black/issues/4218">#4218</a>)</li>
</ul>
<h3>Preview style</h3>
<ul>
<li>Move the <code>hug_parens_with_braces_and_square_brackets</code>
feature to the unstable style
due to an outstanding crash and proposed formatting tweaks (<a
href="https://redirect.github.com/psf/black/issues/4198">#4198</a>)</li>
<li>Fixed a bug where base expressions caused inconsistent formatting of
** in tenary
expression (<a
href="https://redirect.github.com/psf/black/issues/4154">#4154</a>)</li>
<li>Checking for newline before adding one on docstring that is almost
at the line limit
(<a
href="https://redirect.github.com/psf/black/issues/4185">#4185</a>)</li>
<li>Remove redundant parentheses in <code>case</code> statement
<code>if</code> guards (<a
href="https://redirect.github.com/psf/black/issues/4214">#4214</a>).</li>
</ul>
<h3>Configuration</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/psf/black/blob/main/CHANGES.md">black's
changelog</a>.</em></p>
<blockquote>
<h2>24.3.0</h2>
<h3>Highlights</h3>
<p>This release is a milestone: it fixes Black's first CVE security
vulnerability. If you
run Black on untrusted input, or if you habitually put thousands of
leading tab
characters in your docstrings, you are strongly encouraged to upgrade
immediately to fix
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.</p>
<p>This release also fixes a bug in Black's AST safety check that
allowed Black to make
incorrect changes to certain f-strings that are valid in Python 3.12 and
higher.</p>
<h3>Stable style</h3>
<ul>
<li>Don't move comments along with delimiters, which could cause crashes
(<a
href="https://redirect.github.com/psf/black/issues/4248">#4248</a>)</li>
<li>Strengthen AST safety check to catch more unsafe changes to strings.
Previous versions
of Black would incorrectly format the contents of certain unusual
f-strings containing
nested strings with the same quote type. Now, Black will crash on such
strings until
support for the new f-string syntax is implemented. (<a
href="https://redirect.github.com/psf/black/issues/4270">#4270</a>)</li>
<li>Fix a bug where line-ranges exceeding the last code line would not
work as expected
(<a
href="https://redirect.github.com/psf/black/issues/4273">#4273</a>)</li>
</ul>
<h3>Performance</h3>
<ul>
<li>Fix catastrophic performance on docstrings that contain large
numbers of leading tab
characters. This fixes
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.
(<a
href="https://redirect.github.com/psf/black/issues/4278">#4278</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Note what happens when <code>--check</code> is used with
<code>--quiet</code> (<a
href="https://redirect.github.com/psf/black/issues/4236">#4236</a>)</li>
</ul>
<h2>24.2.0</h2>
<h3>Stable style</h3>
<ul>
<li>Fixed a bug where comments where mistakenly removed along with
redundant parentheses
(<a
href="https://redirect.github.com/psf/black/issues/4218">#4218</a>)</li>
</ul>
<h3>Preview style</h3>
<ul>
<li>Move the <code>hug_parens_with_braces_and_square_brackets</code>
feature to the unstable style
due to an outstanding crash and proposed formatting tweaks (<a
href="https://redirect.github.com/psf/black/issues/4198">#4198</a>)</li>
<li>Fixed a bug where base expressions caused inconsistent formatting of
** in tenary
expression (<a
href="https://redirect.github.com/psf/black/issues/4154">#4154</a>)</li>
<li>Checking for newline before adding one on docstring that is almost
at the line limit
(<a
href="https://redirect.github.com/psf/black/issues/4185">#4185</a>)</li>
<li>Remove redundant parentheses in <code>case</code> statement
<code>if</code> guards (<a
href="https://redirect.github.com/psf/black/issues/4214">#4214</a>).</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="552baf8229"><code>552baf8</code></a>
Prepare release 24.3.0 (<a
href="https://redirect.github.com/psf/black/issues/4279">#4279</a>)</li>
<li><a
href="f000936726"><code>f000936</code></a>
Fix catastrophic performance in lines_with_leading_tabs_expanded() (<a
href="https://redirect.github.com/psf/black/issues/4278">#4278</a>)</li>
<li><a
href="7b5a657285"><code>7b5a657</code></a>
Fix --line-ranges behavior when ranges are at EOF (<a
href="https://redirect.github.com/psf/black/issues/4273">#4273</a>)</li>
<li><a
href="1abcffc818"><code>1abcffc</code></a>
Use regex where we ignore case on windows (<a
href="https://redirect.github.com/psf/black/issues/4252">#4252</a>)</li>
<li><a
href="719e67462c"><code>719e674</code></a>
Fix 4227: Improve documentation for --quiet --check (<a
href="https://redirect.github.com/psf/black/issues/4236">#4236</a>)</li>
<li><a
href="e5510afc06"><code>e5510af</code></a>
update plugin url for Thonny (<a
href="https://redirect.github.com/psf/black/issues/4259">#4259</a>)</li>
<li><a
href="6af7d11096"><code>6af7d11</code></a>
Fix AST safety check false negative (<a
href="https://redirect.github.com/psf/black/issues/4270">#4270</a>)</li>
<li><a
href="f03ee113c9"><code>f03ee11</code></a>
Ensure <code>blib2to3.pygram</code> is initialized before use (<a
href="https://redirect.github.com/psf/black/issues/4224">#4224</a>)</li>
<li><a
href="e4bfedbec2"><code>e4bfedb</code></a>
fix: Don't move comments while splitting delimiters (<a
href="https://redirect.github.com/psf/black/issues/4248">#4248</a>)</li>
<li><a
href="d0287e1f75"><code>d0287e1</code></a>
Make trailing comma logic more concise (<a
href="https://redirect.github.com/psf/black/issues/4202">#4202</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/psf/black/compare/23.11.0...24.3.0">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/citusdata/citus/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The assert on the number of shards incorrectly used the value of
citus.shard_replication_factor; it should check the table's metadata
to determine the replication factor of its data, and not assume it is
the current GUC value.
Commit 245a62df3e included an assertion on a struct field that only
exists in PG16+, without a PG_VERSION_NUM check. This commit removes the
offending line of code. The same assertion is present later in the
function with the PG_VERSION_NUM check, so the offending line of code is
redundant.
DESCRIPTION: Fixes problematic UPDATE statements with indirection and array/jsonb subscripting with more than one field.
Fixes #4092, #7674 and #5621. Issues #7674 and #4092 involve an UPDATE with out-of-order columns and a sublink (SELECT) in the source, e.g. `UPDATE T SET (col3, col1, col4) = (SELECT 3, 1, 4)`, where an incorrect value could get written to a column because query deparsing generated an incorrect SQL statement. To address this, the fix adds an additional
check to `ruleutils` to ensure that the target list of an UPDATE statement is in an order in which deparsing can be done safely. It is needed when the source of the UPDATE has a sublink, because the Postgres `rewrite` phase will have put the target list in attribute order, but for deparsing to produce correct SQL text the target list needs to be in order of the references (or `paramids`) to the target list of the sublink(s). Issue #5621 involves an UPDATE with array/jsonb subscripting that can behave incorrectly with more than one field, again because Citus query deparsing receives a post-`rewrite` query tree. The fix also adds a
check to `ruleutils` to enable correct query deparsing of this UPDATE.
---------
Co-authored-by: Ibrahim Halatci <ihalatci@gmail.com>
Co-authored-by: Colm McHugh <colm.mchugh@gmail.com>
DESCRIPTION: Avoid query deparse and planning of shard query in local execution. Adds citus.enable_local_execution_local_plan GUC to allow avoiding unnecessary query deparsing to improve performance of fast-path queries targeting local shards.
If a fast path query resolves to a shard that is local to the node planning the query, a shortcut can be taken so that the OID of the shard is plugged into the parse tree, which is then planned by Postgres. In `local_executor.c` the task uses that plan instead of parsing and planning a shard query. How this is done: The fast path planner identifies if the shortcut is possible, and then the distributed planner checks, using `CheckAndBuildDelayedFastPathPlan()`, if a local plan can be generated or if the shard query should be generated.
This optimization is controlled by a GUC `citus.enable_local_execution_local_plan` which is on by default. A new
regress test `local_execution_local_plan` tests both row-sharding and schema sharding. Negative tests are added to
`local_shard_execution_dropped_column` to verify that the optimization is not taken when the shard is local but there is a difference between the shard and distributed table because of a dropped column.
Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/psf/black/releases">black's
releases</a>.</em></p>
<blockquote>
<h2>24.3.0</h2>
<h3>Highlights</h3>
<p>This release is a milestone: it fixes Black's first CVE security
vulnerability. If you
run Black on untrusted input, or if you habitually put thousands of
leading tab
characters in your docstrings, you are strongly encouraged to upgrade
immediately to fix
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.</p>
<p>This release also fixes a bug in Black's AST safety check that
allowed Black to make
incorrect changes to certain f-strings that are valid in Python 3.12 and
higher.</p>
<h3>Stable style</h3>
<ul>
<li>Don't move comments along with delimiters, which could cause crashes
(<a
href="https://redirect.github.com/psf/black/issues/4248">#4248</a>)</li>
<li>Strengthen AST safety check to catch more unsafe changes to strings.
Previous versions
of Black would incorrectly format the contents of certain unusual
f-strings containing
nested strings with the same quote type. Now, Black will crash on such
strings until
support for the new f-string syntax is implemented. (<a
href="https://redirect.github.com/psf/black/issues/4270">#4270</a>)</li>
<li>Fix a bug where line-ranges exceeding the last code line would not
work as expected
(<a
href="https://redirect.github.com/psf/black/issues/4273">#4273</a>)</li>
</ul>
<h3>Performance</h3>
<ul>
<li>Fix catastrophic performance on docstrings that contain large
numbers of leading tab
characters. This fixes
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.
(<a
href="https://redirect.github.com/psf/black/issues/4278">#4278</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Note what happens when <code>--check</code> is used with
<code>--quiet</code> (<a
href="https://redirect.github.com/psf/black/issues/4236">#4236</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/psf/black/blob/main/CHANGES.md">black's
changelog</a>.</em></p>
<blockquote>
<h2>24.3.0</h2>
<h3>Highlights</h3>
<p>This release is a milestone: it fixes Black's first CVE security
vulnerability. If you
run Black on untrusted input, or if you habitually put thousands of
leading tab
characters in your docstrings, you are strongly encouraged to upgrade
immediately to fix
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.</p>
<p>This release also fixes a bug in Black's AST safety check that
allowed Black to make
incorrect changes to certain f-strings that are valid in Python 3.12 and
higher.</p>
<h3>Stable style</h3>
<ul>
<li>Don't move comments along with delimiters, which could cause crashes
(<a
href="https://redirect.github.com/psf/black/issues/4248">#4248</a>)</li>
<li>Strengthen AST safety check to catch more unsafe changes to strings.
Previous versions
of Black would incorrectly format the contents of certain unusual
f-strings containing
nested strings with the same quote type. Now, Black will crash on such
strings until
support for the new f-string syntax is implemented. (<a
href="https://redirect.github.com/psf/black/issues/4270">#4270</a>)</li>
<li>Fix a bug where line-ranges exceeding the last code line would not
work as expected
(<a
href="https://redirect.github.com/psf/black/issues/4273">#4273</a>)</li>
</ul>
<h3>Performance</h3>
<ul>
<li>Fix catastrophic performance on docstrings that contain large
numbers of leading tab
characters. This fixes
<a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-21503">CVE-2024-21503</a>.
(<a
href="https://redirect.github.com/psf/black/issues/4278">#4278</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Note what happens when <code>--check</code> is used with
<code>--quiet</code> (<a
href="https://redirect.github.com/psf/black/issues/4236">#4236</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="552baf8229"><code>552baf8</code></a>
Prepare release 24.3.0 (<a
href="https://redirect.github.com/psf/black/issues/4279">#4279</a>)</li>
<li><a
href="f000936726"><code>f000936</code></a>
Fix catastrophic performance in lines_with_leading_tabs_expanded() (<a
href="https://redirect.github.com/psf/black/issues/4278">#4278</a>)</li>
<li><a
href="7b5a657285"><code>7b5a657</code></a>
Fix --line-ranges behavior when ranges are at EOF (<a
href="https://redirect.github.com/psf/black/issues/4273">#4273</a>)</li>
<li><a
href="1abcffc818"><code>1abcffc</code></a>
Use regex where we ignore case on windows (<a
href="https://redirect.github.com/psf/black/issues/4252">#4252</a>)</li>
<li><a
href="719e67462c"><code>719e674</code></a>
Fix 4227: Improve documentation for --quiet --check (<a
href="https://redirect.github.com/psf/black/issues/4236">#4236</a>)</li>
<li><a
href="e5510afc06"><code>e5510af</code></a>
update plugin url for Thonny (<a
href="https://redirect.github.com/psf/black/issues/4259">#4259</a>)</li>
<li><a
href="6af7d11096"><code>6af7d11</code></a>
Fix AST safety check false negative (<a
href="https://redirect.github.com/psf/black/issues/4270">#4270</a>)</li>
<li><a
href="f03ee113c9"><code>f03ee11</code></a>
Ensure <code>blib2to3.pygram</code> is initialized before use (<a
href="https://redirect.github.com/psf/black/issues/4224">#4224</a>)</li>
<li><a
href="e4bfedbec2"><code>e4bfedb</code></a>
fix: Don't move comments while splitting delimiters (<a
href="https://redirect.github.com/psf/black/issues/4248">#4248</a>)</li>
<li><a
href="d0287e1f75"><code>d0287e1</code></a>
Make trailing comma logic more concise (<a
href="https://redirect.github.com/psf/black/issues/4202">#4202</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/psf/black/compare/24.2.0...24.3.0">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/citusdata/citus/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.7 to
3.0.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/releases">werkzeug's
releases</a>.</em></p>
<blockquote>
<h2>3.0.6</h2>
<p>This is the Werkzeug 3.0.6 security fix release, which fixes security
issues but does not otherwise change behavior and should not result in
breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.6/">https://pypi.org/project/Werkzeug/3.0.6/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6</a></p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file fields. <a
href="https://github.com/advisories/GHSA-q34m-jh98-gwm2">GHSA-q34m-jh98-gwm2</a></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by <code>ntpath.isabs</code> on Python < 3.11. <a
href="https://github.com/advisories/GHSA-f9vj-2wh5-fj8j">GHSA-f9vj-2wh5-fj8j</a></li>
</ul>
<h2>3.0.5</h2>
<p>This is the Werkzeug 3.0.5 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.5/">https://pypi.org/project/Werkzeug/3.0.5/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/37?closed=1">https://github.com/pallets/werkzeug/milestone/37?closed=1</a></p>
<ul>
<li>The Watchdog reloader ignores file closed no write events. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2945">#2945</a></li>
<li>Logging works with client addresses containing an IPv6 scope. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2952">#2952</a></li>
<li>Ignore invalid authorization parameters. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2955">#2955</a></li>
<li>Improve type annotation for <code>SharedDataMiddleware</code>. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2958">#2958</a></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current UID does not have an associated name. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2957">#2957</a></li>
</ul>
<h2>3.0.4</h2>
<p>This is the Werkzeug 3.0.4 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.4/">https://pypi.org/project/Werkzeug/3.0.4/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/36?closed=1">https://github.com/pallets/werkzeug/milestone/36?closed=1</a></p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2930">#2930</a></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2904">#2904</a></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2916">#2916</a></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python < 3.13.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2926">#2926</a></li>
<li>Debugger pin auth works when the URL already contains a query
string.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2918">#2918</a></li>
</ul>
<h2>3.0.3</h2>
<p>This is the Werkzeug 3.0.3 security release, which fixes security
issues and bugs but does not otherwise change behavior and should not
result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.3/">https://pypi.org/project/Werkzeug/3.0.3/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-3">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-3</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/35?closed=1">https://github.com/pallets/werkzeug/milestone/35?closed=1</a></p>
<ul>
<li>Only allow <code>localhost</code>, <code>.localhost</code>,
<code>127.0.0.1</code>, or the specified hostname when running the dev
server, to make debugger requests. Additional hosts can be added by
using the debugger middleware directly. The debugger UI makes requests
using the full URL rather than only the path. GHSA-2g68-c3qc-8985</li>
<li>Make reloader more robust when <code>""</code> is in
<code>sys.path</code>. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2823">#2823</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's
changelog</a>.</em></p>
<blockquote>
<h2>Version 3.0.6</h2>
<p>Released 2024-10-25</p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file
fields. :ghsa:<code>q34m-jh98-gwm2</code></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by
<code>ntpath.isabs</code> on Python < 3.11.
:ghsa:<code>f9vj-2wh5-fj8j</code></li>
</ul>
<h2>Version 3.0.5</h2>
<p>Released 2024-10-24</p>
<ul>
<li>The Watchdog reloader ignores file closed no write events.
:issue:<code>2945</code></li>
<li>Logging works with client addresses containing an IPv6 scope
:issue:<code>2952</code></li>
<li>Ignore invalid authorization parameters.
:issue:<code>2955</code></li>
<li>Improve type annotation for <code>SharedDataMiddleware</code>.
:issue:<code>2958</code></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current
UID does not have an associated name. :issue:<code>2957</code></li>
</ul>
<h2>Version 3.0.4</h2>
<p>Released 2024-08-21</p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. :issue:<code>2930</code></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. :issue:<code>2904</code></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. :issue:<code>2916</code></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python < 3.13.
:issue:<code>2926</code></li>
<li>Debugger pin auth works when the URL already contains a query
string.
:issue:<code>2918</code></li>
</ul>
<h2>Version 3.0.3</h2>
<p>Released 2024-05-05</p>
<ul>
<li>Only allow <code>localhost</code>, <code>.localhost</code>,
<code>127.0.0.1</code>, or the specified
hostname when running the dev server, to make debugger requests.
Additional
hosts can be added by using the debugger middleware directly. The
debugger</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5eaefc3996"><code>5eaefc3</code></a>
release version 3.0.6</li>
<li><a
href="2767bcb10a"><code>2767bcb</code></a>
Merge commit from fork</li>
<li><a
href="87cc78a25f"><code>87cc78a</code></a>
catch special absolute path on Windows Python < 3.11</li>
<li><a
href="50cfeebcb0"><code>50cfeeb</code></a>
Merge commit from fork</li>
<li><a
href="8760275afb"><code>8760275</code></a>
apply max_form_memory_size another level up in the parser</li>
<li><a
href="8d6a12e2af"><code>8d6a12e</code></a>
start version 3.0.6</li>
<li><a
href="a7b121abc7"><code>a7b121a</code></a>
release version 3.0.5 (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2961">#2961</a>)</li>
<li><a
href="9caf72ac06"><code>9caf72a</code></a>
release version 3.0.5</li>
<li><a
href="e28a2451e9"><code>e28a245</code></a>
catch OSError from getpass.getuser (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2960">#2960</a>)</li>
<li><a
href="e6b4cce97e"><code>e6b4cce</code></a>
catch OSError from getpass.getuser</li>
<li>Additional commits viewable in <a
href="https://github.com/pallets/werkzeug/compare/2.3.7...3.0.6">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/citusdata/citus/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
DESCRIPTION: Automatically updates dynamic_library_path when CDC is
enabled
Fixes #7715
According to the documentation and `pg_settings`, the context of the
`citus.enable_change_data_capture` parameter is `user`.
However, changing this parameter — even as a superuser — doesn't work as
expected: while the initial copy phase works correctly, subsequent
change events are not propagated.
This appears to be due to the fact that `dynamic_library_path` is only
updated to `$libdir/citus_decoders:$libdir` when the server is restarted
and the `_PG_init` function is invoked.
To address this, I added an `EnableChangeDataCaptureAssignHook` that
automatically updates `dynamic_library_path` at runtime when
`citus.enable_change_data_capture` is enabled, ensuring that the CDC
decoder libraries are properly loaded.
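A minimal sketch of the idea (not the exact Citus code; only the hook
name comes from this PR, the rest is illustrative):
```c
#include "postgres.h"
#include "fmgr.h"      /* Dynamic_library_path */
#include "utils/guc.h" /* SetConfigOption */

/* assign hook for citus.enable_change_data_capture (sketch) */
static void
EnableChangeDataCaptureAssignHook(bool newValue, void *extra)
{
	/* prepend the decoder directory once, when CDC gets enabled */
	if (newValue &&
		strstr(Dynamic_library_path, "citus_decoders") == NULL)
	{
		SetConfigOption("dynamic_library_path",
						"$libdir/citus_decoders:$libdir",
						PGC_SUSET, PGC_S_OVERRIDE);
	}
}
```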
Note that `dynamic_library_path` is already a `superuser`-context
parameter in base PostgreSQL, so updating it from within the assign hook
should be safe and consistent with PostgreSQL’s configuration model.
If there’s any reason this approach might be problematic or if there’s a
preferred alternative, I’d appreciate any feedback.
cc. @jy-min
---------
Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
Co-authored-by: ibrahim halatci <ihalatci@gmail.com>
Fixes #8040
```
- Custom Scan (Citus Adaptive) (actual rows=0 loops=1)
+ Custom Scan (Citus Adaptive) (actual rows=0.00 loops=1)
```
Add a normalization rule to the pg_regress `normalize.sed` script that
strips the trailing decimal fraction from `actual rows=` counts (e.g.
turning `actual rows=0.00` into `actual rows=0`). This silences noisy
diffs introduced by the PostgreSQL 18 beta, whose EXPLAIN ANALYZE output
now reports fractional row counts.
commit b06bde5771
Fixes #8019
**Background / Problem**
- PostgreSQL 18 (commit
[a07e03f…](a07e03fd8f))
removed `heap_inplace_update()` and related helpers.
- Citus’ columnar writer relied on that API in
`UpdateStripeMetadataRow()` to patch the `columnar_stripe` catalog row
with the stripe file-offset, size, and row-count.
- Building the extension against PG 18 therefore failed at link-time
and, if stubbed out, left `file_offset = 0`, causing every insert to
abort with
`ERROR: attempted columnar write … to invalid logical offset: 0`
**Scope of This PR**
- Keep the fast-path on PG 12–17 (`heap_inplace_update()` unchanged).
- Switch to `CatalogTupleUpdate()` on PG 18+, matching core’s new
catalog-update API (see the sketch after this list).
- Bump the lock level from `AccessShareLock` → `RowExclusiveLock` when
the normal heap-update path is taken.
- No behavioral changes for users on PG ≤ 17
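A condensed sketch of the guarded update in `UpdateStripeMetadataRow()`
(the relation and tuple variable names are illustrative):
```c
#if PG_VERSION_NUM >= PG_VERSION_18
	/* PG 18+: the in-place helper is gone; go through the regular
	 * catalog-update path (which needs RowExclusiveLock, see above) */
	CatalogTupleUpdate(columnarStripes, &modifiedTuple->t_self, modifiedTuple);
#else
	/* PG 15-17: patch the catalog row in place, as before */
	heap_inplace_update(columnarStripes, modifiedTuple);
#endif
```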
Fixes #8020
PostgreSQL 18 introduces two new, *pseudo* rangetable‐entry kinds that
Citus’ downstream deparser must recognize:
1. **Pulled-up shard RTE clones** (`CITUS_RTE_SHARD` with `relid ==
InvalidOid`)
2. **Grouping-step RTE** (`RTE_GROUP`, alias `*GROUP*`, not actually in
the FROM clause)
Without special handling, Citus crashes or emits invalid SQL when
running against PG 18beta1:
* **`ERROR: could not open relation with OID 0`**
Citus was unconditionally calling `relation_open(rte->relid,…)` on
entries whose `relid` is 0.
* **`ERROR: missing FROM-clause entry for table "*GROUP*"`**
Citus’ `set_rtable_names()` assigned the synthetic `*GROUP*` alias but
never printed a matching FROM item.
This PR teaches Citus’ `ruleutils_18.c` to skip catalog lookups for RTEs
without valid OIDs and to suppress the grouping-RTE alias, restoring
compatibility with both PG 17 and PG 18.
---
## Background
* **Upstream commit [247dea8](247dea89f7)**
Introduced `RTE_GROUP` for the grouping step so that multiple subqueries
in `GROUP BY`/`HAVING` can be deduplicated and planned correctly.
* **Citus PR [#6428](https://github.com/citusdata/citus/pull/6428)**
Added initial support for treating shard RTEs like real
relations—calling `relation_open()` to pick up renamed-column fixes.
Worked fine on PG 11–17, but PG 18’s pull-up logic clones those shard
RTEs with `relid=0`, leading to OID 0 crashes.
---
## Changes
1. **Guard `relation_open()`**
In `set_relation_column_names()`, only call `relation_open(rte->relid,
…)` when
```c
OidIsValid(rte->relid)
```
Prevents the “could not open relation with OID 0” crash on both
pulled-up shards and synthetic RTEs.
2. **Handle pulled-up shards** (`CITUS_RTE_SHARD` with `relid=0`)
Copy column names directly from `rte->eref->colnames` instead of hitting
the catalog (see the combined sketch after this list).
3. **Handle grouping RTE** (`RTE_GROUP`)
* **In `set_relation_column_names()`**: fallback to
`rte->eref->colnames` for `RTE_GROUP`.
* **In `set_rtable_names()`**: explicitly assign
```c
refname = NULL; /* never show *GROUP* in FROM */
```
so that no `*GROUP*` alias is ever printed.
**Why this is required:**
PostgreSQL 18’s parser now represents the grouping step with a synthetic
RTE whose alias is always `*GROUP*`—and that RTE is **never** actually
listed in the `FROM` clause. If Citus’ deparser assigns and emits
`*GROUP*` as a table reference, the pushed-down SQL becomes:
```sql
SELECT *GROUP*.mygroupcol … -- but there is no “*GROUP*” in the FROM
list
```
Workers then fail:
```
ERROR: missing FROM-clause entry for table "*GROUP*"
```
By setting `refname = NULL` for `RTE_GROUP` in `set_rtable_names()`, the
deparser prints just the column name unqualified, exactly matching
upstream PG 18’s behavior and yielding valid SQL on the workers.
4. **Maintain existing behavior on PG 15–17**
* Shard RTEs *with* valid `relid` still open the catalog to pick up
renamed-column fixes.
* No impact on other RTE kinds or versions prior to PG 18.
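Putting items 1–3 together, a condensed sketch of the guard (the
surrounding variable names are illustrative, not the verbatim
`ruleutils_18.c` code):
```c
if (OidIsValid(rte->relid))
{
	/* real relations and shard RTEs with a valid OID: read the
	 * catalog to pick up renamed columns, as before */
	Relation relation = relation_open(rte->relid, AccessShareLock);
	/* ... copy attribute names from the relation descriptor ... */
	relation_close(relation, AccessShareLock);
}
else
{
	/* pulled-up shard clones (relid == InvalidOid) and RTE_GROUP:
	 * fall back to the names already recorded in the RTE */
	List *colnames = rte->eref->colnames;
	/* ... */
}
```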
---
Fixes #8032
PostgreSQL 18 introduces a dedicated “grouping-step” range table entry
(`RTE_GROUP`) whose target columns are exactly the expressions in our
`GROUP BY` clause, rather than hiding them as `resjunk` items. In
Citus’s distributed planner, the function `FindReferencedTableColumn`
must be able to map from a `Var` referencing a grouped column back to
the underlying table column. Without special handling for `RTE_GROUP`,
queries that rely on pushdown of `GROUP BY` expressions can fail or
mis-identify their target columns.
This PR adds support for `RTE_GROUP` in Citus when built against PG 18
or later, ensuring that:
* Each grouped expression is correctly resolved.
* The pushdown planner can trace a `Var`’s `varattno` into the
corresponding `groupexprs` list.
* Existing behavior on PG < 18 is unchanged.
---
## What’s Changed
In **`src/backend/distributed/planner/multi_logical_optimizer.c`**,
inside `FindReferencedTableColumn`:
* **Under** `#if PG_VERSION_NUM >= PG_VERSION_18`
Introduce an `else if` branch for
```c
rangeTableEntry->rtekind == RTE_GROUP
```
* **Extraction of grouped expressions:**
```c
List *groupexprs = rangeTableEntry->groupexprs;
AttrNumber groupIndex = candidateColumn->varattno - 1;
```
* **Safety check** to guard against malformed `Var` numbers:
```c
if (groupIndex < 0 || groupIndex >= list_length(groupexprs))
return; /* malformed Var */
```
* **Recursive descent:**
Fetch the corresponding expression from `groupexprs` and call
```c
FindReferencedTableColumn(groupExpr, parentQueryList, query,
column, rteContainingReferencedColumn,
skipOuterVars);
```
so that the normal resolution logic applies to the underlying
expression.
* **Unchanged code path** for PG < 18 and for other `rtekind` values
(the full branch is sketched below).
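Assembled from the fragments above, the new branch reads roughly as
follows (a sketch, not the verbatim diff):
```c
#if PG_VERSION_NUM >= PG_VERSION_18
	/* ... after the existing rtekind branches ... */
	else if (rangeTableEntry->rtekind == RTE_GROUP)
	{
		/* map the Var back to the grouping expression it references */
		List *groupexprs = rangeTableEntry->groupexprs;
		AttrNumber groupIndex = candidateColumn->varattno - 1;

		if (groupIndex < 0 || groupIndex >= list_length(groupexprs))
		{
			return; /* malformed Var */
		}

		/* recurse so the normal resolution logic applies */
		Expr *groupExpr = (Expr *) list_nth(groupexprs, groupIndex);
		FindReferencedTableColumn(groupExpr, parentQueryList, query,
								  column, rteContainingReferencedColumn,
								  skipOuterVars);
	}
#endif
```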
---
This PR provides a successful build against PG 18 Beta 1. The RuleUtils
changes were reviewed separately in #8010.
## PG 18 Beta 1–related changes for building Citus
### TupleDesc / Attr layout
**What changed in PG:** Postgres consolidated the
`TupleDescData.attrs[]` array into a more compact representation. Direct
field access (`tupdesc->attrs[i]`) was replaced by the
`TupleDescAttr()` accessor API.
**Citus adaptation:** Everywhere we previously used
`tupdesc->attrs[...]`, we now call `TupleDescAttr(tupdesc, idx)` (or our
own `Attr()` macro) under a compatibility guard.
*
5983a4cffc
General logic (the mechanical change is illustrated after this list):
* Use `Attr(...)` in places where `columnar_version_compat.h` is
included. This avoids the need to sprinkle `#if PG_VERSION_NUM` guards
around each attribute access.
* Use `TupleDescAttr(tupdesc, i)` when the relevant PostgreSQL header is
already included and the additional macro indirection is unnecessary.
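For illustration, the per-call-site change looks like this (variable
names are hypothetical):
```c
/* before (breaks on PG 18, where the attrs[] layout changed):
 *     Form_pg_attribute attribute = &tupleDescriptor->attrs[columnIndex];
 */

/* after: the accessor works on every supported version */
Form_pg_attribute attribute = TupleDescAttr(tupleDescriptor, columnIndex);
```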
### Collation‐aware `LIKE`
**What changed in PG:** The `textlike` operator now requires an explicit
collation, to avoid ambiguous‐collation errors. Core code switched from
`DirectFunctionCall2(textlike, ...)` to
`DirectFunctionCall2Coll(textlike, DEFAULT_COLLATION_OID, ...)`.
**Citus adaptation:** In `remote_commands.c` and any other `LIKE` call
site, we now use `DirectFunctionCall2Coll(textlike,
DEFAULT_COLLATION_OID, ...)` and `#include "catalog/pg_collation.h"`
(which provides `DEFAULT_COLLATION_OID`).
*
85b7efa1cd
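A sketch of the call-site change (the Datum variable names are
illustrative):
```c
#include "catalog/pg_collation.h" /* DEFAULT_COLLATION_OID */

/* before (PG 18 rejects the unset collation):
 *     Datum matched = DirectFunctionCall2(textlike, textDatum, patternDatum);
 */

/* after: pass an explicit collation; fine on all supported versions */
Datum matched = DirectFunctionCall2Coll(textlike, DEFAULT_COLLATION_OID,
										textDatum, patternDatum);
```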
### Columnar storage API
* Adapt `columnar_relation_set_new_filelocator` (and related init
routines) for PG 18’s revised SMGR and storage-initialization hooks.
* Pull in the new headers (`explain_format.h`,
`columnar_version_compat.h`) so the columnar module compiles cleanly
against PG 18.
- The `heap_modify_tuple()` + `heap_inplace_update()` combination only
works on PG < 18; on PG 18 the in-place helper was removed upstream
-
a07e03fd8f
### OpenSSL / TLS integration
**What changed in PG:** Moved from the legacy `SSL_library_init()` to
`OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL)`, updated certificate
API calls (`X509_getm_notBefore`, `X509_getm_notAfter`), and
standardized on `TLS_method()`.
**Citus adaptation:** We now `#include <openssl/opensslv.h>` and use
`#if OPENSSL_VERSION_NUMBER >= 0x10100000L` to choose between
`OPENSSL_init_ssl()` and `SSL_library_init()`, and wrap
`X509_gmtime_adj()` calls around the new accessor functions.
*
6c66b7443c
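A sketch of the initialization guard (the certificate-accessor changes
follow the same pattern):
```c
#include <openssl/opensslv.h>
#include <openssl/ssl.h>

#if OPENSSL_VERSION_NUMBER >= 0x10100000L
	/* OpenSSL 1.1.0+: modern initializer, also loads the config */
	OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL);
#else
	/* legacy OpenSSL */
	SSL_library_init();
	SSL_load_error_strings();
#endif
```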
### Adapt `ExtractColumns()` to the new PG-18 `expandRTE()` signature
PostgreSQL 18
80feb727c8
added a fourth argument of type `VarReturningType` to `expandRTE()`, so
calls that used the old 7-parameter form no longer compile. This patch:
* Wraps the `expandRTE(...)` call in a `#if PG_VERSION_NUM >= 180000`
guard.
* On PG 18+ passes the new `VAR_RETURNING_DEFAULT` argument before
`location`.
* On PG 15–17 continues to call the original 7-arg form.
* Adds the necessary includes (`parser/parse_relation.h` for `expandRTE`
and `VarReturningType`, and `pg_version_constants.h` for
`PG_VERSION_NUM`).
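A sketch of the guarded call in `ExtractColumns()` (argument names and
values are illustrative):
```c
#include "parser/parse_relation.h" /* expandRTE(), VarReturningType */
#include "pg_version_constants.h"  /* PG_VERSION_18 */

List *columnNames = NIL;
List *columnVars = NIL;

#if PG_VERSION_NUM >= PG_VERSION_18
	/* PG 18+: the new VarReturningType argument goes before location */
	expandRTE(rte, rteIndex, 0, VAR_RETURNING_DEFAULT,
			  -1, false, &columnNames, &columnVars);
#else
	/* PG 15-17: original 7-argument form */
	expandRTE(rte, rteIndex, 0, -1, false, &columnNames, &columnVars);
#endif
```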
### Adapt `ExecutorStart`/`ExecutorRun` hooks to PG-18’s new signatures
PostgreSQL 18
525392d572
changed the signatures of the executor hooks:
* `ExecutorStart_hook` now returns `bool` instead of `void`, and
* `ExecutorRun_hook` drops its old `run_once` argument.
This patch preserves Citus’s existing hook logic by:
1. **Adding two adapter functions** under `#if PG_VERSION_NUM >=
PG_VERSION_18` (both sketched after this list):
* `citus_executor_start_adapter(QueryDesc *queryDesc, int eflags)`
Calls the old `CitusExecutorStart(queryDesc, eflags)` and then returns
`true` to satisfy the new hook’s `bool` return type.
* `citus_executor_run_adapter(QueryDesc *queryDesc, ScanDirection
direction, uint64 count)`
Calls the old `CitusExecutorRun(queryDesc, direction, count, true)`
(passing `true` for the dropped `run_once` argument), and returns
`void`.
2. **Installing the adapters** in `_PG_init()` instead of the original
hooks when building against PG 18+:
```c
#if PG_VERSION_NUM >= PG_VERSION_18
ExecutorStart_hook = citus_executor_start_adapter;
ExecutorRun_hook = citus_executor_run_adapter;
#else
ExecutorStart_hook = CitusExecutorStart;
ExecutorRun_hook = CitusExecutorRun;
#endif
```
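The adapters themselves are straightforward; a sketch, assuming the
existing `CitusExecutorStart`/`CitusExecutorRun` signatures described
above:
```c
#if PG_VERSION_NUM >= PG_VERSION_18
static bool
citus_executor_start_adapter(QueryDesc *queryDesc, int eflags)
{
	/* run the existing start logic, then satisfy the new bool return */
	CitusExecutorStart(queryDesc, eflags);
	return true;
}

static void
citus_executor_run_adapter(QueryDesc *queryDesc, ScanDirection direction,
						   uint64 count)
{
	/* pass true for the run_once argument that PG 18 dropped */
	CitusExecutorRun(queryDesc, direction, count, true);
}
#endif
```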
### Adapt to PG-18’s removal of the “run_once” flag from
ExecutorRun/PortalRun
PostgreSQL commit
[3eea7a0](3eea7a0c97)
rationalized the executor’s parallelism logic by moving the “execute a
plan only once” check into `ExecutePlan()` itself and dropping the old
`bool run_once` argument from the public APIs:
```diff
- void ExecutorRun(QueryDesc *queryDesc,
- ScanDirection direction,
- uint64 count,
- bool run_once);
+ void ExecutorRun(QueryDesc *queryDesc,
+ ScanDirection direction,
+ uint64 count);
```
(and similarly for `PortalRun()`).
To stay compatible across PG 15–18, Citus now:
1. **Updates all internal calls** to `ExecutorRun(...)` and
`PortalRun(...)` (see the call-site sketch after this list):
* On PG 18+, use the new three-argument form (`ExecutorRun(qd, dir,
count)`).
* On PG 15–17, keep the old four-arg form (`ExecutorRun(qd, dir, count,
true)`) under a `#if PG_VERSION_NUM < 180000` guard.
2. **Guards the dispatcher hooks** via the adapter functions (from the
earlier patch) so that Citus’s executor hooks continue to work under
both the old and new signatures.
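For example, a guarded `ExecutorRun()` call site looks like (a sketch;
the surrounding variables are illustrative):
```c
#if PG_VERSION_NUM >= PG_VERSION_18
	ExecutorRun(queryDesc, ForwardScanDirection, 0L);
#else
	ExecutorRun(queryDesc, ForwardScanDirection, 0L, true /* run_once */);
#endif
```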
### Adapt to PG-18’s shortened PortalRun signature
PostgreSQL 18’s refactoring (see commit
[3eea7a0](3eea7a0c97))
also removed the old run_once and alternate‐dest arguments from the
public PortalRun() API. The signature changed from:
```diff
- bool PortalRun(Portal portal,
- long count,
- bool isTopLevel,
- bool run_once,
- DestReceiver *dest,
- DestReceiver *altdest,
- QueryCompletion *qc);
+ bool PortalRun(Portal portal,
+ long count,
+ bool isTopLevel,
+ DestReceiver *dest,
+ DestReceiver *altdest,
+ QueryCompletion *qc);
```
To support both versions in Citus, we:
1. **Version-guard each call** to `PortalRun()` (sketched after this
list):
* **On PG 18+** invoke the new 6-argument form.
* **On PG 15–17** fall back to the legacy 7-argument form, passing
`true` for `run_once`.
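A guarded call site, sketched with illustrative arguments:
```c
QueryCompletion qc;

#if PG_VERSION_NUM >= PG_VERSION_18
	/* PG 18+: six-argument form */
	(void) PortalRun(portal, FETCH_ALL, true /* isTopLevel */,
					 dest, dest, &qc);
#else
	/* PG 15-17: legacy seven-argument form; run_once = true */
	(void) PortalRun(portal, FETCH_ALL, true /* isTopLevel */,
					 true /* run_once */, dest, dest, &qc);
#endif
```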
### Add support for PG-18’s new `plansource` argument in
`PortalDefineQuery`
PostgreSQL 18 extended the `PortalDefineQuery` API to carry a
`CachedPlanSource *plansource` pointer so that the portal machinery can
track cached‐plan invalidation (as introduced alongside deferred-locking
in commit
525392d572).
To remain compatible across PG 15–18, Citus now wraps its calls under a
version guard:
```diff
- PortalDefineQuery(portal, NULL, sql, commandTag, plantree_list, NULL);
+#if PG_VERSION_NUM >= 180000
+ /* PG 18+: seven-arg signature (adds plansource) */
+ PortalDefineQuery(
+ portal,
+ NULL, /* no prepared-stmt name */
+ sql, /* the query text */
+ commandTag, /* the CommandTag */
+ plantree_list, /* List of PlannedStmt* */
+ NULL, /* no CachedPlan */
+ NULL /* no CachedPlanSource */
+ );
+#else
+ /* PG 15–17: six-arg signature */
+ PortalDefineQuery(
+ portal,
+ NULL, /* no prepared-stmt name */
+ sql, /* the query text */
+ commandTag, /* the CommandTag */
+ plantree_list, /* List of PlannedStmt* */
+ NULL /* no CachedPlan */
+ );
+#endif
```
### Adapt ExecInitRangeTable() calls to PG-18’s new signature
PostgreSQL commit
[cbc127917e04a978a788b8bc9d35a70244396d5b](cbc127917e)
overhauled the planner API for range‐table initialization:
**PG 18+**: added a fourth `Bitmapset *unpruned_relids` argument to
support deferred partition pruning
In Citus’s `create_estate_for_relation()` (in `columnar_metadata.c`), we
now wrap the call in a compile‐time guard so that the code compiles
correctly on all supported PostgreSQL versions:
```
/* Prepare permission info on PG 16+ */
#if PG_VERSION_NUM >= PG_VERSION_16
List *perminfos = NIL;
addRTEPermissionInfo(&perminfos, rte);
#else
List *perminfos = NIL; /* unused on PG 15 */
#endif
/* Initialize the range table, with the right signature for each PG version */
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG 18+: four‐arg signature (adds unpruned_relids) */
ExecInitRangeTable(
estate,
list_make1(rte),
perminfos,
NULL /* unpruned_relids: not used by columnar */
);
#elif PG_VERSION_NUM >= PG_VERSION_16
/* PG 16–17: three‐arg signature (permInfos) */
ExecInitRangeTable(
estate,
list_make1(rte),
perminfos
);
#else
/* PG 15: two‐arg signature */
ExecInitRangeTable(
estate,
list_make1(rte)
);
#endif
estate->es_output_cid = GetCurrentCommandId(true);
```
### Adapt `pgstat_report_vacuum()` to PG-18’s new timestamp argument
PostgreSQL commit
[30a6ed0ce4bb18212ec38cdb537ea4b43bc99b83](30a6ed0ce4)
extended the `pgstat_report_vacuum()` API by adding a `TimestampTz
start_time` parameter at the end so that the VACUUM statistics collector
can record when the operation began:
```diff
/* PG ≤17: four-arg signature */
- void pgstat_report_vacuum(Oid tableoid,
- bool shared,
- double num_live_tuples,
- double num_dead_tuples);
+/* PG ≥18: five-arg signature adds a start_time */
+ void pgstat_report_vacuum(Oid tableoid,
+ bool shared,
+ double num_live_tuples,
+ double num_dead_tuples,
+ TimestampTz start_time);
```
To support both versions, we now wrap the call in `columnar_tableam.c`
with a version guard, supplying `GetCurrentTimestamp()` for PG-18+:
```c
#if PG_VERSION_NUM >= 180000
/* PG 18+: include start_timestamp */
pgstat_report_vacuum(
RelationGetRelid(rel),
rel->rd_rel->relisshared,
Max(new_live_tuples, 0), /* live tuples */
0, /* dead tuples */
GetCurrentTimestamp() /* start time */
);
#else
/* PG 15–17: original signature */
pgstat_report_vacuum(
RelationGetRelid(rel),
rel->rd_rel->relisshared,
Max(new_live_tuples, 0), /* live tuples */
0 /* dead tuples */
);
#endif
```
### Adapt `ExecuteTaskPlan()` to PG-18’s expanded `CreateQueryDesc()`
signature
PostgreSQL 18 changed `CreateQueryDesc()` from an eight-argument to a
nine-argument call by inserting a `CachedPlan *cplan` parameter
immediately after the `PlannedStmt *plannedstmt` argument (see commit
525392d572).
To remain compatible with PG 15–17, Citus now wraps its invocation in
`local_executor.c` with a version guard:
```diff
- /* PG15–17: eight-arg CreateQueryDesc without cached plan */
- QueryDesc *queryDesc = CreateQueryDesc(
- taskPlan, /* PlannedStmt *plannedstmt */
- queryString, /* const char *sourceText */
- GetActiveSnapshot(),/* Snapshot snapshot */
- InvalidSnapshot, /* Snapshot crosscheck_snapshot */
- destReceiver, /* DestReceiver *dest */
- paramListInfo, /* ParamListInfo params */
- queryEnv, /* QueryEnvironment *queryEnv */
- 0 /* int instrument_options */
- );
+#if PG_VERSION_NUM >= 180000
+ /* PG18+: nine-arg CreateQueryDesc with a CachedPlan slot */
+ QueryDesc *queryDesc = CreateQueryDesc(
+ taskPlan, /* PlannedStmt *plannedstmt */
+ NULL, /* CachedPlan *cplan (none) */
+ queryString, /* const char *sourceText */
+ GetActiveSnapshot(),/* Snapshot snapshot */
+ InvalidSnapshot, /* Snapshot crosscheck_snapshot */
+ destReceiver, /* DestReceiver *dest */
+ paramListInfo, /* ParamListInfo params */
+ queryEnv, /* QueryEnvironment *queryEnv */
+ 0 /* int instrument_options */
+ );
+#else
+ /* PG15–17: eight-arg CreateQueryDesc without cached plan */
+ QueryDesc *queryDesc = CreateQueryDesc(
+ taskPlan, /* PlannedStmt *plannedstmt */
+ queryString, /* const char *sourceText */
+ GetActiveSnapshot(),/* Snapshot snapshot */
+ InvalidSnapshot, /* Snapshot crosscheck_snapshot */
+ destReceiver, /* DestReceiver *dest */
+ paramListInfo, /* ParamListInfo params */
+ queryEnv, /* QueryEnvironment *queryEnv */
+ 0 /* int instrument_options */
+ );
+#endif
```
### Adapt `RelationGetPrimaryKeyIndex()` to PG-18’s new “deferrable_ok”
flag
PostgreSQL commit
14e87ffa5c
added a new Boolean `deferrable_ok` parameter to
`RelationGetPrimaryKeyIndex()` so that callers can state whether a
deferrable primary key index is acceptable. The API changed from:
```c
RelationGetPrimaryKeyIndex(Relation relation)
```
to:
```c
RelationGetPrimaryKeyIndex(Relation relation, bool deferrable_ok)
```
```diff
diff --git a/src/backend/distributed/metadata/node_metadata.c
b/src/backend/distributed/metadata/node_metadata.c
index e3a1b2c..f4d5e6f 100644
--- a/src/backend/distributed/metadata/node_metadata.c
+++ b/src/backend/distributed/metadata/node_metadata.c
@@ -2965,8 +2965,18 @@
*/
- Relation replicaIndex =
index_open(RelationGetPrimaryKeyIndex(pgDistNode),
- AccessShareLock);
+ #if PG_VERSION_NUM >= PG_VERSION_18
+ /* PG 18+ adds a bool "deferrable_ok" parameter */
+ Relation replicaIndex =
+ index_open(
+ RelationGetPrimaryKeyIndex(pgDistNode, false),
+ AccessShareLock);
+ #else
+ Relation replicaIndex =
+ index_open(
+ RelationGetPrimaryKeyIndex(pgDistNode),
+ AccessShareLock);
+ #endif
ScanKeyInit(&scanKey[0], Anum_pg_dist_node_nodename,
BTEqualStrategyNumber, F_TEXTEQ, CStringGetTextDatum(nodeName));
```
```diff
diff --git a/src/backend/distributed/operations/node_protocol.c b/src/backend/distributed/operations/node_protocol.c
index e3a1b2c..f4d5e6f 100644
--- a/src/backend/distributed/operations/node_protocol.c
+++ b/src/backend/distributed/operations/node_protocol.c
@@ -746,7 +746,12 @@
if (!OidIsValid(idxoid))
{
- idxoid = RelationGetPrimaryKeyIndex(rel);
+ /* Determine the index OID of the primary key (PG18 adds a second parameter) */
+#if PG_VERSION_NUM >= PG_VERSION_18
+ idxoid = RelationGetPrimaryKeyIndex(rel, false);
+#else
+ idxoid = RelationGetPrimaryKeyIndex(rel);
+#endif
}
return idxoid;
```
Because Citus needs the same index the old one-argument call returned,
we pass `false` to keep that behavior. Passing `true` would also accept
a deferrable primary key index, which we don't want.
### Adapt `ExplainOnePlan()` to PG-18’s expanded API
PostgreSQL 18 commit
525392d572
extended the `ExplainOnePlan()` function to carry the `CachedPlan *` and
`CachedPlanSource *` pointers plus an explicit `query_index`, letting
the EXPLAIN machinery track plan‐source invalidation. The old signature:
```c
/* PG ≤16 (PG 17 added a trailing mem_counters argument) */
void
ExplainOnePlan(PlannedStmt *plannedstmt,
IntoClause *into,
struct ExplainState *es,
const char *queryString,
ParamListInfo params,
QueryEnvironment *queryEnv,
const instr_time *planduration,
const BufferUsage *bufusage);
```
became, in PG 18:
```c
/* PG ≥18 */
void
ExplainOnePlan(PlannedStmt *plannedstmt,
CachedPlan *cplan,
CachedPlanSource *plansource,
int query_index,
IntoClause *into,
struct ExplainState *es,
const char *queryString,
ParamListInfo params,
QueryEnvironment *queryEnv,
const instr_time *planduration,
const BufferUsage *bufusage,
const MemoryContextCounters *mem_counters);
```
To compile under both versions, Citus now wraps each call in
`multi_explain.c` with:
```c
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG 18+: pass NULL for the new cached‐plan fields and zero for query_index */
ExplainOnePlan(
plan, /* PlannedStmt *plannedstmt */
NULL, /* CachedPlan *cplan */
NULL, /* CachedPlanSource *plansource */
0, /* query_index */
into, /* IntoClause *into */
es, /* ExplainState *es */
queryString, /* const char *queryString */
params, /* ParamListInfo params */
NULL, /* QueryEnvironment *queryEnv */
&planduration,/* const instr_time *planduration */
(es->buffers ? &bufusage : NULL),
(es->memory ? &mem_counters : NULL)
);
#elif PG_VERSION_NUM >= PG_VERSION_17
/* PG 17: same as before, plus passing mem_counters if enabled */
ExplainOnePlan(
plan,
into,
es,
queryString,
params,
queryEnv,
&planduration,
(es->buffers ? &bufusage : NULL),
(es->memory ? &mem_counters : NULL)
);
#else
/* PG 15–16: original eight-arg form */
ExplainOnePlan(
plan,
into,
es,
queryString,
params,
queryEnv,
&planduration,
(es->buffers ? &bufusage : NULL)
);
#endif
```
### Adapt to the unified “index interpretation” API in PG 18 (commit
a8025f544854)
PostgreSQL commit
a8025f5448
generalized the old btree‐specific operator‐interpretation API into a
single “index interpretation” interface:
* **Renamed type**:
`OpBtreeInterpretation` → `OpIndexInterpretation`
* **Renamed function**:
`get_op_btree_interpretation(opno)` →
`get_op_index_interpretation(opno)`
* **Unified field**:
Each interpretation now carries `cmptype` instead of `strategy`.
To build cleanly on PG 18 while still supporting PG 15–17, Citus’s
shard‐pruning code now wraps these changes:
```c
#include "pg_version_constants.h"
#if PG_VERSION_NUM >= PG_VERSION_18
/* On PG 18+ the btree‐only APIs vanished; alias them to the new generic versions */
typedef OpIndexInterpretation OpBtreeInterpretation;
#define get_op_btree_interpretation(opno) get_op_index_interpretation(opno)
#define ROWCOMPARE_NE COMPARE_NE
#endif
/* … later, when checking an interpretation … */
OpBtreeInterpretation *interp =
(OpBtreeInterpretation *) lfirst(cell);
#if PG_VERSION_NUM >= PG_VERSION_18
/* use cmptype on PG 18+ */
if (interp->cmptype == ROWCOMPARE_NE)
#else
/* use strategy on PG 15–17 */
if (interp->strategy == ROWCOMPARE_NE)
#endif
{
/* … */
}
```
### Adapt `create_foreignscan_path()` for PG-18’s revised signature
PostgreSQL commit
e222534679
reordered and removed a couple of parameters in the FDW‐path builder:
* **PG 15–17 signature (11 args)**
```c
create_foreignscan_path(PlannerInfo *root,
RelOptInfo *rel,
PathTarget *target,
double rows,
Cost startup_cost,
Cost total_cost,
List *pathkeys,
Relids required_outer,
Path *fdw_outerpath,
List *fdw_restrictinfo,
List *fdw_private);
```
* **PG 18+ signature (10 args)**
```c
create_foreignscan_path(PlannerInfo *root,
RelOptInfo *rel,
PathTarget *target,
double rows,
int disabled_nodes,
Cost startup_cost,
Cost total_cost,
Relids required_outer,
Path *fdw_outerpath,
List *fdw_private);
```
To support both, Citus now defines a compatibility macro in
`pg_version_compat.h`:
```c
#include "nodes/bitmapset.h" /* for Relids */
#include "nodes/pg_list.h" /* for List */
#include "optimizer/pathnode.h" /* for create_foreignscan_path() */
#if PG_VERSION_NUM >= PG_VERSION_18
/* PG18+: drop pathkeys & fdw_restrictinfo, add disabled_nodes */
#define create_foreignscan_path_compat(a, b, c, d, e, f, g, h, i, j, k) \
create_foreignscan_path( \
(a), /* root */ \
(b), /* rel */ \
(c), /* target */ \
(d), /* rows */ \
(0), /* disabled_nodes (unused by Citus) */ \
(e), /* startup_cost */ \
(f), /* total_cost */ \
(h), /* required_outer (g, the old pathkeys slot, is dropped) */ \
(i), /* fdw_outerpath */ \
(k) /* fdw_private (j, fdw_restrictinfo, is dropped) */ \
)
#else
/* PG15–17: original signature */
#define create_foreignscan_path_compat(a, b, c, d, e, f, g, h, i, j, k) \
create_foreignscan_path( \
(a), (b), (c), (d), \
(e), (f), \
(g), (h), (i), (j), (k) \
)
#endif
```
Now every call to `create_foreignscan_path_compat(...)`—even in tests
like `fake_fdw.c`—automatically picks the correct argument list for
PG 15 through PG 18.
### Drop the obsolete bitmap‐scan hooks on PG 18+
PostgreSQL commit
c3953226a0
cleaned up the `TableAmRoutine` API by removing the two bitmap‐scan
callback slots:
* `scan_bitmap_next_block`
* `scan_bitmap_next_tuple`
Since those hook‐slots no longer exist in PG 18, Citus now wraps their
NULL‐initialization in a `#if PG_VERSION_NUM < PG_VERSION_18` guard. On
PG 15–17 we still explicitly set them to `NULL` (to satisfy the old
struct layout), and on PG 18+ we omit them entirely:
```c
#if PG_VERSION_NUM < PG_VERSION_18
/* PG 15–17 only: these fields were removed upstream in PG 18 */
.scan_bitmap_next_block = NULL,
.scan_bitmap_next_tuple = NULL,
#endif
```
### Adapt `vac_update_relstats()` invocation to PG-18’s new
“all_frozen” argument
PostgreSQL commit
99f8f3fbbc
extended the `vac_update_relstats()` API by inserting a
`num_all_frozen_pages` parameter between the existing
`num_all_visible_pages` and `hasindex` arguments:
```diff
- /* PG ≤17: */
- void
- vac_update_relstats(Relation relation,
- BlockNumber num_pages,
- double num_tuples,
- BlockNumber num_all_visible_pages,
- bool hasindex,
- TransactionId frozenxid,
- MultiXactId minmulti,
- bool *frozenxid_updated,
- bool *minmulti_updated,
- bool in_outer_xact);
+ /* PG ≥18: adds num_all_frozen_pages */
+ void
+ vac_update_relstats(Relation relation,
+ BlockNumber num_pages,
+ double num_tuples,
+ BlockNumber num_all_visible_pages,
+ BlockNumber num_all_frozen_pages,
+ bool hasindex,
+ TransactionId frozenxid,
+ MultiXactId minmulti,
+ bool *frozenxid_updated,
+ bool *minmulti_updated,
+ bool in_outer_xact);
```
To compile cleanly on both PG 15–17 and PG 18+, Citus wraps its call in
a version guard and supplies a zero placeholder for the new field:
```c
#if PG_VERSION_NUM >= 180000
/* PG 18+: supply explicit “all_frozen” count */
vac_update_relstats(
rel,
new_rel_pages,
new_live_tuples,
new_rel_allvisible, /* allvisible */
0, /* all_frozen */
nindexes > 0,
newRelFrozenXid,
newRelminMxid,
&frozenxid_updated,
&minmulti_updated,
false /* in_outer_xact */
);
#else
/* PG 15–17: original signature */
vac_update_relstats(
rel,
new_rel_pages,
new_live_tuples,
new_rel_allvisible,
nindexes > 0,
newRelFrozenXid,
newRelminMxid,
&frozenxid_updated,
&minmulti_updated,
false /* in_outer_xact */
);
#endif
```
**Why all_frozen = 0?**
Columnar storage never embeds transaction IDs in its pages, so it never
needs to track “all‐frozen” pages the way a heap does. Setting both
allvisible and allfrozen to zero simply tells Postgres “there are no
pages with the visibility or frozen‐status bits set,” matching our
existing behavior.
This change ensures Citus’s VACUUM‐statistic updates work unmodified
across all supported Postgres versions.
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.7 to
3.0.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/releases">werkzeug's
releases</a>.</em></p>
<blockquote>
<h2>3.0.6</h2>
<p>This is the Werkzeug 3.0.6 security fix release, which fixes security
issues but does not otherwise change behavior and should not result in
breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.6/">https://pypi.org/project/Werkzeug/3.0.6/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6</a></p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file fields. <a
href="https://github.com/advisories/GHSA-q34m-jh98-gwm2">GHSA-q34m-jh98-gwm2</a></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by <code>ntpath.isabs</code> on Python < 3.11. <a
href="https://github.com/advisories/GHSA-f9vj-2wh5-fj8j">GHSA-f9vj-2wh5-fj8j</a></li>
</ul>
<h2>3.0.5</h2>
<p>This is the Werkzeug 3.0.5 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.5/">https://pypi.org/project/Werkzeug/3.0.5/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/37?closed=1">https://github.com/pallets/werkzeug/milestone/37?closed=1</a></p>
<ul>
<li>The Watchdog reloader ignores file closed no write events. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2945">#2945</a></li>
<li>Logging works with client addresses containing an IPv6 scope. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2952">#2952</a></li>
<li>Ignore invalid authorization parameters. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2955">#2955</a></li>
<li>Improve type annotation fore <code>SharedDataMiddleware</code>. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2958">#2958</a></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current UID does not have an associated name. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2957">#2957</a></li>
</ul>
<h2>3.0.4</h2>
<p>This is the Werkzeug 3.0.4 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.4/">https://pypi.org/project/Werkzeug/3.0.4/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/36?closed=1">https://github.com/pallets/werkzeug/milestone/36?closed=1</a></p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2930">#2930</a></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2904">#2904</a></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2916">#2916</a></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python < 3.13.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2926">#2926</a></li>
<li>Debugger pin auth works when the URL already contains a query
string.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2918">#2918</a></li>
</ul>
<h2>3.0.3</h2>
<p>This is the Werkzeug 3.0.3 security release, which fixes security
issues and bugs but does not otherwise change behavior and should not
result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.3/">https://pypi.org/project/Werkzeug/3.0.3/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-3">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-3</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/35?closed=1">https://github.com/pallets/werkzeug/milestone/35?closed=1</a></p>
<ul>
<li>Only allow <code>localhost</code>, <code>.localhost</code>,
<code>127.0.0.1</code>, or the specified hostname when running the dev
server, to make debugger requests. Additional hosts can be added by
using the debugger middleware directly. The debugger UI makes requests
using the full URL rather than only the path. GHSA-2g68-c3qc-8985</li>
<li>Make reloader more robust when <code>""</code> is in
<code>sys.path</code>. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2823">#2823</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's
changelog</a>.</em></p>
<blockquote>
<h2>Version 3.0.6</h2>
<p>Released 2024-10-25</p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file
fields. :ghsa:<code>q34m-jh98-gwm2</code></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by
<code>ntpath.isabs</code> on Python < 3.11.
:ghsa:<code>f9vj-2wh5-fj8j</code></li>
</ul>
<h2>Version 3.0.5</h2>
<p>Released 2024-10-24</p>
<ul>
<li>The Watchdog reloader ignores file closed no write events.
:issue:<code>2945</code></li>
<li>Logging works with client addresses containing an IPv6 scope
:issue:<code>2952</code></li>
<li>Ignore invalid authorization parameters.
:issue:<code>2955</code></li>
<li>Improve type annotation fore <code>SharedDataMiddleware</code>.
:issue:<code>2958</code></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current
UID does not have an associated name. :issue:<code>2957</code></li>
</ul>
<h2>Version 3.0.4</h2>
<p>Released 2024-08-21</p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. :issue:<code>2930</code></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. :issue:<code>2904</code></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. :issue:<code>2916</code></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python < 3.13.
:issue:<code>2926</code></li>
<li>Debugger pin auth works when the URL already contains a query
string.
:issue:<code>2918</code></li>
</ul>
<h2>Version 3.0.3</h2>
<p>Released 2024-05-05</p>
<ul>
<li>Only allow <code>localhost</code>, <code>.localhost</code>,
<code>127.0.0.1</code>, or the specified
hostname when running the dev server, to make debugger requests.
Additional
hosts can be added by using the debugger middleware directly. The
debugger</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5eaefc3996"><code>5eaefc3</code></a>
release version 3.0.6</li>
<li><a
href="2767bcb10a"><code>2767bcb</code></a>
Merge commit from fork</li>
<li><a
href="87cc78a25f"><code>87cc78a</code></a>
catch special absolute path on Windows Python < 3.11</li>
<li><a
href="50cfeebcb0"><code>50cfeeb</code></a>
Merge commit from fork</li>
<li><a
href="8760275afb"><code>8760275</code></a>
apply max_form_memory_size another level up in the parser</li>
<li><a
href="8d6a12e2af"><code>8d6a12e</code></a>
start version 3.0.6</li>
<li><a
href="a7b121abc7"><code>a7b121a</code></a>
release version 3.0.5 (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2961">#2961</a>)</li>
<li><a
href="9caf72ac06"><code>9caf72a</code></a>
release version 3.0.5</li>
<li><a
href="e28a2451e9"><code>e28a245</code></a>
catch OSError from getpass.getuser (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2960">#2960</a>)</li>
<li><a
href="e6b4cce97e"><code>e6b4cce</code></a>
catch OSError from getpass.getuser</li>
<li>Additional commits viewable in <a
href="https://github.com/pallets/werkzeug/compare/2.3.7...3.0.6">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.3
to 44.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>44.0.1 - 2025-02-11</p>
<pre><code>* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.4.1.
* We now build ``armv7l`` ``manylinux`` wheels and publish them to PyPI.
* We now build ``manylinux_2_34`` wheels and publish them to PyPI.
</code></pre>
<p>44.0.0 - 2024-11-27</p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for
LibreSSL < 3.9.</li>
<li>Deprecated Python 3.7 support. Python 3.7 is no longer supported by
the
Python core team. Support for Python 3.7 will be removed in a future
<code>cryptography</code> release.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.4.0.</li>
<li>macOS wheels are now built against the macOS 10.13 SDK. Users on
older
versions of macOS should upgrade, or they will need to build
<code>cryptography</code> themselves.</li>
<li>Enforce the :rfc:<code>5280</code> requirement that extended key
usage extensions must
not be empty.</li>
<li>Added support for timestamp extraction to the
:class:<code>~cryptography.fernet.MultiFernet</code> class.</li>
<li>Relax the Authority Key Identifier requirements on root CA
certificates
during X.509 verification to allow fields permitted by
:rfc:<code>5280</code> but
forbidden by the CA/Browser BRs.</li>
<li>Added support for
:class:<code>~cryptography.hazmat.primitives.kdf.argon2.Argon2id</code>
when using OpenSSL 3.2.0+.</li>
<li>Added support for the
:class:<code>~cryptography.x509.Admissions</code> certificate
extension.</li>
<li>Added basic support for PKCS7 decryption (including S/MIME 3.2) via
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_der</code>,
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_pem</code>,
and
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_smime</code>.</li>
</ul>
<p>43.0.3 - 2024-10-18</p>
<pre><code>* Fixed release metadata for ``cryptography-vectors``
</code></pre>
<p>43.0.2 - 2024-10-18</p>
<ul>
<li>Fixed compilation when using LibreSSL 4.0.0.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="adaaaed77d"><code>adaaaed</code></a>
Bump for 44.0.1 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12441">#12441</a>)</li>
<li><a
href="ccc61dabe3"><code>ccc61da</code></a>
[backport] test and build on armv7l (<a
href="https://redirect.github.com/pyca/cryptography/issues/12420">#12420</a>)
(<a
href="https://redirect.github.com/pyca/cryptography/issues/12431">#12431</a>)</li>
<li><a
href="f299a48153"><code>f299a48</code></a>
remove deprecated call (<a
href="https://redirect.github.com/pyca/cryptography/issues/12052">#12052</a>)</li>
<li><a
href="439eb0594a"><code>439eb05</code></a>
Bump version for 44.0.0 (<a
href="https://redirect.github.com/pyca/cryptography/issues/12051">#12051</a>)</li>
<li><a
href="2c5ad4d8dc"><code>2c5ad4d</code></a>
chore(deps): bump maturin from 1.7.4 to 1.7.5 in /.github/requirements
(<a
href="https://redirect.github.com/pyca/cryptography/issues/12050">#12050</a>)</li>
<li><a
href="d23968addd"><code>d23968a</code></a>
chore(deps): bump libc from 0.2.165 to 0.2.166 (<a
href="https://redirect.github.com/pyca/cryptography/issues/12049">#12049</a>)</li>
<li><a
href="133c0e02ed"><code>133c0e0</code></a>
Bump x509-limbo and/or wycheproof in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/12047">#12047</a>)</li>
<li><a
href="f2259d7aa0"><code>f2259d7</code></a>
Bump BoringSSL and/or OpenSSL in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/12046">#12046</a>)</li>
<li><a
href="e201c870b8"><code>e201c87</code></a>
fixed metadata in changelog (<a
href="https://redirect.github.com/pyca/cryptography/issues/12044">#12044</a>)</li>
<li><a
href="c6104cc366"><code>c6104cc</code></a>
Prohibit Python 3.9.0, 3.9.1 -- they have a bug that causes errors (<a
href="https://redirect.github.com/pyca/cryptography/issues/12045">#12045</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/42.0.3...44.0.1">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.4 to 6.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's
changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.5.1
releases/v6.5.0
releases/v6.4.2
releases/v6.4.1
releases/v6.4.0
releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ab5f354312"><code>ab5f354</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3498">#3498</a>
from bdarnell/final-6.5</li>
<li><a
href="3623024dfc"><code>3623024</code></a>
Final release notes for 6.5.0</li>
<li><a
href="b39b892bf7"><code>b39b892</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3497">#3497</a>
from bdarnell/multipart-log-spam</li>
<li><a
href="cc61050e8f"><code>cc61050</code></a>
httputil: Raise errors instead of logging in multipart/form-data
parsing</li>
<li><a
href="ae4a4e4fea"><code>ae4a4e4</code></a>
asyncio: Preserve contextvars across SelectorThread on Windows (<a
href="https://redirect.github.com/tornadoweb/tornado/issues/3479">#3479</a>)</li>
<li><a
href="197ff13f76"><code>197ff13</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3496">#3496</a>
from bdarnell/undeprecate-set-event-loop</li>
<li><a
href="c3d906c4ad"><code>c3d906c</code></a>
requirements: Upgrade tox to 4.26.0</li>
<li><a
href="a83897732e"><code>a838977</code></a>
testing: Remove deprecation warning filter for set_event_loop</li>
<li><a
href="d8e0d36eba"><code>d8e0d36</code></a>
build: Fix free-threaded build, mark speedups module as no-GIL</li>
<li><a
href="bfe7489485"><code>bfe7489</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3492">#3492</a>
from bdarnell/relnotes-6.5</li>
<li>Additional commits viewable in <a
href="https://github.com/tornadoweb/tornado/compare/v6.4.0...v6.5.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.4.2 to
6.5.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's
changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.5.1
releases/v6.5.0
releases/v6.4.2
releases/v6.4.1
releases/v6.4.0
releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b5586f3f29"><code>b5586f3</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3503">#3503</a>
from bdarnell/multipart-utf8</li>
<li><a
href="62c276434d"><code>62c2764</code></a>
Release notes for v6.5.1</li>
<li><a
href="170a58af2c"><code>170a58a</code></a>
httputil: Fix support for non-latin1 filenames in multipart uploads</li>
<li><a
href="ab5f354312"><code>ab5f354</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3498">#3498</a>
from bdarnell/final-6.5</li>
<li><a
href="3623024dfc"><code>3623024</code></a>
Final release notes for 6.5.0</li>
<li><a
href="b39b892bf7"><code>b39b892</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3497">#3497</a>
from bdarnell/multipart-log-spam</li>
<li><a
href="cc61050e8f"><code>cc61050</code></a>
httputil: Raise errors instead of logging in multipart/form-data
parsing</li>
<li><a
href="ae4a4e4fea"><code>ae4a4e4</code></a>
asyncio: Preserve contextvars across SelectorThread on Windows (<a
href="https://redirect.github.com/tornadoweb/tornado/issues/3479">#3479</a>)</li>
<li><a
href="197ff13f76"><code>197ff13</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3496">#3496</a>
from bdarnell/undeprecate-set-event-loop</li>
<li><a
href="c3d906c4ad"><code>c3d906c</code></a>
requirements: Upgrade tox to 4.26.0</li>
<li>Additional commits viewable in <a
href="https://github.com/tornadoweb/tornado/compare/v6.4.2...v6.5.1">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/jinja/releases">jinja2's
releases</a>.</em></p>
<blockquote>
<h2>3.1.6</h2>
<p>This is the Jinja 3.1.6 security release, which fixes security issues
but does not otherwise change behavior and should not result in breaking
changes compared to the latest feature release.</p>
<p>PyPI: <a
href="https://pypi.org/project/Jinja2/3.1.6/">https://pypi.org/project/Jinja2/3.1.6/</a>
Changes: <a
href="https://jinja.palletsprojects.com/en/stable/changes/#version-3-1-6">https://jinja.palletsprojects.com/en/stable/changes/#version-3-1-6</a></p>
<ul>
<li>The <code>|attr</code> filter does not bypass the environment's
attribute lookup, allowing the sandbox to apply its checks. <a
href="https://github.com/pallets/jinja/security/advisories/GHSA-cpwx-vrp4-4pq7">https://github.com/pallets/jinja/security/advisories/GHSA-cpwx-vrp4-4pq7</a></li>
</ul>
<h2>3.1.5</h2>
<p>This is the Jinja 3.1.5 security fix release, which fixes security
issues and bugs but does not otherwise change behavior and should not
result in breaking changes compared to the latest feature release.</p>
<p>PyPI: <a
href="https://pypi.org/project/Jinja2/3.1.5/">https://pypi.org/project/Jinja2/3.1.5/</a>
Changes: <a
href="https://jinja.palletsprojects.com/changes/#version-3-1-5">https://jinja.palletsprojects.com/changes/#version-3-1-5</a>
Milestone: <a
href="https://github.com/pallets/jinja/milestone/16?closed=1">https://github.com/pallets/jinja/milestone/16?closed=1</a></p>
<ul>
<li>The sandboxed environment handles indirect calls to
<code>str.format</code>, such as by passing a stored reference to a
filter that calls its argument. <a
href="https://github.com/pallets/jinja/security/advisories/GHSA-q2x7-8rv6-6q7h">GHSA-q2x7-8rv6-6q7h</a></li>
<li>Escape template name before formatting it into error messages, to
avoid issues with names that contain f-string syntax. <a
href="https://redirect.github.com/pallets/jinja/issues/1792">#1792</a>,
<a
href="https://github.com/pallets/jinja/security/advisories/GHSA-gmj6-6f8f-6699">GHSA-gmj6-6f8f-6699</a></li>
<li>Sandbox does not allow <code>clear</code> and <code>pop</code> on
known mutable sequence types. <a
href="https://redirect.github.com/pallets/jinja/issues/2032">#2032</a></li>
<li>Calling sync <code>render</code> for an async template uses
<code>asyncio.run</code>. <a
href="https://redirect.github.com/pallets/jinja/issues/1952">#1952</a></li>
<li>Avoid unclosed <code>auto_aiter</code> warnings. <a
href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>Return an <code>aclose</code>-able <code>AsyncGenerator</code> from
<code>Template.generate_async</code>. <a
href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>Avoid leaving <code>root_render_func()</code> unclosed in
<code>Template.generate_async</code>. <a
href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>Avoid leaving async generators unclosed in blocks, includes and
extends. <a
href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>The runtime uses the correct <code>concat</code> function for the
current environment when calling block references. <a
href="https://redirect.github.com/pallets/jinja/issues/1701">#1701</a></li>
<li>Make <code>|unique</code> async-aware, allowing it to be used after
another async-aware filter. <a
href="https://redirect.github.com/pallets/jinja/issues/1781">#1781</a></li>
<li><code>|int</code> filter handles <code>OverflowError</code> from
scientific notation. <a
href="https://redirect.github.com/pallets/jinja/issues/1921">#1921</a></li>
<li>Make compiling deterministic for tuple unpacking in a <code>{% set
... %}</code> call. <a
href="https://redirect.github.com/pallets/jinja/issues/2021">#2021</a></li>
<li>Fix dunder protocol (<code>copy</code>/<code>pickle</code>/etc)
interaction with <code>Undefined</code> objects. <a
href="https://redirect.github.com/pallets/jinja/issues/2025">#2025</a></li>
<li>Fix <code>copy</code>/<code>pickle</code> support for the internal
<code>missing</code> object. <a
href="https://redirect.github.com/pallets/jinja/issues/2027">#2027</a></li>
<li><code>Environment.overlay(enable_async)</code> is applied correctly.
<a
href="https://redirect.github.com/pallets/jinja/issues/2061">#2061</a></li>
<li>The error message from <code>FileSystemLoader</code> includes the
paths that were searched. <a
href="https://redirect.github.com/pallets/jinja/issues/1661">#1661</a></li>
<li><code>PackageLoader</code> shows a clearer error message when the
package does not contain the templates directory. <a
href="https://redirect.github.com/pallets/jinja/issues/1705">#1705</a></li>
<li>Improve annotations for methods returning copies. <a
href="https://redirect.github.com/pallets/jinja/issues/1880">#1880</a></li>
<li><code>urlize</code> does not add <code>mailto:</code> to values like
<code>@a@b</code>. <a
href="https://redirect.github.com/pallets/jinja/issues/1870">#1870</a></li>
<li>Tests decorated with <code>@pass_context</code> can be used with the
<code>|select</code> filter. <a
href="https://redirect.github.com/pallets/jinja/issues/1624">#1624</a></li>
<li>Using <code>set</code> for multiple assignment (<code>a, b = 1,
2</code>) does not fail when the target is a namespace attribute. <a
href="https://redirect.github.com/pallets/jinja/issues/1413">#1413</a></li>
<li>Using <code>set</code> in all branches of <code>{% if %}{% elif %}{%
else %}</code> blocks does not cause the variable to be considered
initially undefined. <a
href="https://redirect.github.com/pallets/jinja/issues/1253">#1253</a></li>
</ul>
<h2>3.1.4</h2>
<p>This is the Jinja 3.1.4 security release, which fixes security issues
and bugs but does not otherwise change behavior and should not result in
breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Jinja2/3.1.4/">https://pypi.org/project/Jinja2/3.1.4/</a>
Changes: <a
href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-4">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-4</a></p>
<ul>
<li>The <code>xmlattr</code> filter does not allow keys with
<code>/</code> solidus, <code>></code> greater-than sign, or
<code>=</code> equals sign, in addition to disallowing spaces.
Regardless of any validation done by Jinja, user input should never be
used as keys to this filter, or must be separately validated first.
GHSA-h75v-3vvj-5mfj</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/jinja/blob/main/CHANGES.rst">jinja2's
changelog</a>.</em></p>
<blockquote>
<h2>Version 3.1.6</h2>
<p>Released 2025-03-05</p>
<ul>
<li>The <code>|attr</code> filter does not bypass the environment's
attribute lookup,
allowing the sandbox to apply its checks.
:ghsa:<code>cpwx-vrp4-4pq7</code></li>
</ul>
<h2>Version 3.1.5</h2>
<p>Released 2024-12-21</p>
<ul>
<li>The sandboxed environment handles indirect calls to
<code>str.format</code>, such as
by passing a stored reference to a filter that calls its argument.
:ghsa:<code>q2x7-8rv6-6q7h</code></li>
<li>Escape template name before formatting it into error messages, to
avoid
issues with names that contain f-string syntax.
:issue:<code>1792</code>, :ghsa:<code>gmj6-6f8f-6699</code></li>
<li>Sandbox does not allow <code>clear</code> and <code>pop</code> on
known mutable sequence
types. :issue:<code>2032</code></li>
<li>Calling sync <code>render</code> for an async template uses
<code>asyncio.run</code>.
:pr:<code>1952</code></li>
<li>Avoid unclosed <code>auto_aiter</code> warnings.
:pr:<code>1960</code></li>
<li>Return an <code>aclose</code>-able <code>AsyncGenerator</code> from
<code>Template.generate_async</code>. :pr:<code>1960</code></li>
<li>Avoid leaving <code>root_render_func()</code> unclosed in
<code>Template.generate_async</code>. :pr:<code>1960</code></li>
<li>Avoid leaving async generators unclosed in blocks, includes and
extends.
:pr:<code>1960</code></li>
<li>The runtime uses the correct <code>concat</code> function for the
current environment
when calling block references. :issue:<code>1701</code></li>
<li>Make <code>|unique</code> async-aware, allowing it to be used after
another
async-aware filter. :issue:<code>1781</code></li>
<li><code>|int</code> filter handles <code>OverflowError</code> from
scientific notation.
:issue:<code>1921</code></li>
<li>Make compiling deterministic for tuple unpacking in a <code>{% set
... %}</code>
call. :issue:<code>2021</code></li>
<li>Fix dunder protocol (<code>copy</code>/<code>pickle</code>/etc)
interaction with <code>Undefined</code>
objects. :issue:<code>2025</code></li>
<li>Fix <code>copy</code>/<code>pickle</code> support for the internal
<code>missing</code> object.
:issue:<code>2027</code></li>
<li><code>Environment.overlay(enable_async)</code> is applied correctly.
:pr:<code>2061</code></li>
<li>The error message from <code>FileSystemLoader</code> includes the
paths that were
searched. :issue:<code>1661</code></li>
<li><code>PackageLoader</code> shows a clearer error message when the
package does not
contain the templates directory. :issue:<code>1705</code></li>
<li>Improve annotations for methods returning copies.
:pr:<code>1880</code></li>
<li><code>urlize</code> does not add <code>mailto:</code> to values like
<code>@a@b</code>. :pr:<code>1870</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="15206881c0"><code>1520688</code></a>
release version 3.1.6</li>
<li><a
href="90457bbf33"><code>90457bb</code></a>
Merge commit from fork</li>
<li><a
href="065334d1ee"><code>065334d</code></a>
attr filter uses env.getattr</li>
<li><a
href="033c20015c"><code>033c200</code></a>
start version 3.1.6</li>
<li><a
href="bc68d4efa9"><code>bc68d4e</code></a>
use global contributing guide (<a
href="https://redirect.github.com/pallets/jinja/issues/2070">#2070</a>)</li>
<li><a
href="247de5e0c5"><code>247de5e</code></a>
use global contributing guide</li>
<li><a
href="ab8218c7a1"><code>ab8218c</code></a>
use project advisory link instead of global</li>
<li><a
href="b4ffc8ff29"><code>b4ffc8f</code></a>
release version 3.1.5 (<a
href="https://redirect.github.com/pallets/jinja/issues/2066">#2066</a>)</li>
<li><a
href="877f6e51be"><code>877f6e5</code></a>
release version 3.1.5</li>
<li><a
href="8d58859265"><code>8d58859</code></a>
remove test pypi</li>
<li>Additional commits viewable in <a
href="https://github.com/pallets/jinja/compare/3.1.3...3.1.6">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.4 to
6.4.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's
changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.5.0
releases/v6.4.2
releases/v6.4.1
releases/v6.4.0
releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a5ecfab15e"><code>a5ecfab</code></a>
Bump version to 6.4.2</li>
<li><a
href="bc7df6bafd"><code>bc7df6b</code></a>
Fix tests with Twisted 24.7.0</li>
<li><a
href="d5ba4a1695"><code>d5ba4a1</code></a>
httputil: Fix quadratic performance of cookie parsing</li>
<li><a
href="2a0e1d13b5"><code>2a0e1d1</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3388">#3388</a>
from bdarnell/release-641</li>
<li><a
href="b7af4e8f5e"><code>b7af4e8</code></a>
Release notes and version bump for version 6.4.1</li>
<li><a
href="d65f6e71a7"><code>d65f6e7</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3387">#3387</a>
from bdarnell/chunked-parsing</li>
<li><a
href="8d721a877d"><code>8d721a8</code></a>
httputil: Only strip tabs and spaces from header values</li>
<li><a
href="7786f09f84"><code>7786f09</code></a>
Merge pull request <a
href="https://redirect.github.com/tornadoweb/tornado/issues/3386">#3386</a>
from bdarnell/curl-crlf</li>
<li><a
href="fb119c767e"><code>fb119c7</code></a>
http1connection: Stricter handling of transfer-encoding</li>
<li><a
href="b0ffc58e02"><code>b0ffc58</code></a>
curl_httpclient,http1connection: Prohibit CR and LF in headers</li>
<li>Additional commits viewable in <a
href="https://github.com/tornadoweb/tornado/compare/v6.4.0...v6.4.2">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: ibrahim halatci <ihalatci@gmail.com>
Nontrivial bump because of the following PG 15.13 commit
317aba70e
https://github.com/postgres/postgres/commit/317aba70e
Previously, when views were converted to RTE_SUBQUERY, PG15 would clear
the relid. With this patch, the relid is retained. Therefore, we add a
check on the "relkind and rtekind" to identify the converted views in
15.13.
Sister PR https://github.com/citusdata/the-process/pull/164
Using dev image sha because I encountered the libpq
symlink issue again with "-v219b87c"
_Since we've never released a Citus release that contains the commit
that introduced this bug (see #7461), we don't need to have a
DESCRIPTION line that shows up in release changelog._
From 8 valgrind test targets run for release-13.1 with PG 17.5, we got
1344 stack traces, and all but one of them were about the unsafe memory
access shown below, because this is a very hot code-path that we execute
via our drop trigger.
On main, even `make -C src/test/regress/ check-base-vg` dumps this stack
trace with PG 16/17 to src/test/regress/citus_valgrind_test_log.txt when
executing "multi_cluster_management", and this is not the case with this
PR anymore.
```c
==27337== VALGRINDERROR-BEGIN
==27337== Conditional jump or move depends on uninitialised value(s)
==27337== at 0x7E26B68: citus_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:113)
==27337== by 0x7E26CC7: master_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:153)
==27337== by 0x4BD852: ExecInterpExpr (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:758)
==27337== by 0x4BFD00: ExecInterpExprStillValid (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:1870)
==27337== by 0x51D82C: ExecEvalExprSwitchContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:355)
==27337== by 0x51D8A4: ExecProject (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:389)
==27337== by 0x51DADB: ExecResult (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeResult.c:136)
==27337== by 0x4D72ED: ExecProcNodeFirst (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:464)
==27337== by 0x4CA394: ExecProcNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:273)
==27337== by 0x4CD34C: ExecutePlan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:1670)
==27337== by 0x4CAA7C: standard_ExecutorRun (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:365)
==27337== by 0x7E1E475: CitusExecutorRun (home/onurctirtir/citus/src/backend/distributed/executor/multi_executor.c:238)
==27337== Uninitialised value was created by a heap allocation
==27337== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==27337== by 0x9AB1F7: AllocSetContextCreateInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/utils/mmgr/aset.c:438)
==27337== by 0x4E0D56: CreateExprContextInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:261)
==27337== by 0x4E0E3E: CreateExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:311)
==27337== by 0x4E10D9: ExecAssignExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:490)
==27337== by 0x51EE09: ExecInitSeqScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSeqscan.c:147)
==27337== by 0x4D6CE1: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:210)
==27337== by 0x5243C7: ExecInitSubqueryScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSubqueryscan.c:126)
==27337== by 0x4D6DD9: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:250)
==27337== by 0x4F05B2: ExecInitAppend (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeAppend.c:223)
==27337== by 0x4D6C46: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:182)
==27337== by 0x52003D: ExecInitSetOp (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSetOp.c:530)
==27337==
==27337== VALGRINDERROR-END
```
DESCRIPTION: Adds `citus_nodes` view that displays the node name, port,
role, and "active" status for nodes in the cluster.
This PR adds the `citus_nodes` view, which displays the node name, port,
role, and active status of each node in the `pg_dist_node` table.
The view is created in the `citus` schema, set to the `pg_catalog`
schema, and granted `SELECT` permission to the `PUBLIC` role.
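A minimal usage sketch; the column names and values below are assumptions
based on the description above, not verified output:
```sql
SELECT * FROM citus_nodes;
--  nodename  | nodeport |    role     | active
-- -----------+----------+-------------+--------
--  localhost |     9700 | coordinator | t
--  localhost |     9701 | worker      | t
```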
Test cases were added to the `multi_cluster_management` tests.
structs.py was modified to add whitespace as `citus_indent` required.
---------
Co-authored-by: Alper Kocatas <alperkocatas@microsoft.com>
PG15 commit d1ef5631e620f9a5b6480a32bb70124c857af4f1
and PG16 commit 695f5deb7902865901eb2d50a70523af655c3a00
disallow replacing joins with scans in queries with pseudoconstant quals.
This commit prevents the set_join_pathlist_hook from being called
if any of the join restrictions is a pseudo-constant.
So in these cases, citus has no info on the join, never sees that
the query has an outer join, and ends up producing an incorrect plan.
PG17 fixes this by commit 9e9931d2bf40e2fea447d779c2e133c2c1256ef3
Therefore, we take this extra measure here for PG versions less than 17.
hasOuterJoin can never be true when set_join_pathlist_hook is absent.
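For illustration, a minimal sketch of the affected query shape (table
names hypothetical): the `current_setting(...)` conjunct below references
no columns of the joined relations, so the planner classifies it as a
pseudoconstant join restriction, and Citus now skips
`set_join_pathlist_hook` for this join:
```sql
-- Hypothetical sketch: the second ON conjunct contains no Vars, so it is
-- a pseudoconstant clause in the join restriction list.
SELECT *
FROM dist_a a
LEFT JOIN dist_b b
  ON a.id = b.id
 AND current_setting('application_name') <> 'blocked';
```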
This PR fixes#7784 and refactors the `WrapSubquery(Query *subquery)`
function to improve clarity and correctness when handling volatile
expressions in subqueries during Citus insert-select rewriting.
### Background
The `WrapSubquery` function rewrites a query of the form:
```sql
INSERT INTO target_table SELECT ... FROM ...
```
...by wrapping the `SELECT` in a subquery:
```sql
SELECT <outer-TL>
FROM ( <subquery with volatile expressions replaced with NULL> ) citus_insert_select_subquery
```
This transformation allows:
* **Volatile expressions** (e.g., `nextval`, `now`) **not used in `GROUP
BY` or `ORDER BY`** to be evaluated **exactly once on the coordinator**.
* **Stable/immutable or sort-relevant expressions** to remain in the
worker-executed subquery.
* Placeholder `NULL`s to maintain column alignment in the inner
subquery.
### Fix Details
* Restructured the code into labeled logical sections:
1. Build wrapper query (`SELECT … FROM (subquery)`)
2. Rewrite target lists with volatility analysis
3. Assign and return updated query trees
* Preserved existing behavior, focusing on clarity and maintainability.
### How the new code handles volatile items
stage | what we look for | what we do | why
-- | -- | -- | --
scan target list once | 1. `expr_is_volatile(te->expr)` 2. `te->ressortgroupref != 0` (is the column used in GROUP BY / ORDER BY?) | decide whether to hoist or keep | we must not hoist an expression the inner query still needs for sorting/grouping, otherwise its `SortGroupClause` breaks
volatile & not used in sort/group | deep-copy the expression into the outer target list | executes once on the coordinator |
| leave a typed `NULL` placeholder (visible, not `resjunk`) in the inner target list | keeps column numbering stable for helpers that already ran (reorder, cast); the worker sends a cheap constant |
stable / immutable, or volatile but used in sort/group | keep the original expression in the inner list; outer list references it via a `Var` | workers can evaluate it safely and, if needed, the inner ORDER BY still works |
### Example
Given this query:
```sql
INSERT INTO t SELECT nextval('s'), 42 FROM generate_series(1, 2);
```
The planner rewrites it as:
```sql
SELECT nextval('s'), col2
FROM (SELECT NULL::bigint AS col1, 42 AS col2 FROM generate_series(1, 2)) citus_insert_select_subquery;
```
This ensures `nextval('s')` is evaluated only once per row on the
**coordinator**, not on each worker node, preserving correct sequence
semantics.
#### **Outer‑Var guard (`FindReferencedTableColumn`)**
Because `WrapSubquery` adds an extra query level, lots of Vars that the
old code never expected become “outer” Vars; without teaching
`FindReferencedTableColumn` to climb that extra level reliably, Citus
would intermittently reject valid foreign keys and even hit asserts.
* Re‑implemented the outer‑Var guard so that the function:
* **Walks deterministically up the query stack** when `skipOuterVars =
false` (default for FK / UNION checks). A new while‑loop copies — rather
than truncates — `parentQueryList` on each hop, eliminating
list‑aliasing that made *issue 5248* fail intermittently in parallel
regressions.
* Handles multi‑level `varlevelsup` in a single loop; never mutates the
caller’s list in place.
Issue #7709 asks for security labels on columns to be propagated, to
support the `anon` extension. Before, Citus supported security labels
on roles (#7735) and this PR adds support for propagating security
labels on tables and columns.
All scenarios that involve propagating metadata for a Citus table now
include the security labels on the table and on the columns of the
table. These scenarios are:
- When a table becomes distributed using `create_distributed_table()` or
`create_reference_table()`, its security labels (if any) are propagated.
- When a security label is defined on a distributed table, or one of its
columns, the label is propagated.
- When a node is added to a Citus cluster, all distributed tables have
their security labels propagated.
- When a column of a distributed table is dropped, any security labels
on the column are also dropped.
- When a column is added to a distributed table, security labels can be
defined on the column and are propagated.
- Security labels on a distributed table or its columns are not
propagated when `citus.enable_metadata_sync` is disabled.
Regress test `seclabel` is extended with tests to cover these scenarios.
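A hedged sketch of the now-propagated commands; the label provider name
and the label values are hypothetical, and a matching label provider must
be loaded for the statements to succeed:
```sql
-- Assumes a security label provider named "dummy" is loaded.
CREATE TABLE dist_tbl (id int, payload text);
SELECT create_distributed_table('dist_tbl', 'id');

-- Both labels are now propagated to the worker nodes:
SECURITY LABEL FOR dummy ON TABLE dist_tbl IS 'classified';
SECURITY LABEL FOR dummy ON COLUMN dist_tbl.payload IS 'top_secret';
```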
The implementation is somewhat involved because it impacts DDL
propagation of Citus tables, but can be broken down as follows:
- distributed_object_ops has `Role_SecLabel`, `Table_SecLabel` and
`Column_SecLabel` to take care of security labels on roles, tables and
columns. `Any_SecLabel` is used for all other security labels and is
essentially a nop.
- Deparser support - `DeparseRoleSecLabelStmt()`,
`DeparseTableSecLabelStmt()` and `DeparseColumnSecLabelStmt()` take care
of deparsing security label statements on roles, tables and columns
respectively.
- When reconstructing the DDL for a citus table, security labels on the
table or its columns are included by having
`GetPreLoadTableCreationCommands()` call a new function
`CreateSecurityLabelCommands()` to take care of any security labels on
the table or its columns.
- When changing a distributed table name to a shard name before running
a command locally on a worker, function `RelayEventExtendNames()` checks
for security labels on a table or its columns.
DESCRIPTION: Adds citus_stat_counters view that can be used to query
stat counters that Citus collects while the feature is enabled, which is
controlled by citus.enable_stat_counters. citus_stat_counters() can be
used to query the stat counters for the provided database oid and
citus_stat_counters_reset() can be used to reset them for the provided
database oid or for the current database if nothing or 0 is provided.
Today we don't persist stat counters on server shutdown. In other words,
stat counters are automatically reset in case of a server restart.
Details on the underlying design can be found in header comment of
stat_counters.c and in the technical readme.
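A minimal usage sketch based on the description above; the database oid
shown is hypothetical:
```sql
-- Enable collection (controlled by citus.enable_stat_counters):
SET citus.enable_stat_counters TO on;

-- Query the collected counters via the view:
SELECT * FROM citus_stat_counters;

-- Or for a specific database oid (hypothetical oid shown):
SELECT * FROM citus_stat_counters(16384);

-- Reset counters for the current database:
SELECT citus_stat_counters_reset();
```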
-------
Here are the details about what we track as of this PR:
For connection management, we have three statistics about the inter-node
connections initiated by the node itself:
* **connection_establishment_succeeded**
* **connection_establishment_failed**
* **connection_reused**
While the first two are relatively easier to understand, the third one
covers the case where a connection is reused. This can happen when a
connection was already established to the desired node, Citus decided to
cache it for some time (see citus.max_cached_conns_per_worker &
citus.max_cached_connection_lifetime), and then reused it for a new
remote operation. Here are the other important details about these
connection statistics:
1. connection_establishment_failed doesn't care about the connections
that we could establish but are lost later in the transaction. Plus, we
cannot guarantee that the connections that are counted in
connection_establishment_succeeded were not lost later.
2. connection_establishment_failed doesn't care about the optional
connections (see OPTIONAL_CONNECTION flag) that we gave up establishing
because of the connection throttling rules we follow (see
citus.max_shared_pool_size & citus.local_shared_pool_size). The reason
for this is that we didn't even try to establish these connections.
3. For the rest of the cases where a connection failed for some reason,
we always increment connection_establishment_failed even if the caller
was okay with the failure and knows how to recover from it (e.g., the
adaptive executor knows how to fall back to local execution when the
target node is the local node and it cannot establish a connection to
it). The reason is that even if it's likely that we can still serve the
operation, we still failed to establish the connection and we want to
track this.
4. Finally, the connection failures that we count in
connection_establishment_failed might be caused by any of the following
reasons and for now we prefer to _not_ further distinguish them for
simplicity:
a. remote node is down or cannot accept any more connections, or
overloaded such that citus.node_connection_timeout is not enough to
establish a connection
b. any internal Citus error that might result in preparing a bad
connection string so that libpq fails when parsing the connection string
even before actually trying to establish a connection via connect() call
c. broken citus.node_conninfo or such Citus configuration that was
incorrectly set by the user can also result in similar outcomes as in b
d. internal waitevent set / poll errors or OOM in local node
We also track two more statistics for query execution:
* **query_execution_single_shard**
* **query_execution_multi_shard**
And more importantly, both query_execution_single_shard and
query_execution_multi_shard are not only tracked for the top-level
queries but also for the subplans etc. The reason is that for some
queries, e.g., the ones that go through recursive planning, after Citus
performs the heavy work as part of subplans, the work that needs to be
done for the top-level query becomes quite straightforward. And for such
query types, it would be deceiving if we only incremented the query stat
counters for the top-level query. Similarly, for non-pushable INSERT ..
SELECT and MERGE queries, we perform separate counter increments for the
SELECT / source part of the query besides the final INSERT / MERGE
query.
Fixes#7105.
DESCRIPTION: Fixes a bug that causes omitting CASCADE clause for the
commands sent to workers for REVOKE commands on tables.
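A minimal sketch of the affected command shape (names hypothetical);
before this fix, the trailing CASCADE was omitted from the version of the
command forwarded to the workers:
```sql
-- CASCADE also revokes privileges that grantee_role granted onward.
REVOKE SELECT ON dist_tbl FROM grantee_role CASCADE;
```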
---------
Co-authored-by: ThomasC02 <thomascantrell02@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Co-authored-by: Tiago Silva <tiagos3373@gmail.com>
DESCRIPTION: Adjusts max_prepared_transactions only when it's set to
default on PG >= 16
Fixes#7711.
Change AdjustMaxPreparedTransactions to really check if
max_prepared_transactions is explicitly set by user, and only adjust
max_prepared_transactions when it is default.
This fixes 021_twophase test failure with loaded Citus library after
postgres/postgres@b39c5272.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
The retirement of the ubuntu-20.04 runner has been announced by GitHub,
with its removal scheduled for April 15, 2025.
To ensure uninterrupted execution of CI workflows, "Build & Test"
workflow can use the ubuntu-latest runner. It currently points to Ubuntu
22.04 and will automatically track supported versions going forward.
Var externParamPlaceholder is created on the stack, and its address is
used for paramFetch. Postgres code returns the address of the
externParamPlaceholder var in externParam; the code flow then leaves the
scope and dereferences a stack pointer that is out of scope.
Fixes https://github.com/citusdata/citus/issues/7941.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Var jobTypeName is created on the stack and a pointer to its value is
used in heap_form_tuple, so we have a stack use out of scope.
The issue was detected with AddressSanitizer.
Fixes#7943.
DESCRIPTION: Makes sure to prevent `INSERT INTO ... SELECT` queries involving subfield or sublink, to avoid crashes
The following query was crashing the backend:
```
INSERT INTO field_indirection_test_1 (
int_col, ct1_col.int_1,ct1_col.int_2
) SELECT 0, 1, 2;
-- crash
```
En passant, added more tests with sublink in distributed_types and found
another query with wrong behavior:
```
INSERT INTO domain_indirection_test (f1,f3.if1) SELECT 0, 1;
ERROR: could not find a conversion path from type 23 to 17619
-- not the expected ERROR
```
Fixed them by using `strip_implicit_coercions()` on the target entry
expression before checking for the presence of a subscript or
fieldstore; otherwise we fail to find the existing ones and wrongly
accept unsafe queries for execution.
DESCRIPTION: Fixes a bug in deparsing of shard query in case of
"output-table column" name conflict
If an `ORDER BY` item in `SELECT` is a bare identifier, the parser
_first seeks it as an output column name_ of the `SELECT` (for SQL92
compatibility). However, ruleutils.c is expecting the SQL99
interpretation _where such a name is an input column name_. So it's
possible to produce an incorrect display of a view in the (admittedly
pretty ill-advised) case where some other column is renamed in the
`SELECT` output list to match an `ORDER BY` column.
The `DISTINCT ON` expressions are interpreted using the same rules as
for `ORDER BY`.
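A minimal sketch of the conflict (names hypothetical): the view sorts on
the input column `t.b`, while the output list renames `a` to `b`, so a
bare `b` in the deparsed ORDER BY would be re-parsed under the SQL92 rule
as the output column, i.e. as `t.a`:
```sql
CREATE TABLE t (a int, b int);
-- Sorts on the input column t.b, but an output column is also named "b".
-- An unqualified "ORDER BY b" in the deparsed text would wrongly resolve
-- to the output column (a AS b), so the deparsed query needs "ORDER BY t.b".
CREATE VIEW v AS SELECT a AS b FROM t ORDER BY t.b;
```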
We had an issue reported that actually uses `DISTINCT ON`: #7684
Since Citus uses ruleutils deparsing logic to create the shard queries,
it would not table-qualify the column names as needed.
PG17 fixed this https://github.com/postgres/postgres/commit/a7eb633563c
by table-qualifying such names in the dumped view text. Therefore,
Citus doesn't reproduce the issue in PG17, since PG17 table-qualifies
the column names when needed, and the produced shard queries are
correct.
This PR applies the PG17 patch to `ruleutils_15.c` and `ruleutils_16.c`.
Even though we generally try to avoid modifying the ruleutils files, in
this case we are applying a Postgres patch that `ruleutils_17.c` already
has:
897d996b8f
Thanks @c2main for your discussion and idea in the issue.
Fixes#7684
DESCRIPTION: Adds citus_is_primary_node() UDF to determine if the
current node is a primary node in the cluster.
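Minimal usage sketch:
```sql
-- Returns true if the current node is a primary node in pg_dist_node.
SELECT citus_is_primary_node();
```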
---------
Co-authored-by: German Eichberger <geeichbe@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
This is a Merge commit that includes all changes from
release-13.0 branch into main branch.
This Merge commit adds PG17 support and drops PG14 support
from the main branch.
Local steps to open this PR and
include `release-13.0` commits to the `main` branch:
```bash
git checkout release-13.0
git checkout -b naisila/merge_13_0
git rebase main
```
Understandably, the rebase step was a resolve-conflict pain. On top of
resolving some conflicts, I had to add some more commits to this PR such
that the main branch compiles and runs as we want it to. Mainly there
were PG17 additions or PG14 subtractions.
I chose this approach as it cleanly stacks _any new_ `release-13.0`
changes on top of the current main branch. Only new ones, not stuff
there is already on main (we had backported several commits from main to
`release-13.0`, so we ignore those in this PR). The idea is to merge all
these commits in the main branch, not squash and merge.
Note 0: We should remove PG14 tests from required tests as this PR
will drop PG14 support in the main branch as well.
Note 1: `check-style` fails because it considers
`src/backend/distributed/sql/citus--12.1-1--12.2-1.sql` as deleted, and
`src/backend/distributed/sql/downgrades/citus--12.2-1--12.1-1.sql` as
renamed. The reason is that the downgrade script actually stayed 98% the
same therefore was considered a rename. I don't think we can fix this.
Note 2:
I tried the following approach as well:
```bash
git checkout main
git checkout -b naisila/merge_13_0
git merge release-13.0
```
However, this approach was a mess as it included several irrelevant
commits that differ between the main and `release-13.0` branch which
just make this PR difficult to understand. For reference, I have pushed
a different branch with that approach.
https://github.com/citusdata/citus/tree/naisila/merge_13_0_first_try As
you can see it's 156 commits ahead of main, with irrelevant commits such
as
1b4d7a51f8.
The reason is that it's including commits from the very first point of
divergence between `main` and `release-12.1` branch (because we had
cloned `release-13.0` branch from `release-12.1` branch, not `main`).
This commit also has to do with the renaming of
daticulocale to datlocale.
Relevant PG commit:
f696c0cd5f299f1b51e214efc55a22a782cc175d
f696c0cd5f
Keeping this commit separate from the previous one because
these changes will be different once we drop PG15 support.
For now, I renamed pg_ge_15_options to pg_ge_15_17_options
and together with it I changed the meaning of the variable.
However, when we drop PG14 support, we will use pg_ge_17_options
and delete pg_ge_15_options altogether.
DESCRIPTION: Fixes a bug with `UPDATE SET (...) = (SELECT
some_func(),... )` (#7676)
Citus was checking for presence of sublink, but forgot to manage
multiexpr while evaluating clauses during planning. At this stage (citus
planner), it's not always possible to call PostgreSQL code because the
tree is not yet ready for PostgreSQL pure executor.
Fixes https://github.com/citusdata/citus/issues/7676.
Fixed by adding a new function to check sublink or multiexpr in the
tree.
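A minimal sketch of the affected form (`some_func` is from the title
above; the table and column names are hypothetical): a multi-column SET
fed by a single subquery, which the parser represents as a multiexpr
sublink:
```sql
UPDATE dist_tbl
SET (col_a, col_b) = (SELECT some_func(), 42)
WHERE id = 1;
```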
---------
Co-authored-by: Colm <colmmchugh@microsoft.com>
## Enhance `AddInsertSelectCasts` for Identity Columns
This PR fixes#7887 and improves the behavior of partial inserts into
**identity columns** by modifying the **`AddInsertSelectCasts`**
function. Specifically, we introduce **special-case handling** for
`nextval(...)` calls (represented in the parse tree as `NextValueExpr`)
to ensure that if the identity column’s declared type differs from
`nextval`’s default return type (`int8`), we **cast** the expression
properly. This prevents mismatches like `int8` → `int4` from causing
“invalid string enlargement” errors or other type-related failures.
When `INSERT ... SELECT` is processed, `AddInsertSelectCasts` reconciles
each target column’s type with the corresponding SELECT expression’s
type. Historically, for identity columns that rely on `nextval(...)`, we
can end up with a mismatch:
- `nextval` returns **`int8`**,
- The identity column might be **`int4`**, **`bigint`**, or another
integer type.
Without a correct cast, Postgres or Citus can produce plan-time or
runtime errors. By **detecting** `NextValueExpr` and applying a cast to
the column’s type, the final plan ensures consistent insertion without
errors.
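A hedged reproduction sketch (names hypothetical): the identity column is
declared `int`, while the injected `nextval(...)` returns `int8`, so the
plan needs an explicit cast:
```sql
CREATE TABLE items (id int GENERATED BY DEFAULT AS IDENTITY, val text);
SELECT create_distributed_table('items', 'val');

-- Partial insert: "id" is filled from nextval() (int8) and must be cast
-- back to the declared column type (int4).
INSERT INTO items (val) SELECT 'x' FROM generate_series(1, 3);
```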
## What Changed
1. **Check for `NextValueExpr`**:
In `AddInsertSelectCasts`, we now have a code block:
```c
if (IsA(selectEntry->expr, NextValueExpr))
{
Oid nextvalType = GetNextvalReturnTypeCatalog();
...
// If (targetType != nextvalType), build a cast from int8 -> targetType
}
else
{
// fallback to generic mismatch logic
}
```
This short-circuits any expression that’s a `nextval(...)` call, letting
us explicitly cast to the correct type.
2. **Fallback Generic Logic**:
If it isn’t a `NextValueExpr` (i.e. a normal column or expression
mismatch), we still rely on the existing path that compares `sourceType`
vs. `targetType` and calls `CastExpr(...)` if they differ.
3. **`GetNextvalReturnTypeCatalog`**:
We added or refined a helper function to confirm that `nextval` returns
`int8`, or do a `LookupFuncName("nextval", ...)` to discover the
function’s return type from `pg_proc`—making it robust if future changes
happen.
## Benefits
- **Partial inserts** into identity columns no longer fail with type
mismatches.
- When `nextval` yields `int8` but the identity column is `int4` (or
another type), we properly cast to the column’s type in the plan.
- Preserves the **existing** approach for other columns—only identity
calls get the specialized `NextValueExpr` logic.
## Testing
- Extended `generatedidentity.sql` test scenario to cover partial
inserts into both `GENERATED ALWAYS` and `GENERATED BY DEFAULT` identity
columns, including tests for the `OVERRIDING SYSTEM VALUE` clause and
partial inserts referencing foreign-key columns.
DESCRIPTION: Fixes deadlock with transaction recovery that is possible
during Citus upgrades.
Fixes#7875.
This commit addresses two interrelated deadlock issues uncovered during Citus
upgrades:
1. Local Deadlock:
- **Problem:**
In `RecoverWorkerTransactions()`, a new connection is created for each worker
node to perform transaction recovery by locking the
`pg_dist_transaction` catalog table until the end of the transaction. When
`RecoverTwoPhaseCommits()` calls this function for each worker node, the order
of acquiring locks on `pg_dist_authinfo` and `pg_dist_transaction` can alternate.
This reversal can lead to a deadlock if any concurrent process requires locks on
these tables.
- **Fix:**
Pre-establish all worker node connections upfront so that
`RecoverWorkerTransactions()` operates with a single, consistent connection.
This ensures that locks on `pg_dist_authinfo` and `pg_dist_transaction` are always
acquired in the correct order, thereby preventing the local deadlock.
2. Distributed Deadlock:
- **Problem:**
After resolving the local deadlock, a distributed deadlock issue emerges. The
maintenance daemon calls `RecoverWorkerTransactions()` on each worker node—
including the local node—which leads to a complex locking sequence:
- A RowExclusiveLock is taken on the `pg_dist_transaction` table in
`RecoverWorkerTransactions()`.
- An update extension then tries to acquire an AccessExclusiveLock on the same
table, getting blocked by the RowExclusiveLock.
- A subsequent query (e.g., a SELECT on `pg_prepared_xacts`) issued using a
separate connection on the local node gets blocked due to locks held during a
call to `BuildCitusTableCacheEntry()`.
- The maintenance daemon waits for this query, resulting in a circular wait and
stalling the entire cluster.
- **Fix:**
Avoid cache lookups for internal PostgreSQL tables by implementing an early bailout
for relation IDs below `FirstNormalObjectId` (system objects). This eliminates
unnecessary calls to `BuildCitusTableCache`, reducing lock contention and mitigating
the distributed deadlock.
Furthermore, this optimization improves performance in fast
connect→query_catalog→disconnect cycles by eliminating redundant
cache creation and lookups.
3. Also reverts the commit that disabled the relevant test cases.
DESCRIPTION: fix a planning error caused by a redundant WHERE clause
Fix a Citus planning glitch that occurs in a DML query when the WHERE
clause of the query is of the form:
` WHERE true OR <expression with 1 or more citus tables> `
and this is the only place in the query referencing a citus table.
Postgres' standard planner transforms the WHERE clause to:
` WHERE true `
So the query now has no citus tables, confusing the Citus planner as
described in issues #7782 and #7783. The fix is to check, after Postgres
standard planner, if the Query has been transformed as shown, and re-run
the check of whether or not the query needs distributed planning.
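A minimal sketch of the affected shape (table names hypothetical): after
Postgres' standard planner folds the qual to `WHERE true`, the query no
longer references any Citus table:
```sql
-- The subquery on dist_tbl is the only Citus table reference here.
UPDATE local_tbl
SET val = 1
WHERE true OR EXISTS (SELECT 1 FROM dist_tbl);
```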
This PR fixes an issue #7891 in the Citus planner where an `UPDATE` on a
local table with a subquery referencing a reference table could produce
a 0-task plan. Historically, the planner sometimes failed to detect that
both the target and referenced tables were effectively "local,"
assigning `INVALID_SHARD_ID` and yielding a no-op plan.
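A hedged reproduction sketch (table names hypothetical), mirroring the
scenario in #7891:
```sql
CREATE TABLE local_tbl (id int, val int);
CREATE TABLE ref_tbl (id int);
SELECT create_reference_table('ref_tbl');

-- Previously this could yield a 0-task (no-op) plan:
UPDATE local_tbl
SET val = 1
WHERE id IN (SELECT id FROM ref_tbl);
```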
### Root Cause
- In the Citus router logic (`PlanRouterQuery`), we relied on `shardId`
to determine whether a query should be routed to a single shard.
- If `shardId == INVALID_SHARD_ID`, but we also had not marked the query
as a “local table modification,” the code path would produce zero tasks.
- Local + reference tables do not require multi-shard routing. Failing
to detect this “purely local” scenario caused Citus to incorrectly route
to zero tasks.
### Changes
**Enhanced Local Table Detection**
- Updated `IsLocalTableModification` and related checks to consider both
local and reference tables as “local” for planning, preventing the
0-task scenario.
- Expanded `ContainsOnlyLocalOrReferenceTables` to return true if there
are no fully distributed tables in the query.
**Added Regress Test**
- Introduced a new regress test (`issue_7891.sql`) which reproduces the
scenario.
- Verifies we get a valid single- or local-task plan rather than a
0-task plan.
DESCRIPTION: Ensure that a MERGE command on a distributed table with a
`WHEN NOT MATCHED BY SOURCE` clause runs against all shards of the
distributed table.
The Postgres MERGE command updates a table using a table or a query as a
data source. It provides three ways to match the target table with the
source: `WHEN MATCHED` means that there is a row in both the target and
source; `WHEN NOT MATCHED` means that there is a row in the source that
has no match (is not present) in the target; and, as of PG17, `WHEN NOT
MATCHED BY SOURCE` means that there is a row in the target that has no
match in the source.
In Citus, when a MERGE command updates a distributed table using a
local/reference table or a distributed query as source, that source is
repartitioned, and for each repartitioned shard that has data (i.e. 1 or
more rows) the MERGE is run against the corresponding distributed table
shard. Suppose the distributed table has 32 shards, and the source
repartitions into 4 shards that have data, with the remaining 28 shards
being empty; then the MERGE command is performed on the 4 corresponding
shards of the distributed table. However, the semantics of `WHEN NOT
MATCHED BY SOURCE` are that the specified action must be performed on
the target for each row in the target that is not in the source; so if
the source is empty, all target rows should be updated. To see this,
consider the following MERGE command:
```
MERGE INTO target AS t
USING source AS s ON t.id = s.id
WHEN NOT MATCHED BY SOURCE THEN UPDATE SET col1 = 100
```
If the source has zero rows then every row in the target is updated such
that its col1 value is 100. Currently in Citus a MERGE on a distributed table
with a local/reference table or a distributed query as source ignores
shards of the distributed table when the corresponding shard of the
repartitioned source has zero rows. However, if the MERGE command
specifies a `WHEN NOT MATCHED BY SOURCE` clause, then the MERGE should
be performed on all shards of the distributed table, to ensure that the
specified action is performed on the target for each row in the target
that is not in the source. This PR enhances Citus MERGE execution so
that when a repartitioned source shard has zero rows, and the MERGE
command specifies a `WHEN NOT MATCHED BY SOURCE` clause, the MERGE is
performed against the corresponding shard of the distributed table using
an empty (zero row) relation as source, by generating a query of the
form:
```
MERGE INTO target_shard_0002 AS t
USING (SELECT id FROM (VALUES (NULL) ) source_0002(id) WHERE FALSE) AS s ON t.id = s.id
WHEN NOT MATCHED BY SOURCE THEN UPDATE SET col1 = 100
```
This works because each row in the target shard will be updated, and
`WHEN MATCHED` and `WHEN NOT MATCHED`, if specified, will be no-ops
because the source has zero rows.
Implementing this when the source is a local or reference table involves
teaching function `ExecuteSourceAtCoordAndRedistribution()` in
`merge_executor.c` to not prune tasks when the query has `WHEN NOT
MATCHED BY SOURCE` but to instead replace the task's query to one that
uses an empty relation as source. And when the source is a distributed
query, function
`ExecuteMergeSourcePlanIntoColocatedIntermediateResults()` (also in
`merge_executor.c`) instead of skipping empty tasks now generates a
query that uses an empty relation as source for the corresponding target
shard of the distributed table, but again only when the query has `WHEN
NOT MATCHED BY SOURCE`. A new function `BuildEmptyResultQuery()` is
added to `recursive_planning.c` and it is used by both the
aforementioned functions in `merge_executor.c` to build an empty
relation to use as the source. It applies the appropriate type to each
column of the empty relation so the join with the target makes sense to
the query compiler.
DESCRIPTION: Fixes a crash in columnar custom scan that happens when a
columnar table is used in a join. Fixes issue #7647.
Co-authored-by: Ольга Сергеева <ob-sergeeva@it-serv.ru>
DESCRIPTION: Fixes a crash in left outer joins that can happen when
there is an aggregate on a column from the inner side of the join.
Fix the SEGV seen in #7787 and #7899; it occurs because a column in the
targetlist of a worker subquery can contain a non-empty varnullingrels
field if the column is from the inner side of a left outer join. The
issue can also occur with the columns in the HAVING clause, and this is
also tested in the fix. The issue was triggered by the introduction of
the varnullingrels to Vars in Postgres 16 (2489d76c)
There is a related issue, #7705, where a non-empty varnullingrels was
incorrectly copied into the query tree for the combine query. Here, a
non-empty varnullingrels field of a var is incorrectly copied into the
query tree for a worker subquery.
The regress file from #7705 is used (and renamed) to also test this
(#7787). An alternative test output file is required for Postgres 15
because of an optimization to DISTINCT in Postgres 16 (1349d2790bf).
DESCRIPTION: Drops PG14 support
1. Remove `"$version_num" != 'xx'` from the configure file
2. Delete all `PG_VERSION_NUM = PG_VERSION_XX` references in the code
3. Look at the pg_version_compat.h file; remove all _compat functions
etc. defined specifically for PGXX differences
4. Delete all `PG_VERSION_NUM >= PG_VERSION_(XX+1)` and
`PG_VERSION_NUM < PG_VERSION_(XX+1)` ifs in the codebase
5. Delete the ruleutils_xx.c file
6. Clean up the normalize.sed file from pg14-specific lines
7. Delete all alternative output files for that particular PG version;
the server_version_ge variable helps here
As of this commit, after recovering the remote transactions, now we release the lock
on pg_dist_transaction while closing it to avoid deadlocks that might occur because
of trying to acquire a lock on pg_dist_authinfo while holding a lock on
pg_dist_transaction. Such a scenario can only cause a deadlock if another transaction
is trying to acquire a strong lock on pg_dist_transaction while holding a lock on
pg_dist_authinfo. As of today, we (implicitly) acquire a strong lock on
pg_dist_transaction only when upgrading Citus to 11.3-1 and this happens when creating
a REPLICA IDENTITY on pg_dist_transaction.
And regardless of the code-path we are in, it should be okay to release the lock there
because all we do after that point is to abort the prepared transactions that are not
part of an in-progress distributed transaction and releasing the lock before doing so
should be just fine.
This also changes the blocking behavior between citus_create_restore_point and the
transaction recovery code-path in the sense that now citus_create_restore_point doesn't
block until transaction recovery completes aborting the prepared transactions that are
not part of an in-progress distributed transaction. However, this should be fine because
this was possible even before, e.g., if transaction recovery fails to open a remote
connection to a node.
This pull request addresses Issue #7846, where specific MERGE queries on
non-distributed and distributed tables can result in crashes in certain
scenarios. The issue stems from the usage of the `pg_class` catalog
table and the `FilterShardsFromPgclass` function in Citus. This function
goes through the query's jointree to hide the shards. However, in PG17,
MERGE's join quals are in a separate structure called
`mergeJoinCondition`. Therefore, FilterShardsFromPgclass was not
filtering correctly in a `MERGE` command that involves `pg_class`. To
fix the issue, we handle `mergeJoinCondition` separately in PG17.
Relevant PG commit:
0294df2f1f
**Non-Distributed Tables:**
A MERGE query involving a non-distributed table using
`pg_catalog.pg_class` as the source may execute successfully but needs
testing to ensure stability.
**Distributed Tables:**
Performing a MERGE on a distributed table using `pg_catalog.pg_class` as
the source raises an error:
`ERROR: MERGE INTO a distributed table from Postgres table is not yet
supported`
However, in some cases, this can lead to a server crash if the
unsupported operation is not properly handled.
This is the test output from the same test conducted prior to the code
changes being implemented.
```
-- Issue #7846: Test crash scenarios with MERGE on non-distributed and distributed tables
-- Step 1: Connect to a worker node to verify shard visibility
\c postgresql://postgres@localhost::worker_1_port/regression?application_name=psql
SET search_path TO pg17;
-- Step 2: Create and test a non-distributed table
CREATE TABLE non_dist_table_12345 (id INTEGER);
-- Test MERGE on the non-distributed table
MERGE INTO non_dist_table_12345 AS target_0
USING pg_catalog.pg_class AS ref_0
ON target_0.id = ref_0.relpages
WHEN NOT MATCHED THEN DO NOTHING;
SSL SYSCALL error: EOF detected
connection to server was lost
```
Regress test tdigest_aggregate_support has been failing since at least
Citus 12.0, when the tdigest extension is installed in Postgres. This
appears to be because of an omission by commit 03832f3 and a change in
the implementation of Postgres' random() function (pg commit
[d4f109e4a](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d4f109e4a)).
To reproduce the test diff:
- Checkout [tdigest ](https://github.com/tvondra/tdigest)and run `make;
make install`
- In citus regress directory run `make check-multi` or
`./citus_tests/run_test.py tdigest_aggregate_support`
There are two parts to this commit:
1. Revert `Output: xxxxx` in EXPLAIN VERBOSE. Citus commit fe4ac51
normalized EXPLAIN VERBOSE output because of a change between pg12 and
pg13. When pg12 support was no longer required, the rule was removed
from normalize.sed and `Output: xxxx` was reverted in the impacted
regress output files (03832f3), but `tdigest_aggregate_support` was
omitted.
2. Adjust the query results; the tdigest_aggregate_support test file has
a comment _verifying results - should be stable due to seed while
inserting the data, if failure due to data these queries could be
removed or check for certain ranges_, but the result values in this
commit are consistent across Citus 12.0 (PG 15), Citus 12.1 (PG 16) and
Citus 13.0 (PG 17), i.e., ever since Postgres changed their
[implementation of
random](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d4f109e4a),
so the proposal is to go with these results.
DESCRIPTION: Propagates MERGE ... WHEN NOT MATCHED BY SOURCE
It seems like not much needs to be done here.
`get_merge_query_def` from `ruleutils_17` is updated with "WHEN NOT
MATCHED BY SOURCE" therefore `deparse_shard_query` parses the merge
query for execution on the shard correctly.
Relevant PG commit:
https://github.com/postgres/postgres/commit/0294df2f1
DESCRIPTION: Propagates MEMORY and SERIALIZE options of EXPLAIN
The options for `MEMORY` can be true or false. Default is false.
The options for `SERIALIZE` can be none, text or binary. Default is
none.
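Usage sketch (the distributed table name is hypothetical); note that
SERIALIZE requires ANALYZE:
```sql
EXPLAIN (MEMORY) SELECT * FROM dist_tbl;
EXPLAIN (ANALYZE, SERIALIZE) SELECT * FROM dist_tbl;        -- same as SERIALIZE TEXT
EXPLAIN (ANALYZE, SERIALIZE BINARY) SELECT * FROM dist_tbl;
```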
I referred to how we added support for WAL option in this PR [Support
EXPLAIN(ANALYZE, WAL)](https://github.com/citusdata/citus/pull/4196).
For the tests however, I used the same tests as Postgres, not like the
tests in the WAL PR. I used exactly the same tests as Postgres does, I
simply distributed the table beforehand. See below the relevant Postgres
commits from where you can see the tests added as well:
- [Add EXPLAIN
(MEMORY)](https://github.com/postgres/postgres/commit/5de890e36)
- [Invent SERIALIZE option for
EXPLAIN.](https://github.com/postgres/postgres/commit/06286709e)
This PR required a lot of copying of Postgres static functions regarding
how `EXPLAIN` works for `MEMORY` and `SERIALIZE` options. Specifically,
these copy-pastes were required for updating `ExplainWorkerPlan()`
function, which is in fact based on postgres' `ExplainOnePlan()`:
```C
/* copied from explain.c to update ExplainWorkerPlan() in citus according to ExplainOnePlan() in postgres */
#define BYTES_TO_KILOBYTES(b)
typedef struct SerializeMetrics
static bool peek_buffer_usage(ExplainState *es, const BufferUsage *usage);
static void show_buffer_usage(ExplainState *es, const BufferUsage *usage);
static void show_memory_counters(ExplainState *es, const MemoryContextCounters *mem_counters);
static void ExplainIndentText(ExplainState *es);
static void ExplainPrintSerialize(ExplainState *es, SerializeMetrics *metrics);
static SerializeMetrics GetSerializationMetrics(DestReceiver *dest);
```
_Note_: it looks like we were missing some `buffers` option details as
well. I put them together with the memory option, like the code in
Postgres explain.c, as I didn't want to change the copied code. However,
I tested locally and there is no big deal in previous Citus versions,
and you can also see that existing Citus tests with `buffers true`
didn't change. Therefore, I prefer not to backport "buffers" changes to
previous versions.
This PR adds regression tests to verify REINDEX support with event
triggers. The tests validate trigger execution, shard placement
consistency, and distributed index rebuilding without disruption.
This PR adds a regression test to verify the behavior of access methods
for partitioned and distributed tables, including:
- Creating partitioned tables with heap.
- Distributing tables using create_distributed_table.
- Switching access methods to columnar with ALTER TABLE.
- Validating access method inheritance for new partitions.
Relevant PG17 commit: https://github.com/postgres/postgres/commit/374c7a229
DESCRIPTION: Adds JSON_TABLE() support
PG17 has added basic `JSON_TABLE()` functionality
`JSON_TABLE()` allows `JSON` data to be converted into a relational view
and thus used, for example, in a `FROM` clause, like other tabular data.
We treat `JSON_TABLE` the same as correlated functions (e.g., recurring
tuples). In the end, for multi-shard `JSON_TABLE` commands, we apply the
same restrictions as reference tables (e.g., cannot perform a lateral
outer join when a distributed subquery references a (reference
table)/(json table) etc.)
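A hedged sketch of the supported shape; the table name, the jsonb column
`details`, and the JSON paths are hypothetical:
```sql
-- d.details is assumed to be a jsonb column on a distributed table.
SELECT jt.*
FROM my_dist_table d,
     JSON_TABLE(d.details, '$.items[*]'
                COLUMNS (item_name text PATH '$.name',
                         item_qty  int  PATH '$.qty')) AS jt;
```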
Relevant PG17 commits:
[basic JSON
table](https://github.com/postgres/postgres/commit/de3600452), [nested
paths in json
table](https://github.com/postgres/postgres/commit/bb766cde6)
Onder had previously added json table support for PG15BETA1, but we
reverted that commit because json table was reverted in PG15.
ce7f1a530f
Previous relevant PG15Beta1 commit:
https://github.com/postgres/postgres/commit/4e34747c8
Therefore, I referred to Onder's commit for this commit as well, with a
few changes due to some differences between PG15/PG17:
1) In PG15Beta1, we had also `PLAN` clauses for `JSON_TABLE`
https://github.com/postgres/postgres/commit/fadb48b00, and Onder's
commit includes tests for those as well. However, `PLAN` nodes are _not_
added in PG17. Therefore, I didn't include the `json_table_select_only`
test, which had mostly queries involving `PLAN`. I only included the
last query from that test.
2) In PG15 timeline (Citus 11.1), we didn't support outer joins where
the outer rel is a recurring one and the inner one is a non-recurring
one. However, [Onur added support for that one in Citus
11.2](https://github.com/citusdata/citus/pull/6512), therefore I updated
the tests from Onder's commit accordingly.
3) PG17 json table has nested paths and columns, therefore I added a
test
with a distributed table, which is exactly the same as the one in
sqljson_jsontable in PG17.
https://github.com/postgres/postgres/commit/bb766cde6
This pull request also adds some basic tests on validation of SQL/JSON
constructor functions JSON(), JSON_SCALAR(), and JSON_SERIALIZE(),
and also SQL/JSON query functions JSON_EXISTS(), JSON_QUERY(), and
JSON_VALUE(). The relevant PG commits are the following:
[JSON(), JSON_SCALAR(),
JSON_SERIALIZE()](https://github.com/postgres/postgres/commit/03734a7fe)
[JSON_EXISTS(), JSON_VALUE(),
JSON_QUERY()](https://github.com/postgres/postgres/commit/6185c9737)
PG17 has added support for AT LOCAL operator
it converts the given time type to
time stamp with the session's TimeZone value as time zone. Here we add
tests that validate that we can use AT LOCAL at INSERT commands
Relevant PG commit:
https://github.com/postgres/postgres/commit/97957fdba
With the tests, we verify that we evaluate AT LOCAL at the coordinator
and then perform the insert remotely.
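A minimal sketch of the tested pattern (table name hypothetical): the
`AT LOCAL` expression is evaluated on the coordinator and the resulting
value is inserted remotely:
```sql
INSERT INTO events (happened_at)
VALUES (TIMESTAMP '2024-01-01 12:00:00' AT LOCAL);
```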
PG17 added support for
ALTER TABLE ... ALTER COLUMN ... SET EXPRESSION.
Relevant PG commit: https://github.com/postgres/postgres/commit/5d06e99a3
We currently don't support propagating this command for Citus tables.
It is added to future work.
This PR disallows `ALTER TABLE ... ALTER COLUMN ... SET EXPRESSION` on
all Citus table types (local, distributed, and partitioned distributed)
by adding an error check in `ErrorIfUnsupportedAlterTableStmt`. A new
regression test verifies that each table type fails with a consistent
error message when attempting to set an expression.
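A hedged sketch of the newly rejected command (names hypothetical; the
exact error text is whatever `ErrorIfUnsupportedAlterTableStmt` raises):
```sql
CREATE TABLE dist_gen (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
SELECT create_distributed_table('dist_gen', 'a');

-- Errors out: not propagated for Citus tables yet.
ALTER TABLE dist_gen ALTER COLUMN b SET EXPRESSION AS (a * 3);
```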
PG17 introduced ALTER TABLE ... SET ACCESS METHOD DEFAULT
This PR introduces and enforces an error check preventing ALTER TABLE
... SET ACCESS METHOD DEFAULT on both Citus local tables (added via
citus_add_local_table_to_metadata) and distributed/partitioned
distributed tables. The regression tests now demonstrate that each table
type raises an error advising users to explicitly specify an access
method, rather than relying on DEFAULT. This ensures consistent behavior
across local and distributed environments in Citus.
The reason why we currently don't support this is that we can't simply
propagate the command as it is, because the default table access method
may be different across Citus cluster nodes.
Relevant PG commit:
https://github.com/postgres/postgres/commit/d61a6cad6
These options already existed in PG17, and we support them and have
tests for them in `multi_copy.sql`.
In PG17, their capability was extended to specify ALL columns at once
using *.
Citus performs the COPY correctly, as is validated by the added tests in
this PR.
Relevant PG commit:
https://github.com/postgres/postgres/commit/f6d4c9cf1
Copy-pasting from Postgres documentation what these options do, such
that the reviewer may better understand the tests added:
`FORCE_NOT_NULL`: Do not match the specified columns' values against the
null string. In the default case where the null string is empty, this
means that empty values will be read as zero-length strings rather than
nulls, even when they are not quoted. If * is specified, the option will
be applied to all columns. This option is allowed only in `COPY FROM`,
and only when using `CSV` format.
`FORCE_NULL`: Match the specified columns' values against the null
string, even if it has been quoted, and if a match is found set the
value to `NULL`. In the default case where the null string is empty,
this converts a quoted empty string into `NULL`. If * is specified, the
option will be applied to all columns. This option is allowed only in
`COPY FROM`, and only when using `CSV` format.
`FORCE_NULL` and `FORCE_NOT_NULL` can be used simultaneously on the same
column. This results in converting quoted null strings to null values
and unquoted null strings to empty strings.
Explain it to me like I'm a 5-year-old, for a text column:
`FORCE_NULL` looks for empty strings and registers them as `NULL`
`FORCE_NOT_NULL` looks for null values and registers them as empty
strings.
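Usage sketch with the PG17 `*` form (table name hypothetical):
```sql
-- Read values matching the null string as empty strings, for all columns:
COPY dist_tbl FROM STDIN WITH (FORMAT csv, FORCE_NOT_NULL *);
-- Convert quoted empty strings to NULL, for all columns:
COPY dist_tbl FROM STDIN WITH (FORMAT csv, FORCE_NULL *);
```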
PG17 added the new ON_ERROR option for COPY FROM. When this option is
specified, COPY skips soft errors and
continues copying.
Relevant PG commits:
-- https://github.com/postgres/postgres/commit/9e2d87011
-- https://github.com/postgres/postgres/commit/b725b7eec
I tried it locally with Citus tables.
Without further implementation, it doesn't work correctly.
Therefore, we error out for now, and add it to future work.
PG17 also added log_verbosity option, which controls the
amount of messages emitted during processing. This is
currently used in COPY FROM when ON_ERROR option is set to
ignore. Therefore, we error out for this option as well.
Relevant PG17 commit:
https://github.com/postgres/postgres/commit/f5a227895
DESCRIPTION: Propagates ALTER INDEX ALTER COLUMN SET STATISTICS DEFAULT
We automatically support this. Adding tests only.
We currently don't support ALTER TABLE ALTER COLUMN SET STATISTICS
Relevant PG commit:
https://github.com/postgres/postgres/commit/4f622503d
We are using `release-13.0` branch for both development and release, to
deliver PG17 support in Citus.
Afterwards, we will (probably) merge this branch into main.
Some potential changes for main branch, after we are done working on
release-13.0:
- Merge changes from `release-13.0` to `main`
- Figure out what changes were there on 12.2, move them to 13.1 version.
In a nutshell: rename `12.1--12.2` to `13.0--13.1` and fix issues.
- Set version to 13.1devel
In earlier versions of PostgreSQL, exclusion constraints were not
allowed on partitioned tables; this is why the error in the regression
test (ERROR: exclusion constraints are not supported on partitioned
tables) was raised on PostgreSQL 16. In PostgreSQL 17, exclusion
constraints are now allowed on partitioned tables, which is why the
error no longer appears when we attempt to add an exclusion constraint.
The constraint exclusion mechanism, described in the documentation,
relies on CHECK constraints to decide which partitions or child tables
need to be queried.
[CHECK
constraints](https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION)
```diff
-- Check "ADD EXCLUDE" errors out for partitioned table since the postgres does not allow it
ALTER TABLE AT_AddConstNoName.citus_local_partitioned_table ADD EXCLUDE(partition_col WITH =);
-ERROR: exclusion constraints are not supported on partitioned tables
-- Check "ADD CHECK"
SET client_min_messages TO DEBUG1;
ALTER TABLE AT_AddConstNoName.citus_local_partitioned_table ADD CHECK (dist_col > 0);
DEBUG: the constraint name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: longlonglonglonglonglonglonglonglonglonglonglo_537570f5_5_check
DEBUG: verifying table "longlonglonglonglonglonglonglonglonglonglonglonglonglonglongabc"
DEBUG: verifying table "p1"
RESET client_min_messages;
SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'citus_local_partitioned_table';
conname
--------------------------------------------------
+ citus_local_partitioned_table_partition_col_excl
citus_local_partitioned_table_check
-(1 row)
+(2 rows)
```
This PR enhances `isolation_multiuser_locking.spec` test compatibility
across multiple PostgreSQL versions by handling differences in error
messages and behavior. Key updates include:
- **Error Message Handling:** Adjustments to manage version-specific
error messages, ensuring consistent test results.
- Modified to address variations in locking behavior across PostgreSQL
versions, ensuring test stability in multiuser scenarios.
- **REINDEX Behavior Adjustment**: This PR accounts for a behavioral
change introduced in PostgreSQL by commit ecb0fd337, which alters how
REINDEX interacts with system catalogs.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ecb0fd337
---------
Co-authored-by: Mehmet YILMAZ <mehmet.yilmaz@microsoft.com>
There is a crash when running vanilla tests because of the
`citus.hide_citus_dependent_objects` GUC. We turn on this GUC only for
the pg vanilla tests. This GUC runs the following function
`HideCitusDependentObjectsOnQueriesOfPgMetaTables`. This function
doesn't take into account the new `mergeJoinCondition`. I rewrote the
function such that it checks for merge join conditions as well.
Relevant PG commit:
https://github.com/postgres/postgres/commit/0294df2f1
The crash could be reproduced locally like the following:
```SQL
SET citus.hide_citus_dependent_objects TO on;
CREATE OR REPLACE FUNCTION
pg_catalog.is_citus_depended_object(oid,oid)
RETURNS bool
LANGUAGE C
AS 'citus', $$is_citus_depended_object$$;
-- try a system catalog
MERGE INTO pg_class c
USING (SELECT 'pg_depend'::regclass AS oid) AS j
ON j.oid = c.oid
WHEN MATCHED THEN
UPDATE SET reltuples = reltuples + 1
RETURNING j.oid;
CREATE VIEW classv AS SELECT * FROM pg_class;
MERGE INTO classv c
USING pg_namespace n
ON n.oid = c.relnamespace
WHEN MATCHED AND c.oid = 'pg_depend'::regclass THEN
UPDATE SET reltuples = reltuples - 1
RETURNING c.oid;
-- crash happens here
```
PostgreSQL 17 seems to have introduced improvements in how correlated
subqueries are handled during plan generation. Instead of generating a
trivial subplan with WHERE true, it now applies more specific filtering
(WHERE (key = 5)), which makes the execution plan more efficient.
https://github.com/postgres/postgres/commit/b262ad44
```
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/local_table_join.out /__w/citus/citus/src/test/regress/results/local_table_join.out
--- /__w/citus/citus/src/test/regress/expected/local_table_join.out.modified 2024-11-05 09:53:50.423970699 +0000
+++ /__w/citus/citus/src/test/regress/results/local_table_join.out.modified 2024-11-05 09:53:50.463971296 +0000
@@ -1420,32 +1420,32 @@
) as subq_1
) as subq_2;
DEBUG: Wrapping relation "custom_pg_type" to a subquery
DEBUG: generating subplan 204_1 for subquery SELECT typdefault FROM local_table_join.custom_pg_type WHERE true
ERROR: direct joins between distributed and local tables are not supported
HINT: Use CTE's or subqueries to select from local tables and use them in joins
-- correlated sublinks are not yet supported because of #4470, unless we convert not-correlated table
SELECT COUNT(*) FROM distributed_table d1 JOIN postgres_table using(key)
WHERE d1.key IN (SELECT key FROM distributed_table WHERE d1.key = key and key = 5);
DEBUG: Wrapping relation "postgres_table" to a subquery
-DEBUG: generating subplan XXX_1 for subquery SELECT key FROM local_table_join.postgres_table WHERE true
+DEBUG: generating subplan 206_1 for subquery SELECT key FROM local_table_join.postgres_table WHERE (key OPERATOR(pg_catalog.=) 5)
```
Co-authored-by: Naisila Puka <37271756+naisila@users.noreply.github.com>
PostgreSQL 16 adds an extra condition (id IS NOT NULL) to the subquery.
This condition is likely used to ensure that no null values are
processed in the subquery. Instead of using the condition id IS NOT
NULL, PostgreSQL 17 generates the subplan with a trivial condition
(WHERE true), indicating that it does not need to explicitly check for
non-null values.
PostgreSQL 17 likely includes optimizations to handle null checks more
efficiently. The WHERE (id IS NOT NULL) condition that was present in
PostgreSQL 16 may now be considered redundant by the planner, as it is
implicitly handled by the query execution engine.
https://github.com/postgres/postgres/commit/b262ad44
```diff
SELECT
foo1.id
FROM
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo9,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo8,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo7,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo6,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo5,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo4,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo3,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo2,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo10,
(SELECT local.id, local.title FROM local, distributed WHERE local.id = distributed.id ) as foo1
WHERE
foo1.id = foo9.id AND
foo1.id = foo8.id AND
foo1.id = foo7.id AND
foo1.id = foo6.id AND
foo1.id = foo5.id AND
foo1.id = foo4.id AND
foo1.id = foo3.id AND
foo1.id = foo2.id AND
foo1.id = foo10.id AND
foo1.id = foo1.id
ORDER BY 1;
...
-DEBUG: generating subplan XXX_10 for subquery SELECT id FROM local_dist_join_mixed.local WHERE (id IS NOT NULL)
+DEBUG: generating subplan XXX_10 for subquery SELECT id FROM local_dist_join_mixed.local WHERE true
...
```
Stabilize the output ordering in regress test
isolation_progress_monitoring with an ORDER BY. The implementation of
get_progress() uses a tuplestore to hold the step and progress values,
and a tuplestore provides no guarantee on the ordering of the tuples, so
the ORDER BY ensures stable test output. Also make the output more
user-friendly by including the column names. This fixes occasional
failures seen in isolation_progress_monitoring.

- Adapted `pgmerge.sql` tests from PostgreSQL community's `merge.sql` to
Citus by converting tables into Citus local tables.
- Identified two new PostgreSQL 17 MERGE features (`RETURNING` support
and MERGE on updatable views) not yet supported by Citus.
- Implemented changes to detect unsupported features and raise clean
exceptions, ensuring pgmerge tests pass without diffs.
- Addressed breaking changes caused by `MERGE ... WHEN NOT MATCHED BY
SOURCE` restructuring, reducing diffs in pgmerge tests.
- Segregated unsupported test cases into `merge_unsupported.sql` to
maintain clarity and avoid large diffs in test files.
- Prepared the Citus MERGE planner to handle new PostgreSQL changes,
reducing remaining test discrepancies.
All merge tests now pass cleanly, with unsupported cases clearly
isolated.
Relevant PG commits:
c649fa24a
https://github.com/postgres/postgres/commit/c649fa24a
0294df2f1
https://github.com/postgres/postgres/commit/0294df2f1
---------
Co-authored-by: naisila <nicypp@gmail.com>
PG17 added support for identity columns in partitioned tables:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=699586315
A consequence is that a table with an identity column cannot be attached
as a partition. But Citus on Postgres 17 will generate identity columns
for the partitions if the parent table has one (or more) identity
columns when propagating distributed table DDL to worker nodes, as
happens in the `generated_identity` regress test in #7768:
```
CREATE TABLE partitioned_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c int
)
PARTITION BY RANGE (c);
CREATE TABLE partitioned_table_1_50 PARTITION OF partitioned_table FOR VALUES FROM (1) TO (50);
CREATE TABLE partitioned_table_50_500 PARTITION OF partitioned_table FOR VALUES FROM (50) TO (1000);
SELECT create_distributed_table('partitioned_table', 'a');
- create_distributed_table
----------------------------------------------------------------------
-
-(1 row)
-
+ERROR: table "partitioned_table_1_50" being attached contains an identity column "a"
+DETAIL: The new partition may not contain an identity column.
```
It is the Citus-generated ATTACH PARTITION statement that errors out,
because the Citus-generated CREATE TABLE for the partitions included
identity column definitions. The fix is straightforward: when
propagating the CREATE TABLE DDL for a partition of a table with an
identity column, don't include the identity column(s); they will be
inherited on attaching the partition. In Citus on Postgres 16 (or
earlier) partitions do not inherit identity; the partitions in the
example would not have any identity columns, so it was not an issue
previously.
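A sketch of the corrected worker-side propagation, using the example
tables above (the exact column constraints are an assumption):
```sql
-- The worker-side CREATE TABLE for the partition now omits the identity
-- clauses (the columns stay plain bigint)...
CREATE TABLE partitioned_table_1_50 (a bigint NOT NULL, b bigint NOT NULL, c int);
-- ...and the identity definitions are inherited from the parent when the
-- partition is attached, so ATTACH PARTITION no longer errors out.
ALTER TABLE partitioned_table ATTACH PARTITION partitioned_table_1_50
    FOR VALUES FROM (1) TO (50);
```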
Regress test `multi_explain` has two queries that have a different query
plan with PG17. Here is part of the plan diff for the query labelled
_Union and left join subquery pushdown_ in `multi_explain.sql` (for the
complete diff, search for `multi_explain`
[here](https://github.com/citusdata/citus/actions/runs/12158205599/attempts/1)):
```
-> Sort
Sort Key: ((users.composite_id).tenant_id), ((users.composite_id).user_id), subquery_2.hasdone, events.event_time
- -> Hash Left Join
- Hash Cond: (users.composite_id = subquery_2.composite_id)
- -> HashAggregate
- Group Key: ((users.composite_id).tenant_id), ((users.composite_id).user_id), users.composite_id, ('action=>1'::text), events.event_time
+ -> Nested Loop Left Join
+ Join Filter: (users.composite_id = subquery_2.composite_id)
+ -> Unique
+ -> Sort
+ Sort Key: ((users.composite_id).tenant_id), ((users.composite_id).user_id), users.composite_id, ('action=>1'::text), events.event_time
-> Append
```
The change is the same in both queries; a hash left join with subquery_1
on the outer and subquery_2 on the inner side of the join is now a
nested loop left join with subquery_1 on the outer and subquery_2 on the
inner; additionally, the chosen method of uniquifying the UNION in
subquery_1 has changed from hashed grouping to sort followed by unique,
as shown in the diff above.
The PG17 commit that caused this plan change is likely _[Fix MergeAppend
to more accurately compute the number of rows that need to be
sorted](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9d1a5354f)_
because it impacts the estimated rows counts of UNION paths. Comparing a
costed plan of the query between PG16 and PG17 I noticed that with PG16
the rows estimate for the UNION in subquery_1 is 4, whereas with PG17
the rows estimate is 2. A lower rows estimate in the outer side of the
join may result in nested loop looking cheaper than hash join for the
left outer join, hence the plan change in the two queries where there is
a UNION on the outer side of a left outer join.
The proposed fix achieves a consistent plan across all supported
postgres versions by temporarily disabling nested loop join and sort for
the two impacted queries; the postgres optimizer selects hash join for
the outer left join and hashed aggregation for the UNION operation. I
investigated tweaking the queries, but was not able to arrive at a
consistent plan, and I believe the SQL operator (e.g. join, group by,
union) implementations are orthogonal to the intent of the test, so this
should be a satisfactory solution, particularly as it avoids introducing
a second alternative output file for `multi_explain`.
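A sketch of the approach (illustrative; these are the standard Postgres
planner GUCs named above):
```sql
-- Pin the plan shape for the two affected queries: with nested loop and
-- sort disabled, the optimizer falls back to a hash left join and hashed
-- aggregation on all supported Postgres versions.
SET enable_nestloop TO off;
SET enable_sort TO off;
-- EXPLAIN ... the two impacted multi_explain queries ...
RESET enable_nestloop;
RESET enable_sort;
```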
This PR addresses regress tests impacted by the introduction of [the
MAINTAIN privilege in
PG17](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ecb0fd337).
The impacted tests include `generated_identity`,
`create_single_shard_table`, `grant_on_sequence_propagation`,
`grant_on_foreign_server_propagation`, `single_node_enterprise`,
`multi_multiuser_master_protocol`,
`multi_alter_table_row_level_security`, `shard_move_constraints` which
show the following error:
```
SELECT start_metadata_sync_to_node('localhost', :worker_2_port);
- start_metadata_sync_to_node
----------------------------------------------------------------------
-
-(1 row)
-
+ERROR: unrecognized aclright: 16384
```
and `multi_multiuser_master_protocol`, where the `pg_class.relacl`
column has 'm' for MAINTAIN if applicable:
```
relname | rolname | relacl
---------------------+-------------+------------------------------------------------------------
trivial_full_access | full_access |
- trivial_postgres | postgres | {postgres=arwdDxt/postgres,full_access=arwdDxt/postgres}
+ trivial_postgres | postgres | {postgres=arwdDxtm/postgres,full_access=arwdDxtm/postgres}
```
The PR updates function `convert_aclright_to_string()` in
citus_ruleutils.c to include a case for `ACL_MAINTAIN`. Per the comment
on `convert_aclright_to_string()` in citus_ruleutils.c, it is a copy of
`convert_aclright_to_string()` in Postgres (where it is in
`src/backend/utils/adt/acl.c`), so requires updating to be consistent
with Postgres. With this change Citus can recognize the MAINTAIN
privilege, and will not emit the `unrecognized aclright` error. The PR
also adds an alternative goldfile for `multi_multiuser_master_protocol`.
Note that `convert_aclright_to_string()` in Postgres includes access
types SET and ALTER SYSTEM on system parameters (aka GUCs), added by
[this PG16
commit](https://github.com/postgres/postgres/commit/a0ffa885e). If Citus
were to have a requirement to support granting SET and ALTER SYSTEM we
would need to update `convert_aclright_to_string()` in citus_ruleutils.c
with SET and ALTER SYSTEM.
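For context, an illustrative statement that produces the 'm' bit seen in
the relacl diff above (table and role names taken from that diff):
```sql
-- PG17: MAINTAIN covers VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED
-- VIEW, CLUSTER and LOCK TABLE; granting it adds 'm' to pg_class.relacl.
GRANT MAINTAIN ON TABLE trivial_postgres TO full_access;
```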
This fix ensures that the expected DEBUG error messages from the router
planner in `multi_router_planner`, `multi_router_planner_fast_path` and
`query_single_shard_table` are present with PG17.
In `query_single_shard_table` the diff:
```
SELECT COUNT(*) FROM citus_local_table t1
WHERE t1.b IN (
SELECT b+1 FROM nullkey_c1_t1 t2 WHERE t2.b = t1.a
);
-DEBUG: router planner does not support queries that reference non-colocated distributed tables
+DEBUG: Local tables cannot be used in distributed queries.
```
occurred because of [this PG17
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9f1337639)
which enables the optimizer to pull up a correlated ANY subquery to a
join. The fix inhibits subquery pull up by including a volatile function
in the predicate involving the ANY subquery, preserving the pre-PG17
optimizer treatment of the query.
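An illustrative sketch of the technique (the committed test change may
differ in detail):
```sql
-- The volatile random() call in the predicate containing the ANY sublink
-- prevents PG17 from pulling the subquery up to a join, so the pre-PG17
-- debug message is preserved.
SELECT COUNT(*) FROM citus_local_table t1
WHERE (t1.b + 0 * random()) IN (
    SELECT b + 1 FROM nullkey_c1_t1 t2 WHERE t2.b = t1.a
);
```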
In the case of `multi_router_planner` and
`multi_router_planner_fast_path` the diffs:
```
-- partition_column is null clause does not prune out any shards,
-- all shards remain after shard pruning, not router plannable
SELECT *
FROM articles_hash a
WHERE a.author_id is null;
-DEBUG: Router planner cannot handle multi-shard select queries
+DEBUG: Creating router plan
```
are because of [this PG17
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b262ad440),
which enables the optimizer to detect and remove redundant IS (NOT) NULL
expressions. The fix is to adjust the table definition so the column
used for distribution is not marked NOT NULL, thus preserving the
pre-PG17 query planning behavior.
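An illustrative sketch (the actual articles_hash definition in the tests
has different columns):
```sql
-- With author_id declared NOT NULL, PG17 (commit b262ad440) reduces
-- "author_id IS NULL" to constant false, prunes to a router plan, and
-- changes the debug output; leaving the column nullable preserves the
-- pre-PG17 multi-shard plan.
CREATE TABLE articles_hash (id bigint, author_id bigint, title varchar(20));
SELECT create_distributed_table('articles_hash', 'author_id');
```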
Finally, a rule is added to `normalize.sed` to ignore DEBUG logging in CREATE MATERIALIZED
VIEW AS statements introduced by [this PG17
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b4da732fd64);
_when creating materialized views, use REFRESH logic to load data_, a
consequence of which is that with `client_min_messages` at `DEBUG2`
Postgres emits extra detail for CREATE MATERIALIZED VIEW AS statements.
```
CREATE MATERIALIZED VIEW mv_articles_hash_empty AS
SELECT * FROM articles_hash WHERE author_id = 1;
DEBUG: Creating router plan
DEBUG: query has a single distribution column value: 1
+DEBUG: drop auto-cascades to type multi_router_planner.pg_temp_61391
+DEBUG: drop auto-cascades to type multi_router_planner.pg_temp_61391[]
```
The rule can be changed to a normalization, or possibly dropped, when 17 becomes the minimum supported version.
PG17 regress sanity (#7653) fix; address diffs in vanilla tests
`create_index` and `privileges`. There is a change from `permission
denied` to `must be owner of`, seen in create_index:
```
@@ -2970,21 +2970,21 @@
REINDEX TABLE pg_toast.pg_toast_1260;
ERROR: permission denied for table pg_toast_1260
REINDEX INDEX pg_toast.pg_toast_1260_index;
-ERROR: permission denied for index pg_toast_1260_index
+ERROR: must be owner of index pg_toast_1260_index
```
and privileges:
```
@@ -2945,41 +2945,43 @@
ERROR: permission denied for table maintain_test
REINDEX INDEX maintain_test_a_idx;
-ERROR: permission denied for index maintain_test_a_idx
+ERROR: must be owner of index maintain_test_a_idx
REINDEX SCHEMA reindex_test;
REINDEX INDEX maintain_test_a_idx;
+ERROR: must be owner of index maintain_test_a_idx
REINDEX SCHEMA reindex_test;
```
The fix updates function `RangeVarCallbackForReindexIndex()` in
`index.c` with changes made by the introduction of the [MAINTAIN
privilege in
PG17](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ecb0fd337)
to the function `RangeVarCallbackForReindexIndex()` in `indexcmds.c`.
The code is under a Postgres 17 version directive, which can be removed
when 17 becomes the oldest supported Postgres version.
This PR fixes diffs in the `columnar_chunk_filtering` and `columnar_paths`
tests.
In `columnar_chunk_filtering` an expression `(NOT (SubPlan 1))` changed
to `(NOT (ANY (a = (SubPlan 1).col1)))`. This is due to [a PG17
commit](https://github.com/postgres/postgres/commit/fd0398fc) that
improved how scalar subqueries (InitPlans) and ANY subqueries (SubPlans)
are EXPLAINed in expressions. The fix uses a helper function which
converts the PG17 format to the pre-PG17 format. It is done this way
because pre-PG17 EXPLAIN does not provide enough context to convert to
the PG17 format. The helper function can (and should) be retired when 17
becomes the minimum supported PG.
In `columnar_paths`, a merge join changed to a hash join. This is due to
[this PG17
commit](https://github.com/postgres/postgres/commit/f7816aec23),
which improved the PG optimizer's ability to estimate the size of a CTE
scan. The impacted query involves a CTE scan with a point predicate
`(a=123)` and before the change the CTE size was estimated to be 5000,
but with the change it is correctly (given the data in the table)
estimated to be 1, making hash join a more attractive join method. The
fix is to have an alternative goldfile for pre-PG17. I tried, but was
unable, to force a specific kind of join method using the GUCs
(`enable_nestloop`, `enable_hashjoin`, `enable_mergejoin`), but it was
not possible to obtain a consistent plan across all supported PG
versions (in some cases the join inputs switched sides).
There are two commits in this PR:
1) Remove domain_default column since it has been removed from PG17
Relevant PG commit:
78806a9509
78806a95095c4fb9230a441925244690d9c07d23
2) pg_stat_statements reset output diff fix
pg_stat_statements reset output changed in PG17; the fix idea comes from
the relevant PG commits:
6ab1dbd26b
6ab1dbd26bbf307055d805feaaca16dc3e750d36
Test `tableam` expects that this CREATE TABLE statement: `CREATE TABLE
test_partitioned(id int, p int, val int) PARTITION BY RANGE (p) USING
fake_am;`
will produce this error:
`specifying a table access method is not supported on a partitioned
table`
but as of [this PG
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a229)
it is possible to specify an access method on a partitioned table. This
fix moves the CREATE TABLE statement to pg17, and adds an additional
test to show that the parent's access method is inherited.
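A sketch of the added coverage (partition name and catalog query are
illustrative):
```sql
-- PG17 accepts an access method on a partitioned table...
CREATE TABLE test_partitioned (id int, p int, val int)
    PARTITION BY RANGE (p) USING fake_am;
-- ...and new partitions inherit the parent's access method.
CREATE TABLE test_partitioned_p1 PARTITION OF test_partitioned
    FOR VALUES FROM (1) TO (10);
SELECT a.amname FROM pg_class c JOIN pg_am a ON c.relam = a.oid
WHERE c.relname = 'test_partitioned_p1';  -- expect fake_am
```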
Disable DDL propagation for the vanilla test suite. This enables the
vanilla `database` test to pass, where previously it was correctly
returning `ERROR: unrecognized ALTER DATABASE option: tablespace`
because release-13.0 does not propagate this ALTER DATABASE variant.
We (Citus team) discussed cherry picking
[#7253](https://github.com/citusdata/citus/pull/7253) from main to
release-13.0 because it does propagate ALTER DATABASE tablespace option
(as well as a couple of others) but decided fixing the regress test was
not the proper context for that. The fix disables
`citus.enable_metadata_sync` when running vanilla, we discussed
disabling `citus.enable_create_database_propagation` but this is not in
release-13.0.
Preserve the test error message by adjusting the query so that PG17
cannot pull it up to a join. This is another instance of a subquery that
PG17 can pull up to a join (#7745); it should have been fixed in #7745,
but slipped by.
In PG17, auto-generated array types, multirange types, and relation
rowtypes are treated as dependent objects, which changes the output of
the print_extension_changes function.
Relevant PG commit:
e5bc9454e527b1cba97553531d8d4992892fdeef
e5bc9454e5
Here we create a table with only the basic extension types
in order to avoid printing extra ones for now.
This can be removed when we drop PG16 support.
https://github.com/citusdata/citus/actions/runs/11960253650/attempts/1#summary-33343972656
```diff
| table pg_dist_rebalance_strategy
+ | type citus.distribution_type[]
+ | type citus.pg_dist_object
+ | type pg_dist_shard
+ | type pg_dist_shard[]
+ | type pg_dist_shard_placement
+ | type pg_dist_shard_placement[]
+ | type pg_dist_transaction
+ | type pg_dist_transaction[]
| view citus_dist_stat_activity
| view pg_dist_shard_placement
```
This work was already done by @m3hm3t and approved as part of
https://github.com/citusdata/citus/pull/7722
I separated it in this PR since the previous one contained other changes
which we don't currently want to merge.
Relevant PG commit:
---------
Co-authored-by: Mehmet YILMAZ <mehmety87@gmail.com>
A recent Postgres commit (*) that refactored error messages is the cause
of the diffs in pg16 regress test when running Citus on Postgres 17. The
fix changes the pg16 goldfile and includes a normalization rule for the
error messages so pg16 will pass when running with version 16 of
Postgres.
(*)
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=498ee9ee2f
PG17 changed how scalar subquery outputs appear in EXPLAIN output (*).
This commit changes impacted regress goldfiles to the PG17 format, and
adds a helper function to convert pre-PG17 plans to the PG17 format. The
conversion is required when testing Citus on PG versions prior to 17. The
helper function can and should be removed when 17 becomes the minimum
supported version.
(*)
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0398fcb
Fix Test Failure in subquery_in_where, set_operations, dml_recursive in
PG17 #7741
The test failures are caused by [this commit in
PG17](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9f1337639),
which enables correlated subqueries to be pulled up to a join. Prior to
this, the correlated subquery was implemented as a subplan. In Citus, it
is not possible to push down a correlated subplan, but with a different
plan in PG17 the query can be executed, per the test diff from
`subquery_in_where`:
```
37,39c37,41
< DEBUG: generating subplan XXX_1 for CTE event_id: SELECT user_id AS events_user_id, "time" AS events_time, event_type FROM public.events_table
< DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ...
< ERROR: correlated subqueries are not supported when the FROM clause contains a CTE or subquery
---
> count
> ---------------------------------------------------------------------
> 0
> (1 row)
>
```
This is because with PG17 the `= ANY (subquery)` predicates in the
queries can be implemented as a join, instead of as a subplan filter on
a table scan.
For example, `SELECT * FROM test a WHERE x IN (SELECT x FROM test b
UNION SELECT y FROM test c WHERE a.x = c.x) ORDER BY 1,2` (from
set_operations) has this plan in pg17; note that the subquery is the
inner side of a nested loop join:
```
┌───────────────────────────────────────────────────┐
│ QUERY PLAN │
├───────────────────────────────────────────────────┤
│ Sort │
│ Sort Key: a.x, a.y │
│ -> Nested Loop │
│ -> Seq Scan on test a │
│ -> Subquery Scan on "ANY_subquery" │
│ Filter: (a.x = "ANY_subquery".x) │
│ -> HashAggregate │
│ Group Key: b.x │
│ -> Append │
│ -> Seq Scan on test b │
│ -> Seq Scan on test c │
│ Filter: (a.x = x) │
└───────────────────────────────────────────────────┘
```
and this plan in pg16 (and previous pg versions); the subquery is a
correlated subplan filter on a table scan:
```
┌───────────────────────────────────────────────┐
│ QUERY PLAN │
├───────────────────────────────────────────────┤
│ Sort │
│ Sort Key: a.x, a.y │
│ -> Seq Scan on test a │
│ Filter: (SubPlan 1) │
│ SubPlan 1 │
│ -> HashAggregate │
│ Group Key: b.x │
│ -> Append │
│ -> Seq Scan on test b │
│ -> Seq Scan on test c │
│ Filter: (a.x = x) │
└───────────────────────────────────────────────┘
```
The fix modifies the queries causing the test failures so that an ANY
subquery is not folded to a join, preserving the expected output of the
tests. A similar approach was taken for existing regress tests in the
[postgres
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9f1337639).
See the `join` regress test, for example.
We also add PG17-specific tests that exercise this Postgres improvement
together with Citus distributed planning.
Regression test cte_inline has the following diff:
```
DEBUG: CTE cte_1 is going to be inlined via distributed planning
DEBUG: CTE cte_1 is going to be inlined via distributed planning
DEBUG: Creating router plan
-DEBUG: query has a single distribution column value: 1
```
DEBUG message `query has a single distribution column value` does not
appear with PG17. This is because PG17 can recognize when a Result node
does not need to have an input node, so the predicate on the
distribution column is not present in the query plan. Comparing the
query plan obtained before PG17:
```
│ Result │
│ One-Time Filter: false │
│ -> GroupAggregate │
│ -> Seq Scan on public.test_table │
│ Filter: (test_table.key = 1) │
```
with the PG17 query plan:
```
┌──────────────────────────────────┐
│ QUERY PLAN │
├──────────────────────────────────┤
│ Result │
│ One-Time Filter: false │
└──────────────────────────────────┘
```
we see that the Result node in the PG16 plan has an Aggregate node, but
the Result node in the PG17 plan does not have any input node; PG17
recognizes it is not needed given a Filter that evaluates to False at
compile-time. The Result node is present in both plans because PG in
both versions can recognize when a combination of predicates equates to
false at compile time; this is because the successive predicates in
the test query (key=6, key=5, key=4, etc) become contradictory when the
CTEs are inlined. Here is an example query showing the effect of the CTE
inlining:
```
select count(*), key FROM test_table WHERE key = 1 AND key = 2 GROUP BY key;
```
In this case, the WHERE clause obviously evaluates to False. The PG16
query plan for this query is:
```
┌────────────────────────────────────┐
│ QUERY PLAN │
├────────────────────────────────────┤
│ GroupAggregate │
│ -> Result │
│ One-Time Filter: false │
│ -> Seq Scan on test_table │
│ Filter: (key = 1) │
└────────────────────────────────────┘
```
The PG17 query plan is:
```
┌────────────────────────────────┐
│ QUERY PLAN │
├────────────────────────────────┤
│ GroupAggregate │
│ -> Result │
│ One-Time Filter: false │
└────────────────────────────────┘
```
In both plans the PG optimizer is able to derive the predicate 1=2 from
the equivalence class { key, 1, 2 } and then constant fold this to
False. But, in the PG16 plan the Result node has an input node (a
sequential scan on test_table), while in the PG17 plan the Result node
does not have any input. This is because PG17 recognizes that when the
Result filter resolves to False at compile time it is not necessary to
set an input on the Result. I think this is a consequence of this PG17
commit:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b262ad440
which handles redundant IS [NOT] NULL predicates, but also refactored
the evaluation of predicates to true/false at compile time, enabling
optimizations such as those seen here.
Given the reason for the diff, the fix preserves the test output by
modifying the query so the predicates are not contradictory when the
CTEs are inlined.
PG17 adds a builtin C.UTF-8 locale option; we add it in the code to
avoid "unknown collation provider" errors in vanilla tests.
Relevant PG commit:
f69319f2f1
f69319f2f1fb16eda4b535bcccec90dff3a6795e
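An illustrative example of the PG17 syntax the new code path handles
(the collation name is an example):
```sql
-- PG17's builtin collation provider; previously Citus raised
-- "unknown collation provider" when propagating such a collation.
CREATE COLLATION c_utf8 (provider = builtin, locale = 'C.UTF-8');
```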
Also in PG17, colliculocale and daticulocale were renamed to colllocale
and datlocale.
Here we fix the following tests to avoid alternative output:
pg15, pg16, multi_mx_create_table, multi_schema_support.
Relevant PG commit:
f696c0cd5f
f696c0cd5f299f1b51e214efc55a22a782cc175d
PG 17 added support for DEFAULT in ALTER TABLE .. SET ACCESS METHOD
Relevant PG commit:
d61a6cad6418f643a5773352038d0dfe5d3535b8
d61a6cad64
In that case, `AlterTableCmd->name` is null.
Add a null check here to avoid a crash.
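An illustrative statement that reaches the new null check (the table
name is an example):
```sql
-- PG17: DEFAULT resets the access method; it arrives in the parse tree
-- with AlterTableCmd->name set to NULL, which previously crashed Citus.
CREATE TABLE am_parent (a int) PARTITION BY RANGE (a);
ALTER TABLE am_parent SET ACCESS METHOD DEFAULT;
```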
In PG17, the outer loop in `acquire_sample_rows()` changed
from
`while (BlockSampler_HasMore(&bs))`
to
`while (table_scan_analyze_next_block(scan, stream))`
Relevant PG commit:
041b96802efa33d2bc9456f2ad946976b92b5ae1
041b96802e
It is expected that the `scan_analyze_next_block` function will
check if there are any blocks left. So we add that check in
`columnar_scan_analyze_next_block`
Without this fix, we get an infinite loop, causing a timeout.
Specifically, in our test schedules:
the `multi` schedule got stuck at the `drop_column_partitioned_table` test,
the `multi-mx` schedule got stuck at the `start_stop_metadata_sync` test, and
the `columnar` schedule got stuck at the `columnar_create` test.
Changed `attstattarget` in `pg_attribute` to use `NullableDatum`,
allowing null representation for default statistics target in PostgreSQL
17.
Relevant PG commit:
6a004f1be87d34cfe51acf2fe2552d2b08a79273
6a004f1be8
```diff
-- verify statistics is set
SELECT c.relname, a.attstattarget
FROM pg_attribute a
JOIN pg_class c ON a.attrelid = c.oid AND c.relname LIKE 'test\_idx%'
ORDER BY c.relname, a.attnum;
relname | attstattarget
-----------+---------------
test_idx | 4646
- test_idx2 | -1
+ test_idx2 |
test_idx2 | 10000
test_idx2 | 3737
(4 rows)
```
Changed stxstattarget in pg_statistic_ext to use nullable
representation, removing explicit -1 for default statistics target in
PostgreSQL 17.
Relevant PG commit:
012460ee93c304fbc7220e5b55d9d0577fc766ab
012460ee93
```diff
SELECT stxstattarget, stxrelid::regclass
FROM pg_statistic_ext
WHERE stxnamespace IN (
SELECT oid
FROM pg_namespace
WHERE nspname IN ('statistics''TestTarget')
)
AND stxname SIMILAR TO '%\_\d+'
ORDER BY stxstattarget, stxrelid::regclass ASC;
stxstattarget | stxrelid
---------------+-----------------------------------
- -1 | "statistics'TestTarget".t1_980000
- -1 | "statistics'TestTarget".t1_980002
...
+ | "statistics'TestTarget".t1_980000
+ | "statistics'TestTarget".t1_980002
...
```
PG17 compatibility - Part 2
https://github.com/citusdata/citus/pull/7699 was the first PG17
compatibility PR merged to main branch, which provided ONLY successful
Citus compilation with PG17.0.
This PR, consider it Part 2, provides ruleutils changes for PG17.
Ruleutils changes are the first thing we should merge after a successful
build; they are the core of the deparsing logic in Citus.
# Question: How do we add ruleutils changes?
- We add a new ruleutils file specific to PG17.
- We keep track of the changes in Postgres's ruleutils file from here
https://github.com/postgres/postgres/commits/REL_17_0/src/backend/utils/adt/ruleutils.c
- For each commit in that history that belongs only to 17.0, we add the
relevant changes to the static functions in our ruleutils file for PG17.
It's like manually copying the commits.
# Check the PR's commits for detailed steps
https://github.com/citusdata/citus/pull/7725/commits
This PR provides successful compilation against PG17.0.
- Remove ExecFreeExprContext call
Relevant PG commit
d060e921ea5aa47b6265174c32e1128cebdbc3df
d060e921ea
- PG17 uses streaming IO in analyze, fix scan_analyze_next_block function
Relevant PG commit
041b96802efa33d2bc9456f2ad946976b92b5ae1
041b96802e
- Define ObjectClass for PG17+ only since it's removed
Relevant PG commit:
89e5ef7e21812916c9cf9fcf56e45f0f74034656
89e5ef7e21
- Remove ReorderBufferTupleBuf structure.
Relevant PG commit:
08e6344fd6423210b339e92c069bb979ba4e7cd6
08e6344fd6
- Define colliculocale and daticulocale since they have been renamed
Relevant PG commit:
f696c0cd5f299f1b51e214efc55a22a782cc175d
f696c0cd5f
- makeStringConst defined in PG17
Relevant PG commit:
de3600452b61d1bc3967e9e37e86db8956c8f577
de3600452b
- RangeVarCallbackOwnsTable was replaced by RangeVarCallbackMaintainsTable
Relevant PG commit:
ecb0fd33720fab91df1207e85704f382f55e1eb7
ecb0fd3372
- attstattarget is nullable, define pg compatible functions for it
Relevant PG commit:
4f622503d6de975ac87448aea5cea7de4bc140d5
4f622503d6
- stxstattarget is nullable in PG17, write compat functions for it
Relevant PG commit:
012460ee93c304fbc7220e5b55d9d0577fc766ab
012460ee93
- Use ResourceOwner to track WaitEventSet in PG17
Relevant PG commit:
50c67c2019ab9ade8aa8768bfe604cd802fe8591
50c67c2019
- getIdentitySequence now uses Relation instead of relation_id
Relevant PG commit:
509199587df73f06eda898ae13284292f4ae573a
509199587d
- Remove no-op tuplestore_donestoring function
Relevant PG commit:
75680c3d805e2323cd437ac567f0677fdfc7b680
75680c3d80
- MergeAction can have 3 merge kinds (now enum) in PG17, write compat
Relevant PG commit:
0294df2f1f842dfb0eed79007b21016f486a3c6c
0294df2f1f
- EXPLAIN (MEMORY) is added, make changes to ExplainOnePlan
Relevant PG commit:
5de890e3610d5a12cdaea36413d967cf5c544e20
5de890e361
- LIMIT_OPTION_DEFAULT has been removed as it's useless, use LIMIT_OPTION_COUNT
Relevant PG commit:
a6be0600ac3b71dda8277ab0fcbe59ee101ac1ce
a6be0600ac
- write compat for create_foreignscan_path because of more arguments in PG17
Relevant PG commit:
9e9931d2bf40e2fea447d779c2e133c2c1256ef3
9e9931d2bf
- pgprocno and lxid have been combined into a struct in PGPROC
Relevant PG commits:
28f3915b73f75bd1b50ba070f56b34241fe53fd1
28f3915b73
ab355e3a88de745607f6dd4c21f0119b5c68f2ad
ab355e3a88
024c521117579a6d356050ad3d78fdc95e44eefa
024c521117
- Simplify CitusNewNode (#7434)
postgres refactored newNode() in PG 17; the main point for doing this is
that the original trick is no longer necessary for modern compilers[1].
This does the same for Citus.
This should have no backward compatibility issues since it just replaces
palloc0fast with palloc0.
This is good for forward compatibility since palloc0fast no longer
exists in PG 17.
[1]
https://www.postgresql.org/message-id/b51f1fa7-7e6a-4ecc-936d-90a8a1659e7c@iki.fi
(cherry picked from commit 4b295cc)
This is prep work for successful compilation with PG17
PG17 added foreach_ptr, foreach_int and foreach_oid macros.
Relevant PG commit
14dd0f27d7cd56ffae9ecdbe324965073d01a9ff
14dd0f27d7
We already have these macros, but they are different from the
PG17 ones because our macros take a DECLARED variable, whereas
the PG17 macros declare a locally-scoped loop variable themselves.
Hence I am renaming our macros to foreach_declared_
I am separating this into its own PR since it touches many files. The
main compilation PR is https://github.com/citusdata/citus/pull/7699
In the function TaskConcurrentCancelCheck() the pointer "task" was
used after being checked against NULL, which can lead to a null-pointer
dereference.
To avoid the problem, we added separate handling of the case when the
pointer is null, interrupting execution.
Fixes: #7693.
Fixes: 1f8675da4382f6e("nonblocking concurrent task execution via
background workers")
Signed-off-by: Maksim Korotkov <m.korotkov@postgrespro.ru>
Fixes #6795
The `worker_copy_table_to_node` function is not supposed to be called
for Citus tables. When this function was initially introduced in #6098,
it had the respective check. But the check was omitted, since
`worker_copy_table_to_node` called for a Citus table finishes with an
error anyway:
```
ERROR: cannot execute a distributed query from a query on a shard
DETAIL: Executing a distributed query in a function call that may be pushed to a remote node can lead to incorrect results.
```
It turns out that in some cases this error does not occur; see #6795.
I suggest restoring that check.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
The test added in #7604 doesn't reach the `HasRangeTableRef` function
and thus doesn't test what it should.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Bump PG versions to the latest minors 14.15, 15.10, 16.6
There is a libpq symlink issue when the images are built remotely
https://github.com/citusdata/citus/actions/runs/12583502447/job/35071296238
Hence, we use the commit sha of a local build of the images, pushed.
This is temporary, until we find the underlying cause of the symlink
issue.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
We thought we provided support for this in
b8c493f2c4
However, the use of parameters in SQL is not supported in Citus. Since
generic plan queries use parameters, we can't support them for now.
Relevant PG16 commit: https://github.com/postgres/postgres/commit/3c05284
Fixes #7813 with a proper error message.
DESCRIPTION: Fixes a crash that happens because of unsafe catalog access
when re-assigning the global pid after application_name changes.
When application_name changes, we don't actually need to
try re-assigning the global pid for external client backends because
application_name doesn't affect the global pid for such backends. Plus,
trying to re-assign the global pid for external client backends would
unnecessarily cause a catalog access when the cached local
node id is invalidated. However, accessing the catalog tables is
dangerous in certain situations, like when we're not in a transaction
block. And for the other types of backends, i.e., the Citus internal
backends, we need to re-assign the global pid when the application_name
changes because for such backends we simply extract the global pid
inherited from the originating backend from the application_name (which
is specified by the originating backend when opening that connection),
and this doesn't require catalog access.
This PR is a proposed fix for issue
[7705](https://github.com/citusdata/citus/issues/7705). The following is
the background and rationale for the fix (please refer to
[7705](https://github.com/citusdata/citus/issues/7705) for context);
The `varnullingrels` field was introduced to the Var node struct
definition in Postgres 16. Its purpose is to associate a variable with
the set of outer join relations that can cause the variable to be NULL.
The `varnullingrels` value for the variable
`"gianluca_camp_test"."start_timestamp"` in the problem query is 3,
because the variable "gianluca_camp_test"."start_timestamp" is coming
from the inner (nullable) side of an outer join and 3 is the RT index
(aka relid) of that outer join. The problem occurs when the Postgres
planner attempts to plan the combine query. The format of a combine
query is:
```
SELECT <targets>
FROM pg_catalog.citus_extradata_container();
```
There is only one relation in a combine query, so no outer joins are
present, but the non-empty `varnullingrels` field causes the Postgres
planner to access structures for a non-existent relation. The source of
the problem is that, when creating the target list for the combine
query, function MasterAggregateMutator() uses copyObject() to construct
a Var node before setting the master table ID, and this copies over the
non-empty varnullingrels field in the case of the
`"gianluca_camp_test"."start_timestamp"` var. The proposed solution is
to have MasterAggregateMutator() use makeVar() instead of copyObject(),
and only set the fields that make sense for the combine query; var type,
collation and type modifier. The `varnullingrels` field can be left
empty because there is only one relation in the combine query.
A new regress test issue_7705.sql is added to exercise the fix. The
issue is not specific to window functions, any target expression that
cannot be pushed down and contains at least one column from the inner
side of a left outer join (so has a non-empty varnullingrels field) can
cause the same issue.
More about Citus combine queries
[here](https://github.com/citusdata/citus/tree/main/src/backend/distributed#combine-query-planner).
More about Postgres varnullingrels
[here](https://github.com/postgres/postgres/blob/master/src/backend/optimizer/README).
In function MasterAggregateMutator(), when the original Node is a Var node use makeVar() instead
of copyObject() when constructing the Var node for the target list of the combine query.
The varnullingrels field of the original Var node is ignored because it is not relevant for the
combine query; copying this caused the problem in issue 7705, where a coordinator query had
a Var with a reference to a non-existent join relation.
Very small PR, no changes to behaviour. Just a typo fix :-)
Under
`src/backend/distributed/sql/udfs/citus_finalize_upgrade_to_citus11/`
the SQL has a typo "runnnig", which will be displayed to the user if
`citus_check_cluster_node_health()` fails when calling
`citus_finish_citus_upgrade();`
Co-authored-by: eaydingol <60466783+eaydingol@users.noreply.github.com>
When multiple sessions concurrently attempt to add the same coordinator
node using `citus_set_coordinator_host`, there is a potential race
condition. Both sessions may pass the initial metadata check
(`isCoordinatorInMetadata`), but only one will succeed in adding the
node. The other session will fail with an assertion error
(`Assert(!nodeAlreadyExists)`), causing the server to crash. Even though
the `AddNodeMetadata` function takes an exclusive lock, it appears that
the lock is not preventing the race condition before the initial
metadata check.
- **Issue**: The current logic allows concurrent sessions to pass the
check for existing coordinators, leading to an attempt to insert
duplicate nodes, which triggers the assertion failure.
- **Impact**: This race condition leads to crashes during operations
that involve concurrent coordinator additions, as seen in
https://github.com/citusdata/citus/issues/7646.
**Test Plan:**
- Isolation Test Limitation: An isolation test was added to simulate
concurrent additions of the same coordinator node, but due to the
behavior of PostgreSQL locking mechanisms, the test does not trigger the
edge case. The lock applied within the function serializes the
operations, preventing the race condition from occurring in the
isolation test environment.
While the edge case is difficult to reproduce in an isolation test, the
fix addresses the core issue by ensuring concurrency control through
proper locking.
- Existing Tests: All existing tests related to node metadata and
coordinator management have been run to ensure that no regressions were
introduced.
**After the Fix:**
- Concurrent attempts to add the same coordinator node will be
serialized. One session will succeed in adding the node, while the
others will skip the operation without crashing the server.
Co-authored-by: Mehmet YILMAZ <mehmet.yilmaz@microsoft.com>
**Description:**
This PR adds a section to CONTRIBUTING.md that explains how to set up
debugging in the devcontainer using VS Code.
**Changes:**
- **New Debugging Section**: Clear instructions on starting the
debugger, selecting the appropriate PostgreSQL process, and setting
breakpoints for easier troubleshooting.
**Purpose:**
- **Improved Contributor Workflow**: Enables contributors to debug the
Citus extension within the devcontainer, enhancing productivity and
making it easier to resolve issues.
---------
Co-authored-by: Mehmet YILMAZ <mehmet.yilmaz@microsoft.com>
DESCRIPTION: Add a check to see if the given limit is null.
Fixes a bug by checking if the limit given in the query is null when the
actual limit is computed with respect to the given offset.
Prior to this change, null was interpreted as 0 during the limit
calculation when both limit and offset were given.
Fixes #7663
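An illustrative query (the table name is an example):
```sql
-- LIMIT NULL means "no limit"; before the fix, combining a null limit
-- with an offset made Citus compute the effective limit as 0, returning
-- no rows instead of every row after the first two.
SELECT * FROM dist_table ORDER BY id LIMIT NULL OFFSET 2;
```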
Removes el/7 and ol/7 as runners and updates the checkout action to v4
We use EL/7 and OL/7 runners to test packaging for these distributions.
However, for the past two weeks, we've encountered errors during the
checkout step in the pipelines. The error message is as follows:
```
/__e/node20/bin/node: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /__e/node20/bin/node)
```
The glibc version within the EL/7 and OL/7 Docker images is 2.17, and we
cannot upgrade it. Therefore, we need to remove these images from the
packaging test pipelines. Consequently, we will no longer verify if the
code builds for EL/7 and OL/7.
However, we are not using these packaging images as runners within the
packaging infrastructure, so we can continue to use these images for
packaging.
Additional info: I learned that the Marlin team fully dropped EL/7
support, so we will drop it in future releases as well.
We move the CI images to the github container registry.
Given we mostly (if not solely) run these containers on github actions
infra it makes sense to have them hosted closer to where they are
needed.
Image changes: https://github.com/citusdata/the-process/pull/157
The sections about the rebalancer algorithm and the background tasks were
empty.
---------
Co-authored-by: Marco Slot <marco.slot@gmail.com>
Co-authored-by: Steven Sheehy <17552371+steven-sheehy@users.noreply.github.com>
Related to issues #7619 and #7620.
The MERGE command fails when the source query is single-sharded, source
and target are co-located, and the INSERT is not using the distribution
key of the source.
Example
```
CREATE TABLE source (id integer);
CREATE TABLE target (id integer );
-- let's distribute both table on id field
SELECT create_distributed_table('source', 'id');
SELECT create_distributed_table('target', 'id');
MERGE INTO target t
USING ( SELECT 1 AS somekey
FROM source
WHERE source.id = 1) s
ON t.id = s.somekey
WHEN NOT MATCHED
THEN INSERT (id)
VALUES (s.somekey)
ERROR: MERGE INSERT must use the source table distribution column value
HINT: MERGE INSERT must use the source table distribution column value
```
Author's opinion: if the join is not between the source and target
distribution columns, we should not force the user to use the source
distribution column when inserting a value for the target distribution
column.
Fix: if the user is not using the source's distribution key for the
insertion, don't push the query down to the workers, and don't force the
user to use the source distribution column if it is not part of the
join.
This reverts commit fa4fc0b372.
Co-authored-by: paragjain <paragjain@microsoft.com>
Because we want to track PR numbers and to make backporting easy we
(pretty much always) use squash-merges when merging to master. We
accidentally used a rebase merge for PR #7620. This reverts those
changes so we can redo the merge using squash merge.
This reverts all commits from eedb607c to 9e71750fc.
For some reason using localhost in our hba file doesn't have the
intended effect anymore in our Github Actions runners. Probably because
of some networking change (IPv6 maybe) or some change in the
`/etc/hosts` file.
Replacing localhost with the equivalent loopback IPv4 and IPv6 addresses
resolved this issue.
Updates the checkout plugin for GitHub Actions to v4. Cannot update the
version for check-sql-snapshots since the new plugin causes the below
error in the Docker image this step is using. Please refer to:
https://github.com/citusdata/citus/actions/runs/9286197994/job/25552373953
Error:
```
/__e/node20/bin/node: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by /__e/node20/bin/node)
```
DESCRIPTION: Fix performance issue when using "\d tablename" on a server
with many tables
We introduce a filter to every query on pg_class to automatically remove
shards. This is useful to make sure \d and PgAdmin are not cluttered
with shards. However, the way we were introducing this filter was using
`securityQuals` which can have negative impact on query performance.
On clusters with 100k+ tables this could cause a simple "\d tablename"
command to take multiple seconds, because a skipped optimization by
Postgres causes a full table scan. This changes the code to introduce
this filter in the regular `quals` list instead of in `securityQuals`.
Which causes Postgres to use the intended optimization again.
For reference, this was initially reported as a Postgres issue by me:
https://www.postgresql.org/message-id/flat/4189982.1712785863%40sss.pgh.pa.us#b87421293b362d581ea8677e3bfea920
Variables being modified in the PG_TRY block and read in the PG_CATCH
block should be qualified with volatile.
The variable waitEventSet is modified in the PG_TRY block (line 1085)
and read in the PG_CATCH block (line 1095).
The variable relation is modified in the PG_TRY block (line 500) and
read in the PG_CATCH block (line 515).
Besides, the variable objectAddress doesn't need the volatile qualifier.
Ref: C99 7.13.2.1[^1],
> All accessible objects have values, and all other components of the
abstract machine have state, as of the time the longjmp function was
called, except that the values of objects of automatic storage duration
that are local to the function containing the invocation of the
corresponding setjmp macro that do not have volatile-qualified type and
have been changed between the setjmp invocation and longjmp call are
indeterminate.
[^1]: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
DESCRIPTION: Correctly mark some variables as volatile
---------
Co-authored-by: Hong Yi <zouzou0208@gmail.com>
Fix check-arbitrary-configs tests failure with current REL_16_STABLE.
This is the same problem as described in #7573. I missed pg_regress call
in _run_pg_regress() in that PR.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
DESCRIPTION: Fix performance issue in GetForeignKeyOids on systems with
many constraints
GetForeignKeyOids was showing up in CPU profiles when distributing
schemas on systems with 100k+ constraints. The reason was that this
function was doing a sequence scan of pg_constraint to get the foreign
keys that referenced the requested table.
This fixes that by finding the constraints referencing the table through
pg_depend instead of pg_constraint. We're doing this indirection,
because pg_constraint doesn't have an index that we can use, but
pg_depend does.
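A conceptual SQL equivalent of the new lookup (the actual code uses the
C catalog APIs; the table name is an example):
```sql
-- Find foreign keys referencing mytab via pg_depend, whose
-- (refclassid, refobjid, refobjsubid) index makes this cheap, instead of
-- sequentially scanning all of pg_constraint.
SELECT c.conname
FROM pg_depend d
JOIN pg_constraint c ON c.oid = d.objid
WHERE d.classid = 'pg_constraint'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.refobjid = 'mytab'::regclass
  AND c.contype = 'f';
```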
DESCRIPTION: Fix PG upgrades when invalid rebalance strategies exist
Without this change an upgrade of a cluster with an invalid rebalance
strategy would fail with an error like this:
```
cache lookup failed for shard_cost_function with oid 6077337
CONTEXT: SQL statement "SELECT citus_validate_rebalance_strategy_functions(
NEW.shard_cost_function,
NEW.node_capacity_function,
NEW.shard_allowed_on_node_function)"
PL/pgSQL function citus_internal.pg_dist_rebalance_strategy_trigger_func() line 5 at PERFORM
SQL statement "INSERT INTO pg_catalog.pg_dist_rebalance_strategy SELECT
name,
default_strategy,
shard_cost_function::regprocedure::regproc,
node_capacity_function::regprocedure::regproc,
shard_allowed_on_node_function::regprocedure::regproc,
default_threshold,
minimum_threshold,
improvement_threshold
FROM public.pg_dist_rebalance_strategy"
PL/pgSQL function citus_finish_pg_upgrade() line 115 at SQL statement
```
This fixes that by disabling the trigger and simply re-inserting the
invalid rebalance strategy without checking. We could also silently
remove it, but this seems nicer.
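A simplified sketch of the approach (the real code also converts the
function columns back from regprocedure, as in the statement quoted
above):
```sql
-- Restore rows without re-validating them through the trigger.
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy DISABLE TRIGGER ALL;
INSERT INTO pg_catalog.pg_dist_rebalance_strategy
    SELECT * FROM public.pg_dist_rebalance_strategy;
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy ENABLE TRIGGER ALL;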
DESCRIPTION: Fix performance issue when distributing a table that
depends on an extension
When the database contains many objects this function would show up in
profiles because it was doing a sequence scan on pg_depend. And with
many objects pg_depend can get very large.
This starts using an index scan to only look for rows containing FDWs,
of which there are expected to be very few (often even zero).
DESCRIPTION: Fix performance issue when creating distributed tables if
many already exist
This builds on the work to speed up EnsureSequenceTypeSupported, and now
does something similar for SequenceUsedInDistributedTable.
SequenceUsedInDistributedTable had a similar O(number of citus tables)
operation. This fixes that and speeds up creation of distributed tables
significantly when many distributed tables already exist.
Fixes #7022
DESCRIPTION: Fix performance issue when creating distributed tables and many already exist
EnsureSequenceTypeSupported was doing an O(number of distributed tables)
operation. This can become very slow with lots of Citus tables, which
now happens much more frequently in practice due to schema based sharding.
Partially addresses #7022
When a "host" is given in citus.node_conninfo, directly use it as the
"host" parameter for the connections between nodes, and use the
"hostname" provided in pg_dist_node / pg_dist_poolinfo as "hostaddr" to
avoid host name lookup.
This is to avoid requiring DNS resolution (and/or setting up DNS names
for each host in the cluster). This already works currently when using
IPs in the hostname. The only use of setting host is that you can then
use sslmode=verify-full and it will validate that the hostname matches
the certificate provided by the node you're connecting to.
It would be more flexible to make this a per-node setting, but that
requires SQL changes. And we'd like to backport this change, and
backporting such a sql change would be quite hard while backporting this
change would be very easy. And in many setups, a different hostname for
TLS validation is actually not needed. The reason for that is
query-from-any-node: with query-from-any-node, all nodes usually have a
certificate that is valid for the same "cluster hostname", either using
a wildcard cert or a Subject Alternative Name (SAN). Because if you load
balance across nodes you don't know which node you're connecting to, but
you still want TLS validation to do its job. So with this change you
can use this same "cluster hostname" for TLS validation within the
cluster. Obviously this means you don't validate that you're connecting
to a particular node, just that you're connecting to one of the nodes in
the cluster, but that should be fine from a security perspective (in
most cases).
Note to self: This change requires updating
https://docs.citusdata.com/en/latest/develop/api_guc.html#citus-node-conninfo-text.
DESCRIPTION: Allows overwriting host name for all inter-node connections
by supporting "host" parameter in citus.node_conninfo
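An illustrative configuration (the hostname is an example):
```sql
-- Every inter-node connection now validates the shared cluster
-- certificate via sslmode=verify-full, while the per-node hostname from
-- pg_dist_node is passed as hostaddr (no DNS lookup).
ALTER SYSTEM SET citus.node_conninfo TO 'sslmode=verify-full host=cluster.example.com';
SELECT pg_reload_conf();
```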
In PostgreSQL 16 a new option, expecteddir, was introduced to pg_regress.
Together with the fix in
[196eeb6b](https://github.com/postgres/postgres/commit/196eeb6b), it
causes a check-vanilla failure if expecteddir is not specified.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
DESCRIPTION: Fixes a crash caused by some forms of ALTER TABLE ADD COLUMN
statements. When adding multiple columns, if one of the ADD COLUMN
statements contains a FOREIGN KEY constraint omitting the referenced
columns in the statement, a SEGFAULT occurs.
For instance, the following statement results in a crash:
```
ALTER TABLE lt ADD COLUMN new_col1 bool,
ADD COLUMN new_col2 int references rt;
```
Fixes #7520.
Fixes https://github.com/citusdata/citus/issues/7536.
Note to reviewer:
Before this commit, the following results in an assertion failure when
executed locally and this won't be the case anymore:
```console
make -C src/test/regress/ check-citus-upgrade-local citus-old-version=v10.2.0
```
Note that this doesn't happen on CI as we don't enable assertions there.
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
RunPreprocessNonMainDBCommand and RunPostprocessNonMainDBCommand are
the entrypoints for this module. These functions are called from
utility_hook.c to support some of the node-wide object management
commands from non-main databases.
To add support for a new command type, one needs to define a new
NonMainDbDistributeObjectOps object and add it to
GetNonMainDbDistributeObjectOps.
This PR changes the order in which the locks are acquired (for the
target and reference tables), when a modify request is initiated from a
worker node that is not the "FirstWorkerNode".
To prevent concurrent writes, locks are acquired on the first worker
node for the replicated tables. When the update statement originates
from the first worker node, it acquires the lock on the reference
table(s) first, followed by the target table(s). However, if the update
statement is initiated in another worker node, the lock requests are
sent to the first worker in a different order. This PR unifies the
modification order on the first worker node. With the third commit,
independent of the node that received the request, the locks are
acquired for the modified table and then the reference tables on the
first node.
The first commit shows a sample output for the test prior to the fix.
Fixes #7477
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
When using a CASE WHEN expression in the body
of a function that is used in a DO block, a segmentation
fault occurred. This fixes that.
Fixes #7381
---------
Co-authored-by: Konstantin Morozov <vzbdryn@yahoo.com>
This fixes #7551, reported by Egor Chindyaskin.
Function activate_node_snapshot() is not meant to be called on a cluster
without worker nodes. This commit adds an ERROR report for this case to
prevent a server crash.
DESCRIPTION: Adds support for distributed `ALTER/DROP ROLE` commands
from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
In the preprocess phase, we save the original database name, replace
the dbname field of CreatedbStmt with a temporary name (to let Postgres
create the database with the temporary name locally), and then
we insert a cleanup record for the temporary database name on all
nodes **(\*\*)**.
In the postprocess phase, we first rename the temporary database
back to its original name on the local node and then return a list of
distributed DDL jobs i) to create the database with the temporary
name and then ii) to rename it back to its original name on the other
nodes. That way, if CREATE DATABASE fails on any of the nodes, the
temporary database will be cleaned up by the cleanup records that
we inserted in the preprocess phase, so we won't
leak a database with the name the user intended to use.
Solves the problem documented in
https://github.com/citusdata/citus/issues/7369
for CREATE DATABASE commands.
**(\*\*):** To ensure that we insert cleanup records on all nodes,
with this PR we also start requiring having the coordinator in the
metadata because otherwise we would skip inserting a cleanup record
for the coordinator.
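A conceptual sketch of the two distributed DDL jobs returned in the
postprocess phase (the temporary name is illustrative):
```sql
-- i) create the database under the temporary name on the other nodes
CREATE DATABASE citus_temp_db_123;
-- ii) then rename it to the name the user asked for
ALTER DATABASE citus_temp_db_123 RENAME TO mydb;
```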
Add configuration for coredumps and document how to make sure they are
enabled when developing in a devcontainer.
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
When adding CREATE/DROP DATABASE propagation in #7240, we luckily
added the EnsureSupportedCreateDatabaseCommand() check into the
deparser too, just to be on the safe side. That way, today CREATE
DATABASE commands from non-main dbs don't silently allow unsupported
options.
I wasn't aware of this when merging #7439 and hence wanted to add
a test so that we don't mistakenly remove that check from the deparser
in the future.
Fix for #7519.
In the metadata sync phase, GRANT statements for roles are fetched from
catalog tables and propagated.
However, in some cases GRANT .. WITH ADMIN OPTION clauses execute after
the GRANTED BY statements, which causes the error in #7519.
We will fix this issue with the grantor propagation task in the project.
This fixes #7454: master_disable_node() has only two arguments, but
calls citus_disable_node(), which tries to read three arguments.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
DESCRIPTION: Adds support for distributed `CREATE/DROP DATABASE`
commands from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for distributed `GRANT .. ON DATABASE TO USER`
commands from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Rename InsertCleanupRecordInCurrentTransaction ->
InsertCleanupOnSuccessRecordInCurrentTransaction and hardcode policy
type as CLEANUP_DEFERRED_ON_SUCCESS.
Rename InsertCleanupRecordInSubtransaction ->
InsertCleanupRecordOutsideTransaction.
DESCRIPTION: Adds support for distributed role-membership management
commands from the databases where Citus is not installed (`GRANT <role>
TO <role>`)
This PR also refactors the code path that allows executing some of the
node-wide commands so that we send the deparsed query string to other
nodes instead of the `queryString` passed into the utility hook.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Fixes incorrect propagation of `GRANTED BY` and
`CASCADE/RESTRICT` clauses for `REVOKE` statements
There are two issues fixed in this PR:
1. The GRANTED BY clause now appears for REVOKE statements as well.
2. The CASCADE/RESTRICT clause now appears after GRANTED BY.
Since GRANTED BY clauses did not appear in deparsed statements before,
this bug wasn't visible until now. However, after activating the GRANTED
BY clause for REVOKE, the ordering problem arose, and it was fixed for
CASCADE/RESTRICT as well.
In summary, this PR makes GRANTED BY clauses propagate properly, with
the correct order of clauses.
We can verify both fixes with just a single statement:
REVOKE dist_role_3 from non_dist_role_3 granted by test_admin_role
cascade;
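For reference, a sketch of the clause ordering (per the PostgreSQL REVOKE grammar, `GRANTED BY` precedes `CASCADE`/`RESTRICT`):
```sql
-- invalid order, which the deparser previously could emit:
--   REVOKE dist_role_3 FROM non_dist_role_3 CASCADE GRANTED BY test_admin_role;
-- valid order, as deparsed after this fix:
REVOKE dist_role_3 FROM non_dist_role_3 GRANTED BY test_admin_role CASCADE;
```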
Let's use version 2.3.7 to fix the following error as we do in docker
images created in https://github.com/citusdata/the-process/ repo.
```
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/home/onurctirtir/.local/share/virtualenvs/regress-ffZKpSmO/lib/python3.9/site-packages/werkzeug/urls.py)
```
And changing werkzeug version required rebuilding Pipfile.lock file in
src/test/regress. Before updating this Pipfile.lock file, we want to
make sure that versions specified there don't break any tests. And to
ensure that this is the case,
https://github.com/citusdata/the-process/pull/155 synchronizes
requirements.txt file based on new Pipfile.lock and hence this PR
updates test image suffix accordingly.
Also, while updating https://github.com/citusdata/the-process/pull/155,
I had to bump Postgres versions to the latest minors to make image
builds pass again, and updating Postgres versions in the images requires
updating them in this repo too. While doing that, we also update the
Postgres version used in the devcontainer.
DESCRIPTION: Resolves an issue that disrupts distributed GRANT
statements with the grantor option
This PR solves three issues:
1. Correcting the erroneous appending of multiple GRANTED BY clauses in
the deparser.
2. Adding support for the grantor (GRANTED BY) in grant role propagation.
3. Implementing grantor (GRANTED BY) support during the metadata sync
grant role propagation phase.
Limitations: Currently, the grantor must be created prior to the
metadata sync phase. During metadata sync, both the creation of the
grantor and the grants given by that role cannot be performed, as the
grantor role is not detected during the dependency resolution phase.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Moves the following functions to the Citus internal schema:
citus_internal_local_blocked_processes
citus_internal_global_blocked_processes
citus_internal_mark_node_not_synced
citus_internal_unregister_tenant_schema_globally
citus_internal_update_none_dist_table_metadata
citus_internal_update_placement_metadata
citus_internal_update_relation_colocation
citus_internal_start_replication_origin_tracking
citus_internal_stop_replication_origin_tracking
citus_internal_is_replication_origin_tracking_active
#7405
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
DESCRIPTION: citus_move_shard_placement now fails early when a shard
cannot be safely moved
The implementation is quite simplistic:
`citus_move_shard_placement(...)` will fail with an error if there's any
new node in the cluster that doesn't have reference tables yet.
It could have been finer-grained, i.e. erroring only when trying to move
a shard to an uninitialized node. Looking at the related functions,
`replicate_reference_tables()` and `citus_rebalance_start()`, I think
it's acceptable behaviour. These other functions also treat "any"
uninitialized node as a temporary anomaly.
Fixes #7426
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Since Postgres commit da9b580d files and directories are supposed to
be created with pg_file_create_mode and pg_dir_create_mode permissions
when default permissions are expected.
This fixes a failure of one of the postgres tests:
If we create file add.conf containing
```
shared_preload_libraries='citus'
```
and run postgres tests
```
TEMP_CONFIG=/path/to/add.conf make installcheck -C src/bin/pg_ctl/
```
then 001_start_stop.pl fails with
```
.../data/base/pgsql_job_cache mode must be 0750
```
in the log.
In passing this also stops creating directories that we haven't used
since Citus 7.4.
This change explicitly doesn't change permissions of certificates/keys
that we create.
---------
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Moves the following functions:
citus_internal_delete_colocation_metadata
citus_internal_delete_partition_metadata
citus_internal_delete_placement_metadata
citus_internal_delete_shard_metadata
citus_internal_delete_tenant_schema
Move more functions to citus_internal schema, the list:
citus_internal_add_placement_metadata
citus_internal_add_shard_metadata
citus_internal_add_tenant_schema
citus_internal_adjust_local_clock_to_remote
citus_internal_database_command
#7405
Move citus_internal_acquire_citus_advisory_object_class_lock and
citus_internal_add_colocation_metadata functions from pg_catalog to
citus_internal.
#7405
Soon we will have occurrences of "citus.X" in shared_library_init.c that
are not part of GUC defs, so we need to use a more precise regular
expression.
Fixes a bug that breaks queries from non-main dbs when
citus.local_hostname is set to a value different than "localhost".
This is a very old bug that doesn't cause a problem as long as the Citus
catalog is available to FindWorkerNode(). And the catalog is always
available unless we're in a non-main database, which might be the case
on main but not on older releases, hence not adding a `DESCRIPTION`. For
this reason, I don't see a reason to backport this.
Maybe we should totally refrain from using LOCAL_HOST_NAME in all
code-paths, but I'm not doing that in this PR as the other paths don't
seem to be breaking anything user-facing.
```c
char *
GetAuthinfo(char *hostname, int32 port, char *user)
{
    char *authinfo = NULL;
    bool isLoopback = (strncmp(LOCAL_HOST_NAME, hostname, MAX_NODE_LENGTH) == 0 &&
                       PostPortNumber == port);

    if (IsTransactionState())
    {
        int64 nodeId = WILDCARD_NODE_ID;

        /* -1 is a special value for loopback connections (task tracker) */
        if (isLoopback)
        {
            nodeId = LOCALHOST_NODE_ID;
        }
        else
        {
            WorkerNode *worker = FindWorkerNode(hostname, port);
            if (worker != NULL)
            {
                nodeId = worker->nodeId;
            }
        }

        authinfo = GetAuthinfoViaCatalog(user, nodeId);
    }

    return (authinfo != NULL) ? authinfo : "";
}
```
This patch includes the username in the reported error message.
This makes debugging easier when certain commands open connections
as other users than the user that is executing the command.
```
monitora_snapshot=# SELECT citus_move_shard_placement(102030, 'monitora.db-dev-worker-a', 6005, 'monitora.db-dev-worker-a', 6017);
ERROR: connection to the remote node monitora_user@monitora.db-dev-worker-a:6017 failed with the following error: fe_sendauth: no password supplied
Time: 40,198 ms
```
This PR makes the connections to other nodes for
`mark_object_distributed` use the same user as
`execute_command_on_remote_nodes_as_user` so they'll use the same
connection.
ExecuteTaskListIntoTupleDestWithParam and ExecuteTaskListIntoTupleDest
are nearly the same. I parameterized them and made a reusable structure
here.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Remove a few small memory leaks
In #7440 one instance of a strdup was removed. But there were a few
more. This removes the ones that are left over, or adds a comment
explaining why the strdup is there on purpose.
This change refactors the code to use generate_qualified_relation_name
from the relation id, instead of using a sequence of functions to
generate the relation name.
Fixes#6602
Postgres refactored newNode() in PG 17; the main point for doing this is
that the original trick is no longer necessary for modern compilers[1].
This does the same for Citus.
This should have no backward compatibility issues since it just replaces
palloc0fast with palloc0.
This is good for forward compatibility since palloc0fast no longer
exists in PG 17.
[1]
https://www.postgresql.org/message-id/b51f1fa7-7e6a-4ecc-936d-90a8a1659e7c@iki.fi
This fixes two problems:
1. Allow `make check -j20` to work, by disabling parallelism. This was
reported by a user in #7432
2. Actually run all the tests by forwarding to `make check` instead of
`check-full`, because confusingly `check-full` does not run all the
tests.
DESCRIPTION: Adds comment on database and role propagation.
Example commands are as below:
comment on database <db_name> is '<comment_text>'
comment on database <db_name> is NULL
comment on role <role_name> is '<comment_text>'
comment on role <role_name> is NULL
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
I noticed while reviewing #7203 that there was no example of executing
SQL on a worker in the pytest README. Since this is a pretty common
thing that people want to do, this PR adds that.
Test isolation_update_node fails on some systems with the following error:
```
-s2: WARNING: connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: Name or service not known
+s2: WARNING: connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: Temporary failure in name resolution
```
This slightly modifies an already existing [normalization
rule](739c6d26df/src/test/regress/bin/normalize.sed (L217-L218))
to fix it.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Adding the upgrade_basic_before_non_mixed.sql file because while
upgrade_basic_after_non_mixed exists, its before variation didn't, as
we don't have any "before" steps. However, run_test.py assumes that
all "after" files have a "before" variation as well. So this PR adds
an empty upgrade_basic_before_non_mixed.sql file.
Also, given that we don't have a version called 12.1devel anymore,
change it to 12.1.1.
And finally, let CI skip flaky-test detection for upgrade tests, both
because it's quite hard to get the flaky-test-detection job working for
upgrade tests and because in the end it is not very useful to test
upgrade tests for flakiness.
Running a query from a Citus non-main database that inserts to
pg_dist_object requires a new connection to the main database itself.
This PR adds that connection to the main database.
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
When there are multiple localhost entries in /etc/hosts like the
following:
```
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost
```
the multi_cluster_management check will fail:
```
@@ -857,20 +857,21 @@
ERROR: group 14 already has a primary node
-- check that you can add secondaries and unavailable nodes to a group
SELECT groupid AS worker_2_group FROM pg_dist_node WHERE nodeport = :worker_2_port \gset
SELECT 1 FROM master_add_node('localhost', 9998, groupid => :worker_1_group, noderole => 'secondary');
?column?
----------
1
(1 row)
SELECT 1 FROM master_add_node('localhost', 9997, groupid => :worker_1_group, noderole => 'unavailable');
+WARNING: could not establish connection after 5000 ms
?column?
----------
1
(1 row)
```
This actually isn't just a problem in test environments; it could also
occur during actual usage when a hostname in pg_dist_node
resolves to multiple IPs and one of those IPs is unreachable.
Postgres will then automatically continue with the next IP, but
Citus should listen for events on the new socket, not on the
old one.
Co-authored-by: chuhx43211 <chuhx43211@hundsun.com>
LoadShardList is called twice, which is not necessary, and there is no
need to sort the shard placement list since we only want to know the list
length.
DESCRIPTION: Adds support for issuing `CREATE`/`DROP` DATABASE commands
from worker nodes
With this commit, we allow issuing CREATE / DROP DATABASE commands from
worker nodes too.
As in #7278, this is not allowed when the coordinator is not added to
metadata because we don't ever sync metadata changes to coordinator
when adding coordinator to the metadata via
`SELECT citus_set_coordinator_host('<hostname>')`, or equivalently, via
`SELECT citus_add_node(<coordinator_node_name>, <coordinator_node_port>, 0)`.
We serialize database management commands by acquiring a Citus-specific
advisory lock on the first primary worker node if there are any workers in the
cluster. As opposed to what we've done in https://github.com/citusdata/citus/pull/7278
for role management commands, we try to avoid running into distributed deadlocks
as much as possible. This is because, while distributed deadlocks around
role management commands can be detected by Citus, this is not the case for database
management commands, because most of them cannot be run inside a transaction block.
In that case, Citus cannot even detect the distributed deadlock because the command is not
part of a distributed transaction at all, and then the command execution might not return
control back to the user for an indefinite amount of time.
This fixes #7230.
First of all, using HeapTupleHeaderGetDatumLength(heapTuple) is
definitely wrong; it gives a number that's 4 times less than the correct
tuple size (heapTuple.t_len). See:
https://github.com/postgres/postgres/blob/REL_16_0/src/include/access/htup_details.h#L455-L456
https://github.com/postgres/postgres/blob/REL_16_0/src/include/varatt.h#L279
https://github.com/postgres/postgres/blob/REL_16_0/src/include/varatt.h#L225-L226
When I fixed it, the limit_intermediate_size test failed, so I tried to
understand what's going on there. In the original commit fd546cf these
queries were supposed to fail. Then in b3af63c three of the queries that
were supposed to fail suddenly worked, and the tests were changed to pass
without understanding why the output had changed or how to keep the test
testing what it was meant to test. Even comments saying that these queries
should fail were left untouched. The commit message gives no clue about why
exactly the test changed:
> It seems that when we use adaptive executor instead of task tracker, we
> exceed the intermediate result size less in the test. Therefore updated
> the tests accordingly.
Then 3fda2c3 also blindly raised the limit for one of the queries to
keep it working:
3fda2c3254 (diff-a9b7b617f9dfd345318cb8987d5897143ca1b723c87b81049bbadd94dcc86570R19)
When in fe3caf3 that HeapTupleHeaderGetDatumLength(heapTuple) call was
finally added, one of those test queries started failing again.
The other two are now also failing after the fix. I don't understand
how exactly the calculation of the "intermediate result size" that is
limited by citus.max_intermediate_result_size changed through
b3af63c and fe3caf3, but these numbers are now closer to what
they originally were when this limitation was added in
fd546cf. So these queries should fail, like in the original
version of the limit_intermediate_size test.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
foreign_key_to_reference_shard_rebalance failed because the partition
for year 2024 does not exist; fixed by adding a default partition.
Replaces https://github.com/citusdata/citus/pull/7396 by adding a rule
that allows properly testing foreign_key_to_reference_shard_rebalance
via run_test.py.
Closes #7396
Co-authored-by: chuhx <148182736+cstarc1@users.noreply.github.com>
DESCRIPTION: Adds REASSIGN OWNED BY propagation
This pull request introduces the propagation of the "Reassign owned by"
statement. It accommodates both local and distributed roles for both the
old and new assignments. However, when the old role is a local role, it
undergoes filtering and is not propagated. On the other hand, if the new
role is a local role, the process involves first creating the role on
worker nodes before propagating the "Reassign owned" statement.
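For example (role names are hypothetical; if `old_owner` is a local role the statement is filtered out rather than propagated):
```sql
REASSIGN OWNED BY old_owner TO new_owner;
```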
DESCRIPTION: Adds database connection limit, rename and set tablespace
propagation
In this PR, below statement propagations are added
alter database <database_name> with allow_connections = <boolean_value>;
alter database <database_name> rename to <database_name2>;
alter database <database_name> set TABLESPACE <table_space_name>
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for 2PC from non-Citus main databases
This PR only adds support for `CREATE USER` queries; other queries need
to be added. But it should be simple because this PR creates the
underlying structure.
Citus main database is the database where the Citus extension is
created. A non-main database is all the other databases that are in the
same node with a Citus main database.
When a `CREATE USER` query is run on a non-main database we:
1. Run `start_management_transaction` on the main database. This
function saves the outer transaction's xid (the non-main database
query's transaction id) and marks the current query as a main db command.
2. Run `execute_command_on_remote_nodes_as_user("CREATE USER
<username>", <username to run the command>)` on the main database. This
function creates the users in the rest of the cluster by running the
query on the other nodes. The user on the current node is created by the
outer, non-main-db query, so that subsequent commands in the same
transaction can see this user.
3. Run `mark_object_distributed` on the main database. This function
adds the user to `pg_dist_object` on all of the nodes, including the
current one.
This PR also implements transaction recovery for the queries from
non-main databases.
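A condensed sketch of the flow for a hypothetical user name (the internal calls are paraphrased from the steps above, not exact signatures):
```sql
-- issued in a non-main database:
CREATE USER app_user;
-- Citus then effectively runs, over a connection to the main database:
--   SELECT start_management_transaction(<outer transaction xid>);
--   SELECT execute_command_on_remote_nodes_as_user('CREATE USER app_user',
--                                                  <user to run the command>);
--   SELECT mark_object_distributed(<the new role>);
```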
Allowing GRANT ADMIN to now also be INHERIT or SET in support of PostgreSQL 16:
GRANT role_name [, ...] TO role_specification [, ...] [ WITH { ADMIN |
INHERIT | SET } { OPTION | TRUE | FALSE } ] [ GRANTED BY
role_specification ]
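For example, statements like these (role names hypothetical) are now supported:
```sql
GRANT role_a TO role_b WITH INHERIT TRUE;
GRANT role_a TO role_b WITH SET FALSE GRANTED BY role_c;
```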
Fixes: #7148
Related: #7138
See review changes from https://github.com/citusdata/citus/pull/7164
The devcontainer missed two tools used by code formatting, as done by
`ci/fix_style.sh`
The missing tools were both python tools, used for formatting our python
scripts.
- black
- isort
This change adds both tools. The way it does this is by keeping a
`requirements.txt` in `.devcontainer/` containing all the python
dependencies we need to install. After installing both tools in a clean
environment, we exported all installed packages with `pip freeze`
into the `requirements.txt`, assuming everything installed is related to
the two tools.
Since python installs the binaries in `~/.local/bin/` we also move some
scripts we manually install from `~/.bin/` to that same directory. At
first it seemed like vscode's devcontainers did not have that directory
on the path; however, when the directory exists at container start, it
does get added to `$PATH` by `~/.profile`. This makes the whole
environment a bit more streamlined.
This change adds a script to programmatically group all includes in a
specific order. The script was used as a one-time invocation to group
and sort all includes throughout our formatted code. The grouping is as
follows:
- System includes (e.g. `#include <...>`)
- Postgres.h (e.g. `#include "postgres.h"`)
- Toplevel imports from postgres, not contained in a directory (e.g.
`#include "miscadmin.h"`)
- General postgres includes (e.g. `#include "nodes/..."`)
- Toplevel citus includes, not contained in a directory (e.g. `#include
"citus_version.h"`)
- Columnar includes (e.g. `#include "columnar/..."`)
- Distributed includes (e.g. `#include "distributed/..."`)
Because it is quite hard to tell the difference between toplevel
citus includes and toplevel postgres includes, the script hardcodes the
list of toplevel citus includes. In the same manner it assumes anything
not prefixed with `columnar/` or `distributed/` is a postgres include.
The sorting/grouping is enforced by CI. Since we do so with our own
script, no changes are required in our uncrustify configuration.
DESCRIPTION: Adds support for propagating `CREATE`/`DROP` database
In this PR, create and drop database support is added.
For CREATE DATABASE:
* "oid" option is not supported
* specifying "strategy" to be different than "wal_log" is not supported
* specifying "template" to be different than "template1" is not
supported
The last two are because those settings are not saved in `pg_database`,
and when activating a node we cannot assume what parameters were provided
when creating the database.
And "oid" is not supported because whether the user specified an arbitrary
oid when creating the database is not saved in pg_database, and we want
to avoid oid collisions that might arise from attempting to use an
auto-assigned oid on the workers.
Finally, in case of node activation, GRANTs for the database are also
propagated.
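To illustrate the restrictions above (database names hypothetical; exact error wording may differ):
```sql
CREATE DATABASE mydb;                        -- propagated
CREATE DATABASE mydb2 OID = 16385;           -- errors: "oid" is not supported
CREATE DATABASE mydb3 STRATEGY = file_copy;  -- errors: non-default strategy
CREATE DATABASE mydb4 TEMPLATE = template0;  -- errors: non-default template
```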
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
https://app.circleci.com/pipelines/github/citusdata/citus/34550/workflows/5b802f66-2666-4623-a209-6d7799f7ee5f/jobs/1229153
```diff
VACUUM (FREEZE, PROCESS_TOAST true) local_vacuum_table;
SELECT relfrozenxid::text::integer > :frozenxid AS frozen_performed FROM pg_class
WHERE oid=:reltoastrelid::regclass;
frozen_performed
------------------
- t
+ f
(1 row)
```
The PROCESS_TOAST option for VACUUM was introduced in PG14. The failing
test was supposed to be a part of `multi_utilities.sql`, but it was
included in `pg14.sql` to avoid an alternative output for PG13. See
ba62c0a148 (diff-ed03478f693155e2fe092e9ad356bf884dc097f554e8d75eff562d52bbcf7a75L255-L272)
for reference.
However, now that we don't support PG13 anymore, we can move this test
to `multi_utilities.sql`. Moving the test, plus inserting data before
running VACUUM FREEZE so that the freeze is more meaningful and not
flaky, fixes the flakiness problem of the test.
With the recent changes in packaging images, linux package installations
to execute validate_output are unnecessary now.
In this PR, I removed them to make the pipeline more efficient.
- [x] Remove the test warning before merge
When preparing changelog for 12.1.1 release, I accidentally swapped
the PR numbers for the two commits. This commit fixes the changelog
to point to the correct PRs.
We propagate `SECURITY LABEL [for provider] ON ROLE rolename IS
labelname` to the worker nodes.
We also make sure to run the relevant `SecLabelStmt` commands on a
newly added node by looking at roles found in `pg_shseclabel`.
See official docs for explanation on how this command works:
https://www.postgresql.org/docs/current/sql-security-label.html
This command stores the role label in the `pg_shseclabel` catalog table.
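For example (the provider and label are hypothetical and must be known to a loaded label provider):
```sql
SECURITY LABEL FOR my_provider ON ROLE my_role IS 'my_label';
```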
This commit also fixes the regex string in the
`check_gucs_are_alphabetically_sorted.sh` script such that it escapes
the dot. Previously it was matching all strings starting with "citus"
instead of "citus." as it should.
To test this feature, I currently make use of a special GUC to control
label provider registration in PG_init when creating the Citus extension.
While investigating replication slot leftovers
in PR https://github.com/citusdata/citus/pull/7338,
I ran into the following refactoring/cleanup
that can be done in our test suite:
- Add a separate test to remove non-default nodes
- Remove coordinator removal from the `add_coordinator` test; use the
`remove_coordinator_from_metadata` test where needed
- Don't print nodeids in the `multi_multiuser_auth` and
`multi_poolinfo_usage` tests
- Use `startswith` when checking for isolation or failure tests
- Add some dependencies accordingly in `run_test.py` for running flaky
test schedules
Postgres got minor updates on Nov 9; this starts using the images with
the latest version for our tests, namely 14.10, 15.5 and 16.1.
These minor updates were compatible with Citus.
Sister PR: https://github.com/citusdata/the-process/pull/152
DESCRIPTION: Adds support for issuing role management commands from worker nodes
Since it's unlikely to get into a distributed deadlock with role commands,
we don't care much about them at the moment.
There were several attempts to reduce the chances of a deadlock, but we
didn't get any of them merged into the main branch yet, see:
#7325 #7016 #7009
When I run this test locally, the size of the table after the DELETE
command is around 58785792. Hence, I assume that the diffs suggest that
the VACUUM had no effect. The current solution is to run the VACUUM
command three times instead of once.
Example diff:
https://github.com/citusdata/citus/actions/runs/6722231142/attempts/1#summary-18269870674
```diff
insert into local_vacuum_table select i from generate_series(1,1000000) i;
delete from local_vacuum_table;
VACUUM local_vacuum_table;
SELECT CASE WHEN s BETWEEN 20000000 AND 25000000 THEN 22500000 ELSE s END
FROM pg_total_relation_size('local_vacuum_table') s ;
s
----------
- 22500000
+ 58785792
(1 row)
```
See more diff examples in the PR description
https://github.com/citusdata/citus/pull/7334
https://github.com/citusdata/citus/actions/runs/6745019678/attempts/1#summary-18336188930
```diff
insert into target_table SELECT a*2 FROM source_table RETURNING a;
-NOTICE: executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_xxxxx_from_4213582_to_0','repartitioned_results_xxxxx_from_4213584_to_0']::text[],'localhost',57638) bytes
+NOTICE: executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_3940758121873413_from_4213584_to_0','repartitioned_results_3940758121873413_from_4213582_to_0']::text[],'localhost',57638) bytes
```
The elements in the array passed to `fetch_intermediate_results` are the
same, but in the opposite order than expected.
To fix this flakiness, we can omit the `"SELECT bytes FROM
fetch_intermediate_results..."` line. From the following logs, it is
understandable that the intermediate results have been fetched.
Fix the flaky test that results in following diff by waiting until the
backend that we want to terminate really terminates, until 5secs.
```diff
--- /__w/citus/citus/src/test/regress/expected/isolation_get_all_active_transactions.out.modified 2023-11-01 16:30:57.648749795 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_get_all_active_transactions.out.modified 2023-11-01 16:30:57.656749877 +0000
@@ -114,13 +114,13 @@
--------------------
t
(1 row)
step s3-show-activity:
SET ROLE postgres;
select count(*) from get_all_active_transactions() where process_id IN (SELECT * FROM selected_pid);
count
-----
- 0
+ 1
(1 row)
```
Sometimes multi_alter_table_statements would fail in CI like this:
```diff
-- Verify that DROP NOT NULL works
ALTER TABLE lineitem_alter ALTER COLUMN int_column2 DROP NOT NULL;
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='lineitem_alter'::regclass;
- Column | Type | Modifiers
----------------------------------------------------------------------
- l_orderkey | bigint | not null
- l_partkey | integer | not null
- l_suppkey | integer | not null
- l_linenumber | integer | not null
- l_quantity | numeric(15,2) | not null
- l_extendedprice | numeric(15,2) | not null
- l_discount | numeric(15,2) | not null
- l_tax | numeric(15,2) | not null
- l_returnflag | character(1) | not null
- l_linestatus | character(1) | not null
- l_shipdate | date | not null
- l_commitdate | date | not null
- l_receiptdate | date | not null
- l_shipinstruct | character(25) | not null
- l_shipmode | character(10) | not null
- l_comment | character varying(44) | not null
- float_column | double precision | default 1
- date_column | date |
- int_column1 | integer |
- int_column2 | integer |
- null_column | integer |
-(21 rows)
-
+ERROR: schema "alter_table_add_column" does not exist
-- COPY should succeed now
SELECT master_create_empty_shard('lineitem_alter') as shardid \gset
```
Reading from table_desc apparently has an issue: if the schema gets
deleted from one of the items while it is being read, we get such
an error.
This change fixes that by not running multi_alter_table_statements in
parallel with alter_table_add_column anymore.
This is another instance of the same issue as in #7294
Sometimes in CI we run into this failure:
```diff
SELECT resultId, nodeport, rowcount, targetShardId, targetShardIndex
FROM partition_task_list_results('test', $$ SELECT * FROM source_table $$, 'target_table')
NATURAL JOIN pg_dist_node;
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node localhost:9060 failed with the following error: connection not open
SELECT * FROM distributed_result_info ORDER BY resultId;
- resultid | nodeport | rowcount | targetshardid | targetshardindex
----------------------------------------------------------------------
- test_from_100800_to_0 | 9060 | 22 | 100805 | 0
- test_from_100801_to_0 | 57637 | 2 | 100805 | 0
- test_from_100801_to_1 | 57637 | 15 | 100806 | 1
- test_from_100802_to_1 | 57637 | 10 | 100806 | 1
- test_from_100802_to_2 | 57637 | 5 | 100807 | 2
- test_from_100803_to_2 | 57637 | 18 | 100807 | 2
- test_from_100803_to_3 | 57637 | 4 | 100808 | 3
- test_from_100804_to_3 | 9060 | 24 | 100808 | 3
-(8 rows)
-
+ERROR: current transaction is aborted, commands ignored until end of transaction block
-- fetch from worker 2 should fail
SAVEPOINT s1;
+ERROR: current transaction is aborted, commands ignored until end of transaction block
SELECT fetch_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[], 'localhost', :worker_2_port) > 0 AS fetched;
-ERROR: could not open file "base/pgsql_job_cache/xx_x_xxx/test_from_100802_to_1.data": No such file or directory
-CONTEXT: while executing command on localhost:xxxxx
+ERROR: current transaction is aborted, commands ignored until end of transaction block
ROLLBACK TO SAVEPOINT s1;
+ERROR: savepoint "s1" does not exist
-- fetch from worker 1 should succeed
SELECT fetch_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[], 'localhost', :worker_1_port) > 0 AS fetched;
- fetched
----------------------------------------------------------------------
- t
-(1 row)
-
+ERROR: current transaction is aborted, commands ignored until end of transaction block
-- make sure the results read are same as the previous transaction block
SELECT count(*), sum(x) FROM
read_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[],'binary') AS res (x int);
- count | sum
----------------------------------------------------------------------
- 15 | 863
-(1 row)
-
+ERROR: current transaction is aborted, commands ignored until end of transaction block
ROLLBACk;
```
As outlined in #7306, which I created, the reason for this is related to
only having a single connection open to the node. Finding and fixing the
full cause is not trivial, so instead this PR starts working around
this bug by forcing maximum parallelism. Preferably we'd want
this workaround not to be necessary, but that requires
spending time to fix the underlying issue. For now, having a less flaky
CI is good enough.
Sometimes in CI insert_select_connection_leak would fail like this:
```diff
END;
SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
leaked_worker_1_connections | leaked_worker_2_connections
-----------------------------+-----------------------------
- 0 | 0
+ -1 | 0
(1 row)
-- ROLLBACK
BEGIN;
INSERT INTO target_table SELECT * FROM source_table;
INSERT INTO target_table SELECT * FROM source_table;
ROLLBACK;
SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
leaked_worker_1_connections | leaked_worker_2_connections
-----------------------------+-----------------------------
- 0 | 0
+ -1 | 0
(1 row)
\set VERBOSITY TERSE
-- Error on constraint failure
BEGIN;
INSERT INTO target_table SELECT * FROM source_table;
SELECT worker_connection_count(:worker_1_port) AS worker_1_connections,
worker_connection_count(:worker_2_port) AS worker_2_connections \gset
SAVEPOINT s1;
INSERT INTO target_table SELECT a, CASE WHEN a < 50 THEN b ELSE null END FROM source_table;
@@ -89,15 +89,15 @@
leaked_worker_1_connections | leaked_worker_2_connections
-----------------------------+-----------------------------
0 | 0
(1 row)
END;
SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
leaked_worker_1_connections | leaked_worker_2_connections
-----------------------------+-----------------------------
- 0 | 0
+ -1 | 0
(1 row)
```
Source:
https://github.com/citusdata/citus/actions/runs/6718401194/attempts/1#summary-18258258387
A negative amount of leaked connections is obviously not possible. For
some reason there was a connection open when we checked the initial
amount of connections that was closed afterwards. This could be from
the maintenance daemon, or maybe from the previous test that had not
fully closed its connections just yet.
The change in this PR doesn't actually fix the cause of the negative
connection count, but it simply considers it good as well, by changing
the result to zero for negative values.
With this fix we might sometimes miss a leak, because the negative
number can cancel out the leak and still result in a 0. But since the
negative number only occurs sometimes, we'll still find the leak often
enough.
When executing a prepared CALL, which is not pure SQL but available with
some drivers like npgsql and pgjdbc, Citus entered a code path where a
plan is not defined while trying to increase its cost, and thus hit a
SIGSEGV when the plan is a NULL pointer.
Fix by only increasing the plan cost when the plan is not null.
However, it is a bit suspicious to get here with a NULL plan, and maybe a
better change would be to not call
ShardPlacementForFunctionColocatedWithDistTable() with a NULL plan at
all (in call.c:134).
The bug was hit with, for example:
```
CallableStatement proc = con.prepareCall("{CALL p(?)}");
proc.registerOutParameter(1, java.sql.Types.BIGINT);
proc.setInt(1, -100);
proc.execute();
```
where `p(bigint)` is a distributed "function" and the param is the
distribution key (also in a distributed table); see #7242 for details.
Fixes #7242
Sometimes in CI our logical_replication test fails like this:
```diff
+++ /__w/citus/citus/src/test/regress/results/logical_replication.out.modified 2023-11-01 14:15:08.562758546 +0000
@@ -40,21 +40,21 @@
SELECT count(*) from pg_publication;
count
-------
0
(1 row)
SELECT count(*) from pg_replication_slots;
count
-------
- 0
+ 1
(1 row)
SELECT count(*) FROM dist;
count
-------
```
It's hard to understand what is going on here, just based on the wrong
number. So this PR changes the test to show the name of the
subscription, publication and replication slot to make finding the cause
easier.
In passing this also fixes another flaky test in the same file that our
flaky test detection picked up. This is done by waiting for resource
cleanup after the shard move.
This is causing 404 failures due to a race condition:
https://github.com/actions/toolkit/issues/1235
It also makes the tests take unnecessarily long.
This was tested by changing a test file and seeing that the flaky test
detection was still working.
Fixes the flaky test that results in following diff:
```diff
--- /__w/citus/citus/src/test/regress/expected/multi_mx_node_metadata.out.modified 2023-11-01 14:22:12.890476575 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_mx_node_metadata.out.modified 2023-11-01 14:22:12.914476657 +0000
@@ -840,24 +840,26 @@
(1 row)
\c :datname - - :master_port
SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
datname
------------
db_to_drop
(1 row)
DROP DATABASE db_to_drop;
+ERROR: database "db_to_drop" is being accessed by other users
SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
datname
------------
-(0 rows)
+ db_to_drop
+(1 row)
-- cleanup
DROP SEQUENCE sequence CASCADE;
NOTICE: drop cascades to default value for column a of table reference_table
```
Sometimes isolation_metadata_sync_deadlock fails in CI like this:
```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out
--- /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out.modified 2023-11-01 16:03:15.090199229 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out.modified 2023-11-01 16:03:15.098199312 +0000
@@ -110,10 +110,14 @@
t
(1 row)
step s2-stop-connection:
SELECT stop_session_level_connection_to_node();
stop_session_level_connection_to_node
-------------------------------------
(1 row)
+
+teardown failed: ERROR: localhost:57638 is a metadata node, but is out of sync
+HINT: If the node is up, wait until metadata gets synced to it and try again.
+CONTEXT: SQL statement "SELECT master_remove_distributed_table_metadata_from_workers(v_obj.objid, v_obj.schema_name, v_obj.object_name)"
```
Source:
https://github.com/citusdata/citus/actions/runs/6721938040/attempts/1#summary-18268946448
To fix this we now wait for the metadata to be fully synced to all
nodes at the start of the teardown steps.
Sometimes in CI citus_non_blocking_split_shard_cleanup failed like this:
```diff
--- /__w/citus/citus/src/test/regress/expected/citus_non_blocking_split_shard_cleanup.out.modified 2023-11-01 15:07:14.280551207 +0000
+++ /__w/citus/citus/src/test/regress/results/citus_non_blocking_split_shard_cleanup.out.modified 2023-11-01 15:07:14.292551358 +0000
@@ -106,21 +106,22 @@
-----------------------------------
(1 row)
\c - - - :worker_2_port
SET search_path TO "citus_split_test_schema";
-- Replication slots should be cleaned up
SELECT slot_name FROM pg_replication_slots;
slot_name
---------------------------------
-(0 rows)
+ citus_shard_split_slot_19_10_17
+(1 row)
-- Publications should be cleanedup
SELECT count(*) FROM pg_publication;
count
```
It's expected that the replication slot is sometimes not cleaned up if
we don't wait until resource cleanup completes. This PR starts doing
that here.
Normally, tests which are written independently of other tests can and
should use the minimal schedule. However, in our test settings the
base schedule is being used, which may cause unnecessary dependencies
and thus unrelated errors that developers don't see in their local
environment. With this change, the default setting will be minimal, so
that tests will be free of unnecessary dependencies.
Sometimes failure_split_cleanup failed in CI like this:
```diff
ERROR: server closed the connection unexpectedly
CONTEXT: while executing command on localhost:9060
SELECT operation_id, object_type, object_name, node_group_id, policy_type
FROM pg_dist_cleanup where operation_id = 777 ORDER BY object_name;
operation_id | object_type | object_name | node_group_id | policy_type
--------------+-------------+-----------------------------------------------------------+---------------+-------------
777 | 1 | citus_failure_split_cleanup_schema.table_to_split_8981000 | 1 | 0
- 777 | 1 | citus_failure_split_cleanup_schema.table_to_split_8981002 | 1 | 1
777 | 1 | citus_failure_split_cleanup_schema.table_to_split_8981002 | 2 | 0
+ 777 | 1 | citus_failure_split_cleanup_schema.table_to_split_8981002 | 1 | 1
777 | 1 | citus_failure_split_cleanup_schema.table_to_split_8981003 | 2 | 1
777 | 4 | citus_shard_split_publication_1_10_777 | 2 | 0
(5 rows)
-- we need to allow connection so that we can connect to proxy
```
Source:
https://github.com/citusdata/citus/actions/runs/6717642291/attempts/1#summary-18256014949
It's the common problem where we're missing a column in the ORDER BY
clause. This fixes that by adding node_group_id to the query in
question.
Sometimes in CI isolation_master_update_node fails like this:
```diff
------------------
(1 row)
step s2-abort: ABORT;
step s1-abort: ABORT;
FATAL: terminating connection due to administrator command
FATAL: terminating connection due to administrator command
SSL connection has been closed unexpectedly
+server closed the connection unexpectedly
master_remove_node
------------------
```
This just seems like a random error line. The only way to reasonably fix
this is by adding an extra output file. So that's what this PR does.
We want the nice looking green checkmark on our main branch too.
This PR includes running on pushes to release branches too, but that
won't come into effect until we have release branches with this
workflow file.
One of our most flaky and most annoying tests is
multi_cluster_management. It usually fails like this:
```diff
SELECT citus_disable_node('localhost', :worker_2_port);
citus_disable_node
--------------------
(1 row)
SELECT public.wait_until_metadata_sync(60000);
+WARNING: waiting for metadata sync timed out
wait_until_metadata_sync
--------------------------
(1 row)
```
This tries to address that by hardening wait_until_metadata_sync. I
believe the reason for this warning is that there is a race condition in
wait_until_metadata_sync: it's possible for the pre-check to fail, then
have the maintenance daemon send a notification, and only then have the
backend start to listen. I tried to fix it in two ways (see the sketch
after this list):
1. First run LISTEN, and only then do the pre-check.
2. If we time out, check again just to make sure that we did not miss
the notification somehow, and don't show a warning if all metadata is
synced after the timeout.
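Conceptually, in SQL terms (a sketch; the actual fix lives in the C implementation of wait_until_metadata_sync, and the channel name here is hypothetical):
```sql
LISTEN metadata_sync;
-- only now run the pre-check; a NOTIFY that fires between the
-- pre-check and the LISTEN can no longer be missed this way
```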
It's hard to know for sure that this fixes it because the test is not
repeatable and I could not reproduce it locally. Let's just hope for the
best.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Sometimes multi_reference_table failed in CI like this:
```diff
\c - - - :master_port
DROP INDEX reference_schema.reference_index_2;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='reference_schema.reference_table_ddl_1250019'::regclass;
- Column | Type | Modifiers
----------------------------------------------------------------------
- value_2 | double precision | default 25.0
- value_3 | text | not null
- value_4 | timestamp without time zone |
- value_5 | double precision |
-(4 rows)
-
+ERROR: schema "citus_local_table_queries" does not exist
\di reference_schema.reference_index_2*
List of relations
Schema | Name | Type | Owner | Table
```
Source:
https://github.com/citusdata/citus/actions/runs/6707535961/attempts/2#summary-18226879513
Reading from table_desc apparently has an issue: if the schema gets
deleted from one of the items while it is being read, we get such
an error.
This change fixes that by not running multi_reference_table in parallel
with citus_local_tables_queries anymore.
I just enhanced the existing code to check if the relation is an index
belonging to a distributed table.
If so, the shardId is appended to the relation (index) name and the
*_size functions are executed as before.
There is a change in an extern function:
`extern StringInfo GenerateSizeQueryOnMultiplePlacements(...)`
It's possible to create a new function and deprecate this one later if
compatibility is an issue.
Fixes https://github.com/citusdata/citus/issues/6496.
DESCRIPTION: Allows using Citus size functions on distributed tables'
indexes.
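A usage sketch (table and index names hypothetical):
```sql
CREATE TABLE dist_table (a int);
SELECT create_distributed_table('dist_table', 'a');
CREATE INDEX dist_table_a_idx ON dist_table (a);
-- size functions now also accept the index of a distributed table
SELECT citus_relation_size('dist_table_a_idx');
```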
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Sometimes validate constraint would fail like this:
```diff
validatable_constraint_8000016 | t
(10 rows)
DROP TABLE constrained_table;
+ERROR: deadlock detected
+DETAIL: Process 16602 waits for ShareRowExclusiveLock on relation 56258 of database 16384; blocked by process 16601.
+Process 16601 waits for AccessShareLock on relation 56120 of database 16384; blocked by process 16602.
+HINT: See server log for query details.
DROP TABLE referenced_table CASCADE;
DROP TABLE referencing_table;
DROP SCHEMA validate_constraint CASCADE;
-NOTICE: drop cascades to 3 other objects
+NOTICE: drop cascades to 4 other objects
DETAIL: drop cascades to type constraint_validity
drop cascades to view constraint_validations_in_workers
drop cascades to view constraint_validations
+drop cascades to table constrained_table
SET search_path TO DEFAULT;
```
Source:
https://github.com/citusdata/citus/actions/runs/6708383699?pr=7291
This change fixes that by not running together with the
foreign_key_to_reference_table test anymore. In passing it also
simplifies the dropping of the test's resources.
Making tasks in CI required before merging to master is important and
useful. The way this works is by saving the exact names of the required
tasks in the admin interface of the repo. It has a search box to add
them so it's not completely horrible, but doing so is quite a hassle
since we have so many jobs. So limiting the amount of churn in this list
of required jobs is quite useful.
This changes the names of tasks to only include the major versions of
Postgres, not the minor ones. Otherwise the next time we bump the minor
versions we would have to remove and re-add each of the jobs.
DESCRIPTION: This change starts a maintenance daemon at the time of
server start if there is a designated main database.
This is the code flow:
1. User designates a main database:
`ALTER SYSTEM SET citus.main_db = "myadmindb";`
2. When the postmaster starts, in _PG_Init, Citus calls
`InitializeMaintenanceDaemonForMainDb`.
This function registers a background worker to run
`CitusMaintenanceDaemonMain` with `databaseOid = 0`.
3. `CitusMaintenanceDaemonMain` takes some special actions when
databaseOid is 0:
- Gets the citus.main_db value.
- Connects to the citus.main_db.
- Now that `MyDatabaseId` is available, creates a hash entry for it.
- Then follows the same control flow as for a regular db.
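For reference, step 1 as a runnable sketch (database name hypothetical):
```sql
ALTER SYSTEM SET citus.main_db = 'myadmindb';
-- a server restart is needed: the background worker is registered
-- in _PG_Init at postmaster start
```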
When debugging postgres it is quite hard to get to the source for
`errfinish` in `elog.c`. Instead of relying on the developer to set a
breakpoint in the `elog.c` file for `errfinish` for `elevel == ERROR`,
this change adds the breakpoint to `.gdbinit`. This makes sure that
whenever a debugger is attached to a postgres backend it will break on
postgres errors.
When attaching the debugger a small banner is printed that explains how
to disable the breakpoint.
HasDistributionKey & HasDistributionKeyCacheEntry return true when the
corresponding table has a distribution key; the comments state the
opposite, which should be fixed.
Signed-off-by: Zhao Junwang <zhjwpku@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
There was a bug reported for previous versions of Citus where
shard_size was returning NULL for tables with spaces in them. It works
fine on the main branch though, but I'm still adding a test for this to
the main branch because it seems like a good test to have.
During the creation of the devcontainer we need to add a ppa repository,
which is easiest done via software-properties-common. As it turns out,
this installs pkexec into the container as a side effect.
When vscode tries to attach a debugger it first checks if pkexec is
installed, as this gives a nicer popup asking for elevation of rights to
attach to the process. However, since dev containers don't have a
windowing system running, pkexec doesn't work as expected and thus
prevents the debugger from attaching.
Without pkexec in the container vscode 'falls back' to plain old sudo,
which we can run passwordless in the container.
For pkexec to be removed we need to first purge
software-properties-common as well as autoremove all packages that were
installed due to the installation of said package. By performing this
all in one step we minimize the size of the layer we are creating.
DESCRIPTION: Send keepalive messages during the logical replication
phase of large shard splits to avoid timeouts.
During the logical replication part of the shard split process, the split
decoder filters out the wal records produced by the initial copy. If the
number of wal records is big, the split decoder ends up processing for
a long time before sending out any wal records through pgoutput. Hence
the wal receiver may time out and restart repeatedly, causing our split
driver code's catch-up logic to fail.
Notes:
1. If the wal_receiver_timeout is set to a very small number, e.g. 600ms,
it may time out before receiving the keepalives. My tests show that this
code works best when `wal_receiver_timeout` is set to 1 minute, which
is the default value (see the sketch after these notes).
2. Once a logical replication worker times out, a new one gets launched.
The new logical replication worker sets the pg_stat_subscription columns
to initial values, e.g. the latest_end_lsn is set to 0. Our driver logic
in `WaitForGroupedLogicalRepTargetsToCatchUp` cannot handle the LSN value
going backwards. This is the main reason for it to get stuck in the
infinite loop.
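A sketch for checking and setting the receiver timeout discussed in note 1 (it is a sighup-level GUC, so a reload suffices):
```sql
SHOW wal_receiver_timeout;                       -- default is 1min
ALTER SYSTEM SET wal_receiver_timeout = '1min';
SELECT pg_reload_conf();
```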
This change adds a devcontainer configuration to the Citus project. This
devcontainer allows for quick generation of isolated development
environments, either locally on the machine of a developer or in a cloud,
like GitHub Codespaces.
The devcontainer is updated automatically by github actions when its
configuration changes.
For more detailed instructions on how to quickstart the development in a
container see CONTRIBUTING.md
DESCRIPTION: Fix leaking of memory and memory contexts in Foreign
Constraint Graphs
Previously, every time we (re)created the Foreign Constraint
Relationship Graph, we created a new Memory Context while losing a
reference to the previous context. This old context could still have
left-over memory in it, causing a memory leak.
With this patch we statically have one memory context that we lazily
initialize the first time we create our foreign constraint relationship
graph. On every subsequent creation, besides destroying our previous
hashmap, we also reset our memory context to remove any left-over
references.
This commit aims to add a comprehensive guide that covers all essential
aspects of Citus, including planning, execution, locking mechanisms,
shard moves, 2PC, and many other major components of Citus.
Co-authored-by: Marco Slot <marco.slot@gmail.com>
When testing rolling Citus upgrades, coordinator should not be upgraded
until we upgrade all the workers.
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
DESCRIPTION: Shard moves/isolates report LSNs in LSN format
While investigating an issue with our catchup mechanism on certain
postgres versions we noticed we print LSNs in the format of the native
long type. This is an uncommon representation for LSNs in postgres
logs.
This patch changes the output of our log message from the long
type representation to the native LSN type representation, making it
easier for postgres users to recognize and compare LSNs with other
related reports.
Example of the new output:
```
2023-09-25 17:28:47.544 CEST [11345] LOG: The LSN of the target subscriptions on node localhost:9701 have increased from 0/0 to 0/E1ED20F8 at 2023-09-25 17:28:47.544165+02 where the source LSN is 1/415DCAD0
```
On a fresh install, `make clean` is not
required. However, if you have installed before without
cleaning first, you can get errors.
---------
Co-authored-by: aykut-bozkurt <51649454+aykut-bozkurt@users.noreply.github.com>
When cdc got added, the makefiles hardcoded the `.so` extension instead
of using the platform-specific `$(DLSUFFIX)` variable used by `pgxs.mk`.
Also, don't remove installed cdc artifacts on `make clean`.
This was sometimes failing when running locally due to some local shard
still existing. This fixes that. We normally silence all
`drop schema cascade` output like this anyway, to avoid unnecessary
diffs when modifying a test later on.
CentOS 7 and Oracle Linux 7 are not supported for newer releases by
Postgres. Therefore, we were getting package download errors in packaging
pipelines.
This PR removes the el/7 and ol/7 Postgres 16 pipelines.
DESCRIPTION: Adds support for ALTER DATABASE <db_name> SET .. statement
propagation
SET statements in Postgres have a common structure which is already being
used in the ALTER FUNCTION statement.
In this PR, I added a util file, citus_setutils, and made it usable both
for ALTER DATABASE <db_name> SET .. and ALTER FUNCTION ... SET ...
statements.
With this PR, the statements below will be propagated:
```sql
ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT }
ALTER DATABASE name SET configuration_parameter FROM CURRENT
ALTER DATABASE name RESET configuration_parameter
ALTER DATABASE name RESET ALL
```
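For instance, a concrete use of the first two forms (database name and values hypothetical):
```sql
ALTER DATABASE mydb SET work_mem TO '64MB';
ALTER DATABASE mydb RESET work_mem;
```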
Additionally, there was a bug in processing float values in the common
code block; I fixed this one as well.
Previous:
```C
case T_Float:
{
    appendStringInfo(buf, " %s", strVal(value));
    break;
}
```
Now:
```C
case T_Float:
{
    appendStringInfo(buf, " %s", nodeToString(value));
    break;
}
```
The easiest way to start contributing is via our devcontainer. This container works both locally in Visual Studio Code with docker-desktop/docker-for-mac, as well as in [GitHub Codespaces](https://github.com/features/codespaces). To open the project in vscode you will need the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers). For codespaces you will need to [create a new codespace](https://codespace.new/citusdata/citus).
With the extension installed you can run the following from the command palette to get started:
```
> Dev Containers: Clone Repository in Container Volume...
```
In the subsequent popup paste the url to the repo and hit enter.
```
https://github.com/citusdata/citus
```
This will create an isolated Workspace in vscode, complete with all tools required to build, test and run the Citus extension. We keep this container up to date with the supported postgres versions as well as the exact versions of tooling we use.
To quickly start we suggest splitting your terminal once to have two shells. The left one in `/workspaces/citus`, the second one changed to `/data`. The left terminal will be used to interact with the project, the right one with a testing cluster.
To get citus installed from source we run `make install -s` in the first terminal. Once installed you can start a Citus cluster in the second terminal via `citus_dev make citus`. The cluster will run in the background, and can be interacted with via `citus_dev`; see its help output for an overview of the available commands.
With the Citus cluster running you can connect to the coordinator in the first terminal via `psql -p9700`. Because the coordinator is the most common entrypoint, the `PGPORT` environment variable is set accordingly, so a simple `psql` will connect directly to the coordinator.
### Debugging in VS Code
1. Start Debugging: Press F5 in VS Code to start debugging. When prompted, you'll need to attach the debugger to the appropriate PostgreSQL process.
2. Identify the Process: If you're running a psql command, take note of the PID that appears in your psql prompt. For example:
```
[local] citus@citus:9700 (PID: 5436)=#
```
This PID (5436 in this case) indicates the process that you should attach the debugger to.
If you are uncertain about which process to attach, you can list all running PostgreSQL processes using the following command:
```
ps aux | grep postgres
```
Look for the process associated with the PID you noted. For example:
```
citus 5436 0.0 0.0 0 0 ? S 14:00 0:00 postgres: citus citus
```
3. Attach the Debugger: Once you've identified the correct PID, select that process when prompted in VS Code to attach the debugger. You should now be able to debug the PostgreSQL session tied to the psql command.
4. Set Breakpoints and Debug: With the debugger attached, you can set breakpoints within the code. This allows you to step through the code execution, inspect variables, and fully debug the PostgreSQL instance running in your container.
### Getting and building
[PostgreSQL documentation](https://www.postgresql.org/support/versioning/) has a
cd citus
./configure
# If you have already installed the project, you need to clean it first
make clean
make
make install
# Optionally, you might instead want to use `make install-all`
git clone https://github.com/citusdata/citus.git
cd citus
./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
If you have multiple PostgreSQL installations, point `PG_CONFIG` at the one you want to build against:
```bash
git clone https://github.com/citusdata/citus.git
cd citus
PG_CONFIG=/usr/pgsql-14/bin/pg_config ./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
```
### Following our coding conventions
Our coding conventions are documented in [STYLEGUIDE.md](STYLEGUIDE.md).
### Making SQL changes
### Running tests
See [`src/test/regress/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/README.md)
### Documentation
User-facing documentation is published on [docs.citusdata.com](https://docs.citusdata.com/). When adding a new feature, function, or setting, you can open a pull request or issue against the [Citus docs repo](https://github.com/citusdata/citus_docs/).
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md). It is currently a single file for ease of searching. Please update the documentation if you make any changes that affect the design or add major new features.
# Making a pull request ready for reviews
Asking for help and asking for reviews are two different things. When you're asking for help, you're asking for someone to help you with something that you're not expected to know.
But when you're asking for a review, you're asking for someone to review your work and provide feedback. So, when you're asking for a review, you're expected to make sure that:
* Your changes don't perform **unnecessary line additions / deletions / style changes on unrelated files / lines**.
* All CI jobs are **passing**, including **style checks** and **flaky test detection jobs**. Note that if you're an external contributor, you don't have to wait for CI jobs to run (and finish), because they don't get triggered automatically for external contributors.
* Your PR has the necessary **tests** and they're passing.
* You separated as much work as possible into **separate PRs**, e.g., a prerequisite bugfix or a refactoring.
* Your PR doesn't introduce a typo or something that you can easily fix yourself.
* After all CI jobs pass, the code-coverage measurement job (CodeCov as of today) kicks in. That's why it's important to make the **tests pass** first. At that point, you're expected to check the **CodeCov annotations**, visible in the **Files Changed** tab, and make sure it doesn't complain about any uncovered lines. For example, it's ok if CodeCov complains about an `ereport()` call that you added for an "unexpected-but-better-than-crashing" case, but it's not ok if it complains about an uncovered `if` branch that you added.
* And finally, perform a **self-review** to make sure that:
* Code and code comments reflect the idea **without requiring an extra explanation** via a chat message / email / PR comment.
This is important because we don't expect developers to reach out to the author or read the whole discussion in the PR to understand the idea behind a commit merged into the `main` branch.
* PR description is clear enough.
* If and only if you're **introducing a user-facing change / bugfix**, your PR has a line that starts with `DESCRIPTION: <Present simple tense word that starts with a capital letter, e.g., Adds support for / Fixes / Disallows>`.
* **Commit messages** are clear enough if the commits are doing logically different things.
### Coredumps
When Postgres/Citus crashes, there is the option to create a coredump, which is useful for debugging the issue. Coredumps are enabled in the devcontainer by default. However, not all environments are configured correctly out of the box. The most important configuration that is not standardized is the `core_pattern`. The configuration can be verified from the container; however, you cannot change this setting from inside the container, as the filesystem containing this setting is mounted read-only inside the container.
To verify whether corefiles are written, run the following command in a terminal. It shows the filename pattern with which the corefile will be written:
```bash
cat /proc/sys/kernel/core_pattern
```
This should be configured with a relative path or a simple filename, such as `core`. When your environment shows an absolute path, you will need to change this setting. How to change it depends highly on the underlying system, as the setting needs to be changed on the kernel of the host running the container.
You can put any pattern in `/proc/sys/kernel/core_pattern` as you see fit. For example, you can add the PID to the core pattern in one of two ways (see the sketch after this list):
- Include `%p` in the core_pattern; it gets substituted with the PID of the crashing process.
- Alternatively, set `/proc/sys/kernel/core_uses_pid` to `1` in the same way you set `core_pattern`. This will append the PID to the corefile if `%p` is not explicitly contained in the core_pattern.
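A minimal sketch of both options (run as root on the host kernel, not inside the container):
```bash
# option 1: put the PID in the pattern itself
echo 'core.%p' > /proc/sys/kernel/core_pattern

# option 2: keep a plain pattern and let the kernel append the PID
echo 'core' > /proc/sys/kernel/core_pattern
echo 1 > /proc/sys/kernel/core_uses_pid
```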
When a coredump is written, you can use the debug/launch configuration `Open core file`, which is preconfigured in the devcontainer. This opens a file prompt that lists all coredumps found in your workspace. If you want to debug coredumps from `citus_dev` clusters running in your `/data` directory, you can add that directory to your workspace: in the VS Code command palette, run `>Workspace: Add Folder to Workspace...` and select the `/data` directory. This will allow you to open the coredumps from the `/data` directory in the `Open core file` debug configuration.
### Windows (docker desktop)
When running in Docker Desktop on Windows, you will most likely need to change this setting. The Linux guest in WSL2 that runs your container is the `docker-desktop` environment. The easiest way to get onto the host, where you can change this setting, is to open a PowerShell window and verify that you have the docker-desktop environment listed:
```powershell
wsl --list
```
Among others, this should list both `docker-desktop` and `docker-desktop-data`. You can then open a shell in the `docker-desktop` environment:
```powershell
wsl -d docker-desktop
```
Inside this shell, you can verify that you are in the right environment by running:
```bash
cat /proc/sys/kernel/core_pattern
```
This should show the same configuration as the one you see inside the devcontainer. You can then change the setting by running the following command:
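```bash
# write a simple relative pattern, e.g. plain `core` files,
# matching the recommendation above
echo core > /proc/sys/kernel/core_pattern
```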
This changes the setting for the current session only. If you want to make the change permanent, you will need to add this to a startup script.
| Extension | Status | Comment |
| --- | --- | --- |
| age | Partially | Works fine side by side, but graph data cannot be distributed. |
| amcheck | Yes | |
| anon | Partially | Cannot anonymize distributed tables. It is possible to anonymize local tables. |
| auto_explain | No | [Issue #6448](https://github.com/citusdata/citus/issues/6448) |
| azure | Yes | |
| azure_ai | Yes | |
| azure_storage | Yes | |
| bloom | Yes | |
| btree_gin | Yes | |
| btree_gist | Yes | |
| citext | Yes | |
| citus_columnar | Yes | |
| cube | Yes | |
| dblink | Yes | |
| dict_int | Yes | |
| dict_xsyn | Yes | |
| earthdistance | Yes | |
| fuzzystrmatch | Yes | |
| hll | Yes | |
| hstore | Yes | |
| hypopg | Partially | Hypopg can work on local tables and individual shards, however, when we create a hypothetical index on a distributed table, citus does not propagate the index creation command to worker nodes, and thus, hypothetical index is not used in explain statements. |
| intagg | Yes | |
| intarray | Yes | |
| isn | Yes | |
| lo | Partially | Extension relies on triggers, but Citus does not support triggers over distributed tables |
| login_hook | Yes | |
| ltree | Yes | |
| oracle_fdw | Yes | |
| orafce | Yes | |
| pageinspect | Yes | |
| pg_buffercache | Yes | |
| pg_cron | Yes | |
| pg_diskann | Yes | |
| pg_failover_slots | To be tested | |
| pg_freespacemap | Partially | Users can set `citus.override_table_visibility = 'off'` to get an accurate calculation of the free space map. |
| pg_hint_plan | Partially | Works fine side by side, but hints are ignored for distributed queries |
| pg_partman | Yes | |
| pg_prewarm | Partially | In order to prewarm distributed tables, set `citus.override_table_visibility` to off and run prewarm for each shard. This needs to be done on each node. |
| pg_repack | Partially | Extension relies on triggers, but Citus does not support triggers over distributed tables. It works fine on local tables. |
| pg_squeeze | Partially | It can work on local tables, but it is not aware of distributed tables. Users can set `citus.override_table_visibility = 'off'` and then run pg_squeeze for each shard. This needs to be done on each node. |
| pg_stat_statements | Yes | |
| pg_trgm | Yes | |
| pg_visibility | Partially | In order to get visibility map of a distributed table, customers can run the functions for shard tables. |
| pgaadauth | Yes | |
| pgaudit | Yes | |
| pgcrypto | Yes | |
| pglogical | No | |
| pgrowlocks | Partially | It works only with individual shards, not with distributed table names. |
| pgstattuple | Yes | |
| plpgsql | Yes | |
| plv8 | Yes | |
| postgis | Yes | |
| postgis_raster | Yes | |
| postgis_sfcgal | Yes | |
| postgis_tiger_geocoder | No | |
| postgis_topology | No | |
| postgres_fdw | Yes | |
| postgres_protobuf | Yes | |
| semver | Yes | |
| session_variable | No | |
| sslinfo | Yes | |
| tablefunc | Yes | |
| tdigest | Yes | |
| tds_fdw | Yes | |
| timescaledb | No | [Known to be incompatible with Citus](https://www.citusdata.com/blog/2021/10/22/how-to-scale-postgres-for-time-series-data-with-citus/#:~:text=Postgres%E2%80%99%20built-in%20partitioning) |
| **<br/>The Citus database is 100% open source.<br/><img width=1000/><br/>Learn what's new in the [Citus 13.0 release blog](https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/) and the [Citus Updates page](https://www.citusdata.com/updates/).<br/><br/>**|
Our [SIGMOD '21](https://2021.sigmod.org/) paper [Citus: Distributed PostgreSQL for Data-Intensive Applications](https://doi.org/10.1145/3448016.3457551) gives a more detailed look into what Citus is, how it works, and why it works that way.
Since Citus is an extension to Postgres, you can use Citus with the latest Postgres versions. And Citus works seamlessly with the PostgreSQL tools and extensions you are already familiar with.
To add Citus to your local PostgreSQL database, add the following to `postgresql.conf`:
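```
shared_preload_libraries = 'citus'
```
(`citus` must be the first library listed in `shared_preload_libraries` if you load multiple libraries.)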
Data in distributed tables is stored in “shards”, which are actually just regular PostgreSQL tables on the worker nodes. When querying a distributed table on the coordinator node, Citus will send regular SQL queries to the worker nodes. That way, all the usual PostgreSQL optimizations and extensions can automatically be used with Citus.
When you send a query in which all (co-located) distributed tables have the same filter on the distribution column, Citus will automatically detect that and send the whole query to the worker node that stores the data. That way, arbitrarily complex queries are supported with minimal routing overhead, which is especially useful for scaling transactional workloads. If queries do not have a specific filter, each shard is queried in parallel, which is especially useful in analytical workloads. The Citus distributed executor is adaptive and is designed to handle both query types at the same time on the same system under high concurrency, which enables large-scale mixed workloads.
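For example, with a hypothetical `orders` table distributed by `customer_id` (names made up for illustration):
```sql
-- assume: SELECT create_distributed_table('orders', 'customer_id');

-- all rows for customer 42 live in one shard, so Citus routes the
-- whole query to the single worker node that stores that shard
SELECT count(*) FROM orders WHERE customer_id = 42;

-- no distribution-column filter: each shard is queried in parallel
SELECT count(*) FROM orders;
```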
The schema and metadata of distributed tables and reference tables are automatically synchronized to all the nodes in the cluster. That way, you can connect to any node to run distributed queries. Schema changes and cluster administration still need to go through the coordinator.
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md).
## When to use Citus
Citus is uniquely capable of scaling both analytical and transactional workloads with up to petabytes of data. Use cases in which Citus is commonly used:
The advanced parallel, distributed query engine in Citus combined with PostgreSQL features such as [array types](https://www.postgresql.org/docs/current/arrays.html), [JSONB](https://www.postgresql.org/docs/current/datatype-json.html), [lateral joins](https://heap.io/blog/engineering/postgresqls-powerful-new-join-type-lateral), and extensions like [HyperLogLog](https://github.com/citusdata/postgresql-hll) and [TopN](https://github.com/citusdata/postgresql-topn) allow you to build responsive analytics dashboards no matter how many customers or how much data you have.
Example real-time analytics users: [Algolia](https://www.citusdata.com/customers/algolia)
- **[Time series data](http://docs.citusdata.com/en/stable/use_cases/timeseries.html)**:
Citus enables you to process and analyze very large amounts of time series data. The biggest Citus clusters store well over a petabyte of time series data and ingest terabytes per day.
Citus integrates seamlessly with [Postgres table partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html) and has [built-in functions for partitioning by time](https://www.citusdata.com/blog/2021/10/22/how-to-scale-postgres-for-time-series-data-with-citus/), which can speed up queries and writes on time series tables. You can take advantage of Citus’s parallel, distributed query engine for fast analytical queries, and use the built-in *columnar storage* to compress old partitions.
Example users: [MixRank](https://www.citusdata.com/customers/mixrank)
- **Multi-tenant SaaS applications**: SaaS and other multi-tenant applications need to be able to scale their database as the number of tenants/customers grows. Citus enables you to transparently shard a complex data model by the tenant dimension, so your database can grow along with your business.
By distributing tables along a tenant ID column and co-locating data for the same tenant, Citus can horizontally scale complex (tenant-scoped) queries, transactions, and foreign key graphs. Reference tables and distributed DDL commands make database management a breeze compared to manual sharding. On top of that, you have a built-in distributed query engine for doing cross-tenant analytics inside the database.
Example multi-tenant SaaS users: [Salesloft](https://fivetran.com/case-studies/replicating-sharded-databases-a-case-study-of-salesloft-citus-data-and-fivetran), [ConvertFlow](https://www.citusdata.com/customers/convertflow)
- **[Microservices](https://docs.citusdata.com/en/stable/get_started/tutorial_microservices.html)**: Citus supports schema-based sharding, which allows distributing regular database schemas across many machines. This sharding methodology fits nicely with a typical microservices architecture, where storage is fully owned by the service and hence can't share the same schema definition with other tenants. Citus allows distributing horizontally scalable state across services, solving one of the [main problems](https://stackoverflow.blog/2020/11/23/the-macro-problem-with-microservices/) of microservices.
The existing code style in our code base is not super consistent, for multiple reasons. One big reason is that our code base is relatively old and our standards have changed over time. The second big reason is that our style guide differs from the Postgres style guide, and some code is copied from the Postgres source code and slightly modified. The rules below are for new code. If you're changing existing code that uses a different style, use your best judgement to decide whether to apply the rules here or to match the existing style.
## Using citus_indent
The CI pipeline will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.68.1:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
cd uncrustify-uncrustify-0.68.1/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
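One way to do this is a git pre-commit hook; the following is a minimal sketch (it assumes `citus_indent` is installed and on your `PATH` as set up above):
```bash
# install a pre-commit hook that reindents the tree before each commit
cat > .git/hooks/pre-commit << 'EOF'
#!/bin/bash
citus_indent
EOF
chmod +x .git/hooks/pre-commit
```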
## Other rules we follow that citus_indent does not enforce
* We almost always use **CamelCase** when naming functions, variables, etc., **not snake_case**.
* We also have the habit of using **lowerCamelCase** for some variables named after their type or their function name, as shown in the examples: