mirror of https://github.com/citusdata/citus.git
3144 Commits (c34b6a56013362dd501c713df048649733c97db0)

c34b6a5601 | add tests

aa0ac0af60 | Citus upgrade tests (#8237)

Expand the citus upgrade tests matrix:
- PG15: v11.1.0, v11.3.0, v12.1.10
- PG16: v12.1.10

See https://github.com/citusdata/the-process/pull/174

432b69eb9d | PG18 - fix naming diffs of child FK constraints (#8247)

PG18 changed the names generated for child foreign key constraints: https://github.com/postgres/postgres/commit/3db61db48

The test failures in the Citus regression suite all change the name of a constraint from `'sensors%'` to `'%to_parent%_1'`. The naming is convenient here because `to_parent` means we have a foreign key to a parent table. To fix the diff, we exclude those constraints from the output. To verify correctness, we still count the problematic constraints to make sure they are there; we simply remove them from the first output (the count query is added right after the previous one).

Fixes #8126

Co-authored-by: Mehmet YILMAZ <mehmety87@gmail.com>
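
A minimal sketch of the counting idea (the constraint-name pattern comes from the message above; the actual test query may differ): count the matching rows instead of printing PG-version-dependent constraint names, so the expected output stays stable across versions.

```sql
-- sketch: count the child FK constraints whose PG18-generated names differ,
-- rather than printing their version-dependent names
SELECT count(*)
FROM pg_constraint
WHERE contype = 'f'            -- foreign keys only
  AND conname LIKE '%to_parent%';
```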

287abea661 | PG18 compatibility - varreturningtype additions (#8231)

This PR solves the following diffs, originating from the addition of the `varreturningtype` field to the `Var` struct in PG18: https://github.com/postgres/postgres/commit/80feb727c

Previously we didn't account for this new field, so the parser could not correctly reconstruct the `Var` node structure and instead errored out with `did not find '}' at end of input node`:

```diff
 SELECT column_to_column_name(logicalrelid, partkey)
 FROM pg_dist_partition WHERE partkey IS NOT NULL ORDER BY 1 LIMIT 1;
- column_to_column_name
----------------------------------------------------------------------
- a
-(1 row)
-
+ERROR: did not find '}' at end of input node
```

The solution follows the precedent of https://github.com/citusdata/citus/pull/7107, when the varnullingrels field was added to the `Var` struct in PG16. The solution includes:
- Taking care of the `partkey` in the `pg_dist_partition` table because it comes from the `Var` struct. This mainly means fixing the upgrade script to PG18 by saving all the `partkey` info before upgrading to PG18 (in `citus_prepare_pg_upgrade`), and then re-generating the `partkey` columns in `pg_dist_partition` (using `UPDATE`) after upgrading to PG18 (in `citus_finish_pg_upgrade`).
- Adding a normalize rule to fix output differences among PG versions. Note that we need two normalize lines: one for PG15, since it doesn't have `varnullingrels`, and one for PG16/PG17.
- A small trick in `metadata_sync_helpers` to use different text when generating the `partkey`, based on the PG version.

Fixes #8189
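
A minimal sketch of the save/restore idea (the staging-table name is made up, and `column_name_to_column` is assumed to be available as the inverse of `column_to_column_name`; the actual upgrade scripts may differ):

```sql
-- before the PG upgrade (citus_prepare_pg_upgrade): store partkey as a
-- plain column name, which is stable across PG versions
CREATE TABLE public.pg_dist_partition_backup AS
SELECT logicalrelid,
       column_to_column_name(logicalrelid, partkey) AS partkey_col
FROM pg_dist_partition;

-- after the PG upgrade (citus_finish_pg_upgrade): regenerate the serialized
-- Var text using the new server's node format
UPDATE pg_dist_partition p
SET partkey = column_name_to_column(b.logicalrelid, b.partkey_col)
FROM public.pg_dist_partition_backup b
WHERE p.logicalrelid = b.logicalrelid;
```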

f0014cf0df | PG18 compatibility: misc output diffs pt2 (#8234)

Three minor changes to reduce some noise from the regression diffs.

1 - Reduce verbosity when ALTER EXTENSION fails. PG18 improved the reporting of errors in extension script files (relevant PG commit: https://github.com/postgres/postgres/commit/774171c4f). PG18 emits more context, so we reduce verbosity:

```
ALTER EXTENSION citus UPDATE TO '11.0-1';
ERROR: cstore_fdw tables are deprecated as of Citus 11.0
HINT: Install Citus 10.2 and convert your cstore_fdw tables to the columnar access method before upgrading further
CONTEXT: PL/pgSQL function inline_code_block line 4 at RAISE
+SQL statement "DO LANGUAGE plpgsql
+$$
+BEGIN
+  IF EXISTS (SELECT 1 FROM pg_dist_shard where shardstorage = 'c') THEN
+    RAISE EXCEPTION 'cstore_fdw tables are deprecated as of Citus 11.0'
+    USING HINT = 'Install Citus 10.2 and convert your cstore_fdw tables to the columnar access method before upgrading further';
+  END IF;
+END;
+$$"
+extension script file "citus--10.2-5--11.0-1.sql", near line 532
```

2 - Fix the backend type order in tests for PG18. PG18 added another backend type, which changed the order in this test; we add a separate IF condition for PG18 (relevant PG commit: https://github.com/postgres/postgres/commit/18d67a8d7d).

3 - Ignore "DEBUG: find_in_path" lines in output (relevant PG commit: https://github.com/postgres/postgres/commit/4f7f7b0375). The new GUC extension_control_path specifies a path to look for extension control files.

d9652bf5f9 | PG18 compatibility: misc output diffs (#8233)

Six minor changes to reduce some noise from the regression diffs.

1 - Add ORDER BY to fix the subquery_in_where diff.

2 - Disable buffers in explain analyze calls. Leftover work from https://github.com/citusdata/citus/commit/f1f0b09f7

3 - Reduce verbosity to avoid diffs between PG versions (relevant PG commit: https://github.com/postgres/postgres/commit/0dca5d68d7). The diff was:

```
CALL test_procedure_commit(2,5);
ERROR: COMMIT is not allowed in an SQL function
-CONTEXT: SQL function "test_procedure_commit" during startup
+CONTEXT: SQL function "test_procedure_commit" statement 2
```

4 - Rename array_sort to array_sort_citus, since PG18 added array_sort (relevant PG commit: https://github.com/postgres/postgres/commit/6c12ae09f5a). This is the diff we were seeing in multi_array_agg, because the PG18 test was using PG18's array_sort function instead:

```
-- Check that we return NULL in case there are no input rows to array_agg()
SELECT array_sort(array_agg(l_orderkey)) FROM lineitem WHERE l_orderkey < 0;
 array_sort
------------
- {}
+
 (1 row)
```

5 - Exclude not-null constraints from output to avoid diffs. PG18 added pg_constraint rows for not-null constraints (relevant PG commit: https://github.com/postgres/postgres/commit/14e87ffa5c). Remove them with the condition contype <> 'n'.

6 - Reduce verbosity to avoid the md5 password deprecation warning in PG18. PG18 deprecated MD5 passwords (relevant PG commit: https://github.com/postgres/postgres/commit/db6a4a985).

Fixes #8154
Fixes #8157

2a6414c727 | PG18: use data-checksums by default in upgrades (#8230)

Checksums are now on by default in PG18: 04bec894a

Upgrading to PG18 fails with the following error: `old cluster does not use data checksums but the new one does`

To overcome this error, we add the --data-checksums option so that clusters on PG versions below 18 also use data checksums.

Fixes #8229

c5dde4b115 | Fix crash on create statistics with non-RangeVar type pt2 (#8227)

Fixes #8225. Very similar to #8213. Also, the error message changed between pg18rc1 and pg18.0.

d4dfdd765b | PG18 - Normalize \d+ output in PG18 by filtering “Not-null constraints” blocks (#8183)

DESCRIPTION: Normalize \d+ output in PG18 by filtering “Not-null constraints” blocks

fixes #8095

**PR Description**

Postgres 18 started representing column `NOT NULL` as named constraints in `pg_constraint`, and `psql \d+` now prints them under a `Not-null constraints:` section. This caused extra diffs in our regression tests.

cec1848b13 | PG18: adapt multi_sql_function expected output to SQL-function plan cache (#8184)

bb840e58a7 | Fix crash on create statistics with non-RangeVar type (#8213)

This crash has been there for a while but wasn't exercised before PG18, which added this test: `CREATE STATISTICS tst ON a FROM (VALUES (x)) AS foo;`. It tries to create statistics on a derived-on-the-fly table, which is not allowed. However, Citus assumed we always have a valid table when intercepting the CREATE STATISTICS command to check for Citus tables. Added a check to return early if needed.

pg18 commit: https://github.com/postgres/postgres/commit/3eea4dc2c

Fixes #8212

5eb1d93be1 | Properly detect no-op shard-key updates via UPDATE / MERGE (#8214)

DESCRIPTION: Fixes a bug that allowed UPDATE / MERGE queries that may change the distribution column value.

Fixes: #8087.

Probably as of #769, we were not properly checking whether UPDATE may change the distribution column. In #769, we had these checks:

```c
if (targetEntry->resno != column->varattno)
{
    /* target entry of the form SET some_other_col = <x> */
    isColumnValueChanged = false;
}
else if (IsA(setExpr, Var))
{
    Var *newValue = (Var *) setExpr;
    if (newValue->varattno == column->varattno)
    {
        /* target entry of the form SET col = table.col */
        isColumnValueChanged = false;
    }
}
```

However, what we check in the "if" and in the "else if" are not so different, in the sense that both attempt to verify whether the SET expr of the target entry points to the attno of the given column. So, in #5220, we even removed the first check because it was redundant; also see this PR comment from #5220: https://github.com/citusdata/citus/pull/5220#discussion_r699230597. In #769, we probably actually wanted to first check whether the SET expr of the target entry and the given variable both point to the same range var entry, but this wasn't what the "if" was checking, so it was removed.

As a result, in the cases mentioned in the linked issue, we were incorrectly concluding that the SET expr of the target entry won't change the given column just because it points to the same attno as the given variable, regardless of which range var entries the column and the SET expr point to. Then we also started using the same function to check for such cases for the update action of MERGE, so we had the same bug there as well.

So with this PR, we properly check for such cases by comparing varno as well in TargetEntryChangesValue(). However, then some of the existing tests started failing where the SET expr doesn't directly assign the column to itself but the "where" clause could actually imply that the distribution column won't change. Even before, we were not attempting to verify whether the "where" clause quals could imply a no-op assignment for the SET expr in such cases, but that was not a problem. This is because, for most cases, we were always qualifying such SET expressions as a no-op update as long as the SET expr's attno is the same as the given column's. For this reason, to prevent regressions, this PR also adds some extra logic to understand whether the "where" clause quals could imply that the SET expr for the distribution key is a no-op.

Ideally, we should instead use the "relation restriction equivalence" mechanism to understand whether the "where" clause implies a no-op update. This is because, for instance, right now we're not able to deduce that the update is a no-op when the "where" clause transitively implies a no-op update, as in the case where we're setting "column a" to "column c" and the where clause looks like: "column a = column b AND column b = column c". If this means a regression for some users, we can consider doing it that way. Until then, as a workaround, we can suggest adding additional quals to the "where" clause that would directly imply equivalence.

Also, after fixing TargetEntryChangesValue(), we started successfully deducing that the update action is a no-op for such MERGE queries:

```sql
MERGE INTO dist_1 USING dist_1 src ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.b;
```

However, we then started seeing the below error for the above query even though the update is now qualified as a no-op update:

```
ERROR: Unexpected column index of the source list
```

This was because of #8180, and #8201 fixed that.

In summary, with this PR we disallow such queries:

```sql
-- attno for dist_1.a, dist_1.b: 1, 2
-- attno for dist_different_order_1.a, dist_different_order_1.b: 2, 1
UPDATE dist_1 SET a = dist_different_order_1.b
FROM dist_different_order_1
WHERE dist_1.a = dist_different_order_1.a;

-- attno for dist_1.a, dist_1.b: 1, 2
-- but ON (..) doesn't imply a no-op update for the SET expr
MERGE INTO dist_1 USING dist_1 src ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.a;
```

.. and allow such queries:

```sql
MERGE INTO dist_1 USING dist_1 src ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.b;
```

83b25e1fb1 | Fix unexpected column index error for repartitioned merge (#8201)

DESCRIPTION: Fixes a bug that causes an unexpected error when executing a repartitioned merge.

Fixes #8180.

This was happening because of a bug in SourceResultPartitionColumnIndex(). To fix it, this PR avoids using DistributionColumnIndex() in SourceResultPartitionColumnIndex() and instead invents FindTargetListEntryWithVarExprAttno(), which finds the index of the target entry in the source query's target list that can be used to repartition the source for a repartitioned merge. In short, to find the source target entry that references the Var used in the ON (..) clause and that references the source rte, we should check the varattno of the underlying expr, which presumably is always a Var for a repartitioned merge, as we always wrap the source rte with a subquery where all target entries point to the columns of the original source relation.

Using DistributionColumnIndex() prior to 13.0 wasn't causing such an issue because back then the varattno of the underlying expr of the source target entries was almost (*1) always equal to the resno of the target entry, as we were including all target entries of the source relation. However, starting with #7659, which was merged to main before 13.0, we started using CreateFilteredTargetListForRelation() instead of CreateAllTargetListForRelation() to compute the target entry list for the source rte, to fix another bug. We cannot revert to using CreateAllTargetListForRelation() because that would re-introduce the bug it fixed, so we instead had to find a way to properly deal with the "filtered target list"s, as in this commit.

Plus (*1), even before #7659, we would probably still fail when the source relation has dropped attributes or such, because that would also cause such a mismatch between the varattno of the underlying expr of the target entry and its resno.

10d62d50ea | Stabilize table_checks across PG15–PG18: switch to pg_constraint, remove dupes, exclude NOT NULL (#8140)

DESCRIPTION: Stabilize table_checks across PG15–PG18: switch to pg_constraint, remove dupes, exclude NOT NULL

fixes #8138
fixes #8131

**Problem**
```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_create_table_constraints.out /__w/citus/citus/src/test/regress/results/multi_create_table_constraints.out
--- /__w/citus/citus/src/test/regress/expected/multi_create_table_constraints.out.modified 2025-08-18 12:26:51.991598284 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_create_table_constraints.out.modified 2025-08-18 12:26:52.004598519 +0000
@@ -403,22 +403,30 @@
relid = 'check_example_partition_col_key_365068'::regclass;
Column | Type | Definition
---------------+---------+---------------
partition_col | integer | partition_col
(1 row)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.check_example_365068'::regclass;
Constraint | Definition
-------------------------------------+-----------------------------------
check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
+ check_example_other_col_check | CHECK other_col >= 100
check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
-(2 rows)
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+(10 rows)
```
On PostgreSQL 18, `NOT NULL` is represented as a cataloged constraint
and surfaces through `information_schema.check_constraints`.
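
A sketch of the stabilized lookup (the relation name is taken from the diff above; the actual view definition in the repo may differ): read CHECK constraints directly from pg_constraint, skip PG18's cataloged not-null rows, and de-duplicate the inherited copies.

```sql
-- sketch: pg_constraint-based replacement for the information_schema lookup
SELECT DISTINCT conname AS "Constraint",
       pg_get_constraintdef(oid) AS "Definition"
FROM pg_constraint
WHERE conrelid = 'public.check_example_365068'::regclass
  AND contype <> 'n'   -- exclude PG18's cataloged NOT NULL constraints
ORDER BY 1;
```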

b4cb1a94e9 | Bump citus and citus_columnar to 14.0devel (#8170)

becc02b398 | Cleanup from dropping pg14 in merge isolation tests (#8204)

These alternative test outputs are redundant since we have dropped PG14 support on main.

b58af1c8d5 | PG18: stabilize constraint-name tests by filtering pg_constraint on contype (#8185)

4012e5938a | PG18 - normalize PG18 “RESTRICT” FK error wording to legacy form (#8188)

fixes #8186

0fd95d71e4 | Order same frequency common values, and add test (#8167)

Added a test similar to what @colm-mchugh tested in the original PR: https://github.com/citusdata/citus/pull/8026#discussion_r2279021218

d5f0ec5cd1 | Fix invalid input syntax for type bigint (#8166)

Fixes #8164

544b6c4716 | Add GUC for queries with outer joins and pseudoconstant quals (#8163)

Users can turn on this GUC at their own risk.

2e1de77744 | Also use pid in valgrind logfile name (#8150)

Also use the pid in the valgrind logfile name to avoid overwriting the valgrind logs due to memory errors that can happen in different processes concurrently. From https://valgrind.org/docs/manual/manual-core.html:

```
--log-file=<filename>

Specifies that Valgrind should send all of its messages to the specified
file. If the file name is empty, it causes an abort. There are three special
format specifiers that can be used in the file name.

%p is replaced with the current process ID. This is very useful for programs
that invoke multiple processes. WARNING: If you use --trace-children=yes and
your program invokes multiple processes OR your program forks without calling
exec afterwards, and you don't use this specifier (or the %q specifier below),
the Valgrind output from all those processes will go into one file, possibly
jumbled up, and possibly incomplete.
```

With this change, we'll start having lots of valgrind output files generated under "src/test/regress" with the same prefix, citus_valgrind_test_log.txt, by default during valgrind tests, so it'll look a bit ugly; but one can use `cat src/test/regress/citus_valgrind_test_log.txt.[0-9]*` or such to combine them into a single valgrind log file later.

bb6eeb17cc | Fix bug in redundant WHERE clause detection. (#8162)

Need to also check the Postgres plan's rangetables for relations used in Initplans.

DESCRIPTION: Fix a bug in redundant WHERE clause detection; we need to additionally check the Postgres plan's range tables for the presence of citus tables, to account for relations that are referenced from scalar subqueries.

There is a fundamental flaw in

62e5fcfe09 | Enhance clone node replication status messages (#8152)

- Downgrade replication lag reporting from NOTICE to DEBUG to reduce noise and improve regression test stability.
- Add hints to certain replication status messages for better clarity.
- Update expected output files accordingly.

aaa31376e0 | Make columnar_chunk_filtering pass consecutive runs (#8147)

The test was not cleaning up after itself and therefore failed on consecutive runs. Test locally with: `make check-columnar-minimal EXTRA_TESTS='columnar_chunk_filtering columnar_chunk_filtering'`

86b5bc6a20 | Normalize Actual Rows output in regression tests for PG18 compatibility (#8141)

DESCRIPTION: Normalize Actual Rows output in regression tests for PG18 compatibility

PostgreSQL 18 changed `EXPLAIN ANALYZE` to always print fractional row counts (e.g. `1.00` instead of `1`).

f1f0b09f73 | PG18 - Add BUFFERS OFF to EXPLAIN ANALYZE calls (#8101)

Relevant PG18 commit:
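
For context, a minimal sketch of the shape of the change (the specific statements in the test suite differ): PG18 turns buffer reporting on by default for `EXPLAIN ANALYZE`, so pinning it off keeps output comparable across versions.

```sql
-- sketch: explicit BUFFERS OFF yields the same EXPLAIN ANALYZE output shape
-- on PG18 (buffers on by default) and on earlier versions
EXPLAIN (ANALYZE, BUFFERS OFF) SELECT count(*) FROM pg_class;
```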

eaa609f510 | Add citus_stats UDF (#8026)

DESCRIPTION: Add `citus_stats` UDF

This UDF acts on a Citus table and provides `null_frac`, `most_common_vals` and `most_common_freqs` for each column in the table, based on the definitions of these columns in the Postgres view `pg_stats`.

**Aggregated Views: pg_stats > citus_stats**

citus_stats is a **view** intended for use in **Citus**, a distributed extension of PostgreSQL. It collects and returns **column-level statistics** for a distributed table, specifically the **most common values**, their **frequencies**, and the **fraction of null values**, like the pg_stats view does for regular Postgres tables.

**Use Case**

This view is useful when:
- You need **column-level insights** on a distributed table.
- You're performing **query optimization**, **cardinality estimation**, or **data profiling** across shards.

**What It Returns**

A **table** with:

| Column Name | Data Type | Description |
|---|---|---|
| schemaname | text | Name of the schema containing the distributed table |
| tablename | text | Name of the distributed table |
| attname | text | Name of the column (attribute) |
| null_frac | float4 | Estimated fraction of NULLs in the column across all shards |
| most_common_vals | text[] | Array of most common values for the column |
| most_common_freqs | float4[] | Array of corresponding frequencies (as fractions) of the most common values |

**Caveats**
- The function assumes that the array of the most common values among different shards will be the same, therefore it just adds everything up.
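
A hypothetical usage sketch (the table name is illustrative):

```sql
-- sketch: per-column statistics for one distributed table
SELECT attname, null_frac, most_common_vals, most_common_freqs
FROM citus_stats
WHERE schemaname = 'public' AND tablename = 'my_dist_table';
```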

be6668e440 | Snapshot-Based Node Split – Foundation and Core Implementation (#8122)

**DESCRIPTION:**
This pull request introduces the foundation and core logic for the
snapshot-based node split feature in Citus. This feature enables
promoting a streaming replica (referred to as a clone in this feature
and UI) to a primary node and rebalancing shards between the original
and the newly promoted node without requiring a full data copy.
This significantly reduces rebalance times for scale-out operations
where the new node already contains a full copy of the data via
streaming replication.
Key Highlights:
**1. Replica (Clone) Registration & Management Infrastructure**
Introduces a new set of UDFs to register and manage clone nodes:
- citus_add_clone_node()
- citus_add_clone_node_with_nodeid()
- citus_remove_clone_node()
- citus_remove_clone_node_with_nodeid()
These functions allow administrators to register a streaming replica of
an existing worker node as a clone, making it eligible for later
promotion via snapshot-based split.
**2. Snapshot-Based Node Split (Core Implementation)**
New core UDF:
- citus_promote_clone_and_rebalance()
This function implements the full workflow to promote a clone and
rebalance shards between the old and new primaries. Steps include:
1. Ensuring Exclusivity – Blocks any concurrent placement-changing
operations.
2. Blocking Writes – Temporarily blocks writes on the primary to ensure
consistency.
3. Replica Catch-up – Waits for the replica to be fully in sync.
4. Promotion – Promotes the replica to a primary using pg_promote.
5. Metadata Update – Updates metadata to reflect the newly promoted
primary node.
6. Shard Rebalancing – Redistributes shards between the old and new
primary nodes.
**3. Split Plan Preview**
A new helper UDF get_snapshot_based_node_split_plan() provides a preview
of the shard distribution post-split, without executing the promotion.
**Example:**
```
reb 63796> select * from pg_catalog.get_snapshot_based_node_split_plan('127.0.0.1',5433,'127.0.0.1',5453);
table_name | shardid | shard_size | placement_node
--------------+---------+------------+----------------
companies | 102008 | 0 | Primary Node
campaigns | 102010 | 0 | Primary Node
ads | 102012 | 0 | Primary Node
mscompanies | 102014 | 0 | Primary Node
mscampaigns | 102016 | 0 | Primary Node
msads | 102018 | 0 | Primary Node
mscompanies2 | 102020 | 0 | Primary Node
mscampaigns2 | 102022 | 0 | Primary Node
msads2 | 102024 | 0 | Primary Node
companies | 102009 | 0 | Clone Node
campaigns | 102011 | 0 | Clone Node
ads | 102013 | 0 | Clone Node
mscompanies | 102015 | 0 | Clone Node
mscampaigns | 102017 | 0 | Clone Node
msads | 102019 | 0 | Clone Node
mscompanies2 | 102021 | 0 | Clone Node
mscampaigns2 | 102023 | 0 | Clone Node
msads2 | 102025 | 0 | Clone Node
(18 rows)
```
**4. Test Infrastructure Enhancements**
- Added a new test case scheduler for snapshot-based split scenarios.
- Enhanced pg_regress_multi.pl to support creating node backups with
slightly modified options to simulate real-world backup-based clone
creation.
**5. Usage Guide**
The snapshot-based node split can be performed using the following
workflow:
**- Take a Backup of the Worker Node**
Run pg_basebackup (or an equivalent tool) against the existing worker
node to create a physical backup.
`pg_basebackup -h <primary_worker_host> -p <port> -D /path/to/replica/data --write-recovery-conf`
**- Start the Replica Node**
Start PostgreSQL on the replica using the backup data directory,
ensuring it is configured as a streaming replica of the original worker
node.
**- Register the Backup Node as a Clone**
Mark the registered replica as a clone of its original worker node:
`SELECT * FROM citus_add_clone_node('<clone_host>', <clone_port>, '<primary_host>', <primary_port>);`
**- Promote and Rebalance the Clone**
Promote the clone to a primary and rebalance shards between it and the
original worker:
`SELECT * FROM citus_promote_clone_and_rebalance('clone_node_id');`
**- Drop Any Replication Slots from the Original Worker**
After promotion, clean up any unused replication slots from the original
worker:
`SELECT pg_drop_replication_slot('<slot_name>');`

f743b35fc2 | Parallelize Shard Rebalancing & Unlock Concurrent Logical Shard Moves (#7983)

DESCRIPTION: Parallelizes shard rebalancing and removes the bottlenecks that previously blocked concurrent logical-replication moves. These improvements reduce rebalance windows, particularly for clusters with large reference tables, and enable multiple shard transfers to run in parallel.
Motivation:
Citus’ shard rebalancer has some key performance bottlenecks:
**Sequential Movement of Reference Tables:**
Reference tables are often assumed to be small, but in real-world
deployments, they can grow significantly large. Previously, reference
table shards were transferred as a single unit, making the process
monolithic and time-consuming.
**No Parallelism Within a Colocation Group:**
Although Citus distributes data using colocated shards, shard
movements within the same colocation group were serialized. In
environments with hundreds of distributed tables colocated
together, this serialization significantly slowed down rebalance
operations.
**Excessive Locking:**
Rebalancer used restrictive locks and redundant logical replication
guards, further limiting concurrency.
The goal of this commit is to eliminate these inefficiencies and enable
maximum parallelism during rebalance, without compromising correctness
or compatibility.
Feature Summary:
**1. Parallel Reference Table Rebalancing**
Each reference-table shard is now copied in its own background task.
Foreign key and other constraints are deferred until all shards are
copied.
For single shard movement without considering colocation a new
internal-only UDF '`citus_internal_copy_single_shard_placement`' is
introduced to allow single-shard copy/move operations.
Since this function is internal, we do not allow users to call it
directly.
**Temporary Hack to Set Background Task Context:** Background tasks
cannot currently set custom GUCs like application_name before executing
internal-only functions, so a 'citus_rebalancer ...' statement is added
as a prefix to the task command. This is a temporary hack to label
internal tasks until proper GUC injection support is added to the
background task executor.
**2. Changes in Locking Strategy**
- Drop the leftover replication lock that previously serialized shard
moves performed via logical replication. This lock was only needed when
we used to drop and recreate the subscriptions/publications before each
move. Since Citus now removes those objects later as part of the “unused
distributed objects” cleanup, shard moves via logical replication can
safely run in parallel without additional locking.
- Introduced a per-shard advisory lock to prevent concurrent operations
on the same shard while allowing maximum parallelism elsewhere.
- Change the lock mode in AcquirePlacementColocationLock from
ExclusiveLock to RowExclusiveLock to allow concurrent updates within the
same colocation group, while still preventing concurrent DDL operations.
**3. citus_rebalance_start() enhancements**
The citus_rebalance_start() function now accepts two new optional
parameters:
```
- parallel_transfer_colocated_shards BOOLEAN DEFAULT false,
- parallel_transfer_reference_tables BOOLEAN DEFAULT false
```
This ensures backward compatibility by preserving the existing behavior
and avoiding any disruption to user expectations; when both are set
to true, the rebalancer operates with full parallelism.
**Previous Rebalancer Behavior:**
`SELECT citus_rebalance_start(shard_transfer_mode := 'force_logical');`
This would:
- Start a single background task for replicating all reference tables.
- Then move all shards serially, one at a time.
```
Task 1: replicate_reference_tables()
↓
Task 2: move_shard_1()
↓
Task 3: move_shard_2()
↓
Task 4: move_shard_3()
```
Slow and sequential. Reference table copy is a bottleneck. Colocated
shards must wait for each other.
**New Parallel Rebalancer:**
```
SELECT citus_rebalance_start(
shard_transfer_mode := 'force_logical',
parallel_transfer_colocated_shards := true,
parallel_transfer_reference_tables := true
);
```
This would:
- Schedule independent background tasks for each reference-table shard.
- Move colocated shards in parallel, while still maintaining dependency
order.
- Defer constraint application until all reference shards are in place.
```
Task 1: copy_ref_shard_1()
Task 2: copy_ref_shard_2()
Task 3: copy_ref_shard_3()
→ Task 4: apply_constraints()
↓
Task 5: copy_shard_1()
Task 6: copy_shard_2()
Task 7: copy_shard_3()
↓
Task 8-10: move_shard_1..3()
```
Each operation is scheduled independently and can run as soon as
dependencies are satisfied.

8d929d3bf8 | Push down recurring outer joins when possible (#7973)

DESCRIPTION: Adds support for pushing down LEFT/RIGHT outer joins having a reference table on the outer side and a distributed table on the inner side (e.g., <reference table> LEFT JOIN <distributed table>)

Partially addresses #6546

1) `<outer:reference>` LEFT JOIN `<inner:distributed>`
2) `<inner:distributed>` RIGHT JOIN `<outer:reference>`

Previously, for outer joins of types (1) and (2), the distributed side was computed recursively. This was necessary because, when the inner side of a recurring outer join is a distributed table, it is not possible to directly distribute the join; the preserved (outer and recurring) side may generate rows with join keys that hash to different shards. To implement distributed planning while maintaining consistency with global execution semantics, this PR restricts the outer side only to those partition key values that route to the selected shard during distributed shard query computation.

This method is employed when the following criteria are met (recursive planning is applied otherwise):
- The join type is (1) or (2) (lateral joins are not supported).
- The outer side is a reference table.
- The outer join qualifications include an equality condition between the partition column of a distributed table and the recurring table.
- The join is not part of a chained join.
- The “enable_recurring_outer_join_pushdown” GUC is enabled (default is on).

A sketch of a pushable query shape follows at the end of this entry.

---------

Co-authored-by: ebruaydingol <ebruaydingol@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
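
A hypothetical query shape under the criteria above (table and column names are illustrative):

```sql
-- sketch: reference table on the outer side, distributed table on the inner
-- side, with an equality qual on the distributed table's partition column
SELECT r.id, d.value
FROM ref_table r
LEFT JOIN dist_table d ON r.id = d.dist_key;  -- dist_key: partition column
```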

87a1b631e8 | Not automatically create citus_columnar when creating citus extension (#8081)

DESCRIPTION: Not automatically create citus_columnar when there are no relations using it.

Previously, we were always creating citus_columnar when creating citus with version >= 11.1, and we were doing it as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from citus, but not actually drop them from the database.
  * "Old columnar" is the one that we had before Citus 11.1 as part of citus, i.e., before splitting the access method and its catalog into citus_columnar.
* Create citus_columnar and attach the SQL objects left over from old columnar to it, so that we can continue supporting the columnar tables the user had before Citus 11.1 with citus_columnar.

The first part is unchanged; however, now we don't create citus_columnar automatically anymore if the user didn't have any relations using columnar. For this reason, as of Citus 13.2, when these SQL objects are not owned by an extension and there are no relations using the columnar access method, we drop these SQL objects when updating Citus to 13.2. The net effect is still the same as if we automatically created citus_columnar and the user dropped citus_columnar later, so we should not have any issues with dropping them. (**Update:** It seems we've made some assumptions in citus, e.g., citus_finish_pg_upgrade() still assumes columnar metadata exists and tries to apply some fixes for it, so this PR fixes them as well. See the last section of this PR description.)

Also, ideally I was hoping to just remove some lines of code from extension.c, where we decide on automatically creating citus_columnar when creating citus; however, this didn't happen to be the case for two reasons:
* We still need to automatically create it for the servers using the columnar access method.
* We need to clean up the leftover SQL objects from old columnar when the above is not the case; otherwise we would have leftover SQL objects from old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we should clean them up and let the user decide whether they want to create citus_columnar when they really need it later.

---

Also made several changes in the test suite because, similarly, we don't always want to have citus_columnar created in citus tests anymore:
* Now, columnar-specific test targets, which cover **41** test sql files, always install columnar by default, by using "--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus-specific test targets because by default we don't want to have citus_columnar created during citus tests.
* Excluding citus_columnar-specific tests, we have **601** sql files as citus tests, and in **27** of them we manually create citus_columnar at the very beginning of the test because these tests test some functionality of citus together with columnar tables.

Also, the before and after schedules for PG upgrade tests are now duplicated so we have two versions of each: one with columnar tests and one without. To choose between them, check-pg-upgrade now supports a "test-with-columnar" option, which can be set to "true" or anything else to logically indicate "false". In CI, we run the check-pg-upgrade test target with both options. The purpose is to ensure we can test PG upgrades where citus_columnar is not created in the cluster before the upgrade as well.

Finally, added more tests to multi_extension.sql to test Citus upgrade scenarios with / without columnar tables / the citus_columnar extension.

---

Also, it seems citus_finish_pg_upgrade was assuming that citus_columnar is always created, but actually we should never have made such an assumption. To fix that, moved columnar-specific post-PG-upgrade work from citus to a new columnar UDF, columnar_finish_pg_upgrade. But to avoid breaking existing customer / managed service scripts, we continue to automatically perform post-PG-upgrade work for columnar within citus_finish_pg_upgrade, but only if the columnar access method exists this time.
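
A minimal sketch of the opt-in flow as of 13.2 (the table definition is illustrative):

```sql
-- sketch: creating citus no longer pulls in citus_columnar automatically
CREATE EXTENSION citus;
-- opt in explicitly when the columnar access method is needed
CREATE EXTENSION citus_columnar;
CREATE TABLE events (id bigint, payload text) USING columnar;
```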

41883cea38 | PG18 - unify psql headings to ‘List of relations’ (#8119)

fixes #8110

This patch updates the `normalize.sed` script used in pg18 psql regression tests:
- Replaces the headings “List of tables”, “List of indexes”, and “List of sequences” with a single, uniform heading: “List of relations”.

bfc6d1f440 | PG18 - Adjust EXPLAIN's output for disabled nodes (#8108)

fixes #8097

a6161f5a21 | Fix CTE traversal for outer Vars in FindReferencedTableColumn (remove assert; correct parentQueryList handling) (#8106)

fixes #8105

This change lets `FindReferencedTableColumn()` correctly resolve columns through a CTE even when the expression comes from an outer query level (`varlevelsup > 0`, `skipOuterVars = false`). Before, we hit an `Assert(skipOuterVars)` in this path.

**Problem**
* Hitting a CTE after walking outer Vars triggered `Assert(skipOuterVars)`.
* Cause: we modified `parentQueryList` in place and didn’t rebuild the correct parent chain before recursing into the CTE, so the path was considered unsafe.

**Fix**
* Remove the `Assert(skipOuterVars)` in the `RTE_CTE` branch.
* Find the CTE’s owning level via `ctelevelsup` and compute `cteParentListIndex`.
* Rebuild a private parent list for recursion: `list_copy` → `list_truncate` → `lappend(current query)`.
* Add a bounds check before indexing the CTE’s `targetList`.

**Why it works**

```diff
-parentQueryList = lappend(parentQueryList, query);
-FindReferencedTableColumn(targetEntry->expr, parentQueryList,
-                          cteQuery, column, rteContainingReferencedColumn,
-                          skipOuterVars);
+/* hand a private, bounded parent list to the recursion */
+List *newParent = list_copy(parentQueryList);
+newParent = list_truncate(newParent, cteParentListIndex + 1);
+newParent = lappend(newParent, query);
+
+FindReferencedTableColumn(targetEntry->expr,
+                          newParent,
+                          cteQuery,
+                          column,
+                          rteContainingReferencedColumn,
+                          skipOuterVars);
+}
```

**Before:** We changed `parentQueryList` in place (`parentQueryList = lappend(...)`) and didn’t trim it to the CTE’s owner level.
**After:** We copy the list, trim it to the CTE’s owner level, then append the current query. This keeps the parent list accurate for the current recursion and safe when following outer Vars.

**Example: Nested subquery referencing the CTE (two levels down)**

```
WITH c AS MATERIALIZED (SELECT user_id FROM raw_events_first)
SELECT 1
FROM raw_events_first t
WHERE EXISTS (
    SELECT 1
    FROM (SELECT user_id FROM c) c2
    WHERE c2.user_id = t.user_id
);
```

Levels:
- Q0 = top SELECT
- Q1 = EXISTS subquery
- Q2 = inner (SELECT user_id FROM c)

When resolving c2.user_id inside Q2:
- parentQueryList is [Q0, Q1, Q2].
- `ctelevelsup`: 2
- `cteParentListIndex = length(parentQueryList) - ctelevelsup - 1`
- Recurse into the CTE’s query with [Q0, Q2].

**Tests (added in `multi_insert_select`)**
* **T1:** Correlated subquery that references a CTE (one level down). Verifies that resolving through `RTE_CTE` after following an outer `Var` succeeds; the row count matches the source table.
* **T2:** Nested subquery that references a CTE (two levels down). Exercises deeper recursion and confirms behavior identical to T1.
* **T3:** Scalar subquery in a target list that reads from the outer CTE. Checks the expected row count and that no NULLs are inserted.

These tests cover the cases that previously hit `Assert(skipOuterVars)` and confirm CTE references work while following outer Vars.

6b6d959fac | PG18 - pg17.sql Simplify step 10 verification to use COUNT(*) instead of SELECT * (#8111)

fixes #8096

PostgreSQL 18 adds a `conenforced` flag allowing `CHECK` constraints to be declared `NOT ENFORCED`.
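
For reference, a sketch of the PG18 syntax behind the new flag (table and constraint names are hypothetical):

```sql
-- sketch: PG18 accepts NOT ENFORCED on CHECK constraints, recorded in
-- pg_constraint.conenforced
ALTER TABLE orders
    ADD CONSTRAINT amount_positive CHECK (amount > 0) NOT ENFORCED;
```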

3d8fd337e5 | Check outer table partition column (#8092)

DESCRIPTION: Introduce a new check to push down a query including a union and an outer join, to fix #8091. In "SafeToPushdownUnionSubquery", we check if the distribution column of the outer relation is in the target list.

889aa92ac0 | EXPLAIN ANALYZE - Prevent execution of the plan during the plan-print (#8017)

DESCRIPTION: Fixed a bug in EXPLAIN ANALYZE to prevent unintended (duplicate) execution of the (sub)plans during the explain phase.

Fixes #4212

### 🐞 Bug #4212: Redundant (Subplan) Execution in `EXPLAIN ANALYZE` codepath

#### 🔍 Background

In the standard PostgreSQL execution path, `ExplainOnePlan()` is responsible for two distinct operations depending on whether `EXPLAIN ANALYZE` is requested:

1. **Execute the plan**

```c
if (es->analyze)
    ExecutorRun(queryDesc, direction, 0L, true);
```

2. **Print the plan tree**

```c
ExplainPrintPlan(es, queryDesc);
```

When printing the plan, the executor should **not run the plan again**. Execution is only expected to happen once, at the top level when `es->analyze = true`.

---

#### ⚠️ Issue in Citus

In the Citus implementation of `CustomScanMethods.ExplainCustomScan = CitusExplainScan`, the custom scan explain callback used to print explain information of a Citus plan incorrectly performs **redundant execution** inside the explain path of `ExplainPrintPlan()`:

```c
ExplainOnePlan()
  ExplainPrintPlan()
    ExplainNode()
      CitusExplainScan()
        if (distributedPlan->subPlanList != NIL)
        {
            ExplainSubPlans(distributedPlan, es);
            {
                PlannedStmt *plan = subPlan->plan;
                ExplainOnePlan(plan, ...); /* ⚠️ may re-execute subplan if es->analyze is true */
            }
        }
```

This causes the subplans to be **executed again**, even though they have already been executed during the top-level plan execution. This behavior violates the expectation in PostgreSQL that `EXPLAIN ANALYZE` should **execute each node exactly once** for analysis.

---

#### ✅ Fix (proposed)

Save the output of subplans during `ExecuteSubPlans()`, and later use it in `ExplainSubPlans()`.

3e2b6f61fa | Bump certifi from 2024.2.2 to 2024.7.4 in /src/test/regress (#8076)

Bumps [certifi](https://github.com/certifi/python-certifi) from 2024.2.2 to 2024.7.4.

0c1b31cdb5 | Fix UPDATE stmts with indirection & array/jsonb subscripting with more than 1 field (#7675)

DESCRIPTION: Fixes problematic UPDATE statements with indirection and array/jsonb subscripting with more than one field.

Fixes #4092, #7674 and #5621.

Issues #7674 and #4092 involve an UPDATE with out-of-order columns and a sublink (SELECT) in the source, e.g. `UPDATE T SET (col3, col1, col4) = (SELECT 3, 1, 4)`, where an incorrect value could get written to a column because query deparsing generated an incorrect SQL statement. To address this, the fix adds an additional check to `ruleutils` to ensure that the target list of an UPDATE statement is in an order in which deparsing can be done safely. It is needed when the source of the UPDATE has a sublink, because Postgres `rewrite` will have put the target list in attribute order, but for deparsing to produce correct SQL text the target list needs to be in order of the references (or `paramids`) to the target list of the sublink(s).

Issue #5621 involves an UPDATE with array/jsonb subscripting that can behave incorrectly with more than one field, again because Citus query deparsing receives a post-`rewrite` query tree. The fix also adds a check to `ruleutils` to enable correct query deparsing of that UPDATE.

---------

Co-authored-by: Ibrahim Halatci <ihalatci@gmail.com>
Co-authored-by: Colm McHugh <colm.mchugh@gmail.com>
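
Hypothetical statement shapes of the kind described above (the table and its columns are illustrative):

```sql
-- sketch: out-of-order multi-column SET with a sublink source (#7674/#4092)
UPDATE t SET (col3, col1, col4) = (SELECT 3, 1, 4);

-- sketch: subscripting assignments touching more than one field (#5621)
UPDATE t SET arr[1] = 10, arr[2] = 20;       -- arr: integer[]
UPDATE t SET j['a'] = '1', j['b'] = '2';     -- j: jsonb
```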

245a62df3e | Avoid query deparse and planning of shard query in local execution. (#8035)

DESCRIPTION: Avoid query deparse and planning of shard query in local execution. Adds the citus.enable_local_execution_local_plan GUC to allow avoiding unnecessary query deparsing, improving performance of fast-path queries targeting local shards.

If a fast-path query resolves to a shard that is local to the node planning the query, a shortcut can be taken: the OID of the shard is plugged into the parse tree, which is then planned by Postgres. In `local_executor.c` the task uses that plan instead of parsing and planning a shard query.

How this is done: the fast-path planner identifies if the shortcut is possible, and then the distributed planner checks, using `CheckAndBuildDelayedFastPathPlan()`, whether a local plan can be generated or the shard query should be generated instead. This optimization is controlled by the GUC `citus.enable_local_execution_local_plan`, which is on by default.

A new regress test `local_execution_local_plan` tests both row sharding and schema sharding. Negative tests are added to `local_shard_execution_dropped_column` to verify that the optimization is not taken when the shard is local but there is a difference between the shard and the distributed table because of a dropped column.
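
A minimal sketch of the fast-path shape this targets (the table name and value are illustrative):

```sql
-- sketch: with the GUC on (the default), a single-shard lookup on a shard
-- local to the executing node skips shard-query deparse and re-planning
SET citus.enable_local_execution_local_plan TO on;
SELECT * FROM dist_table WHERE dist_key = 42;
```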

3da9096d53 | Bump black from 24.2.0 to 24.3.0 in /src/test/regress (#8062)

Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.

743c9bbf87 | fix #7715 - add assign hook for CDC library path adjustment (#8025)

DESCRIPTION: Automatically updates dynamic_library_path when CDC is enabled

fix: #7715

According to the documentation and `pg_settings`, the context of the `citus.enable_change_data_capture` parameter is user. However, changing this parameter, even as a superuser, doesn't work as expected: while the initial copy phase works correctly, subsequent change events are not propagated. This appears to be due to the fact that `dynamic_library_path` is only updated to `$libdir/citus_decoders:$libdir` when the server is restarted and the `_PG_init` function is invoked.

To address this, I added an `EnableChangeDataCaptureAssignHook` that automatically updates `dynamic_library_path` at runtime when `citus.enable_change_data_capture` is enabled, ensuring that the CDC decoder libraries are properly loaded. Note that `dynamic_library_path` is already a `superuser`-context parameter in base PostgreSQL, so updating it from within the assign hook should be safe and consistent with PostgreSQL’s configuration model.

If there’s any reason this approach might be problematic or if there’s a preferred alternative, I’d appreciate any feedback. cc. @jy-min

---------

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
Co-authored-by: ibrahim halatci <ihalatci@gmail.com>
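
A sketch of the observable effect (the path value comes from the message above; before this fix, a server restart was required):

```sql
-- sketch: enabling CDC now adjusts the library path at runtime
SET citus.enable_change_data_capture TO on;
SHOW dynamic_library_path;  -- expected to include $libdir/citus_decoders
```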

a8900b57e6 | PG18 - Strip decimal fractions from actual rows counts in normalize.sed (#8041)

Fixes #8040

```
- Custom Scan (Citus Adaptive) (actual rows=0 loops=1)
+ Custom Scan (Citus Adaptive) (actual rows=0.00 loops=1)
```
Add a normalization rule to the pg_regress `normalize.sed` script that
strips any trailing decimal fraction from actual rows= counts (e.g.
turning `actual rows=0.00` into `actual rows=0`). This silences noise
diffs introduced by the new PostgreSQL 18 beta’s planner output.
commit

5deaf9a616 | Bump werkzeug from 2.3.7 to 3.0.6 in /src/test/regress (#8003)

Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.7 to 3.0.6.
|
|
|
4cd8bb1b67 | Bump Citus version to 13.2devel | |
|
|
55a0d1f730
|
Add skip_qualify_public param to shard_name() to allow qualifying for "public" schema (#8014)
DESCRIPTION: Adds skip_qualify_public param to `shard_name()` UDF to allow qualifying for "public" schema when needed (see the usage sketch below). |
|
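To make the new parameter concrete, here is a minimal, hedged usage sketch. The three-argument form `shard_name(relation regclass, shard_id bigint, skip_qualify_public boolean)` is an assumption based on the #8014 description, the `events` table and shard suffix are illustrative, and the default is assumed to preserve the previous unqualified output for tables in `public`.

```sql
-- Minimal sketch; the third argument and its default are assumptions
-- inferred from the #8014 description, not a confirmed signature.
CREATE TABLE public.events (id bigint PRIMARY KEY, payload jsonb);
SELECT create_distributed_table('events', 'id');

-- Previous behavior: shards of tables in "public" come back unqualified,
-- e.g. events_102008 (the shard suffix here is illustrative).
SELECT shard_name('events'::regclass, shardid)
FROM pg_dist_shard
WHERE logicalrelid = 'events'::regclass
ORDER BY shardid
LIMIT 1;

-- Passing skip_qualify_public = false keeps the "public" qualification,
-- e.g. public.events_102008.
SELECT shard_name('events'::regclass, shardid, false)
FROM pg_dist_shard
WHERE logicalrelid = 'events'::regclass
ORDER BY shardid
LIMIT 1;
```

A qualified name is useful when the result is spliced into SQL that runs under a different `search_path`, where an unqualified `events_102008` could resolve to the wrong relation.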
|
|
5e37fe0c46
|
Bump cryptography from 42.0.3 to 44.0.1 in /src/test/regress (#7996)
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.3 to 44.0.1. Highlights from the upstream changelog: 44.0.1 ships wheels compiled against OpenSSL 3.4.1 and adds `armv7l` and `manylinux_2_34` wheels; 44.0.0 drops support for LibreSSL < 3.9, deprecates Python 3.7, enforces the RFC 5280 rule that extended key usage extensions must not be empty, and adds Argon2id support (with OpenSSL 3.2.0+), the X.509 `Admissions` extension, and basic PKCS7 decryption including S/MIME 3.2; 43.0.2 fixes compilation against LibreSSL 4.0.0. |
|
|
|
e8c3179b4d
|
Bump tornado from 6.4.2 to 6.5.1 in /src/test/regress (#8001)
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.4.2 to 6.5.1; see the release notes in tornado's `docs/releases.rst` for the 6.5.x changes. |
|
|
|
92dc7f36fc
|
Bump jinja2 from 3.1.3 to 3.1.6 in /src/test/regress (#8002)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.6. Highlights from the upstream release notes: 3.1.6 routes the `|attr` filter through the environment's attribute lookup so the sandbox can apply its checks (GHSA-cpwx-vrp4-4pq7); 3.1.5 hardens the sandboxed environment against indirect calls to `str.format` (GHSA-q2x7-8rv6-6q7h), escapes template names before formatting them into error messages (GHSA-gmj6-6f8f-6699), and fixes a series of async-rendering and `Undefined`/`missing` copy/pickle bugs; 3.1.4 disallows `/`, `>`, and `=` in keys passed to the `xmlattr` filter, in addition to spaces (GHSA-h75v-3vvj-5mfj). |