This PR fixes diffs in the `columnar_chunk_filtering` and `columnar_paths`
tests.
In `columnar_chunk_filtering`, an expression `(NOT (SubPlan 1))` changed
to `(NOT (ANY (a = (SubPlan 1).col1)))`. This is due to [a PG17
commit](https://github.com/postgres/postgres/commit/fd0398fc) that
improved how scalar subqueries (InitPlans) and ANY subqueries (SubPlans)
are shown in EXPLAINed expressions. The fix uses a helper function which
converts the PG17 format to the pre-PG17 format. It is done this way
because pre-PG17 EXPLAIN does not provide enough context to convert to
the PG17 format. The helper function can (and should) be retired when
PG17 becomes the minimum supported version.
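As a rough illustration of the approach (this is a sketch, not the actual helper in the test, whose name and implementation may differ), the conversion can be done with a regular expression over each line of EXPLAIN text:

```sql
-- Hedged sketch: rewrite the PG17 spelling of an ANY SubPlan reference,
-- e.g. "ANY (a = (SubPlan 1).col1)", back to the pre-PG17 spelling "SubPlan 1".
CREATE OR REPLACE FUNCTION normalize_subplan_filter(plan_line text)
RETURNS text AS $$
    SELECT regexp_replace(plan_line,
                          'ANY \(.*\((SubPlan \d+)\)\.col1\)',
                          '\1',
                          'g');
$$ LANGUAGE sql IMMUTABLE;
```

Applied to `(NOT (ANY (a = (SubPlan 1).col1)))`, this yields `(NOT (SubPlan 1))`, matching the pre-PG17 goldfile format.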
In `columnar_paths`, a merge join changed to a hash join. This is due to
[this PG17 commit](https://github.com/postgres/postgres/commit/f7816aec23),
which improved the PG optimizer's ability to estimate the size of a CTE
scan. The impacted query involves a CTE scan with a point predicate
`(a=123)`; before the change the CTE size was estimated at 5000 rows,
but with the change it is correctly (given the data in the table)
estimated at 1, making hash join a more attractive join method. The fix
is to have an alternative goldfile for pre-PG17. I tried to force a
specific join method using the GUCs (`enable_nestloop`,
`enable_hashjoin`, `enable_mergejoin`), but it was not possible to
obtain a consistent plan across all supported PG versions (in some cases
the join inputs switched sides).
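For reference, this is the kind of GUC toggling that was attempted; the table and query below are illustrative only, not the actual `columnar_paths` query:

```sql
-- Illustrative only: steer the planner away from merge and nested-loop joins
-- and inspect the plan for a CTE scan with a point predicate.
CREATE TEMP TABLE events (a int, b int);
SET enable_mergejoin TO off;
SET enable_nestloop TO off;
EXPLAIN (COSTS OFF)
WITH w AS MATERIALIZED (SELECT * FROM events)
SELECT * FROM w JOIN events USING (a) WHERE a = 123;
RESET enable_mergejoin;
RESET enable_nestloop;
```

Even with only one join method left enabled, the planner may still swap the inner and outer sides across PG versions, which is why the alternative goldfile was chosen instead.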
There are two commits in this PR:
1) Remove the domain_default column since it has been removed in PG17.
Relevant PG commit:
78806a9509
78806a95095c4fb9230a441925244690d9c07d23
2) Fix the pg_stat_statements reset output diff.
The pg_stat_statements reset output changed in PG17; the fix idea is from
the relevant PG commit below.
Relevant PG commits:
6ab1dbd26b
6ab1dbd26bbf307055d805feaaca16dc3e750d36
Test `tableam` expects that this CREATE TABLE statement: `CREATE TABLE
test_partitioned(id int, p int, val int) PARTITION BY RANGE (p) USING
fake_am;`
will produce this error:
`specifying a table access method is not supported on a partitioned
table`
but as of [this PG
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a229)
it is possible to specify an access method on a partitioned table. This
fix moves the CREATE TABLE statement to the pg17 test and adds an
additional test to show that the parent's access method is inherited.
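A sketch of the kind of check added in the pg17 test (the partition and catalog query are illustrative and may differ from the actual test; `fake_am` is the fake table access method used by the `tableam` test):

```sql
-- The parent declares an access method; the partition omits USING and, on
-- PG17, should inherit the parent's access method.
CREATE TABLE test_partitioned(id int, p int, val int)
    PARTITION BY RANGE (p) USING fake_am;
CREATE TABLE test_partitioned_p1 PARTITION OF test_partitioned
    FOR VALUES FROM (1) TO (10);

-- Both relations should report fake_am as their access method.
SELECT c.relname, am.amname
FROM pg_class c JOIN pg_am am ON c.relam = am.oid
WHERE c.relname LIKE 'test\_partitioned%'
ORDER BY c.relname;
```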
Disable DDL propagation for the vanilla test suite. This enables the
vanilla `database` test to pass, where previously it was correctly
returning `ERROR: unrecognized ALTER DATABASE option: tablespace`
because release-13.0 does not propagate this ALTER DATABASE variant.
We (the Citus team) discussed cherry-picking
[#7253](https://github.com/citusdata/citus/pull/7253) from main to
release-13.0 because it does propagate the ALTER DATABASE tablespace
option (as well as a couple of others), but decided that fixing the
regress test was not the proper context for that. The fix disables
`citus.enable_metadata_sync` when running vanilla; we discussed
disabling `citus.enable_create_database_propagation` instead, but that
GUC is not in release-13.0.
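The GUC that the fix turns off for the vanilla run is shown below; exactly how it is applied (server configuration versus the test harness) is an illustration here, not the literal change:

```sql
-- Illustrative: with metadata sync disabled for the vanilla run, Citus does
-- not propagate DDL such as ALTER DATABASE, so the vanilla `database` test
-- behaves as it does on stock Postgres.
ALTER SYSTEM SET citus.enable_metadata_sync TO off;
SELECT pg_reload_conf();
```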
Preserve the test error message by adjusting the query so that PG17
cannot pull it up to a join. This is another instance of a subquery that
PG17 can pull up to a join; it should have been fixed in #7745 but
slipped by that PR.
In PG17, auto-generated array types, multirange types, and relation
rowtypes are treated as dependent objects, which changes the output of
the print_extension_changes function.
Relevant PG commit:
e5bc9454e527b1cba97553531d8d4992892fdeef
e5bc9454e5
Here we create a table with only the basic extension types
in order to avoid printing extra ones for now.
This can be removed when we drop PG16 support.
https://github.com/citusdata/citus/actions/runs/11960253650/attempts/1#summary-33343972656
```diff
| table pg_dist_rebalance_strategy
+ | type citus.distribution_type[]
+ | type citus.pg_dist_object
+ | type pg_dist_shard
+ | type pg_dist_shard[]
+ | type pg_dist_shard_placement
+ | type pg_dist_shard_placement[]
+ | type pg_dist_transaction
+ | type pg_dist_transaction[]
| view citus_dist_stat_activity
| view pg_dist_shard_placement
```
This work was already done by @m3hm3t and approved as part of
https://github.com/citusdata/citus/pull/7722
I separated it into this PR since the previous one contained other
changes which we don't currently want to merge.
Relevant PG commit:
---------
Co-authored-by: Mehmet YILMAZ <mehmety87@gmail.com>
A recent Postgres commit (*) that refactored error messages is the cause
of the diffs in the pg16 regress test when running Citus on Postgres 17.
The fix changes the pg16 goldfile and includes a normalization rule for
the error messages so pg16 will pass when running on version 16 of
Postgres.
(*)
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=498ee9ee2f
PG17 changed how scalar subquery outputs appear in EXPLAIN output (*).
This commit changes the impacted regress goldfiles to the PG17 format and
adds a helper function to convert pre-PG17 plans to the PG17 format. The
conversion is required when testing Citus on PG versions prior to 17. The
helper function can and should be removed when PG17 becomes the minimum
supported version.
(*)
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0398fcb
Fix test failures in subquery_in_where, set_operations, dml_recursive in
PG17 #7741
The test failures are caused by [this commit in
PG17](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9f1337639),
which enables correlated subqueries to be pulled up to a join. Prior to
this, the correlated subquery was implemented as a subplan. In Citus, it
is not possible to push down a correlated subplan, but with the different
plan in PG17 the query can be executed, per the test diff from
`subquery_in_where`:
```
37,39c37,41
< DEBUG: generating subplan XXX_1 for CTE event_id: SELECT user_id AS events_user_id, "time" AS events_time, event_type FROM public.events_table
< DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ...
< ERROR: correlated subqueries are not supported when the FROM clause contains a CTE or subquery
---
> count
> ---------------------------------------------------------------------
> 0
> (1 row)
>
```
This is because with PG17, `= ANY (subquery)` in these queries can be
implemented as a join instead of as a subplan filter on a table scan.
For example, `SELECT * FROM test a WHERE x IN (SELECT x FROM test b
UNION SELECT y FROM test c WHERE a.x = c.x) ORDER BY 1,2` (from
set_operations) has this plan in pg17; note that the subquery is the
inner side of a nested loop join:
```
┌───────────────────────────────────────────────────┐
│ QUERY PLAN │
├───────────────────────────────────────────────────┤
│ Sort │
│ Sort Key: a.x, a.y │
│ -> Nested Loop │
│ -> Seq Scan on test a │
│ -> Subquery Scan on "ANY_subquery" │
│ Filter: (a.x = "ANY_subquery".x) │
│ -> HashAggregate │
│ Group Key: b.x │
│ -> Append │
│ -> Seq Scan on test b │
│ -> Seq Scan on test c │
│ Filter: (a.x = x) │
└───────────────────────────────────────────────────┘
```
and this plan in pg16 (and previous pg versions); the subquery is a
correlated subplan filter on a table scan:
```
┌───────────────────────────────────────────────┐
│ QUERY PLAN │
├───────────────────────────────────────────────┤
│ Sort │
│ Sort Key: a.x, a.y │
│ -> Seq Scan on test a │
│ Filter: (SubPlan 1) │
│ SubPlan 1 │
│ -> HashAggregate │
│ Group Key: b.x │
│ -> Append │
│ -> Seq Scan on test b │
│ -> Seq Scan on test c │
│ Filter: (a.x = x) │
└───────────────────────────────────────────────┘
```
The fix modifies the queries causing the test failures so that an ANY
subquery is not folded to a join, preserving the expected output of the
tests. A similar approach was taken for existing regress tests in the
[Postgres
commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9f1337639).
See the `join` regress test, for example.
We also add PG17-specific tests that exercise this Postgres improvement
together with Citus distributed planning.
Regression test cte_inline has the following diff:
```
DEBUG: CTE cte_1 is going to be inlined via distributed planning
DEBUG: CTE cte_1 is going to be inlined via distributed planning
DEBUG: Creating router plan
-DEBUG: query has a single distribution column value: 1
```
The DEBUG message `query has a single distribution column value` does
not appear with PG17. This is because PG17 can recognize when a Result
node does not need an input node, so the predicate on the distribution
column is not present in the query plan. Comparing the query plan
obtained before PG17:
```
│ Result │
│ One-Time Filter: false │
│ -> GroupAggregate │
│ -> Seq Scan on public.test_table │
│ Filter: (test_table.key = 1) │
```
with the PG17 query plan:
```
┌──────────────────────────────────┐
│ QUERY PLAN │
├──────────────────────────────────┤
│ Result │
│ One-Time Filter: false │
└──────────────────────────────────┘
```
we see that the Result node in the PG16 plan has an Aggregate node, but
the Result node in the PG17 plan does not have any input node; PG17
recognizes it is not needed given a Filter that evaluates to False at
compile-time. The Result node is present in both plans because PG in
both versions can recognize when a combination of predicates equates to
false at compile time; this is because the successive predicates in
the test query (key=6, key=5, key=4, etc.) become contradictory when the
CTEs are inlined. Here is an example query showing the effect of the CTE
inlining:
```
select count(*), key FROM test_table WHERE key = 1 AND key = 2 GROUP BY key;
```
In this case, the WHERE clause obviously evaluates to False. The PG16
query plan for this query is:
```
┌────────────────────────────────────┐
│ QUERY PLAN │
├────────────────────────────────────┤
│ GroupAggregate │
│ -> Result │
│ One-Time Filter: false │
│ -> Seq Scan on test_table │
│ Filter: (key = 1) │
└────────────────────────────────────┘
```
The PG17 query plan is:
```
┌────────────────────────────────┐
│ QUERY PLAN │
├────────────────────────────────┤
│ GroupAggregate │
│ -> Result │
│ One-Time Filter: false │
└────────────────────────────────┘
```
In both plans the PG optimizer is able to derive the predicate 1=2 from
the equivalence class { key, 1, 2 } and then constant fold this to
False. But, in the PG16 plan the Result node has an input node (a
sequential scan on test_table), while in the PG17 plan the Result node
does not have any input. This is because PG17 recognizes that when the
Result filter resolves to False at compile time it is not necessary to
set an input on the Result. I think this is a consequence of this PG17
commit:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b262ad440
which handles redundant IS [NOT] NULL predicates, but also refactored
the evaluation of predicates to true/false at compile time, enabling
optimizations such as the one seen here.
Given the reason for the diff, the fix preserves the test output by
modifying the query so the predicates are not contradictory when the
CTEs are inlined.
PG17 adds a builtin C.UTF-8 locale option; we add it in the code to
avoid "unknown collation provider" errors in vanilla tests.
Relevant PG commit:
f69319f2f1
f69319f2f1fb16eda4b535bcccec90dff3a6795e
Also in PG17, colliculocale and daticulocale were renamed to colllocale
and datlocale.
Here we fix the following tests to avoid alternative output:
pg15, pg16, multi_mx_create_table, multi_schema_support
Relevant PG commit:
f696c0cd5f
f696c0cd5f299f1b51e214efc55a22a782cc175d
Changed `attstattarget` in `pg_attribute` to use `NullableDatum`,
allowing null representation for default statistics target in PostgreSQL
17.
Relevant PG commit:
6a004f1be87d34cfe51acf2fe2552d2b08a79273
6a004f1be8
```diff
-- verify statistics is set
SELECT c.relname, a.attstattarget
FROM pg_attribute a
JOIN pg_class c ON a.attrelid = c.oid AND c.relname LIKE 'test\_idx%'
ORDER BY c.relname, a.attnum;
relname | attstattarget
-----------+---------------
test_idx | 4646
- test_idx2 | -1
+ test_idx2 |
test_idx2 | 10000
test_idx2 | 3737
(4 rows)
```
Changed stxstattarget in pg_statistic_ext to use nullable
representation, removing explicit -1 for default statistics target in
PostgreSQL 17.
Relevant PG commit:
012460ee93c304fbc7220e5b55d9d0577fc766ab
012460ee93
```diff
SELECT stxstattarget, stxrelid::regclass
FROM pg_statistic_ext
WHERE stxnamespace IN (
SELECT oid
FROM pg_namespace
WHERE nspname IN ('statistics''TestTarget')
)
AND stxname SIMILAR TO '%\_\d+'
ORDER BY stxstattarget, stxrelid::regclass ASC;
stxstattarget | stxrelid
---------------+-----------------------------------
- -1 | "statistics'TestTarget".t1_980000
- -1 | "statistics'TestTarget".t1_980002
...
+ | "statistics'TestTarget".t1_980000
+ | "statistics'TestTarget".t1_980002
...
```
Fixes #6795
The `worker_copy_table_to_node` function is not supposed to be called
for Citus tables. When this function was initially introduced in #6098,
it had the respective check. But the check was later omitted, since
`worker_copy_table_to_node` called on a Citus table finishes with an
error anyway:
```
ERROR: cannot execute a distributed query from a query on a shard
DETAIL: Executing a distributed query in a function call that may be pushed to a remote node can lead to incorrect results.
```
It turns out that in some cases this error does not occur. See #6795.
I suggest restoring that check.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
The test added in #7604 doesn't reach the `HasRangeTableRef` function
and thus doesn't test what it should.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
We thought we provided support for this in
b8c493f2c4
However, the use of parameters in SQL is not supported in Citus. Since
generic plan queries use parameters, we can't support this for now.
Relevant PG16 commit: https://github.com/postgres/postgres/commit/3c05284
Fixes #7813 with a proper error message.
DESCRIPTION: Fixes a crash that happens because of unsafe catalog access
when re-assigning the global pid after application_name changes.
When application_name changes, we don't actually need to
try re-assigning the global pid for external client backends because
application_name doesn't affect the global pid for such backends. Plus,
trying to re-assign the global pid for external client backends would
unnecessarily cause a catalog access when the cached local
node id is invalidated. However, accessing the catalog tables is
dangerous in certain situations, like when we're not in a transaction
block. And for the other types of backends, i.e., the Citus internal
backends, we need to re-assign the global pid when the application_name
changes because for such backends we simply extract the global pid
inherited from the originating backend from the application_name (which
is specified by the originating backend when opening that connection),
and this doesn't require catalog access.
In the function MasterAggregateMutator(), when the original node is a Var
node, use makeVar() instead of copyObject() when constructing the Var node
for the target list of the combine query.
The varnullingrels field of the original Var node is ignored because it is
not relevant for the combine query; copying it caused the problem in issue
#7705, where a coordinator query had a Var with a reference to a
non-existent join relation.
DESCRIPTION: Add a check to see if the given limit is null.
Fixes a bug by checking if the limit given in the query is null when the
actual limit is computed with respect to the given offset.
Prior to this change, null is interpreted as 0 during the limit
calculation when both limit and offset are given.
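For illustration (the table name is hypothetical), a query of this shape was affected:

```sql
-- Before the fix, the NULL limit was treated as 0 when combined with the
-- offset, so this returned no rows; it should return all rows after the
-- first three.
SELECT * FROM dist_table ORDER BY id LIMIT NULL OFFSET 3;
```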
Fixes #7663
We move the CI images to the GitHub container registry.
Given that we mostly (if not solely) run these containers on GitHub
Actions infra, it makes sense to have them hosted closer to where they
are needed.
Image changes: https://github.com/citusdata/the-process/pull/157
Related to issue #7619, #7620
The MERGE command fails when the source query is single-sharded, the
source and target are co-located, and the INSERT is not using the
distribution key of the source.
Example:
```
CREATE TABLE source (id integer);
CREATE TABLE target (id integer );
-- let's distribute both table on id field
SELECT create_distributed_table('source', 'id');
SELECT create_distributed_table('target', 'id');
MERGE INTO target t
USING ( SELECT 1 AS somekey
FROM source
WHERE source.id = 1) s
ON t.id = s.somekey
WHEN NOT MATCHED
THEN INSERT (id)
VALUES (s.somekey)
ERROR: MERGE INSERT must use the source table distribution column value
HINT: MERGE INSERT must use the source table distribution column value
```
Author's opinion: if the join is not between the source and target
distribution columns, we should not force the user to use the source
distribution column while inserting the value of the target distribution
column.
Fix: if the user is not using the distribution key of the source for the
insertion, let's not push down the query to workers and not force the
user to use the source distribution column if it is not part of the join.
This reverts commit fa4fc0b372.
Co-authored-by: paragjain <paragjain@microsoft.com>
Because we want to track PR numbers and to make backporting easy we
(pretty much always) use squash-merges when merging to master. We
accidentally used a rebase merge for PR #7620. This reverts those
changes so we can redo the merge using squash merge.
This reverts all commits from eedb607c to 9e71750fc.
For some reason, using localhost in our hba file no longer has the
intended effect in our GitHub Actions runners, probably because
of some networking change (IPv6 maybe) or some change in the
`/etc/hosts` file.
Replacing localhost with the equivalent loopback IPv4 and IPv6 addresses
resolved this issue.
DESCRIPTION: Fix performance issue when using "\d tablename" on a server
with many tables
We introduce a filter on every query on pg_class to automatically remove
shards. This is useful to make sure \d and PgAdmin are not cluttered
with shards. However, the way we were introducing this filter was using
`securityQuals`, which can have a negative impact on query performance.
On clusters with 100k+ tables this could cause a simple "\d tablename"
command to take multiple seconds, because a skipped optimization in
Postgres causes a full table scan. This changes the code to introduce
the filter in the regular `quals` list instead of in `securityQuals`,
which causes Postgres to use the intended optimization again.
For reference, this was initially reported as a Postgres issue by me:
https://www.postgresql.org/message-id/flat/4189982.1712785863%40sss.pgh.pa.us#b87421293b362d581ea8677e3bfea920
Fix check-arbitrary-configs test failures with current REL_16_STABLE.
This is the same problem as described in #7573. I missed the pg_regress
call in _run_pg_regress() in that PR.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
DESCRIPTION: Fix PG upgrades when invalid rebalance strategies exist
Without this change an upgrade of a cluster with an invalid rebalance
strategy would fail with an error like this:
```
cache lookup failed for shard_cost_function with oid 6077337
CONTEXT: SQL statement "SELECT citus_validate_rebalance_strategy_functions(
NEW.shard_cost_function,
NEW.node_capacity_function,
NEW.shard_allowed_on_node_function)"
PL/pgSQL function citus_internal.pg_dist_rebalance_strategy_trigger_func() line 5 at PERFORM
SQL statement "INSERT INTO pg_catalog.pg_dist_rebalance_strategy SELECT
name,
default_strategy,
shard_cost_function::regprocedure::regproc,
node_capacity_function::regprocedure::regproc,
shard_allowed_on_node_function::regprocedure::regproc,
default_threshold,
minimum_threshold,
improvement_threshold
FROM public.pg_dist_rebalance_strategy"
PL/pgSQL function citus_finish_pg_upgrade() line 115 at SQL statement
```
This fixes that by disabling the trigger and simply re-inserting the
invalid rebalance strategy without checking. We could also silently
remove it, but this seems nicer.
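Conceptually, the fix does something along these lines inside `citus_finish_pg_upgrade()` (a sketch, not the exact code):

```sql
-- Sketch: temporarily disable the validation trigger so invalid strategies
-- survive the upgrade unchanged, then re-enable it.
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy DISABLE TRIGGER ALL;
INSERT INTO pg_catalog.pg_dist_rebalance_strategy
SELECT name,
       default_strategy,
       shard_cost_function::regprocedure::regproc,
       node_capacity_function::regprocedure::regproc,
       shard_allowed_on_node_function::regprocedure::regproc,
       default_threshold,
       minimum_threshold,
       improvement_threshold
FROM public.pg_dist_rebalance_strategy;
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy ENABLE TRIGGER ALL;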
And when that is the case, directly use it as "host" parameter for the
connections between nodes and use the "hostname" provided in
pg_dist_node / pg_dist_poolinfo as "hostaddr" to avoid host name lookup.
This is to avoid requiring DNS resolution (and/or setting up DNS names
for each host in the cluster). This already works today when using
IPs in the hostname. The only use of setting host is that you can then
use sslmode=verify-full and it will validate that the hostname matches
the certificate provided by the node you're connecting to.
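For example (the hostname is illustrative), with this change you could set:

```sql
-- All inter-node connections then use this host for TLS validation, while
-- still connecting to the address stored in pg_dist_node via hostaddr.
ALTER SYSTEM SET citus.node_conninfo TO 'sslmode=verify-full host=cluster.example.com';
SELECT pg_reload_conf();
```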
It would be more flexible to make this a per-node setting, but that
requires SQL changes. And we'd like to backport this change, and
backporting such a sql change would be quite hard while backporting this
change would be very easy. And in many setups, a different hostname for
TLS validation is actually not needed. The reason for that is
query-from-any-node: with query-from-any-node, all nodes usually have a
certificate that is valid for the same "cluster hostname", either using
a wildcard cert or a Subject Alternative Name (SAN). Because if you load
balance across nodes you don't know which node you're connecting to, but
you still want TLS validation to do its job. So with this change you
can use this same "cluster hostname" for TLS validation within the
cluster. Obviously this means you don't validate that you're connecting
to a particular node, just that you're connecting to one of the nodes in
the cluster, but that should be fine from a security perspective (in
most cases).
Note to self: This change requires updating
https://docs.citusdata.com/en/latest/develop/api_guc.html#citus-node-conninfo-text.
DESCRIPTION: Allows overwriting host name for all inter-node connections
by supporting "host" parameter in citus.node_conninfo
In PostgreSQL 16 a new option, expecteddir, was introduced to pg_regress.
Together with the fix in
[196eeb6b](https://github.com/postgres/postgres/commit/196eeb6b), it
causes a check-vanilla failure if expecteddir is not specified.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
DESCRIPTION: Fixes a crash caused by some forms of ALTER TABLE ADD COLUMN
statements. When adding multiple columns, if one of the ADD COLUMN
clauses contains a FOREIGN KEY constraint omitting the referenced
columns in the statement, a SEGFAULT occurs.
For instance, the following statement results in a crash:
```
ALTER TABLE lt ADD COLUMN new_col1 bool,
ADD COLUMN new_col2 int references rt;
```
Fixes #7520.
This PR changes the order in which the locks are acquired (for the
target and reference tables), when a modify request is initiated from a
worker node that is not the "FirstWorkerNode".
To prevent concurrent writes, locks are acquired on the first worker
node for the replicated tables. When the update statement originates
from the first worker node, it acquires the lock on the reference
table(s) first, followed by the target table(s). However, if the update
statement is initiated in another worker node, the lock requests are
sent to the first worker in a different order. This PR unifies the
modification order on the first worker node. With the third commit,
independent of the node that received the request, the locks are
acquired for the modified table and then the reference tables on the
first node.
The first commit shows a sample output for the test prior to the fix.
Fixes #7477
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
When using a CASE WHEN expression in the body
of a function that is used in a DO block, a segmentation
fault occurred. This fixes that.
Fixes #7381
---------
Co-authored-by: Konstantin Morozov <vzbdryn@yahoo.com>
DESCRIPTION: Adds support for distributed `ALTER/DROP ROLE` commands
from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
In the preprocess phase, we save the original database name, replace
the dbname field of CreatedbStmt with a temporary name (to let Postgres
create the database with the temporary name locally) and then
we insert a cleanup record for the temporary database name on all
nodes **(\*\*)**.
In the postprocess phase, we first rename the temporary database
back to its original name on the local node and then return a list of
distributed DDL jobs i) to create the database with the temporary
name and then ii) to rename it back to its original name on the other
nodes. That way, if CREATE DATABASE fails on any of the nodes, the
temporary database will be cleaned up by the cleanup records that
we inserted in the preprocess phase, and in case of a failure we won't
leak any database with the name that the user intended to use for
the database.
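Roughly, the DDL jobs sent to the other nodes look like the following (the temporary name is generated internally; the one shown here is illustrative):

```sql
-- i) create the database under the temporary name, then
-- ii) rename it to the name the user asked for.
CREATE DATABASE citus_temp_database_1234_0;
ALTER DATABASE citus_temp_database_1234_0 RENAME TO mydb;
```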
Solves the problem documented in
https://github.com/citusdata/citus/issues/7369
for CREATE DATABASE commands.
**(\*\*):** To ensure that we insert cleanup records on all nodes,
with this PR we also start requiring the coordinator to be in the
metadata, because otherwise we would skip inserting a cleanup record
for the coordinator.
When adding CREATE/DROP DATABASE propagation in #7240, luckily
we had also added the EnsureSupportedCreateDatabaseCommand() check into
the deparser, just to be on the safe side. That way, today CREATE
DATABASE commands from non-main dbs don't silently allow unsupported
options.
I wasn't aware of this when merging #7439 and hence wanted to add
a test so that we don't mistakenly remove that check from the deparser
in the future.
Fix for #7519.
During the metadata sync phase, GRANT statements for roles are fetched
and propagated from the catalog tables.
However, in some cases GRANT .. WITH ADMIN OPTION clauses execute after
the GRANTED BY statements, which causes the error in #7519.
We will fix this issue with the grantor propagation task in the project.
DESCRIPTION: Adds support for distributed `CREATE/DROP DATABASE`
commands from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for distributed `GRANT .. ON DATABASE TO USER`
commands from the databases where Citus is not installed
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for distributed role-membership management
commands from the databases where Citus is not installed (`GRANT <role>
TO <role>`)
This PR also refactors the code path that allows executing some of the
node-wide commands so that we send the deparsed query string to other
nodes instead of the `queryString` passed into the utility hook.
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Fixes incorrect propagating of `GRANTED BY` and
`CASCADE/RESTRICT` clauses for `REVOKE` statements
There are two issues fixed in this PR:
1. The GRANTED BY clause now also appears for REVOKE statements.
2. The CASCADE/RESTRICT clause now appears after GRANTED BY.
Since GRANTED BY clauses did not previously appear in the deparsed
statements, this bug hasn't been visible until now. However, after
activating the GRANTED BY clause for REVOKE, the ordering problem arose,
and this PR fixes the CASCADE/RESTRICT ordering as well.
In summary, this PR now deparses GRANTED BY clauses properly and with
the correct order of clauses.
We can verify both fixes with a single statement:
REVOKE dist_role_3 FROM non_dist_role_3 GRANTED BY test_admin_role
CASCADE;
Let's use version 2.3.7 to fix the following error, as we do in the
docker images created in the https://github.com/citusdata/the-process/ repo.
```
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/home/onurctirtir/.local/share/virtualenvs/regress-ffZKpSmO/lib/python3.9/site-packages/werkzeug/urls.py)
```
Changing the werkzeug version required rebuilding the Pipfile.lock file
in src/test/regress. Before updating this Pipfile.lock file, we want to
make sure that the versions specified there don't break any tests. To
ensure that this is the case,
https://github.com/citusdata/the-process/pull/155 synchronizes the
requirements.txt file based on the new Pipfile.lock, and hence this PR
updates the test image suffix accordingly.
Also, while working on https://github.com/citusdata/the-process/pull/155,
I had to update the Postgres versions to the latest minors to make the
image builds pass again, and updating Postgres versions in the images
requires updating the Postgres versions in this repo too. While doing
that, we also update the Postgres version used in the devcontainer.
DESCRIPTION: Resolves an issue that disrupts distributed GRANT
statements with the grantor option
In this PR, 3 issues are being solved:
1. Correcting the erroneous appending of multiple GRANTED BY clauses in
the deparser.
2. Adding support for the grantor (GRANTED BY) in GRANT role propagation.
3. Implementing grantor (GRANTED BY) support during the metadata sync
grant role propagation phase.
Limitations: Currently, the grantor must be created prior to the
metadata sync phase. During metadata sync, both the creation of the
grantor and the grants given by that role cannot be performed, as the
grantor role is not detected during the dependency resolution phase.
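An example of the statement shape this enables (role names are illustrative):

```sql
-- The GRANTED BY (grantor) clause is now deparsed once and propagated,
-- both for direct execution and during metadata sync.
GRANT dist_role TO another_dist_role GRANTED BY grantor_role;
```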
---------
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Moves the following functions to the Citus internal schema:
citus_internal_local_blocked_processes
citus_internal_global_blocked_processes
citus_internal_mark_node_not_synced
citus_internal_unregister_tenant_schema_globally
citus_internal_update_none_dist_table_metadata
citus_internal_update_placement_metadata
citus_internal_update_relation_colocation
citus_internal_start_replication_origin_tracking
citus_internal_stop_replication_origin_tracking
citus_internal_is_replication_origin_tracking_active
#7405
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
DESCRIPTION: citus_move_shard_placement now fails early when shard
cannot be safely moved
The implementation is quite simplistic -
`citus_move_shard_placement(...)` will fail with an error if there's any
new node in the cluster that doesn't have reference tables yet.
It could have been finer-grained, i.e. erroring only when trying to move
a shard to an uninitialized node. Looking at the related functions -
`replicate_reference_tables()` or `citus_rebalance_start()` - I think
it's acceptable behaviour. These other functions also treat "any"
uninitialized node as a temporary anomaly.
Fixes #7426
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Moves the following functions:
citus_internal_delete_colocation_metadata
citus_internal_delete_partition_metadata
citus_internal_delete_placement_metadata
citus_internal_delete_shard_metadata
citus_internal_delete_tenant_schema
Move more functions to citus_internal schema, the list:
citus_internal_add_placement_metadata
citus_internal_add_shard_metadata
citus_internal_add_tenant_schema
citus_internal_adjust_local_clock_to_remote
citus_internal_database_command
#7405
Move citus_internal_acquire_citus_advisory_object_class_lock and
citus_internal_add_colocation_metadata functions from pg_catalog to
citus_internal.
#7405
Fixes a bug that breaks queries from non-main dbs when
citus.local_hostname is set to a value different from "localhost".
This is a very old bug, but it doesn't cause a problem as long as the
Citus catalog is available to FindWorkerNode(). And the catalog is always
available unless we're in a non-main database, which might be the case on
main but not on older releases, hence not adding a `DESCRIPTION`. For this
reason, I don't see a reason to backport this.
Maybe we should totally refrain from using LOCAL_HOST_NAME in all
code paths, but I'm not doing that in this PR as the other paths don't
seem to be breaking anything user-facing.
```c
char *
GetAuthinfo(char *hostname, int32 port, char *user)
{
	char *authinfo = NULL;
	bool isLoopback = (strncmp(LOCAL_HOST_NAME, hostname, MAX_NODE_LENGTH) == 0 &&
					   PostPortNumber == port);

	if (IsTransactionState())
	{
		int64 nodeId = WILDCARD_NODE_ID;

		/* -1 is a special value for loopback connections (task tracker) */
		if (isLoopback)
		{
			nodeId = LOCALHOST_NODE_ID;
		}
		else
		{
			WorkerNode *worker = FindWorkerNode(hostname, port);
			if (worker != NULL)
			{
				nodeId = worker->nodeId;
			}
		}

		authinfo = GetAuthinfoViaCatalog(user, nodeId);
	}

	return (authinfo != NULL) ? authinfo : "";
}
```
This patch includes the username in the reported error message.
This makes debugging easier when certain commands open connections
as other users than the user that is executing the command.
```
monitora_snapshot=# SELECT citus_move_shard_placement(102030, 'monitora.db-dev-worker-a', 6005, 'monitora.db-dev-worker-a', 6017);
ERROR: connection to the remote node monitora_user@monitora.db-dev-worker-a:6017 failed with the following error: fe_sendauth: no password supplied
Time: 40,198 ms
```
This PR makes the connections to other nodes for
`mark_object_distributed` use the same user as
`execute_command_on_remote_nodes_as_user` so they'll use the same
connection.
DESCRIPTION: Adds comment on database and role propagation.
Example commands are shown below:
comment on database <db_name> is '<comment_text>'
comment on database <db_name> is NULL
comment on role <role_name> is '<comment_text>'
comment on role <role_name> is NULL
---------
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
I noticed while reviewing #7203 that there was no example of executing
SQL on a worker in the pytest README. Since this is a pretty common
thing that people want to do, this PR adds one.
Test isolation_update_node fails on some systems with the following error:
```
-s2: WARNING: connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: Name or service not known
+s2: WARNING: connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: Temporary failure in name resolution
```
This slightly modifies an already existing [normalization
rule](739c6d26df/src/test/regress/bin/normalize.sed (L217-L218))
to fix it.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
Adding an upgrade_basic_before_non_mixed.sql file: while
upgrade_basic_after_non_mixed exists, its "before" variation didn't,
as we don't have any "before" steps. However, run_test.py assumes that
all "after" files have a "before" variation as well. So this PR adds
an empty upgrade_basic_before_non_mixed.sql file.
Also, given that we no longer have a version called 12.1devel,
change it to 12.1.1.
And finally, let CI skip flakiness testing for upgrade tests, both
because it's quite hard to get the flaky-test-detection job working for
upgrade tests and because, in the end, it is not very useful to test
upgrade tests for flakiness.
Running a query from a Citus non-main database that inserts into
pg_dist_object requires a new connection to the main database itself.
This PR adds that connection to the main database.
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
DESCRIPTION: Adds support for issuing `CREATE`/`DROP` DATABASE commands
from worker nodes
With this commit, we allow issuing CREATE / DROP DATABASE commands from
worker nodes too.
As in #7278, this is not allowed when the coordinator is not added to
metadata because we don't ever sync metadata changes to coordinator
when adding coordinator to the metadata via
`SELECT citus_set_coordinator_host('<hostname>')`, or equivalently, via
`SELECT citus_add_node(<coordinator_node_name>, <coordinator_node_port>, 0)`.
We serialize database management commands by acquiring a Citus-specific
advisory lock on the first primary worker node if there are any workers in the
cluster. As opposed to what we've done in https://github.com/citusdata/citus/pull/7278
for role management commands, we try to avoid running into distributed deadlocks
as much as possible. This is because, while distributed deadlocks that can happen around
role management commands can be detected by Citus, this is not the case for database
management commands, because most of them cannot be run inside a transaction block.
In that case, Citus cannot even detect the distributed deadlock because the command is not
part of a distributed transaction at all, and then the command execution might not return
control back to the user for an indefinite amount of time.
This fixes #7230.
First of all, using HeapTupleHeaderGetDatumLength(heapTuple) is
definitely wrong; it gives a number that's 4 times less than the correct
tuple size (heapTuple.t_len). See
https://github.com/postgres/postgres/blob/REL_16_0/src/include/access/htup_details.h#L455-L456
https://github.com/postgres/postgres/blob/REL_16_0/src/include/varatt.h#L279
https://github.com/postgres/postgres/blob/REL_16_0/src/include/varatt.h#L225-L226
When I fixed it, the limit_intermediate_size test failed, so I tried to
understand what's going on there. In the original commit fd546cf these
queries were supposed to fail. Then in b3af63c three of the queries that
were supposed to fail suddenly worked, and the tests were changed to pass
without understanding why the output had changed or how to keep the test
testing what it had to test. Even comments saying that these queries
should fail were left untouched. The commit message gives no clue about
why exactly the test changed:
> It seems that when we use adaptive executor instead of task tracker,
we
> exceed the intermediate result size less in the test. Therefore
updated
> the tests accordingly.
Then 3fda2c3 also blindly raised the limit for one of the queries to
keep it working:
3fda2c3254 (diff-a9b7b617f9dfd345318cb8987d5897143ca1b723c87b81049bbadd94dcc86570R19)
When in fe3caf3 that HeapTupleHeaderGetDatumLength(heapTuple) call was
finally added, one of those test queries started failing again.
The other two of them are now also failing after the fix. I don't understand
how exactly the calculation of the "intermediate result size" that is
limited by citus.max_intermediate_result_size changed through
b3af63c and fe3caf3, but these numbers are now closer to what
they originally were when this limitation was added in
fd546cf. So these queries should fail, like in the original
version of the limit_intermediate_size test.
Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
foreign_key_to_reference_shard_rebalance failed because the partition for
the year 2024 does not exist; fixed by adding a default partition.
Replaces https://github.com/citusdata/citus/pull/7396 by adding a rule
that allows properly testing foreign_key_to_reference_shard_rebalance
via run_test.py.
Closes #7396
Co-authored-by: chuhx <148182736+cstarc1@users.noreply.github.com>
DESCRIPTION: Adds REASSIGN OWNED BY propagation
This pull request introduces the propagation of the "Reassign owned by"
statement. It accommodates both local and distributed roles for both the
old and new assignments. However, when the old role is a local role, it
undergoes filtering and is not propagated. On the other hand, if the new
role is a local role, the process involves first creating the role on
worker nodes before propagating the "Reassign owned" statement.
DESCRIPTION: Adds database connection limit, rename and set tablespace
propagation
In this PR, the statement propagations below are added:
alter database <database_name> with allow_connections = <boolean_value>;
alter database <database_name> rename to <database_name2>;
alter database <database_name> set TABLESPACE <table_space_name>
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
DESCRIPTION: Adds support for 2PC from non-Citus main databases
This PR only adds support for `CREATE USER` queries; other queries need
to be added. But it should be simple because this PR creates the
underlying structure.
The Citus main database is the database where the Citus extension is
created. A non-main database is any other database that is on the
same node as a Citus main database.
When a `CREATE USER` query is run on a non-main database we:
1. Run `start_management_transaction` on the main database. This
function saves the outer transaction's xid (the non-main database
query's transaction id) and marks the current query as main db command.
2. Run `execute_command_on_remote_nodes_as_user("CREATE USER
<username>", <username to run the command>)` on the main database. This
function creates the users in the rest of the cluster by running the
query on the other nodes. The user on the current node is created by the
outer, non-main-db query itself, to make sure subsequent commands
in the same transaction can see this user.
3. Run `mark_object_distributed` on the main database. This function
adds the user to `pg_dist_object` in all of the nodes, including the
current one.
This PR also implements transaction recovery for the queries from
non-main databases.
Allowing GRANT ADMIN to now also be INHERIT or SET in support of PG16:
GRANT role_name [, ...] TO role_specification [, ...] [ WITH { ADMIN |
INHERIT | SET } { OPTION | TRUE | FALSE } ] [ GRANTED BY
role_specification ]
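For example (role names illustrative), the following forms are now accepted and propagated:

```sql
GRANT admin_role TO new_admin WITH ADMIN OPTION;
GRANT reader_role TO app_user WITH INHERIT TRUE;
GRANT reader_role TO batch_user WITH SET FALSE;
```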
Fixes: #7148
Related: #7138
See review changes from https://github.com/citusdata/citus/pull/7164
DESCRIPTION: Adds support for propagating `CREATE`/`DROP` database
In this PR, create and drop database support is added.
For CREATE DATABASE:
* "oid" option is not supported
* specifying "strategy" to be different than "wal_log" is not supported
* specifying "template" to be different than "template1" is not
supported
The last two are because those are not saved in `pg_database` and when
activating a node, we cannot assume what parameters were provided when
creating the database.
And "oid" is not supported because whether the user specified an arbitrary
oid when creating the database is not saved in pg_database, and we want
to avoid oid collisions that might arise from attempting to use an
auto-assigned oid on workers.
Finally, in case of node activation, GRANTs for the database are also
propagated.
---------
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
https://app.circleci.com/pipelines/github/citusdata/citus/34550/workflows/5b802f66-2666-4623-a209-6d7799f7ee5f/jobs/1229153
```diff
VACUUM (FREEZE, PROCESS_TOAST true) local_vacuum_table;
SELECT relfrozenxid::text::integer > :frozenxid AS frozen_performed FROM pg_class
WHERE oid=:reltoastrelid::regclass;
frozen_performed
------------------
- t
+ f
(1 row)
```
Process toast option in vacuum was introduced in PG14. The failing test
was supposed to be a part of `multi_utilities.sql`, but it was included
in `pg14.sql` to avoid alternative output for PG13. See
ba62c0a148 (diff-ed03478f693155e2fe092e9ad356bf884dc097f554e8d75eff562d52bbcf7a75L255-L272)
for reference.
However, now that we don't support PG13 anymore, we can move this test
to `multi_utilities.sql`. Moving the test, plus inserting data before
running vacuum freeze such that the freeze is more meaningful and not
flaky, fixes the flakiness problem of the test.
We propagate `SECURITY LABEL [for provider] ON ROLE rolename IS
labelname` to the worker nodes.
We also make sure to run the relevant `SecLabelStmt` commands on a
newly added node by looking at roles found in `pg_shseclabel`.
See official docs for explanation on how this command works:
https://www.postgresql.org/docs/current/sql-security-label.html
This command stores the role label in the `pg_shseclabel` catalog table.
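An example of the propagated statement (the provider, role, and label are illustrative):

```sql
SECURITY LABEL FOR citus_test_label_provider ON ROLE some_role IS 'citus_unclassified';
```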
This commit also fixes the regex string in
`check_gucs_are_alphabetically_sorted.sh` script such that it escapes
the dot. Previously it was looking for all strings starting with "citus"
instead of "citus." as it should.
To test this feature, I currently make use of a special GUC to control
label provider registration in PG_init when creating the Citus extension.