Commit Graph

3019 Commits (b9c5f6be76529bc6c89d0ae296fbcb85f341f1f2)

Author SHA1 Message Date
eaydingol b9c5f6be76 disable nonmaindb interface 2025-02-18 16:12:09 +03:00
Karina 711aec80fa
Fix system_queries test to actually test the problem (#7613)
The test added in #7604 doesn't reach the `HasRangeTableRef` function
and thus doesn't test what it should.

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
2025-02-07 14:29:13 +00:00
Onur Tirtir 2d8be01853 Disable 2PC recovery while executing ALTER EXTENSION cmd during Citus upgrade tests
(cherry picked from commit b6b73e2f4c)
2025-02-04 16:53:32 +03:00
Onur Tirtir b6e3f39583 Fix flaky citus upgrade test
(cherry picked from commit 4cad81d643)
2025-02-04 16:49:12 +03:00
Naisila Puka 0a6adf4ccc
EXPLAIN generic_plan NOT supported in Citus (#7825)
We thought we provided support for this in

b8c493f2c4

However, the use of parameters in SQL is not supported in Citus. Since
generic plan queries use parameters, we can't support this for now.

Relevant PG16 commit https://github.com/postgres/postgres/commit/3c05284

Fixes #7813 with proper error message
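
As a rough illustration (the table name and setup are hypothetical, not taken from the PR), a generic plan keeps `$1` as an unresolved parameter, which Citus cannot plan, so the command is now rejected with a proper error:

```
CREATE TABLE dist_table (key int, value text);
SELECT create_distributed_table('dist_table', 'key');

-- GENERIC_PLAN keeps $1 as a parameter; Citus cannot plan parameterized
-- queries, so this now errors out instead of misbehaving.
EXPLAIN (GENERIC_PLAN) SELECT * FROM dist_table WHERE key = $1;
```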
2025-01-02 01:00:40 +03:00
Onur Tirtir 73411915a4
Avoid re-assigning the global pid for client backends and bg workers when the application_name changes (#7791)
DESCRIPTION: Fixes a crash that happens because of unsafe catalog access
when re-assigning the global pid after application_name changes.

When application_name changes, we don't actually need to
try re-assigning the global pid for external client backends because
application_name doesn't affect the global pid for such backends. Plus,
trying to re-assign the global pid for external client backends would
unnecessarily cause a catalog access when the cached local
node id is invalidated. However, accessing the catalog tables is
dangerous in certain situations, such as when we're not in a transaction
block. For the other types of backends, i.e., the Citus internal
backends, we do need to re-assign the global pid when the application_name
changes, because for such backends we simply extract the global pid
inherited from the originating backend from the application_name (which
the originating backend specifies when opening that connection), and this
doesn't require catalog access.
2024-12-23 14:01:53 +00:00
Colm McHugh c52f36019f [Bug Fix] [SEGFAULT] Querying distributed tables with window partition may cause segfault #7705
In the function MasterAggregateMutator(), when the original Node is a Var node, use makeVar() instead
of copyObject() when constructing the Var node for the target list of the combine query.
The varnullingrels field of the original Var node is ignored because it is not relevant for the
combine query; copying it caused the problem in issue 7705, where a coordinator query had
a Var with a reference to a non-existent join relation.
2024-11-06 19:26:29 +00:00
Parag Jain 5bad6c6a1d
[Bug Fix] : writing incorrect data to target Merge repartition Command (#7659)
We were writing incorrect data to the target table in some cases of the MERGE command: in the repartition case, when the source query is a plain RELATION, we were referring to an incorrect attribute number, which resulted in
this incorrect behavior.

Example :

![image](https://github.com/user-attachments/assets/a101cb36-7976-459c-befb-96a55a5b3dc1)

![image](https://github.com/user-attachments/assets/e5c83b7b-5b8e-4d79-a927-95684dc9ba49)

I have added tests for the fix as part of this PR. Thanks.
2024-09-12 21:16:39 -07:00
eaydingol 9e1852eac7
Check if the limit is null (#7665)
DESCRIPTION: Add a check to see if the given limit is null. 

Fixes a bug by checking if the limit given in the query is null when the
actual limit is computed with respect to the given offset.
Prior to this change, null was interpreted as 0 during the limit
calculation when both limit and offset were given.

Fixes #7663
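
A hedged sketch of the fixed case, using a made-up distributed table `items`: previously the NULL limit was treated as 0 once an OFFSET was present, so the query returned nothing.

```
CREATE TABLE items (id int);
SELECT create_distributed_table('items', 'id');
INSERT INTO items SELECT generate_series(1, 5);

-- LIMIT NULL means "no limit": with the fix this returns ids 3, 4 and 5,
-- whereas before it returned no rows.
SELECT id FROM items ORDER BY id LIMIT NULL OFFSET 2;
```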
2024-07-31 14:53:38 +03:00
Parag Jain 3c467e6e02
Support MERGE command for single_shard_distributed Target (#7643)
This PR makes the following change:
1. Enable the MERGE command for single_shard_distributed targets (see the sketch below).
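
A minimal sketch of the newly allowed shape, assuming single-shard (null distribution column) tables with made-up names:

```
CREATE TABLE target_ss (id int, val text);
CREATE TABLE source_ss (id int, val text);
SELECT create_distributed_table('target_ss', null);
SELECT create_distributed_table('source_ss', null);

-- MERGE with a single-shard distributed table as the target is now accepted.
MERGE INTO target_ss t
  USING source_ss s ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET val = s.val
  WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
```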
2024-07-16 08:08:44 -07:00
Nils Dijk e776a7ebbb
CI: move to github container registry (#7652)
We move the CI images to the GitHub container registry.

Given that we mostly (if not solely) run these containers on GitHub Actions
infra, it makes sense to have them hosted closer to where they are
needed.

Image changes: https://github.com/citusdata/the-process/pull/157
2024-07-12 11:26:38 +03:00
Jelte Fennema-Nio aaaf637a6b
Redo #7620: Fix merge command when insert value does not have source distributed column (#7627)
Related to issue #7619, #7620
The MERGE command fails when the source query is single-sharded, the source and
target are co-located, and the insert does not use the distribution key of the
source.

Example
```
CREATE TABLE source (id integer);
CREATE TABLE target (id integer );

-- let's distribute both table on id field
SELECT create_distributed_table('source', 'id');
SELECT create_distributed_table('target', 'id');

MERGE INTO target t
  USING ( SELECT 1 AS somekey
          FROM source
        WHERE source.id = 1) s
  ON t.id = s.somekey
  WHEN NOT MATCHED
  THEN INSERT (id)
    VALUES (s.somekey)

ERROR:  MERGE INSERT must use the source table distribution column value
HINT:  MERGE INSERT must use the source table distribution column value
```

Author's opinion: if the join is not between the source and target distribution
columns, we should not force the user to use the source distribution column while
inserting the value of the target distribution column.

Fix: if the user is not using the distribution key of the source for insertion, let's
not push the query down to the workers and not force the user to use the source
distribution column if it is not part of the join.

This reverts commit fa4fc0b372.

Co-authored-by: paragjain <paragjain@microsoft.com>
2024-06-17 14:07:25 +00:00
Jelte Fennema-Nio fa4fc0b372
Revert rebase merge of #7620 (#7626)
Because we want to track PR numbers and to make backporting easy, we
(pretty much always) use squash merges when merging to master. We
accidentally used a rebase merge for PR #7620. This reverts those
changes so we can redo the merge using a squash merge.

This reverts all commits from eedb607c to 9e71750fc.
2024-06-17 15:46:00 +02:00
paragjain 9e71750fcd fixing flakiness in test 2024-06-15 14:55:36 -07:00
paragjain e62ae64d00 some more 2024-06-15 14:55:36 -07:00
paragjain 76f68f47c4 removing flakiness from test 2024-06-15 14:55:36 -07:00
Jelte Fennema-Nio d5231c34ab Revert "Try to fix failure"
This reverts commit 89f7217660.
2024-06-15 14:55:36 -07:00
Jelte Fennema-Nio f883cfdd77 Try to fix failure 2024-06-15 14:55:36 -07:00
paragjain 7c8a366ba2 some more 2024-06-15 14:55:36 -07:00
paragjain 493140287a fix some indent 2024-06-15 14:55:36 -07:00
paragjain ec25b433d4 adding update and delete tests 2024-06-15 14:55:36 -07:00
paragjain eedb607cd5 merge command fix 2024-06-15 14:55:36 -07:00
Jelte Fennema-Nio 8c9de08b76
Fix CI issues after Github Actions networking changes (#7624)
For some reason, using localhost in our hba file doesn't have the
intended effect anymore in our GitHub Actions runners, probably because
of some networking change (IPv6 maybe) or some change in the
`/etc/hosts` file.

Replacing localhost with the equivalent loopback IPv4 and IPv6 addresses
resolved this issue.
2024-06-14 16:20:23 +02:00
Gürkan İndibay 0ab42e7a80
Adds null check for node in HasRangeTableRef (#7609)
DESCRIPTION: Adds null check for node in HasRangeTableRef to prevent
errors
2024-05-28 11:03:38 +03:00
Jelte Fennema-Nio a0151aa31d
Greatly speed up "\d tablename" on servers with many tables (#7577)
DESCRIPTION: Fix performance issue when using "\d tablename" on a server
with many tables

We introduce a filter to every query on pg_class to automatically remove
shards. This is useful to make sure \d and PgAdmin are not cluttered
with shards. However, the way we were introducing this filter was via
`securityQuals`, which can have a negative impact on query performance.

On clusters with 100k+ tables this could cause a simple "\d tablename"
command to take multiple seconds, because a skipped optimization by
Postgres causes a full table scan. This change introduces the filter in
the regular `quals` list instead of in `securityQuals`, which causes
Postgres to use the intended optimization again.

For reference, this was initially reported as a Postgres issue by me:

https://www.postgresql.org/message-id/flat/4189982.1712785863%40sss.pgh.pa.us#b87421293b362d581ea8677e3bfea920
2024-04-16 17:26:12 +02:00
Karina 41e2af8ff5
Use expecteddir option in _run_pg_regress() (#7582)
Fix the check-arbitrary-configs test failures with the current REL_16_STABLE.
This is the same problem as described in #7573; I missed the pg_regress call
in _run_pg_regress() in that PR.

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
2024-04-16 08:44:47 +00:00
Jelte Fennema-Nio 110b4192b2
Fix PG upgrades when invalid rebalance strategies exist (#7580)
DESCRIPTION: Fix PG upgrades when invalid rebalance strategies exist

Without this change an upgrade of a cluster with an invalid rebalance
strategy would fail with an error like this:
```
cache lookup failed for shard_cost_function with oid 6077337
CONTEXT:  SQL statement "SELECT citus_validate_rebalance_strategy_functions(
        NEW.shard_cost_function,
        NEW.node_capacity_function,
        NEW.shard_allowed_on_node_function)"
PL/pgSQL function citus_internal.pg_dist_rebalance_strategy_trigger_func() line 5 at PERFORM
SQL statement "INSERT INTO pg_catalog.pg_dist_rebalance_strategy SELECT
        name,
        default_strategy,
        shard_cost_function::regprocedure::regproc,
        node_capacity_function::regprocedure::regproc,
        shard_allowed_on_node_function::regprocedure::regproc,
        default_threshold,
        minimum_threshold,
        improvement_threshold
    FROM public.pg_dist_rebalance_strategy"
PL/pgSQL function citus_finish_pg_upgrade() line 115 at SQL statement
```

This fixes that by disabling the trigger and simply re-inserting the
invalid rebalance strategy without checking. We could also silently
remove it, but this seems nicer.
2024-04-15 14:26:33 +00:00
Onur Tirtir 3586aab17a
Allow providing "host" parameter via citus.node_conninfo (#7541)
And when that is the case, directly use it as the "host" parameter for the
connections between nodes, and use the "hostname" provided in
pg_dist_node / pg_dist_poolinfo as "hostaddr" to avoid host name lookup.

This is to avoid relying on DNS resolution (and/or setting up DNS names
for each host in the cluster). This already works currently when using
IPs in the hostname. The only use of setting host is that you can then
use sslmode=verify-full and it will validate that the hostname matches
the certificate provided by the node you're connecting to.

It would be more flexible to make this a per-node setting, but that
requires SQL changes. And we'd like to backport this change, and
backporting such a SQL change would be quite hard while backporting this
change would be very easy. And in many setups, a different hostname for
TLS validation is actually not needed. The reason for that is
query-from-any-node: with query-from-any-node all nodes usually have a
certificate that is valid for the same "cluster hostname", either using
a wildcard cert or a Subject Alternative Name (SAN). Because if you load
balance across nodes you don't know which node you're connecting to, but
you still want TLS validation to do its job. So with this change you
can use this same "cluster hostname" for TLS validation within the
cluster. Obviously this means you don't validate that you're connecting
to a particular node, just that you're connecting to one of the nodes in
the cluster, but that should be fine from a security perspective (in
most cases).

Note to self: This change requires updating

https://docs.citusdata.com/en/latest/develop/api_guc.html#citus-node-conninfo-text.

DESCRIPTION: Allows overwriting host name for all inter-node connections
by supporting "host" parameter in citus.node_conninfo
2024-04-15 09:51:11 +00:00
Karina 41d99249d9
Use expecteddir option when running vanilla tests (#7573)
In PostgreSQL 16, a new option, expecteddir, was introduced to pg_regress.
Together with the fix in
[196eeb6b](https://github.com/postgres/postgres/commit/196eeb6b), it
causes a check-vanilla failure if expecteddir is not specified.

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
2024-04-10 16:08:54 +00:00
Onur Tirtir 3929a5b2a6
Fix incorrect "VALID UNTIL" assumption made for roles in node activation (#7534)
Fixes https://github.com/citusdata/citus/issues/7533.

DESCRIPTION: Fixes incorrect `VALID UNTIL` setting assumption made for
roles when syncing them to new nodes
2024-03-20 11:38:33 +00:00
Emel Şimşek fdd658acec
Fix crash caused by some form of ALTER TABLE ADD COLUMN statements. (#7522)
DESCRIPTION: Fixes a crash caused by some forms of ALTER TABLE ADD COLUMN
statements. When adding multiple columns, if one of the ADD COLUMN
statements contains a FOREIGN KEY constraint omitting the referenced
columns in the statement, a SEGFAULT occurs.

For instance, the following statement results in a crash:

```
ALTER TABLE lt ADD COLUMN new_col1 bool,
               ADD COLUMN new_col2 int REFERENCES rt;
```

Fixes #7520.
2024-03-20 11:06:05 +03:00
eaydingol 8afa2d0386
Change the order in which the locks are acquired (#7542)
This PR changes the order in which the locks are acquired (for the
target and reference tables), when a modify request is initiated from a
worker node that is not the "FirstWorkerNode".


To prevent concurrent writes, locks are acquired on the first worker
node for the replicated tables. When the update statement originates
from the first worker node, it acquires the lock on the reference
table(s) first, followed by the target table(s). However, if the update
statement is initiated in another worker node, the lock requests are
sent to the first worker in a different order. This PR unifies the
modification order on the first worker node. With the third commit,
independent of the node that received the request, the locks are
acquired for the modified table and then the reference tables on the
first node.

The first commit shows a sample output for the test prior to the fix. 

Fixes #7477

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2024-03-10 10:20:08 +03:00
copetol 12f56438fc
Fix segfault when using certain DO block in function (#7554)
When using a CASE WHEN expression in the body
of a function that is called from a DO block, a segmentation
fault occurred. This fixes that.
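
A heavily hedged reconstruction of the crashing pattern (the function, table, and body are made up; this is not the exact reproducer from #7381):

```
CREATE TABLE events (id int);
SELECT create_distributed_table('events', 'id');

CREATE OR REPLACE FUNCTION classify(x int) RETURNS int AS $$
  SELECT CASE WHEN x > 0 THEN 1 ELSE 0 END;
$$ LANGUAGE sql;

-- Calling a CASE WHEN function like this from a DO block previously
-- could hit the segfault; it now runs without crashing.
DO $$
BEGIN
  PERFORM classify(id) FROM events;
END;
$$;
```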

Fixes #7381

---------

Co-authored-by: Konstantin Morozov <vzbdryn@yahoo.com>
2024-03-08 14:21:42 +01:00
Gürkan İndibay 51009d0191
Add support for alter/drop role propagation from non-main databases (#7461)
DESCRIPTION: Adds support for distributed `ALTER/DROP ROLE` commands
from the databases where Citus is not installed

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-02-28 08:58:28 +00:00
Onur Tirtir f4242685e3
Add failure handling for CREATE DATABASE commands (#7483)
In the preprocess phase, we save the original database name, replace
the dbname field of CreatedbStmt with a temporary name (to let Postgres
create the database with the temporary name locally), and then
insert a cleanup record for the temporary database name on all
nodes **(\*\*)**.

In the postprocess phase, we first rename the temporary database
back to its original name on the local node and then return a list of
distributed DDL jobs i) to create the database with the temporary
name and ii) to rename it back to its original name on the other
nodes. That way, if CREATE DATABASE fails on any of the nodes, the
temporary database will be cleaned up by the cleanup records that
we inserted in the preprocess phase, and we won't leak any database
with the name that the user intended to use.

Solves the problem documented in
https://github.com/citusdata/citus/issues/7369
for CREATE DATABASE commands.

**(\*\*):** To ensure that we insert cleanup records on all nodes,
with this PR we also start requiring the coordinator to be in the
metadata, because otherwise we would skip inserting a cleanup record
for the coordinator.
2024-02-23 17:02:32 +00:00
Onur Tirtir 9ddee5d02a
Test that we check unsupported options for CREATE DATABASE from non-main dbs (#7532)
When adding CREATE/DROP DATABASE propagation in #7240, luckily
we added the EnsureSupportedCreateDatabaseCommand() check to the
deparser too, just to be on the safe side. That way, today CREATE
DATABASE commands from non-main dbs don't silently allow unsupported
options.

I wasn't aware of this when merging #7439 and hence wanted to add
a test so that we don't mistakenly remove that check from the deparser
in the future.
2024-02-23 10:37:11 +00:00
eaydingol 3509b7df5a
Add support for SECURITY LABEL on ROLE propagation from non-main databases (#7525)
DESCRIPTION: Adds support for distributed "SECURITY LABEL on ROLE"
commands from the databases where Citus is not installed.
2024-02-23 09:54:19 +03:00
Gürkan İndibay 211415dd4b
Removes granted by statement to fix flaky test errors (#7526)
Fix for #7519.
In the metadata sync phase, grant statements for roles are fetched from
catalog tables and propagated.
However, in some cases GRANT .. WITH ADMIN OPTION clauses execute after
the GRANTED BY statements, which causes the error in #7519.
We will fix this issue with the grantor propagation task in the project.
2024-02-21 18:37:25 +03:00
Halil Ozan Akgül 852bcc5483
Add support for create / drop database propagation from non-main databases (#7439)
DESCRIPTION: Adds support for distributed `CREATE/DROP DATABASE `
commands from the databases where Citus is not installed

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-02-21 10:44:01 +00:00
Gürkan İndibay b3ef1b7e39
Add support for grant on database propagation from non-main databases (#7443)
DESCRIPTION: Adds support for distributed `GRANT .. ON DATABASE TO USER`
commands from the databases where Citus is not installed
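
A small hedged example of the kind of statement that now propagates when run while connected to a non-main database (the database and role names are made up):

```
-- Issued from a database where Citus is not installed; the grant is
-- nevertheless distributed to all nodes.
GRANT CONNECT, CREATE ON DATABASE app_db TO app_user;
```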

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-02-21 13:14:58 +03:00
Gürkan İndibay 2cbfdbfa46
Adds Grant Role support from non-main db (#7404)
DESCRIPTION: Adds support for distributed role-membership management
commands from the databases where Citus is not installed (`GRANT <role>
TO <role>`)

This PR also refactors the code path that allows executing some of the
node-wide commands so that we send the deparsed query string to other
nodes instead of the `queryString` passed into the utility hook.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-02-19 17:53:27 +03:00
Gürkan İndibay 9a0cdbf5af
Fixes granted by cascade/restrict statements for revoke (#7517)
DESCRIPTION: Fixes incorrect propagating of `GRANTED BY` and
`CASCADE/RESTRICT` clauses for `REVOKE` statements

There are two issues fixed in this PR:
1. The GRANTED BY clause now appears for REVOKE statements as well.
2. The CASCADE/RESTRICT clause now appears after GRANTED BY.

Since GRANTED BY clauses were not emitted for these statements before, this bug
hasn't been visible until now. However, after enabling the GRANTED BY
clause for REVOKE, an ordering problem arose, so this PR fixes the
ordering of the CASCADE/RESTRICT clause as well.
In summary, this PR makes GRANTED BY clauses work properly, with the correct
order of clauses.
Both errors can be verified with a single statement:
REVOKE dist_role_3 FROM non_dist_role_3 GRANTED BY test_admin_role
CASCADE;
2024-02-19 15:44:21 +03:00
Onur Tirtir 74b55d0546
Enforce using werkzeug 2.3.7 for failure tests and update Postgres versions to latest minors (#7491)
Let's use version 2.3.7 to fix the following error, as we do in the Docker
images created in the https://github.com/citusdata/the-process/ repo.
```
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/home/onurctirtir/.local/share/virtualenvs/regress-ffZKpSmO/lib/python3.9/site-packages/werkzeug/urls.py)
```

Changing the werkzeug version required rebuilding the Pipfile.lock file in
src/test/regress. Before updating this Pipfile.lock file, we want to
make sure that the versions specified there don't break any tests. To
ensure that this is the case,
https://github.com/citusdata/the-process/pull/155 synchronizes the
requirements.txt file based on the new Pipfile.lock, and hence this PR
updates the test image suffix accordingly.

Also, while working on https://github.com/citusdata/the-process/pull/155,
I had to update the Postgres versions to the latest minors to make the image
builds pass again, and updating Postgres versions in the images requires
updating Postgres versions in this repo too. While doing that, we also
update the Postgres version used in the devcontainer.
2024-02-16 14:38:32 +00:00
eaydingol 15a3adebe8
Support SECURITY LABEL ON ROLE from any node (#7508)
DESCRIPTION: Propagates SECURITY LABEL ON ROLE statement from any node
2024-02-15 20:34:15 +03:00
Gürkan İndibay 59da0633bb
Fixes invalid grantor field parsing in grant role propagation (#7451)
DESCRIPTION: Resolves an issue that disrupts distributed GRANT
statements with the grantor option

This PR solves 3 issues:
1. Correcting the erroneous appending of multiple GRANTED BY clauses in the
deparser.
2. Adding support for the grantor (GRANTED BY) in grant role propagation.
3. Implementing grantor (GRANTED BY) support during the metadata sync
grant role propagation phase.

Limitations: Currently, the grantor must be created prior to the
metadata sync phase. During metadata sync, the creation of the
grantor and the grants given by that role cannot both be performed, as the
grantor role is not detected during the dependency resolution phase.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-02-15 08:27:29 +00:00
eaydingol f01c5f2593
Move remaining citus_internal functions (#7478)
Moves the following functions to the Citus internal schema: 

citus_internal_local_blocked_processes
citus_internal_global_blocked_processes
citus_internal_mark_node_not_synced
citus_internal_unregister_tenant_schema_globally
citus_internal_update_none_dist_table_metadata
citus_internal_update_placement_metadata
citus_internal_update_relation_colocation
citus_internal_start_replication_origin_tracking
citus_internal_stop_replication_origin_tracking
citus_internal_is_replication_origin_tracking_active


#7405

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2024-02-07 16:58:17 +03:00
Filip Sedlák 6869b3ad10
Fail early when shard can't be safely moved to a new node (#7467)
DESCRIPTION: citus_move_shard_placement now fails early when a shard
cannot be safely moved

The implementation is quite simplistic:
`citus_move_shard_placement(...)` will fail with an error if there's any
new node in the cluster that doesn't have reference tables yet.

It could have been finer-grained, i.e. erroring only when trying to move
a shard to an uninitialized node. Looking at the related functions -
`replicate_reference_tables()` or `citus_rebalance_start()` - I think
it's acceptable behaviour. These other functions also treat "any"
uninitialized node as a temporary anomaly.

Fixes #7426
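
A hedged illustration of the new behavior, with a made-up shard id and node names:

```
-- Now fails up front with an error if any node in the cluster is still
-- missing reference table placements (e.g. run replicate_reference_tables()
-- first to initialize such nodes).
SELECT citus_move_shard_placement(102008, 'worker-1', 5432, 'worker-2', 5432);
```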

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2024-02-07 12:04:52 +00:00
eaydingol 594cb6f274
Move more citus internal functions (#7473)
Moves the following functions:

 citus_internal_delete_colocation_metadata 
 citus_internal_delete_partition_metadata 
 citus_internal_delete_placement_metadata 
 citus_internal_delete_shard_metadata 
 citus_internal_delete_tenant_schema
2024-01-31 23:00:04 +03:00
eaydingol d05174093b
Move citus internal functions (#7470)
Move more functions to the citus_internal schema; the list:

citus_internal_add_placement_metadata
citus_internal_add_shard_metadata
citus_internal_add_tenant_schema
citus_internal_adjust_local_clock_to_remote
citus_internal_database_command

#7405
2024-01-31 11:45:19 +00:00
Onur Tirtir 3ce731d497
Make multi_metadata_sync runnable via run_test.py (#7472) 2024-01-31 09:50:16 +00:00