Compare commits

..

65 Commits

Author SHA1 Message Date
eaydingol f3cb3d99ee
Bump Citus to 12.1.7 (#7894)
Bump Citus to 12.1.7
2025-02-07 14:58:55 +03:00
eaydingol bae20578d4
Add changelog for 12.1.7 (#7889)
Add changelog entries for 12.1.7

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2025-02-07 10:56:08 +03:00
Onur Tirtir 02b3c009e7 Fix mixed Citus upgrade tests (#7218)
When testing rolling Citus upgrades, the coordinator should not be
upgraded until all the workers are upgraded.

---------

Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
(cherry picked from commit 27ac44eb2a)
2025-02-04 01:24:57 +03:00
Onur Tirtir 352516e619 Avoid publishing artifacts with conflicting names
.. as documented in actions/upload-artifact#480.

(cherry picked from commit 26f16a7654)
2025-02-03 10:13:59 +03:00
Onur Tirtir 66d35b35f8 Upgrade download-artifacts action to 4.1.8
(cherry picked from commit cbe0de33a6)
2025-02-03 10:13:59 +03:00
Onur Tirtir 296b623093 Upgrade upload-artifacts action to 4.6.0
(cherry picked from commit b886cfa223)
2025-02-03 10:13:59 +03:00
Naisila Puka 71d921707d Fix foreign_key_to_reference_shard_rebalance test (#7826)
foreign_key_to_reference_shard_rebalance failed because the partition
for year 2024 does not exist; fixed by adding a default partition.

Replaces https://github.com/citusdata/citus/pull/7396 by adding a rule
that allows properly testing foreign_key_to_reference_shard_rebalance
via run_test.py.

Closes #7396

Co-authored-by: chuhx <148182736+cstarc1@users.noreply.github.com>
(cherry picked from commit 968ac74cde)

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2025-01-07 23:03:18 +03:00
Teja Mupparti 39e890d9ce For scenarios such as Bug 3697586 (server crashes when assigning a distributed transaction), raise an ERROR instead of crashing 2025-01-07 23:03:18 +03:00
Onur Tirtir cb31a649e9 Avoid re-assigning the global pid for client backends and bg workers when the application_name changes (#7791)
DESCRIPTION: Fixes a crash that happens because of unsafe catalog access
when re-assigning the global pid after application_name changes.

When application_name changes, we don't actually need to try
re-assigning the global pid for external client backends because
application_name doesn't affect the global pid for such backends. Plus,
trying to re-assign the global pid for external client backends would
unnecessarily perform a catalog access when the cached local node id is
invalidated. However, accessing the catalog tables is dangerous in
certain situations, like when we're not in a transaction block. For the
other types of backends, i.e., the Citus internal backends, we do need
to re-assign the global pid when the application_name changes, because
for such backends we simply extract the global pid inherited from the
originating backend from the application_name (specified by the
originating backend when opening that connection), and this doesn't
require catalog access.
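
A hedged illustration of the intended behavior for external client
backends (assuming the `citus_backend_gpid()` UDF is available in this
version):
```sql
SELECT citus_backend_gpid();
SET application_name TO 'my_reporting_tool';  -- hypothetical name
SELECT citus_backend_gpid();  -- same gpid; no re-assignment needed
```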

(cherry picked from commit 73411915a4)
2024-12-24 09:34:08 +03:00
Emel Şimşek c44682a7d0
Bump Citus Version to 12.1.6 (#7744)
Bump Citus Version to 12.1.6
2024-11-14 15:56:38 +03:00
Emel Şimşek d4f1635775
Add changelog entries for 12.1.6 (#7742)
Add changelog entries for 12.1.6.
2024-11-14 13:41:34 +03:00
Emel Şimşek 0f1aa0e16a
[Bug Fix] Query on distributed tables with window partition may cause segfault #7705 (#7718) (#7740)

This PR is a proposed fix for issue
[7705](https://github.com/citusdata/citus/issues/7705). The following is
the background and rationale for the fix (please refer to
[7705](https://github.com/citusdata/citus/issues/7705) for context):

The `varnullingrels` field was introduced to the Var node struct
definition in Postgres 16. Its purpose is to associate a variable with
the set of outer join relations that can cause the variable to be NULL.
The `varnullingrels` for the variable
`"gianluca_camp_test"."start_timestamp"` in the problem query is 3,
because the variable is coming from the inner (nullable) side of an
outer join and 3 is the RT index (aka relid) of that outer join. The
problem occurs when the Postgres planner attempts to plan the combine
query. The format of a combine query is:
```
SELECT <targets>
FROM   pg_catalog.citus_extradata_container();
```
There is only one relation in a combine query, so no outer joins are
present, but the non-empty `varnullingrels` field causes the Postgres
planner to access structures for a non-existent relation. The source of
the problem is that, when creating the target list for the combine
query, function MasterAggregateMutator() uses copyObject() to construct
a Var node before setting the master table ID, and this copies over the
non-empty `varnullingrels` field in the case of the
`"gianluca_camp_test"."start_timestamp"` var. The proposed solution is
to have MasterAggregateMutator() use makeVar() instead of copyObject(),
and only set the fields that make sense for the combine query: var
type, collation and type modifier. The `varnullingrels` field can be
left empty because there is only one relation in the combine query.

A new regress test issue_7705.sql is added to exercise the fix. The
issue is not specific to window functions, any target expression that
cannot be pushed down and contains at least one column from the inner
side of a left outer join (so has a non-empty varnullingrels field) can
cause the same issue.
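
A hedged sketch of the failing shape, with hypothetical table and column
names (the partition column is assumed not to be the distribution
column, so the window computation lands in the combine query):
```sql
SELECT d1.id,
       max(d2.start_timestamp) OVER (PARTITION BY d2.category)
FROM dist_table_a d1
LEFT JOIN dist_table_b d2 ON d1.id = d2.id;
```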

More about Citus combine queries
[here](https://github.com/citusdata/citus/tree/main/src/backend/distributed#combine-query-planner).
More about Postgres varnullingrels
[here](https://github.com/postgres/postgres/blob/master/src/backend/optimizer/README).

(cherry picked from commit 248ff5d52a)

Co-authored-by: Colm <colm.mchugh@gmail.com>
2024-11-13 19:37:51 +03:00
Emel Şimşek 686d2b46ca
Propagates SECURITY LABEL ON ROLE stmt (#7304) (#7735)
Propagates SECURITY LABEL ON ROLE stmt (https://github.com/citusdata/citus/pull/7304)
We propagate `SECURITY LABEL [for provider] ON ROLE rolename IS
labelname` to the worker nodes.
We also make sure to run the relevant `SecLabelStmt` commands on a
newly added node by looking at roles found in `pg_shseclabel`.

See official docs for explanation on how this command works:
https://www.postgresql.org/docs/current/sql-security-label.html
This command stores the role label in the `pg_shseclabel` catalog table.
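
For illustration, a hedged example (provider and role names are
hypothetical, and a label provider must be loaded for the command to
succeed):
```sql
SECURITY LABEL FOR my_provider ON ROLE app_user IS 'sensitive';

-- The label lands in the shared catalog that newly added nodes are
-- synced from:
SELECT provider, label
FROM pg_shseclabel
WHERE classoid = 'pg_authid'::regclass;
```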

This commit also fixes the regex string in
`check_gucs_are_alphabetically_sorted.sh` script such that it escapes
the dot. Previously it was looking for all strings starting with "citus"
instead of "citus." as it should.

To test this feature, I currently make use of a special GUC to control
label provider registration in PG_init when creating the Citus extension.

(cherry picked from commit 0d1f18862b)

Co-authored-by: Naisila Puka <37271756+naisila@users.noreply.github.com>
2024-11-13 14:21:08 +03:00
Hanefi Onaldi 15ecc37ecd
Bump Citus to 12.1.5 2024-07-17 15:11:38 +03:00
Hanefi Onaldi 5c2ef8e2d8
Add changelog entries for 12.1.5
(cherry picked from commit 5c097860aa)
2024-07-17 15:11:38 +03:00
Parag Jain 6349f2d52d
Support MERGE command for single_shard_distributed Target (#7643)
This PR has the following changes:
1. Enable MERGE command for single_shard_distributed targets (see the sketch below).
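
A hedged sketch of such a MERGE (table names hypothetical; both tables
assumed single-shard distributed, e.g. via schema-based sharding):
```sql
MERGE INTO tgt t
USING src s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
```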

(cherry picked from commit 3c467e6e02)
2024-07-17 15:11:38 +03:00
Nils Dijk f60c4cbd19
bump postgres versions in CI and dev (#7655)
Upgrade postgres versions to:
 - 14.12
 - 15.7
 - 16.3

Depends on https://github.com/citusdata/the-process/pull/158

(cherry picked from commit accb7d09f7)
2024-07-17 15:11:38 +03:00
Gürkan İndibay f0ea07a813
Removes el/7 and ol/7 as runners (#7650)
Removes el/7 and ol/7 as runners and updates the checkout action to v4

We use EL/7 and OL/7 runners to test packaging for these distributions.
However, for the past two weeks, we've encountered errors during the
checkout step in the pipelines. The error message is as follows:
```
/__e/node20/bin/node: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /__e/node20/bin/node)
```
The glibc version within the EL/7 and OL/7 Docker images is 2.17, and we
cannot upgrade it. Therefore, we need to remove these images from the
packaging test pipelines. Consequently, we will no longer verify that the
code builds for EL/7 and OL/7.

However, we are not using these packaging images as runners within the
packaging infrastructure, so we can continue to use these images for
packaging.

Additional info: I learned that the Marlin team fully dropped el/7
support, so we will drop it in further releases as well.

(cherry picked from commit c603c3ed74)
2024-07-17 15:11:38 +03:00
Nils Dijk 9dcfcb92ff
CI: move to github container registry (#7652)
We move the CI images to the github container registry.

Given we mostly (if not solely) run these containers on github actions
infra it makes sense to have them hosted closer to where they are
needed.

Image changes: https://github.com/citusdata/the-process/pull/157

(cherry picked from commit e776a7ebbb)
2024-07-17 14:57:50 +03:00
paragjain caee20ad7c fixing expected file of multi_move_mx test 2024-06-18 16:49:39 +02:00
Onur Tirtir d9635609f4 Fix flaky multi_mx_node_metadata.sql test (#7317)
Fixes the flaky test that results in the following diff:
```diff
--- /__w/citus/citus/src/test/regress/expected/multi_mx_node_metadata.out.modified	2023-11-01 14:22:12.890476575 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_mx_node_metadata.out.modified	2023-11-01 14:22:12.914476657 +0000
@@ -840,24 +840,26 @@
 (1 row)

 \c :datname - - :master_port
 SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
   datname
 ------------
  db_to_drop
 (1 row)

 DROP DATABASE db_to_drop;
+ERROR:  database "db_to_drop" is being accessed by other users
 SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
   datname
 ------------
-(0 rows)
+ db_to_drop
+(1 row)

 -- cleanup
 DROP SEQUENCE sequence CASCADE;
 NOTICE:  drop cascades to default value for column a of table reference_table
```

(cherry picked from commit 9867c5b949)
2024-06-18 16:49:39 +02:00
Jelte Fennema-Nio 4f0053ed6d Redo #7620: Fix merge command when insert value does not have source distributed column (#7627)
Related to issue #7619, #7620
The MERGE command fails when the source query is single-sharded, the
source and target are co-located, and the insert is not using the
distribution key of the source.

Example
```
CREATE TABLE source (id integer);
CREATE TABLE target (id integer );

-- let's distribute both table on id field
SELECT create_distributed_table('source', 'id');
SELECT create_distributed_table('target', 'id');

MERGE INTO target t
  USING ( SELECT 1 AS somekey
          FROM source
        WHERE source.id = 1) s
  ON t.id = s.somekey
  WHEN NOT MATCHED
  THEN INSERT (id)
    VALUES (s.somekey)

ERROR:  MERGE INSERT must use the source table distribution column value
HINT:  MERGE INSERT must use the source table distribution column value
```

Author's opinion: if the join is not between the source and target
distributed columns, we should not force the user to use the source
distributed column while inserting the value of the target distributed
column.

Fix: if the user is not inserting the source's distribution key, don't
push the query down to the workers, and don't force the user to use the
source distributed column when it is not part of the join.

This reverts commit fa4fc0b372.

Co-authored-by: paragjain <paragjain@microsoft.com>
(cherry picked from commit aaaf637a6b)
2024-06-18 16:49:39 +02:00
Jelte Fennema-Nio 3594bd7ac0 Fix CI issues after Github Actions networking changes (#7624)
For some reason using localhost in our hba file doesn't have the
intended effect anymore in our Github Actions runners, probably because
of some networking change (IPv6 maybe) or some change in the
`/etc/hosts` file.

Replacing localhost with the equivalent loopback IPv4 and IPv6 addresses
resolved this issue.

(cherry picked from commit 8c9de08b76)
2024-06-18 16:49:39 +02:00
Gürkan İndibay 7e0dc18b22
Bump Citus version to 12.1.4 (#7610) 2024-05-29 11:35:08 +03:00
Gürkan İndibay 4e838a471a
Adds null check for node in HasRangeTableRef (#7604)
DESCRIPTION: Adds null check for node in HasRangeTableRef to prevent
errors

When executing the query below, users encountered an error due to a null
Node object. This PR adds a null check to handle this error.

Query:
```sql
select
    ct.conname as constraint_name,
    a.attname as column_name,
    fc.relname as foreign_table_name,
    fns.nspname as foreign_table_schema,
    fa.attname as foreign_column_name
from
    (SELECT ct.conname, ct.conrelid, ct.confrelid, ct.conkey, ct.contype,
ct.confkey, generate_subscripts(ct.conkey, 1) AS s
       FROM pg_constraint ct
    ) AS ct
    inner join pg_class c on c.oid=ct.conrelid
    inner join pg_namespace ns on c.relnamespace=ns.oid
    inner join pg_attribute a on a.attrelid=ct.conrelid and a.attnum =
ct.conkey[ct.s]
    left join pg_class fc on fc.oid=ct.confrelid
    left join pg_namespace fns on fc.relnamespace=fns.oid
    left join pg_attribute fa on fa.attrelid=ct.confrelid and fa.attnum =
ct.confkey[ct.s]
where
    ct.contype='f'
    and c.relname='table1'
    and ns.nspname='schemauser'
order by
    fns.nspname, fc.relname, a.attnum
;
```

Error:
```
#0  HasRangeTableRef (node=0x0, varno=varno@entry=0x7ffe18cc3674) at worker/worker_shard_visibility.c:507
507             if (IsA(node, RangeTblRef))
#0  HasRangeTableRef (node=0x0, varno=varno@entry=0x7ffe18cc3674) at worker/worker_shard_visibility.c:507
#1  0x0000561b0aae390e in expression_tree_walker_impl (node=0x561b0d19cc78, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2091
#2  0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#3  0x0000561b0aae3e09 in expression_tree_walker_impl (node=0x561b0d19cd68, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=context@entry=0x7ffe18cc3674)
    at nodeFuncs.c:2405
#4  0x0000561b0aae3945 in expression_tree_walker_impl (node=0x561b0d19d0f8, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2111
#5  0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#6  0x0000561b0aae3e09 in expression_tree_walker_impl (node=0x561b0d19cb38, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=context@entry=0x7ffe18cc3674)
    at nodeFuncs.c:2405
#7  0x0000561b0aae396d in expression_tree_walker_impl (node=0x561b0d19d198, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2127
#8  0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#9  0x0000561b0aae3ef7 in expression_tree_walker_impl (node=0x561b0d183e88, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2464
#10 0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#11 0x0000561b0aae3ed3 in expression_tree_walker_impl (node=0x561b0d184278, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2460
#12 0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#13 0x0000561b0aae3ed3 in expression_tree_walker_impl (node=0x561b0d184668, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2460
#14 0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#15 0x0000561b0aae3ed3 in expression_tree_walker_impl (node=0x561b0d184f68, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=0x7ffe18cc3674)
    at nodeFuncs.c:2460
#16 0x00007f2a73249f26 in HasRangeTableRef (node=<optimized out>, varno=<optimized out>) at worker/worker_shard_visibility.c:513
#17 0x0000561b0aae3e09 in expression_tree_walker_impl (node=0x7f2a68010148, walker=walker@entry=0x7f2a73249f0a <HasRangeTableRef>, context=context@entry=0x7ffe18cc3674)
    at nodeFuncs.c:2405
#18 0x00007f2a7324a0eb in FilterShardsFromPgclass (node=node@entry=0x561b0d185de8, context=context@entry=0x0) at worker/worker_shard_visibility.c:464
#19 0x00007f2a7324a5ff in HideShardsFromSomeApplications (query=query@entry=0x561b0d185de8) at worker/worker_shard_visibility.c:294
#20 0x00007f2a731ed7ac in distributed_planner (parse=0x561b0d185de8, 
    query_string=0x561b0d009478 "select\n    ct.conname as constraint_name,\n    a.attname as column_name,\n    fc.relname as foreign_table_name,\n    fns.nspname as foreign_table_schema,\n    fa.attname as foreign_column_name\nfrom\n    (S"..., cursorOptions=<optimized out>, boundParams=0x0) at planner/distributed_planner.c:237
#21 0x00007f2a7311a52a in pgss_planner (parse=0x561b0d185de8, 
    query_string=0x561b0d009478 "select\n    ct.conname as constraint_name,\n    a.attname as column_name,\n    fc.relname as foreign_table_name,\n    fns.nspname as foreign_table_schema,\n    fa.attname as foreign_column_name\nfrom\n    (S"..., cursorOptions=2048, boundParams=0x0) at pg_stat_statements.c:953
#22 0x0000561b0ab65465 in planner (parse=parse@entry=0x561b0d185de8, 
    query_string=query_string@entry=0x561b0d009478 "select\n    ct.conname as constraint_name,\n    a.attname as column_name,\n    fc.relname as foreign_table_name,\n    fns.nspname as foreign_table_schema,\n    fa.attname as foreign_column_name\nfrom\n    (S"..., cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)
    at planner.c:279
#23 0x0000561b0ac53aa3 in pg_plan_query (querytree=querytree@entry=0x561b0d185de8, 
    query_string=query_string@entry=0x561b0d009478 "select\n    ct.conname as constraint_name,\n    a.attname as column_name,\n    fc.relname as foreign_table_name,\n    fns.nspname as foreign_table_schema,\n    fa.attname as foreign_column_name\nfrom\n    (S"..., cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)
    at postgres.c:904
#24 0x0000561b0ac53b71 in pg_plan_queries (querytrees=0x7f2a68012878, 
    query_string=query_string@entry=0x561b0d009478 "select\n    ct.conname as constraint_name,\n    a.attname as column_name,\n    fc.relname as foreign_table_name,\n    fns.nspname as foreign_table_schema,\n    fa.attname as foreign_column_name\nfrom\n    (S"..., cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)
    at postgres.c:996
#25 0x0000561b0ac5408e in exec_simple_query (
    query_string=query_string@entry=0x561b0d009478 "select\n    ct.conname as constraint_name,\n    a.attname as column_name,\n    fc.relname as foreign_table_name,\n    fns.nspname as foreign_table_schema,\n    fa.attname as foreign_column_name\nfrom\n    (S"...) at postgres.c:1193
#26 0x0000561b0ac56116 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4637
#27 0x0000561b0abab7a7 in BackendRun (port=port@entry=0x561b0d0caf50) at postmaster.c:4464
#28 0x0000561b0abae969 in BackendStartup (port=port@entry=0x561b0d0caf50) at postmaster.c:4192
#29 0x0000561b0abaeaa6 in ServerLoop () at postmaster.c:1782
```


Fixes #7603
2024-05-28 08:54:40 +03:00
Gürkan İndibay 035aa6eada
Bump Citus version to 12.1.3 (#7588) 2024-04-24 11:15:04 +03:00
Gürkan İndibay 75ff237340 Removes unnecessary package installations in packaging pipelines (#7341)
With the recent changes in packaging images, Linux package installations
to execute validate_output are unnecessary now. In this PR, I removed
them to make the pipeline more efficient.

(cherry picked from commit 32b0fc23f5)
2024-04-17 10:26:50 +02:00
Gürkan İndibay 40e9e2614d Removes centos 7 for PG 16 in packaging pipelines (#7205)
CentOS 7 and Oracle Linux 7 are not supported for newer releases by
Postgres. Therefore, we were getting package download errors in the
packaging pipelines. This PR removes the el/7 and ol/7 Postgres 16
pipelines.

(cherry picked from commit b0e982d0b5)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio bac95cc523 Greatly speed up "\d tablename" on servers with many tables (#7577)
DESCRIPTION: Fix performance issue when using "\d tablename" on a server
with many tables

We introduce a filter to every query on pg_class to automatically remove
shards. This is useful to make sure \d and PgAdmin are not cluttered
with shards. However, the way we were introducing this filter was using
`securityQuals` which can have negative impact on query performance.

On clusters with 100k+ tables this could cause a simple "\d tablename"
command to take multiple seconds, because a skipped optimization by
Postgres causes a full table scan. This changes the code to introduce
this filter in the regular `quals` list instead of in `securityQuals`,
which causes Postgres to use the intended optimization again.

For reference, this was initially reported as a Postgres issue by me:

https://www.postgresql.org/message-id/flat/4189982.1712785863%40sss.pgh.pa.us#b87421293b362d581ea8677e3bfea920
(cherry picked from commit a0151aa31d)
2024-04-17 10:26:50 +02:00
Xing Guo 38967491ef Add missing volatile qualifier. (#7570)
Variables being modified in the PG_TRY block and read in the PG_CATCH
block should be qualified with volatile.

The variable waitEventSet is modified in the PG_TRY block (line 1085)
and read in the PG_CATCH block (line 1095).

The variable relation is modified in the PG_TRY block (line 500) and
read in the PG_CATCH block (line 515).

Besides, the variable objectAddress doesn't need the volatile qualifier.

Ref: C99 7.13.2.1[^1],

> All accessible objects have values, and all other components of the
abstract machine have state, as of the time the longjmp function was
called, except that the values of objects of automatic storage duration
that are local to the function containing the invocation of the
corresponding setjmp macro that do not have volatile-qualified type and
have been changed between the setjmp invocation and longjmp call are
indeterminate.

[^1]: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf

DESCRIPTION: Correctly mark some variables as volatile

---------

Co-authored-by: Hong Yi <zouzou0208@gmail.com>
(cherry picked from commit ada3ba2507)
2024-04-17 10:26:50 +02:00
Filip Sedlák fc09e1cfdc Log username in the failed connection message (#7432)
This patch includes the username in the reported error message.
This makes debugging easier when certain commands open connections
as users other than the one executing the command.

```
monitora_snapshot=# SELECT citus_move_shard_placement(102030, 'monitora.db-dev-worker-a', 6005, 'monitora.db-dev-worker-a', 6017);
ERROR:  connection to the remote node monitora_user@monitora.db-dev-worker-a:6017 failed with the following error: fe_sendauth: no password supplied
Time: 40,198 ms
```

(cherry picked from commit 8b48d6ab02)
2024-04-17 10:26:50 +02:00
Karina 7513061057 Make isolation_update_node test system independent (#7423)
Test isolation_update_node fails on some systems with the following error:
```
-s2: WARNING:  connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: Name or service not known
+s2: WARNING:  connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: Temporary failure in name resolution
```

This slightly modifies an already existing [normalization
rule](739c6d26df/src/test/regress/bin/normalize.sed (L217-L218))
to fix it.

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
(cherry picked from commit 21464adfec)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio f4af59ab4b Support running isolation_update_node in flaky test detection (#7425)
I noticed in #7423 that `isolation_update_node` could not be run using
flaky test detection. This fixes that.
2024-04-17 10:26:50 +02:00
sminux 5708fca1ea fix bad copy-paste rightComparisonLimit (#7547)
DESCRIPTION: change for #7543
(cherry picked from commit d59c93bc50)
2024-04-17 10:26:50 +02:00
LightDB Enterprise Postgres 2a6164d2d9 Fix timeout when underlying socket is changed in a MultiConnection (#7377)
When there are multiple localhost entries in /etc/hosts like the
following:
```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   localhost
```

multi_cluster_management check will fail:
```

@@ -857,20 +857,21 @@
 ERROR:  group 14 already has a primary node
 -- check that you can add secondaries and unavailable nodes to a group
 SELECT groupid AS worker_2_group FROM pg_dist_node WHERE nodeport = :worker_2_port \gset
 SELECT 1 FROM master_add_node('localhost', 9998, groupid => :worker_1_group, noderole => 'secondary');
  ?column?
 ----------
         1
 (1 row)

 SELECT 1 FROM master_add_node('localhost', 9997, groupid => :worker_1_group, noderole => 'unavailable');
+WARNING:  could not establish connection after 5000 ms
  ?column?
 ----------
         1
 (1 row)
```

This actually isn't just a problem in test environments, but could occur
as well during actual usage when a hostname in pg_dist_node
resolves to multiple IPs and one of those IPs is unreachable.
Postgres will then automatically continue with the next IP, but
Citus should listen for events on the new socket, not on the
old one.

Co-authored-by: chuhx43211 <chuhx43211@hundsun.com>
(cherry picked from commit 9a91136a3d)
2024-04-17 10:26:50 +02:00
eaydingol db391c0bb7 Change the order in which the locks are acquired (#7542)
This PR changes the order in which the locks are acquired (for the
target and reference tables), when a modify request is initiated from a
worker node that is not the "FirstWorkerNode".

To prevent concurrent writes, locks are acquired on the first worker
node for the replicated tables. When the update statement originates
from the first worker node, it acquires the lock on the reference
table(s) first, followed by the target table(s). However, if the update
statement is initiated in another worker node, the lock requests are
sent to the first worker in a different order. This PR unifies the
modification order on the first worker node. With the third commit,
independent of the node that received the request, the locks are
acquired for the modified table and then the reference tables on the
first node.

The first commit shows a sample output for the test prior to the fix.

Fixes #7477

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
(cherry picked from commit 8afa2d0386)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio 146725fc9b Replace more spurious strdups with pstrdups (#7441)
DESCRIPTION: Remove a few small memory leaks

In #7440 one instance of a strdup was removed. But there were a few
more. This removes the ones that are left over, or adds a comment why
strdup is on purpose.

(cherry picked from commit 9683bef2ec)
2024-04-17 10:26:50 +02:00
Marco Slot 94ab1dc240 Replace spurious strdup with pstrdup (#7440)
Not sure why we never found this using valgrind, but using strdup will
cause memory leaks because the pointer is not tracked in a memory
context.

(cherry picked from commit 72fbea20c4)
2024-04-17 10:26:50 +02:00
Onur Tirtir 812a2b759f Improve error message for recursive CTEs (#7407)
Fixes #2870

(cherry picked from commit 5aedec4242)
2024-04-17 10:26:50 +02:00
Onur Tirtir 452564c19b Allow providing "host" parameter via citus.node_conninfo (#7541)
And when that is the case, directly use it as "host" parameter for the
connections between nodes and use the "hostname" provided in
pg_dist_node / pg_dist_poolinfo as "hostaddr" to avoid host name lookup.

This is to avoid requiring DNS resolution (and/or setting up DNS names
for each host in the cluster). This already works currently when using
IPs in the hostname. The only use of setting host is that you can then
use sslmode=verify-full and it will validate that the hostname matches
the certificate provided by the node you're connecting to.

It would be more flexible to make this a per-node setting, but that
requires SQL changes. And we'd like to backport this change, and
backporting such a sql change would be quite hard while backporting this
change would be very easy. And in many setups, a different hostname for
TLS validation is actually not needed. The reason for that is
query-from-any-node: with query-from-any-node all nodes usually have a
certificate that is valid for the same "cluster hostname", either using
a wildcard cert or a Subject Alternative Name (SAN). Because if you load
balance across nodes you don't know which node you're connecting to, but
you still want TLS validation to do its job. So with this change you
can use this same "cluster hostname" for TLS validation within the
cluster. Obviously this means you don't validate that you're connecting
to a particular node, just that you're connecting to one of the nodes in
the cluster, but that should be fine from a security perspective (in
most cases).
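
A hedged configuration sketch ('cluster.example.com' is a placeholder
that must match the certificate the nodes present):
```sql
ALTER SYSTEM SET citus.node_conninfo TO 'sslmode=verify-full host=cluster.example.com';
SELECT pg_reload_conf();
```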

Note to self: this change requires updating
https://docs.citusdata.com/en/latest/develop/api_guc.html#citus-node-conninfo-text.

DESCRIPTION: Allows overwriting host name for all inter-node connections
by supporting "host" parameter in citus.node_conninfo

(cherry picked from commit 3586aab17a)
2024-04-17 10:26:50 +02:00
Karina 9b06d02c43 Fix error in master_disable_node/citus_disable_node (#7492)
This fixes #7454: master_disable_node() has only two arguments, but
called citus_disable_node(), which tries to read three arguments.
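
A hedged sketch of the previously failing call (host and port
hypothetical):
```sql
SELECT master_disable_node('localhost', 5432);
-- previously errored because the internal citus_disable_node() call
-- tried to read a third, non-existent argument
```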

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
(cherry picked from commit 683e10ab69)
2024-04-17 10:26:50 +02:00
copetol 2ee43fd00c Fix segfault when using certain DO block in function (#7554)
When using a CASE WHEN expression in the body
of a function that is used in a DO block, a segmentation
fault occurred. This fixes that.
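
A hedged reconstruction of the crashing shape (function name
hypothetical):
```sql
CREATE OR REPLACE FUNCTION pick_label(x int) RETURNS text
LANGUAGE sql AS $$
    SELECT CASE WHEN x > 0 THEN 'positive' ELSE 'non-positive' END;
$$;

DO $$
BEGIN
    PERFORM pick_label(1);  -- previously could hit the segfault
END;
$$;
```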

Fixes #7381

---------

Co-authored-by: Konstantin Morozov <vzbdryn@yahoo.com>
(cherry picked from commit 12f56438fc)
2024-04-17 10:26:50 +02:00
Emel Şimşek f2d102d54b Fix crash caused by some form of ALTER TABLE ADD COLUMN statements. (#7522)
DESCRIPTION: Fixes a crash caused by some form of ALTER TABLE ADD COLUMN
statements. When adding multiple columns, if one of the ADD COLUMN
statements contains a FOREIGN constraint omitting the referenced
columns in the statement, a SEGFAULT occurs.

For instance, the following statement results in a crash:

```
  ALTER TABLE lt ADD COLUMN new_col1 bool,
                          ADD COLUMN new_col2 int references rt;

```

Fixes #7520.

(cherry picked from commit fdd658acec)
2024-04-17 10:26:50 +02:00
Karina 82637f3e13 Use expecteddir option in _run_pg_regress() (#7582)
Fix check-arbitrary-configs test failures with current REL_16_STABLE.
This is the same problem as described in #7573. I missed pg_regress call
in _run_pg_regress() in that PR.

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
(cherry picked from commit 41e2af8ff5)
2024-04-17 10:26:50 +02:00
Karina 79616bc7db Use expecteddir option when running vanilla tests (#7573)
In PostgreSQL 16 a new option expecteddir was introduced to pg_regress.
Together with the fix in
[196eeb6b](https://github.com/postgres/postgres/commit/196eeb6b) it
causes a check-vanilla failure if expecteddir is not specified.

Co-authored-by: Karina Litskevich <litskevichkarina@gmail.com>
(cherry picked from commit 41d99249d9)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio 364e8ece14 Speed up GetForeignKeyOids (#7578)
DESCRIPTION: Fix performance issue in GetForeignKeyOids on systems with
many constraints

GetForeignKeyOids was showing up in CPU profiles when distributing
schemas on systems with 100k+ constraints. The reason was that this
function was doing a sequence scan of pg_constraint to get the foreign
keys that referenced the requested table.

This fixes that by finding the constraints referencing the table through
pg_depend instead of pg_constraint. We're doing this indirection,
because pg_constraint doesn't have an index that we can use, but
pg_depend does.
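
A rough SQL analogue of the new lookup path (referenced table name
hypothetical); the index on pg_depend's reference columns makes this
cheap:
```sql
SELECT c.conname
FROM pg_depend d
JOIN pg_constraint c ON c.oid = d.objid
WHERE d.classid    = 'pg_constraint'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.refobjid   = 'my_referenced_table'::regclass
  AND c.contype    = 'f';
```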

(cherry picked from commit a263ac6f5f)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio 62c32067f1 Use an index to get FDWs that depend on extensions (#7574)
DESCRIPTION: Fix performance issue when distributing a table that
depends on an extension

When the database contains many objects this function would show up in
profiles because it was doing a sequential scan on pg_depend. And with
many objects pg_depend can get very large.

This starts using an index scan to only look for rows containing FDWs,
of which there are expected to be very few (often even zero).

(cherry picked from commit 16604a6601)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio d9069b1e01 Speed up SequenceUsedInDistributedTable (#7579)
DESCRIPTION: Fix performance issue when creating distributed tables if
many already exist

This builds on the work to speed up EnsureSequenceTypeSupported, and now
does something similar for SequenceUsedInDistributedTable.
SequenceUsedInDistributedTable had a similar O(number of citus tables)
operation. This fixes that and speeds up creation of distributed tables
significantly when many distributed tables already exist.

Fixes #7022

(cherry picked from commit cdf51da458)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio fa6743d436 Speed up EnsureSequenceTypeSupported (#7575)
DESCRIPTION: Fix performance issue when creating distributed tables and many already exist

EnsureSequenceTypeSupported was doing an O(number of distributed tables)
operation. This can become very slow with lots of Citus tables, which
now happens much more frequently in practice due to schema-based sharding.

Partially addresses #7022

(cherry picked from commit 381f31756e)
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio 26178fb538 Add missing postgres.h includes
After sorting includes in the previous commit, some files were now
invalid because they were not including postgres.h.
2024-04-17 10:26:50 +02:00
Jelte Fennema-Nio 48c62095ff Actually sort includes after cherry-pick 2024-04-17 10:26:50 +02:00
Nils Dijk f1b1d7579c Sort includes (#7326)
CHERRY-PICK NOTES: This cherry-pick only includes the scripts, not the
actual changes. These are done in a follow up commit to ease further
backporting.

This change adds a script to programmatically group all includes in a
specific order. The script was used as a one-time invocation to group
and sort all includes throughout our formatted code. The grouping is as
follows:

 - System includes (eg. `#include<...>`)
 - Postgres.h (eg. `#include "postgres.h"`)
- Toplevel imports from postgres, not contained in a directory (eg.
`#include "miscadmin.h"`)
 - General postgres includes (eg . `#include "nodes/..."`)
- Toplevel citus includes, not contained in a directory (eg. `#include
"citus_version.h"`)
 - Columnar includes (eg. `#include "columnar/..."`)
 - Distributed includes (eg. `#include "distributed/..."`)

Because it is quite hard to understand the difference between toplevel
citus includes and toplevel postgres includes it hardcodes the list of
toplevel citus includes. In the same manner it treats anything not
prefixed with `columnar/` or `distributed/` as a postgres include.

The sorting/grouping is enforced by CI. Since we do so with our own
script, there are no changes required in our uncrustify configuration.

(cherry picked from commit 0620c8f9a6)
2024-04-17 10:26:50 +02:00
Nils Dijk 75df19b616 move pg_version_constants.h to toplevel include (#7335)
In preparation of sorting and grouping all includes we wanted to move
this file to the toplevel includes for good grouping/sorting.

(cherry picked from commit 0dac63afc0)
2024-04-17 10:26:50 +02:00
Gürkan İndibay e2d18c5472
Bump Citus version to 12.1.2 (#7504) 2024-02-14 08:41:15 +03:00
Gürkan İndibay c12a4f7626
Adds Changelog for v12.1.2 (#7499) 2024-02-13 16:44:57 +03:00
Teja Mupparti a945971f48 Fix the incorrect column count after ALTER TABLE; this fixes bug #7378 (please read the analysis in the bug for more information)
(cherry picked from commit 00068e07c5)
2024-01-24 11:48:06 -08:00
Hanefi Onaldi 4c110faf1b
Fix wrong PR links in changelog (#7350)
When preparing changelog for 12.1.1 release, I accidentally swapped
the PR numbers for the two commits. This commit fixes the changelog
to point to the correct PRs.

(cherry picked from commit 5efd3f181a)
2023-11-16 14:13:20 +03:00
Hanefi Onaldi 2c630eca50
Bump Citus version to 12.1.1 2023-11-13 14:47:11 +03:00
Hanefi Onaldi b421479d46
Add changelog entries for 12.1.1 (#7332)
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
(cherry picked from commit 92228b279a)
2023-11-13 14:47:11 +03:00
Gokhan Gulbiz 2502e7e754
Backport GHA Migration to release-12.1 (#7277)
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2023-11-13 11:46:31 +00:00
Onur Tirtir a4fe969947 Make sure to disallow creating a replicated distributed table concurrently (#7219)
See explanation in https://github.com/citusdata/citus/issues/7216.
Fixes https://github.com/citusdata/citus/issues/7216.

DESCRIPTION: Makes sure to disallow creating a replicated distributed
table concurrently
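
A hedged sketch of the now-disallowed operation (table name
hypothetical):
```sql
SET citus.shard_replication_factor TO 2;
SELECT create_distributed_table_concurrently('my_table', 'id');
-- expected to ERROR after this fix instead of proceeding
```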

(cherry picked from commit 111b4c19bc)
2023-10-24 14:04:37 +03:00
Nils Dijk e59ffbf549
Fix leaking of memory and memory contexts in Foreign Constraint Graphs (#7236)
DESCRIPTION: Fix leaking of memory and memory contexts in Foreign
Constraint Graphs

Previously, every time we (re)created the Foreign Constraint
Relationship Graph, we created a new Memory Context while losing the
reference to the previous context. This old context could still have
leftover memory in it, causing a memory leak.

With this patch we statically have one memory context that we lazily
initialize the first time we create our foreign constraint relationship
graph. On every subsequent creation, besides destroying our previous
hashmap, we also reset our memory context to remove any leftover
references.
2023-10-09 13:07:30 +02:00
aykut-bozkurt 3b908eec2a
Fix the changelog entry for citus_pause_node_within_txn() UDF (#7215)
(cherry picked from commit 2c190d0689)
2023-09-20 17:06:07 +03:00
Naisila Puka 9b6ffece5e
Adds PostgreSQL 16.0 Support (#7201)
This commit concludes PG16.0 Support in Citus.

The main PG16 support work has been done for 16beta3
https://github.com/citusdata/citus/pull/6952
There was some extra work needed for 16rc1
https://github.com/citusdata/citus/pull/7173
And this PR introduces some extra work needed for 16.0 :)

`pgstat_fetch_stat_local_beentry` has been renamed to
`pgstat_get_local_beentry_by_index` in PG16.0

Relevant PG commit:
8dfa37b797843a83a5756ea3309055e8953e1a86

Sister PR
https://github.com/citusdata/the-process/pull/150

(cherry picked from commit 4e46708789)
2023-09-15 12:27:09 +03:00
aykut-bozkurt 1b4d7a51f8
bump citus into 12.1.0 2023-09-13 14:20:21 +03:00
739 changed files with 26345 additions and 49051 deletions

@@ -1,33 +0,0 @@
# gdbpg.py contains scripts to nicely print the postgres datastructures
# while in a gdb session. Since the vscode debugger is based on gdb this
# actually also works when debugging with vscode. Providing nice tools
# to understand the internal datastructures we are working with.
source /root/gdbpg.py
# when debugging postgres it is convenient to _always_ have a breakpoint
# trigger when an error is logged. Because .gdbinit is sourced before gdb
# is fully attached and has the sources loaded. To make sure the breakpoint
# is added when the library is loaded we temporarily set the breakpoint pending
# to on. After we have added our breakpoint we revert back to the default
# configuration for breakpoint pending.
# The breakpoint is hard to read, but at entry of the function we don't have
# the level loaded in elevel. Instead we hardcode the location where the
# level of the current error is stored. Also gdb doesn't understand the
# ERROR symbol so we hardcode this to the value of ERROR. It is very unlikely
# this value will ever change in postgres, but if it does we might need to
# find a way to conditionally load the correct breakpoint.
set breakpoint pending on
break elog.c:errfinish if errordata[errordata_stack_depth].elevel == 21
set breakpoint pending auto
echo \n
echo ----------------------------------------------------------------------------------\n
echo when attaching to a postgres backend a breakpoint will be set on elog.c:errfinish \n
echo it will only break on errors being raised in postgres \n
echo \n
echo to disable this breakpoint from vscode run `-exec disable 1` in the debug console \n
echo this assumes it's the first breakpoint loaded as it is loaded from .gdbinit \n
echo this can be verified with `-exec info break`, enabling can be done with \n
echo `-exec enable 1` \n
echo ----------------------------------------------------------------------------------\n
echo \n

@@ -1 +0,0 @@
postgresql-*.tar.bz2

@@ -1,7 +0,0 @@
\timing on
\pset linestyle unicode
\pset border 2
\setenv PAGER 'pspg --no-mouse -bX --no-commandbar --no-topbar'
\set HISTSIZE 100000
\set PROMPT1 '\n%[%033[1m%]%M %n@%/:%> (PID: %p)%R%[%033[0m%]%# '
\set PROMPT2 ' '

@@ -1,12 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
docopt = "*"
[dev-packages]
[requires]
python_version = "3.9"

.devcontainer/.vscode/Pipfile.lock

@@ -1,28 +0,0 @@
{
"_meta": {
"hash": {
"sha256": "6956a6700ead5804aa56bd597c93bb4a13f208d2d49d3b5399365fd240ca0797"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.9"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
}
]
},
"default": {
"docopt": {
"hashes": [
"sha256:49b3a825280bd66b3aa83585ef59c4a8c82f2c8a522dbe754a8bc8d08c85c491"
],
"index": "pypi",
"version": "==0.6.2"
}
},
"develop": {}
}

@@ -1,84 +0,0 @@
#! /usr/bin/env pipenv-shebang
"""Generate C/C++ properties file for VSCode.

Uses pgenv to iterate postgres versions and generate
a C/C++ properties file for VSCode containing the
include paths for the postgres headers.

Usage:
  generate_c_cpp_properties-json.py <target_path>
  generate_c_cpp_properties-json.py (-h | --help)
  generate_c_cpp_properties-json.py --version

Options:
  -h --help     Show this screen.
  --version     Show version.
"""
import json
import subprocess

from docopt import docopt


def main(args):
    target_path = args['<target_path>']

    output = subprocess.check_output(['pgenv', 'versions'])
    # typical output is:
    #   14.8      pgsql-14.8
    # * 15.3      pgsql-15.3
    #   16beta2   pgsql-16beta2
    # where the line marked with a * is the currently active version
    #
    # we are only interested in the first word of each line, which is the version number
    # thus we strip the whitespace and the * from the line and split it into words
    # and take the first word
    versions = [line.strip('* ').split()[0] for line in output.decode('utf-8').splitlines()]

    # create the list of configurations per version
    configurations = []
    for version in versions:
        configurations.append(generate_configuration(version))

    # create the json file
    c_cpp_properties = {
        "configurations": configurations,
        "version": 4
    }

    # write the c_cpp_properties.json file
    with open(target_path, 'w') as f:
        json.dump(c_cpp_properties, f, indent=4)


def generate_configuration(version):
    """Returns a configuration for the given postgres version.

    >>> generate_configuration('14.8')
    {
        "name": "Citus Development Configuration - Postgres 14.8",
        "includePath": [
            "/usr/local/include",
            "/home/citus/.pgenv/src/postgresql-14.8/src/**",
            "${workspaceFolder}/**",
            "${workspaceFolder}/src/include/",
        ],
        "configurationProvider": "ms-vscode.makefile-tools"
    }
    """
    return {
        "name": f"Citus Development Configuration - Postgres {version}",
        "includePath": [
            "/usr/local/include",
            f"/home/citus/.pgenv/src/postgresql-{version}/src/**",
            "${workspaceFolder}/**",
            "${workspaceFolder}/src/include/",
        ],
        "configurationProvider": "ms-vscode.makefile-tools"
    }


if __name__ == '__main__':
    arguments = docopt(__doc__, version='0.1.0')
    main(arguments)

@@ -1,40 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Attach Citus (devcontainer)",
"type": "cppdbg",
"request": "attach",
"processId": "${command:pickProcess}",
"program": "/home/citus/.pgenv/pgsql/bin/postgres",
"additionalSOLibSearchPath": "/home/citus/.pgenv/pgsql/lib",
"setupCommands": [
{
"text": "handle SIGUSR1 noprint nostop pass",
"description": "let gdb not stop when SIGUSR1 is sent to process",
"ignoreFailures": true
}
],
},
{
"name": "Open core file",
"type": "cppdbg",
"request": "launch",
"program": "/home/citus/.pgenv/pgsql/bin/postgres",
"coreDumpPath": "${input:corefile}",
"cwd": "${workspaceFolder}",
"MIMode": "gdb",
}
],
"inputs": [
{
"id": "corefile",
"type": "command",
"command": "extension.commandvariable.file.pickFile",
"args": {
"dialogTitle": "Select core file",
"include": "**/core*",
},
},
],
}

@@ -1,222 +0,0 @@
FROM ubuntu:22.04 AS base
# environment is to make python pass an interactive shell, probably not the best timezone given a wide variety of colleagues
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# install build tools
RUN apt update && apt install -y \
bison \
bzip2 \
cpanminus \
curl \
docbook-xml \
docbook-xsl \
flex \
gcc \
git \
libcurl4-gnutls-dev \
libicu-dev \
libkrb5-dev \
liblz4-dev \
libpam0g-dev \
libreadline-dev \
libselinux1-dev \
libssl-dev \
libxml2-utils \
libxslt-dev \
libzstd-dev \
locales \
make \
perl \
pkg-config \
python3 \
python3-pip \
software-properties-common \
sudo \
uuid-dev \
valgrind \
xsltproc \
zlib1g-dev \
&& add-apt-repository ppa:deadsnakes/ppa -y \
&& apt install -y \
python3.9-full \
# software properties pulls in pkexec, which makes the debugger unusable in vscode
&& apt purge -y \
software-properties-common \
&& apt autoremove -y \
&& apt clean
RUN sudo pip3 install pipenv pipenv-shebang
RUN cpanm install IPC::Run
RUN locale-gen en_US.UTF-8
# add the citus user to sudoers and allow all sudoers to login without a password prompt
RUN useradd -ms /bin/bash citus \
&& usermod -aG sudo citus \
&& echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
WORKDIR /home/citus
USER citus
# run all make commands with the number of cores available
RUN echo "export MAKEFLAGS=\"-j \$(nproc)\"" >> "/home/citus/.bashrc"
RUN git clone --branch v1.3.2 --depth 1 https://github.com/theory/pgenv.git .pgenv
COPY --chown=citus:citus pgenv/config/ .pgenv/config/
ENV PATH="/home/citus/.pgenv/bin:${PATH}"
ENV PATH="/home/citus/.pgenv/pgsql/bin:${PATH}"
USER citus
# build postgres versions separately for effective parallelism and caching of already built versions when changing only certain versions
FROM base AS pg15
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.13
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg16
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.9
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg17
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.5
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
# create a staging directory with all files we want to copy from our pgenv build
# we will copy the contents of the staged folder into the final image at once
RUN mkdir .pgenv-staging/
RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS uncrustify-builder
RUN sudo apt update && sudo apt install -y cmake tree
WORKDIR /uncrustify
RUN curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
WORKDIR /uncrustify/uncrustify-uncrustify-0.68.1/
RUN mkdir build
WORKDIR /uncrustify/uncrustify-uncrustify-0.68.1/build/
RUN cmake ..
RUN MAKEFLAGS="-j $(nproc)" make -s
RUN make install DESTDIR=/uncrustify
# builder for all pipenv's to get them contained in a single layer
FROM base AS pipenv
WORKDIR /workspaces/citus/
# tools to sync pgenv with vscode
COPY --chown=citus:citus .vscode/Pipfile .vscode/Pipfile.lock .devcontainer/.vscode/
RUN ( cd .devcontainer/.vscode && pipenv install )
# environment to run our failure tests
COPY --chown=citus:citus src/ src/
RUN ( cd src/test/regress && pipenv install )
# assemble the final container by copying over the artifacts from separately build containers
FROM base AS devcontainer
LABEL org.opencontainers.image.source=https://github.com/citusdata/citus
LABEL org.opencontainers.image.description="Development container for the Citus project"
LABEL org.opencontainers.image.licenses=AGPL-3.0-only
RUN yes | sudo unminimize
# install developer productivity tools
RUN sudo apt update \
&& sudo apt install -y \
autoconf2.69 \
bash-completion \
fswatch \
gdb \
htop \
libdbd-pg-perl \
libdbi-perl \
lsof \
man \
net-tools \
psmisc \
pspg \
tree \
vim \
&& sudo apt clean
# Since gdb will run in the context of the root user when debugging citus we will need to both
# download the gdbpg.py script as the root user, into their home directory, as well as add .gdbinit
# as a file owned by root
# This makes sure that as soon as the debugger attaches to a postgres backend (or frankly any other process)
# the gdbpg.py script will be sourced and the developer can directly use it.
RUN sudo curl -o /root/gdbpg.py https://raw.githubusercontent.com/tvesely/gdbpg/6065eee7872457785f830925eac665aa535caf62/gdbpg.py
COPY --chown=root:root .gdbinit /root/
# install developer dependencies in the global environment
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt pip install -r requirements.txt
# for persistent bash history across devcontainers we need to have
# a) a directory to store the history in
# b) a prompt command to append the history to the file
# c) specify the history file to store the history in
# b and c are done in the .bashrc to make it persistent across shells only
RUN sudo install -d -o citus -g citus /commandhistory \
&& echo "export PROMPT_COMMAND='history -a' && export HISTFILE=/commandhistory/.bash_history" >> "/home/citus/.bashrc"
# install citus-dev
RUN git clone --branch develop https://github.com/citusdata/tools.git citus-tools \
&& ( cd citus-tools/citus_dev && pipenv install ) \
&& mkdir -p ~/.local/bin \
&& ln -s /home/citus/citus-tools/citus_dev/citus_dev-pipenv .local/bin/citus_dev \
&& sudo make -C citus-tools/uncrustify install bindir=/usr/local/bin pkgsysconfdir=/usr/local/etc/ \
&& mkdir -p ~/.local/share/bash-completion/completions/ \
&& ln -s ~/citus-tools/citus_dev/bash_completion ~/.local/share/bash-completion/completions/citus_dev
# TODO some LC_ALL errors, possibly solved by locale-gen
RUN git clone https://github.com/so-fancy/diff-so-fancy.git \
&& mkdir -p ~/.local/bin \
&& ln -s /home/citus/diff-so-fancy/diff-so-fancy .local/bin/
COPY --link --from=uncrustify-builder /uncrustify/usr/ /usr/
COPY --link --from=pg15 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pg16 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pg17 /home/citus/.pgenv-staging/ /home/citus/.pgenv/
COPY --link --from=pipenv /home/citus/.local/share/virtualenvs/ /home/citus/.local/share/virtualenvs/
# place to run your cluster with citus_dev
VOLUME /data
RUN sudo mkdir /data \
&& sudo chown citus:citus /data
COPY --chown=citus:citus .psqlrc .
# with the copy linking of layers github actions seems to misbehave with the ownership of the
# directories leading up to the link, hence a small patch layer to have the right ownerships set
RUN sudo chown --from=root:root citus:citus -R ~
# sets default pg version
RUN pgenv switch 17.5
# make connecting to the coordinator easy
ENV PGPORT=9700

@@ -1,11 +0,0 @@
init: ../.vscode/c_cpp_properties.json ../.vscode/launch.json

../.vscode:
	mkdir -p ../.vscode

../.vscode/launch.json: ../.vscode .vscode/launch.json
	cp .vscode/launch.json ../.vscode/launch.json

../.vscode/c_cpp_properties.json: ../.vscode
	./.vscode/generate_c_cpp_properties-json.py ../.vscode/c_cpp_properties.json

@@ -1,37 +0,0 @@
{
"image": "ghcr.io/citusdata/citus-devcontainer:main",
"runArgs": [
"--cap-add=SYS_PTRACE",
"--ulimit=core=-1",
],
"forwardPorts": [
9700
],
"customizations": {
"vscode": {
"extensions": [
"eamodio.gitlens",
"GitHub.copilot-chat",
"GitHub.copilot",
"github.vscode-github-actions",
"github.vscode-pull-request-github",
"ms-vscode.cpptools-extension-pack",
"ms-vsliveshare.vsliveshare",
"rioj7.command-variable",
],
"settings": {
"files.exclude": {
"**/*.o": true,
"**/.deps/": true,
}
},
}
},
"mounts": [
"type=volume,target=/data",
"source=citus-bashhistory,target=/commandhistory,type=volume",
],
"updateContentCommand": "./configure",
"postCreateCommand": "make -C .devcontainer/",
}

@@ -1,15 +0,0 @@
PGENV_MAKE_OPTIONS=(-s)
PGENV_CONFIGURE_OPTIONS=(
--enable-debug
--enable-depend
--enable-cassert
--enable-tap-tests
'CFLAGS=-ggdb -Og -g3 -fno-omit-frame-pointer -DUSE_VALGRIND'
--with-openssl
--with-libxml
--with-libxslt
--with-uuid=e2fs
--with-icu
--with-lz4
)

@@ -1,9 +0,0 @@
black==23.11.0
click==8.1.7
isort==5.12.0
mypy-extensions==1.0.0
packaging==23.2
pathspec==0.11.2
platformdirs==4.0.0
tomli==2.0.1
typing_extensions==4.8.0

@@ -1,28 +0,0 @@
[[source]]
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[packages]
mitmproxy = {editable = true, ref = "main", git = "https://github.com/citusdata/mitmproxy.git"}
construct = "*"
docopt = "==0.6.2"
cryptography = ">=41.0.4"
pytest = "*"
psycopg = "*"
filelock = "*"
pytest-asyncio = "*"
pytest-timeout = "*"
pytest-xdist = "*"
pytest-repeat = "*"
pyyaml = "*"
werkzeug = "==2.3.7"
[dev-packages]
black = "*"
isort = "*"
flake8 = "*"
flake8-bugbear = "*"
[requires]
python_version = "3.9"

File diff suppressed because it is too large

.gitattributes

@@ -25,9 +25,10 @@ configure -whitespace
# except these exceptions...
src/backend/distributed/utils/citus_outfuncs.c -citus-style
src/backend/distributed/deparser/ruleutils_13.c -citus-style
src/backend/distributed/deparser/ruleutils_14.c -citus-style
src/backend/distributed/deparser/ruleutils_15.c -citus-style
src/backend/distributed/deparser/ruleutils_16.c -citus-style
src/backend/distributed/deparser/ruleutils_17.c -citus-style
src/backend/distributed/commands/index_pg_source.c -citus-style
src/include/distributed/citus_nodes.h -citus-style

@@ -10,13 +10,8 @@ on:
required: false
default: false
type: boolean
push:
branches:
- "main"
- "release-*"
pull_request:
types: [opened, reopened,synchronize]
merge_group:
jobs:
# Since GHA does not interpolate env variables in matrix context, we need to
# define them in a separate job and use them in other jobs.
@@ -31,38 +26,38 @@ jobs:
pgupgrade_image_name: "ghcr.io/citusdata/pgupgradetester"
style_checker_image_name: "ghcr.io/citusdata/stylechecker"
style_checker_tools_version: "0.8.18"
sql_snapshot_pg_version: "17.5"
image_suffix: "-dev-d28f316"
pg15_version: '{ "major": "15", "full": "15.13" }'
pg16_version: '{ "major": "16", "full": "16.9" }'
pg17_version: '{ "major": "17", "full": "17.5" }'
upgrade_pg_versions: "15.13-16.9-17.5"
sql_snapshot_pg_version: "16.3"
image_suffix: "-v13fd57c"
pg14_version: '{ "major": "14", "full": "14.12" }'
pg15_version: '{ "major": "15", "full": "15.7" }'
pg16_version: '{ "major": "16", "full": "16.3" }'
upgrade_pg_versions: "14.12-15.7-16.3"
steps:
# Since GHA jobs need at least one step we use a noop step here.
# Since GHA jobs needs at least one step we use a noop step here.
- name: Set up parameters
run: echo 'noop'
check-sql-snapshots:
needs: params
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.build_image_name }}:${{ needs.params.outputs.sql_snapshot_pg_version }}${{ needs.params.outputs.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
ci/check_sql_snapshots.sh
check-style:
needs: params
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.style_checker_image_name }}:${{ needs.params.outputs.style_checker_tools_version }}${{ needs.params.outputs.image_suffix }}
steps:
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
with:
fetch-depth: 0
- name: Check C Style
@ -110,15 +105,15 @@ jobs:
image_suffix:
- ${{ needs.params.outputs.image_suffix}}
pg_version:
- ${{ needs.params.outputs.pg14_version }}
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ matrix.image_suffix }}"
options: --user root
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- name: Expose $PG_MAJOR to Github Env
run: echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
shell: bash
@ -141,9 +136,9 @@ jobs:
image_name:
- ${{ needs.params.outputs.test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg14_version }}
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
make:
- check-split
- check-multi
@ -161,6 +156,10 @@ jobs:
- check-enterprise-isolation-logicalrep-2
- check-enterprise-isolation-logicalrep-3
include:
- make: check-failure
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
@ -169,8 +168,8 @@ jobs:
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg17_version }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
@ -181,8 +180,8 @@ jobs:
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg17_version }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
@ -193,10 +192,6 @@ jobs:
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
@ -205,10 +200,10 @@ jobs:
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg16_version }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg17_version }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
@ -217,11 +212,7 @@ jobs:
pg_version: ${{ needs.params.outputs.pg16_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-query-generator
pg_version: ${{ needs.params.outputs.pg17_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root --dns=8.8.8.8
@ -232,7 +223,7 @@ jobs:
- params
- build
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
- name: Run Test
run: gosu circleci make -C src/test/${{ matrix.suite }} ${{ matrix.make }}
@ -261,12 +252,12 @@ jobs:
image_name:
- ${{ needs.params.outputs.fail_test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg14_version }}
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
- ${{ needs.params.outputs.pg17_version }}
parallel: [0,1,2,3,4,5] # workaround for running 6 parallel jobs
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
- name: Test arbitrary configs
run: |-
@ -297,7 +288,7 @@ jobs:
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-pg-upgrade:
name: PG${{ matrix.old_pg_major }}-PG${{ matrix.new_pg_major }} - check-pg-upgrade
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: "${{ needs.params.outputs.pgupgrade_image_name }}:${{ needs.params.outputs.upgrade_pg_versions }}${{ needs.params.outputs.image_suffix }}"
options: --user root
@ -308,17 +299,17 @@ jobs:
fail-fast: false
matrix:
include:
- old_pg_major: 14
new_pg_major: 15
- old_pg_major: 15
new_pg_major: 16
- old_pg_major: 16
new_pg_major: 17
- old_pg_major: 15
new_pg_major: 17
- old_pg_major: 14
new_pg_major: 16
env:
old_pg_major: ${{ matrix.old_pg_major }}
new_pg_major: ${{ matrix.new_pg_major }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.old_pg_major }}"
@ -349,16 +340,16 @@ jobs:
flags: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-citus-upgrade:
name: PG${{ fromJson(needs.params.outputs.pg15_version).major }} - check-citus-upgrade
runs-on: ubuntu-latest
name: PG${{ fromJson(needs.params.outputs.pg14_version).major }} - check-citus-upgrade
runs-on: ubuntu-20.04
container:
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg15_version).full }}${{ needs.params.outputs.image_suffix }}"
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg14_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
with:
skip_installation: true
@ -399,9 +390,9 @@ jobs:
if: always()
env:
CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.test_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}
image: ${{ needs.params.outputs.test_image_name }}:${{ fromJson(needs.params.outputs.pg16_version).full }}${{ needs.params.outputs.image_suffix }}
needs:
- params
- test-citus
@ -421,11 +412,11 @@ jobs:
ch_benchmark:
name: CH Benchmark
if: startsWith(github.ref, 'refs/heads/ch_benchmark/')
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs:
- build
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
@ -439,11 +430,11 @@ jobs:
tpcc_benchmark:
name: TPCC Benchmark
if: startsWith(github.ref, 'refs/heads/tpcc_benchmark/')
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs:
- build
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
@ -455,14 +446,14 @@ jobs:
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_tpcc_benchmark_rg
prepare_parallelization_matrix_32:
name: Prepare parallelization matrix
name: Parallel 32
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
needs: test-flakyness-pre
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/parallelization"
id: parallelization
with:
@ -470,50 +461,34 @@ jobs:
test-flakyness-pre:
name: Detect regression tests that need to be run
if: ${{ !inputs.skip_test_flakyness }}}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: build
outputs:
tests: ${{ steps.detect-regression-tests.outputs.tests }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
with:
fetch-depth: 0
- name: Detect regression tests that need to be run
id: detect-regression-tests
run: |-
detected_changes=$(git diff origin/main... --name-only --diff-filter=AM | (grep 'src/test/regress/sql/.*\.sql\|src/test/regress/spec/.*\.spec\|src/test/regress/citus_tests/test/test_.*\.py' || true))
detected_changes=$(git diff origin/release-12.1... --name-only --diff-filter=AM | (grep 'src/test/regress/sql/.*\.sql\|src/test/regress/spec/.*\.spec\|src/test/regress/citus_tests/test/test_.*\.py' || true))
tests=${detected_changes}
# split the tests to be skipped --today we only skip upgrade tests
skipped_tests=""
not_skipped_tests=""
for test in $tests; do
if [[ $test =~ ^src/test/regress/sql/upgrade_ ]]; then
skipped_tests="$skipped_tests $test"
else
not_skipped_tests="$not_skipped_tests $test"
fi
done
if [ ! -z "$skipped_tests" ]; then
echo "Skipped tests " $skipped_tests
fi
if [ -z "$not_skipped_tests" ]; then
echo "Not detected any tests that flaky test detection should run"
if [ -z "$tests" ]; then
echo "No test found."
else
echo "Detected tests " $not_skipped_tests
echo "Detected tests " $tests
fi
echo 'tests<<EOF' >> $GITHUB_OUTPUT
echo "$not_skipped_tests" >> "$GITHUB_OUTPUT"
echo "$tests" >> "$GITHUB_OUTPUT"
echo 'EOF' >> $GITHUB_OUTPUT
test-flakyness:
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
if: false
name: Test flakyness
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.fail_test_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}
image: ${{ needs.params.outputs.fail_test_image_name }}:${{ needs.params.outputs.pg16_version }}${{ needs.params.outputs.image_suffix }}
options: --user root
env:
runs: 8
@ -526,7 +501,7 @@ jobs:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix_32.outputs.json) }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: actions/download-artifact@v4.1.8
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
@ -536,10 +511,8 @@ jobs:
for test in "${tests_array[@]}"
do
test_name=$(echo "$test" | sed -r "s/.+\/(.+)\..+/\1/")
gosu circleci src/test/regress/citus_tests/run_test.py $test_name --repeat ${{ env.runs }} --use-whole-schedule-line
gosu circleci src/test/regress/citus_tests/run_test.py $test_name --repeat ${{ env.runs }} --use-base-schedule --use-whole-schedule-line
done
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: test_flakyness_parallel_${{ matrix.id }}

@ -21,10 +21,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
@ -76,4 +76,4 @@ jobs:
sudo make install-all
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v2

@ -1,54 +0,0 @@
name: "Build devcontainer"
# Since building containers can be quite time-consuming and can take up some storage,
# there is no need to finish a build for a tag if new changes are concurrently being made.
# This cancels any previous builds for the same tag, and only the latest one will be kept.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
push:
paths:
- ".devcontainer/**"
workflow_dispatch:
jobs:
docker:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
attestations: write
id-token: write
steps:
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/citusdata/citus-devcontainer
tags: |
type=ref,event=branch
type=sha
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: 'Login to GitHub Container Registry'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{github.actor}}
password: ${{secrets.GITHUB_TOKEN}}
-
name: Build and push
uses: docker/build-push-action@v5
with:
context: "{{defaultContext}}:.devcontainer"
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max

@ -28,7 +28,7 @@ jobs:
image: ${{ vars.build_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- name: Configure, Build, and Install
run: |
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
@ -46,7 +46,7 @@ jobs:
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/parallelization"
id: parallelization
with:
@ -67,13 +67,13 @@ jobs:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix.outputs.json) }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
gosu circleci src/test/regress/citus_tests/run_test.py ${{ env.test }} --repeat ${{ env.runs }} --use-whole-schedule-line
gosu circleci src/test/regress/citus_tests/run_test.py ${{ env.test }} --repeat ${{ env.runs }} --use-base-schedule --use-whole-schedule-line
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: check_flakyness_parallel_${{ matrix.id }}
folder: ${{ matrix.id }}

@ -3,7 +3,6 @@ name: Build tests in packaging images
on:
pull_request:
types: [opened, reopened,synchronize]
merge_group:
workflow_dispatch:
@ -116,6 +115,7 @@ jobs:
# for each deb based image and we use POSTGRES_VERSION to set
# PG_CONFIG variable in each of those runs.
packaging_docker_image:
- debian-buster-all
- debian-bookworm-all
- debian-bullseye-all
- ubuntu-focal-all
@ -129,7 +129,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Set pg_config path and python parameters for deb based distros
run: |

.gitignore vendored

@ -55,6 +55,3 @@ lib*.pc
# style related temporary outputs
*.uncrustify
.venv
# added output when modifying check_gucs_are_alphabetically_sorted.sh
guc.out

@ -1,176 +1,3 @@
### citus v13.1.0 (May 30th, 2025) ###
* Adds `citus_stat_counters` view that can be used to query
stat counters that Citus collects while the feature is enabled, which is
controlled by citus.enable_stat_counters. `citus_stat_counters()` can be
used to query the stat counters for the provided database oid and
`citus_stat_counters_reset()` can be used to reset them for the provided
database oid or for the current database if nothing or 0 is provided (#7917)
* Adds `citus_nodes` view that displays the node name, port, role, and "active"
for nodes in the cluster (#7968)
* Adds `citus_is_primary_node()` UDF to determine if the current node is a
primary node in the cluster (#7720)
* Adds support for propagating `GRANT/REVOKE` rights on table columns (#7918)
* Adds support for propagating `REASSIGN OWNED BY` commands (#7319)
* Adds support for propagating `CREATE`/`DROP` database from all nodes (#7240,
#7253, #7359)
* Propagates `SECURITY LABEL ON ROLE` statement from any node (#7508)
* Adds support for issuing role management commands from worker nodes (#7278)
* Adds support for propagating `ALTER USER RENAME` commands (#7204)
* Adds support for propagating `ALTER DATABASE <db_name> SET ..` commands
(#7181)
* Adds support for propagating `SECURITY LABEL` on tables and columns (#7956)
* Adds support for propagating `COMMENT ON <database>/<role>` commands (#7388)
* Moves some of the internal citus functions from `pg_catalog` to
`citus_internal` schema (#7473, #7470, #7466, #7456, #7450)
* Adjusts `max_prepared_transactions` only when it's set to default on PG >= 16
(#7712)
* Adds skip_qualify_public param to shard_name() UDF to allow qualifying for
"public" schema when needed (#8014)
* Allows `citus_*_size` on indexes on distributed tables (#7271)
* Allows `GRANT ADMIN` to now also be `INHERIT` or `SET` in support of PG16
* Makes sure `worker_copy_table_to_node` errors out with Citus tables (#7662)
* Adds information to explain output when using
`citus.explain_distributed_queries=false` (#7412)
* Logs username in the failed connection message (#7432)
* Makes sure to avoid incorrectly pushing-down the outer joins between
distributed tables and recurring relations (like reference tables, local
tables and `VALUES(..)` etc.) prior to PG 17 (#7937)
* Prevents incorrectly pushing `nextval()` call down to workers to avoid using
incorrect sequence value for some types of `INSERT .. SELECT`s (#7976)
* Makes sure to prevent `INSERT INTO ... SELECT` queries involving subfield or
sublink, to avoid crashes (#7912)
* Makes sure to take improvement_threshold into account
in `citus_add_rebalance_strategy()` (#7247)
* Makes sure to disallow creating a replicated distributed
table concurrently (#7219)
* Fixes a bug that causes omitting `CASCADE` clause for the commands sent to
workers for `REVOKE` commands on tables (#7958)
* Fixes an issue detected using address sanitizer (#7948, #7949)
* Fixes a bug in deparsing of shard query in case of "output-table column" name
conflict (#7932)
* Fixes a crash in columnar custom scan that happens when a columnar table is
used in a join (#7703)
* Fixes `MERGE` command when insert value does not have source distributed
column (#7627)
* Fixes performance issue when using `\d tablename` on a server with many
tables (#7577)
* Fixes performance issue in `GetForeignKeyOids` on systems with many
constraints (#7580)
* Fixes performance issue when distributing a table that depends on an
extension (#7574)
* Fixes performance issue when creating distributed tables if many already
exist (#7575)
* Fixes a crash caused by some form of `ALTER TABLE ADD COLUMN` statements. When
adding multiple columns, if one of the `ADD COLUMN` statements contains a
`FOREIGN` constraint omitting the referenced
columns in the statement, a `SEGFAULT` occurs (#7522)
* Fixes assertion failure in maintenance daemon during Citus upgrades (#7537)
* Fixes segmentation fault when using `CASE WHEN` in `DO` block functions
(#7554)
* Fixes undefined behavior in `master_disable_node` due to argument mismatch
(#7492)
* Fixes incorrect propagating of `GRANTED BY` and `CASCADE/RESTRICT` clauses
for `REVOKE` statements (#7451)
* Fixes the incorrect column count after `ALTER TABLE` (#7379)
* Fixes timeout when underlying socket is changed for an inter-node connection
(#7377)
* Fixes memory leaks (#7441, #7440)
* Fixes leaking of memory and memory contexts when tracking foreign keys between
Citus tables (#7236)
* Fixes a potential segfault for background rebalancer (#7694)
* Fixes potential `NULL` dereference in casual clocks (#7704)
### citus v13.0.4 (May 29th, 2025) ###
* Fixes an issue detected using address sanitizer (#7966)
* Error out for queries with outer joins and pseudoconstant quals in versions
prior to PG 17 (#7937)
### citus v12.1.8 (May 29, 2025) ###
* Fixes a crash in left outer joins that can happen when there is an
aggregate on a column from the inner side of the join (#7904)
* Fixes an issue detected using address sanitizer (#7965)
* Fixes a crash when executing a prepared CALL, which is not pure SQL but
available with some drivers like npgsql and jpgdbc (#7288)
### citus v13.0.3 (March 20th, 2025) ###
* Fixes a version bump issue in 13.0.2
### citus v13.0.2 (March 12th, 2025) ###
* Fixes a crash in columnar custom scan that happens when a columnar table is
used in a join. (#7647)
* Fixes a bug that breaks `UPDATE SET (...) = (SELECT some_func(),... )`
type of queries on Citus tables (#7914)
* Fixes a planning error caused by a redundant WHERE clause (#7907)
* Fixes a crash in left outer joins that can happen when there is an aggregate
on a column from the inner side of the join. (#7901)
* Fixes deadlock with transaction recovery that is possible during Citus
upgrades. (#7910)
* Fixes a bug that prevents inserting into Citus tables that use
a GENERATED ALWAYS AS IDENTITY column. (#7920)
* Ensures that a MERGE command on a distributed table with a WHEN NOT MATCHED BY
SOURCE clause runs against all shards of the distributed table. (#7900)
* Fixes a bug that breaks router updates on distributed tables
when a reference table is used in the subquery (#7897)
### citus v12.1.7 (Feb 6, 2025) ###
* Fixes a crash that happens because of unsafe catalog access when re-assigning
@ -179,48 +6,6 @@ available with some drivers like npgsql and jpgdbc (#7288)
* Prevents crashes when another extension skips executing the
`ClientAuthentication_hook` of Citus. (#7836)
### citus v13.0.1 (February 4th, 2025) ###
* Drops support for PostgreSQL 14 (#7753)
### citus v13.0.0 (January 22, 2025) ###
* Adds support for PostgreSQL 17 (#7699, #7661)
* Adds `JSON_TABLE()` support in distributed queries (#7816)
* Propagates `MERGE ... WHEN NOT MATCHED BY SOURCE` (#7807)
* Propagates `MEMORY` and `SERIALIZE` options of `EXPLAIN` (#7802)
* Adds support for identity columns in distributed partitioned tables (#7785)
* Allows specifying an access method for distributed partitioned tables (#7818)
* Allows exclusion constraints on distributed partitioned tables (#7733)
* Allows configuring sslnegotiation using `citus.node_conn_info` (#7821)
* Avoids wal receiver timeouts during large shard splits (#7229)
* Fixes a bug causing incorrect writing of data to target `MERGE` repartition
command (#7659)
* Fixes a crash that happens because of unsafe catalog access when re-assigning
the global pid after `application_name` changes (#7791)
* Fixes incorrect `VALID UNTIL` setting assumption made for roles when syncing
them to new nodes (#7534)
* Fixes segfault when calling distributed procedure with a parameterized
distribution argument (#7242)
* Fixes server crash when trying to execute `activate_node_snapshot()` on a
single-node cluster (#7552)
* Improves `citus_move_shard_placement()` to fail early if there is a new node
without reference tables yet (#7467)
### citus v12.1.6 (Nov 14, 2024) ###
* Propagates `SECURITY LABEL .. ON ROLE` statements (#7304)
@ -244,8 +29,9 @@ available with some drivers like npgsql and jpgdbc (#7288)
* Allows overwriting host name for all inter-node connections by
supporting "host" parameter in citus.node_conninfo (#7541)
* Avoids distributed deadlocks by changing the order in which the locks are
acquired for the target and reference tables (#7542)
* Changes the order in which the locks are acquired for the target and
reference tables, when a modify request is initiated from a worker
node that is not the "FirstWorkerNode" (#7542)
* Fixes a performance issue when distributing a table that depends on an
extension (#7574)
@ -278,120 +64,10 @@ available with some drivers like npgsql and jpgdbc (#7288)
* Logs username in the failed connection message (#7432)
### citus v11.0.10 (February 15, 2024) ###
* Removes pg_send_cancellation and all references (#7135)
### citus v12.1.2 (February 12, 2024) ###
* Fixes the incorrect column count after ALTER TABLE (#7379)
### citus v12.0.1 (July 11, 2023) ###
* Fixes incorrect default value assumption for VACUUM(PROCESS_TOAST) (#7122)
* Fixes a bug that causes an unexpected error when adding a column
with a NULL constraint (#7093)
* Fixes a bug that could cause COPY logic to skip data in case of OOM (#7152)
* Fixes a bug with deleting colocation groups (#6929)
* Fixes memory and memory contexts leaks in Foreign Constraint Graphs (#7236)
* Fixes shard size bug with too many shards (#7018)
* Fixes the incorrect column count after ALTER TABLE (#7379)
* Improves citus_tables view performance (#7050)
* Makes sure to disallow creating a replicated distributed table
concurrently (#7219)
* Removes pg_send_cancellation and all references (#7135)
### citus v11.3.1 (February 12, 2024) ###
* Disallows MERGE when the query prunes down to zero shards (#6946)
* Fixes a bug related to non-existent objects in DDL commands (#6984)
* Fixes a bug that could cause COPY logic to skip data in case of OOM (#7152)
* Fixes a bug with deleting colocation groups (#6929)
* Fixes incorrect results on fetching scrollable with hold cursors (#7014)
* Fixes memory and memory context leaks in Foreign Constraint Graphs (#7236)
* Fixes replicate reference tables task fail when user is superuser (#6930)
* Fixes the incorrect column count after ALTER TABLE (#7379)
* Improves citus_shard_sizes performance (#7050)
* Makes sure to disallow creating a replicated distributed table
concurrently (#7219)
* Removes pg_send_cancellation and all references (#7135)
### citus v11.2.2 (February 12, 2024) ###
* Fixes a bug in background shard rebalancer where the replicate
reference tables task fails if the current user is not a superuser (#6930)
* Fixes a bug related to non-existent objects in DDL commands (#6984)
* Fixes a bug that could cause COPY logic to skip data in case of OOM (#7152)
* Fixes a bug with deleting colocation groups (#6929)
* Fixes incorrect results on fetching scrollable with hold cursors (#7014)
* Fixes memory and memory context leaks in Foreign Constraint Graphs (#7236)
* Fixes the incorrect column count after ALTER TABLE (#7379)
* Improves failure handling of distributed execution (#7090)
* Makes sure to disallow creating a replicated distributed table
concurrently (#7219)
* Removes pg_send_cancellation (#7135)
### citus v11.1.7 (February 12, 2024) ###
* Fixes memory and memory context leaks in Foreign Constraint Graphs (#7236)
* Fixes a bug related to non-existent objects in DDL commands (#6984)
* Fixes a bug that could cause COPY logic to skip data in case of OOM (#7152)
* Fixes a bug with deleting colocation groups (#6929)
* Fixes incorrect results on fetching scrollable with hold cursors (#7014)
* Fixes the incorrect column count after ALTER TABLE (#7379)
* Improves failure handling of distributed execution (#7090)
* Makes sure to disallow creating a replicated distributed table
concurrently (#7219)
* Removes pg_send_cancellation and all references (#7135)
### citus v11.0.9 (February 12, 2024) ###
* Fixes a bug that could cause COPY logic to skip data in case of OOM (#7152)
* Fixes a bug with deleting colocation groups (#6929)
* Fixes memory and memory context leaks in Foreign Constraint Graphs (#7236)
* Fixes the incorrect column count after ALTER TABLE (#7462)
* Improves failure handling of distributed execution (#7090)
### citus v12.1.1 (November 9, 2023) ###
* Fixes leaking of memory and memory contexts in Citus foreign key cache

@ -11,52 +11,6 @@ sign a Contributor License Agreement (CLA). For an explanation of
why we ask this as well as instructions for how to proceed, see the
[Microsoft CLA](https://cla.opensource.microsoft.com/).
### Devcontainer / Github Codespaces
The easiest way to start contributing is via our devcontainer. This container works both locally in Visual Studio Code with Docker Desktop / Docker for Mac, as well as in [Github Codespaces](https://github.com/features/codespaces). To open the project in VS Code you will need the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers). For Codespaces you will need to [create a new codespace](https://codespace.new/citusdata/citus).
With the extension installed, you can run the following from the command palette to get started:
```
> Dev Containers: Clone Repository in Container Volume...
```
In the subsequent popup paste the url to the repo and hit enter.
```
https://github.com/citusdata/citus
```
This will create an isolated workspace in VS Code, complete with all tools required to build, test, and run the Citus extension. We keep this container up to date with the supported Postgres versions as well as the exact versions of the tooling we use.
To get started quickly, we suggest splitting your terminal once to have two shells: the left one in `/workspaces/citus`, the second one in `/data`. The left terminal is used to interact with the project, the right one with a testing cluster.
To install Citus from source, we run `make install -s` in the first terminal. Once installed, you can start a Citus cluster in the second terminal via `citus_dev make citus`. The cluster runs in the background and can be interacted with via `citus_dev`, which also gives an overview of the available commands.
With the Citus cluster running, you can connect to the coordinator in the first terminal via `psql -p9700`. Because the coordinator is the most common entrypoint, the `PGPORT` environment variable is set accordingly, so a simple `psql` will connect directly to the coordinator.
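Putting the steps above together, the two-terminal flow looks like this:
```bash
# terminal 1 (/workspaces/citus): build and install Citus from source
make install -s

# terminal 2 (/data): start a development cluster in the background
citus_dev make citus

# terminal 1: connect to the coordinator (PGPORT is preset to 9700)
psql
```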
### Debugging in VS Code
1. Start Debugging: Press F5 in VS Code to start debugging. When prompted, you'll need to attach the debugger to the appropriate PostgreSQL process.
2. Identify the Process: If you're running a psql command, take note of the PID that appears in your psql prompt. For example:
```
[local] citus@citus:9700 (PID: 5436)=#
```
This PID (5436 in this case) indicates the process that you should attach the debugger to.
If you are uncertain about which process to attach to, you can list all running PostgreSQL processes using the following command:
```
ps aux | grep postgres
```
Look for the process associated with the PID you noted. For example:
```
citus 5436 0.0 0.0 0 0 ? S 14:00 0:00 postgres: citus citus
```
3. Attach the Debugger: Once you've identified the correct PID, select that process when prompted in VS Code to attach the debugger. You should now be able to debug the PostgreSQL session tied to the psql command.
4. Set Breakpoints and Debug: With the debugger attached, you can set breakpoints within the code. This allows you to step through the code execution, inspect variables, and fully debug the PostgreSQL instance running in your container.
### Getting and building
[PostgreSQL documentation](https://www.postgresql.org/support/versioning/) has a
@ -87,8 +41,6 @@ that are missing in earlier minor versions.
cd citus
./configure
# If you have already installed the project, you need to clean it first
make clean
make
make install
# Optionally, you might instead want to use `make install-all`
@ -127,8 +79,6 @@ that are missing in earlier minor versions.
git clone https://github.com/citusdata/citus.git
cd citus
./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
@ -179,8 +129,6 @@ that are missing in earlier minor versions.
git clone https://github.com/citusdata/citus.git
cd citus
PG_CONFIG=/usr/pgsql-14/bin/pg_config ./configure
# If you have already installed the project previously, you need to clean it first
make clean
make
sudo make install
# Optionally, you might instead want to use `sudo make install-all`
@ -197,7 +145,43 @@ that are missing in earlier minor versions.
### Following our coding conventions
Our coding conventions are documented in [STYLEGUIDE.md](STYLEGUIDE.md).
CircleCI will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.68.1:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
cd uncrustify-uncrustify-0.68.1/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
```bash
cat > .git/hooks/pre-commit << __EOF__
#!/bin/bash
citus_indent --check --diff || { citus_indent --diff; exit 1; }
__EOF__
chmod +x .git/hooks/pre-commit
```
### Making SQL changes
@ -250,34 +234,3 @@ Any other SQL you can put directly in the main sql file, e.g.
### Running tests
See [`src/test/regress/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/README.md)
### Documentation
User-facing documentation is published on [docs.citusdata.com](https://docs.citusdata.com/). When adding a new feature, function, or setting, you can open a pull request or issue against the [Citus docs repo](https://github.com/citusdata/citus_docs/).
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md). It is currently a single file for ease of searching. Please update the documentation if you make any changes that affect the design or add major new features.
# Making a pull request ready for reviews
Asking for help and asking for reviews are two different things. When you're asking for help, you're asking for someone to help you with something that you're not expected to know.
But when you're asking for a review, you're asking for someone to review your work and provide feedback. So before asking for a review, you're expected to make sure that:
* Your changes don't introduce **unnecessary line additions / deletions / style changes in unrelated files / lines**.
* All CI jobs are **passing**, including **style checks** and **flaky test detection jobs**. Note that if you're an external contributor, you don't have to wait for CI jobs to run (and finish) because they don't get triggered automatically for external contributors.
* Your PR has the necessary amount of **tests** and they're passing.
* You split the work into **separate PRs** as much as possible, e.g., a prerequisite bugfix, a refactoring, etc.
* Your PR doesn't introduce a typo or something that you can easily fix yourself.
* After all CI jobs pass, the code-coverage measurement job (CodeCov as of today) kicks in. That's why it's important to make the **tests pass** first. At that point, you're expected to check the **CodeCov annotations** that can be seen in the **Files Changed** tab and to make sure it doesn't complain about any lines that are not covered. For example, it's ok if CodeCov complains about an `ereport()` call that you put in for an "unexpected-but-better-than-crashing" case, but it's not ok if it complains about an uncovered `if` branch that you added.
* And finally, perform a **self-review** to make sure that:
* Code and code-comments reflect the idea **without requiring an extra explanation** via a chat message / email / PR comment.
This is important because we don't expect developers to reach out to the author / read the whole discussion in the PR to understand the idea behind a commit merged into the `main` branch.
* PR description is clear enough.
* If-and-only-if you're **introducing a user facing change / bugfix**, your PR has a line that starts with `DESCRIPTION: <Present simple tense word that starts with a capital letter, e.g., Adds support for / Fixes / Disallows>`.
* **Commit messages** are clear enough if the commits are doing logically different things.

@ -1,43 +0,0 @@
# Devcontainer
## Coredumps
When postgres/citus crashes, there is the option to create a coredump, which is useful for debugging the issue. Coredumps are enabled in the devcontainer by default. However, not all environments are configured correctly out of the box. The most important configuration that is not standardized is the `core_pattern`. The configuration can be verified from the container; however, you cannot change this setting from inside the container, as the filesystem containing it is mounted read-only while inside the container.
To verify whether corefiles are written, run the following command in a terminal. It shows the filename pattern with which the corefile will be written.
```bash
cat /proc/sys/kernel/core_pattern
```
This should be configured with a relative path or simply a filename, such as `core`. When your environment shows an absolute path, you will need to change this setting. How to do so depends highly on the underlying system, as the setting needs to be changed on the kernel of the host running the container.
You can put any pattern in `/proc/sys/kernel/core_pattern` as you see fit. For example, you can add the PID to the core pattern in one of two ways (see the sketch after this list):
- You either include `%p` in the core_pattern. This gets substituted with the PID of the crashing process.
- Alternatively you could set `/proc/sys/kernel/core_uses_pid` to `1` in the same way as you set `core_pattern`. This will append the PID to the corefile if `%p` is not explicitly contained in the core_pattern.
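A minimal sketch of both options; these write kernel settings, so run them on the host with root privileges:
```bash
# option 1: substitute the crashing process's PID into the pattern
echo "core.%p" | sudo tee /proc/sys/kernel/core_pattern

# option 2: keep a plain pattern and have the kernel append the PID
echo "core" | sudo tee /proc/sys/kernel/core_pattern
echo 1 | sudo tee /proc/sys/kernel/core_uses_pid
```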
When a coredump is written, you can use the debug/launch configuration `Open core file`, which is preconfigured in the devcontainer. This will open a file prompt that lists all coredumps found in your workspace. When you want to debug coredumps from `citus_dev` clusters that run in your `/data` directory, you can add the data directory to your workspace: in the command palette of VS Code you can run `>Workspace: Add Folder to Workspace...` and select the `/data` directory. This will allow you to open the coredumps from the `/data` directory in the `Open core file` debug configuration.
### Windows (docker desktop)
When running in Docker Desktop on Windows you will most likely need to change this setting. The Linux guest in WSL2 that runs your container is the `docker-desktop` environment. The easiest way to get onto the host, where you can change this setting, is to open a PowerShell window and verify you have the docker-desktop environment listed.
```powershell
wsl --list
```
Among others this should list both `docker-desktop` and `docker-desktop-data`. You can then open a shell in the `docker-desktop` environment.
```powershell
wsl -d docker-desktop
```
Inside this shell you can verify that you have the right environment by running
```bash
cat /proc/sys/kernel/core_pattern
```
This should show the same configuration as the one you see inside the devcontainer. You can then change the setting by running the following command.
This will change the setting for the current session. If you want to make the change permanent you will need to add this to a startup script.
```bash
echo "core" > /proc/sys/kernel/core_pattern
```

@ -61,7 +61,6 @@ check-style:
# depend on install-all so that downgrade scripts are installed as well
check: all install-all
# explicitly does not use $(MAKE) to avoid parallelism
make -C src/test/regress check
$(MAKE) -C src/test/regress check-full
.PHONY: all check clean install install-downgrades install-all

@ -1,10 +1,10 @@
| **<br/>The Citus database is 100% open source.<br/><img width=1000/><br/>Learn what's new in the [Citus 13.0 release blog](https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/) and the [Citus Updates page](https://www.citusdata.com/updates/).<br/><br/>**|
| **<br/>The Citus database is 100% open source.<br/><img width=1000/><br/>Learn what's new in the [Citus 12.0 release blog](https://www.citusdata.com/blog/2023/07/18/citus-12-schema-based-sharding-comes-to-postgres/) and the [Citus Updates page](https://www.citusdata.com/updates/).<br/><br/>**|
|---|
<br/>
![Citus Banner](images/citus-readme-banner.png)
![Citus Banner](/citus-readme-banner.png)
[![Latest Docs](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://docs.citusdata.com/)
[![Stack Overflow](https://img.shields.io/badge/Stack%20Overflow-%20-545353?logo=Stack%20Overflow)](https://stackoverflow.com/questions/tagged/citus)
@ -31,7 +31,7 @@ You can use these Citus superpowers to make your Postgres database scale-out rea
Our [SIGMOD '21](https://2021.sigmod.org/) paper [Citus: Distributed PostgreSQL for Data-Intensive Applications](https://doi.org/10.1145/3448016.3457551) gives a more detailed look into what Citus is, how it works, and why it works that way.
![Citus scales out from a single node](images/citus-scale-out.png)
![Citus scales out from a single node](/citus-scale-out.png)
Since Citus is an extension to Postgres, you can use Citus with the latest Postgres versions. And Citus works seamlessly with the PostgreSQL tools and extensions you are already familiar with.
@ -95,14 +95,14 @@ Install packages on Ubuntu / Debian:
```bash
curl https://install.citusdata.com/community/deb.sh > add-citus-repo.sh
sudo bash add-citus-repo.sh
sudo apt-get -y install postgresql-17-citus-13.0
sudo apt-get -y install postgresql-15-citus-12.0
```
Install packages on Red Hat:
Install packages on CentOS / Red Hat:
```bash
curl https://install.citusdata.com/community/rpm.sh > add-citus-repo.sh
sudo bash add-citus-repo.sh
sudo yum install -y citus130_17
sudo yum install -y citus120_15
```
To add Citus to your local PostgreSQL database, add the following to `postgresql.conf`:
@ -423,14 +423,12 @@ A Citus database cluster grows from a single PostgreSQL node into a cluster by a
Data in distributed tables is stored in “shards”, which are actually just regular PostgreSQL tables on the worker nodes. When querying a distributed table on the coordinator node, Citus will send regular SQL queries to the worker nodes. That way, all the usual PostgreSQL optimizations and extensions can automatically be used with Citus.
![Citus architecture](images/citus-architecture.png)
![Citus architecture](/citus-architecture.png)
When you send a query in which all (co-located) distributed tables have the same filter on the distribution column, Citus will automatically detect that and send the whole query to the worker node that stores the data. That way, arbitrarily complex queries are supported with minimal routing overhead, which is especially useful for scaling transactional workloads. If queries do not have a specific filter, each shard is queried in parallel, which is especially useful in analytical workloads. The Citus distributed executor is adaptive and is designed to handle both query types at the same time on the same system under high concurrency, which enables large-scale mixed workloads.
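As a sketch (the table and values here are hypothetical; `create_distributed_table` is the Citus UDF for distributing a table by a column):
```sql
CREATE TABLE events (tenant_id bigint, event_id bigint, payload jsonb);
SELECT create_distributed_table('events', 'tenant_id');

-- all filters are on the distribution column: the whole query is routed
-- to the single worker node that stores tenant 42's shard
SELECT count(*) FROM events WHERE tenant_id = 42;

-- no distribution-column filter: every shard is queried in parallel
SELECT count(*) FROM events;
```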
The schema and metadata of distributed tables and reference tables are automatically synchronized to all the nodes in the cluster. That way, you can connect to any node to run distributed queries. Schema changes and cluster administration still need to go through the coordinator.
Detailed descriptions of the implementation for Citus developers are provided in the [Citus Technical Documentation](src/backend/distributed/README.md).
## When to use Citus
Citus is uniquely capable of scaling both analytical and transactional workloads with up to petabytes of data. Use cases in which Citus is commonly used:
@ -440,21 +438,21 @@ Citus is uniquely capable of scaling both analytical and transactional workloads
The advanced parallel, distributed query engine in Citus combined with PostgreSQL features such as [array types](https://www.postgresql.org/docs/current/arrays.html), [JSONB](https://www.postgresql.org/docs/current/datatype-json.html), [lateral joins](https://heap.io/blog/engineering/postgresqls-powerful-new-join-type-lateral), and extensions like [HyperLogLog](https://github.com/citusdata/postgresql-hll) and [TopN](https://github.com/citusdata/postgresql-topn) allow you to build responsive analytics dashboards no matter how many customers or how much data you have.
Example real-time analytics users: [Algolia](https://www.citusdata.com/customers/algolia)
Example real-time analytics users: [Algolia](https://www.citusdata.com/customers/algolia), [Heap](https://www.citusdata.com/customers/heap)
- **[Time series data](http://docs.citusdata.com/en/stable/use_cases/timeseries.html)**:
Citus enables you to process and analyze very large amounts of time series data. The biggest Citus clusters store well over a petabyte of time series data and ingest terabytes per day.
Citus integrates seamlessly with [Postgres table partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html) and has [built-in functions for partitioning by time](https://www.citusdata.com/blog/2021/10/22/how-to-scale-postgres-for-time-series-data-with-citus/), which can speed up queries and writes on time series tables. You can take advantage of Citus's parallel, distributed query engine for fast analytical queries, and use the built-in *columnar storage* to compress old partitions.
Example users: [MixRank](https://www.citusdata.com/customers/mixrank)
Example users: [MixRank](https://www.citusdata.com/customers/mixrank), [Windows team](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/architecting-petabyte-scale-analytics-by-scaling-out-postgres-on/ba-p/969685)
- **[Software-as-a-service (SaaS) applications](http://docs.citusdata.com/en/stable/use_cases/multi_tenant.html)**:
SaaS and other multi-tenant applications need to be able to scale their database as the number of tenants/customers grows. Citus enables you to transparently shard a complex data model by the tenant dimension, so your database can grow along with your business.
By distributing tables along a tenant ID column and co-locating data for the same tenant, Citus can horizontally scale complex (tenant-scoped) queries, transactions, and foreign key graphs. Reference tables and distributed DDL commands make database management a breeze compared to manual sharding. On top of that, you have a built-in distributed query engine for doing cross-tenant analytics inside the database.
Example multi-tenant SaaS users: [Salesloft](https://fivetran.com/case-studies/replicating-sharded-databases-a-case-study-of-salesloft-citus-data-and-fivetran), [ConvertFlow](https://www.citusdata.com/customers/convertflow)
Example multi-tenant SaaS users: [Copper](https://www.citusdata.com/customers/copper), [Salesloft](https://fivetran.com/case-studies/replicating-sharded-databases-a-case-study-of-salesloft-citus-data-and-fivetran), [ConvertFlow](https://www.citusdata.com/customers/convertflow)
- **[Microservices](https://docs.citusdata.com/en/stable/get_started/tutorial_microservices.html)**: Citus supports schema based sharding, which allows distributing regular database schemas across many machines. This sharding methodology fits nicely with typical microservices architecture, where storage is fully owned by the service and hence can't share the same schema definition with other tenants. Citus allows distributing horizontally scalable state across services, solving one of the [main problems](https://stackoverflow.blog/2020/11/23/the-macro-problem-with-microservices/) of microservices.
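A sketch of schema-based sharding (the schema name is made up; `citus.enable_schema_based_sharding` is the controlling GUC, listed later in this diff):
```sql
-- while enabled, newly created schemas become distributed schemas
SET citus.enable_schema_based_sharding TO on;

CREATE SCHEMA billing;
CREATE TABLE billing.invoices (invoice_id bigint, total numeric);
-- billing's tables are co-located and can be moved between nodes as a group
```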

@ -1,160 +0,0 @@
# Coding style
The existing code style in our code base is not super consistent. There are multiple reasons for that. One big reason is that our code base is relatively old and our standards have changed over time. The second big reason is that our style guide differs from the Postgres style guide, and some code is copied from the Postgres source code and slightly modified. The rules below are for new code. If you're changing existing code that uses a different style, use your best judgement to decide whether to follow the rules here or to match the existing style.
## Using citus_indent
CI pipeline will automatically reject any PRs which do not follow our coding
conventions. The easiest way to ensure your PR adheres to those conventions is
to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
tool. This tool uses `uncrustify` under the hood.
```bash
# Uncrustify changes the way it formats code every release a bit. To make sure
# everyone formats consistently we use version 0.68.1:
curl -L https://github.com/uncrustify/uncrustify/archive/uncrustify-0.68.1.tar.gz | tar xz
cd uncrustify-uncrustify-0.68.1/
mkdir build
cd build
cmake ..
make -j5
sudo make install
cd ../..
git clone https://github.com/citusdata/tools.git
cd tools
make uncrustify/.install
```
Once you've done that, you can run the `make reindent` command from the top
directory to recursively check and correct the style of any source files in the
current directory. Under the hood, `make reindent` will run `citus_indent` and
some other style corrections for you.
You can also run the following in the directory of this repository to
automatically format all the files that you have changed before committing:
```bash
cat > .git/hooks/pre-commit << __EOF__
#!/bin/bash
citus_indent --check --diff || { citus_indent --diff; exit 1; }
__EOF__
chmod +x .git/hooks/pre-commit
```
## Other rules we follow that citus_indent does not enforce
* We almost always use **CamelCase** when naming functions, variables, etc., **not snake_case**.
* We also have the habit of using **lowerCamelCase** for some variables named after their type or their function name, as shown in the examples:
```c
bool IsCitusExtensionLoaded = false;
bool
IsAlterTableRenameStmt(RenameStmt *renameStmt)
{
AlterTableCmd *alterTableCommand = NULL;
..
..
bool isAlterTableRenameStmt = false;
..
}
```
* We **start functions with a comment**:
```c
/*
* MyNiceFunction <something in present simple tense, e.g., processes / returns / checks / takes X as input / does Y> ..
* <some more nice words> ..
* <some more nice words> ..
*/
<static?> <return type>
MyNiceFunction(..)
{
..
..
}
```
* `#include`s need to be sorted based on the ordering below and then alphabetically, and we should not include what we don't need in a file (see the sketch after this list):
  * System includes (e.g., #include <...>)
  * Postgres.h (i.e., #include "postgres.h")
  * Toplevel imports from Postgres, not contained in a directory (e.g., #include "miscadmin.h")
  * General Postgres includes (e.g., #include "nodes/...")
  * Toplevel Citus includes, not contained in a directory (e.g., #include "citus_version.h")
  * Columnar includes (e.g., #include "columnar/...")
  * Distributed includes (e.g., #include "distributed/...")
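A minimal sketch of a file header following this ordering (the specific headers are illustrative, not prescribed):
```c
#include <stddef.h>

#include "postgres.h"

#include "miscadmin.h"

#include "nodes/parsenodes.h"
#include "utils/builtins.h"

#include "citus_version.h"

#include "columnar/columnar.h"

#include "distributed/metadata_cache.h"
```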
* Comments:
```c
/* single line comments start with a lower-case */
/*
* We start multi-line comments with a capital letter
* and keep adding a star to the beginning of each line
* until we close the comment with a star and a slash.
*/
```
* Order of function implementations and their declarations in a file:
We define static functions after the functions that call them. For example:
```c
#include<..>
#include<..>
..
..
typedef struct
{
..
..
} MyNiceStruct;
..
..
PG_FUNCTION_INFO_V1(my_nice_udf1);
PG_FUNCTION_INFO_V1(my_nice_udf2);
..
..
// .. somewhere on top of the file …
static void MyNiceStaticlyDeclaredFunction1(…);
static void MyNiceStaticlyDeclaredFunction2(…);
..
..
void
MyNiceFunctionExternedViaHeaderFile(..)
{
..
..
MyNiceStaticlyDeclaredFunction1(..);
..
..
MyNiceStaticlyDeclaredFunction2(..);
..
}
..
..
// we define this first because it's called by MyNiceFunctionExternedViaHeaderFile()
// before MyNiceStaticlyDeclaredFunction2()
static void
MyNiceStaticlyDeclaredFunction1(…)
{
}
..
..
// then we define this
static void
MyNiceStaticlyDeclaredFunction2(…)
{
}
```

@ -4,22 +4,7 @@ set -euo pipefail
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# Find the line that exactly matches "RegisterCitusConfigVariables(void)" in
# shared_library_init.c. grep command returns something like
# "934:RegisterCitusConfigVariables(void)" and we extract the line number
# with cut.
RegisterCitusConfigVariables_begin_linenumber=$(grep -n "^RegisterCitusConfigVariables(void)$" src/backend/distributed/shared_library_init.c | cut -d: -f1)
# Consider the lines starting from $RegisterCitusConfigVariables_begin_linenumber,
# grep the first line that starts with "}" and extract the line number with cut
# as in the previous step.
RegisterCitusConfigVariables_length=$(tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | grep -n -m 1 "^}$" | cut -d: -f1)
# extract the function definition of RegisterCitusConfigVariables into a temp file
tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | head -n $(($RegisterCitusConfigVariables_length)) > RegisterCitusConfigVariables_func_def.out
# extract citus gucs in the form of <tab><tab>"citus.X"
grep -P "^[\t][\t]\"citus\.[a-zA-Z_0-9]+\"" RegisterCitusConfigVariables_func_def.out > gucs.out
LC_COLLATE=C sort -c gucs.out
# extract citus gucs in the form of "citus.X"
grep -o -E "(\.*\"citus\.\w+\")," src/backend/distributed/shared_library_init.c > gucs.out
sort -c gucs.out
rm gucs.out
rm RegisterCitusConfigVariables_func_def.out
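Both the old and the new variant ultimately rely on `sort -c`, which exits non-zero and reports the first out-of-order line; a toy illustration with a hypothetical input:
```bash
printf '"citus.shard_count",\n"citus.cluster_name",\n' > gucs.out
sort -c gucs.out   # fails: sort: gucs.out:2: disorder: "citus.cluster_name",
rm gucs.out
```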

Image diffs (file contents not shown): before/after sizes unchanged at 94 KiB, 22 KiB, and 18 KiB.

configure vendored

File diff suppressed because it is too large.

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [13.2devel])
AC_INIT([Citus], [12.1.7])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands
@ -80,7 +80,7 @@ AC_SUBST(with_pg_version_check)
if test "$with_pg_version_check" = no; then
AC_MSG_NOTICE([building against PostgreSQL $version_num (skipped compatibility check)])
elif test "$version_num" != '15' -a "$version_num" != '16' -a "$version_num" != '17'; then
elif test "$version_num" != '14' -a "$version_num" != '15' -a "$version_num" != '16'; then
AC_MSG_ERROR([Citus is not compatible with the detected PostgreSQL version ${version_num}.])
else
AC_MSG_NOTICE([building against PostgreSQL $version_num])

gucs.out (new file)

@ -0,0 +1,133 @@
"citus.all_modifications_commutative",
"citus.allow_modifications_from_workers_to_replicated_tables",
"citus.allow_nested_distributed_execution",
"citus.allow_unsafe_constraints",
"citus.allow_unsafe_locks_from_workers",
"citus.background_task_queue_interval",
"citus.check_available_space_before_move",
"citus.cluster_name",
"citus.coordinator_aggregation_strategy",
"citus.copy_switchover_threshold",
"citus.count_distinct_error_rate",
"citus.cpu_priority",
"citus.cpu_priority_for_logical_replication_senders",
"citus.create_object_propagation",
"citus.defer_drop_after_shard_move",
"citus.defer_drop_after_shard_split",
"citus.defer_shard_delete_interval",
"citus.desired_percent_disk_available_after_move",
"citus.distributed_deadlock_detection_factor",
"citus.enable_alter_database_owner",
"citus.enable_alter_role_propagation",
"citus.enable_alter_role_set_propagation",
"citus.enable_binary_protocol",
"citus.enable_change_data_capture",
"citus.enable_cluster_clock",
"citus.enable_cost_based_connection_establishment",
"citus.enable_create_role_propagation",
"citus.enable_create_type_propagation",
"citus.enable_ddl_propagation",
"citus.enable_deadlock_prevention",
"citus.enable_fast_path_router_planner",
"citus.enable_local_execution",
"citus.enable_local_reference_table_foreign_keys",
"citus.enable_manual_changes_to_shards",
"citus.enable_manual_metadata_changes_for_user",
"citus.enable_metadata_sync",
"citus.enable_non_colocated_router_query_pushdown",
"citus.enable_repartition_joins",
"citus.enable_repartitioned_insert_select",
"citus.enable_router_execution",
"citus.enable_schema_based_sharding",
"citus.enable_single_hash_repartition_joins",
"citus.enable_statistics_collection",
"citus.enable_unique_job_ids",
"citus.enable_unsafe_triggers",
"citus.enable_unsupported_feature_messages",
"citus.enable_version_checks",
"citus.enforce_foreign_key_restrictions",
"citus.enforce_object_restrictions_for_local_objects",
"citus.executor_slow_start_interval",
"citus.explain_all_tasks",
"citus.explain_analyze_sort_method",
"citus.explain_distributed_queries",
"citus.force_max_query_parallelization",
"citus.function_opens_transaction_block",
"citus.grep_remote_commands",
"citus.hide_citus_dependent_objects",
"citus.hide_shards_from_app_name_prefixes",
"citus.isolation_test_session_process_id",
"citus.isolation_test_session_remote_process_id",
"citus.limit_clause_row_fetch_count",
"citus.local_copy_flush_threshold",
"citus.local_hostname",
"citus.local_shared_pool_size",
"citus.local_table_join_policy",
"citus.log_distributed_deadlock_detection",
"citus.log_intermediate_results",
"citus.log_local_commands",
"citus.log_multi_join_order",
"citus.log_remote_commands",
"citus.logical_replication_timeout",
"citus.main_db",
"citus.max_adaptive_executor_pool_size",
"citus.max_background_task_executors",
"citus.max_background_task_executors_per_node",
"citus.max_cached_connection_lifetime",
"citus.max_cached_conns_per_worker",
"citus.max_client_connections",
"citus.max_high_priority_background_processes",
"citus.max_intermediate_result_size",
"citus.max_matview_size_to_auto_recreate",
"citus.max_rebalancer_logged_ignored_moves",
"citus.max_shared_pool_size",
"citus.max_worker_nodes_tracked",
"citus.metadata_sync_interval",
"citus.metadata_sync_mode",
"citus.metadata_sync_retry_interval",
"citus.mitmfifo",
"citus.multi_shard_modify_mode",
"citus.multi_task_query_log_level",
"citus.next_cleanup_record_id",
"citus.next_operation_id",
"citus.next_placement_id",
"citus.next_shard_id",
"citus.node_connection_timeout",
"citus.node_conninfo",
"citus.override_table_visibility",
"citus.prevent_incomplete_connection_establishment",
"citus.propagate_session_settings_for_loopback_connection",
"citus.propagate_set_commands",
"citus.rebalancer_by_disk_size_base_cost",
"citus.recover_2pc_interval",
"citus.remote_copy_flush_threshold",
"citus.remote_task_check_interval",
"citus.repartition_join_bucket_count_per_node",
"citus.replicate_reference_tables_on_activate",
"citus.replication_model",
"citus.running_under_citus_test_suite",
"citus.select_opens_transaction_block",
"citus.shard_count",
"citus.shard_replication_factor",
"citus.show_shards_for_app_name_prefixes",
"citus.skip_advisory_lock_permission_checks",
"citus.skip_constraint_validation",
"citus.skip_jsonb_validation_in_copy",
"citus.sort_returning",
"citus.stat_statements_max",
"citus.stat_statements_purge_interval",
"citus.stat_statements_track",
"citus.stat_tenants_limit",
"citus.stat_tenants_log_level",
"citus.stat_tenants_period",
"citus.stat_tenants_track",
"citus.stat_tenants_untracked_sample_rate",
"citus.subquery_pushdown",
"citus.task_assignment_policy",
"citus.task_executor_type",
"citus.use_citus_managed_tables",
"citus.use_secondary_nodes",
"citus.values_materialization_threshold",
"citus.version",
"citus.worker_min_messages",
"citus.writable_standby_coordinator",

(eight binary files not rendered in the diff; previous sizes 95 KiB, 22 KiB, 102 KiB, 29 KiB, 69 KiB, 111 KiB, 12 KiB, and 168 KiB)

@ -1,6 +1,6 @@
# Columnar extension
comment = 'Citus Columnar extension'
default_version = '12.2-1'
default_version = '11.3-1'
module_pathname = '$libdir/citus_columnar'
relocatable = false
schema = pg_catalog


@ -32,6 +32,8 @@
#include "optimizer/paths.h"
#include "optimizer/plancat.h"
#include "optimizer/restrictinfo.h"
#include "citus_version.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "parser/parse_relation.h"
#include "parser/parsetree.h"
@ -43,8 +45,6 @@
#include "utils/selfuncs.h"
#include "utils/spccache.h"
#include "citus_version.h"
#include "columnar/columnar.h"
#include "columnar/columnar_customscan.h"
#include "columnar/columnar_metadata.h"
@ -363,7 +363,7 @@ ColumnarGetRelationInfoHook(PlannerInfo *root, Oid relationObjectId,
/* disable index-only scan */
IndexOptInfo *indexOptInfo = NULL;
foreach_declared_ptr(indexOptInfo, rel->indexlist)
foreach_ptr(indexOptInfo, rel->indexlist)
{
memset(indexOptInfo->canreturn, false, indexOptInfo->ncolumns * sizeof(bool));
}
@ -381,7 +381,7 @@ RemovePathsByPredicate(RelOptInfo *rel, PathPredicate removePathPredicate)
List *filteredPathList = NIL;
Path *path = NULL;
foreach_declared_ptr(path, rel->pathlist)
foreach_ptr(path, rel->pathlist)
{
if (!removePathPredicate(path))
{
@ -428,7 +428,7 @@ static void
CostColumnarPaths(PlannerInfo *root, RelOptInfo *rel, Oid relationId)
{
Path *path = NULL;
foreach_declared_ptr(path, rel->pathlist)
foreach_ptr(path, rel->pathlist)
{
if (IsA(path, IndexPath))
{
@ -783,7 +783,7 @@ ExtractPushdownClause(PlannerInfo *root, RelOptInfo *rel, Node *node)
List *pushdownableArgs = NIL;
Node *boolExprArg = NULL;
foreach_declared_ptr(boolExprArg, boolExpr->args)
foreach_ptr(boolExprArg, boolExpr->args)
{
Expr *pushdownableArg = ExtractPushdownClause(root, rel,
(Node *) boolExprArg);
@ -1051,15 +1051,6 @@ FindCandidateRelids(PlannerInfo *root, RelOptInfo *rel, List *joinClauses)
candidateRelids = bms_del_members(candidateRelids, rel->relids);
candidateRelids = bms_del_members(candidateRelids, rel->lateral_relids);
/*
* For the relevant PG16 commit requiring this addition:
* postgres/postgres@2489d76
*/
#if PG_VERSION_NUM >= PG_VERSION_16
candidateRelids = bms_del_members(candidateRelids, root->outer_join_rels);
#endif
return candidateRelids;
}
@ -1321,8 +1312,11 @@ AddColumnarScanPath(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte,
cpath->methods = &ColumnarScanPathMethods;
#if (PG_VERSION_NUM >= PG_VERSION_15)
/* necessary to avoid extra Result node in PG15 */
cpath->flags = CUSTOMPATH_SUPPORT_PROJECTION;
#endif
/*
* populate generic path information
@ -1556,7 +1550,7 @@ ColumnarPerStripeScanCost(RelOptInfo *rel, Oid relationId, int numberOfColumnsRe
uint32 maxColumnCount = 0;
uint64 totalStripeSize = 0;
StripeMetadata *stripeMetadata = NULL;
foreach_declared_ptr(stripeMetadata, stripeList)
foreach_ptr(stripeMetadata, stripeList)
{
totalStripeSize += stripeMetadata->dataLength;
maxColumnCount = Max(maxColumnCount, stripeMetadata->columnCount);
@ -1930,6 +1924,11 @@ ColumnarScan_EndCustomScan(CustomScanState *node)
*/
TableScanDesc scanDesc = node->ss.ss_currentScanDesc;
/*
* Free the exprcontext
*/
ExecFreeExprContext(&node->ss.ps);
/*
* clean out the tuple table
*/
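Throughout this diff, the foreach_declared_ptr/foreach_declared_int/foreach_declared_oid spellings from main are replaced by the older foreach_ptr/foreach_int/foreach_oid names used on the 12.1 branch. Both families are list-iteration macros in which the caller declares the loop variable (the "Path *path = NULL;" lines above); a sketch of the general shape of such a macro over PostgreSQL's List API, not Citus's exact definition:

#include "postgres.h"
#include "nodes/pg_list.h"

/* iterate over a List of pointers; "var" must be declared by the caller */
#define foreach_ptr_sketch(var, lst) \
	for (ListCell *var##CellDoNotUse = list_head(lst); \
		 var##CellDoNotUse != NULL && \
		 (((var) = lfirst(var##CellDoNotUse)) || true); \
		 var##CellDoNotUse = lnext(lst, var##CellDoNotUse))

PostgreSQL 17 introduced its own foreach_ptr macro in pg_list.h, which is why the newer Citus code switched to the foreach_declared_* names.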


@ -24,7 +24,6 @@
#include "postgres.h"
#include "miscadmin.h"
#include "port.h"
#include "safe_lib.h"
#include "access/heapam.h"
@ -43,6 +42,19 @@
#include "executor/spi.h"
#include "lib/stringinfo.h"
#include "nodes/execnodes.h"
#include "citus_version.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
#include "distributed/listutils.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "parser/parse_relation.h"
#endif
#include "port.h"
#include "storage/fd.h"
#include "storage/lmgr.h"
#include "storage/procarray.h"
@ -52,18 +64,7 @@
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "citus_version.h"
#include "pg_version_constants.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
#include "distributed/listutils.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "parser/parse_relation.h"
#include "storage/relfilelocator.h"
#include "utils/relfilenumbermap.h"
#else
@ -1685,7 +1686,7 @@ DeleteTupleAndEnforceConstraints(ModifyState *state, HeapTuple heapTuple)
simple_heap_delete(state->rel, tid);
/* execute AFTER ROW DELETE Triggers to enforce constraints */
ExecARDeleteTriggers(estate, resultRelInfo, tid, NULL, NULL, false);
ExecARDeleteTriggers_compat(estate, resultRelInfo, tid, NULL, NULL, false);
}
@ -2041,7 +2042,7 @@ GetHighestUsedRowNumber(uint64 storageId)
List *stripeMetadataList = ReadDataFileStripeList(storageId,
GetTransactionSnapshot());
StripeMetadata *stripeMetadata = NULL;
foreach_declared_ptr(stripeMetadata, stripeMetadataList)
foreach_ptr(stripeMetadata, stripeMetadataList)
{
highestRowNumber = Max(highestRowNumber,
StripeGetHighestRowNumber(stripeMetadata));


@ -880,7 +880,7 @@ ReadChunkGroupNextRow(ChunkGroupReadState *chunkGroupReadState, Datum *columnVal
memset(columnNulls, true, sizeof(bool) * chunkGroupReadState->columnCount);
int attno;
foreach_declared_int(attno, chunkGroupReadState->projectedColumnList)
foreach_int(attno, chunkGroupReadState->projectedColumnList)
{
const ChunkData *chunkGroupData = chunkGroupReadState->chunkGroupData;
const int rowIndex = chunkGroupReadState->currentRow;
@ -1489,7 +1489,7 @@ ProjectedColumnMask(uint32 columnCount, List *projectedColumnList)
bool *projectedColumnMask = palloc0(columnCount * sizeof(bool));
int attno;
foreach_declared_int(attno, projectedColumnList)
foreach_int(attno, projectedColumnList)
{
/* attno is 1-indexed; projectedColumnMask is 0-indexed */
int columnIndex = attno - 1;


@ -877,7 +877,7 @@ columnar_relation_set_new_filelocator(Relation rel,
*freezeXid = RecentXmin;
*minmulti = GetOldestMultiXactId();
SMgrRelation srel = RelationCreateStorage(*newrlocator, persistence, true);
SMgrRelation srel = RelationCreateStorage_compat(*newrlocator, persistence, true);
ColumnarStorageInit(srel, ColumnarMetadataNewStorageId());
InitColumnarOptions(rel->rd_id);
@ -1424,32 +1424,15 @@ ConditionalLockRelationWithTimeout(Relation rel, LOCKMODE lockMode, int timeout,
static bool
columnar_scan_analyze_next_block(TableScanDesc scan,
#if PG_VERSION_NUM >= PG_VERSION_17
ReadStream *stream)
#else
BlockNumber blockno,
columnar_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
BufferAccessStrategy bstrategy)
#endif
{
/*
* Our access method is not pages based, i.e. tuples are not confined
* to pages boundaries. So not much to do here. We return true anyway
* so acquire_sample_rows() in analyze.c would call our
* columnar_scan_analyze_next_tuple() callback.
* In PG17, we return false in case there is no buffer left, since
* the outer loop changed in acquire_sample_rows(), and it is
* expected for the scan_analyze_next_block function to check whether
* there are any blocks left in the block sampler.
*/
#if PG_VERSION_NUM >= PG_VERSION_17
Buffer buf = read_stream_next_buffer(stream, NULL);
if (!BufferIsValid(buf))
{
return false;
}
ReleaseBuffer(buf);
#endif
return true;
}
@ -2245,6 +2228,7 @@ ColumnarProcessAlterTable(AlterTableStmt *alterTableStmt, List **columnarOptions
columnarRangeVar = alterTableStmt->relation;
}
}
#if PG_VERSION_NUM >= PG_VERSION_15
else if (alterTableCmd->subtype == AT_SetAccessMethod)
{
if (columnarRangeVar || *columnarOptions)
@ -2255,15 +2239,14 @@ ColumnarProcessAlterTable(AlterTableStmt *alterTableStmt, List **columnarOptions
"Specify SET ACCESS METHOD before storage parameters, or use separate ALTER TABLE commands.")));
}
destIsColumnar = (strcmp(alterTableCmd->name ? alterTableCmd->name :
default_table_access_method,
COLUMNAR_AM_NAME) == 0);
destIsColumnar = (strcmp(alterTableCmd->name, COLUMNAR_AM_NAME) == 0);
if (srcIsColumnar && !destIsColumnar)
{
DeleteColumnarTableOptions(RelationGetRelid(rel), true);
}
}
#endif /* PG_VERSION_15 */
}
relation_close(rel, NoLock);
@ -2647,12 +2630,21 @@ ColumnarCheckLogicalReplication(Relation rel)
return;
}
#if PG_VERSION_NUM >= PG_VERSION_15
{
PublicationDesc pubdesc;
RelationBuildPublicationDesc(rel, &pubdesc);
pubActionInsert = pubdesc.pubactions.pubinsert;
}
#else
if (rel->rd_pubactions == NULL)
{
GetRelationPublicationActions(rel);
Assert(rel->rd_pubactions != NULL);
}
pubActionInsert = rel->rd_pubactions->pubinsert;
#endif
if (pubActionInsert)
{
@ -3029,8 +3021,6 @@ AvailableExtensionVersionColumnar(void)
ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("citus extension is not found")));
return NULL; /* keep compiler happy */
}
@ -3093,7 +3083,7 @@ DefElem *
GetExtensionOption(List *extensionOptions, const char *defname)
{
DefElem *defElement = NULL;
foreach_declared_ptr(defElement, extensionOptions)
foreach_ptr(defElement, extensionOptions)
{
if (IsA(defElement, DefElem) &&
strncmp(defElement->defname, defname, NAMEDATALEN) == 0)


@ -29,12 +29,6 @@
#include "utils/rel.h"
#include "pg_version_compat.h"
#include "pg_version_constants.h"
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "storage/relfilelocator.h"
#include "utils/relfilenumbermap.h"
@ -42,6 +36,10 @@
#include "utils/relfilenodemap.h"
#endif
#include "columnar/columnar.h"
#include "columnar/columnar_storage.h"
#include "columnar/columnar_version_compat.h"
struct ColumnarWriteState
{
TupleDesc tupleDescriptor;


@ -1 +0,0 @@
-- citus_columnar--11.3-1--12.2-1


@ -1 +0,0 @@
-- citus_columnar--12.2-1--11.3-1


@ -18,7 +18,7 @@ generated_downgrade_sql_files += $(patsubst %,$(citus_abs_srcdir)/build/sql/%,$(
DATA_built = $(generated_sql_files)
# directories with source files
SUBDIRS = . commands connection ddl deparser executor metadata operations planner progress relay safeclib shardsplit stats test transaction utils worker clock
SUBDIRS = . commands connection ddl deparser executor metadata operations planner progress relay safeclib shardsplit test transaction utils worker clock
# enterprise modules
SUBDIRS += replication

(file diff suppressed because it is too large)


@ -14,6 +14,11 @@ override CPPFLAGS += -DDECODER=\"$(DECODER)\" -I$(citus_abs_top_srcdir)/include
install: install-cdc
clean: clean-cdc
install-cdc:
mkdir -p '$(citus_decoders_dir)'
$(INSTALL_SHLIB) citus_$(DECODER)$(DLSUFFIX) '$(citus_decoders_dir)/$(DECODER)$(DLSUFFIX)'
$(INSTALL_SHLIB) citus_$(DECODER).so '$(citus_decoders_dir)/$(DECODER).so'
clean-cdc:
rm -f '$(DESTDIR)$(datadir)/$(datamoduledir)/citus_decoders/$(DECODER).so'


@ -22,8 +22,6 @@
#include "utils/rel.h"
#include "utils/typcache.h"
#include "pg_version_constants.h"
PG_MODULE_MAGIC;
extern void _PG_output_plugin_init(OutputPluginCallbacks *cb);
@ -437,74 +435,6 @@ TranslateChangesIfSchemaChanged(Relation sourceRelation, Relation targetRelation
return;
}
#if PG_VERSION_NUM >= PG_VERSION_17
/* Check the ReorderBufferChange's action type and handle them accordingly.*/
switch (change->action)
{
case REORDER_BUFFER_CHANGE_INSERT:
{
/* For insert action, only new tuple should always be translated*/
HeapTuple sourceRelationNewTuple = change->data.tp.newtuple;
HeapTuple targetRelationNewTuple = GetTupleForTargetSchemaForCdc(
sourceRelationNewTuple, sourceRelationDesc, targetRelationDesc);
change->data.tp.newtuple = targetRelationNewTuple;
break;
}
/*
* For update changes both old and new tuples need to be translated for target relation
* if the REPLICA IDENTITY is set to FULL. Otherwise, only the new tuple needs to be
* translated for target relation.
*/
case REORDER_BUFFER_CHANGE_UPDATE:
{
/* For update action, new tuple should always be translated*/
/* Get the new tuple from the ReorderBufferChange, and translate it to target relation. */
HeapTuple sourceRelationNewTuple = change->data.tp.newtuple;
HeapTuple targetRelationNewTuple = GetTupleForTargetSchemaForCdc(
sourceRelationNewTuple, sourceRelationDesc, targetRelationDesc);
change->data.tp.newtuple = targetRelationNewTuple;
/*
* Format oldtuple according to the target relation. If the column values of replica
* identity change, then the old tuple is non-null and needs to be formatted according
* to the target relation schema.
*/
if (change->data.tp.oldtuple != NULL)
{
HeapTuple sourceRelationOldTuple = change->data.tp.oldtuple;
HeapTuple targetRelationOldTuple = GetTupleForTargetSchemaForCdc(
sourceRelationOldTuple,
sourceRelationDesc,
targetRelationDesc);
change->data.tp.oldtuple = targetRelationOldTuple;
}
break;
}
case REORDER_BUFFER_CHANGE_DELETE:
{
/* For delete action, only old tuple should be translated*/
HeapTuple sourceRelationOldTuple = change->data.tp.oldtuple;
HeapTuple targetRelationOldTuple = GetTupleForTargetSchemaForCdc(
sourceRelationOldTuple,
sourceRelationDesc,
targetRelationDesc);
change->data.tp.oldtuple = targetRelationOldTuple;
break;
}
default:
{
/* Do nothing for other action types. */
break;
}
}
#else
/* Check the ReorderBufferChange's action type and handle them accordingly.*/
switch (change->action)
{
@ -569,5 +499,4 @@ TranslateChangesIfSchemaChanged(Relation sourceRelation, Relation targetRelation
break;
}
}
#endif
}
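GetTupleForTargetSchemaForCdc() itself is not shown in this diff. Purely as an illustration of the translation the comments above describe, and not Citus's actual implementation, a heap tuple can be rebuilt against a different tuple descriptor by deforming and re-forming it, assuming attributes map positionally:

#include "postgres.h"
#include "access/htup_details.h"

static HeapTuple
TranslateTupleSketch(HeapTuple sourceTuple, TupleDesc sourceDesc,
					 TupleDesc targetDesc)
{
	int maxAttributeCount = Max(sourceDesc->natts, targetDesc->natts);
	Datum *values = palloc0(maxAttributeCount * sizeof(Datum));
	bool *nulls = palloc(maxAttributeCount * sizeof(bool));

	/* attributes the source tuple does not provide become NULLs */
	memset(nulls, true, maxAttributeCount * sizeof(bool));

	/* unpack the source tuple, then re-pack it per the target descriptor */
	heap_deform_tuple(sourceTuple, sourceDesc, values, nulls);
	return heap_form_tuple(targetDesc, values, nulls);
}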


@ -1,6 +1,6 @@
# Citus extension
comment = 'Citus distributed database'
default_version = '13.2-1'
default_version = '12.1-1'
module_pathname = '$libdir/citus'
relocatable = false
schema = pg_catalog


@ -145,6 +145,17 @@ LogicalClockShmemSize(void)
void
InitializeClusterClockMem(void)
{
/* On PG 15 and above, we use shmem_request_hook_type */
#if PG_VERSION_NUM < PG_VERSION_15
/* allocate shared memory for pre PG-15 versions */
if (!IsUnderPostmaster)
{
RequestAddinShmemSpace(LogicalClockShmemSize());
}
#endif
prev_shmem_startup_hook = shmem_startup_hook;
shmem_startup_hook = LogicalClockShmemInit;
}
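For context on the #if above: up to PostgreSQL 14, an extension could call RequestAddinShmemSpace() directly during startup, while PostgreSQL 15 moved that call into a dedicated shmem_request_hook registered from _PG_init(). A minimal sketch of the PG15+ side of the pattern (the function name is illustrative; the hook type is the real PostgreSQL API):

#include "postgres.h"
#include "storage/ipc.h"

static shmem_request_hook_type prev_shmem_request_hook = NULL;

static void
ClusterClockShmemRequest(void)
{
	if (prev_shmem_request_hook != NULL)
	{
		prev_shmem_request_hook();
	}

	RequestAddinShmemSpace(LogicalClockShmemSize());
}

/* in _PG_init(): prev_shmem_request_hook = shmem_request_hook; */
/*                shmem_request_hook = ClusterClockShmemRequest; */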
@ -317,7 +328,7 @@ GetHighestClockInTransaction(List *nodeConnectionList)
{
MultiConnection *connection = NULL;
foreach_declared_ptr(connection, nodeConnectionList)
foreach_ptr(connection, nodeConnectionList)
{
int querySent =
SendRemoteCommand(connection, "SELECT citus_get_node_clock();");
@ -338,7 +349,7 @@ GetHighestClockInTransaction(List *nodeConnectionList)
globalClockValue->counter)));
/* fetch the results and pick the highest clock value of all the nodes */
foreach_declared_ptr(connection, nodeConnectionList)
foreach_ptr(connection, nodeConnectionList)
{
bool raiseInterrupts = true;
@ -386,7 +397,7 @@ AdjustClocksToTransactionHighest(List *nodeConnectionList,
/* Set the clock value on participating worker nodes */
appendStringInfo(queryToSend,
"SELECT citus_internal.adjust_local_clock_to_remote"
"SELECT pg_catalog.citus_internal_adjust_local_clock_to_remote"
"('(%lu, %u)'::pg_catalog.cluster_clock);",
transactionClockValue->logical, transactionClockValue->counter);
@ -420,11 +431,6 @@ PrepareAndSetTransactionClock(void)
MultiConnection *connection = dlist_container(MultiConnection, transactionNode,
iter.cur);
WorkerNode *workerNode = FindWorkerNode(connection->hostname, connection->port);
if (!workerNode)
{
ereport(WARNING, errmsg("Worker node is missing"));
continue;
}
/* Skip the node if we already in the list */
if (list_member_int(nodeList, workerNode->groupId))


@ -209,9 +209,12 @@ static void ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommand
static bool HasAnyGeneratedStoredColumns(Oid relationId);
static List * GetNonGeneratedStoredColumnNameList(Oid relationId);
static void CheckAlterDistributedTableConversionParameters(TableConversionState *con);
static char * CreateWorkerChangeSequenceDependencyCommand(char *qualifiedSequeceName,
char *qualifiedSourceName,
char *qualifiedTargetName);
static char * CreateWorkerChangeSequenceDependencyCommand(char *sequenceSchemaName,
char *sequenceName,
char *sourceSchemaName,
char *sourceName,
char *targetSchemaName,
char *targetName);
static void ErrorIfMatViewSizeExceedsTheLimit(Oid matViewOid);
static char * CreateMaterializedViewDDLCommand(Oid matViewOid);
static char * GetAccessMethodForMatViewIfExists(Oid viewOid);
@ -414,7 +417,7 @@ UndistributeTables(List *relationIdList)
*/
List *originalForeignKeyRecreationCommands = NIL;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
List *fkeyCommandsForRelation =
GetFKeyCreationCommandsRelationInvolvedWithTableType(relationId,
@ -788,21 +791,19 @@ ConvertTableInternal(TableConversionState *con)
justBeforeDropCommands = lappend(justBeforeDropCommands, detachFromParentCommand);
}
char *qualifiedRelationName = quote_qualified_identifier(con->schemaName,
con->relationName);
if (PartitionedTable(con->relationId))
{
if (!con->suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("converting the partitions of %s",
qualifiedRelationName)));
quote_qualified_identifier(con->schemaName,
con->relationName))));
}
List *partitionList = PartitionList(con->relationId);
Oid partitionRelationId = InvalidOid;
foreach_declared_oid(partitionRelationId, partitionList)
foreach_oid(partitionRelationId, partitionList)
{
char *tableQualifiedName = generate_qualified_relation_name(
partitionRelationId);
@ -869,11 +870,13 @@ ConvertTableInternal(TableConversionState *con)
if (!con->suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("creating a new table for %s", qualifiedRelationName)));
ereport(NOTICE, (errmsg("creating a new table for %s",
quote_qualified_identifier(con->schemaName,
con->relationName))));
}
TableDDLCommand *tableCreationCommand = NULL;
foreach_declared_ptr(tableCreationCommand, preLoadCommands)
foreach_ptr(tableCreationCommand, preLoadCommands)
{
Assert(CitusIsA(tableCreationCommand, TableDDLCommand));
@ -947,7 +950,7 @@ ConvertTableInternal(TableConversionState *con)
con->suppressNoticeMessages);
TableDDLCommand *tableConstructionCommand = NULL;
foreach_declared_ptr(tableConstructionCommand, postLoadCommands)
foreach_ptr(tableConstructionCommand, postLoadCommands)
{
Assert(CitusIsA(tableConstructionCommand, TableDDLCommand));
char *tableConstructionSQL = GetTableDDLCommand(tableConstructionCommand);
@ -965,7 +968,7 @@ ConvertTableInternal(TableConversionState *con)
MemoryContext oldContext = MemoryContextSwitchTo(citusPerPartitionContext);
char *attachPartitionCommand = NULL;
foreach_declared_ptr(attachPartitionCommand, attachPartitionCommands)
foreach_ptr(attachPartitionCommand, attachPartitionCommands)
{
MemoryContextReset(citusPerPartitionContext);
@ -990,12 +993,14 @@ ConvertTableInternal(TableConversionState *con)
/* For now we only support cascade to colocation for alter_distributed_table UDF */
Assert(con->conversionType == ALTER_DISTRIBUTED_TABLE);
foreach_declared_oid(colocatedTableId, con->colocatedTableList)
foreach_oid(colocatedTableId, con->colocatedTableList)
{
if (colocatedTableId == con->relationId)
{
continue;
}
char *qualifiedRelationName = quote_qualified_identifier(con->schemaName,
con->relationName);
TableConversionParameters cascadeParam = {
.relationId = colocatedTableId,
@ -1018,7 +1023,7 @@ ConvertTableInternal(TableConversionState *con)
if (con->cascadeToColocated != CASCADE_TO_COLOCATED_NO_ALREADY_CASCADED)
{
char *foreignKeyCommand = NULL;
foreach_declared_ptr(foreignKeyCommand, foreignKeyCommands)
foreach_ptr(foreignKeyCommand, foreignKeyCommands)
{
ExecuteQueryViaSPI(foreignKeyCommand, SPI_OK_UTILITY);
}
@ -1054,7 +1059,7 @@ CopyTableConversionReturnIntoCurrentContext(TableConversionReturn *tableConversi
tableConversionReturnCopy = palloc0(sizeof(TableConversionReturn));
List *copyForeignKeyCommands = NIL;
char *foreignKeyCommand = NULL;
foreach_declared_ptr(foreignKeyCommand, tableConversionReturn->foreignKeyCommands)
foreach_ptr(foreignKeyCommand, tableConversionReturn->foreignKeyCommands)
{
char *copyForeignKeyCommand = MemoryContextStrdup(CurrentMemoryContext,
foreignKeyCommand);
@ -1129,7 +1134,7 @@ DropIndexesNotSupportedByColumnar(Oid relationId, bool suppressNoticeMessages)
RelationClose(columnarRelation);
Oid indexId = InvalidOid;
foreach_declared_oid(indexId, indexIdList)
foreach_oid(indexId, indexIdList)
{
char *indexAmName = GetIndexAccessMethodName(indexId);
if (extern_ColumnarSupportsIndexAM(indexAmName))
@ -1389,7 +1394,7 @@ CreateTableConversion(TableConversionParameters *params)
* since they will be handled separately.
*/
Oid colocatedTableId = InvalidOid;
foreach_declared_oid(colocatedTableId, colocatedTableList)
foreach_oid(colocatedTableId, colocatedTableList)
{
if (PartitionTable(colocatedTableId))
{
@ -1605,7 +1610,7 @@ DoesCascadeDropUnsupportedObject(Oid classId, Oid objectId, HTAB *nodeMap)
targetObjectId);
HeapTuple depTup = NULL;
foreach_declared_ptr(depTup, dependencyTupleList)
foreach_ptr(depTup, dependencyTupleList)
{
Form_pg_depend pg_depend = (Form_pg_depend) GETSTRUCT(depTup);
@ -1645,7 +1650,7 @@ GetViewCreationCommandsOfTable(Oid relationId)
List *commands = NIL;
Oid viewOid = InvalidOid;
foreach_declared_oid(viewOid, views)
foreach_oid(viewOid, views)
{
StringInfo query = makeStringInfo();
@ -1683,7 +1688,7 @@ WrapTableDDLCommands(List *commandStrings)
List *tableDDLCommands = NIL;
char *command = NULL;
foreach_declared_ptr(command, commandStrings)
foreach_ptr(command, commandStrings)
{
tableDDLCommands = lappend(tableDDLCommands, makeTableDDLCommandString(command));
}
@ -1745,7 +1750,9 @@ CreateMaterializedViewDDLCommand(Oid matViewOid)
{
StringInfo query = makeStringInfo();
char *qualifiedViewName = generate_qualified_relation_name(matViewOid);
char *viewName = get_rel_name(matViewOid);
char *schemaName = get_namespace_name(get_rel_namespace(matViewOid));
char *qualifiedViewName = quote_qualified_identifier(schemaName, viewName);
/* here we need to get the access method of the view to recreate it */
char *accessMethodName = GetAccessMethodForMatViewIfExists(matViewOid);
@ -1794,8 +1801,9 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
bool suppressNoticeMessages)
{
char *sourceName = get_rel_name(sourceId);
char *qualifiedSourceName = generate_qualified_relation_name(sourceId);
char *qualifiedTargetName = generate_qualified_relation_name(targetId);
char *targetName = get_rel_name(targetId);
Oid schemaId = get_rel_namespace(sourceId);
char *schemaName = get_namespace_name(schemaId);
StringInfo query = makeStringInfo();
@ -1803,7 +1811,8 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
{
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("moving the data of %s", qualifiedSourceName)));
ereport(NOTICE, (errmsg("moving the data of %s",
quote_qualified_identifier(schemaName, sourceName))));
}
if (!HasAnyGeneratedStoredColumns(sourceId))
@ -1813,7 +1822,8 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
* "INSERT INTO .. SELECT *"".
*/
appendStringInfo(query, "INSERT INTO %s SELECT * FROM %s",
qualifiedTargetName, qualifiedSourceName);
quote_qualified_identifier(schemaName, targetName),
quote_qualified_identifier(schemaName, sourceName));
}
else
{
@ -1828,8 +1838,9 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
char *insertColumnString = StringJoin(nonStoredColumnNameList, ',');
appendStringInfo(query,
"INSERT INTO %s (%s) OVERRIDING SYSTEM VALUE SELECT %s FROM %s",
qualifiedTargetName, insertColumnString,
insertColumnString, qualifiedSourceName);
quote_qualified_identifier(schemaName, targetName),
insertColumnString, insertColumnString,
quote_qualified_identifier(schemaName, sourceName));
}
ExecuteQueryViaSPI(query->data, SPI_OK_INSERT);
@ -1840,7 +1851,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
*/
List *ownedSequences = getOwnedSequences_internal(sourceId, 0, DEPENDENCY_AUTO);
Oid sequenceOid = InvalidOid;
foreach_declared_oid(sequenceOid, ownedSequences)
foreach_oid(sequenceOid, ownedSequences)
{
changeDependencyFor(RelationRelationId, sequenceOid,
RelationRelationId, sourceId, targetId);
@ -1853,11 +1864,14 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
*/
if (ShouldSyncTableMetadata(targetId))
{
char *qualifiedSequenceName = generate_qualified_relation_name(sequenceOid);
Oid sequenceSchemaOid = get_rel_namespace(sequenceOid);
char *sequenceSchemaName = get_namespace_name(sequenceSchemaOid);
char *sequenceName = get_rel_name(sequenceOid);
char *workerChangeSequenceDependencyCommand =
CreateWorkerChangeSequenceDependencyCommand(qualifiedSequenceName,
qualifiedSourceName,
qualifiedTargetName);
CreateWorkerChangeSequenceDependencyCommand(sequenceSchemaName,
sequenceName,
schemaName, sourceName,
schemaName, targetName);
SendCommandToWorkersWithMetadata(workerChangeSequenceDependencyCommand);
}
else if (ShouldSyncTableMetadata(sourceId))
@ -1873,30 +1887,32 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
}
char *justBeforeDropCommand = NULL;
foreach_declared_ptr(justBeforeDropCommand, justBeforeDropCommands)
foreach_ptr(justBeforeDropCommand, justBeforeDropCommands)
{
ExecuteQueryViaSPI(justBeforeDropCommand, SPI_OK_UTILITY);
}
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("dropping the old %s", qualifiedSourceName)));
ereport(NOTICE, (errmsg("dropping the old %s",
quote_qualified_identifier(schemaName, sourceName))));
}
resetStringInfo(query);
appendStringInfo(query, "DROP %sTABLE %s CASCADE",
IsForeignTable(sourceId) ? "FOREIGN " : "",
qualifiedSourceName);
quote_qualified_identifier(schemaName, sourceName));
ExecuteQueryViaSPI(query->data, SPI_OK_UTILITY);
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("renaming the new table to %s", qualifiedSourceName)));
ereport(NOTICE, (errmsg("renaming the new table to %s",
quote_qualified_identifier(schemaName, sourceName))));
}
resetStringInfo(query);
appendStringInfo(query, "ALTER TABLE %s RENAME TO %s",
qualifiedTargetName,
quote_qualified_identifier(schemaName, targetName),
quote_identifier(sourceName));
ExecuteQueryViaSPI(query->data, SPI_OK_UTILITY);
}
@ -1987,7 +2003,7 @@ CheckAlterDistributedTableConversionParameters(TableConversionState *con)
Oid colocatedTableOid = InvalidOid;
text *colocateWithText = cstring_to_text(con->colocateWith);
Oid colocateWithTableOid = ResolveRelationId(colocateWithText, false);
foreach_declared_oid(colocatedTableOid, con->colocatedTableList)
foreach_oid(colocatedTableOid, con->colocatedTableList)
{
if (colocateWithTableOid == colocatedTableOid)
{
@ -2156,13 +2172,18 @@ CheckAlterDistributedTableConversionParameters(TableConversionState *con)
* worker_change_sequence_dependency query with the parameters.
*/
static char *
CreateWorkerChangeSequenceDependencyCommand(char *qualifiedSequeceName,
char *qualifiedSourceName,
char *qualifiedTargetName)
CreateWorkerChangeSequenceDependencyCommand(char *sequenceSchemaName, char *sequenceName,
char *sourceSchemaName, char *sourceName,
char *targetSchemaName, char *targetName)
{
char *qualifiedSchemaName = quote_qualified_identifier(sequenceSchemaName,
sequenceName);
char *qualifiedSourceName = quote_qualified_identifier(sourceSchemaName, sourceName);
char *qualifiedTargetName = quote_qualified_identifier(targetSchemaName, targetName);
StringInfo query = makeStringInfo();
appendStringInfo(query, "SELECT worker_change_sequence_dependency(%s, %s, %s)",
quote_literal_cstr(qualifiedSequeceName),
quote_literal_cstr(qualifiedSchemaName),
quote_literal_cstr(qualifiedSourceName),
quote_literal_cstr(qualifiedTargetName));
@ -2214,7 +2235,7 @@ WillRecreateForeignKeyToReferenceTable(Oid relationId,
{
List *colocatedTableList = ColocatedTableList(relationId);
Oid colocatedTableOid = InvalidOid;
foreach_declared_oid(colocatedTableOid, colocatedTableList)
foreach_oid(colocatedTableOid, colocatedTableList)
{
if (HasForeignKeyToReferenceTable(colocatedTableOid))
{
@ -2242,7 +2263,7 @@ WarningsForDroppingForeignKeysWithDistributedTables(Oid relationId)
List *foreignKeys = list_concat(referencingForeingKeys, referencedForeignKeys);
Oid foreignKeyOid = InvalidOid;
foreach_declared_oid(foreignKeyOid, foreignKeys)
foreach_oid(foreignKeyOid, foreignKeys)
{
ereport(WARNING, (errmsg("foreign key %s will be dropped",
get_constraint_name(foreignKeyOid))));
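A pattern that recurs throughout this file's backport: calls to generate_qualified_relation_name() are replaced by explicit catalog lookups plus quote_qualified_identifier(). The inlined triple is equivalent to a helper like the following sketch (the helper name is hypothetical; the calls are standard PostgreSQL APIs):

#include "postgres.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

static char *
QualifiedRelationNameSketch(Oid relationId)
{
	char *relationName = get_rel_name(relationId);
	char *schemaName = get_namespace_name(get_rel_namespace(relationId));

	/* quotes each part only when quoting is required */
	return quote_qualified_identifier(schemaName, relationName);
}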


@ -33,7 +33,7 @@ SaveBeginCommandProperties(TransactionStmt *transactionStmt)
*
* While BEGIN can be quite frequent it will rarely have options set.
*/
foreach_declared_ptr(item, transactionStmt->options)
foreach_ptr(item, transactionStmt->options)
{
A_Const *constant = (A_Const *) item->arg;


@ -168,7 +168,7 @@ GetPartitionRelationIds(List *relationIdList)
List *partitionRelationIdList = NIL;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (PartitionTable(relationId))
{
@ -189,7 +189,7 @@ LockRelationsWithLockMode(List *relationIdList, LOCKMODE lockMode)
{
Oid relationId;
relationIdList = SortList(relationIdList, CompareOids);
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
LockRelationOid(relationId, lockMode);
}
@ -207,7 +207,7 @@ static void
ErrorIfConvertingMultiLevelPartitionedTable(List *relationIdList)
{
Oid relationId;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (PartitionedTable(relationId) && PartitionTable(relationId))
{
@ -236,7 +236,7 @@ void
ErrorIfAnyPartitionRelationInvolvedInNonInheritedFKey(List *relationIdList)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (!PartitionTable(relationId))
{
@ -300,7 +300,7 @@ bool
RelationIdListHasReferenceTable(List *relationIdList)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (IsCitusTableType(relationId, REFERENCE_TABLE))
{
@ -322,7 +322,7 @@ GetFKeyCreationCommandsForRelationIdList(List *relationIdList)
List *fKeyCreationCommands = NIL;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
List *relationFKeyCreationCommands =
GetReferencingForeignConstaintCommands(relationId);
@ -342,7 +342,7 @@ static void
DropRelationIdListForeignKeys(List *relationIdList, int fKeyFlags)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
DropRelationForeignKeys(relationId, fKeyFlags);
}
@ -399,7 +399,7 @@ GetRelationDropFkeyCommands(Oid relationId, int fKeyFlags)
List *relationFKeyIdList = GetForeignKeyOids(relationId, fKeyFlags);
Oid foreignKeyId;
foreach_declared_oid(foreignKeyId, relationFKeyIdList)
foreach_oid(foreignKeyId, relationFKeyIdList)
{
char *dropFkeyCascadeCommand = GetDropFkeyCascadeCommand(foreignKeyId);
dropFkeyCascadeCommandList = lappend(dropFkeyCascadeCommandList,
@ -450,7 +450,7 @@ ExecuteCascadeOperationForRelationIdList(List *relationIdList,
cascadeOperationType)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
/*
* The reason behind skipping certain table types in below loop is
@ -531,7 +531,7 @@ ExecuteAndLogUtilityCommandListInTableTypeConversionViaSPI(List *utilityCommandL
PG_TRY();
{
char *utilityCommand = NULL;
foreach_declared_ptr(utilityCommand, utilityCommandList)
foreach_ptr(utilityCommand, utilityCommandList)
{
/*
* CREATE MATERIALIZED VIEW commands need to be parsed/transformed,
@ -569,7 +569,7 @@ void
ExecuteAndLogUtilityCommandList(List *utilityCommandList)
{
char *utilityCommand = NULL;
foreach_declared_ptr(utilityCommand, utilityCommandList)
foreach_ptr(utilityCommand, utilityCommandList)
{
ExecuteAndLogUtilityCommand(utilityCommand);
}
@ -597,7 +597,7 @@ void
ExecuteForeignKeyCreateCommandList(List *ddlCommandList, bool skip_validation)
{
char *ddlCommand = NULL;
foreach_declared_ptr(ddlCommand, ddlCommandList)
foreach_ptr(ddlCommand, ddlCommandList)
{
ExecuteForeignKeyCreateCommand(ddlCommand, skip_validation);
}


@ -588,7 +588,7 @@ ErrorIfOptionListHasNoTableName(List *optionList)
{
char *table_nameString = "table_name";
DefElem *option = NULL;
foreach_declared_ptr(option, optionList)
foreach_ptr(option, optionList)
{
char *optionName = option->defname;
if (strcmp(optionName, table_nameString) == 0)
@ -613,7 +613,7 @@ ForeignTableDropsTableNameOption(List *optionList)
{
char *table_nameString = "table_name";
DefElem *option = NULL;
foreach_declared_ptr(option, optionList)
foreach_ptr(option, optionList)
{
char *optionName = option->defname;
DefElemAction optionAction = option->defaction;
@ -732,7 +732,7 @@ UpdateAutoConvertedForConnectedRelations(List *relationIds, bool autoConverted)
List *relationIdList = NIL;
Oid relid = InvalidOid;
foreach_declared_oid(relid, relationIds)
foreach_oid(relid, relationIds)
{
List *connectedRelations = GetForeignKeyConnectedRelationIdList(relid);
relationIdList = list_concat_unique_oid(relationIdList, connectedRelations);
@ -740,7 +740,7 @@ UpdateAutoConvertedForConnectedRelations(List *relationIds, bool autoConverted)
relationIdList = SortList(relationIdList, CompareOids);
foreach_declared_oid(relid, relationIdList)
foreach_oid(relid, relationIdList)
{
UpdatePgDistPartitionAutoConverted(relid, autoConverted);
}
@ -776,7 +776,7 @@ GetShellTableDDLEventsForCitusLocalTable(Oid relationId)
List *shellTableDDLEvents = NIL;
TableDDLCommand *tableDDLCommand = NULL;
foreach_declared_ptr(tableDDLCommand, tableDDLCommands)
foreach_ptr(tableDDLCommand, tableDDLCommands)
{
Assert(CitusIsA(tableDDLCommand, TableDDLCommand));
shellTableDDLEvents = lappend(shellTableDDLEvents,
@ -863,7 +863,7 @@ RenameShardRelationConstraints(Oid shardRelationId, uint64 shardId)
List *constraintNameList = GetConstraintNameList(shardRelationId);
char *constraintName = NULL;
foreach_declared_ptr(constraintName, constraintNameList)
foreach_ptr(constraintName, constraintNameList)
{
const char *commandString =
GetRenameShardConstraintCommand(shardRelationId, constraintName, shardId);
@ -958,7 +958,7 @@ RenameShardRelationIndexes(Oid shardRelationId, uint64 shardId)
List *indexOidList = GetExplicitIndexOidList(shardRelationId);
Oid indexOid = InvalidOid;
foreach_declared_oid(indexOid, indexOidList)
foreach_oid(indexOid, indexOidList)
{
const char *commandString = GetRenameShardIndexCommand(indexOid, shardId);
ExecuteAndLogUtilityCommand(commandString);
@ -1008,7 +1008,7 @@ RenameShardRelationStatistics(Oid shardRelationId, uint64 shardId)
List *statsCommandList = GetRenameStatsCommandList(statsOidList, shardId);
char *command = NULL;
foreach_declared_ptr(command, statsCommandList)
foreach_ptr(command, statsCommandList)
{
ExecuteAndLogUtilityCommand(command);
}
@ -1044,7 +1044,7 @@ RenameShardRelationNonTruncateTriggers(Oid shardRelationId, uint64 shardId)
List *triggerIdList = GetExplicitTriggerIdList(shardRelationId);
Oid triggerId = InvalidOid;
foreach_declared_oid(triggerId, triggerIdList)
foreach_oid(triggerId, triggerIdList)
{
bool missingOk = false;
HeapTuple triggerTuple = GetTriggerTupleById(triggerId, missingOk);
@ -1097,7 +1097,7 @@ DropRelationTruncateTriggers(Oid relationId)
List *triggerIdList = GetExplicitTriggerIdList(relationId);
Oid triggerId = InvalidOid;
foreach_declared_oid(triggerId, triggerIdList)
foreach_oid(triggerId, triggerIdList)
{
bool missingOk = false;
HeapTuple triggerTuple = GetTriggerTupleById(triggerId, missingOk);
@ -1160,7 +1160,9 @@ DropIdentitiesOnTable(Oid relationId)
if (attributeForm->attidentity)
{
char *qualifiedTableName = generate_qualified_relation_name(relationId);
char *tableName = get_rel_name(relationId);
char *schemaName = get_namespace_name(get_rel_namespace(relationId));
char *qualifiedTableName = quote_qualified_identifier(schemaName, tableName);
StringInfo dropCommand = makeStringInfo();
@ -1175,7 +1177,7 @@ DropIdentitiesOnTable(Oid relationId)
relation_close(relation, NoLock);
char *dropCommand = NULL;
foreach_declared_ptr(dropCommand, dropCommandList)
foreach_ptr(dropCommand, dropCommandList)
{
/*
* We need to disable/enable ddl propagation for this command, to prevent
@ -1218,9 +1220,11 @@ DropViewsOnTable(Oid relationId)
List *reverseOrderedViews = ReversedOidList(views);
Oid viewId = InvalidOid;
foreach_declared_oid(viewId, reverseOrderedViews)
foreach_oid(viewId, reverseOrderedViews)
{
char *qualifiedViewName = generate_qualified_relation_name(viewId);
char *viewName = get_rel_name(viewId);
char *schemaName = get_namespace_name(get_rel_namespace(viewId));
char *qualifiedViewName = quote_qualified_identifier(schemaName, viewName);
StringInfo dropCommand = makeStringInfo();
appendStringInfo(dropCommand, "DROP %sVIEW IF EXISTS %s",
@ -1241,7 +1245,7 @@ ReversedOidList(List *oidList)
{
List *reversed = NIL;
Oid oid = InvalidOid;
foreach_declared_oid(oid, oidList)
foreach_oid(oid, oidList)
{
reversed = lcons_oid(oid, reversed);
}
@ -1293,7 +1297,7 @@ GetRenameStatsCommandList(List *statsOidList, uint64 shardId)
{
List *statsCommandList = NIL;
Oid statsOid;
foreach_declared_oid(statsOid, statsOidList)
foreach_oid(statsOid, statsOidList)
{
HeapTuple tup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(statsOid));


@ -115,7 +115,7 @@ static bool
IsClusterStmtVerbose_compat(ClusterStmt *clusterStmt)
{
DefElem *opt = NULL;
foreach_declared_ptr(opt, clusterStmt->params)
foreach_ptr(opt, clusterStmt->params)
{
if (strcmp(opt->defname, "verbose") == 0)
{


@ -68,6 +68,8 @@ CreateCollationDDLInternal(Oid collationId, Oid *collowner, char **quotedCollati
char *collcollate;
char *collctype;
#if PG_VERSION_NUM >= PG_VERSION_15
/*
* In PG15, there is an added option to use ICU as global locale provider.
* pg_collation has three locale-related fields: collcollate and collctype,
@ -75,7 +77,7 @@ CreateCollationDDLInternal(Oid collationId, Oid *collowner, char **quotedCollati
* ICU-related field. Only the libc-related fields or the ICU-related field
* is set, never both.
*/
char *colllocale;
char *colliculocale;
bool isnull;
Datum datum = SysCacheGetAttr(COLLOID, heapTuple, Anum_pg_collation_collcollate,
@ -99,17 +101,27 @@ CreateCollationDDLInternal(Oid collationId, Oid *collowner, char **quotedCollati
collctype = NULL;
}
datum = SysCacheGetAttr(COLLOID, heapTuple, Anum_pg_collation_colllocale, &isnull);
datum = SysCacheGetAttr(COLLOID, heapTuple, Anum_pg_collation_colliculocale, &isnull);
if (!isnull)
{
colllocale = TextDatumGetCString(datum);
colliculocale = TextDatumGetCString(datum);
}
else
{
colllocale = NULL;
colliculocale = NULL;
}
Assert((collcollate && collctype) || colllocale);
Assert((collcollate && collctype) || colliculocale);
#else
/*
* In versions before 15, collcollate and collctype were type "name". Use
* pstrdup() to match the interface of 15 so that we consistently free the
* result later.
*/
collcollate = pstrdup(NameStr(collationForm->collcollate));
collctype = pstrdup(NameStr(collationForm->collctype));
#endif
if (collowner != NULL)
{
@ -120,7 +132,6 @@ CreateCollationDDLInternal(Oid collationId, Oid *collowner, char **quotedCollati
char *schemaName = get_namespace_name(collnamespace);
*quotedCollationName = quote_qualified_identifier(schemaName, collname);
const char *providerString =
collprovider == COLLPROVIDER_BUILTIN ? "builtin" :
collprovider == COLLPROVIDER_DEFAULT ? "default" :
collprovider == COLLPROVIDER_ICU ? "icu" :
collprovider == COLLPROVIDER_LIBC ? "libc" : NULL;
@ -135,12 +146,13 @@ CreateCollationDDLInternal(Oid collationId, Oid *collowner, char **quotedCollati
"CREATE COLLATION %s (provider = '%s'",
*quotedCollationName, providerString);
if (colllocale)
#if PG_VERSION_NUM >= PG_VERSION_15
if (colliculocale)
{
appendStringInfo(&collationNameDef,
", locale = %s",
quote_literal_cstr(colllocale));
pfree(colllocale);
quote_literal_cstr(colliculocale));
pfree(colliculocale);
}
else
{
@ -160,7 +172,24 @@ CreateCollationDDLInternal(Oid collationId, Oid *collowner, char **quotedCollati
pfree(collcollate);
pfree(collctype);
}
#else
if (strcmp(collcollate, collctype) == 0)
{
appendStringInfo(&collationNameDef,
", locale = %s",
quote_literal_cstr(collcollate));
}
else
{
appendStringInfo(&collationNameDef,
", lc_collate = %s, lc_ctype = %s",
quote_literal_cstr(collcollate),
quote_literal_cstr(collctype));
}
pfree(collcollate);
pfree(collctype);
#endif
#if PG_VERSION_NUM >= PG_VERSION_16
char *collicurules = NULL;
datum = SysCacheGetAttr(COLLOID, heapTuple, Anum_pg_collation_collicurules, &isnull);


@ -1,131 +0,0 @@
/*-------------------------------------------------------------------------
*
* comment.c
* Commands to interact with the comments for all database
* object types.
*
* Copyright (c) Citus Data, Inc.
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "catalog/pg_shdescription.h"
#include "nodes/parsenodes.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/rel.h"
#include "distributed/comment.h"
static char * GetCommentForObject(Oid classOid, Oid objectOid);
List *
GetCommentPropagationCommands(Oid classOid, Oid objOoid, char *objectName, ObjectType
objectType)
{
List *commands = NIL;
StringInfo commentStmt = makeStringInfo();
/* Get the comment for the database */
char *comment = GetCommentForObject(classOid, objOoid);
char const *commentObjectType = ObjectTypeNames[objectType];
/* Create the SQL command to propagate the comment to other nodes */
if (comment != NULL)
{
appendStringInfo(commentStmt, "COMMENT ON %s %s IS %s;", commentObjectType,
quote_identifier(objectName),
quote_literal_cstr(comment));
}
/* Add the command to the list */
if (commentStmt->len > 0)
{
commands = list_make1(commentStmt->data);
}
return commands;
}
static char *
GetCommentForObject(Oid classOid, Oid objectOid)
{
HeapTuple tuple;
char *comment = NULL;
/* Open pg_shdescription catalog */
Relation shdescRelation = table_open(SharedDescriptionRelationId, AccessShareLock);
/* Scan the table */
ScanKeyData scanKey[2];
ScanKeyInit(&scanKey[0],
Anum_pg_shdescription_objoid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(objectOid));
ScanKeyInit(&scanKey[1],
Anum_pg_shdescription_classoid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(classOid));
bool indexOk = true;
int scanKeyCount = 2;
SysScanDesc scan = systable_beginscan(shdescRelation, SharedDescriptionObjIndexId,
indexOk, NULL, scanKeyCount,
scanKey);
if ((tuple = systable_getnext(scan)) != NULL)
{
bool isNull = false;
TupleDesc tupdesc = RelationGetDescr(shdescRelation);
Datum descDatum = heap_getattr(tuple, Anum_pg_shdescription_description, tupdesc,
&isNull);
/* Add the command to the list */
if (!isNull)
{
comment = TextDatumGetCString(descDatum);
}
else
{
comment = NULL;
}
}
/* End the scan and close the catalog */
systable_endscan(scan);
table_close(shdescRelation, AccessShareLock);
return comment;
}
/*
* CommentObjectAddress resolves the ObjectAddress for the object
* on which the comment is placed. Optionally errors if the object does not
* exist based on the missing_ok flag passed in by the caller.
*/
List *
CommentObjectAddress(Node *node, bool missing_ok, bool isPostprocess)
{
CommentStmt *stmt = castNode(CommentStmt, node);
Relation relation;
ObjectAddress objectAddress = get_object_address(stmt->objtype, stmt->object,
&relation, AccessExclusiveLock,
missing_ok);
ObjectAddress *objectAddressCopy = palloc0(sizeof(ObjectAddress));
*objectAddressCopy = objectAddress;
return list_make1(objectAddressCopy);
}
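For reference, a hypothetical call site for the removed GetCommentPropagationCommands() above; DatabaseRelationId, MyDatabaseId, and OBJECT_DATABASE are real PostgreSQL symbols, while the wrapper itself is illustrative:

#include "postgres.h"
#include "miscadmin.h"
#include "catalog/pg_database_d.h"
#include "nodes/parsenodes.h"
#include "distributed/comment.h"

static List *
DatabaseCommentCommandsSketch(char *databaseName)
{
	/* yields e.g. a one-element list with "COMMENT ON DATABASE ... IS '...';" */
	return GetCommentPropagationCommands(DatabaseRelationId, MyDatabaseId,
										 databaseName, OBJECT_DATABASE);
}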


@ -235,7 +235,7 @@ PreprocessDropDistributedObjectStmt(Node *node, const char *queryString,
List *distributedObjects = NIL;
List *distributedObjectAddresses = NIL;
Node *object = NULL;
foreach_declared_ptr(object, stmt->objects)
foreach_ptr(object, stmt->objects)
{
/* TODO understand if the lock should be sth else */
Relation rel = NULL; /* not used, but required to pass to get_object_address */
@ -267,7 +267,7 @@ PreprocessDropDistributedObjectStmt(Node *node, const char *queryString,
* remove the entries for the distributed objects on dropping
*/
ObjectAddress *address = NULL;
foreach_declared_ptr(address, distributedObjectAddresses)
foreach_ptr(address, distributedObjectAddresses)
{
UnmarkObjectDistributed(address);
}
@ -303,7 +303,7 @@ DropTextSearchDictObjectAddress(Node *node, bool missing_ok, bool isPostprocess)
List *objectAddresses = NIL;
List *objNameList = NIL;
foreach_declared_ptr(objNameList, stmt->objects)
foreach_ptr(objNameList, stmt->objects)
{
Oid tsdictOid = get_ts_dict_oid(objNameList, missing_ok);
@ -328,7 +328,7 @@ DropTextSearchConfigObjectAddress(Node *node, bool missing_ok, bool isPostproces
List *objectAddresses = NIL;
List *objNameList = NIL;
foreach_declared_ptr(objNameList, stmt->objects)
foreach_ptr(objNameList, stmt->objects)
{
Oid tsconfigOid = get_ts_config_oid(objNameList, missing_ok);


@ -170,10 +170,12 @@ static void EnsureDistributedSequencesHaveOneType(Oid relationId,
static void CopyLocalDataIntoShards(Oid distributedTableId);
static List * TupleDescColumnNameList(TupleDesc tupleDescriptor);
#if (PG_VERSION_NUM >= PG_VERSION_15)
static bool DistributionColumnUsesNumericColumnNegativeScale(TupleDesc relationDesc,
Var *distributionColumn);
static int numeric_typmod_scale(int32 typmod);
static bool is_valid_numeric_typmod(int32 typmod);
#endif
static bool DistributionColumnUsesGeneratedStoredColumn(TupleDesc relationDesc,
Var *distributionColumn);
@ -832,7 +834,7 @@ HashSplitPointsForShardList(List *shardList)
List *splitPointList = NIL;
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardList)
foreach_ptr(shardInterval, shardList)
{
int32 shardMaxValue = DatumGetInt32(shardInterval->maxValue);
@ -888,7 +890,7 @@ WorkerNodesForShardList(List *shardList)
List *nodeIdList = NIL;
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardList)
foreach_ptr(shardInterval, shardList)
{
WorkerNode *workerNode = ActiveShardPlacementWorkerNode(shardInterval->shardId);
nodeIdList = lappend_int(nodeIdList, workerNode->nodeId);
@ -1323,7 +1325,10 @@ CreateCitusTable(Oid relationId, CitusTableType tableType,
{
List *partitionList = PartitionList(relationId);
Oid partitionRelationId = InvalidOid;
char *parentRelationName = generate_qualified_relation_name(relationId);
Oid namespaceId = get_rel_namespace(relationId);
char *schemaName = get_namespace_name(namespaceId);
char *relationName = get_rel_name(relationId);
char *parentRelationName = quote_qualified_identifier(schemaName, relationName);
/*
* when there are many partitions, each call to CreateDistributedTable
@ -1335,7 +1340,7 @@ CreateCitusTable(Oid relationId, CitusTableType tableType,
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(citusPartitionContext);
foreach_declared_oid(partitionRelationId, partitionList)
foreach_oid(partitionRelationId, partitionList)
{
MemoryContextReset(citusPartitionContext);
@ -1549,7 +1554,7 @@ ConvertCitusLocalTableToTableType(Oid relationId, CitusTableType tableType,
MemoryContext oldContext = MemoryContextSwitchTo(citusPartitionContext);
Oid partitionRelationId = InvalidOid;
foreach_declared_oid(partitionRelationId, partitionList)
foreach_oid(partitionRelationId, partitionList)
{
MemoryContextReset(citusPartitionContext);
@ -1699,7 +1704,7 @@ EnsureSequenceTypeSupported(Oid seqOid, Oid attributeTypeId, Oid ownerRelationId
Oid attrDefOid;
List *attrDefOids = GetAttrDefsFromSequence(seqOid);
foreach_declared_oid(attrDefOid, attrDefOids)
foreach_oid(attrDefOid, attrDefOids)
{
ObjectAddress columnAddress = GetAttrDefaultColumnAddress(attrDefOid);
@ -1781,7 +1786,7 @@ static void
EnsureDistributedSequencesHaveOneType(Oid relationId, List *seqInfoList)
{
SequenceInfo *seqInfo = NULL;
foreach_declared_ptr(seqInfo, seqInfoList)
foreach_ptr(seqInfo, seqInfoList)
{
if (!seqInfo->isNextValDefault)
{
@ -2112,6 +2117,8 @@ EnsureRelationCanBeDistributed(Oid relationId, Var *distributionColumn,
"AS (...) STORED.")));
}
#if (PG_VERSION_NUM >= PG_VERSION_15)
/* verify target relation is not distributed by a column of type numeric with negative scale */
if (distributionMethod != DISTRIBUTE_BY_NONE &&
DistributionColumnUsesNumericColumnNegativeScale(relationDesc,
@ -2122,6 +2129,7 @@ EnsureRelationCanBeDistributed(Oid relationId, Var *distributionColumn,
errdetail("Distribution column must not use numeric type "
"with negative scale")));
}
#endif
/* check for support function needed by specified partition method */
if (distributionMethod == DISTRIBUTE_BY_HASH)
@ -2727,15 +2735,11 @@ CopyFromLocalTableIntoDistTable(Oid localTableId, Oid distributedTableId)
ExprContext *econtext = GetPerTupleExprContext(estate);
econtext->ecxt_scantuple = slot;
const bool nonPublishableData = false;
/* we don't track query counters when distributing a table */
const bool trackQueryCounters = false;
DestReceiver *copyDest =
(DestReceiver *) CreateCitusCopyDestReceiver(distributedTableId,
columnNameList,
partitionColumnIndex,
estate, NULL, nonPublishableData,
trackQueryCounters);
estate, NULL, nonPublishableData);
/* initialise state for writing to shards, we'll open connections on demand */
copyDest->rStartup(copyDest, 0, sourceTupleDescriptor);
@ -2843,6 +2847,8 @@ TupleDescColumnNameList(TupleDesc tupleDescriptor)
}
#if (PG_VERSION_NUM >= PG_VERSION_15)
/*
* is_valid_numeric_typmod checks if the typmod value is valid
*
@ -2892,6 +2898,8 @@ DistributionColumnUsesNumericColumnNegativeScale(TupleDesc relationDesc,
}
#endif
/*
* DistributionColumnUsesGeneratedStoredColumn returns whether a given relation uses
* GENERATED ALWAYS AS (...) STORED on distribution column


@ -13,97 +13,32 @@
#include "miscadmin.h"
#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/objectaddress.h"
#include "catalog/pg_collation.h"
#include "catalog/pg_database.h"
#include "catalog/pg_database_d.h"
#include "catalog/pg_tablespace.h"
#include "commands/dbcommands.h"
#include "commands/defrem.h"
#include "nodes/parsenodes.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
#include "utils/relcache.h"
#include "utils/syscache.h"
#include "distributed/adaptive_executor.h"
#include "distributed/commands.h"
#include "distributed/commands/serialize_distributed_ddls.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/comment.h"
#include "distributed/deparse_shard_query.h"
#include "distributed/deparser.h"
#include "distributed/listutils.h"
#include "distributed/local_executor.h"
#include "distributed/metadata/distobject.h"
#include "distributed/metadata_sync.h"
#include "distributed/metadata_utility.h"
#include "distributed/multi_executor.h"
#include "distributed/relation_access_tracking.h"
#include "distributed/shard_cleaner.h"
#include "distributed/worker_protocol.h"
#include "distributed/worker_transaction.h"
/*
* Used to save original name of the database before it is replaced with a
* temporary name for failure handling purposes in PreprocessCreateDatabaseStmt().
*/
static char *CreateDatabaseCommandOriginalDbName = NULL;
/*
* The format string used when creating temporary databases for failure
* handling purposes.
*
* The fields are as follows to ensure using a unique name for each temporary
* database:
* - operationId: The operation id returned by RegisterOperationNeedingCleanup().
* - groupId: The group id of the worker node where CREATE DATABASE command
* is issued from.
*/
#define TEMP_DATABASE_NAME_FMT "citus_temp_database_%lu_%d"
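/*
 * Illustration only (not part of the original file): assuming a uint64
 * operation id and an int group id as described above, the format string
 * expands like this:
 *
 *   char tempDatabaseName[NAMEDATALEN];
 *   snprintf(tempDatabaseName, NAMEDATALEN, TEMP_DATABASE_NAME_FMT,
 *            (unsigned long) operationId, groupId);
 *
 * e.g. operationId = 42 and groupId = 3 yield "citus_temp_database_42_3".
 */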
/*
* DatabaseCollationInfo is used to store collation related information of a database.
*/
typedef struct DatabaseCollationInfo
{
char *datcollate;
char *datctype;
char *daticulocale;
char *datcollversion;
#if PG_VERSION_NUM >= PG_VERSION_16
char *daticurules;
#endif
} DatabaseCollationInfo;
static char * GenerateCreateDatabaseStatementFromPgDatabase(Form_pg_database
databaseForm);
static DatabaseCollationInfo GetDatabaseCollation(Oid dbOid);
static AlterOwnerStmt * RecreateAlterDatabaseOwnerStmt(Oid databaseOid);
static char * GetLocaleProviderString(char datlocprovider);
static char * GetTablespaceName(Oid tablespaceOid);
static ObjectAddress * GetDatabaseAddressFromDatabaseName(char *databaseName,
bool missingOk);
static List * FilterDistributedDatabases(List *databases);
static Oid get_database_owner(Oid dbId);
static Oid get_database_owner(Oid db_oid);
List * PreprocessGrantOnDatabaseStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext);
/* controlled via GUC */
bool EnableCreateDatabasePropagation = false;
bool EnableAlterDatabaseOwner = true;
/*
* AlterDatabaseOwnerObjectAddress returns the ObjectAddress of the database that is the
* object of the AlterOwnerStmt. Errors if missing_ok is false.
@ -160,13 +95,13 @@ RecreateAlterDatabaseOwnerStmt(Oid databaseOid)
* get_database_owner returns the Oid of the role owning the database
*/
static Oid
get_database_owner(Oid dbId)
get_database_owner(Oid db_oid)
{
HeapTuple tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(dbId));
HeapTuple tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(db_oid));
if (!HeapTupleIsValid(tuple))
{
ereport(ERROR, (errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database with OID %u does not exist", dbId)));
errmsg("database with OID %u does not exist", db_oid)));
}
Oid dba = ((Form_pg_database) GETSTRUCT(tuple))->datdba;
@ -196,23 +131,17 @@ PreprocessGrantOnDatabaseStmt(Node *node, const char *queryString,
GrantStmt *stmt = castNode(GrantStmt, node);
Assert(stmt->objtype == OBJECT_DATABASE);
List *distributedDatabases = FilterDistributedDatabases(stmt->objects);
List *databaseList = stmt->objects;
if (list_length(distributedDatabases) == 0)
if (list_length(databaseList) == 0)
{
return NIL;
}
EnsureCoordinator();
List *originalObjects = stmt->objects;
stmt->objects = distributedDatabases;
char *sql = DeparseTreeNode((Node *) stmt);
stmt->objects = originalObjects;
List *commands = list_make3(DISABLE_DDL_PROPAGATION,
(void *) sql,
ENABLE_DDL_PROPAGATION);
@ -221,196 +150,23 @@ PreprocessGrantOnDatabaseStmt(Node *node, const char *queryString,
}
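The list_make3() sandwich above recurs throughout this file: every deparsed statement travels to the workers between a pair of GUC toggles so that applying it on a worker does not trigger another round of propagation. A minimal standalone sketch of the idea; the macro bodies below are assumptions for illustration, not quotes from the Citus headers:

#include <stdio.h>

/* assumed macro bodies, for illustration only */
#define DISABLE_DDL_PROPAGATION "SET citus.enable_ddl_propagation TO 'off'"
#define ENABLE_DDL_PROPAGATION "SET citus.enable_ddl_propagation TO 'on'"

int
main(void)
{
	const char *sql = "GRANT CONNECT ON DATABASE db1 TO app_user";
	const char *commands[] = { DISABLE_DDL_PROPAGATION, sql, ENABLE_DDL_PROPAGATION };

	/* each worker executes the three commands in order */
	for (int i = 0; i < 3; i++)
	{
		printf("%s;\n", commands[i]);
	}
	return 0;
}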
/*
* FilterDistributedDatabases filters the database list and returns the distributed ones,
* as a list.
*/
static List *
FilterDistributedDatabases(List *databases)
{
List *distributedDatabases = NIL;
String *databaseName = NULL;
foreach_declared_ptr(databaseName, databases)
{
bool missingOk = true;
ObjectAddress *dbAddress =
GetDatabaseAddressFromDatabaseName(strVal(databaseName), missingOk);
if (IsAnyObjectDistributed(list_make1(dbAddress)))
{
distributedDatabases = lappend(distributedDatabases, databaseName);
}
}
return distributedDatabases;
}
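The foreach_declared_ptr() construct used above (and throughout this diff) iterates a list while assigning each element to an already-declared pointer variable. A simplified standalone analogue over a hand-rolled linked list; this illustrates the shape of the construct, not the listutils.h definition:

#include <stdio.h>

typedef struct Cell
{
	void *value;
	struct Cell *next;
} Cell;

/* assigns each element to the pre-declared variable "var" */
#define sketch_foreach_ptr(var, lst) \
	for (Cell *cell_ = (lst); cell_ != NULL && ((var) = cell_->value, 1); \
		 cell_ = cell_->next)

int
main(void)
{
	Cell second = { "db2", NULL };
	Cell first = { "db1", &second };

	char *name = NULL;
	sketch_foreach_ptr(name, &first)
	{
		printf("%s\n", name); /* prints db1, then db2 */
	}
	return 0;
}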
/*
* IsSetTablespaceStatement returns true if the statement is a SET TABLESPACE statement,
* false otherwise.
*/
static bool
IsSetTablespaceStatement(AlterDatabaseStmt *stmt)
{
DefElem *def = NULL;
foreach_declared_ptr(def, stmt->options)
{
if (strcmp(def->defname, "tablespace") == 0)
{
return true;
}
}
return false;
}
/*
* PreprocessAlterDatabaseStmt is executed before the statement is applied to the local
* postgres instance.
*
* In this stage we can prepare the commands that need to be run on all workers to
* alter the database.
*
* We also serialize database commands globally by acquiring a Citus specific advisory
* lock based on OCLASS_DATABASE on the first primary worker node.
*/
List *
PreprocessAlterDatabaseStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
bool missingOk = false;
if (!ShouldPropagate())
{
return NIL;
}
AlterDatabaseStmt *stmt = castNode(AlterDatabaseStmt, node);
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->dbname,
missingOk);
if (!ShouldPropagate() || !IsAnyObjectDistributed(list_make1(dbAddress)))
{
return NIL;
}
EnsureCoordinator();
SerializeDistributedDDLsOnObjectClassObject(OCLASS_DATABASE, stmt->dbname);
char *sql = DeparseTreeNode((Node *) stmt);
List *commands = list_make3(DISABLE_DDL_PROPAGATION,
sql,
ENABLE_DDL_PROPAGATION);
if (IsSetTablespaceStatement(stmt))
{
/*
* Set tablespace does not work inside a transaction. Therefore, we need to use
* NontransactionalNodeDDLTaskList to run the command on the workers outside
* the transaction block.
*/
bool warnForPartialFailure = true;
return NontransactionalNodeDDLTaskList(NON_COORDINATOR_NODES, commands,
warnForPartialFailure);
}
else
{
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
}
/*
* PreprocessAlterDatabaseRefreshCollStmt is executed before the statement is applied to
* the local postgres instance.
*
* In this stage we can prepare the commands that need to be run on all workers to
* refresh the database collation version.
*
* We also serialize database commands globally by acquiring a Citus specific advisory
* lock based on OCLASS_DATABASE on the first primary worker node.
*/
List *
PreprocessAlterDatabaseRefreshCollStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
bool missingOk = true;
AlterDatabaseRefreshCollStmt *stmt = castNode(AlterDatabaseRefreshCollStmt, node);
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->dbname,
missingOk);
if (!ShouldPropagate() || !IsAnyObjectDistributed(list_make1(dbAddress)))
{
return NIL;
}
EnsureCoordinator();
SerializeDistributedDDLsOnObjectClassObject(OCLASS_DATABASE, stmt->dbname);
char *sql = DeparseTreeNode((Node *) stmt);
List *commands = list_make3(DISABLE_DDL_PROPAGATION,
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
/*
* PreprocessAlterDatabaseRenameStmt is executed before the statement is applied to
* the local postgres instance.
*
* We also serialize database commands globally by acquiring a Citus specific advisory
* lock based on OCLASS_DATABASE on the first primary worker node.
*
* We acquire this lock here instead of PostprocessAlterDatabaseRenameStmt because the
* command renames the database and SerializeDistributedDDLsOnObjectClassObject resolves the
* object on workers based on the database name. For this reason, we need to acquire the lock
* before the command is applied to the local postgres instance.
*/
List *
PreprocessAlterDatabaseRenameStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
bool missingOk = true;
RenameStmt *stmt = castNode(RenameStmt, node);
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->subname,
missingOk);
if (!ShouldPropagate() || !IsAnyObjectDistributed(list_make1(dbAddress)))
{
return NIL;
}
EnsureCoordinator();
/*
* Different from other ALTER DATABASE commands, we first acquire a lock
* by providing InvalidOid because we want ALTER DATABASE .. RENAME TO ..
* commands to block not only against ALTER DATABASE operations but also
* against CREATE DATABASE operations, because those might cause name
* conflicts and that could cause deadlocks too.
*/
SerializeDistributedDDLsOnObjectClass(OCLASS_DATABASE);
SerializeDistributedDDLsOnObjectClassObject(OCLASS_DATABASE, stmt->subname);
return NIL;
}
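The ordering above is the classic coarse-before-fine lock discipline: every path that might conflict takes the class-wide lock first, so no two backends can each hold one lock while waiting on the other. A standalone pthreads sketch of the discipline; this is not the Citus advisory-lock machinery, which holds its locks until the transaction ends:

#include <pthread.h>
#include <stdio.h>

/* coarse lock: all databases (OCLASS_DATABASE); fine lock: one database */
static pthread_mutex_t classLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t objectLock = PTHREAD_MUTEX_INITIALIZER;

static void
rename_database(void)
{
	/* coarse first, then fine: a concurrent CREATE DATABASE that takes
	 * only the coarse lock can never deadlock with this rename */
	pthread_mutex_lock(&classLock);
	pthread_mutex_lock(&objectLock);

	printf("rename applied\n");

	pthread_mutex_unlock(&objectLock);
	pthread_mutex_unlock(&classLock);
}

int
main(void)
{
	rename_database();
	return 0;
}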
/*
* PostprocessAlterDatabaseRenameStmt is executed after the statement is applied to the local
* postgres instance. In this stage we prepare the ALTER DATABASE RENAME statement to be run on
* all workers.
*/
List *
PostprocessAlterDatabaseRenameStmt(Node *node, const char *queryString)
{
bool missingOk = false;
RenameStmt *stmt = castNode(RenameStmt, node);
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->newname,
missingOk);
if (!ShouldPropagate() || !IsAnyObjectDistributed(list_make1(dbAddress)))
{
return NIL;
}
EnsureCoordinator();
@ -424,32 +180,27 @@ PostprocessAlterDatabaseRenameStmt(Node *node, const char *queryString)
}
#if PG_VERSION_NUM >= PG_VERSION_15
/*
* PreprocessAlterDatabaseSetStmt is executed before the statement is applied to the local
* postgres instance.
*
* In this stage we can prepare the commands that need to be run on all workers to
* alter the database's configuration parameters.
*
* We also serialize database commands globally by acquiring a Citus specific advisory
* lock based on OCLASS_DATABASE on the first primary worker node.
*/
List *
PreprocessAlterDatabaseSetStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
PreprocessAlterDatabaseRefreshCollStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
AlterDatabaseSetStmt *stmt = castNode(AlterDatabaseSetStmt, node);
bool missingOk = true;
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->dbname,
missingOk);
if (!ShouldPropagate() || !IsAnyObjectDistributed(list_make1(dbAddress)))
if (!ShouldPropagate())
{
return NIL;
}
AlterDatabaseRefreshCollStmt *stmt = castNode(AlterDatabaseRefreshCollStmt, node);
EnsureCoordinator();
SerializeDistributedDDLsOnObjectClassObject(OCLASS_DATABASE, stmt->dbname);
char *sql = DeparseTreeNode((Node *) stmt);
@ -461,540 +212,4 @@ PreprocessAlterDatabaseSetStmt(Node *node, const char *queryString,
}
/*
* PreprocessCreateDatabaseStmt is executed before the statement is applied to the local
* Postgres instance.
*
* In this stage, we perform the validations that we want to run before delegating to
* previous utility hooks because it might not be convenient to throw an error in an
* implicit transaction that creates a database. Also in this stage, we save the original
* database name and replace dbname field with a temporary name for failure handling
* purposes. We let Postgres create the database with the temporary name, insert a cleanup
* record for the temporary database name on all nodes and let PostprocessCreateDatabaseStmt()
* to return the distributed DDL job that both creates the database with the temporary name
* and then renames it back to its original name.
*
* We also serialize database commands globally by acquiring a Citus specific advisory
* lock based on OCLASS_DATABASE on the first primary worker node.
*/
List *
PreprocessCreateDatabaseStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
if (!EnableCreateDatabasePropagation || !ShouldPropagate())
{
return NIL;
}
EnsureCoordinatorIsInMetadata();
CreatedbStmt *stmt = castNode(CreatedbStmt, node);
EnsureSupportedCreateDatabaseCommand(stmt);
SerializeDistributedDDLsOnObjectClass(OCLASS_DATABASE);
OperationId operationId = RegisterOperationNeedingCleanup();
char *tempDatabaseName = psprintf(TEMP_DATABASE_NAME_FMT,
operationId, GetLocalGroupId());
List *remoteNodes = TargetWorkerSetNodeList(ALL_SHARD_NODES, RowShareLock);
WorkerNode *remoteNode = NULL;
foreach_declared_ptr(remoteNode, remoteNodes)
{
InsertCleanupRecordOutsideTransaction(
CLEANUP_OBJECT_DATABASE,
pstrdup(quote_identifier(tempDatabaseName)),
remoteNode->groupId,
CLEANUP_ON_FAILURE
);
}
CreateDatabaseCommandOriginalDbName = stmt->dbname;
stmt->dbname = tempDatabaseName;
/*
* Delete cleanup records in the same transaction so that if the current
* transaction fails for some reason, the cleanup records won't be
* deleted. In the happy path, we will delete the cleanup records without
* deferring them to the background worker.
*/
FinalizeOperationNeedingCleanupOnSuccess("create database");
return NIL;
}
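The cleanup records implement a write-ahead compensation scheme: the record is committed out of band before the risky CREATE DATABASE runs, and deleted only inside the transaction that succeeds, so an abort leaves it behind for the background cleaner. A standalone sketch of that lifecycle, with invented names:

#include <stdio.h>
#include <stdbool.h>

typedef struct CleanupRecordSketch
{
	const char *objectName;
	bool present;
} CleanupRecordSketch;

static void
run_create(bool createSucceeds)
{
	/* inserted and committed before the risky work starts */
	CleanupRecordSketch record = { "citus_temp_database_42_0", true };

	if (createSucceeds)
	{
		/* deleted in the same transaction that commits the success */
		record.present = false;
	}

	printf("%s: record %s\n",
	       createSucceeds ? "success" : "failure",
	       record.present ? "kept -> background cleaner drops the temp db"
	                      : "deleted");
}

int
main(void)
{
	run_create(true);
	run_create(false);
	return 0;
}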
/*
* PostprocessCreateDatabaseStmt is executed after the statement is applied to the local
* postgres instance.
*
* In this stage, we first rename the temporary database back to its original name on
* the local node and then return a list of distributed DDL jobs to create the database with
* the temporary name and then to rename it back to its original name. That way, if CREATE
* DATABASE fails on any of the nodes, the temporary database will be cleaned up by the
* cleanup records that we inserted in PreprocessCreateDatabaseStmt() and in case of a
* failure, we won't leak a database with the name that the user intended to use for
* the database.
*/
List *
PostprocessCreateDatabaseStmt(Node *node, const char *queryString)
{
if (!EnableCreateDatabasePropagation || !ShouldPropagate())
{
return NIL;
}
EnsurePropagationToCoordinator();
/*
* Given that CREATE DATABASE doesn't support "IF NOT EXISTS" and we're
* in the post-process, the database must exist, hence missingOk = false.
*/
bool missingOk = false;
bool isPostProcess = true;
List *addresses = GetObjectAddressListFromParseTree(node, missingOk,
isPostProcess);
EnsureAllObjectDependenciesExistOnAllNodes(addresses);
char *createDatabaseCommand = DeparseTreeNode(node);
List *createDatabaseCommands = list_make3(DISABLE_DDL_PROPAGATION,
(void *) createDatabaseCommand,
ENABLE_DDL_PROPAGATION);
/*
* Since CREATE DATABASE statements cannot be executed in a transaction
* block, we need to use NontransactionalNodeDDLTaskList() to send the CREATE
* DATABASE statement to the workers.
*/
bool warnForPartialFailure = false;
List *createDatabaseDDLJobList =
NontransactionalNodeDDLTaskList(REMOTE_NODES, createDatabaseCommands,
warnForPartialFailure);
CreatedbStmt *stmt = castNode(CreatedbStmt, node);
char *renameDatabaseCommand =
psprintf("ALTER DATABASE %s RENAME TO %s",
quote_identifier(stmt->dbname),
quote_identifier(CreateDatabaseCommandOriginalDbName));
List *renameDatabaseCommands = list_make3(DISABLE_DDL_PROPAGATION,
renameDatabaseCommand,
ENABLE_DDL_PROPAGATION);
/*
* We use NodeDDLTaskList() to send the RENAME DATABASE statement to the
* workers because we want to execute it in a coordinated transaction.
*/
List *renameDatabaseDDLJobList =
NodeDDLTaskList(REMOTE_NODES, renameDatabaseCommands);
/*
* Temporarily disable citus.enable_ddl_propagation before issuing
* the rename command locally because we don't want to execute it on remote
* nodes yet. We will execute it on remote nodes by returning it as a
* distributed DDL job.
*
* The reason why we don't want to execute it on remote nodes yet is that
* the database is not created on remote nodes yet.
*/
int saveNestLevel = NewGUCNestLevel();
set_config_option("citus.enable_ddl_propagation", "off",
(superuser() ? PGC_SUSET : PGC_USERSET), PGC_S_SESSION,
GUC_ACTION_LOCAL, true, 0, false);
ExecuteUtilityCommand(renameDatabaseCommand);
AtEOXact_GUC(true, saveNestLevel);
/*
* Restore the original database name because MarkObjectDistributed()
* resolves the oid of the object based on the database name and is called
* after executing the distributed DDL job that renames the temporary database.
*/
stmt->dbname = CreateDatabaseCommandOriginalDbName;
return list_concat(createDatabaseDDLJobList, renameDatabaseDDLJobList);
}
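Taken together, the preprocess/postprocess pair distributes CREATE DATABASE in two phases: create everywhere under the temporary name (outside a transaction block), then rename everywhere in a coordinated transaction. A sketch of the resulting command sequence, using the same hypothetical operation id 42 and group id 0 as above:

#include <stdio.h>

int
main(void)
{
	const char *tempName = "citus_temp_database_42_0"; /* hypothetical */

	/* phase 1: nontransactional on all nodes; on failure, the cleanup
	 * records drop the temporary database in the background */
	printf("CREATE DATABASE %s;\n", tempName);

	/* phase 2: coordinated transaction on all nodes */
	printf("ALTER DATABASE %s RENAME TO mydb;\n", tempName);
	return 0;
}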
/*
* PreprocessDropDatabaseStmt is executed before the statement is applied to the local
* postgres instance. In this stage we can prepare the commands that need to be run on
* all workers to drop the database.
*
* We also serialize database commands globally by acquiring a Citus specific advisory
* lock based on OCLASS_DATABASE on the first primary worker node.
*/
List *
PreprocessDropDatabaseStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
if (!EnableCreateDatabasePropagation || !ShouldPropagate())
{
return NIL;
}
EnsurePropagationToCoordinator();
DropdbStmt *stmt = (DropdbStmt *) node;
bool isPostProcess = false;
List *addresses = GetObjectAddressListFromParseTree(node, stmt->missing_ok,
isPostProcess);
if (list_length(addresses) != 1)
{
ereport(ERROR, (errmsg("unexpected number of objects found when "
"executing DROP DATABASE command")));
}
ObjectAddress *address = (ObjectAddress *) linitial(addresses);
if (address->objectId == InvalidOid || !IsAnyObjectDistributed(list_make1(address)))
{
return NIL;
}
SerializeDistributedDDLsOnObjectClassObject(OCLASS_DATABASE, stmt->dbname);
char *dropDatabaseCommand = DeparseTreeNode(node);
List *dropDatabaseCommands = list_make3(DISABLE_DDL_PROPAGATION,
(void *) dropDatabaseCommand,
ENABLE_DDL_PROPAGATION);
/*
* For the same reason stated in PostprocessCreateDatabaseStmt(), we need to
* use NontransactionalNodeDDLTaskList() to send the DROP DATABASE statement
* to the workers.
*/
bool warnForPartialFailure = true;
List *dropDatabaseDDLJobList =
NontransactionalNodeDDLTaskList(REMOTE_NODES, dropDatabaseCommands,
warnForPartialFailure);
return dropDatabaseDDLJobList;
}
/*
* DropDatabaseStmtObjectAddress gets the ObjectAddress of the database that is the
* object of the DropdbStmt.
*/
List *
DropDatabaseStmtObjectAddress(Node *node, bool missingOk, bool isPostprocess)
{
DropdbStmt *stmt = castNode(DropdbStmt, node);
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->dbname,
missingOk);
return list_make1(dbAddress);
}
/*
* CreateDatabaseStmtObjectAddress gets the ObjectAddress of the database that is the
* object of the CreatedbStmt.
*/
List *
CreateDatabaseStmtObjectAddress(Node *node, bool missingOk, bool isPostprocess)
{
CreatedbStmt *stmt = castNode(CreatedbStmt, node);
ObjectAddress *dbAddress = GetDatabaseAddressFromDatabaseName(stmt->dbname,
missingOk);
return list_make1(dbAddress);
}
/*
* EnsureSupportedCreateDatabaseCommand validates the options provided for the CREATE
* DATABASE command.
*
* Parameters:
* stmt: A CreatedbStmt struct representing a CREATE DATABASE command.
* The options field is a list of DefElem structs, each representing an option.
*
* Currently, this function checks for the following:
* - The "oid" option is not supported.
* - The "template" option is only supported with the value "template1".
* - The "strategy" option is only supported with the value "wal_log".
*/
void
EnsureSupportedCreateDatabaseCommand(CreatedbStmt *stmt)
{
DefElem *option = NULL;
foreach_declared_ptr(option, stmt->options)
{
if (strcmp(option->defname, "oid") == 0)
{
ereport(ERROR,
errmsg("CREATE DATABASE option \"%s\" is not supported",
option->defname));
}
char *optionValue = defGetString(option);
if (strcmp(option->defname, "template") == 0 &&
strcmp(optionValue, "template1") != 0)
{
ereport(ERROR, errmsg("Only template1 is supported as template "
"parameter for CREATE DATABASE"));
}
if (strcmp(option->defname, "strategy") == 0 &&
strcmp(optionValue, "wal_log") != 0)
{
ereport(ERROR, errmsg("Only wal_log is supported as strategy "
"parameter for CREATE DATABASE"));
}
}
}
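A compact standalone restatement of the three checks above; checkOption() is a name invented here for illustration, not a Citus function:

#include <stdio.h>
#include <string.h>

/* returns an error message, or NULL when the option is acceptable */
static const char *
checkOption(const char *name, const char *value)
{
	if (strcmp(name, "oid") == 0)
	{
		return "CREATE DATABASE option \"oid\" is not supported";
	}
	if (strcmp(name, "template") == 0 && strcmp(value, "template1") != 0)
	{
		return "Only template1 is supported as template parameter";
	}
	if (strcmp(name, "strategy") == 0 && strcmp(value, "wal_log") != 0)
	{
		return "Only wal_log is supported as strategy parameter";
	}
	return NULL;
}

int
main(void)
{
	const char *error = checkOption("strategy", "file_copy");
	printf("%s\n", error ? error : "ok"); /* prints the strategy error */
	return 0;
}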
/*
* GetDatabaseAddressFromDatabaseName gets the database name and returns the ObjectAddress
* of the database.
*/
static ObjectAddress *
GetDatabaseAddressFromDatabaseName(char *databaseName, bool missingOk)
{
Oid databaseOid = get_database_oid(databaseName, missingOk);
ObjectAddress *dbObjectAddress = palloc0(sizeof(ObjectAddress));
ObjectAddressSet(*dbObjectAddress, DatabaseRelationId, databaseOid);
return dbObjectAddress;
}
/*
* GetTablespaceName gets the tablespace oid and returns the tablespace name.
*/
static char *
GetTablespaceName(Oid tablespaceOid)
{
HeapTuple tuple = SearchSysCache1(TABLESPACEOID, ObjectIdGetDatum(tablespaceOid));
if (!HeapTupleIsValid(tuple))
{
return NULL;
}
Form_pg_tablespace tablespaceForm = (Form_pg_tablespace) GETSTRUCT(tuple);
char *tablespaceName = pstrdup(NameStr(tablespaceForm->spcname));
ReleaseSysCache(tuple);
return tablespaceName;
}
/*
* GetDatabaseMetadataSyncCommands returns a list of SQL statements
* for the given database id. The list contains the database DDL command,
* grant commands and comment propagation commands.
*/
List *
GetDatabaseMetadataSyncCommands(Oid dbOid)
{
char *databaseName = get_database_name(dbOid);
char *databaseDDLCommand = CreateDatabaseDDLCommand(dbOid);
List *ddlCommands = list_make1(databaseDDLCommand);
List *grantDDLCommands = GrantOnDatabaseDDLCommands(dbOid);
List *commentDDLCommands = GetCommentPropagationCommands(DatabaseRelationId, dbOid,
databaseName,
OBJECT_DATABASE);
ddlCommands = list_concat(ddlCommands, grantDDLCommands);
ddlCommands = list_concat(ddlCommands, commentDDLCommands);
return ddlCommands;
}
/*
* GetDatabaseCollation gets the oid of a database and returns all the collation-related
* information. We need this method since collation-related info in Form_pg_database is
* not accessible.
*/
static DatabaseCollationInfo
GetDatabaseCollation(Oid dbOid)
{
DatabaseCollationInfo info;
memset(&info, 0, sizeof(DatabaseCollationInfo));
Relation rel = table_open(DatabaseRelationId, AccessShareLock);
HeapTuple tup = get_catalog_object_by_oid(rel, Anum_pg_database_oid, dbOid);
if (!HeapTupleIsValid(tup))
{
elog(ERROR, "cache lookup failed for database %u", dbOid);
}
bool isNull = false;
TupleDesc tupdesc = RelationGetDescr(rel);
Datum collationDatum = heap_getattr(tup, Anum_pg_database_datcollate, tupdesc,
&isNull);
info.datcollate = TextDatumGetCString(collationDatum);
Datum ctypeDatum = heap_getattr(tup, Anum_pg_database_datctype, tupdesc, &isNull);
info.datctype = TextDatumGetCString(ctypeDatum);
Datum icuLocaleDatum = heap_getattr(tup, Anum_pg_database_datlocale, tupdesc,
&isNull);
if (!isNull)
{
info.daticulocale = TextDatumGetCString(icuLocaleDatum);
}
Datum collverDatum = heap_getattr(tup, Anum_pg_database_datcollversion, tupdesc,
&isNull);
if (!isNull)
{
info.datcollversion = TextDatumGetCString(collverDatum);
}
#if PG_VERSION_NUM >= PG_VERSION_16
Datum icurulesDatum = heap_getattr(tup, Anum_pg_database_daticurules, tupdesc,
&isNull);
if (!isNull)
{
info.daticurules = TextDatumGetCString(icurulesDatum);
}
#endif
table_close(rel, AccessShareLock);
heap_freetuple(tup);
return info;
}
/*
* GetLocaleProviderString gets the datlocprovider stored in pg_database
* and returns the string representation of the datlocprovider
*/
static char *
GetLocaleProviderString(char datlocprovider)
{
switch (datlocprovider)
{
case 'c':
{
return "libc";
}
case 'i':
{
return "icu";
}
default:
{
ereport(ERROR, (errmsg("unexpected datlocprovider value: %c",
datlocprovider)));
}
}
}
/*
* GenerateCreateDatabaseStatementFromPgDatabase gets the pg_database tuple and returns the
* CREATE DATABASE statement that can be used to create the given database.
*
* Note that this doesn't deparse the OID of the database; this is not a
* problem because we don't allow specifying custom OIDs for databases
* when creating them anyway.
*/
static char *
GenerateCreateDatabaseStatementFromPgDatabase(Form_pg_database databaseForm)
{
DatabaseCollationInfo collInfo = GetDatabaseCollation(databaseForm->oid);
StringInfoData str;
initStringInfo(&str);
appendStringInfo(&str, "CREATE DATABASE %s",
quote_identifier(NameStr(databaseForm->datname)));
appendStringInfo(&str, " CONNECTION LIMIT %d", databaseForm->datconnlimit);
appendStringInfo(&str, " ALLOW_CONNECTIONS = %s",
quote_literal_cstr(databaseForm->datallowconn ? "true" : "false"));
appendStringInfo(&str, " IS_TEMPLATE = %s",
quote_literal_cstr(databaseForm->datistemplate ? "true" : "false"));
appendStringInfo(&str, " LC_COLLATE = %s",
quote_literal_cstr(collInfo.datcollate));
appendStringInfo(&str, " LC_CTYPE = %s", quote_literal_cstr(collInfo.datctype));
appendStringInfo(&str, " OWNER = %s",
quote_identifier(GetUserNameFromId(databaseForm->datdba, false)));
appendStringInfo(&str, " TABLESPACE = %s",
quote_identifier(GetTablespaceName(databaseForm->dattablespace)));
appendStringInfo(&str, " ENCODING = %s",
quote_literal_cstr(pg_encoding_to_char(databaseForm->encoding)));
if (collInfo.datcollversion != NULL)
{
appendStringInfo(&str, " COLLATION_VERSION = %s",
quote_identifier(collInfo.datcollversion));
}
if (collInfo.daticulocale != NULL)
{
appendStringInfo(&str, " ICU_LOCALE = %s", quote_identifier(
collInfo.daticulocale));
}
appendStringInfo(&str, " LOCALE_PROVIDER = %s",
quote_identifier(GetLocaleProviderString(
databaseForm->datlocprovider)));
#if PG_VERSION_NUM >= PG_VERSION_16
if (collInfo.daticurules != NULL)
{
appendStringInfo(&str, " ICU_RULES = %s", quote_identifier(
collInfo.daticurules));
}
#endif
return str.data;
}
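For a libc-collated database, the function above returns a single-line string shaped roughly like the following (wrapped here for readability; every value is made up for illustration):

CREATE DATABASE db1 CONNECTION LIMIT -1 ALLOW_CONNECTIONS = 'true'
    IS_TEMPLATE = 'false' LC_COLLATE = 'C' LC_CTYPE = 'C' OWNER = postgres
    TABLESPACE = pg_default ENCODING = 'UTF8' LOCALE_PROVIDER = libc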
/*
* CreateDatabaseDDLCommand returns a CREATE DATABASE command to create the given
* database.
*
* The command is wrapped by the citus_internal.database_command() UDF
* to avoid the transaction block restrictions that apply to database commands.
*/
char *
CreateDatabaseDDLCommand(Oid dbId)
{
HeapTuple tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(dbId));
if (!HeapTupleIsValid(tuple))
{
ereport(ERROR, (errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database with OID %u does not exist", dbId)));
}
Form_pg_database databaseForm = (Form_pg_database) GETSTRUCT(tuple);
char *createStmt = GenerateCreateDatabaseStatementFromPgDatabase(databaseForm);
StringInfo outerDbStmt = makeStringInfo();
/* Generate the CREATE DATABASE statement */
appendStringInfo(outerDbStmt,
"SELECT citus_internal.database_command(%s)",
quote_literal_cstr(createStmt));
ReleaseSysCache(tuple);
return outerDbStmt->data;
}
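After wrapping, the string that metadata sync ships to other nodes looks like this (same illustrative values as above; quote_literal_cstr() escapes the embedded quotes):

SELECT citus_internal.database_command('CREATE DATABASE db1 CONNECTION LIMIT -1 ...');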

View File

@ -31,146 +31,53 @@
#include "distributed/worker_manager.h"
#include "distributed/worker_transaction.h"
typedef enum RequiredObjectSet
{
REQUIRE_ONLY_DEPENDENCIES = 1,
REQUIRE_OBJECT_AND_DEPENDENCIES = 2,
} RequiredObjectSet;
static void EnsureDependenciesCanBeDistributed(const ObjectAddress *relationAddress);
static void ErrorIfCircularDependencyExists(const ObjectAddress *objectAddress);
static int ObjectAddressComparator(const void *a, const void *b);
static void EnsureDependenciesExistOnAllNodes(const ObjectAddress *target);
static void EnsureRequiredObjectSetExistOnAllNodes(const ObjectAddress *target,
RequiredObjectSet requiredObjectSet);
static List * GetDependencyCreateDDLCommands(const ObjectAddress *dependency);
static bool ShouldPropagateObject(const ObjectAddress *address);
static char * DropTableIfExistsCommand(Oid relationId);
/*
* EnsureObjectAndDependenciesExistOnAllNodes is a wrapper around
* EnsureRequiredObjectSetExistOnAllNodes to ensure the "object itself" (together
* with its dependencies) is available on all nodes.
*
* Different from EnsureDependenciesExistOnAllNodes, we return early if the
* target object is distributed already.
*
* The reason why we don't do the same in EnsureDependenciesExistOnAllNodes
* is that it is also used when altering an object, and hence the target object
* may have just gained a dependency that needs to be propagated now. For example,
* when "GRANT non_dist_role TO dist_role" is executed, we need to propagate
* "non_dist_role" to all nodes before propagating the "GRANT" command itself.
* For this reason, we call EnsureDependenciesExistOnAllNodes for "dist_role"
* and it would automatically discover that "non_dist_role" is a dependency of
* "dist_role" and propagate it beforehand.
*
* However, when we're requested to create the target object itself (and
* implicitly its dependencies), we're sure that we're not altering the target
* object itself, hence we can return early if the target object is already
* distributed. This is the case, for example, when
* "REASSIGN OWNED BY dist_role TO non_dist_role" is executed. In that case,
* "non_dist_role" is not a dependency of "dist_role" but we want to distribute
* "non_dist_role" beforehand and we call this function for "non_dist_role",
* not for "dist_role".
*
* See EnsureRequiredObjectSetExistOnAllNodes to learn more about how this
* function deals with an object created within the same transaction.
*/
void
EnsureObjectAndDependenciesExistOnAllNodes(const ObjectAddress *target)
{
if (IsAnyObjectDistributed(list_make1((ObjectAddress *) target)))
{
return;
}
EnsureRequiredObjectSetExistOnAllNodes(target, REQUIRE_OBJECT_AND_DEPENDENCIES);
}
/*
* EnsureDependenciesExistOnAllNodes is a wrapper around
* EnsureRequiredObjectSetExistOnAllNodes to ensure "all dependencies" of the given
* object --but not the object itself-- are available on all nodes.
*
* See EnsureRequiredObjectSetExistOnAllNodes to learn more about how this
* function deals with an object created within the same transaction.
*/
static void
EnsureDependenciesExistOnAllNodes(const ObjectAddress *target)
{
EnsureRequiredObjectSetExistOnAllNodes(target, REQUIRE_ONLY_DEPENDENCIES);
}
/*
* EnsureRequiredObjectSetExistOnAllNodes finds all the dependencies that we support and makes
* sure these are available on all nodes if required object set is REQUIRE_ONLY_DEPENDENCIES.
* Otherwise, i.e., if required object set is REQUIRE_OBJECT_AND_DEPENDENCIES, then this
* function creates the object itself on all nodes too. This function ensures that each
* of the dependencies is supported by Citus but doesn't check the same for the target
* object itself (when REQUIRE_OBJECT_AND_DEPENDENCIES is provided) because we assume that
* callers don't call this function for an unsupported object at all.
*
* If not available, they will be created on the nodes via a separate session that will be
* committed directly so that the objects are visible to potentially multiple sessions creating
* the shards.
* EnsureDependenciesExistOnAllNodes finds all the dependencies that we support and makes
* sure these are available on all workers. If not available they will be created on the
* workers via a separate session that will be committed directly so that the objects are
* visible to potentially multiple sessions creating the shards.
*
* Note: only the actual objects are created via a separate session; the records in
* pg_dist_object are created in this session. As a side effect the objects could be
* created on the nodes without a catalog entry. Updates to the objects on local node
* are not propagated to the remote nodes until the record is visible on local node.
* created on the workers without a catalog entry. Updates to the objects on the coordinator
* are not propagated to the workers until the record is visible on the coordinator.
*
* This is solved by creating the dependencies in an idempotent manner, either via
* postgres native CREATE IF NOT EXISTS, or citus helper functions.
*/
static void
EnsureRequiredObjectSetExistOnAllNodes(const ObjectAddress *target,
RequiredObjectSet requiredObjectSet)
EnsureDependenciesExistOnAllNodes(const ObjectAddress *target)
{
Assert(requiredObjectSet == REQUIRE_ONLY_DEPENDENCIES ||
requiredObjectSet == REQUIRE_OBJECT_AND_DEPENDENCIES);
List *objectsWithCommands = NIL;
List *dependenciesWithCommands = NIL;
List *ddlCommands = NULL;
/*
* If any unsupported dependency or circular dependency exists, Citus cannot
* ensure that dependencies will exist on all nodes.
*
* Note that we don't check whether "target" is distributable (in case
* REQUIRE_OBJECT_AND_DEPENDENCIES is provided) because we expect callers
* to not even call this function if Citus doesn't know how to propagate
* "target" object itself.
*/
EnsureDependenciesCanBeDistributed(target);
/* collect all dependencies in creation order and get their ddl commands */
List *objectsToBeCreated = GetDependenciesForObject(target);
/*
* Append the target object to make sure that it's created after its
* dependencies are created, if requested.
*/
if (requiredObjectSet == REQUIRE_OBJECT_AND_DEPENDENCIES)
List *dependencies = GetDependenciesForObject(target);
ObjectAddress *dependency = NULL;
foreach_ptr(dependency, dependencies)
{
ObjectAddress *targetCopy = palloc(sizeof(ObjectAddress));
*targetCopy = *target;
objectsToBeCreated = lappend(objectsToBeCreated, targetCopy);
}
ObjectAddress *object = NULL;
foreach_declared_ptr(object, objectsToBeCreated)
{
List *dependencyCommands = GetDependencyCreateDDLCommands(object);
List *dependencyCommands = GetDependencyCreateDDLCommands(dependency);
ddlCommands = list_concat(ddlCommands, dependencyCommands);
/* create a new list with objects that actually created commands */
/* create a new list with dependencies that actually created commands */
if (list_length(dependencyCommands) > 0)
{
objectsWithCommands = lappend(objectsWithCommands, object);
dependenciesWithCommands = lappend(dependenciesWithCommands, dependency);
}
}
if (list_length(ddlCommands) <= 0)
@ -190,31 +97,29 @@ EnsureRequiredObjectSetExistOnAllNodes(const ObjectAddress *target,
* either get it now, or get it in citus_add_node after this transaction finishes and
* the pg_dist_object record becomes visible.
*/
List *remoteNodeList = ActivePrimaryRemoteNodeList(RowShareLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(RowShareLock);
/*
* Lock objects to be created explicitly to make sure the same DDL command won't be sent
* Lock dependent objects explicitly to make sure the same DDL command won't be sent
* multiple times from parallel sessions.
*
* Sort the objects that will be created on workers so as not to have any deadlock
* Sort dependencies that will be created on workers so as not to have any deadlock
* issue if different sessions are creating different objects.
*/
List *addressSortedDependencies = SortList(objectsWithCommands,
List *addressSortedDependencies = SortList(dependenciesWithCommands,
ObjectAddressComparator);
foreach_declared_ptr(object, addressSortedDependencies)
foreach_ptr(dependency, addressSortedDependencies)
{
LockDatabaseObject(object->classId, object->objectId,
object->objectSubId, ExclusiveLock);
LockDatabaseObject(dependency->classId, dependency->objectId,
dependency->objectSubId, ExclusiveLock);
}
/*
* We need to propagate objects via the current user's metadata connection if
* any of the objects that we're interested in are created in the current transaction.
* Our assumption is that if we rely on an object created in the current transaction,
* then the current user, most probably, has permissions to create the target object
* as well.
*
* We need to propagate dependencies via the current user's metadata connection if
* any dependency for the target is created in the current transaction. Our assumption
* is that if we rely on a dependency created in the current transaction, then the
* current user, most probably, has permissions to create the target object as well.
* Note that the user may still be unable to create the target due to missing
* permissions for some of its dependencies. But this is ok since it should be rare.
*
@ -222,25 +127,14 @@ EnsureRequiredObjectSetExistOnAllNodes(const ObjectAddress *target,
* have visibility issues since propagated dependencies would be invisible to
* the separate connection until we locally commit.
*/
List *createdObjectList = GetAllSupportedDependenciesForObject(target);
/* consider target as well if we're requested to create it too */
if (requiredObjectSet == REQUIRE_OBJECT_AND_DEPENDENCIES)
if (HasAnyDependencyInPropagatedObjects(target))
{
ObjectAddress *targetCopy = palloc(sizeof(ObjectAddress));
*targetCopy = *target;
createdObjectList = lappend(createdObjectList, targetCopy);
}
if (HasAnyObjectInPropagatedObjects(createdObjectList))
{
SendCommandListToRemoteNodesWithMetadata(ddlCommands);
SendCommandListToWorkersWithMetadata(ddlCommands);
}
else
{
WorkerNode *workerNode = NULL;
foreach_declared_ptr(workerNode, remoteNodeList)
foreach_ptr(workerNode, workerNodeList)
{
const char *nodeName = workerNode->workerName;
uint32 nodePort = workerNode->workerPort;
@ -252,11 +146,11 @@ EnsureRequiredObjectSetExistOnAllNodes(const ObjectAddress *target,
}
/*
* We do this after creating the objects on remote nodes, making sure
* that objects have been created on remote nodes before marking them
* We do this after creating the objects on the workers, making sure
* that objects have been created on worker nodes before marking them
* distributed, so MarkObjectDistributed won't fail.
*/
foreach_declared_ptr(object, objectsWithCommands)
foreach_ptr(dependency, dependenciesWithCommands)
{
/*
* pg_dist_object entries must be propagated with the super user, since
@ -266,7 +160,7 @@ EnsureRequiredObjectSetExistOnAllNodes(const ObjectAddress *target,
* Only dependent object's metadata should be propagated with super user.
* Metadata of the table itself must be propagated with the current user.
*/
MarkObjectDistributedViaSuperUser(object);
MarkObjectDistributedViaSuperUser(dependency);
}
}
@ -279,7 +173,7 @@ void
EnsureAllObjectDependenciesExistOnAllNodes(const List *targets)
{
ObjectAddress *target = NULL;
foreach_declared_ptr(target, targets)
foreach_ptr(target, targets)
{
EnsureDependenciesExistOnAllNodes(target);
}
@ -336,7 +230,7 @@ DeferErrorIfCircularDependencyExists(const ObjectAddress *objectAddress)
List *dependencies = GetAllDependenciesForObject(objectAddress);
ObjectAddress *dependency = NULL;
foreach_declared_ptr(dependency, dependencies)
foreach_ptr(dependency, dependencies)
{
if (dependency->classId == objectAddress->classId &&
dependency->objectId == objectAddress->objectId &&
@ -424,7 +318,7 @@ GetDistributableDependenciesForObject(const ObjectAddress *target)
/* filter the ones that can be distributed */
ObjectAddress *dependency = NULL;
foreach_declared_ptr(dependency, dependencies)
foreach_ptr(dependency, dependencies)
{
/*
* TODO: maybe we can optimize the logic applied in the line below. Actually we
@ -508,7 +402,7 @@ GetDependencyCreateDDLCommands(const ObjectAddress *dependency)
INCLUDE_IDENTITY,
creatingShellTableOnRemoteNode);
TableDDLCommand *tableDDLCommand = NULL;
foreach_declared_ptr(tableDDLCommand, tableDDLCommands)
foreach_ptr(tableDDLCommand, tableDDLCommands)
{
Assert(CitusIsA(tableDDLCommand, TableDDLCommand));
commandList = lappend(commandList, GetTableDDLCommand(
@ -565,29 +459,16 @@ GetDependencyCreateDDLCommands(const ObjectAddress *dependency)
case OCLASS_DATABASE:
{
/*
* For the database where Citus is installed, propagate only the ownership of the
* database, and only when the feature is on.
*
* This is because this database must exist on all nodes already so we shouldn't
* need to "CREATE" it on other nodes. However, we still need to correctly reflect
* its owner on other nodes too.
*/
if (dependency->objectId == MyDatabaseId && EnableAlterDatabaseOwner)
List *databaseDDLCommands = NIL;
/* only propagate the ownership of the database when the feature is on */
if (EnableAlterDatabaseOwner)
{
return DatabaseOwnerDDLCommands(dependency);
List *ownerDDLCommands = DatabaseOwnerDDLCommands(dependency);
databaseDDLCommands = list_concat(databaseDDLCommands, ownerDDLCommands);
}
/*
* For the other databases, create the database on all nodes only when the
* feature is on.
*/
if (dependency->objectId != MyDatabaseId && EnableCreateDatabasePropagation)
{
return GetDatabaseMetadataSyncCommands(dependency->objectId);
}
return NIL;
return databaseDDLCommands;
}
case OCLASS_PROC:
@ -683,7 +564,7 @@ GetAllDependencyCreateDDLCommands(const List *dependencies)
List *commands = NIL;
ObjectAddress *dependency = NULL;
foreach_declared_ptr(dependency, dependencies)
foreach_ptr(dependency, dependencies)
{
commands = list_concat(commands, GetDependencyCreateDDLCommands(dependency));
}
@ -831,7 +712,7 @@ bool
ShouldPropagateAnyObject(List *addresses)
{
ObjectAddress *address = NULL;
foreach_declared_ptr(address, addresses)
foreach_ptr(address, addresses)
{
if (ShouldPropagateObject(address))
{
@ -853,7 +734,7 @@ FilterObjectAddressListByPredicate(List *objectAddressList, AddressPredicate pre
List *result = NIL;
ObjectAddress *address = NULL;
foreach_declared_ptr(address, objectAddressList)
foreach_ptr(address, objectAddressList)
{
if (predicate(address))
{

View File

@ -16,7 +16,6 @@
#include "distributed/commands.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/comment.h"
#include "distributed/deparser.h"
#include "distributed/version_compat.h"
@ -152,17 +151,6 @@ static DistributeObjectOps Any_AlterRole = {
.address = AlterRoleStmtObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Any_AlterRoleRename = {
.deparse = DeparseRenameRoleStmt,
.qualify = NULL,
.preprocess = PreprocessAlterRoleRenameStmt,
.postprocess = NULL,
.operationType = DIST_OPS_ALTER,
.address = RenameRoleStmtObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Any_AlterRoleSet = {
.deparse = DeparseAlterRoleSetStmt,
.qualify = QualifyAlterRoleSetStmt,
@ -276,17 +264,6 @@ static DistributeObjectOps Any_CreateRole = {
.address = CreateRoleStmtObjectAddress,
.markDistributed = true,
};
static DistributeObjectOps Any_ReassignOwned = {
.deparse = DeparseReassignOwnedStmt,
.qualify = NULL,
.preprocess = NULL,
.postprocess = PostprocessReassignOwnedStmt,
.operationType = DIST_OPS_ALTER,
.address = NULL,
.markDistributed = false,
};
static DistributeObjectOps Any_DropOwned = {
.deparse = DeparseDropOwnedStmt,
.qualify = NULL,
@ -305,17 +282,6 @@ static DistributeObjectOps Any_DropRole = {
.address = NULL,
.markDistributed = false,
};
static DistributeObjectOps Role_Comment = {
.deparse = DeparseCommentStmt,
.qualify = NULL,
.preprocess = PreprocessAlterDistributedObjectStmt,
.postprocess = NULL,
.objectType = OBJECT_DATABASE,
.operationType = DIST_OPS_ALTER,
.address = CommentObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Any_CreateForeignServer = {
.deparse = DeparseCreateForeignServerStmt,
.qualify = NULL,
@ -399,37 +365,10 @@ static DistributeObjectOps Any_Rename = {
.markDistributed = false,
};
static DistributeObjectOps Any_SecLabel = {
.deparse = NULL,
.deparse = DeparseSecLabelStmt,
.qualify = NULL,
.preprocess = NULL,
.postprocess = PostprocessAnySecLabelStmt,
.operationType = DIST_OPS_ALTER,
.address = SecLabelStmtObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Role_SecLabel = {
.deparse = DeparseRoleSecLabelStmt,
.qualify = NULL,
.preprocess = NULL,
.postprocess = PostprocessRoleSecLabelStmt,
.operationType = DIST_OPS_ALTER,
.address = SecLabelStmtObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Table_SecLabel = {
.deparse = DeparseTableSecLabelStmt,
.qualify = NULL,
.preprocess = NULL,
.postprocess = PostprocessTableOrColumnSecLabelStmt,
.operationType = DIST_OPS_ALTER,
.address = SecLabelStmtObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Column_SecLabel = {
.deparse = DeparseColumnSecLabelStmt,
.qualify = NULL,
.preprocess = NULL,
.postprocess = PostprocessTableOrColumnSecLabelStmt,
.postprocess = PostprocessSecLabelStmt,
.operationType = DIST_OPS_ALTER,
.address = SecLabelStmtObjectAddress,
.markDistributed = false,
@ -526,28 +465,7 @@ static DistributeObjectOps Database_Alter = {
.markDistributed = false,
};
static DistributeObjectOps Database_Create = {
.deparse = DeparseCreateDatabaseStmt,
.qualify = NULL,
.preprocess = PreprocessCreateDatabaseStmt,
.postprocess = PostprocessCreateDatabaseStmt,
.objectType = OBJECT_DATABASE,
.operationType = DIST_OPS_CREATE,
.address = CreateDatabaseStmtObjectAddress,
.markDistributed = true,
};
static DistributeObjectOps Database_Drop = {
.deparse = DeparseDropDatabaseStmt,
.qualify = NULL,
.preprocess = PreprocessDropDatabaseStmt,
.postprocess = NULL,
.objectType = OBJECT_DATABASE,
.operationType = DIST_OPS_DROP,
.address = DropDatabaseStmtObjectAddress,
.markDistributed = false,
};
#if PG_VERSION_NUM >= PG_VERSION_15
static DistributeObjectOps Database_RefreshColl = {
.deparse = DeparseAlterDatabaseRefreshCollStmt,
.qualify = NULL,
@ -558,39 +476,7 @@ static DistributeObjectOps Database_RefreshColl = {
.address = NULL,
.markDistributed = false,
};
static DistributeObjectOps Database_Set = {
.deparse = DeparseAlterDatabaseSetStmt,
.qualify = NULL,
.preprocess = PreprocessAlterDatabaseSetStmt,
.postprocess = NULL,
.objectType = OBJECT_DATABASE,
.operationType = DIST_OPS_ALTER,
.address = NULL,
.markDistributed = false,
};
static DistributeObjectOps Database_Comment = {
.deparse = DeparseCommentStmt,
.qualify = NULL,
.preprocess = PreprocessAlterDistributedObjectStmt,
.postprocess = NULL,
.objectType = OBJECT_DATABASE,
.operationType = DIST_OPS_ALTER,
.address = CommentObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps Database_Rename = {
.deparse = DeparseAlterDatabaseRenameStmt,
.qualify = NULL,
.preprocess = PreprocessAlterDatabaseRenameStmt,
.postprocess = PostprocessAlterDatabaseRenameStmt,
.objectType = OBJECT_DATABASE,
.operationType = DIST_OPS_ALTER,
.address = NULL,
.markDistributed = false,
};
#endif
static DistributeObjectOps Domain_Alter = {
.deparse = DeparseAlterDomainStmt,
@ -951,6 +837,7 @@ static DistributeObjectOps Sequence_AlterOwner = {
.address = AlterSequenceOwnerStmtObjectAddress,
.markDistributed = false,
};
#if (PG_VERSION_NUM >= PG_VERSION_15)
static DistributeObjectOps Sequence_AlterPersistence = {
.deparse = DeparseAlterSequencePersistenceStmt,
.qualify = QualifyAlterSequencePersistenceStmt,
@ -960,6 +847,7 @@ static DistributeObjectOps Sequence_AlterPersistence = {
.address = AlterSequencePersistenceStmtObjectAddress,
.markDistributed = false,
};
#endif
static DistributeObjectOps Sequence_Drop = {
.deparse = DeparseDropSequenceStmt,
.qualify = QualifyDropSequenceStmt,
@ -1018,18 +906,13 @@ static DistributeObjectOps TextSearchConfig_AlterOwner = {
.markDistributed = false,
};
static DistributeObjectOps TextSearchConfig_Comment = {
.deparse = DeparseCommentStmt,
/* TODO: When adding new comment types we should create an abstracted
* qualify function, just like we have an abstract deparse
* and address function
*/
.deparse = DeparseTextSearchConfigurationCommentStmt,
.qualify = QualifyTextSearchConfigurationCommentStmt,
.preprocess = PreprocessAlterDistributedObjectStmt,
.postprocess = NULL,
.objectType = OBJECT_TSCONFIGURATION,
.operationType = DIST_OPS_ALTER,
.address = CommentObjectAddress,
.address = TextSearchConfigurationCommentObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps TextSearchConfig_Define = {
@ -1092,13 +975,13 @@ static DistributeObjectOps TextSearchDict_AlterOwner = {
.markDistributed = false,
};
static DistributeObjectOps TextSearchDict_Comment = {
.deparse = DeparseCommentStmt,
.deparse = DeparseTextSearchDictionaryCommentStmt,
.qualify = QualifyTextSearchDictionaryCommentStmt,
.preprocess = PreprocessAlterDistributedObjectStmt,
.postprocess = NULL,
.objectType = OBJECT_TSDICTIONARY,
.operationType = DIST_OPS_ALTER,
.address = CommentObjectAddress,
.address = TextSearchDictCommentObjectAddress,
.markDistributed = false,
};
static DistributeObjectOps TextSearchDict_Define = {
@ -1416,7 +1299,7 @@ static DistributeObjectOps View_Rename = {
static DistributeObjectOps Trigger_Rename = {
.deparse = NULL,
.qualify = NULL,
.preprocess = NULL,
.preprocess = PreprocessAlterTriggerRenameStmt,
.operationType = DIST_OPS_ALTER,
.postprocess = PostprocessAlterTriggerRenameStmt,
.address = NULL,
@ -1438,27 +1321,13 @@ GetDistributeObjectOps(Node *node)
return &Database_Alter;
}
case T_CreatedbStmt:
{
return &Database_Create;
}
case T_DropdbStmt:
{
return &Database_Drop;
}
#if PG_VERSION_NUM >= PG_VERSION_15
case T_AlterDatabaseRefreshCollStmt:
{
return &Database_RefreshColl;
}
case T_AlterDatabaseSetStmt:
{
return &Database_Set;
}
#endif
case T_AlterDomainStmt:
{
return &Domain_Alter;
@ -1743,6 +1612,7 @@ GetDistributeObjectOps(Node *node)
case OBJECT_SEQUENCE:
{
#if (PG_VERSION_NUM >= PG_VERSION_15)
ListCell *cmdCell = NULL;
foreach(cmdCell, stmt->cmds)
{
@ -1770,6 +1640,7 @@ GetDistributeObjectOps(Node *node)
}
}
}
#endif
/*
* Prior to PG15, the only Alter Table statement
@ -1826,16 +1697,6 @@ GetDistributeObjectOps(Node *node)
return &TextSearchDict_Comment;
}
case OBJECT_DATABASE:
{
return &Database_Comment;
}
case OBJECT_ROLE:
{
return &Role_Comment;
}
default:
{
return &NoDistributeOps;
@ -1945,11 +1806,6 @@ GetDistributeObjectOps(Node *node)
return &Any_DropOwned;
}
case T_ReassignOwnedStmt:
{
return &Any_ReassignOwned;
}
case T_DropStmt:
{
DropStmt *stmt = castNode(DropStmt, node);
@ -2146,27 +2002,7 @@ GetDistributeObjectOps(Node *node)
case T_SecLabelStmt:
{
SecLabelStmt *stmt = castNode(SecLabelStmt, node);
switch (stmt->objtype)
{
case OBJECT_ROLE:
{
return &Role_SecLabel;
}
case OBJECT_TABLE:
{
return &Table_SecLabel;
}
case OBJECT_COLUMN:
{
return &Column_SecLabel;
}
default:
return &Any_SecLabel;
}
return &Any_SecLabel;
}
case T_RenameStmt:
@ -2189,11 +2025,6 @@ GetDistributeObjectOps(Node *node)
return &Collation_Rename;
}
case OBJECT_DATABASE:
{
return &Database_Rename;
}
case OBJECT_DOMAIN:
{
return &Domain_Rename;
@ -2224,11 +2055,6 @@ GetDistributeObjectOps(Node *node)
return &Publication_Rename;
}
case OBJECT_ROLE:
{
return &Any_AlterRoleRename;
}
case OBJECT_ROUTINE:
{
return &Routine_Rename;

View File

@ -210,7 +210,7 @@ MakeCollateClauseFromOid(Oid collationOid)
getObjectIdentityParts(&collateAddress, &objName, &objArgs, false);
char *name = NULL;
foreach_declared_ptr(name, objName)
foreach_ptr(name, objName)
{
collateClause->collname = lappend(collateClause->collname, makeString(name));
}

View File

@ -274,7 +274,7 @@ PreprocessDropExtensionStmt(Node *node, const char *queryString,
/* unmark each distributed extension */
ObjectAddress *address = NULL;
foreach_declared_ptr(address, distributedExtensionAddresses)
foreach_ptr(address, distributedExtensionAddresses)
{
UnmarkObjectDistributed(address);
}
@ -313,7 +313,7 @@ FilterDistributedExtensions(List *extensionObjectList)
List *extensionNameList = NIL;
String *objectName = NULL;
foreach_declared_ptr(objectName, extensionObjectList)
foreach_ptr(objectName, extensionObjectList)
{
const char *extensionName = strVal(objectName);
const bool missingOk = true;
@ -351,7 +351,7 @@ ExtensionNameListToObjectAddressList(List *extensionObjectList)
List *extensionObjectAddressList = NIL;
String *objectName;
foreach_declared_ptr(objectName, extensionObjectList)
foreach_ptr(objectName, extensionObjectList)
{
/*
* We set missingOk to false as we assume all the objects in
@ -527,7 +527,7 @@ MarkExistingObjectDependenciesDistributedIfSupported()
List *citusTableIdList = CitusTableTypeIdList(ANY_CITUS_TABLE_TYPE);
Oid citusTableId = InvalidOid;
foreach_declared_oid(citusTableId, citusTableIdList)
foreach_oid(citusTableId, citusTableIdList)
{
if (!ShouldMarkRelationDistributed(citusTableId))
{
@ -571,7 +571,7 @@ MarkExistingObjectDependenciesDistributedIfSupported()
*/
List *viewList = GetAllViews();
Oid viewOid = InvalidOid;
foreach_declared_oid(viewOid, viewList)
foreach_oid(viewOid, viewList)
{
if (!ShouldMarkRelationDistributed(viewOid))
{
@ -605,7 +605,7 @@ MarkExistingObjectDependenciesDistributedIfSupported()
List *distributedObjectAddressList = GetDistributedObjectAddressList();
ObjectAddress *distributedObjectAddress = NULL;
foreach_declared_ptr(distributedObjectAddress, distributedObjectAddressList)
foreach_ptr(distributedObjectAddress, distributedObjectAddressList)
{
List *distributableDependencyObjectAddresses =
GetDistributableDependenciesForObject(distributedObjectAddress);
@ -627,7 +627,7 @@ MarkExistingObjectDependenciesDistributedIfSupported()
SetLocalEnableMetadataSync(false);
ObjectAddress *objectAddress = NULL;
foreach_declared_ptr(objectAddress, uniqueObjectAddresses)
foreach_ptr(objectAddress, uniqueObjectAddresses)
{
MarkObjectDistributed(objectAddress);
}
@ -831,7 +831,7 @@ IsDropCitusExtensionStmt(Node *parseTree)
/* now that we have a DropStmt, check if citus extension is among the objects to dropped */
String *objectName;
foreach_declared_ptr(objectName, dropStmt->objects)
foreach_ptr(objectName, dropStmt->objects)
{
const char *extensionName = strVal(objectName);
@ -1061,7 +1061,7 @@ GenerateGrantCommandsOnExtensionDependentFDWs(Oid extensionId)
List *FDWOids = GetDependentFDWsToExtension(extensionId);
Oid FDWOid = InvalidOid;
foreach_declared_oid(FDWOid, FDWOids)
foreach_oid(FDWOid, FDWOids)
{
Acl *aclEntry = GetPrivilegesForFDW(FDWOid);

View File

@ -202,7 +202,7 @@ ErrorIfUnsupportedForeignConstraintExists(Relation relation, char referencingDis
List *foreignKeyOids = GetForeignKeyOids(referencingTableId, flags);
Oid foreignKeyOid = InvalidOid;
foreach_declared_oid(foreignKeyOid, foreignKeyOids)
foreach_oid(foreignKeyOid, foreignKeyOids)
{
HeapTuple heapTuple = SearchSysCache1(CONSTROID, ObjectIdGetDatum(foreignKeyOid));
@ -414,7 +414,7 @@ ForeignKeySetsNextValColumnToDefault(HeapTuple pgConstraintTuple)
List *setDefaultAttrs = ForeignKeyGetDefaultingAttrs(pgConstraintTuple);
AttrNumber setDefaultAttr = InvalidAttrNumber;
foreach_declared_int(setDefaultAttr, setDefaultAttrs)
foreach_int(setDefaultAttr, setDefaultAttrs)
{
if (ColumnDefaultsToNextVal(pgConstraintForm->conrelid, setDefaultAttr))
{
@ -467,6 +467,7 @@ ForeignKeyGetDefaultingAttrs(HeapTuple pgConstraintTuple)
}
List *onDeleteSetDefColumnList = NIL;
#if PG_VERSION_NUM >= PG_VERSION_15
Datum onDeleteSetDefColumnsDatum = SysCacheGetAttr(CONSTROID, pgConstraintTuple,
Anum_pg_constraint_confdelsetcols,
&isNull);
@ -481,6 +482,7 @@ ForeignKeyGetDefaultingAttrs(HeapTuple pgConstraintTuple)
onDeleteSetDefColumnList =
IntegerArrayTypeToList(DatumGetArrayTypeP(onDeleteSetDefColumnsDatum));
}
#endif
if (list_length(onDeleteSetDefColumnList) == 0)
{
@ -725,7 +727,7 @@ ColumnAppearsInForeignKeyToReferenceTable(char *columnName, Oid relationId)
GetForeignKeyIdsForColumn(columnName, relationId, searchForeignKeyColumnFlags);
Oid foreignKeyId = InvalidOid;
foreach_declared_oid(foreignKeyId, foreignKeyIdsColumnAppeared)
foreach_oid(foreignKeyId, foreignKeyIdsColumnAppeared)
{
Oid referencedTableId = GetReferencedTableId(foreignKeyId);
if (IsCitusTableType(referencedTableId, REFERENCE_TABLE))
@ -899,7 +901,7 @@ GetForeignConstraintCommandsInternal(Oid relationId, int flags)
int saveNestLevel = PushEmptySearchPath();
Oid foreignKeyOid = InvalidOid;
foreach_declared_oid(foreignKeyOid, foreignKeyOids)
foreach_oid(foreignKeyOid, foreignKeyOids)
{
char *statementDef = pg_get_constraintdef_command(foreignKeyOid);
@ -1155,7 +1157,7 @@ static Oid
FindForeignKeyOidWithName(List *foreignKeyOids, const char *inputConstraintName)
{
Oid foreignKeyOid = InvalidOid;
foreach_declared_oid(foreignKeyOid, foreignKeyOids)
foreach_oid(foreignKeyOid, foreignKeyOids)
{
char *constraintName = get_constraint_name(foreignKeyOid);
@ -1470,7 +1472,7 @@ RelationInvolvedInAnyNonInheritedForeignKeys(Oid relationId)
List *foreignKeysRelationInvolved = list_concat(referencingForeignKeys,
referencedForeignKeys);
Oid foreignKeyId = InvalidOid;
foreach_declared_oid(foreignKeyId, foreignKeysRelationInvolved)
foreach_oid(foreignKeyId, foreignKeysRelationInvolved)
{
HeapTuple heapTuple = SearchSysCache1(CONSTROID, ObjectIdGetDatum(foreignKeyId));
if (!HeapTupleIsValid(heapTuple))

View File

@ -86,7 +86,7 @@ static bool
NameListHasFDWOwnedByDistributedExtension(List *FDWNames)
{
String *FDWValue = NULL;
foreach_declared_ptr(FDWValue, FDWNames)
foreach_ptr(FDWValue, FDWNames)
{
/* captures the extension address during lookup */
ObjectAddress *extensionAddress = palloc0(sizeof(ObjectAddress));

View File

@ -229,7 +229,7 @@ RecreateForeignServerStmt(Oid serverId)
int location = -1;
DefElem *option = NULL;
foreach_declared_ptr(option, server->options)
foreach_ptr(option, server->options)
{
DefElem *copyOption = makeDefElem(option->defname, option->arg, location);
createStmt->options = lappend(createStmt->options, copyOption);
@ -247,7 +247,7 @@ static bool
NameListHasDistributedServer(List *serverNames)
{
String *serverValue = NULL;
foreach_declared_ptr(serverValue, serverNames)
foreach_ptr(serverValue, serverNames)
{
List *addresses = GetObjectAddressByServerName(strVal(serverValue), false);

View File

@ -256,7 +256,7 @@ create_distributed_function(PG_FUNCTION_ARGS)
createFunctionSQL, alterFunctionOwnerSQL);
List *grantDDLCommands = GrantOnFunctionDDLCommands(funcOid);
char *grantOnFunctionSQL = NULL;
foreach_declared_ptr(grantOnFunctionSQL, grantDDLCommands)
foreach_ptr(grantOnFunctionSQL, grantDDLCommands)
{
appendStringInfo(&ddlCommand, ";%s", grantOnFunctionSQL);
}
@ -370,7 +370,7 @@ ErrorIfAnyNodeDoesNotHaveMetadata(void)
ActivePrimaryNonCoordinatorNodeList(ShareLock);
WorkerNode *workerNode = NULL;
foreach_declared_ptr(workerNode, workerNodeList)
foreach_ptr(workerNode, workerNodeList)
{
if (!workerNode->hasMetadata)
{
@ -885,7 +885,6 @@ UpdateFunctionDistributionInfo(const ObjectAddress *distAddress,
char *workerPgDistObjectUpdateCommand =
MarkObjectsDistributedCreateCommand(objectAddressList,
NIL,
distArgumentIndexList,
colocationIdList,
forceDelegationList);
@ -981,6 +980,7 @@ GetAggregateDDLCommand(const RegProcedure funcOid, bool useCreateOrReplace)
char *argmodes = NULL;
int insertorderbyat = -1;
int argsprinted = 0;
int inputargno = 0;
HeapTuple proctup = SearchSysCache1(PROCOID, funcOid);
if (!HeapTupleIsValid(proctup))
@ -1060,6 +1060,7 @@ GetAggregateDDLCommand(const RegProcedure funcOid, bool useCreateOrReplace)
}
}
inputargno++; /* this is a 1-based counter */
if (argsprinted == insertorderbyat)
{
appendStringInfoString(&buf, " ORDER BY ");
@ -1476,7 +1477,7 @@ CreateFunctionStmtObjectAddress(Node *node, bool missing_ok, bool isPostprocess)
objectWithArgs->objname = stmt->funcname;
FunctionParameter *funcParam = NULL;
foreach_declared_ptr(funcParam, stmt->parameters)
foreach_ptr(funcParam, stmt->parameters)
{
if (ShouldAddFunctionSignature(funcParam->mode))
{
@ -1519,7 +1520,7 @@ DefineAggregateStmtObjectAddress(Node *node, bool missing_ok, bool isPostprocess
if (stmt->args != NIL)
{
FunctionParameter *funcParam = NULL;
foreach_declared_ptr(funcParam, linitial(stmt->args))
foreach_ptr(funcParam, linitial(stmt->args))
{
objectWithArgs->objargs = lappend(objectWithArgs->objargs,
funcParam->argType);
@ -1528,7 +1529,7 @@ DefineAggregateStmtObjectAddress(Node *node, bool missing_ok, bool isPostprocess
else
{
DefElem *defItem = NULL;
foreach_declared_ptr(defItem, stmt->definition)
foreach_ptr(defItem, stmt->definition)
{
/*
* If no explicit args are given, pg includes basetype in the signature.
@ -1933,7 +1934,7 @@ static void
ErrorIfUnsupportedAlterFunctionStmt(AlterFunctionStmt *stmt)
{
DefElem *action = NULL;
foreach_declared_ptr(action, stmt->actions)
foreach_ptr(action, stmt->actions)
{
if (strcmp(action->defname, "set") == 0)
{
@ -2040,7 +2041,7 @@ PreprocessGrantOnFunctionStmt(Node *node, const char *queryString,
List *grantFunctionList = NIL;
ObjectAddress *functionAddress = NULL;
foreach_declared_ptr(functionAddress, distributedFunctions)
foreach_ptr(functionAddress, distributedFunctions)
{
ObjectWithArgs *distFunction = ObjectWithArgsFromOid(
functionAddress->objectId);
@ -2083,7 +2084,7 @@ PostprocessGrantOnFunctionStmt(Node *node, const char *queryString)
}
ObjectAddress *functionAddress = NULL;
foreach_declared_ptr(functionAddress, distributedFunctions)
foreach_ptr(functionAddress, distributedFunctions)
{
EnsureAllObjectDependenciesExistOnAllNodes(list_make1(functionAddress));
}
@ -2120,7 +2121,7 @@ FilterDistributedFunctions(GrantStmt *grantStmt)
/* iterate over all namespace names provided to get their oid's */
String *namespaceValue = NULL;
foreach_declared_ptr(namespaceValue, grantStmt->objects)
foreach_ptr(namespaceValue, grantStmt->objects)
{
char *nspname = strVal(namespaceValue);
bool missing_ok = false;
@ -2132,7 +2133,7 @@ FilterDistributedFunctions(GrantStmt *grantStmt)
* iterate over all distributed functions to filter the ones
* that belong to one of the namespaces from above
*/
foreach_declared_ptr(distributedFunction, distributedFunctionList)
foreach_ptr(distributedFunction, distributedFunctionList)
{
Oid namespaceOid = get_func_namespace(distributedFunction->objectId);
@ -2151,7 +2152,7 @@ FilterDistributedFunctions(GrantStmt *grantStmt)
{
bool missingOk = false;
ObjectWithArgs *objectWithArgs = NULL;
foreach_declared_ptr(objectWithArgs, grantStmt->objects)
foreach_ptr(objectWithArgs, grantStmt->objects)
{
ObjectAddress *functionAddress = palloc0(sizeof(ObjectAddress));
functionAddress->classId = ProcedureRelationId;


@ -17,7 +17,6 @@
#include "distributed/citus_ruleutils.h"
#include "distributed/commands.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/deparser.h"
#include "distributed/metadata/distobject.h"
#include "distributed/metadata_cache.h"
#include "distributed/version_compat.h"
@ -33,6 +32,7 @@ static List * CollectGrantTableIdList(GrantStmt *grantStmt);
* needed during the worker node portion of DDL execution before returning the
* DDLJobs in a List. If no distributed table is involved, this returns NIL.
*
* NB: So far column level privileges are not supported.
*/
List *
PreprocessGrantStmt(Node *node, const char *queryString,
@ -70,12 +70,9 @@ PreprocessGrantStmt(Node *node, const char *queryString,
return NIL;
}
EnsureCoordinator();
/* deparse the privileges */
if (grantStmt->privileges == NIL)
{
/* this is used for table level only */
appendStringInfo(&privsString, "ALL");
}
else
@ -91,44 +88,18 @@ PreprocessGrantStmt(Node *node, const char *queryString,
{
appendStringInfoString(&privsString, ", ");
}
if (priv->priv_name)
{
appendStringInfo(&privsString, "%s", priv->priv_name);
}
/*
* ALL can only be specified alone.
* The parser does not add ALL as a keyword in priv_name, but because
* column(s) are defined, grantStmt->privileges is set. So we need to
* handle this special case here (see the if condition above).
*/
else if (isFirst)
{
/* this is used for column level only */
appendStringInfo(&privsString, "ALL");
}
/*
* Instead of relying only on the syntax check done by Postgres and
* adding an assert here, add a default ERROR if ALL is not first
* and no priv_name is defined.
*/
else
{
ereport(ERROR, (errcode(ERRCODE_INTERNAL_ERROR),
errmsg("Cannot parse GRANT/REVOKE privileges")));
}
isFirst = false;
if (priv->cols != NIL)
{
StringInfoData colsString;
initStringInfo(&colsString);
AppendColumnNameList(&colsString, priv->cols);
appendStringInfo(&privsString, "%s", colsString.data);
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("grant/revoke on column list is currently "
"unsupported")));
}
Assert(priv->priv_name != NULL);
appendStringInfo(&privsString, "%s", priv->priv_name);
}
}
@ -182,15 +153,6 @@ PreprocessGrantStmt(Node *node, const char *queryString,
appendStringInfo(&ddlString, "REVOKE %s%s ON %s FROM %s",
grantOption, privsString.data, targetString.data,
granteesString.data);
if (grantStmt->behavior == DROP_CASCADE)
{
appendStringInfoString(&ddlString, " CASCADE");
}
else
{
appendStringInfoString(&ddlString, " RESTRICT");
}
}
DDLJob *ddlJob = palloc0(sizeof(DDLJob));


@ -10,8 +10,6 @@
#include "postgres.h"
#include "miscadmin.h"
#include "access/genam.h"
#include "access/htup_details.h"
#include "access/xact.h"
@ -19,6 +17,13 @@
#include "catalog/index.h"
#include "catalog/namespace.h"
#include "catalog/pg_class.h"
#include "pg_version_constants.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "catalog/pg_namespace.h"
#endif
#include "miscadmin.h"
#include "commands/defrem.h"
#include "commands/tablecmds.h"
#include "lib/stringinfo.h"
@ -31,8 +36,6 @@
#include "utils/lsyscache.h"
#include "utils/syscache.h"
#include "pg_version_constants.h"
#include "distributed/citus_ruleutils.h"
#include "distributed/commands.h"
#include "distributed/commands/utility_hook.h"
@ -53,10 +56,6 @@
#include "distributed/version_compat.h"
#include "distributed/worker_manager.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "catalog/pg_namespace.h"
#endif
/* Local functions forward declarations for helper functions */
static void ErrorIfCreateIndexHasTooManyColumns(IndexStmt *createIndexStatement);
@ -184,8 +183,6 @@ PreprocessIndexStmt(Node *node, const char *createIndexCommand,
return NIL;
}
EnsureCoordinator();
if (createIndexStatement->idxname == NULL)
{
/*
@ -337,7 +334,7 @@ ExecuteFunctionOnEachTableIndex(Oid relationId, PGIndexProcessor pgIndexProcesso
List *indexIdList = RelationGetIndexList(relation);
Oid indexId = InvalidOid;
foreach_declared_oid(indexId, indexIdList)
foreach_oid(indexId, indexIdList)
{
HeapTuple indexTuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexId));
if (!HeapTupleIsValid(indexTuple))
@ -493,7 +490,6 @@ GenerateCreateIndexDDLJob(IndexStmt *createIndexStatement, const char *createInd
ddlJob->startNewTransaction = createIndexStatement->concurrent;
ddlJob->metadataSyncCommand = createIndexCommand;
ddlJob->taskList = CreateIndexTaskList(createIndexStatement);
ddlJob->warnForPartialFailure = true;
return ddlJob;
}
@ -653,7 +649,6 @@ PreprocessReindexStmt(Node *node, const char *reindexCommand,
"concurrently");
ddlJob->metadataSyncCommand = reindexCommand;
ddlJob->taskList = CreateReindexTaskList(relationId, reindexStatement);
ddlJob->warnForPartialFailure = true;
ddlJobs = list_make1(ddlJob);
}
@ -708,7 +703,7 @@ PreprocessDropIndexStmt(Node *node, const char *dropIndexCommand,
/* check if any of the indexes being dropped belong to a distributed table */
List *objectNameList = NULL;
foreach_declared_ptr(objectNameList, dropIndexStatement->objects)
foreach_ptr(objectNameList, dropIndexStatement->objects)
{
struct DropRelationCallbackState state;
uint32 rvrFlags = RVR_MISSING_OK;
@ -782,7 +777,6 @@ PreprocessDropIndexStmt(Node *node, const char *dropIndexCommand,
ddlJob->metadataSyncCommand = dropIndexCommand;
ddlJob->taskList = DropIndexTaskList(distributedRelationId, distributedIndexId,
dropIndexStatement);
ddlJob->warnForPartialFailure = true;
ddlJobs = list_make1(ddlJob);
}
@ -880,7 +874,7 @@ ErrorIfUnsupportedAlterIndexStmt(AlterTableStmt *alterTableStatement)
/* error out if any of the subcommands are unsupported */
List *commandList = alterTableStatement->cmds;
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
AlterTableType alterTableType = command->subtype;
@ -932,7 +926,7 @@ CreateIndexTaskList(IndexStmt *indexStmt)
LockShardListMetadata(shardIntervalList, ShareLock);
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
uint64 shardId = shardInterval->shardId;
@ -947,7 +941,7 @@ CreateIndexTaskList(IndexStmt *indexStmt)
task->dependentTaskList = NULL;
task->anchorShardId = shardId;
task->taskPlacementList = ActiveShardPlacementList(shardId);
task->cannotBeExecutedInTransaction = indexStmt->concurrent;
task->cannotBeExecutedInTransction = indexStmt->concurrent;
taskList = lappend(taskList, task);
@ -977,7 +971,7 @@ CreateReindexTaskList(Oid relationId, ReindexStmt *reindexStmt)
LockShardListMetadata(shardIntervalList, ShareLock);
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
uint64 shardId = shardInterval->shardId;
@ -992,7 +986,7 @@ CreateReindexTaskList(Oid relationId, ReindexStmt *reindexStmt)
task->dependentTaskList = NULL;
task->anchorShardId = shardId;
task->taskPlacementList = ActiveShardPlacementList(shardId);
task->cannotBeExecutedInTransaction =
task->cannotBeExecutedInTransction =
IsReindexWithParam_compat(reindexStmt, "concurrently");
taskList = lappend(taskList, task);
@ -1115,7 +1109,6 @@ RangeVarCallbackForReindexIndex(const RangeVar *relation, Oid relId, Oid oldRelI
char relkind;
struct ReindexIndexCallbackState *state = arg;
LOCKMODE table_lockmode;
Oid table_oid;
/*
* Lock level here should match table lock in reindex_index() for
@ -1153,24 +1146,13 @@ RangeVarCallbackForReindexIndex(const RangeVar *relation, Oid relId, Oid oldRelI
errmsg("\"%s\" is not an index", relation->relname)));
/* Check permissions */
#if PG_VERSION_NUM >= PG_VERSION_17
table_oid = IndexGetRelation(relId, true);
if (OidIsValid(table_oid))
{
AclResult aclresult = pg_class_aclcheck(table_oid, GetUserId(), ACL_MAINTAIN);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_INDEX, relation->relname);
}
#else
if (!object_ownercheck(RelationRelationId, relId, GetUserId()))
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX, relation->relname);
#endif
/* Lock heap before index to avoid deadlock. */
if (relId != oldRelId)
{
table_oid = IndexGetRelation(relId, true);
Oid table_oid = IndexGetRelation(relId, true);
/*
* If the OID isn't valid, it means the index was concurrently
@ -1238,7 +1220,7 @@ ErrorIfUnsupportedIndexStmt(IndexStmt *createIndexStatement)
Var *partitionKey = DistPartitionKeyOrError(relationId);
List *indexParameterList = createIndexStatement->indexParams;
IndexElem *indexElement = NULL;
foreach_declared_ptr(indexElement, indexParameterList)
foreach_ptr(indexElement, indexParameterList)
{
const char *columnName = indexElement->name;
@ -1307,7 +1289,7 @@ DropIndexTaskList(Oid relationId, Oid indexId, DropStmt *dropStmt)
LockShardListMetadata(shardIntervalList, ShareLock);
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
uint64 shardId = shardInterval->shardId;
char *shardIndexName = pstrdup(indexName);
@ -1330,7 +1312,7 @@ DropIndexTaskList(Oid relationId, Oid indexId, DropStmt *dropStmt)
task->dependentTaskList = NULL;
task->anchorShardId = shardId;
task->taskPlacementList = ActiveShardPlacementList(shardId);
task->cannotBeExecutedInTransaction = dropStmt->concurrent;
task->cannotBeExecutedInTransction = dropStmt->concurrent;
taskList = lappend(taskList, task);


@ -64,6 +64,28 @@
#include "commands/copy.h"
#include "commands/defrem.h"
#include "commands/progress.h"
#include "pg_version_constants.h"
#include "distributed/citus_safe_lib.h"
#include "distributed/commands/multi_copy.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/coordinator_protocol.h"
#include "distributed/intermediate_results.h"
#include "distributed/listutils.h"
#include "distributed/local_executor.h"
#include "distributed/locally_reserved_shared_connections.h"
#include "distributed/log_utils.h"
#include "distributed/metadata_cache.h"
#include "distributed/multi_executor.h"
#include "distributed/multi_partitioning_utils.h"
#include "distributed/multi_physical_planner.h"
#include "distributed/multi_router_planner.h"
#include "distributed/placement_connection.h"
#include "distributed/relation_access_tracking.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "distributed/relation_utils.h"
#endif
#include "executor/executor.h"
#include "foreign/foreign.h"
#include "libpq/libpq.h"
@ -80,41 +102,18 @@
#include "utils/rel.h"
#include "utils/syscache.h"
#include "pg_version_constants.h"
#include "distributed/citus_safe_lib.h"
#include "distributed/commands/multi_copy.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/coordinator_protocol.h"
#include "distributed/hash_helpers.h"
#include "distributed/intermediate_results.h"
#include "distributed/listutils.h"
#include "distributed/local_executor.h"
#include "distributed/local_multi_copy.h"
#include "distributed/locally_reserved_shared_connections.h"
#include "distributed/log_utils.h"
#include "distributed/metadata_cache.h"
#include "distributed/multi_executor.h"
#include "distributed/multi_partitioning_utils.h"
#include "distributed/multi_physical_planner.h"
#include "distributed/multi_router_planner.h"
#include "distributed/placement_connection.h"
#include "distributed/relation_access_tracking.h"
#include "distributed/remote_commands.h"
#include "distributed/remote_transaction.h"
#include "distributed/replication_origin_session_utils.h"
#include "distributed/resource_lock.h"
#include "distributed/shard_pruning.h"
#include "distributed/shared_connection_stats.h"
#include "distributed/stats/stat_counters.h"
#include "distributed/transmit.h"
#include "distributed/version_compat.h"
#include "distributed/worker_protocol.h"
#if PG_VERSION_NUM >= PG_VERSION_16
#include "distributed/relation_utils.h"
#endif
/* constant used in binary protocol */
static const char BinarySignature[11] = "PGCOPY\n\377\r\n\0";
@ -302,7 +301,6 @@ static SelectStmt * CitusCopySelect(CopyStmt *copyStatement);
static void CitusCopyTo(CopyStmt *copyStatement, QueryCompletion *completionTag);
static int64 ForwardCopyDataFromConnection(CopyOutState copyOutState,
MultiConnection *connection);
static void ErrorIfCopyHasOnErrorLogVerbosity(CopyStmt *copyStatement);
/* Private functions copied and adapted from copy.c in PostgreSQL */
static void SendCopyBegin(CopyOutState cstate);
@ -348,7 +346,6 @@ static LocalCopyStatus GetLocalCopyStatus(void);
static bool ShardIntervalListHasLocalPlacements(List *shardIntervalList);
static void LogLocalCopyToRelationExecution(uint64 shardId);
static void LogLocalCopyToFileExecution(uint64 shardId);
static void ErrorIfMergeInCopy(CopyStmt *copyStatement);
/* exports for SQL callable functions */
@ -500,14 +497,10 @@ CopyToExistingShards(CopyStmt *copyStatement, QueryCompletion *completionTag)
/* set up the destination for the COPY */
const bool publishableData = true;
/* we want to track query counters for "COPY (to) distributed-table .." commands */
const bool trackQueryCounters = true;
CitusCopyDestReceiver *copyDest = CreateCitusCopyDestReceiver(tableId, columnNameList,
partitionColumnIndex,
executorState, NULL,
publishableData,
trackQueryCounters);
publishableData);
/* if the user specified an explicit append-to_shard option, write to it */
uint64 appendShardId = ProcessAppendToShardOption(tableId, copyStatement);
@ -1882,15 +1875,11 @@ CopyFlushOutput(CopyOutState cstate, char *start, char *pointer)
* of intermediate results that are co-located with the actual table.
* The names of the intermediate results will be of the form:
* intermediateResultIdPrefix_<shardid>
*
* If trackQueryCounters is true, the COPY will increment the query stat
* counters as needed at the end of the COPY.
*/
CitusCopyDestReceiver *
CreateCitusCopyDestReceiver(Oid tableId, List *columnNameList, int partitionColumnIndex,
EState *executorState,
char *intermediateResultIdPrefix, bool isPublishable,
bool trackQueryCounters)
char *intermediateResultIdPrefix, bool isPublishable)
{
CitusCopyDestReceiver *copyDest = (CitusCopyDestReceiver *) palloc0(
sizeof(CitusCopyDestReceiver));
@ -1910,7 +1899,6 @@ CreateCitusCopyDestReceiver(Oid tableId, List *columnNameList, int partitionColu
copyDest->colocatedIntermediateResultIdPrefix = intermediateResultIdPrefix;
copyDest->memoryContext = CurrentMemoryContext;
copyDest->isPublishable = isPublishable;
copyDest->trackQueryCounters = trackQueryCounters;
return copyDest;
}
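/*
 * Editorial usage sketch (assumed caller, not part of the original diff;
 * shown with the variant that lacks the trackQueryCounters argument): a
 * COPY into a distributed table that writes directly to shards rather
 * than to colocated intermediate results could set up its receiver as:
 *
 *   CitusCopyDestReceiver *dest =
 *       CreateCitusCopyDestReceiver(tableId, columnNameList,
 *                                   partitionColumnIndex, executorState,
 *                                   NULL, true);
 *
 * Passing NULL skips the colocated intermediate-result prefix; true marks
 * the copied data as publishable.
 */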
@ -1969,7 +1957,7 @@ ShardIntervalListHasLocalPlacements(List *shardIntervalList)
{
int32 localGroupId = GetLocalGroupId();
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
if (ActiveShardPlacementOnGroup(localGroupId, shardInterval->shardId) != NULL)
{
@ -2464,7 +2452,7 @@ ProcessAppendToShardOption(Oid relationId, CopyStmt *copyStatement)
bool appendToShardSet = false;
DefElem *defel = NULL;
foreach_declared_ptr(defel, copyStatement->options)
foreach_ptr(defel, copyStatement->options)
{
if (strncmp(defel->defname, APPEND_TO_SHARD_OPTION, NAMEDATALEN) == 0)
{
@ -2559,8 +2547,12 @@ ShardIdForTuple(CitusCopyDestReceiver *copyDest, Datum *columnValues, bool *colu
if (columnNulls[partitionColumnIndex])
{
char *qualifiedTableName = generate_qualified_relation_name(
copyDest->distributedRelationId);
Oid relationId = copyDest->distributedRelationId;
char *relationName = get_rel_name(relationId);
Oid schemaOid = get_rel_namespace(relationId);
char *schemaName = get_namespace_name(schemaOid);
char *qualifiedTableName = quote_qualified_identifier(schemaName,
relationName);
ereport(ERROR, (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
errmsg("the partition column of table %s cannot be NULL",
@ -2597,9 +2589,8 @@ ShardIdForTuple(CitusCopyDestReceiver *copyDest, Datum *columnValues, bool *colu
/*
* CitusCopyDestReceiverShutdown implements the rShutdown interface of
* CitusCopyDestReceiver. It ends the COPY on all the open connections, closes
* the relation and increments the query stat counters based on the shards
* copied into if requested.
* CitusCopyDestReceiver. It ends the COPY on all the open connections and closes
* the relation.
*/
static void
CitusCopyDestReceiverShutdown(DestReceiver *destReceiver)
@ -2610,26 +2601,6 @@ CitusCopyDestReceiverShutdown(DestReceiver *destReceiver)
ListCell *connectionStateCell = NULL;
Relation distributedRelation = copyDest->distributedRelation;
/*
* Increment the query stat counters based on the shards copied into
* if requested.
*/
if (copyDest->trackQueryCounters)
{
int copiedShardCount =
copyDest->shardStateHash ?
hash_get_num_entries(copyDest->shardStateHash) :
0;
if (copiedShardCount <= 1)
{
IncrementStatCounterForMyDb(STAT_QUERY_EXECUTION_SINGLE_SHARD);
}
else
{
IncrementStatCounterForMyDb(STAT_QUERY_EXECUTION_MULTI_SHARD);
}
}
List *connectionStateList = ConnectionStateList(connectionStateHash);
FinishLocalColocatedIntermediateFiles(copyDest);
@ -2696,6 +2667,7 @@ CreateLocalColocatedIntermediateFile(CitusCopyDestReceiver *copyDest,
CreateIntermediateResultsDirectory();
const int fileFlags = (O_CREAT | O_RDWR | O_TRUNC);
const int fileMode = (S_IRUSR | S_IWUSR);
StringInfo filePath = makeStringInfo();
appendStringInfo(filePath, "%s_%ld", copyDest->colocatedIntermediateResultIdPrefix,
@ -2703,7 +2675,7 @@ CreateLocalColocatedIntermediateFile(CitusCopyDestReceiver *copyDest,
const char *fileName = QueryResultFileName(filePath->data);
shardState->fileDest =
FileCompatFromFileStart(FileOpenForTransmit(fileName, fileFlags));
FileCompatFromFileStart(FileOpenForTransmit(fileName, fileFlags, fileMode));
CopyOutState localFileCopyOutState = shardState->copyOutState;
bool isBinaryCopy = localFileCopyOutState->binary;
@ -2856,70 +2828,6 @@ CopyStatementHasFormat(CopyStmt *copyStatement, char *formatName)
}
/*
* ErrorIfCopyHasOnErrorLogVerbosity errors out if the COPY statement
* has on_error option or log_verbosity option specified
*/
static void
ErrorIfCopyHasOnErrorLogVerbosity(CopyStmt *copyStatement)
{
#if PG_VERSION_NUM >= PG_VERSION_17
bool log_verbosity = false;
foreach_ptr(DefElem, option, copyStatement->options)
{
if (strcmp(option->defname, "on_error") == 0)
{
ereport(ERROR, (errmsg(
"Citus does not support COPY FROM with ON_ERROR option.")));
}
else if (strcmp(option->defname, "log_verbosity") == 0)
{
log_verbosity = true;
}
}
/*
* Given that log_verbosity is currently used in COPY FROM
* when the ON_ERROR option is set to ignore, it makes more
* sense to error out for the ON_ERROR option first. For this
* reason, we don't error out in the previous loop directly.
* Relevant PG17 commit: https://github.com/postgres/postgres/commit/f5a227895
*/
if (log_verbosity)
{
ereport(ERROR, (errmsg(
"Citus does not support COPY FROM with LOG_VERBOSITY option.")));
}
#endif
}
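/*
 * Editorial example (illustrative, not from the original source): on PG17,
 * either of the following COPY commands would hit the errors above instead
 * of being accepted:
 *
 *   COPY dist_table FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR ignore);
 *   COPY dist_table FROM '/tmp/data.csv' WITH (FORMAT csv, LOG_VERBOSITY verbose);
 */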
/*
* ErrorIfMergeInCopy raises an error if MERGE is used in a COPY statement
* that involves Citus tables, as we don't support this yet.
* Relevant PG17 commit: c649fa24a
*/
static void
ErrorIfMergeInCopy(CopyStmt *copyStatement)
{
#if PG_VERSION_NUM < 170000
return;
#else
if (!copyStatement->relation && (IsA(copyStatement->query, MergeStmt)))
{
/*
* This path is currently not reachable because Merge in COPY can
* only work with a RETURNING clause, and a RETURNING check
* will error out sooner for Citus
*/
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("MERGE with Citus tables "
"is not yet supported in COPY")));
}
#endif
}
/*
* ProcessCopyStmt handles Citus specific concerns for COPY like supporting
* COPYing from distributed tables and preventing unsupported actions. The
@ -2957,8 +2865,6 @@ ProcessCopyStmt(CopyStmt *copyStatement, QueryCompletion *completionTag, const
*/
if (copyStatement->relation != NULL)
{
ErrorIfMergeInCopy(copyStatement);
bool isFrom = copyStatement->is_from;
/* consider using RangeVarGetRelidExtended to check perms before locking */
@ -2996,8 +2902,6 @@ ProcessCopyStmt(CopyStmt *copyStatement, QueryCompletion *completionTag, const
"Citus does not support COPY FROM with WHERE")));
}
ErrorIfCopyHasOnErrorLogVerbosity(copyStatement);
/* check permissions, we're bypassing postgres' normal checks */
CheckCopyPermissions(copyStatement);
CitusCopyFrom(copyStatement, completionTag);
@ -3172,15 +3076,6 @@ CitusCopyTo(CopyStmt *copyStatement, QueryCompletion *completionTag)
SendCopyEnd(copyOutState);
if (list_length(shardIntervalList) <= 1)
{
IncrementStatCounterForMyDb(STAT_QUERY_EXECUTION_SINGLE_SHARD);
}
else
{
IncrementStatCounterForMyDb(STAT_QUERY_EXECUTION_MULTI_SHARD);
}
table_close(distributedRelation, AccessShareLock);
if (completionTag != NULL)


@ -1,351 +0,0 @@
/*-------------------------------------------------------------------------
*
* non_main_db_distribute_object_ops.c
*
* Routines to support node-wide object management commands from non-main
* databases.
*
* RunPreprocessNonMainDBCommand and RunPostprocessNonMainDBCommand are
* the entrypoints for this module. These functions are called from
* utility_hook.c to support some of the node-wide object management
* commands from non-main databases.
*
* To add support for a new command type, one needs to define a new
* NonMainDbDistributeObjectOps object within OperationArray. Also, if
* the command requires marking or unmarking some objects as distributed,
* the necessary operations can be implemented in
* RunPreprocessNonMainDBCommand and RunPostprocessNonMainDBCommand.
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/xact.h"
#include "catalog/pg_authid_d.h"
#include "nodes/nodes.h"
#include "nodes/parsenodes.h"
#include "utils/builtins.h"
#include "distributed/commands.h"
#include "distributed/deparser.h"
#include "distributed/listutils.h"
#include "distributed/metadata_cache.h"
#include "distributed/remote_transaction.h"
#define EXECUTE_COMMAND_ON_REMOTE_NODES_AS_USER \
"SELECT citus_internal.execute_command_on_remote_nodes_as_user(%s, %s)"
#define START_MANAGEMENT_TRANSACTION \
"SELECT citus_internal.start_management_transaction('%lu')"
#define MARK_OBJECT_DISTRIBUTED \
"SELECT citus_internal.mark_object_distributed(%d, %s, %d, %s)"
#define UNMARK_OBJECT_DISTRIBUTED \
"SELECT pg_catalog.citus_unmark_object_distributed(%d, %d, %d, %s)"
/*
* NonMainDbDistributeObjectOps contains the necessary callbacks / flags to
* support node-wide object management commands from non-main databases.
*
* cannotBeExecutedInTransaction:
* Indicates whether the statement cannot be executed in a transaction. If
* this is set to true, the statement will be executed directly on the main
* database because there are no transactional visibility issues for such
* commands.
*
* checkSupportedObjectType:
* Callback function that checks whether the type of the object referred to
* by the given statement is supported. Can be NULL if not applicable for the
* statement type.
*/
typedef struct NonMainDbDistributeObjectOps
{
bool cannotBeExecutedInTransaction;
bool (*checkSupportedObjectType)(Node *parsetree);
} NonMainDbDistributeObjectOps;
/*
* checkSupportedObjectType callbacks for OperationArray.
*/
static bool CreateDbStmtCheckSupportedObjectType(Node *node);
static bool DropDbStmtCheckSupportedObjectType(Node *node);
static bool GrantStmtCheckSupportedObjectType(Node *node);
static bool SecLabelStmtCheckSupportedObjectType(Node *node);
/*
* OperationArray that holds NonMainDbDistributeObjectOps for different command types.
*/
static const NonMainDbDistributeObjectOps *const OperationArray[] = {
[T_CreateRoleStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = false,
.checkSupportedObjectType = NULL
},
[T_DropRoleStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = false,
.checkSupportedObjectType = NULL
},
[T_AlterRoleStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = false,
.checkSupportedObjectType = NULL
},
[T_GrantRoleStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = false,
.checkSupportedObjectType = NULL
},
[T_CreatedbStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = true,
.checkSupportedObjectType = CreateDbStmtCheckSupportedObjectType
},
[T_DropdbStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = true,
.checkSupportedObjectType = DropDbStmtCheckSupportedObjectType
},
[T_GrantStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = false,
.checkSupportedObjectType = GrantStmtCheckSupportedObjectType
},
[T_SecLabelStmt] = &(NonMainDbDistributeObjectOps) {
.cannotBeExecutedInTransaction = false,
.checkSupportedObjectType = SecLabelStmtCheckSupportedObjectType
},
};
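/*
 * Editorial sketch: per the header comment, supporting a new command type
 * amounts to adding one more entry to OperationArray. A hypothetical (not
 * actually supported) example:
 *
 *   [T_AlterDatabaseStmt] = &(NonMainDbDistributeObjectOps) {
 *       .cannotBeExecutedInTransaction = false,
 *       .checkSupportedObjectType = NULL
 *   },
 */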
/* other static function declarations */
const NonMainDbDistributeObjectOps * GetNonMainDbDistributeObjectOps(Node *parsetree);
static void CreateRoleStmtMarkDistGloballyOnMainDbs(CreateRoleStmt *createRoleStmt);
static void DropRoleStmtUnmarkDistOnLocalMainDb(DropRoleStmt *dropRoleStmt);
static void MarkObjectDistributedGloballyOnMainDbs(Oid catalogRelId, Oid objectId,
char *objectName);
static void UnmarkObjectDistributedOnLocalMainDb(uint16 catalogRelId, Oid objectId);
/*
* RunPreprocessNonMainDBCommand runs the necessary commands for a query in the
* main database before the query is run on the local node with PrevProcessUtility.
*
* Returns true if the previous utility hook needs to be skipped after completing
* the preprocess phase.
*/
bool
RunPreprocessNonMainDBCommand(Node *parsetree)
{
if (IsMainDB)
{
return false;
}
const NonMainDbDistributeObjectOps *ops = GetNonMainDbDistributeObjectOps(parsetree);
if (!ops)
{
return false;
}
char *queryString = DeparseTreeNode(parsetree);
/*
* For the commands that cannot be executed in a transaction, there are no
* transactional visibility issues. We directly route them to the main database
* so that we only have to consider one code-path for such commands.
*/
if (ops->cannotBeExecutedInTransaction)
{
IsMainDBCommandInXact = false;
RunCitusMainDBQuery((char *) queryString);
return true;
}
IsMainDBCommandInXact = true;
StringInfo mainDBQuery = makeStringInfo();
appendStringInfo(mainDBQuery,
START_MANAGEMENT_TRANSACTION,
GetCurrentFullTransactionId().value);
RunCitusMainDBQuery(mainDBQuery->data);
mainDBQuery = makeStringInfo();
appendStringInfo(mainDBQuery,
EXECUTE_COMMAND_ON_REMOTE_NODES_AS_USER,
quote_literal_cstr(queryString),
quote_literal_cstr(CurrentUserName()));
RunCitusMainDBQuery(mainDBQuery->data);
if (IsA(parsetree, DropRoleStmt))
{
DropRoleStmtUnmarkDistOnLocalMainDb((DropRoleStmt *) parsetree);
}
return false;
}
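/*
 * Editorial walk-through (assumed values, not from the original source):
 * for a "CREATE ROLE app_user" issued from a non-main database by user
 * postgres, the function above runs roughly these two queries in the main
 * database:
 *
 *   SELECT citus_internal.start_management_transaction('739');
 *   SELECT citus_internal.execute_command_on_remote_nodes_as_user(
 *       'CREATE ROLE app_user', 'postgres');
 *
 * and then returns false, so PrevProcessUtility still runs the original
 * statement locally.
 */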
/*
* RunPostprocessNonMainDBCommand runs the necessary commands for a query in the
* main database after the query is run on the local node with PrevProcessUtility.
*/
void
RunPostprocessNonMainDBCommand(Node *parsetree)
{
if (IsMainDB || !GetNonMainDbDistributeObjectOps(parsetree))
{
return;
}
if (IsA(parsetree, CreateRoleStmt))
{
CreateRoleStmtMarkDistGloballyOnMainDbs((CreateRoleStmt *) parsetree);
}
}
/*
* GetNonMainDbDistributeObjectOps returns the NonMainDbDistributeObjectOps for given
* command if it's node-wide object management command that's supported from non-main
* databases.
*/
const NonMainDbDistributeObjectOps *
GetNonMainDbDistributeObjectOps(Node *parsetree)
{
NodeTag tag = nodeTag(parsetree);
if (tag >= lengthof(OperationArray))
{
return NULL;
}
const NonMainDbDistributeObjectOps *ops = OperationArray[tag];
if (ops == NULL)
{
return NULL;
}
if (!ops->checkSupportedObjectType ||
ops->checkSupportedObjectType(parsetree))
{
return ops;
}
return NULL;
}
/*
* CreateRoleStmtMarkDistGloballyOnMainDbs marks the role as
* distributed on all main databases globally.
*/
static void
CreateRoleStmtMarkDistGloballyOnMainDbs(CreateRoleStmt *createRoleStmt)
{
/* object must exist as we've just created it */
bool missingOk = false;
Oid roleId = get_role_oid(createRoleStmt->role, missingOk);
MarkObjectDistributedGloballyOnMainDbs(AuthIdRelationId, roleId,
createRoleStmt->role);
}
/*
* DropRoleStmtUnmarkDistOnLocalMainDb unmarks the roles as
* distributed on the local main database.
*/
static void
DropRoleStmtUnmarkDistOnLocalMainDb(DropRoleStmt *dropRoleStmt)
{
RoleSpec *roleSpec = NULL;
foreach_declared_ptr(roleSpec, dropRoleStmt->roles)
{
Oid roleOid = get_role_oid(roleSpec->rolename,
dropRoleStmt->missing_ok);
if (roleOid == InvalidOid)
{
continue;
}
UnmarkObjectDistributedOnLocalMainDb(AuthIdRelationId, roleOid);
}
}
/*
* MarkObjectDistributedGloballyOnMainDbs marks an object as
* distributed on all main databases globally.
*/
static void
MarkObjectDistributedGloballyOnMainDbs(Oid catalogRelId, Oid objectId, char *objectName)
{
StringInfo mainDBQuery = makeStringInfo();
appendStringInfo(mainDBQuery,
MARK_OBJECT_DISTRIBUTED,
catalogRelId,
quote_literal_cstr(objectName),
objectId,
quote_literal_cstr(CurrentUserName()));
RunCitusMainDBQuery(mainDBQuery->data);
}
/*
* UnmarkObjectDistributedOnLocalMainDb unmarks an object as
* distributed on the local main database.
*/
static void
UnmarkObjectDistributedOnLocalMainDb(uint16 catalogRelId, Oid objectId)
{
const int subObjectId = 0;
const char *checkObjectExistence = "false";
StringInfo query = makeStringInfo();
appendStringInfo(query,
UNMARK_OBJECT_DISTRIBUTED,
catalogRelId, objectId,
subObjectId, checkObjectExistence);
RunCitusMainDBQuery(query->data);
}
/*
* checkSupportedObjectTypes callbacks for OperationArray lie below.
*/
static bool
CreateDbStmtCheckSupportedObjectType(Node *node)
{
/*
* We don't try to send the query to the main database if the CREATE
* DATABASE command is for the main database itself; this is a very
* rare case, but it's exercised by our test suite.
*/
CreatedbStmt *stmt = castNode(CreatedbStmt, node);
return strcmp(stmt->dbname, MainDb) != 0;
}
static bool
DropDbStmtCheckSupportedObjectType(Node *node)
{
/*
* We don't try to send the query to the main database if the DROP
* DATABASE command is for the main database itself; this is a very
* rare case, but it's exercised by our test suite.
*/
DropdbStmt *stmt = castNode(DropdbStmt, node);
return strcmp(stmt->dbname, MainDb) != 0;
}
static bool
GrantStmtCheckSupportedObjectType(Node *node)
{
GrantStmt *stmt = castNode(GrantStmt, node);
return stmt->objtype == OBJECT_DATABASE;
}
static bool
SecLabelStmtCheckSupportedObjectType(Node *node)
{
SecLabelStmt *stmt = castNode(SecLabelStmt, node);
return stmt->objtype == OBJECT_ROLE;
}


@ -48,9 +48,6 @@
#include "distributed/version_compat.h"
#include "distributed/worker_transaction.h"
static ObjectAddress * GetNewRoleAddress(ReassignOwnedStmt *stmt);
/*
* PreprocessDropOwnedStmt finds the distributed role out of the ones
* being dropped and unmarks them distributed and creates the drop statements
@ -92,81 +89,3 @@ PreprocessDropOwnedStmt(Node *node, const char *queryString,
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
/*
* PostprocessReassignOwnedStmt takes a Node pointer representing a REASSIGN
* OWNED statement and performs any necessary post-processing after the statement
* has been executed locally.
*
* We filter out local roles in OWNED BY clause before deparsing the command,
* meaning that we skip reassigning what is owned by local roles. However,
* if the role specified in TO clause is local, we automatically distribute
* it before deparsing the command.
*/
List *
PostprocessReassignOwnedStmt(Node *node, const char *queryString)
{
ReassignOwnedStmt *stmt = castNode(ReassignOwnedStmt, node);
List *allReassignRoles = stmt->roles;
List *distributedReassignRoles = FilterDistributedRoles(allReassignRoles);
if (list_length(distributedReassignRoles) <= 0)
{
return NIL;
}
if (!ShouldPropagate())
{
return NIL;
}
EnsureCoordinator();
stmt->roles = distributedReassignRoles;
char *sql = DeparseTreeNode((Node *) stmt);
stmt->roles = allReassignRoles;
ObjectAddress *newRoleAddress = GetNewRoleAddress(stmt);
/*
* We temporarily enable create / alter role propagation to properly
* propagate the role specified in TO clause.
*/
int saveNestLevel = NewGUCNestLevel();
set_config_option("citus.enable_create_role_propagation", "on",
(superuser() ? PGC_SUSET : PGC_USERSET), PGC_S_SESSION,
GUC_ACTION_LOCAL, true, 0, false);
set_config_option("citus.enable_alter_role_propagation", "on",
(superuser() ? PGC_SUSET : PGC_USERSET), PGC_S_SESSION,
GUC_ACTION_LOCAL, true, 0, false);
set_config_option("citus.enable_alter_role_set_propagation", "on",
(superuser() ? PGC_SUSET : PGC_USERSET), PGC_S_SESSION,
GUC_ACTION_LOCAL, true, 0, false);
EnsureObjectAndDependenciesExistOnAllNodes(newRoleAddress);
/* rollback GUCs to the state before this session */
AtEOXact_GUC(true, saveNestLevel);
List *commands = list_make3(DISABLE_DDL_PROPAGATION,
sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
/*
* GetNewRoleAddress returns the ObjectAddress of the new role
*/
static ObjectAddress *
GetNewRoleAddress(ReassignOwnedStmt *stmt)
{
Oid roleOid = get_role_oid(stmt->newrole->rolename, false);
ObjectAddress *address = palloc0(sizeof(ObjectAddress));
ObjectAddressSet(*address, AuthIdRelationId, roleOid);
return address;
}


@ -48,7 +48,7 @@ CreatePolicyCommands(Oid relationId)
List *policyList = GetPolicyListForRelation(relationId);
RowSecurityPolicy *policy;
foreach_declared_ptr(policy, policyList)
foreach_ptr(policy, policyList)
{
char *createPolicyCommand = CreatePolicyCommandForPolicy(relationId, policy);
commands = lappend(commands, makeTableDDLCommandString(createPolicyCommand));
@ -88,7 +88,7 @@ GetPolicyListForRelation(Oid relationId)
List *policyList = NIL;
RowSecurityPolicy *policy;
foreach_declared_ptr(policy, relation->rd_rsdesc->policies)
foreach_ptr(policy, relation->rd_rsdesc->policies)
{
policyList = lappend(policyList, policy);
}
@ -310,7 +310,7 @@ GetPolicyByName(Oid relationId, const char *policyName)
List *policyList = GetPolicyListForRelation(relationId);
RowSecurityPolicy *policy = NULL;
foreach_declared_ptr(policy, policyList)
foreach_ptr(policy, policyList)
{
if (strncmp(policy->policy_name, policyName, NAMEDATALEN) == 0)
{


@ -33,9 +33,11 @@
static CreatePublicationStmt * BuildCreatePublicationStmt(Oid publicationId);
#if (PG_VERSION_NUM >= PG_VERSION_15)
static PublicationObjSpec * BuildPublicationRelationObjSpec(Oid relationId,
Oid publicationId,
bool tableOnly);
#endif
static void AppendPublishOptionList(StringInfo str, List *strings);
static char * AlterPublicationOwnerCommand(Oid publicationId);
static bool ShouldPropagateCreatePublication(CreatePublicationStmt *stmt);
@ -152,10 +154,11 @@ BuildCreatePublicationStmt(Oid publicationId)
ReleaseSysCache(publicationTuple);
#if (PG_VERSION_NUM >= PG_VERSION_15)
List *schemaIds = GetPublicationSchemas(publicationId);
Oid schemaId = InvalidOid;
foreach_declared_oid(schemaId, schemaIds)
foreach_oid(schemaId, schemaIds)
{
char *schemaName = get_namespace_name(schemaId);
@ -167,18 +170,21 @@ BuildCreatePublicationStmt(Oid publicationId)
createPubStmt->pubobjects = lappend(createPubStmt->pubobjects, publicationObject);
}
#endif
List *relationIds = GetPublicationRelations(publicationId,
publicationForm->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
Oid relationId = InvalidOid;
int citusTableCount PG_USED_FOR_ASSERTS_ONLY = 0;
/* mainly for consistent ordering in test output */
relationIds = SortList(relationIds, CompareOids);
foreach_declared_oid(relationId, relationIds)
foreach_oid(relationId, relationIds)
{
#if (PG_VERSION_NUM >= PG_VERSION_15)
bool tableOnly = false;
/* since postgres 15, tables can have a column list and filter */
@ -186,6 +192,20 @@ BuildCreatePublicationStmt(Oid publicationId)
BuildPublicationRelationObjSpec(relationId, publicationId, tableOnly);
createPubStmt->pubobjects = lappend(createPubStmt->pubobjects, publicationObject);
#else
/* before postgres 15, only full tables are supported */
char *schemaName = get_namespace_name(get_rel_namespace(relationId));
char *tableName = get_rel_name(relationId);
RangeVar *rangeVar = makeRangeVar(schemaName, tableName, -1);
createPubStmt->tables = lappend(createPubStmt->tables, rangeVar);
#endif
if (IsCitusTable(relationId))
{
citusTableCount++;
}
}
/* WITH (publish_via_partition_root = true) option */
@ -256,6 +276,8 @@ AppendPublishOptionList(StringInfo str, List *options)
}
#if (PG_VERSION_NUM >= PG_VERSION_15)
/*
* BuildPublicationRelationObjSpec returns a PublicationObjSpec that
* can be included in a CREATE or ALTER PUBLICATION statement.
@ -335,6 +357,9 @@ BuildPublicationRelationObjSpec(Oid relationId, Oid publicationId,
}
#endif
/*
* PreprocessAlterPublicationStmt handles ALTER PUBLICATION statements
* in a way that is mostly similar to PreprocessAlterDistributedObjectStmt,
@ -395,7 +420,7 @@ GetAlterPublicationDDLCommandsForTable(Oid relationId, bool isAdd)
List *publicationIds = GetRelationPublications(relationId);
Oid publicationId = InvalidOid;
foreach_declared_oid(publicationId, publicationIds)
foreach_oid(publicationId, publicationIds)
{
char *command = GetAlterPublicationTableDDLCommand(publicationId,
relationId, isAdd);
@ -433,6 +458,7 @@ GetAlterPublicationTableDDLCommand(Oid publicationId, Oid relationId,
ReleaseSysCache(pubTuple);
#if (PG_VERSION_NUM >= PG_VERSION_15)
bool tableOnly = !isAdd;
/* since postgres 15, tables can have a column list and filter */
@ -441,6 +467,16 @@ GetAlterPublicationTableDDLCommand(Oid publicationId, Oid relationId,
alterPubStmt->pubobjects = lappend(alterPubStmt->pubobjects, publicationObject);
alterPubStmt->action = isAdd ? AP_AddObjects : AP_DropObjects;
#else
/* before postgres 15, only full tables are supported */
char *schemaName = get_namespace_name(get_rel_namespace(relationId));
char *tableName = get_rel_name(relationId);
RangeVar *rangeVar = makeRangeVar(schemaName, tableName, -1);
alterPubStmt->tables = lappend(alterPubStmt->tables, rangeVar);
alterPubStmt->tableAction = isAdd ? DEFELEM_ADD : DEFELEM_DROP;
#endif
/* we take the WHERE clause from the catalog where it is already transformed */
bool whereClauseNeedsTransform = false;


@ -45,7 +45,6 @@
#include "distributed/citus_safe_lib.h"
#include "distributed/commands.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/comment.h"
#include "distributed/coordinator_protocol.h"
#include "distributed/deparser.h"
#include "distributed/listutils.h"
@ -74,15 +73,14 @@ static char * GetRoleNameFromDbRoleSetting(HeapTuple tuple,
TupleDesc DbRoleSettingDescription);
static char * GetDatabaseNameFromDbRoleSetting(HeapTuple tuple,
TupleDesc DbRoleSettingDescription);
#if PG_VERSION_NUM < PG_VERSION_17
static Node * makeStringConst(char *str, int location);
#endif
static Node * makeIntConst(int val, int location);
static Node * makeFloatConst(char *str, int location);
static const char * WrapQueryInAlterRoleIfExistsCall(const char *query, RoleSpec *role);
static VariableSetStmt * MakeVariableSetStmt(const char *config);
static int ConfigGenericNameCompare(const void *lhs, const void *rhs);
static List * RoleSpecToObjectAddress(RoleSpec *role, bool missing_ok);
static bool IsGrantRoleWithInheritOrSetOption(GrantRoleStmt *stmt);
/* controlled via GUC */
bool EnableCreateRolePropagation = true;
@ -160,12 +158,12 @@ PostprocessAlterRoleStmt(Node *node, const char *queryString)
return NIL;
}
EnsurePropagationToCoordinator();
EnsureCoordinator();
AlterRoleStmt *stmt = castNode(AlterRoleStmt, node);
DefElem *option = NULL;
foreach_declared_ptr(option, stmt->options)
foreach_ptr(option, stmt->options)
{
if (strcasecmp(option->defname, "password") == 0)
{
@ -189,7 +187,7 @@ PostprocessAlterRoleStmt(Node *node, const char *queryString)
(void *) CreateAlterRoleIfExistsCommand(stmt),
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -235,7 +233,7 @@ PreprocessAlterRoleSetStmt(Node *node, const char *queryString,
return NIL;
}
EnsurePropagationToCoordinator();
EnsureCoordinator();
QualifyTreeNode((Node *) stmt);
const char *sql = DeparseTreeNode((Node *) stmt);
@ -244,7 +242,7 @@ PreprocessAlterRoleSetStmt(Node *node, const char *queryString,
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commandList);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commandList);
}
@ -493,18 +491,19 @@ GenerateRoleOptionsList(HeapTuple tuple)
options = lappend(options, makeDefElem("password", NULL, -1));
}
/* load valid until data from the heap tuple */
/* load valid until data from the heap tuple, use default of infinity if not set */
Datum rolValidUntilDatum = SysCacheGetAttr(AUTHNAME, tuple,
Anum_pg_authid_rolvaliduntil, &isNull);
char *rolValidUntil = "infinity";
if (!isNull)
{
char *rolValidUntil = pstrdup((char *) timestamptz_to_str(rolValidUntilDatum));
Node *validUntilStringNode = (Node *) makeString(rolValidUntil);
DefElem *validUntilOption = makeDefElem("validUntil", validUntilStringNode, -1);
options = lappend(options, validUntilOption);
rolValidUntil = pstrdup((char *) timestamptz_to_str(rolValidUntilDatum));
}
Node *validUntilStringNode = (Node *) makeString(rolValidUntil);
DefElem *validUntilOption = makeDefElem("validUntil", validUntilStringNode, -1);
options = lappend(options, validUntilOption);
return options;
}
@ -566,7 +565,7 @@ GenerateCreateOrAlterRoleCommand(Oid roleOid)
{
List *grantRoleStmts = GenerateGrantRoleStmtsOfRole(roleOid);
Node *stmt = NULL;
foreach_declared_ptr(stmt, grantRoleStmts)
foreach_ptr(stmt, grantRoleStmts)
{
completeRoleList = lappend(completeRoleList, DeparseTreeNode(stmt));
}
@ -580,21 +579,10 @@ GenerateCreateOrAlterRoleCommand(Oid roleOid)
*/
List *secLabelOnRoleStmts = GenerateSecLabelOnRoleStmts(roleOid, rolename);
stmt = NULL;
foreach_declared_ptr(stmt, secLabelOnRoleStmts)
foreach_ptr(stmt, secLabelOnRoleStmts)
{
completeRoleList = lappend(completeRoleList, DeparseTreeNode(stmt));
}
/*
* Append COMMENT ON ROLE commands for this specific user.
* When we propagate user creation, we also want to make sure that we propagate
* all the comments it has been given. For this, we check pg_shdescription
* for the ROLE entry corresponding to roleOid, and generate the relevant
* COMMENT statements to be run on the new node.
*/
List *commentStmts = GetCommentPropagationCommands(AuthIdRelationId, roleOid,
rolename, OBJECT_ROLE);
completeRoleList = list_concat(completeRoleList, commentStmts);
}
return completeRoleList;
@ -789,7 +777,7 @@ MakeSetStatementArguments(char *configurationName, char *configurationValue)
}
char *configuration = NULL;
foreach_declared_ptr(configuration, configurationList)
foreach_ptr(configuration, configurationList)
{
Node *arg = makeStringConst(configuration, -1);
args = lappend(args, arg);
@ -825,7 +813,7 @@ GenerateGrantRoleStmtsFromOptions(RoleSpec *roleSpec, List *options)
List *stmts = NIL;
DefElem *option = NULL;
foreach_declared_ptr(option, options)
foreach_ptr(option, options)
{
if (strcmp(option->defname, "adminmembers") != 0 &&
strcmp(option->defname, "rolemembers") != 0 &&
@ -887,14 +875,6 @@ GenerateGrantRoleStmtsOfRole(Oid roleid)
{
Form_pg_auth_members membership = (Form_pg_auth_members) GETSTRUCT(tuple);
ObjectAddress *roleAddress = palloc0(sizeof(ObjectAddress));
ObjectAddressSet(*roleAddress, AuthIdRelationId, membership->grantor);
if (!IsAnyObjectDistributed(list_make1(roleAddress)))
{
/* we only need to propagate the grant if the grantor is distributed */
continue;
}
GrantRoleStmt *grantRoleStmt = makeNode(GrantRoleStmt);
grantRoleStmt->is_grant = true;
@ -910,38 +890,13 @@ GenerateGrantRoleStmtsOfRole(Oid roleid)
granteeRole->rolename = GetUserNameFromId(membership->member, true);
grantRoleStmt->grantee_roles = list_make1(granteeRole);
RoleSpec *grantorRole = makeNode(RoleSpec);
grantorRole->roletype = ROLESPEC_CSTRING;
grantorRole->location = -1;
grantorRole->rolename = GetUserNameFromId(membership->grantor, false);
grantRoleStmt->grantor = grantorRole;
grantRoleStmt->grantor = NULL;
#if PG_VERSION_NUM >= PG_VERSION_16
/* inherit option is always included */
DefElem *inherit_opt;
if (membership->inherit_option)
{
inherit_opt = makeDefElem("inherit", (Node *) makeBoolean(true), -1);
}
else
{
inherit_opt = makeDefElem("inherit", (Node *) makeBoolean(false), -1);
}
grantRoleStmt->opt = list_make1(inherit_opt);
/* admin option is false by default, only include true case */
if (membership->admin_option)
{
DefElem *admin_opt = makeDefElem("admin", (Node *) makeBoolean(true), -1);
grantRoleStmt->opt = lappend(grantRoleStmt->opt, admin_opt);
}
/* set option is true by default, only include false case */
if (!membership->set_option)
{
DefElem *set_opt = makeDefElem("set", (Node *) makeBoolean(false), -1);
grantRoleStmt->opt = lappend(grantRoleStmt->opt, set_opt);
DefElem *opt = makeDefElem("admin", (Node *) makeBoolean(true), -1);
grantRoleStmt->opt = list_make1(opt);
}
#else
grantRoleStmt->admin_opt = membership->admin_option;
@ -1020,8 +975,7 @@ PreprocessCreateRoleStmt(Node *node, const char *queryString,
return NIL;
}
EnsurePropagationToCoordinator();
EnsureCoordinator();
EnsureSequentialModeForRoleDDL();
LockRelationOid(DistNodeRelationId(), RowShareLock);
@ -1049,19 +1003,17 @@ PreprocessCreateRoleStmt(Node *node, const char *queryString,
/* deparse all grant statements and add them to the commands list */
Node *stmt = NULL;
foreach_declared_ptr(stmt, grantRoleStmts)
foreach_ptr(stmt, grantRoleStmts)
{
commands = lappend(commands, DeparseTreeNode(stmt));
}
commands = lappend(commands, ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
#if PG_VERSION_NUM < PG_VERSION_17
/*
* makeStringConst creates a Const Node that stores a given string
*
@ -1072,17 +1024,19 @@ makeStringConst(char *str, int location)
{
A_Const *n = makeNode(A_Const);
#if PG_VERSION_NUM >= PG_VERSION_15
n->val.sval.type = T_String;
n->val.sval.sval = str;
#else
n->val.type = T_String;
n->val.val.str = str;
#endif
n->location = location;
return (Node *) n;
}
#endif
/*
* makeIntConst creates a Const Node that stores a given integer
*
@ -1093,8 +1047,13 @@ makeIntConst(int val, int location)
{
A_Const *n = makeNode(A_Const);
#if PG_VERSION_NUM >= PG_VERSION_15
n->val.ival.type = T_Integer;
n->val.ival.ival = val;
#else
n->val.type = T_Integer;
n->val.val.ival = val;
#endif
n->location = location;
return (Node *) n;
@ -1111,8 +1070,13 @@ makeFloatConst(char *str, int location)
{
A_Const *n = makeNode(A_Const);
#if PG_VERSION_NUM >= PG_VERSION_15
n->val.fval.type = T_Float;
n->val.fval.fval = str;
#else
n->val.type = T_Float;
n->val.val.str = str;
#endif
n->location = location;
return (Node *) n;
@ -1142,8 +1106,7 @@ PreprocessDropRoleStmt(Node *node, const char *queryString,
return NIL;
}
EnsurePropagationToCoordinator();
EnsureCoordinator();
EnsureSequentialModeForRoleDDL();
@ -1155,7 +1118,7 @@ PreprocessDropRoleStmt(Node *node, const char *queryString,
sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1166,7 +1129,7 @@ void
UnmarkRolesDistributed(List *roles)
{
Node *roleNode = NULL;
foreach_declared_ptr(roleNode, roles)
foreach_ptr(roleNode, roles)
{
RoleSpec *role = castNode(RoleSpec, roleNode);
ObjectAddress roleAddress = { 0 };
@ -1196,7 +1159,7 @@ FilterDistributedRoles(List *roles)
{
List *distributedRoles = NIL;
Node *roleNode = NULL;
foreach_declared_ptr(roleNode, roles)
foreach_ptr(roleNode, roles)
{
RoleSpec *role = castNode(RoleSpec, roleNode);
Oid roleOid = get_rolespec_oid(role, true);
@ -1232,7 +1195,7 @@ PreprocessGrantRoleStmt(Node *node, const char *queryString,
return NIL;
}
EnsurePropagationToCoordinator();
EnsureCoordinator();
GrantRoleStmt *stmt = castNode(GrantRoleStmt, node);
List *allGranteeRoles = stmt->grantee_roles;
@ -1244,6 +1207,25 @@ PreprocessGrantRoleStmt(Node *node, const char *queryString,
return NIL;
}
if (IsGrantRoleWithInheritOrSetOption(stmt))
{
if (EnableUnsupportedFeatureMessages)
{
ereport(NOTICE, (errmsg("not propagating GRANT/REVOKE commands with specified"
" INHERIT/SET options to worker nodes"),
errhint(
"Connect to worker nodes directly to manually run the same"
" GRANT/REVOKE command after disabling DDL propagation.")));
}
return NIL;
}
/*
* Postgres doesn't seem to use the grantor. Even dropping the grantor doesn't
* seem to affect the membership. If this changes, we might need to add grantors
* to the dependency resolution too. For now we just don't propagate it.
*/
stmt->grantor = NULL;
stmt->grantee_roles = distributedGranteeRoles;
char *sql = DeparseTreeNode((Node *) stmt);
stmt->grantee_roles = allGranteeRoles;
@ -1253,7 +1235,7 @@ PreprocessGrantRoleStmt(Node *node, const char *queryString,
sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1264,17 +1246,15 @@ PreprocessGrantRoleStmt(Node *node, const char *queryString,
List *
PostprocessGrantRoleStmt(Node *node, const char *queryString)
{
if (!EnableCreateRolePropagation || !ShouldPropagate())
if (!EnableCreateRolePropagation || !IsCoordinator() || !ShouldPropagate())
{
return NIL;
}
EnsurePropagationToCoordinator();
GrantRoleStmt *stmt = castNode(GrantRoleStmt, node);
RoleSpec *role = NULL;
foreach_declared_ptr(role, stmt->grantee_roles)
foreach_ptr(role, stmt->grantee_roles)
{
Oid roleOid = get_rolespec_oid(role, false);
ObjectAddress *roleAddress = palloc0(sizeof(ObjectAddress));
@ -1289,6 +1269,27 @@ PostprocessGrantRoleStmt(Node *node, const char *queryString)
}
/*
* IsGrantRoleWithInheritOrSetOption returns true if the given
* GrantRoleStmt has inherit or set option specified in its options
*/
static bool
IsGrantRoleWithInheritOrSetOption(GrantRoleStmt *stmt)
{
#if PG_VERSION_NUM >= PG_VERSION_16
DefElem *opt = NULL;
foreach_ptr(opt, stmt->opt)
{
if (strcmp(opt->defname, "inherit") == 0 || strcmp(opt->defname, "set") == 0)
{
return true;
}
}
#endif
return false;
}
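/*
 * Editorial example (illustrative): on PG16 or later, the following GRANT
 * carries an explicit SET option, so the function above returns true and
 * the command is not propagated to worker nodes:
 *
 *   GRANT admin_role TO app_user WITH SET FALSE;
 */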
/*
* ConfigGenericNameCompare compares two config_generic structs based on their
* name fields. If the name fields contain the same strings two structs are
@ -1370,54 +1371,3 @@ EnsureSequentialModeForRoleDDL(void)
"use only one connection for all future commands")));
SetLocalMultiShardModifyModeToSequential();
}
/*
* PreprocessAlterRoleRenameStmt is executed before the statement is applied to the local
* postgres instance.
*
* In this stage we can prepare the commands that need to be run on all workers to
* rename the role.
*/
List *
PreprocessAlterRoleRenameStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
if (!ShouldPropagate())
{
return NIL;
}
if (!EnableAlterRolePropagation)
{
return NIL;
}
RenameStmt *stmt = castNode(RenameStmt, node);
Assert(stmt->renameType == OBJECT_ROLE);
EnsurePropagationToCoordinator();
char *sql = DeparseTreeNode((Node *) stmt);
List *commands = list_make3(DISABLE_DDL_PROPAGATION,
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commands);
}
List *
RenameRoleStmtObjectAddress(Node *node, bool missing_ok, bool isPostprocess)
{
RenameStmt *stmt = castNode(RenameStmt, node);
Assert(stmt->renameType == OBJECT_ROLE);
Oid roleOid = get_role_oid(stmt->subname, missing_ok);
ObjectAddress *address = palloc0(sizeof(ObjectAddress));
ObjectAddressSet(*address, AuthIdRelationId, roleOid);
return list_make1(address);
}


@ -162,7 +162,7 @@ PreprocessDropSchemaStmt(Node *node, const char *queryString,
EnsureSequentialMode(OBJECT_SCHEMA);
String *schemaVal = NULL;
foreach_declared_ptr(schemaVal, distributedSchemas)
foreach_ptr(schemaVal, distributedSchemas)
{
if (SchemaHasDistributedTableWithFKey(strVal(schemaVal)))
{
@ -322,7 +322,7 @@ FilterDistributedSchemas(List *schemas)
List *distributedSchemas = NIL;
String *schemaValue = NULL;
foreach_declared_ptr(schemaValue, schemas)
foreach_ptr(schemaValue, schemas)
{
const char *schemaName = strVal(schemaValue);
Oid schemaOid = get_namespace_oid(schemaName, true);
@ -443,7 +443,7 @@ GetGrantCommandsFromCreateSchemaStmt(Node *node)
CreateSchemaStmt *stmt = castNode(CreateSchemaStmt, node);
Node *element = NULL;
foreach_declared_ptr(element, stmt->schemaElts)
foreach_ptr(element, stmt->schemaElts)
{
if (!IsA(element, GrantStmt))
{
@ -480,7 +480,7 @@ static bool
CreateSchemaStmtCreatesTable(CreateSchemaStmt *stmt)
{
Node *element = NULL;
foreach_declared_ptr(element, stmt->schemaElts)
foreach_ptr(element, stmt->schemaElts)
{
/*
* CREATE TABLE AS and CREATE FOREIGN TABLE commands cannot be

View File

@ -174,7 +174,7 @@ EnsureTableKindSupportedForTenantSchema(Oid relationId)
List *partitionList = PartitionList(relationId);
Oid partitionRelationId = InvalidOid;
foreach_declared_oid(partitionRelationId, partitionList)
foreach_oid(partitionRelationId, partitionList)
{
ErrorIfIllegalPartitioningInTenantSchema(relationId, partitionRelationId);
}
@ -199,7 +199,7 @@ EnsureFKeysForTenantTable(Oid relationId)
int fKeyReferencingFlags = INCLUDE_REFERENCING_CONSTRAINTS | INCLUDE_ALL_TABLE_TYPES;
List *referencingForeignKeys = GetForeignKeyOids(relationId, fKeyReferencingFlags);
Oid foreignKeyId = InvalidOid;
foreach_declared_oid(foreignKeyId, referencingForeignKeys)
foreach_oid(foreignKeyId, referencingForeignKeys)
{
Oid referencingTableId = GetReferencingTableId(foreignKeyId);
Oid referencedTableId = GetReferencedTableId(foreignKeyId);
@ -232,7 +232,7 @@ EnsureFKeysForTenantTable(Oid relationId)
int fKeyReferencedFlags = INCLUDE_REFERENCED_CONSTRAINTS | INCLUDE_ALL_TABLE_TYPES;
List *referencedForeignKeys = GetForeignKeyOids(relationId, fKeyReferencedFlags);
foreach_declared_oid(foreignKeyId, referencedForeignKeys)
foreach_oid(foreignKeyId, referencedForeignKeys)
{
Oid referencingTableId = GetReferencingTableId(foreignKeyId);
Oid referencedTableId = GetReferencedTableId(foreignKeyId);
@ -429,7 +429,7 @@ EnsureSchemaCanBeDistributed(Oid schemaId, List *schemaTableIdList)
}
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, schemaTableIdList)
foreach_oid(relationId, schemaTableIdList)
{
EnsureTenantTable(relationId, "citus_schema_distribute");
}
@ -637,7 +637,7 @@ citus_schema_distribute(PG_FUNCTION_ARGS)
List *tableIdListInSchema = SchemaGetNonShardTableIdList(schemaId);
List *tableIdListToConvert = NIL;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, tableIdListInSchema)
foreach_oid(relationId, tableIdListInSchema)
{
/* prevent concurrent drop of the relation */
LockRelationOid(relationId, AccessShareLock);
@ -675,7 +675,7 @@ citus_schema_distribute(PG_FUNCTION_ARGS)
* tables.
*/
List *originalForeignKeyRecreationCommands = NIL;
foreach_declared_oid(relationId, tableIdListToConvert)
foreach_oid(relationId, tableIdListToConvert)
{
List *fkeyCommandsForRelation =
GetFKeyCreationCommandsRelationInvolvedWithTableType(relationId,
@ -741,7 +741,7 @@ citus_schema_undistribute(PG_FUNCTION_ARGS)
List *tableIdListInSchema = SchemaGetNonShardTableIdList(schemaId);
List *tableIdListToConvert = NIL;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, tableIdListInSchema)
foreach_oid(relationId, tableIdListInSchema)
{
/* prevent concurrent drop of the relation */
LockRelationOid(relationId, AccessShareLock);
@ -883,7 +883,7 @@ TenantSchemaPickAnchorShardId(Oid schemaId)
}
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, tablesInSchema)
foreach_oid(relationId, tablesInSchema)
{
/*
* Make sure the relation isn't dropped for the remainder of

View File

@ -15,20 +15,21 @@
#include "distributed/commands/utility_hook.h"
#include "distributed/coordinator_protocol.h"
#include "distributed/deparser.h"
#include "distributed/listutils.h"
#include "distributed/log_utils.h"
#include "distributed/metadata/distobject.h"
#include "distributed/metadata_sync.h"
/*
* PostprocessRoleSecLabelStmt prepares the commands that need to be run on all workers to assign
* security labels on distributed roles. It also ensures that all object dependencies exist on all
* nodes for the role in the SecLabelStmt.
* PostprocessSecLabelStmt prepares the commands that need to be run on all workers to assign
* security labels on distributed objects, currently supporting just Role objects.
* It also ensures that all object dependencies exist on all
* nodes for the object in the SecLabelStmt.
*/
List *
PostprocessRoleSecLabelStmt(Node *node, const char *queryString)
PostprocessSecLabelStmt(Node *node, const char *queryString)
{
if (!EnableAlterRolePropagation || !ShouldPropagate())
if (!ShouldPropagate())
{
return NIL;
}
@ -41,91 +42,38 @@ PostprocessRoleSecLabelStmt(Node *node, const char *queryString)
return NIL;
}
EnsurePropagationToCoordinator();
EnsureAllObjectDependenciesExistOnAllNodes(objectAddresses);
if (secLabelStmt->objtype != OBJECT_ROLE)
{
/*
* If we are not on the coordinator, we don't want to interrupt the security
* label command with notices; from a worker node the user expects that
* the command will not be propagated
*/
if (EnableUnsupportedFeatureMessages && IsCoordinator())
{
ereport(NOTICE, (errmsg("not propagating SECURITY LABEL commands whose "
"object type is not role"),
errhint("Connect to worker nodes directly to manually "
"run the same SECURITY LABEL command.")));
}
return NIL;
}
const char *secLabelCommands = DeparseTreeNode((Node *) secLabelStmt);
List *commandList = list_make3(DISABLE_DDL_PROPAGATION,
(void *) secLabelCommands,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(REMOTE_NODES, commandList);
}
/*
* PostprocessTableOrColumnSecLabelStmt prepares the commands that need to be run on all
* workers to assign security labels on distributed tables or the columns of a distributed
* table. It also ensures that all object dependencies exist on all nodes for the table in
* the SecLabelStmt.
*/
List *
PostprocessTableOrColumnSecLabelStmt(Node *node, const char *queryString)
{
if (!EnableAlterRolePropagation || !ShouldPropagate())
if (!EnableCreateRolePropagation)
{
return NIL;
}
SecLabelStmt *secLabelStmt = castNode(SecLabelStmt, node);
List *objectAddresses = GetObjectAddressListFromParseTree(node, false, true);
if (!IsAnyParentObjectDistributed(objectAddresses))
{
return NIL;
}
EnsurePropagationToCoordinator();
EnsureCoordinator();
EnsureAllObjectDependenciesExistOnAllNodes(objectAddresses);
const char *secLabelCommands = DeparseTreeNode((Node *) secLabelStmt);
const char *sql = DeparseTreeNode((Node *) secLabelStmt);
List *commandList = list_make3(DISABLE_DDL_PROPAGATION,
(void *) secLabelCommands,
(void *) sql,
ENABLE_DDL_PROPAGATION);
List *DDLJobs = NodeDDLTaskList(REMOTE_NODES, commandList);
ListCell *lc = NULL;
/*
* The label is for a table or a column, so we need to set the targetObjectAddress
* of the DDLJob to the relationId of the table. This is needed to ensure that
* the search path is correctly set for the remote security label command; it
* needs to be able to resolve the table that the label is being defined on.
*/
Assert(list_length(objectAddresses) == 1);
ObjectAddress *target = linitial(objectAddresses);
Oid relationId = target->objectId;
Assert(relationId != InvalidOid);
foreach(lc, DDLJobs)
{
DDLJob *ddlJob = (DDLJob *) lfirst(lc);
ObjectAddressSet(ddlJob->targetObjectAddress, RelationRelationId, relationId);
}
return DDLJobs;
}
/*
* PostprocessAnySecLabelStmt is used for any other object types
* that are not supported by Citus. It issues a notice to the client
* if appropriate. It is effectively a no-op.
*/
List *
PostprocessAnySecLabelStmt(Node *node, const char *queryString)
{
/*
* If we are not on the coordinator, we don't want to interrupt the security
* label command with notices; from a worker node the user expects that
* the command will not be propagated
*/
if (EnableUnsupportedFeatureMessages && IsCoordinator())
{
ereport(NOTICE, (errmsg("not propagating SECURITY LABEL commands whose "
"object type is not role or table or column"),
errhint("Connect to worker nodes directly to manually "
"run the same SECURITY LABEL command.")));
}
return NIL;
return NodeDDLTaskList(NON_COORDINATOR_NODES, commandList);
}
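To make the construction in both versions above concrete, here is a minimal sketch of the shared propagation pattern, using only helpers visible in this diff; the wrapper name PropagateDeparsedStatement is hypothetical:

/*
 * Minimal sketch: deparse the statement, wrap it in DDL propagation
 * markers so the receiving nodes do not re-propagate it, and fan it
 * out as DDL tasks to the chosen node set.
 */
static List *
PropagateDeparsedStatement(Node *parsedStmt, TargetWorkerSet targets)
{
	const char *sql = DeparseTreeNode(parsedStmt);
	List *commandList = list_make3(DISABLE_DDL_PROPAGATION,
								   (void *) sql,
								   ENABLE_DDL_PROPAGATION);
	return NodeDDLTaskList(targets, commandList);
}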


@ -123,7 +123,7 @@ static bool
OptionsSpecifyOwnedBy(List *optionList, Oid *ownedByTableId)
{
DefElem *defElem = NULL;
foreach_declared_ptr(defElem, optionList)
foreach_ptr(defElem, optionList)
{
if (strcmp(defElem->defname, "owned_by") == 0)
{
@ -202,7 +202,7 @@ ExtractDefaultColumnsAndOwnedSequences(Oid relationId, List **columnNameList,
}
Oid ownedSequenceId = InvalidOid;
foreach_declared_oid(ownedSequenceId, columnOwnedSequences)
foreach_oid(ownedSequenceId, columnOwnedSequences)
{
/*
* A column might have multiple sequences one via OWNED BY one another
@ -288,7 +288,7 @@ PreprocessDropSequenceStmt(Node *node, const char *queryString,
*/
List *deletingSequencesList = stmt->objects;
List *objectNameList = NULL;
foreach_declared_ptr(objectNameList, deletingSequencesList)
foreach_ptr(objectNameList, deletingSequencesList)
{
RangeVar *seq = makeRangeVarFromNameList(objectNameList);
@ -322,7 +322,7 @@ PreprocessDropSequenceStmt(Node *node, const char *queryString,
/* remove the entries for the distributed objects on dropping */
ObjectAddress *address = NULL;
foreach_declared_ptr(address, distributedSequenceAddresses)
foreach_ptr(address, distributedSequenceAddresses)
{
UnmarkObjectDistributed(address);
}
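For example, DROP SEQUENCE my_schema.my_seq (identifiers illustrative) reaches this loop once per dropped distributed sequence and removes the corresponding pg_dist_object entry, so the sequence is no longer tracked as distributed.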
@ -356,7 +356,7 @@ SequenceDropStmtObjectAddress(Node *stmt, bool missing_ok, bool isPostprocess)
List *droppingSequencesList = dropSeqStmt->objects;
List *objectNameList = NULL;
foreach_declared_ptr(objectNameList, droppingSequencesList)
foreach_ptr(objectNameList, droppingSequencesList)
{
RangeVar *seq = makeRangeVarFromNameList(objectNameList);
@ -476,7 +476,7 @@ PreprocessAlterSequenceStmt(Node *node, const char *queryString,
{
List *options = stmt->options;
DefElem *defel = NULL;
foreach_declared_ptr(defel, options)
foreach_ptr(defel, options)
{
if (strcmp(defel->defname, "as") == 0)
{
@ -511,7 +511,7 @@ SequenceUsedInDistributedTable(const ObjectAddress *sequenceAddress, char depTyp
Oid relationId;
List *relations = GetDependentRelationsWithSequence(sequenceAddress->objectId,
depType);
foreach_declared_oid(relationId, relations)
foreach_oid(relationId, relations)
{
if (IsCitusTable(relationId))
{
@ -735,6 +735,8 @@ PostprocessAlterSequenceOwnerStmt(Node *node, const char *queryString)
}
#if (PG_VERSION_NUM >= PG_VERSION_15)
/*
* PreprocessAlterSequencePersistenceStmt is called for change of persistence
* of sequences before the persistence is changed on the local instance.
@ -845,6 +847,9 @@ PreprocessSequenceAlterTableStmt(Node *node, const char *queryString,
}
#endif
/*
* PreprocessGrantOnSequenceStmt is executed before the statement is applied to the local
* postgres instance.
@ -925,7 +930,7 @@ PostprocessGrantOnSequenceStmt(Node *node, const char *queryString)
EnsureCoordinator();
RangeVar *sequence = NULL;
foreach_declared_ptr(sequence, distributedSequences)
foreach_ptr(sequence, distributedSequences)
{
ObjectAddress *sequenceAddress = palloc0(sizeof(ObjectAddress));
Oid sequenceOid = RangeVarGetRelid(sequence, NoLock, false);
@ -1009,7 +1014,7 @@ FilterDistributedSequences(GrantStmt *stmt)
/* iterate over all namespace names provided to get their oid's */
List *namespaceOidList = NIL;
String *namespaceValue = NULL;
foreach_declared_ptr(namespaceValue, stmt->objects)
foreach_ptr(namespaceValue, stmt->objects)
{
char *nspname = strVal(namespaceValue);
bool missing_ok = false;
@ -1023,7 +1028,7 @@ FilterDistributedSequences(GrantStmt *stmt)
*/
List *distributedSequenceList = DistributedSequenceList();
ObjectAddress *sequenceAddress = NULL;
foreach_declared_ptr(sequenceAddress, distributedSequenceList)
foreach_ptr(sequenceAddress, distributedSequenceList)
{
Oid namespaceOid = get_rel_namespace(sequenceAddress->objectId);
@ -1047,7 +1052,7 @@ FilterDistributedSequences(GrantStmt *stmt)
{
bool missing_ok = false;
RangeVar *sequenceRangeVar = NULL;
foreach_declared_ptr(sequenceRangeVar, stmt->objects)
foreach_ptr(sequenceRangeVar, stmt->objects)
{
Oid sequenceOid = RangeVarGetRelid(sequenceRangeVar, NoLock, missing_ok);
ObjectAddress *sequenceAddress = palloc0(sizeof(ObjectAddress));


@ -1,275 +0,0 @@
/*-------------------------------------------------------------------------
*
* serialize_distributed_ddls.c
*
* This file contains functions for serializing distributed DDLs.
*
* If you're adding support for serializing a new DDL, you should
* extend the following functions to support the new object class:
* AcquireCitusAdvisoryObjectClassLockGetOid()
* AcquireCitusAdvisoryObjectClassLockCheckPrivileges()
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "miscadmin.h"
#include "catalog/dependency.h"
#include "catalog/pg_database_d.h"
#include "commands/dbcommands.h"
#include "storage/lock.h"
#include "utils/builtins.h"
#include "pg_version_compat.h"
#include "distributed/adaptive_executor.h"
#include "distributed/argutils.h"
#include "distributed/commands/serialize_distributed_ddls.h"
#include "distributed/deparse_shard_query.h"
#include "distributed/resource_lock.h"
PG_FUNCTION_INFO_V1(citus_internal_acquire_citus_advisory_object_class_lock);
static void SerializeDistributedDDLsOnObjectClassInternal(ObjectClass objectClass,
char *qualifiedObjectName);
static char * AcquireCitusAdvisoryObjectClassLockCommand(ObjectClass objectClass,
char *qualifiedObjectName);
static void AcquireCitusAdvisoryObjectClassLock(ObjectClass objectClass,
char *qualifiedObjectName);
static Oid AcquireCitusAdvisoryObjectClassLockGetOid(ObjectClass objectClass,
char *qualifiedObjectName);
static void AcquireCitusAdvisoryObjectClassLockCheckPrivileges(ObjectClass objectClass,
Oid oid);
/*
* citus_internal_acquire_citus_advisory_object_class_lock is an internal UDF
* to call AcquireCitusAdvisoryObjectClassLock().
*/
Datum
citus_internal_acquire_citus_advisory_object_class_lock(PG_FUNCTION_ARGS)
{
CheckCitusVersion(ERROR);
PG_ENSURE_ARGNOTNULL(0, "object_class");
ObjectClass objectClass = PG_GETARG_INT32(0);
char *qualifiedObjectName = PG_ARGISNULL(1) ? NULL : PG_GETARG_CSTRING(1);
AcquireCitusAdvisoryObjectClassLock(objectClass, qualifiedObjectName);
PG_RETURN_VOID();
}
/*
* SerializeDistributedDDLsOnObjectClass is a wrapper around
* SerializeDistributedDDLsOnObjectClassInternal to acquire the lock on the given
* object class itself; see the comment in the header file for more details about
* the difference between this function and
* SerializeDistributedDDLsOnObjectClassObject().
*/
void
SerializeDistributedDDLsOnObjectClass(ObjectClass objectClass)
{
SerializeDistributedDDLsOnObjectClassInternal(objectClass, NULL);
}
/*
* SerializeDistributedDDLsOnObjectClassObject is a wrapper around
* SerializeDistributedDDLsOnObjectClassInternal to acquire the lock on the given
* object that belongs to the given object class; see the comment in the header file
* for more details about the difference between this function and
* SerializeDistributedDDLsOnObjectClass().
*/
void
SerializeDistributedDDLsOnObjectClassObject(ObjectClass objectClass,
char *qualifiedObjectName)
{
if (qualifiedObjectName == NULL)
{
elog(ERROR, "qualified object name cannot be NULL");
}
SerializeDistributedDDLsOnObjectClassInternal(objectClass, qualifiedObjectName);
}
/*
* SerializeDistributedDDLsOnObjectClassInternal serializes distributed DDLs
* that target the given object class by acquiring a Citus specific advisory lock
* on the first primary worker node if there are any workers in the cluster.
*
* The lock is acquired via a coordinated transaction. For this reason,
* it automatically gets released when (maybe implicit) transaction on
* current server commits or rolls back.
*
* If qualifiedObjectName is provided to be non-null, then the oid of the
* object is first resolved on the first primary worker node and then the
* lock is acquired on that oid. If qualifiedObjectName is null, then the
* lock is acquired on the object class itself.
*
* Note that those two lock types don't conflict with each other and are
* acquired for different purposes. The lock on the object class
* (qualifiedObjectName = NULL) is used to serialize DDLs that target the
* object class itself, e.g., when creating a new object of that class, and
* the latter is used to serialize DDLs that target a specific object of
* that class, e.g., when altering an object.
*
* In some cases, we may want to acquire both locks at the same time. For
* example, when renaming a database, we want to acquire both lock types
* because while the object class lock is used to ensure that another session
* doesn't create a new database with the same name, the object lock is used
* to ensure that another session doesn't alter the same database.
*/
static void
SerializeDistributedDDLsOnObjectClassInternal(ObjectClass objectClass,
char *qualifiedObjectName)
{
WorkerNode *firstWorkerNode = GetFirstPrimaryWorkerNode();
if (firstWorkerNode == NULL)
{
/*
* If there are no worker nodes in the cluster, then we don't need
* to acquire the lock at all; indeed, we cannot.
*/
return;
}
/*
* The remote node would already perform these permission checks
* --via AcquireCitusAdvisoryObjectClassLock()-- but we first do so on
* the local node to avoid reporting confusing error messages.
*/
Oid oid = AcquireCitusAdvisoryObjectClassLockGetOid(objectClass, qualifiedObjectName);
AcquireCitusAdvisoryObjectClassLockCheckPrivileges(objectClass, oid);
Task *task = CitusMakeNode(Task);
task->taskType = DDL_TASK;
char *command = AcquireCitusAdvisoryObjectClassLockCommand(objectClass,
qualifiedObjectName);
SetTaskQueryString(task, command);
ShardPlacement *targetPlacement = CitusMakeNode(ShardPlacement);
SetPlacementNodeMetadata(targetPlacement, firstWorkerNode);
task->taskPlacementList = list_make1(targetPlacement);
/* need to be in a transaction to acquire a lock that's bound to transactions */
UseCoordinatedTransaction();
bool localExecutionSupported = true;
ExecuteUtilityTaskList(list_make1(task), localExecutionSupported);
}
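As a usage illustration, and following the database-rename example in the comment above, a caller would take both lock flavors before propagating the DDL; the call sites below are hypothetical:

/* Hypothetical call sites for ALTER DATABASE old_db RENAME TO new_db: */
SerializeDistributedDDLsOnObjectClass(OCLASS_DATABASE);                 /* class-level lock */
SerializeDistributedDDLsOnObjectClassObject(OCLASS_DATABASE, "old_db"); /* object-level lock */

On the first primary worker, each call ultimately runs a command of the form SELECT citus_internal.acquire_citus_advisory_object_class_lock(&lt;object class&gt;, &lt;name or NULL&gt;), built by the function below.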
/*
* AcquireCitusAdvisoryObjectClassLockCommand returns a command to call
* citus_internal.acquire_citus_advisory_object_class_lock().
*/
static char *
AcquireCitusAdvisoryObjectClassLockCommand(ObjectClass objectClass,
char *qualifiedObjectName)
{
/* safe to cast to int as it's an enum */
int objectClassInt = (int) objectClass;
char *quotedObjectName =
!qualifiedObjectName ? "NULL" :
quote_literal_cstr(qualifiedObjectName);
StringInfo command = makeStringInfo();
appendStringInfo(command,
"SELECT citus_internal.acquire_citus_advisory_object_class_lock(%d, %s)",
objectClassInt, quotedObjectName);
return command->data;
}
/*
* AcquireCitusAdvisoryObjectClassLock acquires a Citus specific advisory
* ExclusiveLock based on given object class.
*/
static void
AcquireCitusAdvisoryObjectClassLock(ObjectClass objectClass, char *qualifiedObjectName)
{
Oid oid = AcquireCitusAdvisoryObjectClassLockGetOid(objectClass, qualifiedObjectName);
AcquireCitusAdvisoryObjectClassLockCheckPrivileges(objectClass, oid);
LOCKTAG locktag;
SET_LOCKTAG_GLOBAL_DDL_SERIALIZATION(locktag, objectClass, oid);
LOCKMODE lockmode = ExclusiveLock;
bool sessionLock = false;
bool dontWait = false;
LockAcquire(&locktag, lockmode, sessionLock, dontWait);
}
/*
* AcquireCitusAdvisoryObjectClassLockGetOid returns the oid of given object
* that belongs to given object class. If qualifiedObjectName is NULL, then
* it returns InvalidOid.
*/
static Oid
AcquireCitusAdvisoryObjectClassLockGetOid(ObjectClass objectClass,
char *qualifiedObjectName)
{
if (qualifiedObjectName == NULL)
{
return InvalidOid;
}
bool missingOk = false;
switch (objectClass)
{
case OCLASS_DATABASE:
{
return get_database_oid(qualifiedObjectName, missingOk);
}
default:
elog(ERROR, "unsupported object class: %d", objectClass);
}
}
/*
* AcquireCitusAdvisoryObjectClassLockCheckPrivileges is used to perform privilege checks
* before acquiring the Citus specific advisory lock on given object class and oid.
*/
static void
AcquireCitusAdvisoryObjectClassLockCheckPrivileges(ObjectClass objectClass, Oid oid)
{
switch (objectClass)
{
case OCLASS_DATABASE:
{
if (OidIsValid(oid) && !object_ownercheck(DatabaseRelationId, oid,
GetUserId()))
{
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
get_database_name(oid));
}
else if (!OidIsValid(oid) && !have_createdb_privilege())
{
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("permission denied to create / rename database")));
}
break;
}
default:
elog(ERROR, "unsupported object class: %d", objectClass);
}
}
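Per the header comment of this file, supporting a new object class means extending both switches above. A hedged sketch, using schemas purely as an illustration (schema DDLs are not actually serialized this way):

/* In AcquireCitusAdvisoryObjectClassLockGetOid(): */
case OCLASS_SCHEMA:
{
	/* illustrative only: resolve the schema name to its oid */
	return get_namespace_oid(qualifiedObjectName, missingOk);
}

/* In AcquireCitusAdvisoryObjectClassLockCheckPrivileges(): */
case OCLASS_SCHEMA:
{
	/* illustrative only: require ownership of the schema */
	if (OidIsValid(oid) && !object_ownercheck(NamespaceRelationId, oid, GetUserId()))
	{
		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SCHEMA, get_namespace_name(oid));
	}
	break;
}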


@ -184,7 +184,7 @@ PreprocessDropStatisticsStmt(Node *node, const char *queryString,
List *ddlJobs = NIL;
List *processedStatsOids = NIL;
List *objectNameList = NULL;
foreach_declared_ptr(objectNameList, dropStatisticsStmt->objects)
foreach_ptr(objectNameList, dropStatisticsStmt->objects)
{
Oid statsOid = get_statistics_object_oid(objectNameList,
dropStatisticsStmt->missing_ok);
@ -234,7 +234,7 @@ DropStatisticsObjectAddress(Node *node, bool missing_ok, bool isPostprocess)
List *objectAddresses = NIL;
List *objectNameList = NULL;
foreach_declared_ptr(objectNameList, dropStatisticsStmt->objects)
foreach_ptr(objectNameList, dropStatisticsStmt->objects)
{
Oid statsOid = get_statistics_object_oid(objectNameList,
dropStatisticsStmt->missing_ok);
@ -535,7 +535,7 @@ GetExplicitStatisticsCommandList(Oid relationId)
int saveNestLevel = PushEmptySearchPath();
Oid statisticsId = InvalidOid;
foreach_declared_oid(statisticsId, statisticsIdList)
foreach_oid(statisticsId, statisticsIdList)
{
/* we need create commands for already created stats before distribution */
Datum commandText = DirectFunctionCall1(pg_get_statisticsobjdef,
@ -606,7 +606,7 @@ GetExplicitStatisticsSchemaIdList(Oid relationId)
RelationClose(relation);
Oid statsId = InvalidOid;
foreach_declared_oid(statsId, statsIdList)
foreach_oid(statsId, statsIdList)
{
HeapTuple heapTuple = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(statsId));
if (!HeapTupleIsValid(heapTuple))
@ -651,15 +651,14 @@ GetAlterIndexStatisticsCommands(Oid indexOid)
}
Form_pg_attribute targetAttr = (Form_pg_attribute) GETSTRUCT(attTuple);
int32 targetAttstattarget = getAttstattarget_compat(attTuple);
if (targetAttstattarget != DEFAULT_STATISTICS_TARGET)
if (targetAttr->attstattarget != DEFAULT_STATISTICS_TARGET)
{
char *indexNameWithSchema = generate_qualified_relation_name(indexOid);
char *command =
GenerateAlterIndexColumnSetStatsCommand(indexNameWithSchema,
targetAttr->attnum,
targetAttstattarget);
targetAttr->attstattarget);
alterIndexStatisticsCommandList =
lappend(alterIndexStatisticsCommandList,
@ -774,10 +773,9 @@ CreateAlterCommandIfTargetNotDefault(Oid statsOid)
}
Form_pg_statistic_ext statisticsForm = (Form_pg_statistic_ext) GETSTRUCT(tup);
int16 currentStxstattarget = getStxstattarget_compat(tup);
ReleaseSysCache(tup);
if (currentStxstattarget == -1)
if (statisticsForm->stxstattarget == -1)
{
return NULL;
}
@ -787,8 +785,7 @@ CreateAlterCommandIfTargetNotDefault(Oid statsOid)
char *schemaName = get_namespace_name(statisticsForm->stxnamespace);
char *statName = NameStr(statisticsForm->stxname);
alterStatsStmt->stxstattarget = getAlterStatsStxstattarget_compat(
currentStxstattarget);
alterStatsStmt->stxstattarget = statisticsForm->stxstattarget;
alterStatsStmt->defnames = list_make2(makeString(schemaName), makeString(statName));
return DeparseAlterStatisticsStmt((Node *) alterStatsStmt);


@ -154,7 +154,7 @@ PreprocessDropTableStmt(Node *node, const char *queryString,
Assert(dropTableStatement->removeType == OBJECT_TABLE);
List *tableNameList = NULL;
foreach_declared_ptr(tableNameList, dropTableStatement->objects)
foreach_ptr(tableNameList, dropTableStatement->objects)
{
RangeVar *tableRangeVar = makeRangeVarFromNameList(tableNameList);
bool missingOK = true;
@ -202,7 +202,7 @@ PreprocessDropTableStmt(Node *node, const char *queryString,
SendCommandToWorkersWithMetadata(DISABLE_DDL_PROPAGATION);
Oid partitionRelationId = InvalidOid;
foreach_declared_oid(partitionRelationId, partitionList)
foreach_oid(partitionRelationId, partitionList)
{
char *detachPartitionCommand =
GenerateDetachPartitionCommand(partitionRelationId);
@ -263,7 +263,7 @@ PostprocessCreateTableStmt(CreateStmt *createStatement, const char *queryString)
}
RangeVar *parentRelation = NULL;
foreach_declared_ptr(parentRelation, createStatement->inhRelations)
foreach_ptr(parentRelation, createStatement->inhRelations)
{
Oid parentRelationId = RangeVarGetRelid(parentRelation, NoLock,
missingOk);
@ -480,7 +480,7 @@ PreprocessAlterTableStmtAttachPartition(AlterTableStmt *alterTableStatement,
{
List *commandList = alterTableStatement->cmds;
AlterTableCmd *alterTableCommand = NULL;
foreach_declared_ptr(alterTableCommand, commandList)
foreach_ptr(alterTableCommand, commandList)
{
if (alterTableCommand->subtype == AT_AttachPartition)
{
@ -792,7 +792,7 @@ ChooseForeignKeyConstraintNameAddition(List *columnNames)
String *columnNameString = NULL;
foreach_declared_ptr(columnNameString, columnNames)
foreach_ptr(columnNameString, columnNames)
{
const char *name = strVal(columnNameString);
@ -1153,6 +1153,7 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
{
AlterTableStmt *stmtCopy = copyObject(alterTableStatement);
stmtCopy->objtype = OBJECT_SEQUENCE;
#if (PG_VERSION_NUM >= PG_VERSION_15)
/*
* it must be ALTER TABLE .. OWNER TO ..
@ -1162,6 +1163,16 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
*/
return PreprocessSequenceAlterTableStmt((Node *) stmtCopy, alterTableCommand,
processUtilityContext);
#else
/*
* it must be ALTER TABLE .. OWNER TO .. command
* since this is the only ALTER command of a sequence that
* passes through an AlterTableStmt
*/
return PreprocessAlterSequenceOwnerStmt((Node *) stmtCopy, alterTableCommand,
processUtilityContext);
#endif
}
else if (relKind == RELKIND_VIEW)
{
@ -1303,7 +1314,7 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
AlterTableCmd *newCmd = makeNode(AlterTableCmd);
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
AlterTableType alterTableType = command->subtype;
@ -1407,7 +1418,7 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
List *columnConstraints = columnDefinition->constraints;
Constraint *constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_FOREIGN)
{
@ -1431,7 +1442,7 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
deparseAT = true;
constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (ConstrTypeCitusCanDefaultName(constraint->contype))
{
@ -1456,7 +1467,7 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
*/
constraint = NULL;
int constraintIdx = 0;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_DEFAULT)
{
@ -1685,7 +1696,7 @@ DeparserSupportsAlterTableAddColumn(AlterTableStmt *alterTableStatement,
{
ColumnDef *columnDefinition = (ColumnDef *) addColumnSubCommand->def;
Constraint *constraint = NULL;
foreach_declared_ptr(constraint, columnDefinition->constraints)
foreach_ptr(constraint, columnDefinition->constraints)
{
if (constraint->contype == CONSTR_CHECK)
{
@ -1781,7 +1792,7 @@ static bool
RelationIdListContainsCitusTableType(List *relationIdList, CitusTableType citusTableType)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (IsCitusTableType(relationId, citusTableType))
{
@ -1801,7 +1812,7 @@ static bool
RelationIdListContainsPostgresTable(List *relationIdList)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (OidIsValid(relationId) && !IsCitusTable(relationId))
{
@ -1840,7 +1851,7 @@ ConvertPostgresLocalTablesToCitusLocalTables(AlterTableStmt *alterTableStatement
* change in below loop due to CreateCitusLocalTable.
*/
RangeVar *relationRangeVar;
foreach_declared_ptr(relationRangeVar, relationRangeVarList)
foreach_ptr(relationRangeVar, relationRangeVarList)
{
List *commandList = alterTableStatement->cmds;
LOCKMODE lockMode = AlterTableGetLockLevel(commandList);
@ -1968,7 +1979,7 @@ RangeVarListHasLocalRelationConvertedByUser(List *relationRangeVarList,
AlterTableStmt *alterTableStatement)
{
RangeVar *relationRangeVar;
foreach_declared_ptr(relationRangeVar, relationRangeVarList)
foreach_ptr(relationRangeVar, relationRangeVarList)
{
/*
* Here we iterate the relation list, and if at least one of the relations
@ -2065,7 +2076,7 @@ GetAlterTableAddFKeyConstraintList(AlterTableStmt *alterTableStatement)
List *commandList = alterTableStatement->cmds;
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
List *commandForeignKeyConstraintList =
GetAlterTableCommandFKeyConstraintList(command);
@ -2105,7 +2116,7 @@ GetAlterTableCommandFKeyConstraintList(AlterTableCmd *command)
List *columnConstraints = columnDefinition->constraints;
Constraint *constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_FOREIGN)
{
@ -2128,7 +2139,7 @@ GetRangeVarListFromFKeyConstraintList(List *fKeyConstraintList)
List *rightRelationRangeVarList = NIL;
Constraint *fKeyConstraint = NULL;
foreach_declared_ptr(fKeyConstraint, fKeyConstraintList)
foreach_ptr(fKeyConstraint, fKeyConstraintList)
{
RangeVar *rightRelationRangeVar = fKeyConstraint->pktable;
rightRelationRangeVarList = lappend(rightRelationRangeVarList,
@ -2149,7 +2160,7 @@ GetRelationIdListFromRangeVarList(List *rangeVarList, LOCKMODE lockMode, bool mi
List *relationIdList = NIL;
RangeVar *rangeVar = NULL;
foreach_declared_ptr(rangeVar, rangeVarList)
foreach_ptr(rangeVar, rangeVarList)
{
Oid rightRelationId = RangeVarGetRelid(rangeVar, lockMode, missingOk);
relationIdList = lappend_oid(relationIdList, rightRelationId);
@ -2223,7 +2234,7 @@ AlterTableDropsForeignKey(AlterTableStmt *alterTableStatement)
Oid relationId = AlterTableLookupRelation(alterTableStatement, lockmode);
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, alterTableStatement->cmds)
foreach_ptr(command, alterTableStatement->cmds)
{
AlterTableType alterTableType = command->subtype;
@ -2285,7 +2296,7 @@ AnyForeignKeyDependsOnIndex(Oid indexId)
GetPgDependTuplesForDependingObjects(dependentObjectClassId, dependentObjectId);
HeapTuple dependencyTuple = NULL;
foreach_declared_ptr(dependencyTuple, dependencyTupleList)
foreach_ptr(dependencyTuple, dependencyTupleList)
{
Form_pg_depend dependencyForm = (Form_pg_depend) GETSTRUCT(dependencyTuple);
Oid dependingClassId = dependencyForm->classid;
@ -2473,7 +2484,7 @@ SkipForeignKeyValidationIfConstraintIsFkey(AlterTableStmt *alterTableStatement,
* shards anyway.
*/
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, alterTableStatement->cmds)
foreach_ptr(command, alterTableStatement->cmds)
{
AlterTableType alterTableType = command->subtype;
@ -2554,7 +2565,7 @@ ErrorIfAlterDropsPartitionColumn(AlterTableStmt *alterTableStatement)
/* then check if any of subcommands drop partition column.*/
List *commandList = alterTableStatement->cmds;
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
AlterTableType alterTableType = command->subtype;
if (alterTableType == AT_DropColumn)
@ -2623,7 +2634,7 @@ PostprocessAlterTableStmt(AlterTableStmt *alterTableStatement)
List *commandList = alterTableStatement->cmds;
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
AlterTableType alterTableType = command->subtype;
@ -2659,7 +2670,7 @@ PostprocessAlterTableStmt(AlterTableStmt *alterTableStatement)
}
Constraint *constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->conname == NULL &&
(constraint->contype == CONSTR_PRIMARY ||
@ -2679,7 +2690,7 @@ PostprocessAlterTableStmt(AlterTableStmt *alterTableStatement)
* that sequence is supported
*/
constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_DEFAULT)
{
@ -2791,7 +2802,7 @@ FixAlterTableStmtIndexNames(AlterTableStmt *alterTableStatement)
List *commandList = alterTableStatement->cmds;
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
AlterTableType alterTableType = command->subtype;
@ -3154,7 +3165,7 @@ ErrorIfUnsupportedConstraint(Relation relation, char distributionMethod,
List *indexOidList = RelationGetIndexList(relation);
Oid indexOid = InvalidOid;
foreach_declared_oid(indexOid, indexOidList)
foreach_oid(indexOid, indexOidList)
{
Relation indexDesc = index_open(indexOid, RowExclusiveLock);
bool hasDistributionColumn = false;
@ -3299,7 +3310,7 @@ ErrorIfUnsupportedAlterTableStmt(AlterTableStmt *alterTableStatement)
/* error out if any of the subcommands are unsupported */
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, commandList)
foreach_ptr(command, commandList)
{
AlterTableType alterTableType = command->subtype;
@ -3374,7 +3385,7 @@ ErrorIfUnsupportedAlterTableStmt(AlterTableStmt *alterTableStatement)
Constraint *columnConstraint = NULL;
foreach_declared_ptr(columnConstraint, column->constraints)
foreach_ptr(columnConstraint, column->constraints)
{
if (columnConstraint->contype == CONSTR_IDENTITY)
{
@ -3406,7 +3417,7 @@ ErrorIfUnsupportedAlterTableStmt(AlterTableStmt *alterTableStatement)
List *columnConstraints = column->constraints;
Constraint *constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_DEFAULT)
{
@ -3653,36 +3664,9 @@ ErrorIfUnsupportedAlterTableStmt(AlterTableStmt *alterTableStatement)
break;
}
#if PG_VERSION_NUM >= PG_VERSION_17
case AT_SetExpression:
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"ALTER TABLE ... ALTER COLUMN ... SET EXPRESSION commands "
"are currently unsupported.")));
break;
}
#endif
#if PG_VERSION_NUM >= PG_VERSION_15
case AT_SetAccessMethod:
{
/*
* If command->name == NULL, that means the user is trying to use
* ALTER TABLE ... SET ACCESS METHOD DEFAULT
* which we don't support currently.
*/
if (command->name == NULL)
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"DEFAULT option in ALTER TABLE ... SET ACCESS METHOD "
"is currently unsupported."),
errhint(
"You can rerun the command by explicitly writing the access method name.")));
}
break;
}
#endif
case AT_SetNotNull:
case AT_ReplicaIdentity:
case AT_ChangeOwner:
@ -3786,7 +3770,7 @@ SetupExecutionModeForAlterTable(Oid relationId, AlterTableCmd *command)
List *columnConstraints = columnDefinition->constraints;
Constraint *constraint = NULL;
foreach_declared_ptr(constraint, columnConstraints)
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_FOREIGN)
{
@ -3986,10 +3970,10 @@ SetInterShardDDLTaskPlacementList(Task *task, ShardInterval *leftShardInterval,
List *intersectedPlacementList = NIL;
ShardPlacement *leftShardPlacement = NULL;
foreach_declared_ptr(leftShardPlacement, leftShardPlacementList)
foreach_ptr(leftShardPlacement, leftShardPlacementList)
{
ShardPlacement *rightShardPlacement = NULL;
foreach_declared_ptr(rightShardPlacement, rightShardPlacementList)
foreach_ptr(rightShardPlacement, rightShardPlacementList)
{
if (leftShardPlacement->nodeId == rightShardPlacement->nodeId)
{


@ -790,6 +790,45 @@ AlterTextSearchDictionarySchemaStmtObjectAddress(Node *node, bool missing_ok, bo
}
/*
* TextSearchConfigurationCommentObjectAddress resolves the ObjectAddress for the TEXT
* SEARCH CONFIGURATION on which the comment is placed. Optionally errors if the
* configuration does not exist based on the missing_ok flag passed in by the caller.
*/
List *
TextSearchConfigurationCommentObjectAddress(Node *node, bool missing_ok, bool
isPostprocess)
{
CommentStmt *stmt = castNode(CommentStmt, node);
Assert(stmt->objtype == OBJECT_TSCONFIGURATION);
Oid objid = get_ts_config_oid(castNode(List, stmt->object), missing_ok);
ObjectAddress *address = palloc0(sizeof(ObjectAddress));
ObjectAddressSet(*address, TSConfigRelationId, objid);
return list_make1(address);
}
/*
* TextSearchDictCommentObjectAddress resolves the ObjectAddress for the TEXT SEARCH
* DICTIONARY on which the comment is placed. Optionally errors if the dictionary does not
* exist based on the missing_ok flag passed in by the caller.
*/
List *
TextSearchDictCommentObjectAddress(Node *node, bool missing_ok, bool isPostprocess)
{
CommentStmt *stmt = castNode(CommentStmt, node);
Assert(stmt->objtype == OBJECT_TSDICTIONARY);
Oid objid = get_ts_dict_oid(castNode(List, stmt->object), missing_ok);
ObjectAddress *address = palloc0(sizeof(ObjectAddress));
ObjectAddressSet(*address, TSDictionaryRelationId, objid);
return list_make1(address);
}
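A usage sketch for these resolvers, assuming stmt holds the parse tree of a command such as COMMENT ON TEXT SEARCH DICTIONARY my_dict IS 'german words' (identifiers illustrative):

/* Resolve the comment's target; missing_ok = false errors out if my_dict does not exist. */
bool missingOk = false;
List *addresses = TextSearchDictCommentObjectAddress((Node *) stmt, missingOk, false);
ObjectAddress *address = (ObjectAddress *) linitial(addresses);
Assert(address->classId == TSDictionaryRelationId);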
/*
* AlterTextSearchConfigurationOwnerObjectAddress resolves the ObjectAddress for the TEXT
* SEARCH CONFIGURATION for which the owner is changed. Optionally errors if the


@ -57,6 +57,9 @@ static void ExtractDropStmtTriggerAndRelationName(DropStmt *dropTriggerStmt,
static void ErrorIfDropStmtDropsMultipleTriggers(DropStmt *dropTriggerStmt);
static char * GetTriggerNameById(Oid triggerId);
static int16 GetTriggerTypeById(Oid triggerId);
#if (PG_VERSION_NUM < PG_VERSION_15)
static void ErrorOutIfCloneTrigger(Oid tgrelid, const char *tgname);
#endif
/* GUC that overrides trigger checks for distributed tables and reference tables */
@ -78,7 +81,7 @@ GetExplicitTriggerCommandList(Oid relationId)
List *triggerIdList = GetExplicitTriggerIdList(relationId);
Oid triggerId = InvalidOid;
foreach_declared_oid(triggerId, triggerIdList)
foreach_oid(triggerId, triggerIdList)
{
bool prettyOutput = false;
Datum commandText = DirectFunctionCall2(pg_get_triggerdef_ext,
@ -401,6 +404,40 @@ CreateTriggerEventExtendNames(CreateTrigStmt *createTriggerStmt, char *schemaNam
}
/*
* PreprocessAlterTriggerRenameStmt is called before an ALTER TRIGGER RENAME
* command has been executed by standard process utility. This function errors
* out if we are trying to rename a child trigger on a partition of a distributed
* table. In PG15, this is not allowed anyway.
*/
List *
PreprocessAlterTriggerRenameStmt(Node *node, const char *queryString,
ProcessUtilityContext processUtilityContext)
{
#if (PG_VERSION_NUM < PG_VERSION_15)
RenameStmt *renameTriggerStmt = castNode(RenameStmt, node);
Assert(renameTriggerStmt->renameType == OBJECT_TRIGGER);
RangeVar *relation = renameTriggerStmt->relation;
bool missingOk = false;
Oid relationId = RangeVarGetRelid(relation, ALTER_TRIGGER_LOCK_MODE, missingOk);
if (!IsCitusTable(relationId))
{
return NIL;
}
EnsureCoordinator();
ErrorOutForTriggerIfNotSupported(relationId);
ErrorOutIfCloneTrigger(relationId, renameTriggerStmt->subname);
#endif
return NIL;
}
/*
* PostprocessAlterTriggerRenameStmt is called after an ALTER TRIGGER RENAME
* command has been executed by standard process utility. This function errors
@ -705,7 +742,7 @@ ErrorIfRelationHasUnsupportedTrigger(Oid relationId)
List *relationTriggerList = GetExplicitTriggerIdList(relationId);
Oid triggerId = InvalidOid;
foreach_declared_oid(triggerId, relationTriggerList)
foreach_oid(triggerId, relationTriggerList)
{
ObjectAddress triggerObjectAddress = InvalidObjectAddress;
ObjectAddressSet(triggerObjectAddress, TriggerRelationId, triggerId);
@ -722,6 +759,64 @@ ErrorIfRelationHasUnsupportedTrigger(Oid relationId)
}
#if (PG_VERSION_NUM < PG_VERSION_15)
/*
* ErrorOutIfCloneTrigger is a helper function to error
* out if we are trying to rename a child trigger on a
* partition of a distributed table.
* Much of this code is borrowed from PG15, where
* renaming clone triggers is no longer allowed.
*/
static void
ErrorOutIfCloneTrigger(Oid tgrelid, const char *tgname)
{
HeapTuple tuple;
ScanKeyData key[2];
Relation tgrel = table_open(TriggerRelationId, RowExclusiveLock);
/*
* Search for the trigger to modify.
*/
ScanKeyInit(&key[0],
Anum_pg_trigger_tgrelid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(tgrelid));
ScanKeyInit(&key[1],
Anum_pg_trigger_tgname,
BTEqualStrategyNumber, F_NAMEEQ,
CStringGetDatum(tgname));
SysScanDesc tgscan = systable_beginscan(tgrel, TriggerRelidNameIndexId, true,
NULL, 2, key);
if (HeapTupleIsValid(tuple = systable_getnext(tgscan)))
{
Form_pg_trigger trigform = (Form_pg_trigger) GETSTRUCT(tuple);
/*
* If the trigger descends from a trigger on a parent partitioned
* table, reject the rename.
* The shard ids appended to locate the trigger on the partition's shards
* would not be correct, so we would fail to find the trigger on the
* partition's shard.
*/
if (OidIsValid(trigform->tgparentid))
{
ereport(ERROR, (
errmsg(
"cannot rename child triggers on distributed partitions")));
}
}
systable_endscan(tgscan);
table_close(tgrel, RowExclusiveLock);
}
#endif
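For illustration, on a pre-PG15 server a command such as ALTER TRIGGER child_trigger ON partition_2020 RENAME TO child_trigger_new (identifiers hypothetical) reaches this check and errors out whenever child_trigger descends from a trigger on the parent partitioned table.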
/*
* GetDropTriggerStmtRelation takes a DropStmt for a trigger object and returns
* RangeVar for the relation that owns the trigger.


@ -135,7 +135,7 @@ TruncateTaskList(Oid relationId)
LockShardListMetadata(shardIntervalList, ShareLock);
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
uint64 shardId = shardInterval->shardId;
char *shardRelationName = pstrdup(relationName);
@ -264,7 +264,7 @@ ErrorIfUnsupportedTruncateStmt(TruncateStmt *truncateStatement)
{
List *relationList = truncateStatement->relations;
RangeVar *rangeVar = NULL;
foreach_declared_ptr(rangeVar, relationList)
foreach_ptr(rangeVar, relationList)
{
Oid relationId = RangeVarGetRelid(rangeVar, NoLock, false);
@ -294,7 +294,7 @@ static void
EnsurePartitionTableNotReplicatedForTruncate(TruncateStmt *truncateStatement)
{
RangeVar *rangeVar = NULL;
foreach_declared_ptr(rangeVar, truncateStatement->relations)
foreach_ptr(rangeVar, truncateStatement->relations)
{
Oid relationId = RangeVarGetRelid(rangeVar, NoLock, false);
@ -322,7 +322,7 @@ ExecuteTruncateStmtSequentialIfNecessary(TruncateStmt *command)
bool failOK = false;
RangeVar *rangeVar = NULL;
foreach_declared_ptr(rangeVar, relationList)
foreach_ptr(rangeVar, relationList)
{
Oid relationId = RangeVarGetRelid(rangeVar, NoLock, failOK);


@ -34,8 +34,6 @@
#include "access/htup_details.h"
#include "catalog/catalog.h"
#include "catalog/dependency.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_database.h"
#include "commands/dbcommands.h"
#include "commands/defrem.h"
#include "commands/extension.h"
@ -45,7 +43,6 @@
#include "nodes/makefuncs.h"
#include "nodes/parsenodes.h"
#include "nodes/pg_list.h"
#include "postmaster/postmaster.h"
#include "tcop/utility.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
@ -79,7 +76,6 @@
#include "distributed/multi_partitioning_utils.h"
#include "distributed/multi_physical_planner.h"
#include "distributed/reference_table_utils.h"
#include "distributed/remote_commands.h"
#include "distributed/resource_lock.h"
#include "distributed/string_utils.h"
#include "distributed/transaction_management.h"
@ -87,6 +83,7 @@
#include "distributed/worker_shard_visibility.h"
#include "distributed/worker_transaction.h"
bool EnableDDLPropagation = true; /* ddl propagation is enabled */
int CreateObjectPropagationMode = CREATE_OBJECT_PROPAGATION_IMMEDIATE;
PropSetCmdBehavior PropagateSetCommands = PROPSETCMD_NONE; /* SET prop off */
@ -100,13 +97,13 @@ int UtilityHookLevel = 0;
/* Local functions forward declarations for helper functions */
static void citus_ProcessUtilityInternal(PlannedStmt *pstmt,
const char *queryString,
ProcessUtilityContext context,
ParamListInfo params,
struct QueryEnvironment *queryEnv,
DestReceiver *dest,
QueryCompletion *completionTag);
static void ProcessUtilityInternal(PlannedStmt *pstmt,
const char *queryString,
ProcessUtilityContext context,
ParamListInfo params,
struct QueryEnvironment *queryEnv,
DestReceiver *dest,
QueryCompletion *completionTag);
static void set_indexsafe_procflags(void);
static char * CurrentSearchPath(void);
static void IncrementUtilityHookCountersIfNecessary(Node *parsetree);
@ -115,7 +112,6 @@ static void DecrementUtilityHookCountersIfNecessary(Node *parsetree);
static bool IsDropSchemaOrDB(Node *parsetree);
static bool ShouldCheckUndistributeCitusLocalTables(void);
/*
* ProcessUtilityParseTree is a convenience method to create a PlannedStmt out of
* pieces of a utility statement before invoking ProcessUtility.
@ -136,7 +132,7 @@ ProcessUtilityParseTree(Node *node, const char *queryString, ProcessUtilityConte
/*
* citus_ProcessUtility is the main entry hook for implementing Citus-specific
* multi_ProcessUtility is the main entry hook for implementing Citus-specific
* utility behavior. Its primary responsibilities are intercepting COPY and DDL
* commands and augmenting the coordinator's command with corresponding tasks
* to be run on worker nodes, after suitably ensuring said commands' options
@ -145,7 +141,7 @@ ProcessUtilityParseTree(Node *node, const char *queryString, ProcessUtilityConte
* TRUNCATE and VACUUM are also supported.
*/
void
citus_ProcessUtility(PlannedStmt *pstmt,
multi_ProcessUtility(PlannedStmt *pstmt,
const char *queryString,
bool readOnlyTree,
ProcessUtilityContext context,
@ -247,25 +243,11 @@ citus_ProcessUtility(PlannedStmt *pstmt,
if (!CitusHasBeenLoaded())
{
/*
* Process the command via RunPreprocessNonMainDBCommand and
* RunPostprocessNonMainDBCommand hooks if we're in a non-main database
* and if the command is a node-wide object management command that we
* support from non-main databases.
* Ensure that utility commands do not behave any differently until CREATE
* EXTENSION is invoked.
*/
bool shouldSkipPrevUtilityHook = RunPreprocessNonMainDBCommand(parsetree);
if (!shouldSkipPrevUtilityHook)
{
/*
* Ensure that utility commands do not behave any differently until CREATE
* EXTENSION is invoked.
*/
PrevProcessUtility(pstmt, queryString, false, context,
params, queryEnv, dest, completionTag);
}
RunPostprocessNonMainDBCommand(parsetree);
PrevProcessUtility(pstmt, queryString, false, context,
params, queryEnv, dest, completionTag);
return;
}
@ -349,8 +331,8 @@ citus_ProcessUtility(PlannedStmt *pstmt,
PG_TRY();
{
citus_ProcessUtilityInternal(pstmt, queryString, context, params, queryEnv, dest,
completionTag);
ProcessUtilityInternal(pstmt, queryString, context, params, queryEnv, dest,
completionTag);
if (UtilityHookLevel == 1)
{
@ -424,7 +406,7 @@ citus_ProcessUtility(PlannedStmt *pstmt,
/*
* citus_ProcessUtilityInternal is a helper function for citus_ProcessUtility where majority
* ProcessUtilityInternal is a helper function for multi_ProcessUtility where majority
* of the Citus specific utility statements are handled here. The distinction between
* both functions is that Citus_ProcessUtility does not handle CALL and DO statements.
* The distinction exists to make it possible to find the "top-level" DDL
* commands (not internal/cascading ones); the UtilityHookLevel variable is used to
* achieve this goal.
*/
static void
citus_ProcessUtilityInternal(PlannedStmt *pstmt,
const char *queryString,
ProcessUtilityContext context,
ParamListInfo params,
struct QueryEnvironment *queryEnv,
DestReceiver *dest,
QueryCompletion *completionTag)
ProcessUtilityInternal(PlannedStmt *pstmt,
const char *queryString,
ProcessUtilityContext context,
ParamListInfo params,
struct QueryEnvironment *queryEnv,
DestReceiver *dest,
QueryCompletion *completionTag)
{
Node *parsetree = pstmt->utilityStmt;
List *ddlJobs = NIL;
@ -454,7 +436,7 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
bool analyze = false;
DefElem *option = NULL;
foreach_declared_ptr(option, explainStmt->options)
foreach_ptr(option, explainStmt->options)
{
if (strcmp(option->defname, "analyze") == 0)
{
@ -695,7 +677,7 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
{
AlterTableStmt *alterTableStmt = (AlterTableStmt *) parsetree;
AlterTableCmd *command = NULL;
foreach_declared_ptr(command, alterTableStmt->cmds)
foreach_ptr(command, alterTableStmt->cmds)
{
AlterTableType alterTableType = command->subtype;
@ -714,32 +696,25 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
}
/* inform the user about potential caveats */
if (IsA(parsetree, CreatedbStmt) && !EnableCreateDatabasePropagation)
if (IsA(parsetree, CreatedbStmt))
{
if (EnableUnsupportedFeatureMessages)
{
ereport(NOTICE, (errmsg("Citus partially supports CREATE DATABASE for "
"distributed databases"),
errdetail("Citus does not propagate CREATE DATABASE "
"command to other nodes"),
"command to workers"),
errhint("You can manually create a database and its "
"extensions on other nodes.")));
"extensions on workers.")));
}
}
else if (IsA(parsetree, CreateRoleStmt) && !EnableCreateRolePropagation)
{
ereport(NOTICE, (errmsg("not propagating CREATE ROLE/USER commands to other"
ereport(NOTICE, (errmsg("not propagating CREATE ROLE/USER commands to worker"
" nodes"),
errhint("Connect to other nodes directly to manually create all"
errhint("Connect to worker nodes directly to manually create all"
" necessary users and roles.")));
}
else if (IsA(parsetree, SecLabelStmt) && !EnableAlterRolePropagation)
{
ereport(NOTICE, (errmsg("not propagating SECURITY LABEL commands to other"
" nodes"),
errhint("Connect to other nodes directly to manually assign"
" necessary labels.")));
}
/*
* Make sure that on DROP EXTENSION we terminate the background daemon
@ -751,13 +726,22 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
}
/*
* Make sure that dropping node-wide objects deletes the pg_dist_object
* entries. There is separate logic for node-wide objects (such as roles
* and databases), since they are not included as dropped objects in the
* drop event trigger. To handle it both on worker and coordinator nodes,
* it is not implemented as a part of process functions but here.
* Make sure that dropping the role deletes the pg_dist_object entries. There is
* separate logic for roles, since roles are not included as dropped objects in the
* drop event trigger. To handle it both on worker and coordinator nodes, it is not
* implemented as a part of process functions but here.
*/
UnmarkNodeWideObjectsDistributed(parsetree);
if (IsA(parsetree, DropRoleStmt))
{
DropRoleStmt *stmt = castNode(DropRoleStmt, parsetree);
List *allDropRoles = stmt->roles;
List *distributedDropRoles = FilterDistributedRoles(allDropRoles);
if (list_length(distributedDropRoles) > 0)
{
UnmarkRolesDistributed(distributedDropRoles);
}
}
pstmt->utilityStmt = parsetree;
@ -835,6 +819,19 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
ddlJobs = processJobs;
}
}
if (IsA(parsetree, RenameStmt) && ((RenameStmt *) parsetree)->renameType ==
OBJECT_ROLE && EnableAlterRolePropagation)
{
if (EnableUnsupportedFeatureMessages)
{
ereport(NOTICE, (errmsg(
"not propagating ALTER ROLE ... RENAME TO commands "
"to worker nodes"),
errhint("Connect to worker nodes directly to manually "
"rename the role")));
}
}
}
if (IsA(parsetree, CreateStmt))
@ -879,7 +876,7 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
}
DDLJob *ddlJob = NULL;
foreach_declared_ptr(ddlJob, ddlJobs)
foreach_ptr(ddlJob, ddlJobs)
{
ExecuteDistributedDDLJob(ddlJob);
}
@ -939,7 +936,7 @@ citus_ProcessUtilityInternal(PlannedStmt *pstmt,
{
List *addresses = GetObjectAddressListFromParseTree(parsetree, false, true);
ObjectAddress *address = NULL;
foreach_declared_ptr(address, addresses)
foreach_ptr(address, addresses)
{
MarkObjectDistributed(address);
TrackPropagatedObject(address);
@ -962,7 +959,7 @@ UndistributeDisconnectedCitusLocalTables(void)
citusLocalTableIdList = SortList(citusLocalTableIdList, CompareOids);
Oid citusLocalTableId = InvalidOid;
foreach_declared_oid(citusLocalTableId, citusLocalTableIdList)
foreach_oid(citusLocalTableId, citusLocalTableIdList)
{
/* acquire ShareRowExclusiveLock to prevent concurrent foreign key creation */
LOCKMODE lockMode = ShareRowExclusiveLock;
@ -1124,17 +1121,16 @@ IsDropSchemaOrDB(Node *parsetree)
* each shard placement and COMMIT/ROLLBACK is handled by
* CoordinatedTransactionCallback function.
*
* The function errors out if the DDL is on a partitioned table which has replication
* factor > 1, or if the coordinator is not added to the metadata and we're on a
* worker node because we want to make sure that distributed DDL jobs are executed
* on the coordinator node too. See EnsurePropagationToCoordinator() for more details.
* The function errors out if the node is not the coordinator or if the DDL is on
* a partitioned table which has replication factor > 1.
*
*/
void
ExecuteDistributedDDLJob(DDLJob *ddlJob)
{
bool shouldSyncMetadata = false;
EnsurePropagationToCoordinator();
EnsureCoordinator();
ObjectAddress targetObjectAddress = ddlJob->targetObjectAddress;
@ -1158,24 +1154,23 @@ ExecuteDistributedDDLJob(DDLJob *ddlJob)
{
if (shouldSyncMetadata)
{
SendCommandToRemoteNodesWithMetadata(DISABLE_DDL_PROPAGATION);
SendCommandToWorkersWithMetadata(DISABLE_DDL_PROPAGATION);
char *currentSearchPath = CurrentSearchPath();
/*
* Given that we're relaying the query to the remote nodes directly,
* Given that we're relaying the query to the worker nodes directly,
* we should set the search path exactly the same when necessary.
*/
if (currentSearchPath != NULL)
{
SendCommandToRemoteNodesWithMetadata(
SendCommandToWorkersWithMetadata(
psprintf("SET LOCAL search_path TO %s;", currentSearchPath));
}
if (ddlJob->metadataSyncCommand != NULL)
{
SendCommandToRemoteNodesWithMetadata(
(char *) ddlJob->metadataSyncCommand);
SendCommandToWorkersWithMetadata((char *) ddlJob->metadataSyncCommand);
}
}
@ -1254,7 +1249,7 @@ ExecuteDistributedDDLJob(DDLJob *ddlJob)
char *currentSearchPath = CurrentSearchPath();
/*
* Given that we're relaying the query to the remote nodes directly,
* Given that we're relaying the query to the worker nodes directly,
* we should set the search path exactly the same when necessary.
*/
if (currentSearchPath != NULL)
@ -1266,7 +1261,7 @@ ExecuteDistributedDDLJob(DDLJob *ddlJob)
commandList = lappend(commandList, (char *) ddlJob->metadataSyncCommand);
SendBareCommandListToRemoteMetadataNodes(commandList);
SendBareCommandListToMetadataWorkers(commandList);
}
}
PG_CATCH();
@ -1289,18 +1284,15 @@ ExecuteDistributedDDLJob(DDLJob *ddlJob)
errhint("Use DROP INDEX CONCURRENTLY IF EXISTS to remove the "
"invalid index, then retry the original command.")));
}
else if (ddlJob->warnForPartialFailure)
else
{
ereport(WARNING,
(errmsg(
"Commands that are not transaction-safe may result in "
"partial failure, potentially leading to an inconsistent "
"state.\nIf the problematic command is a CREATE operation, "
"consider using the 'IF EXISTS' syntax to drop the object,"
"\nif applicable, and then re-attempt the original command.")));
"CONCURRENTLY-enabled index commands can fail partially, "
"leaving behind an INVALID index.\n Use DROP INDEX "
"CONCURRENTLY IF EXISTS to remove the invalid index.")));
PG_RE_THROW();
}
PG_RE_THROW();
}
PG_END_TRY();
}
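A minimal sketch of a caller, assuming relationId references a distributed table and using only fields and helpers visible in this diff; the command text is hypothetical:

/* Build a relation-scoped DDL job and execute it across the cluster. */
DDLJob *ddlJob = palloc0(sizeof(DDLJob));
ObjectAddressSet(ddlJob->targetObjectAddress, RelationRelationId, relationId);
ddlJob->metadataSyncCommand = "ALTER TABLE my_table ADD COLUMN value int";
ddlJob->taskList = DDLTaskList(relationId, ddlJob->metadataSyncCommand);
ExecuteDistributedDDLJob(ddlJob);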
@ -1349,7 +1341,7 @@ CurrentSearchPath(void)
bool schemaAdded = false;
Oid searchPathOid = InvalidOid;
foreach_declared_oid(searchPathOid, searchPathList)
foreach_oid(searchPathOid, searchPathList)
{
char *schemaName = get_namespace_name(searchPathOid);
@ -1409,7 +1401,7 @@ PostStandardProcessUtility(Node *parsetree)
* on the local table first. However, in order to decide whether the
* command leads to an invalidation, we need to check before the command
* is being executed since we read pg_constraint table. Thus, we maintain a
* local flag and do the invalidation after citus_ProcessUtility,
* local flag and do the invalidation after multi_ProcessUtility,
* before ExecuteDistributedDDLJob().
*/
InvalidateForeignKeyGraphForDDL();
@ -1483,7 +1475,7 @@ DDLTaskList(Oid relationId, const char *commandString)
LockShardListMetadata(shardIntervalList, ShareLock);
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
uint64 shardId = shardInterval->shardId;
StringInfo applyCommand = makeStringInfo();
@ -1512,33 +1504,6 @@ DDLTaskList(Oid relationId, const char *commandString)
}
/*
* NontransactionalNodeDDLTaskList builds a list of tasks to execute a DDL command on a
* given target set of nodes, with cannotBeExecutedInTransaction set to make sure
* that the task list is executed outside a transaction block.
*
* Also sets warnForPartialFailure for the returned DDLJobs.
*/
List *
NontransactionalNodeDDLTaskList(TargetWorkerSet targets, List *commands,
bool warnForPartialFailure)
{
List *ddlJobs = NodeDDLTaskList(targets, commands);
DDLJob *ddlJob = NULL;
foreach_declared_ptr(ddlJob, ddlJobs)
{
Task *task = NULL;
foreach_declared_ptr(task, ddlJob->taskList)
{
task->cannotBeExecutedInTransaction = true;
}
ddlJob->warnForPartialFailure = warnForPartialFailure;
}
return ddlJobs;
}
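A usage sketch (the command text is hypothetical): callers hand in commands that must run outside a transaction block and choose whether a partial failure should trigger the warning emitted in ExecuteDistributedDDLJob():

List *commands = list_make1("CREATE DATABASE my_db");
bool warnForPartialFailure = true;
List *ddlJobs = NontransactionalNodeDDLTaskList(REMOTE_NODES, commands,
												warnForPartialFailure);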
/*
* NodeDDLTaskList builds a list of tasks to execute a DDL command on a
* given target set of nodes.
@ -1564,7 +1529,7 @@ NodeDDLTaskList(TargetWorkerSet targets, List *commands)
SetTaskQueryStringList(task, commands);
WorkerNode *workerNode = NULL;
foreach_declared_ptr(workerNode, workerNodes)
foreach_ptr(workerNode, workerNodes)
{
ShardPlacement *targetPlacement = CitusMakeNode(ShardPlacement);
targetPlacement->nodeName = workerNode->workerName;


@ -135,7 +135,7 @@ VacuumRelationIdList(VacuumStmt *vacuumStmt, CitusVacuumParams vacuumParams)
List *relationIdList = NIL;
RangeVar *vacuumRelation = NULL;
foreach_declared_ptr(vacuumRelation, vacuumRelationList)
foreach_ptr(vacuumRelation, vacuumRelationList)
{
/*
* If skip_locked option is enabled, we are skipping that relation
@ -164,7 +164,7 @@ static bool
IsDistributedVacuumStmt(List *vacuumRelationIdList)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, vacuumRelationIdList)
foreach_oid(relationId, vacuumRelationIdList)
{
if (OidIsValid(relationId) && IsCitusTable(relationId))
{
@ -185,9 +185,10 @@ ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt, List *relationIdList,
CitusVacuumParams vacuumParams)
{
int relationIndex = 0;
int executedVacuumCount = 0;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
foreach_oid(relationId, relationIdList)
{
if (IsCitusTable(relationId))
{
@ -197,6 +198,7 @@ ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt, List *relationIdList,
/* local execution is not implemented for VACUUM commands */
bool localExecutionSupported = false;
ExecuteUtilityTaskList(taskList, localExecutionSupported);
executedVacuumCount++;
}
relationIndex++;
}
@ -252,7 +254,7 @@ VacuumTaskList(Oid relationId, CitusVacuumParams vacuumParams, List *vacuumColum
LockShardListMetadata(shardIntervalList, ShareLock);
ShardInterval *shardInterval = NULL;
foreach_declared_ptr(shardInterval, shardIntervalList)
foreach_ptr(shardInterval, shardIntervalList)
{
uint64 shardId = shardInterval->shardId;
char *shardRelationName = pstrdup(relationName);
@ -278,7 +280,7 @@ VacuumTaskList(Oid relationId, CitusVacuumParams vacuumParams, List *vacuumColum
task->replicationModel = REPLICATION_MODEL_INVALID;
task->anchorShardId = shardId;
task->taskPlacementList = ActiveShardPlacementList(shardId);
task->cannotBeExecutedInTransaction = ((vacuumParams.options) & VACOPT_VACUUM);
task->cannotBeExecutedInTransction = ((vacuumParams.options) & VACOPT_VACUUM);
taskList = lappend(taskList, task);
}
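To make the per-shard expansion concrete: for a distributed table public.orders with a shard 102008 (identifiers illustrative), the task built here would carry a command along the lines of VACUUM (ANALYZE) public.orders_102008, the shard id being appended to the relation name, and would target that shard's active placements.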
@ -473,7 +475,7 @@ DeparseVacuumColumnNames(List *columnNameList)
appendStringInfoString(columnNames, " (");
String *columnName = NULL;
foreach_declared_ptr(columnName, columnNameList)
foreach_ptr(columnName, columnNameList)
{
appendStringInfo(columnNames, "%s,", strVal(columnName));
}
@ -508,7 +510,7 @@ ExtractVacuumTargetRels(VacuumStmt *vacuumStmt)
List *vacuumList = NIL;
VacuumRelation *vacuumRelation = NULL;
foreach_declared_ptr(vacuumRelation, vacuumStmt->rels)
foreach_ptr(vacuumRelation, vacuumStmt->rels)
{
vacuumList = lappend(vacuumList, vacuumRelation->relation);
}
@ -552,7 +554,7 @@ VacuumStmtParams(VacuumStmt *vacstmt)
/* Parse options list */
DefElem *opt = NULL;
foreach_declared_ptr(opt, vacstmt->options)
foreach_ptr(opt, vacstmt->options)
{
/* Parse common options for VACUUM and ANALYZE */
if (strcmp(opt->defname, "verbose") == 0)
@@ -718,14 +720,14 @@ ExecuteUnqualifiedVacuumTasks(VacuumStmt *vacuumStmt, CitusVacuumParams vacuumPa
SetTaskQueryStringList(task, unqualifiedVacuumCommands);
task->dependentTaskList = NULL;
task->replicationModel = REPLICATION_MODEL_INVALID;
- task->cannotBeExecutedInTransaction = ((vacuumParams.options) & VACOPT_VACUUM);
+ task->cannotBeExecutedInTransction = ((vacuumParams.options) & VACOPT_VACUUM);
bool hasPeerWorker = false;
int32 localNodeGroupId = GetLocalGroupId();
WorkerNode *workerNode = NULL;
- foreach_declared_ptr(workerNode, workerNodes)
+ foreach_ptr(workerNode, workerNodes)
{
if (workerNode->groupId != localNodeGroupId)
{

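Two details in the vacuum.c hunks above are easy to misread. First, cannotBeExecutedInTransction really is spelled that way on the 12.1 branch (the field name's typo was only fixed on the newer branch), and the flag is set exactly when VACOPT_VACUUM is present because VACUUM cannot run inside a transaction block. Second, VacuumTaskList fans one logical VACUUM out to one task per shard, naming each shard relation by appending the shard id. A hedged sketch of that naming step; BuildShardVacuumCommand and its parameters are illustrative stand-ins, not the Citus helpers:

#include "postgres.h"
#include "lib/stringinfo.h"
#include "utils/builtins.h"

static char *
BuildShardVacuumCommand(const char *schemaName, const char *relationName,
						uint64 shardId)
{
	StringInfo command = makeStringInfo();

	/* shard relations are named <table>_<shardid>, e.g. orders_102008 */
	appendStringInfo(command, "VACUUM %s.%s_" UINT64_FORMAT,
					 quote_identifier(schemaName), relationName, shardId);

	return command->data;
}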
View File

@@ -69,7 +69,7 @@ ViewHasDistributedRelationDependency(ObjectAddress *viewObjectAddress)
List *dependencies = GetAllDependenciesForObject(viewObjectAddress);
ObjectAddress *dependency = NULL;
- foreach_declared_ptr(dependency, dependencies)
+ foreach_ptr(dependency, dependencies)
{
if (dependency->classId == RelationRelationId && IsAnyObjectDistributed(
list_make1(dependency)))
@@ -304,7 +304,7 @@ DropViewStmtObjectAddress(Node *stmt, bool missing_ok, bool isPostprocess)
List *objectAddresses = NIL;
List *possiblyQualifiedViewName = NULL;
- foreach_declared_ptr(possiblyQualifiedViewName, dropStmt->objects)
+ foreach_ptr(possiblyQualifiedViewName, dropStmt->objects)
{
RangeVar *viewRangeVar = makeRangeVarFromNameList(possiblyQualifiedViewName);
Oid viewOid = RangeVarGetRelid(viewRangeVar, AccessShareLock,
@@ -332,7 +332,7 @@ FilterNameListForDistributedViews(List *viewNamesList, bool missing_ok)
List *distributedViewNames = NIL;
List *possiblyQualifiedViewName = NULL;
- foreach_declared_ptr(possiblyQualifiedViewName, viewNamesList)
+ foreach_ptr(possiblyQualifiedViewName, viewNamesList)
{
char *viewName = NULL;
char *schemaName = NULL;
@@ -392,7 +392,9 @@ CreateViewDDLCommand(Oid viewOid)
static void
AppendQualifiedViewNameToCreateViewCommand(StringInfo buf, Oid viewOid)
{
- char *qualifiedViewName = generate_qualified_relation_name(viewOid);
+ char *viewName = get_rel_name(viewOid);
+ char *schemaName = get_namespace_name(get_rel_namespace(viewOid));
+ char *qualifiedViewName = quote_qualified_identifier(schemaName, viewName);
appendStringInfo(buf, "%s ", qualifiedViewName);
}

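The final view.c hunk swaps a one-call helper, generate_qualified_relation_name() (available on the newer branch via Citus's ruleutils code), for an explicit lookup-and-quote sequence built from stock PostgreSQL routines. A self-contained sketch of the 12.1-side approach, assuming only a valid viewOid:

#include "postgres.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

/* returns e.g. "public.my_view", quoting each part only where required */
static char *
QualifiedViewNameSketch(Oid viewOid)
{
	char *viewName = get_rel_name(viewOid);
	char *schemaName = get_namespace_name(get_rel_namespace(viewOid));

	return quote_qualified_identifier(schemaName, viewName);
}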
View File

@@ -444,13 +444,11 @@ GetConnParam(const char *keyword)
/*
* GetEffectiveConnKey checks whether there is any pooler configuration for the
- * provided key (host/port combination). If a corresponding row is found in the
- * poolinfo table, a modified (effective) key is returned with the node, port,
- * and dbname overridden, as applicable, otherwise, the original key is returned
- * unmodified.
- *
- * In the case of Citus non-main databases we just return the key, since we
- * would not have access to tables with worker information.
+ * provided key (host/port combination). The one case where this logic is not
+ * applied is for loopback connections originating within the task tracker. If
+ * a corresponding row is found in the poolinfo table, a modified (effective)
+ * key is returned with the node, port, and dbname overridden, as applicable,
+ * otherwise, the original key is returned unmodified.
*/
ConnectionHashKey *
GetEffectiveConnKey(ConnectionHashKey *key)
@@ -460,22 +458,12 @@ GetEffectiveConnKey(ConnectionHashKey *key)
if (!IsTransactionState())
{
/* we're in the task tracker, so should only see loopback */
- Assert(strncmp(LocalHostName, key->hostname, MAX_NODE_LENGTH) == 0 &&
+ Assert(strncmp(LOCAL_HOST_NAME, key->hostname, MAX_NODE_LENGTH) == 0 &&
PostPortNumber == key->port);
return key;
}
- if (!CitusHasBeenLoaded())
- {
- /*
- * This happens when we connect to main database over localhost
- * from some non Citus database.
- */
- return key;
- }
WorkerNode *worker = FindWorkerNode(key->hostname, key->port);
if (worker == NULL)
{
/* this can be hit when the key references an unknown node */
@@ -536,23 +524,9 @@ char *
GetAuthinfo(char *hostname, int32 port, char *user)
{
char *authinfo = NULL;
- bool isLoopback = (strncmp(LocalHostName, hostname, MAX_NODE_LENGTH) == 0 &&
+ bool isLoopback = (strncmp(LOCAL_HOST_NAME, hostname, MAX_NODE_LENGTH) == 0 &&
PostPortNumber == port);
- /*
- * Citus will not be loaded when we run a global DDL command from a
- * Citus non-main database.
- */
- if (!CitusHasBeenLoaded())
- {
- /*
- * We don't expect non-main databases to connect to a node other than
- * the local one.
- */
- Assert(isLoopback);
- return "";
- }
if (IsTransactionState())
{
int64 nodeId = WILDCARD_NODE_ID;

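Both functions above gate worker-metadata lookups behind IsTransactionState(), and the deleted CitusHasBeenLoaded() blocks belong to the newer branch's handling of Citus non-main databases, which 12.1 simply does not have. The surviving core of GetEffectiveConnKey is a conditional override: if the poolinfo table has a row for the (host, port) pair, connect through the pooler instead. A standalone sketch of that idea with simplified stand-in types (not the Citus structs) and a stubbed catalog lookup:

#include <stdbool.h>
#include <string.h>

typedef struct ConnKey { char host[256]; int port; } ConnKey;
typedef struct PoolInfo { char poolerHost[256]; int poolerPort; bool found; } PoolInfo;

static PoolInfo
LookupPoolInfo(const char *host, int port)
{
	/* the real code reads the poolinfo table; here: pretend no row exists */
	(void) host; (void) port;
	PoolInfo none = { "", 0, false };
	return none;
}

static ConnKey
EffectiveConnKeySketch(ConnKey key)
{
	PoolInfo pool = LookupPoolInfo(key.host, key.port);
	if (pool.found)
	{
		/* route the connection through the configured pooler */
		strncpy(key.host, pool.poolerHost, sizeof(key.host) - 1);
		key.host[sizeof(key.host) - 1] = '\0';
		key.port = pool.poolerPort;
	}
	return key;	/* unchanged when no poolinfo row applies */
}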
View File

@@ -39,7 +39,6 @@
#include "distributed/remote_commands.h"
#include "distributed/run_from_same_connection.h"
#include "distributed/shared_connection_stats.h"
- #include "distributed/stats/stat_counters.h"
#include "distributed/time_constants.h"
#include "distributed/version_compat.h"
#include "distributed/worker_log_messages.h"
@@ -355,18 +354,6 @@ StartNodeUserDatabaseConnection(uint32 flags, const char *hostname, int32 port,
MultiConnection *connection = FindAvailableConnection(entry->connections, flags);
if (connection)
{
- /*
- * Increment the connection stat counter for the connections that are
- * reused only if the connection is in a good state. Here we don't
- * bother shutting down the connection or such if it is not in a good
- * state but we mostly want to avoid incrementing the connection stat
- * counter for a connection that the caller cannot really use.
- */
- if (PQstatus(connection->pgConn) == CONNECTION_OK)
- {
- IncrementStatCounterForMyDb(STAT_CONNECTION_REUSED);
- }
return connection;
}
}
@@ -408,12 +395,6 @@ StartNodeUserDatabaseConnection(uint32 flags, const char *hostname, int32 port,
dlist_delete(&connection->connectionNode);
pfree(connection);
- /*
- * Here we don't increment the connection stat counter for the optional
- * connections that we gave up establishing due to connection throttling
- * because the callers who request optional connections know how to
- * survive without them.
- */
return NULL;
}
}
@@ -885,8 +866,7 @@ WaitEventSetFromMultiConnectionStates(List *connections, int *waitCount)
*waitCount = 0;
}
- WaitEventSet *waitEventSet = CreateWaitEventSet(WaitEventSetTracker_compat,
- eventSetSize);
+ WaitEventSet *waitEventSet = CreateWaitEventSet(CurrentMemoryContext, eventSetSize);
EnsureReleaseResource((MemoryContextCallbackFunction) (&FreeWaitEventSet),
waitEventSet);
@@ -899,7 +879,7 @@ WaitEventSetFromMultiConnectionStates(List *connections, int *waitCount)
numEventsAdded += 2;
MultiConnectionPollState *connectionState = NULL;
- foreach_declared_ptr(connectionState, connections)
+ foreach_ptr(connectionState, connections)
{
if (numEventsAdded >= eventSetSize)
{
@@ -981,7 +961,7 @@ FinishConnectionListEstablishment(List *multiConnectionList)
int waitCount = 0;
MultiConnection *connection = NULL;
- foreach_declared_ptr(connection, multiConnectionList)
+ foreach_ptr(connection, multiConnectionList)
{
MultiConnectionPollState *connectionState =
palloc0(sizeof(MultiConnectionPollState));
@@ -1001,14 +981,6 @@ FinishConnectionListEstablishment(List *multiConnectionList)
{
waitCount++;
}
- else if (connectionState->phase == MULTI_CONNECTION_PHASE_ERROR)
- {
- /*
- * Here we count the connections establishments that failed and that
- * we won't wait anymore.
- */
- IncrementStatCounterForMyDb(STAT_CONNECTION_ESTABLISHMENT_FAILED);
- }
}
/* prepare space for socket events */
@@ -1053,11 +1025,6 @@ FinishConnectionListEstablishment(List *multiConnectionList)
if (event->events & WL_POSTMASTER_DEATH)
{
- /*
- * Here we don't increment the connection stat counter for the
- * optional failed connections because this is not a connection
- * failure, but a postmaster death in the local node.
- */
ereport(ERROR, (errmsg("postmaster was shut down, exiting")));
}
@@ -1074,12 +1041,6 @@ FinishConnectionListEstablishment(List *multiConnectionList)
* reset the memory context
*/
MemoryContextDelete(MemoryContextSwitchTo(oldContext));
- /*
- * Similarly, we don't increment the connection stat counter for the
- * failed connections here because this is not a connection failure
- * but a cancellation request is received.
- */
return;
}
@@ -1110,7 +1071,6 @@ FinishConnectionListEstablishment(List *multiConnectionList)
eventMask, NULL);
if (!success)
{
- IncrementStatCounterForMyDb(STAT_CONNECTION_ESTABLISHMENT_FAILED);
ereport(ERROR, (errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("connection establishment for node %s:%d "
"failed", connection->hostname,
@@ -1127,15 +1087,7 @@ FinishConnectionListEstablishment(List *multiConnectionList)
*/
if (connectionState->phase == MULTI_CONNECTION_PHASE_CONNECTED)
{
- /*
- * Since WaitEventSetFromMultiConnectionStates() only adds the
- * connections that we haven't completed the connection
- * establishment yet, here we always have a new connection.
- * In other words, at this point, we surely know that we're
- * not dealing with a cached connection.
- */
- bool newConnection = true;
- MarkConnectionConnected(connectionState->connection, newConnection);
+ MarkConnectionConnected(connectionState->connection);
}
}
}
@@ -1208,7 +1160,7 @@ static void
CloseNotReadyMultiConnectionStates(List *connectionStates)
{
MultiConnectionPollState *connectionState = NULL;
- foreach_declared_ptr(connectionState, connectionStates)
+ foreach_ptr(connectionState, connectionStates)
{
MultiConnection *connection = connectionState->connection;
@@ -1219,8 +1171,6 @@ CloseNotReadyMultiConnectionStates(List *connectionStates)
/* close connection, otherwise we take up resource on the other side */
CitusPQFinish(connection);
- IncrementStatCounterForMyDb(STAT_CONNECTION_ESTABLISHMENT_FAILED);
}
}
@@ -1633,7 +1583,7 @@ RemoteTransactionIdle(MultiConnection *connection)
* establishment time when necessary.
*/
void
- MarkConnectionConnected(MultiConnection *connection, bool newConnection)
+ MarkConnectionConnected(MultiConnection *connection)
{
connection->connectionState = MULTI_CONNECTION_CONNECTED;
@@ -1641,11 +1591,6 @@ MarkConnectionConnected(MultiConnection *connection, bool newConnection)
{
INSTR_TIME_SET_CURRENT(connection->connectionEstablishmentEnd);
}
- if (newConnection)
- {
- IncrementStatCounterForMyDb(STAT_CONNECTION_ESTABLISHMENT_SUCCEEDED);
- }
}

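Two independent changes are interleaved in the connection_management.c hunks above. The IncrementStatCounterForMyDb() calls (and the stats/stat_counters.h include) disappear because the connection stat-counter subsystem only exists on the newer branch, so the backport strips them along with their explanatory comments; as a side effect, MarkConnectionConnected() loses its newConnection parameter. Separately, WaitEventSetTracker_compat gives way to a plain CurrentMemoryContext argument: PostgreSQL 17 changed CreateWaitEventSet() to take a ResourceOwner rather than a MemoryContext, and 12.1 predates Citus's PG17 support. A plausible shape for such a compat shim (the exact Citus definition may differ):

#include "postgres.h"
#include "utils/memutils.h"
#include "utils/resowner.h"

#if PG_VERSION_NUM >= 170000
/* PG17+: CreateWaitEventSet(ResourceOwner resowner, int nevents) */
#define WaitEventSetTracker_compat CurrentResourceOwner
#else
/* PG16 and earlier: CreateWaitEventSet(MemoryContext context, int nevents) */
#define WaitEventSetTracker_compat CurrentMemoryContext
#endif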
View File

@@ -303,8 +303,8 @@ EnsureConnectionPossibilityForRemotePrimaryNodes(void)
* seem to cause any problems as none of the placements that we are
* going to access would be on the new node.
*/
- List *remoteNodeList = ActivePrimaryRemoteNodeList(NoLock);
- EnsureConnectionPossibilityForNodeList(remoteNodeList);
+ List *primaryNodeList = ActivePrimaryRemoteNodeList(NoLock);
+ EnsureConnectionPossibilityForNodeList(primaryNodeList);
}
@@ -360,7 +360,7 @@ EnsureConnectionPossibilityForNodeList(List *nodeList)
nodeList = SortList(nodeList, CompareWorkerNodes);
WorkerNode *workerNode = NULL;
- foreach_declared_ptr(workerNode, nodeList)
+ foreach_ptr(workerNode, nodeList)
{
bool waitForConnection = true;
EnsureConnectionPossibilityForNode(workerNode, waitForConnection);

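EnsureConnectionPossibilityForNodeList sorts the node list before reserving a shared connection per node, so every backend visits nodes in the same order; with a bounded pool of shared connection slots, a consistent visit order avoids two backends each holding a slot the other still needs, in the spirit of lock-ordering. An illustration with plain qsort and a simplified node type (CompareWorkerNodes itself appears to order by node name, then port):

#include <stdlib.h>
#include <string.h>

typedef struct Node { char host[64]; int port; } Node;

static int
CompareNodesSketch(const void *a, const void *b)
{
	const Node *left = (const Node *) a;
	const Node *right = (const Node *) b;
	int byHost = strcmp(left->host, right->host);

	return byHost != 0 ? byHost : (left->port - right->port);
}

static void
SortNodesSketch(Node *nodes, int count)
{
	/* deterministic order: all backends reserve connections node-by-node
	 * in the same sequence */
	qsort(nodes, count, sizeof(Node), CompareNodesSketch);
}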
View File

@@ -370,7 +370,7 @@ AssignPlacementListToConnection(List *placementAccessList, MultiConnection *conn
const char *userName = connection->user;
ShardPlacementAccess *placementAccess = NULL;
- foreach_declared_ptr(placementAccess, placementAccessList)
+ foreach_ptr(placementAccess, placementAccessList)
{
ShardPlacement *placement = placementAccess->placement;
ShardPlacementAccessType accessType = placementAccess->accessType;
@@ -533,7 +533,7 @@ FindPlacementListConnection(int flags, List *placementAccessList, const char *us
* suitable connection found for a placement in the placementAccessList.
*/
ShardPlacementAccess *placementAccess = NULL;
- foreach_declared_ptr(placementAccess, placementAccessList)
+ foreach_ptr(placementAccess, placementAccessList)
{
ShardPlacement *placement = placementAccess->placement;
ShardPlacementAccessType accessType = placementAccess->accessType;

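The placement_connection.c loops above implement connection affinity: every access to a given shard placement within a transaction must reuse the connection that placement is already assigned to, otherwise one connection's uncommitted writes would be invisible to reads sent over another. A stripped-down sketch of the lookup shape, with stand-in types instead of the Citus structs and hash table:

#include <stddef.h>
#include <stdint.h>

typedef struct Conn Conn;	/* opaque stand-in for MultiConnection */

typedef struct PlacementAccess
{
	uint64_t placementId;
	Conn *assignedConnection;	/* simplified: Citus tracks this in a hash */
} PlacementAccess;

/*
 * Return a connection that some placement in the list is already pinned
 * to, or NULL so the caller may pick or open a fresh one.
 */
static Conn *
FindPlacementListConnectionSketch(PlacementAccess *accesses, int count)
{
	for (int i = 0; i < count; i++)
	{
		if (accesses[i].assignedConnection != NULL)
		{
			return accesses[i].assignedConnection;
		}
	}

	return NULL;
}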
View File

@@ -392,7 +392,7 @@ void
ExecuteCriticalRemoteCommandList(MultiConnection *connection, List *commandList)
{
const char *command = NULL;
- foreach_declared_ptr(command, commandList)
+ foreach_ptr(command, commandList)
{
ExecuteCriticalRemoteCommand(connection, command);
}
@@ -435,7 +435,7 @@ ExecuteRemoteCommandInConnectionList(List *nodeConnectionList, const char *comma
{
MultiConnection *connection = NULL;
- foreach_declared_ptr(connection, nodeConnectionList)
+ foreach_ptr(connection, nodeConnectionList)
{
int querySent = SendRemoteCommand(connection, command);
@@ -446,7 +446,7 @@ ExecuteRemoteCommandInConnectionList(List *nodeConnectionList, const char *comma
}
/* Process the result */
- foreach_declared_ptr(connection, nodeConnectionList)
+ foreach_ptr(connection, nodeConnectionList)
{
bool raiseInterrupts = true;
PGresult *result = GetRemoteCommandResult(connection, raiseInterrupts);
@@ -887,7 +887,7 @@ WaitForAllConnections(List *connectionList, bool raiseInterrupts)
/* convert connection list to an array such that we can move items around */
MultiConnection *connectionItem = NULL;
- foreach_declared_ptr(connectionItem, connectionList)
+ foreach_ptr(connectionItem, connectionList)
{
allConnections[connectionIndex] = connectionItem;
connectionReady[connectionIndex] = false;
@@ -1130,7 +1130,7 @@ BuildWaitEventSet(MultiConnection **allConnections, int totalConnectionCount,
/* allocate pending connections + 2 for the signal latch and postmaster death */
/* (CreateWaitEventSet makes room for pgwin32_signal_event automatically) */
- WaitEventSet *waitEventSet = CreateWaitEventSet(WaitEventSetTracker_compat,
+ WaitEventSet *waitEventSet = CreateWaitEventSet(CurrentMemoryContext,
pendingConnectionCount + 2);
for (int connectionIndex = 0; connectionIndex < pendingConnectionCount;

Some files were not shown because too many files have changed in this diff.