Compare commits

...

7 Commits

Author SHA1 Message Date
Alper Kocatas ad266a2c0a
Add changelog for 13.1.0 (#8006)
Add changelog entries for 13.1.0.

The list of changes was constructed using the following method:

- include the set of commits that appear in a comparison of the
release-13.1 and release-13.0 branches
- exclude changes in 13.0.0, 13.0.1, 13.0.2 and 13.0.3
- exclude changes related to database sharding
- exclude changes that were cherry-picked from main and released before
13.1
- review the remaining commits and exclude non-user-facing commits.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Co-authored-by: Naisila Puka <37271756+naisila@users.noreply.github.com>
2025-06-02 11:10:16 +03:00
Onur Tirtir e42847196a Add skip_qualify_public param to shard_name() to allow qualifying the "public" schema (#8014)
DESCRIPTION: Adds skip_qualify_public param to `shard_name()` UDF to
allow qualifying the "public" schema when needed.
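
A quick usage sketch (table name, shard id, and outputs are taken from the
regression-test changes further down in this comparison):

```sql
-- Default (skip_qualify_public defaults to true): shards of tables in
-- the "public" schema stay unqualified, preserving the old behavior.
SELECT shard_name('t3', 99456908);         -- t3_99456908
SELECT shard_name('t3', 99456908, true);   -- t3_99456908

-- Pass false to qualify the name even for the "public" schema.
SELECT shard_name('t3', 99456908, false);  -- public.t3_99456908
```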
2025-06-02 10:43:44 +03:00
Alper Kocatas b6b415c17a
Bump Citus version to 13.1.0 (#8012)
Bump Citus version to 13.1.0
2025-06-02 10:21:17 +03:00
ibrahim halatci 0355d8c392
bumped CodeQL version to v3 (#7998)
DESCRIPTION: bumped CodeQL version to v3
2025-05-23 14:13:49 +03:00
Naisila Puka af01fa48ec Bump PG versions to 17.5, 16.9, 15.13 (#7986)
Nontrivial bump because of the following PG 15.13 commit
317aba70e
https://github.com/postgres/postgres/commit/317aba70e

Previously, when views were converted to RTE_SUBQUERY, the relid would be
cleared in PG15. With this PG15 patch, the relid is retained, so we add a
check on the "relkind and rtekind" to identify the converted views in
15.13.

Sister PR https://github.com/citusdata/the-process/pull/164
Using the dev image SHA because I encountered the libpq
symlink issue again with "-v219b87c".
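
To make the converted-view scenario concrete, here is a hypothetical SQL
sketch (table and view names are invented; the real regression coverage is
in the Citus test suite). When the rewriter turns the view into a subquery
RTE, PG 15.13 now retains the RTE's relid, which is what the new
`rtekind == RTE_SUBQUERY && relkind == 0` check in the planner diff below
detects:

```sql
-- Hypothetical illustration of a view over a distributed table.
CREATE TABLE dist_events (id bigint, payload text);
SELECT create_distributed_table('dist_events', 'id');

CREATE VIEW recent_events AS
SELECT id, payload FROM dist_events;

-- During planning, recent_events is converted to an RTE_SUBQUERY; on
-- PG 15.13 that RTE keeps its relid, unlike earlier PG 15 minor versions.
SELECT count(*) FROM recent_events;
```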
2025-05-22 15:06:42 +02:00
Onur Tirtir 1cb2462818 Fix unsafe memory access in citus_unmark_object_distributed() (#7985)
_Since we've never released a Citus version that contains the commit
that introduced this bug (see #7461), we don't need to have a
DESCRIPTION line that shows up in the release changelog._

From the 8 valgrind test targets run for release-13.1 with PG 17.5, we got
1344 stack traces, and except for one of them, they were all about the
unsafe memory access shown below, because this is a very hot code path that
we execute via our drop trigger.

On main, even `make -C src/test/regress/ check-base-vg` dumps this stack
trace with PG 16/17 to src/test/regress/citus_valgrind_test_log.txt when
executing "multi_cluster_management"; with this PR, that is no longer the
case.

```c
==27337== VALGRINDERROR-BEGIN
==27337== Conditional jump or move depends on uninitialised value(s)
==27337==    at 0x7E26B68: citus_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:113)
==27337==    by 0x7E26CC7: master_unmark_object_distributed (home/onurctirtir/citus/src/backend/distributed/metadata/distobject.c:153)
==27337==    by 0x4BD852: ExecInterpExpr (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:758)
==27337==    by 0x4BFD00: ExecInterpExprStillValid (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execExprInterp.c:1870)
==27337==    by 0x51D82C: ExecEvalExprSwitchContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:355)
==27337==    by 0x51D8A4: ExecProject (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:389)
==27337==    by 0x51DADB: ExecResult (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeResult.c:136)
==27337==    by 0x4D72ED: ExecProcNodeFirst (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:464)
==27337==    by 0x4CA394: ExecProcNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/../../../src/include/executor/executor.h:273)
==27337==    by 0x4CD34C: ExecutePlan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:1670)
==27337==    by 0x4CAA7C: standard_ExecutorRun (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execMain.c:365)
==27337==    by 0x7E1E475: CitusExecutorRun (home/onurctirtir/citus/src/backend/distributed/executor/multi_executor.c:238)
==27337==  Uninitialised value was created by a heap allocation
==27337==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==27337==    by 0x9AB1F7: AllocSetContextCreateInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/utils/mmgr/aset.c:438)
==27337==    by 0x4E0D56: CreateExprContextInternal (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:261)
==27337==    by 0x4E0E3E: CreateExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:311)
==27337==    by 0x4E10D9: ExecAssignExprContext (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execUtils.c:490)
==27337==    by 0x51EE09: ExecInitSeqScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSeqscan.c:147)
==27337==    by 0x4D6CE1: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:210)
==27337==    by 0x5243C7: ExecInitSubqueryScan (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSubqueryscan.c:126)
==27337==    by 0x4D6DD9: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:250)
==27337==    by 0x4F05B2: ExecInitAppend (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeAppend.c:223)
==27337==    by 0x4D6C46: ExecInitNode (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/execProcnode.c:182)
==27337==    by 0x52003D: ExecInitSetOp (home/onurctirtir/.pgenv/src/postgresql-16.2/src/backend/executor/nodeSetOp.c:530)
==27337== 
==27337== VALGRINDERROR-END
```
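
For context, the unsafe read happened because the C function consulted its
4th argument even when called through the 3-argument SQL signature. A
hedged sketch of the two SQL-level declarations that share this C entry
point (signatures are taken from the extension-diff test output further
below; the exact DDL here is illustrative, not the shipped migration):

```sql
-- Illustrative only (not the shipped migration): two SQL-level
-- signatures dispatching to the same C code. The 3-argument form passes
-- no 4th argument, so the C function must consult PG_NARGS() before
-- reading it; PG_ARGISNULL(3) would touch memory that was never
-- initialized for this call.
CREATE FUNCTION pg_catalog.master_unmark_object_distributed(
    classid oid, objid oid, objsubid int)
RETURNS void LANGUAGE C
AS 'MODULE_PATHNAME', $$master_unmark_object_distributed$$;

CREATE FUNCTION pg_catalog.citus_unmark_object_distributed(
    classid oid, objid oid, objsubid int,
    checkobjectexistence boolean DEFAULT true)
RETURNS void LANGUAGE C
AS 'MODULE_PATHNAME', $$citus_unmark_object_distributed$$;
```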
2025-05-20 17:49:14 +03:00
Alper Kocatas 662628fe7d Add citus_nodes view (#7968)
DESCRIPTION: Adds `citus_nodes` view that displays the node name, port,
role, and active status for nodes in the cluster.

This PR adds the `citus_nodes` view, which displays the node name, port,
role, and active status of each node in the `pg_dist_node` table. The
view is created in the `citus` schema and then moved to the `pg_catalog`
schema, and `SELECT` on it is granted to the `PUBLIC` role.

Test cases were added to the `multi_cluster_management` tests.

structs.py was modified to add whitespace, as required by `citus_indent`.
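
A brief usage sketch (the result shapes follow the
`multi_cluster_management` expected output later in this comparison):

```sql
-- All nodes with their name, port, role, and active status.
SELECT * FROM citus_nodes;

-- Coordinators only; the role is derived from groupid = 0 in pg_dist_node.
SELECT * FROM citus_nodes WHERE role = 'coordinator';

-- Active workers only (active is a boolean column).
SELECT * FROM citus_nodes WHERE role = 'worker' AND active;
```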

---------

Co-authored-by: Alper Kocatas <alperkocatas@microsoft.com>
2025-05-20 17:49:14 +03:00
22 changed files with 402 additions and 32 deletions

View File

@@ -73,7 +73,7 @@ USER citus
# build postgres versions separately for effective parallelism and caching of already built versions when changing only certain versions
FROM base AS pg15
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.12
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.13
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
@@ -85,7 +85,7 @@ RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg16
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.8
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.9
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
@@ -97,7 +97,7 @@ RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg17
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.4
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.5
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
@@ -216,7 +216,7 @@ COPY --chown=citus:citus .psqlrc .
RUN sudo chown --from=root:root citus:citus -R ~
# sets default pg version
RUN pgenv switch 17.4
RUN pgenv switch 17.5
# make connecting to the coordinator easy
ENV PGPORT=9700

View File

@@ -31,12 +31,12 @@ jobs:
pgupgrade_image_name: "ghcr.io/citusdata/pgupgradetester"
style_checker_image_name: "ghcr.io/citusdata/stylechecker"
style_checker_tools_version: "0.8.18"
sql_snapshot_pg_version: "17.4"
image_suffix: "-veab367a"
pg15_version: '{ "major": "15", "full": "15.12" }'
pg16_version: '{ "major": "16", "full": "16.8" }'
pg17_version: '{ "major": "17", "full": "17.4" }'
upgrade_pg_versions: "15.12-16.8-17.4"
sql_snapshot_pg_version: "17.5"
image_suffix: "-dev-d28f316"
pg15_version: '{ "major": "15", "full": "15.13" }'
pg16_version: '{ "major": "16", "full": "16.9" }'
pg17_version: '{ "major": "17", "full": "17.5" }'
upgrade_pg_versions: "15.13-16.9-17.5"
steps:
# Since GHA jobs need at least one step we use a noop step here.
- name: Set up parameters

View File

@@ -24,7 +24,7 @@ jobs:
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
@@ -76,4 +76,4 @@ jobs:
sudo make install-all
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
uses: github/codeql-action/analyze@v3

View File

@@ -1,3 +1,130 @@
### citus v13.1.0 (May 30th, 2025) ###
* Adds `citus_stat_counters` view that can be used to query the stat
counters that Citus collects while the feature is enabled, which is
controlled by `citus.enable_stat_counters`. `citus_stat_counters()` can be
used to query the stat counters for the provided database oid, and
`citus_stat_counters_reset()` can be used to reset them for the provided
database oid, or for the current database if nothing or 0 is provided
(#7917; a usage sketch follows this list)
* Adds `citus_nodes` view that displays the node name, port, role, and
active status for nodes in the cluster (#7968)
* Adds `citus_is_primary_node()` UDF to determine if the current node is a
primary node in the cluster (#7720)
* Adds support for propagating `GRANT/REVOKE` rights on table columns (#7918)
* Adds support for propagating `REASSIGN OWNED BY` commands (#7319)
* Adds support for propagating `CREATE`/`DROP` database from all nodes (#7240,
#7253, #7359)
* Propagates `SECURITY LABEL ON ROLE` statement from any node (#7508)
* Adds support for issuing role management commands from worker nodes (#7278)
* Adds support for propagating `ALTER USER RENAME` commands (#7204)
* Adds support for propagating `ALTER DATABASE <db_name> SET ..` commands
(#7181)
* Adds support for propagating `SECURITY LABEL` on tables and columns (#7956)
* Adds support for propagating `COMMENT ON <database>/<role>` commands (#7388)
* Moves some of the internal citus functions from `pg_catalog` to
`citus_internal` schema (#7473, #7470, #7466, #7456, #7450)
* Adjusts `max_prepared_transactions` only when it's set to default on PG >= 16
(#7712)
* Adds skip_qualify_public param to shard_name() UDF to allow qualifying
the "public" schema when needed (#8014)
* Allows `citus_*_size` on indexes on distributed tables (#7271)
* Allows `GRANT ADMIN` to now also be `INHERIT` or `SET` in support of PG16
* Makes sure `worker_copy_table_to_node` errors out with Citus tables (#7662)
* Adds information to explain output when using
`citus.explain_distributed_queries=false` (#7412)
* Logs username in the failed connection message (#7432)
* Makes sure to avoid incorrectly pushing down the outer joins between
distributed tables and recurring relations (like reference tables, local
tables and `VALUES(..)` etc.) prior to PG 17 (#7937)
* Prevents incorrectly pushing `nextval()` call down to workers to avoid using
incorrect sequence value for some types of `INSERT .. SELECT`s (#7976)
* Makes sure to prevent `INSERT INTO ... SELECT` queries involving a subfield
or sublink, to avoid crashes (#7912)
* Makes sure to take improvement_threshold into account
in `citus_add_rebalance_strategy()` (#7247)
* Makes sure to disallow creating a replicated distributed
table concurrently (#7219)
* Fixes a bug that caused the `CASCADE` clause to be omitted from the commands
sent to workers for `REVOKE` commands on tables (#7958)
* Fixes an issue detected using address sanitizer (#7948, #7949)
* Fixes a bug in deparsing of shard query in case of "output-table column" name
conflict (#7932)
* Fixes a crash in columnar custom scan that happens when a columnar table is
used in a join (#7703)
* Fixes `MERGE` command when insert value does not have source distributed
column (#7627)
* Fixes performance issue when using `\d tablename` on a server with many
tables (#7577)
* Fixes performance issue in `GetForeignKeyOids` on systems with many
constraints (#7580)
* Fixes performance issue when distributing a table that depends on an
extension (#7574)
* Fixes performance issue when creating distributed tables if many already
exist (#7575)
* Fixes a crash caused by some form of `ALTER TABLE ADD COLUMN` statements.
When adding multiple columns, if one of the `ADD COLUMN` statements contains
a `FOREIGN` constraint omitting the referenced
columns in the statement, a `SEGFAULT` occurs (#7522)
* Fixes assertion failure in maintenance daemon during Citus upgrades (#7537)
* Fixes segmentation fault when using `CASE WHEN` in `DO` block functions
(#7554)
* Fixes undefined behavior in `master_disable_node` due to argument mismatch
(#7492)
* Fixes incorrect propagation of `GRANTED BY` and `CASCADE/RESTRICT` clauses
for `REVOKE` statements (#7451)
* Fixes the incorrect column count after `ALTER TABLE` (#7379)
* Fixes timeout when underlying socket is changed for an inter-node connection
(#7377)
* Fixes memory leaks (#7441, #7440)
* Fixes leaking of memory and memory contexts when tracking foreign keys between
Citus tables (#7236)
* Fixes a potential segfault for background rebalancer (#7694)
* Fixes potential `NULL` dereference in causal clocks (#7704)
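
As referenced in the first entry of this list, a hedged usage sketch for
the stat counters feature (the GUC and function names come from the
changelog entries and the extension-diff test output in this comparison;
the view's column list is not shown here, so `SELECT *` is used):

```sql
-- Enable collection; stat counters are gathered only while this GUC is on.
SET citus.enable_stat_counters TO on;

-- Query the collected counters via the view.
SELECT * FROM citus_stat_counters;

-- Reset counters: 0 (or no oid) means the current database.
SELECT citus_stat_counters_reset(0);
```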
### citus v13.0.3 (March 20th, 2025) ###
* Fixes a version bump issue in 13.0.2
@@ -100,9 +227,8 @@
* Allows overwriting host name for all inter-node connections by
supporting "host" parameter in citus.node_conninfo (#7541)
* Changes the order in which the locks are acquired for the target and
reference tables, when a modify request is initiated from a worker
node that is not the "FirstWorkerNode" (#7542)
* Avoids distributed deadlocks by changing the order in which the locks are
acquired for the target and reference tables (#7542)
* Fixes a performance issue when distributing a table that depends on an
extension (#7574)

configure (vendored, 18 lines changed)
View File

@@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 13.1devel.
# Generated by GNU Autoconf 2.69 for Citus 13.1.0.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='13.1devel'
PACKAGE_STRING='Citus 13.1devel'
PACKAGE_VERSION='13.1.0'
PACKAGE_STRING='Citus 13.1.0'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@@ -1262,7 +1262,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 13.1devel to adapt to many kinds of systems.
\`configure' configures Citus 13.1.0 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1324,7 +1324,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 13.1devel:";;
short | recursive ) echo "Configuration of Citus 13.1.0:";;
esac
cat <<\_ACEOF
@@ -1429,7 +1429,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 13.1devel
Citus configure 13.1.0
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@@ -1912,7 +1912,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 13.1devel, which was
It was created by Citus $as_me 13.1.0, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@@ -5393,7 +5393,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 13.1devel, which was
This file was extended by Citus $as_me 13.1.0, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@@ -5455,7 +5455,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 13.1devel
Citus config.status 13.1.0
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [13.1devel])
AC_INIT([Citus], [13.1.0])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands

View File

@@ -109,13 +109,20 @@ citus_unmark_object_distributed(PG_FUNCTION_ARGS)
Oid classid = PG_GETARG_OID(0);
Oid objid = PG_GETARG_OID(1);
int32 objsubid = PG_GETARG_INT32(2);
/*
* The SQL function master_unmark_object_distributed doesn't take a
* 4th argument, but the SQL function citus_unmark_object_distributed
* does, as its checkobjectexistence argument. For this reason, we
* read the 4th argument only if this C function was called with 4
* arguments.
*/
bool checkObjectExistence = true;
if (!PG_ARGISNULL(3))
if (PG_NARGS() == 4)
{
checkObjectExistence = PG_GETARG_BOOL(3);
}
ObjectAddress address = { 0 };
ObjectAddressSubSet(address, classid, objid, objsubid);

View File

@@ -2245,6 +2245,18 @@ SelectsFromDistributedTable(List *rangeTableList, Query *query)
continue;
}
#if PG_VERSION_NUM >= 150013 && PG_VERSION_NUM < PG_VERSION_16
if (rangeTableEntry->rtekind == RTE_SUBQUERY && rangeTableEntry->relkind == 0)
{
/*
* In PG15.13 commit https://github.com/postgres/postgres/commit/317aba70e
* relid is retained when converting views to subqueries,
* so we need an extra check identifying those views
*/
continue;
}
#endif
if (rangeTableEntry->relkind == RELKIND_VIEW ||
rangeTableEntry->relkind == RELKIND_MATVIEW)
{

View File

@@ -962,6 +962,7 @@ shard_name(PG_FUNCTION_ARGS)
Oid relationId = PG_GETARG_OID(0);
int64 shardId = PG_GETARG_INT64(1);
bool skipQualifyPublic = PG_GETARG_BOOL(2);
char *qualifiedName = NULL;
@@ -991,7 +992,7 @@
Oid schemaId = get_rel_namespace(relationId);
char *schemaName = get_namespace_name(schemaId);
if (strncmp(schemaName, "public", NAMEDATALEN) == 0)
if (skipQualifyPublic && strncmp(schemaName, "public", NAMEDATALEN) == 0)
{
qualifiedName = (char *) quote_identifier(relationName);
}

View File

@@ -50,3 +50,11 @@ DROP VIEW IF EXISTS pg_catalog.citus_lock_waits;
#include "udfs/citus_is_primary_node/13.1-1.sql"
#include "udfs/citus_stat_counters/13.1-1.sql"
#include "udfs/citus_stat_counters_reset/13.1-1.sql"
#include "udfs/citus_nodes/13.1-1.sql"
-- Since shard_name/13.1-1.sql first drops the function and then creates it, we first
-- need to drop citus_shards view since that view depends on this function. And immediately
-- after creating the function, we recreate citus_shards view again.
DROP VIEW pg_catalog.citus_shards;
#include "udfs/shard_name/13.1-1.sql"
#include "udfs/citus_shards/12.0-1.sql"

View File

@@ -45,3 +45,25 @@ DROP FUNCTION citus_internal.is_replication_origin_tracking_active();
DROP VIEW pg_catalog.citus_stat_counters;
DROP FUNCTION pg_catalog.citus_stat_counters(oid);
DROP FUNCTION pg_catalog.citus_stat_counters_reset(oid);
DROP VIEW IF EXISTS pg_catalog.citus_nodes;
-- The definition of shard_name() prior to this release doesn't have a separate SQL file
-- because it's quite an old UDF whose prior definition(s) was (were) squashed into
-- citus--8.0-1.sql. For this reason, to downgrade it, here we directly execute its old
-- definition instead of including it from a separate file.
--
-- Before dropping and recreating the function, we also need to drop the citus_shards
-- view, since that view depends on the function. Immediately after creating the
-- function, we recreate the citus_shards view again.
DROP VIEW pg_catalog.citus_shards;
DROP FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint, skip_qualify_public boolean);
CREATE FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint)
RETURNS text
LANGUAGE C STABLE STRICT
AS 'MODULE_PATHNAME', $$shard_name$$;
COMMENT ON FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint)
IS 'returns schema-qualified, shard-extended identifier of object name';
#include "../udfs/citus_shards/12.0-1.sql"

View File

@@ -0,0 +1,18 @@
SET search_path = 'pg_catalog';
DROP VIEW IF EXISTS pg_catalog.citus_nodes;
CREATE OR REPLACE VIEW citus.citus_nodes AS
SELECT
nodename,
nodeport,
CASE
WHEN groupid = 0 THEN 'coordinator'
ELSE 'worker'
END AS role,
isactive AS active
FROM pg_dist_node;
ALTER VIEW citus.citus_nodes SET SCHEMA pg_catalog;
GRANT SELECT ON pg_catalog.citus_nodes TO PUBLIC;
RESET search_path;

View File

@@ -0,0 +1,18 @@
SET search_path = 'pg_catalog';
DROP VIEW IF EXISTS pg_catalog.citus_nodes;
CREATE OR REPLACE VIEW citus.citus_nodes AS
SELECT
nodename,
nodeport,
CASE
WHEN groupid = 0 THEN 'coordinator'
ELSE 'worker'
END AS role,
isactive AS active
FROM pg_dist_node;
ALTER VIEW citus.citus_nodes SET SCHEMA pg_catalog;
GRANT SELECT ON pg_catalog.citus_nodes TO PUBLIC;
RESET search_path;

View File

@@ -0,0 +1,8 @@
-- skip_qualify_public is set to true by default just for backward compatibility
DROP FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint);
CREATE FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint, skip_qualify_public boolean DEFAULT true)
RETURNS text
LANGUAGE C STABLE STRICT
AS 'MODULE_PATHNAME', $$shard_name$$;
COMMENT ON FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint, skip_qualify_public boolean)
IS 'returns schema-qualified, shard-extended identifier of object name';

View File

@@ -0,0 +1,8 @@
-- skip_qualify_public is set to true by default just for backward compatibility
DROP FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint);
CREATE FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint, skip_qualify_public boolean DEFAULT true)
RETURNS text
LANGUAGE C STABLE STRICT
AS 'MODULE_PATHNAME', $$shard_name$$;
COMMENT ON FUNCTION pg_catalog.shard_name(object_name regclass, shard_id bigint, skip_qualify_public boolean)
IS 'returns schema-qualified, shard-extended identifier of object name';

View File

@@ -33,5 +33,33 @@ SELECT * FROM citus_shards;
t1 | 99456903 | citus_shards.t1_99456903 | distributed | 456900 | localhost | 57638 | 8192
(8 rows)
SET search_path TO public;
CREATE TABLE t3 (i int);
SELECT citus_add_local_table_to_metadata('t3');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
SELECT shard_name('t3', shardid) FROM pg_dist_shard WHERE logicalrelid = 't3'::regclass;
shard_name
---------------------------------------------------------------------
t3_99456908
(1 row)
SELECT shard_name('t3', shardid, true) FROM pg_dist_shard WHERE logicalrelid = 't3'::regclass;
shard_name
---------------------------------------------------------------------
t3_99456908
(1 row)
SELECT shard_name('t3', shardid, false) FROM pg_dist_shard WHERE logicalrelid = 't3'::regclass;
shard_name
---------------------------------------------------------------------
public.t3_99456908
(1 row)
DROP TABLE t3;
SET search_path TO citus_shards;
SET client_min_messages TO WARNING;
DROP SCHEMA citus_shards CASCADE;

View File

@@ -72,6 +72,45 @@ SELECT master_get_active_worker_nodes();
(localhost,57637)
(2 rows)
-- get all nodes
SELECT * from citus_nodes;
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57637 | worker | t
localhost | 57638 | worker | t
localhost | 57636 | coordinator | t
(3 rows)
-- get active nodes
SELECT * from citus_nodes where active = 't';
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57637 | worker | t
localhost | 57638 | worker | t
localhost | 57636 | coordinator | t
(3 rows)
-- get coordinator nodes
SELECT * from citus_nodes where role = 'coordinator';
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57636 | coordinator | t
(1 row)
-- get worker nodes
SELECT * from citus_nodes where role = 'worker';
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57637 | worker | t
localhost | 57638 | worker | t
(2 rows)
-- get nodes with unknown role
SELECT * from citus_nodes where role = 'foo';
nodename | nodeport | role | active
---------------------------------------------------------------------
(0 rows)
-- try to add a node that is already in the cluster
SELECT * FROM master_add_node('localhost', :worker_1_port);
master_add_node
@@ -126,6 +165,34 @@ SELECT master_get_active_worker_nodes();
(localhost,57637)
(1 row)
-- get active nodes
SELECT * from citus_nodes where active = 't';
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57636 | coordinator | t
localhost | 57637 | worker | t
(2 rows)
-- get inactive nodes
SELECT * from citus_nodes where active = 'f';
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57638 | worker | f
(1 row)
-- make sure non-superusers can access the view
CREATE ROLE normaluser;
SET ROLE normaluser;
SELECT * FROM citus_nodes;
nodename | nodeport | role | active
---------------------------------------------------------------------
localhost | 57636 | coordinator | t
localhost | 57638 | worker | f
localhost | 57637 | worker | t
(3 rows)
SET ROLE postgres;
DROP ROLE normaluser;
-- add some shard placements to the cluster
SET citus.shard_count TO 16;
SET citus.shard_replication_factor TO 1;

View File

@@ -1455,6 +1455,7 @@ SELECT * FROM multi_extension.print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
function citus_unmark_object_distributed(oid,oid,integer) void |
function shard_name(regclass,bigint) text |
| function citus_internal.acquire_citus_advisory_object_class_lock(integer,cstring) void
| function citus_internal.add_colocation_metadata(integer,integer,integer,regtype,oid) void
| function citus_internal.add_object_metadata(text,text[],text[],integer,integer,boolean) void
@@ -1483,15 +1484,17 @@ SELECT * FROM multi_extension.print_extension_changes();
| function citus_stat_counters(oid) SETOF record
| function citus_stat_counters_reset(oid) void
| function citus_unmark_object_distributed(oid,oid,integer,boolean) void
| function shard_name(regclass,bigint,boolean) text
| view citus_nodes
| view citus_stat_counters
(30 rows)
(33 rows)
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
13.1devel
13.1.0
(1 row)
-- ensure no unexpected objects were created outside pg_catalog

View File

@@ -289,7 +289,7 @@ ORDER BY 1;
function run_command_on_placements(regclass,text,boolean)
function run_command_on_shards(regclass,text,boolean)
function run_command_on_workers(text,boolean)
function shard_name(regclass,bigint)
function shard_name(regclass,bigint,boolean)
function start_metadata_sync_to_all_nodes()
function start_metadata_sync_to_node(text,integer)
function stop_metadata_sync_to_node(text,integer,boolean)
@@ -380,6 +380,7 @@ ORDER BY 1;
view citus_dist_stat_activity
view citus_lock_waits
view citus_locks
view citus_nodes
view citus_schema.citus_schemas
view citus_schema.citus_tables
view citus_shard_indexes_on_worker
@@ -392,6 +393,6 @@ ORDER BY 1;
view citus_stat_tenants_local
view pg_dist_shard_placement
view time_partitions
(361 rows)
(362 rows)
DROP TABLE extension_basic_types;

View File

@@ -137,18 +137,21 @@ class Message:
class SharedMessage(Message, metaclass=MessageMeta):
"A message which could be sent by either the frontend or the backend"
_msgtypes = dict()
_classes = dict()
class FrontendMessage(Message, metaclass=MessageMeta):
"A message which will only be sent be a backend"
_msgtypes = dict()
_classes = dict()
class BackendMessage(Message, metaclass=MessageMeta):
"A message which will only be sent be a frontend"
_msgtypes = dict()
_classes = dict()

View File

@@ -13,5 +13,16 @@ INSERT INTO t1 SELECT generate_series(1, 100);
INSERT INTO "t with space" SELECT generate_series(1, 1000);
SELECT * FROM citus_shards;
SET search_path TO public;
CREATE TABLE t3 (i int);
SELECT citus_add_local_table_to_metadata('t3');
SELECT shard_name('t3', shardid) FROM pg_dist_shard WHERE logicalrelid = 't3'::regclass;
SELECT shard_name('t3', shardid, true) FROM pg_dist_shard WHERE logicalrelid = 't3'::regclass;
SELECT shard_name('t3', shardid, false) FROM pg_dist_shard WHERE logicalrelid = 't3'::regclass;
DROP TABLE t3;
SET search_path TO citus_shards;
SET client_min_messages TO WARNING;
DROP SCHEMA citus_shards CASCADE;

View File

@@ -32,6 +32,21 @@ SELECT result FROM run_command_on_workers('SELECT citus_is_primary_node()');
-- get the active nodes
SELECT master_get_active_worker_nodes();
-- get all nodes
SELECT * from citus_nodes;
-- get active nodes
SELECT * from citus_nodes where active = 't';
-- get coordinator nodes
SELECT * from citus_nodes where role = 'coordinator';
-- get worker nodes
SELECT * from citus_nodes where role = 'worker';
-- get nodes with unknown role
SELECT * from citus_nodes where role = 'foo';
-- try to add a node that is already in the cluster
SELECT * FROM master_add_node('localhost', :worker_1_port);
@@ -51,6 +66,20 @@ SELECT citus_disable_node('localhost', :worker_2_port);
SELECT public.wait_until_metadata_sync(20000);
SELECT master_get_active_worker_nodes();
-- get active nodes
SELECT * from citus_nodes where active = 't';
-- get inactive nodes
SELECT * from citus_nodes where active = 'f';
-- make sure non-superusers can access the view
CREATE ROLE normaluser;
SET ROLE normaluser;
SELECT * FROM citus_nodes;
SET ROLE postgres;
DROP ROLE normaluser;
-- add some shard placements to the cluster
SET citus.shard_count TO 16;
SET citus.shard_replication_factor TO 1;