diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index eaec55c3e..e1900642d 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -175,7 +175,7 @@ that are missing in earlier minor versions. ### Following our coding conventions -CircleCI will automatically reject any PRs which do not follow our coding +Our CI pipeline will automatically reject any PRs which do not follow our coding conventions. The easiest way to ensure your PR adheres to those conventions is to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify) tool. This tool uses `uncrustify` under the hood. diff --git a/ci/check_gucs_are_alphabetically_sorted.sh b/ci/check_gucs_are_alphabetically_sorted.sh index 763b5305f..214a5c9cf 100755 --- a/ci/check_gucs_are_alphabetically_sorted.sh +++ b/ci/check_gucs_are_alphabetically_sorted.sh @@ -4,7 +4,22 @@ set -euo pipefail # shellcheck disable=SC1091 source ci/ci_helpers.sh -# extract citus gucs in the form of "citus.X" -grep -o -E "(\.*\"citus\.\w+\")," src/backend/distributed/shared_library_init.c > gucs.out +# Find the line that exactly matches "RegisterCitusConfigVariables(void)" in +# shared_library_init.c. The grep command returns something like +# "934:RegisterCitusConfigVariables(void)" and we extract the line number +# with cut. +RegisterCitusConfigVariables_begin_linenumber=$(grep -n "^RegisterCitusConfigVariables(void)$" src/backend/distributed/shared_library_init.c | cut -d: -f1) + +# Consider the lines starting from $RegisterCitusConfigVariables_begin_linenumber, +# grep the first line that starts with "}" and extract the line number with cut +# as in the previous step. +RegisterCitusConfigVariables_length=$(tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | grep -n -m 1 "^}$" | cut -d: -f1) + +# extract the function definition of RegisterCitusConfigVariables into a temp file +tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | head -n $(($RegisterCitusConfigVariables_length)) > RegisterCitusConfigVariables_func_def.out + +# extract citus gucs in the form of "citus.X" +grep -P "^[\t][\t]\"citus\.[a-zA-Z_0-9]+\"" RegisterCitusConfigVariables_func_def.out > gucs.out sort -c gucs.out rm gucs.out +rm RegisterCitusConfigVariables_func_def.out diff --git a/src/backend/distributed/README.md b/src/backend/distributed/README.md index 225b1f962..6e3d8cf1c 100644 --- a/src/backend/distributed/README.md +++ b/src/backend/distributed/README.md @@ -1749,8 +1749,6 @@ The reason for handling dependencies and deparsing in post-process step is that Not all table DDL is currently deparsed. In that case, the original command sent by the client is used. That is a shortcoming in our DDL logic that causes user-facing issues and should be addressed. We do not directly construct a separate DDL command for each shard. Instead, we call the `worker_apply_shard_ddl_command(shardid bigint, ddl_command text)` function which parses the DDL command, replaces the table names with shard names in the parse tree according to the shard ID, and then executes the command. That also has some shortcomings, because we cannot support more complex DDL commands in this manner (e.g. adding multiple foreign keys). Ideally, all DDL would be deparsed, and for table DDL the deparsed query string would have shard names, similar to regular queries. -`markDistributed` is used to indicate whether we add a record to `pg_dist_object` to mark the object as "distributed".
- ## Defining a new DDL command All commands that are propagated by Citus should be defined in DistributeObjectOps struct. Below is a sample DistributeObjectOps for ALTER DATABASE command that is defined in [distribute_object_ops.c](commands/distribute_object_ops.c) file. @@ -1810,6 +1808,14 @@ GetDistributeObjectOps(Node *node) ... ``` +Finally, when adding support for propagation of a new DDL command, you also need to make sure of the following: +* Use `quote_identifier()` or `quote_literal_cstr()` for fields that might need escaping of special characters or bare quotes when deparsing a DDL command. +* The code is tolerant of nullable fields within the given `Stmt *` object, i.e., the ones that Postgres allows to be omitted entirely. +* You register the object in `pg_dist_object` if it's a CREATE command and delete it from `pg_dist_object` if it's a DROP command. +* Node activation (e.g., `citus_add_node()`) properly propagates the object and its dependencies to new nodes. +* Add test cases for all the scenarios noted above. +* Add test cases for the different options that can be specified for the command. For example, `CREATE DATABASE .. IS_TEMPLATE = TRUE` and `CREATE DATABASE .. IS_TEMPLATE = FALSE` should be tested separately. + ## Object & dependency propagation These two topics are closely related, so we'll discuss them together. You can start the topic by reading [Nils' blog](https://www.citusdata.com/blog/2020/06/25/using-custom-types-with-citus-and-postgres/) on the topic. @@ -1885,7 +1891,7 @@ Generally, the process is straightforward: When a new object is created, Citus a Citus employs a universal strategy for dealing with objects. Every object creation, alteration, or deletion event (like custom types, tables, or extensions) is represented by the C struct `DistributeObjectOps`. You can find a list of all supported object types in [`distribute_object_ops.c`](https://github.com/citusdata/citus/blob/2c190d068918d1c457894adf97f550e5b3739184/src/backend/distributed/commands/distribute_object_ops.c#L4). As of Citus 12.1, most Postgres objects are supported, although there are a few exceptions. -Whenever `DistributeObjectOps->markDistributed` is set to true—usually during `CREATE` operations—Citus calls `MarkObjectDistributed()`. Citus also labels the same objects as distributed across all nodes via the `citus_internal_add_object_metadata()` UDF. +Whenever `DistributeObjectOps->markDistributed` is set to true—usually during `CREATE` operations—Citus calls `MarkObjectDistributed()`. Citus also labels the same objects as distributed across all nodes via the `citus_internal.add_object_metadata()` UDF. Here's a simple example: @@ -1895,7 +1901,7 @@ CREATE TYPE type_test AS (a int, b int); ... NOTICE: issuing SELECT worker_create_or_replace_object('CREATE TYPE public.type_test AS (a integer, b integer);'); ....
- WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; ... -- Then, check pg_dist_object. This should be consistent across all nodes. diff --git a/src/backend/distributed/clock/causal_clock.c b/src/backend/distributed/clock/causal_clock.c index 3d64757e3..eb4b8d9d3 100644 --- a/src/backend/distributed/clock/causal_clock.c +++ b/src/backend/distributed/clock/causal_clock.c @@ -397,7 +397,7 @@ AdjustClocksToTransactionHighest(List *nodeConnectionList, /* Set the clock value on participating worker nodes */ appendStringInfo(queryToSend, - "SELECT pg_catalog.citus_internal_adjust_local_clock_to_remote" + "SELECT citus_internal.adjust_local_clock_to_remote" "('(%lu, %u)'::pg_catalog.cluster_clock);", transactionClockValue->logical, transactionClockValue->counter); diff --git a/src/backend/distributed/commands/database.c b/src/backend/distributed/commands/database.c index 0eb87ec19..55cd9e130 100644 --- a/src/backend/distributed/commands/database.c +++ b/src/backend/distributed/commands/database.c @@ -890,7 +890,7 @@ CreateDatabaseDDLCommand(Oid dbId) /* Generate the CREATE DATABASE statement */ appendStringInfo(outerDbStmt, - "SELECT pg_catalog.citus_internal_database_command(%s)", + "SELECT citus_internal.database_command(%s)", quote_literal_cstr(createStmt)); ReleaseSysCache(tuple); diff --git a/src/backend/distributed/commands/extension.c b/src/backend/distributed/commands/extension.c index 36267ff66..2ead0c58a 100644 --- a/src/backend/distributed/commands/extension.c +++ b/src/backend/distributed/commands/extension.c @@ -776,7 +776,7 @@ PreprocessCreateExtensionStmtForCitusColumnar(Node *parsetree) /*create extension citus version xxx*/ if (newVersionValue) { - char *newVersion = strdup(defGetString(newVersionValue)); + char *newVersion = pstrdup(defGetString(newVersionValue)); versionNumber = GetExtensionVersionNumber(newVersion); } @@ -796,7 +796,7 @@ PreprocessCreateExtensionStmtForCitusColumnar(Node *parsetree) Oid citusOid = get_extension_oid("citus", true); if (citusOid != InvalidOid) { - char *curCitusVersion = strdup(get_extension_version(citusOid)); + char *curCitusVersion = pstrdup(get_extension_version(citusOid)); int curCitusVersionNum = GetExtensionVersionNumber(curCitusVersion); if (curCitusVersionNum < 1110) { @@ -891,7 +891,7 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree) if (newVersionValue) { char *newVersion = defGetString(newVersionValue); - double newVersionNumber = GetExtensionVersionNumber(strdup(newVersion)); + double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion)); /*alter extension citus update to version >= 11.1-1, and no citus_columnar installed */ if (newVersionNumber >= 1110 && citusColumnarOid == InvalidOid) @@ -935,7 +935,7 @@ PostprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree) if (newVersionValue) { char *newVersion = defGetString(newVersionValue); - 
double newVersionNumber = GetExtensionVersionNumber(strdup(newVersion)); + double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion)); if (newVersionNumber >= 1110 && citusColumnarOid != InvalidOid) { /*upgrade citus, after "ALTER EXTENSION citus update to xxx" updates citus_columnar Y to version Z. */ diff --git a/src/backend/distributed/commands/utility_hook.c b/src/backend/distributed/commands/utility_hook.c index c2155383a..68af4b7b5 100644 --- a/src/backend/distributed/commands/utility_hook.c +++ b/src/backend/distributed/commands/utility_hook.c @@ -92,7 +92,7 @@ #define START_MANAGEMENT_TRANSACTION \ "SELECT citus_internal.start_management_transaction('%lu')" #define MARK_OBJECT_DISTRIBUTED \ - "SELECT citus_internal.mark_object_distributed(%d, %s, %d)" + "SELECT citus_internal.mark_object_distributed(%d, %s, %d, %s)" bool EnableDDLPropagation = true; /* ddl propagation is enabled */ @@ -1636,7 +1636,8 @@ RunPostprocessMainDBCommand(Node *parsetree) MARK_OBJECT_DISTRIBUTED, AuthIdRelationId, quote_literal_cstr(createRoleStmt->role), - roleOid); + roleOid, + quote_literal_cstr(CurrentUserName())); RunCitusMainDBQuery(mainDBQuery->data); } } diff --git a/src/backend/distributed/connection/connection_configuration.c b/src/backend/distributed/connection/connection_configuration.c index 57069f698..ac82d4e09 100644 --- a/src/backend/distributed/connection/connection_configuration.c +++ b/src/backend/distributed/connection/connection_configuration.c @@ -123,6 +123,10 @@ AddConnParam(const char *keyword, const char *value) errmsg("ConnParams arrays bound check failed"))); } + /* + * Don't use pstrdup here to avoid being tied to a memory context, we free + * these later using ResetConnParams + */ ConnParams.keywords[ConnParams.size] = strdup(keyword); ConnParams.values[ConnParams.size] = strdup(value); ConnParams.size++; @@ -441,7 +445,7 @@ GetEffectiveConnKey(ConnectionHashKey *key) if (!IsTransactionState()) { /* we're in the task tracker, so should only see loopback */ - Assert(strncmp(LOCAL_HOST_NAME, key->hostname, MAX_NODE_LENGTH) == 0 && + Assert(strncmp(LocalHostName, key->hostname, MAX_NODE_LENGTH) == 0 && PostPortNumber == key->port); return key; } @@ -517,9 +521,23 @@ char * GetAuthinfo(char *hostname, int32 port, char *user) { char *authinfo = NULL; - bool isLoopback = (strncmp(LOCAL_HOST_NAME, hostname, MAX_NODE_LENGTH) == 0 && + bool isLoopback = (strncmp(LocalHostName, hostname, MAX_NODE_LENGTH) == 0 && PostPortNumber == port); + /* + * Citus will not be loaded when we run a global DDL command from a + * Citus non-main database. + */ + if (!CitusHasBeenLoaded()) + { + /* + * We don't expect non-main databases to connect to a node other than + * the local one. 
+ */ + Assert(isLoopback); + return ""; + } + if (IsTransactionState()) { int64 nodeId = WILDCARD_NODE_ID; diff --git a/src/backend/distributed/connection/remote_commands.c b/src/backend/distributed/connection/remote_commands.c index f694ff390..4b46e96d2 100644 --- a/src/backend/distributed/connection/remote_commands.c +++ b/src/backend/distributed/connection/remote_commands.c @@ -246,6 +246,7 @@ ClearResultsIfReady(MultiConnection *connection) void ReportConnectionError(MultiConnection *connection, int elevel) { + char *userName = connection->user; char *nodeName = connection->hostname; int nodePort = connection->port; PGconn *pgConn = connection->pgConn; @@ -264,15 +265,15 @@ ReportConnectionError(MultiConnection *connection, int elevel) if (messageDetail) { ereport(elevel, (errcode(ERRCODE_CONNECTION_FAILURE), - errmsg("connection to the remote node %s:%d failed with the " - "following error: %s", nodeName, nodePort, + errmsg("connection to the remote node %s@%s:%d failed with the " + "following error: %s", userName, nodeName, nodePort, messageDetail))); } else { ereport(elevel, (errcode(ERRCODE_CONNECTION_FAILURE), - errmsg("connection to the remote node %s:%d failed", - nodeName, nodePort))); + errmsg("connection to the remote node %s@%s:%d failed", + userName, nodeName, nodePort))); } } diff --git a/src/backend/distributed/deparser/ruleutils_14.c b/src/backend/distributed/deparser/ruleutils_14.c index 01b74eab1..88948cff5 100644 --- a/src/backend/distributed/deparser/ruleutils_14.c +++ b/src/backend/distributed/deparser/ruleutils_14.c @@ -1526,8 +1526,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte, /* Assert we processed the right number of columns */ #ifdef USE_ASSERT_CHECKING - while (i < colinfo->num_cols && colinfo->colnames[i] == NULL) - i++; + for (int col_index = 0; col_index < colinfo->num_cols; col_index++) + { + /* + * In the above processing-loops, "i" advances only if + * the column is not new, check if this is a new column. + */ + if (colinfo->is_new_col[col_index]) + i++; + } Assert(i == colinfo->num_cols); Assert(j == nnewcolumns); #endif diff --git a/src/backend/distributed/deparser/ruleutils_16.c b/src/backend/distributed/deparser/ruleutils_16.c index 10373e487..7f2a41d75 100644 --- a/src/backend/distributed/deparser/ruleutils_16.c +++ b/src/backend/distributed/deparser/ruleutils_16.c @@ -1580,8 +1580,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte, /* Assert we processed the right number of columns */ #ifdef USE_ASSERT_CHECKING - while (i < colinfo->num_cols && colinfo->colnames[i] == NULL) - i++; + for (int col_index = 0; col_index < colinfo->num_cols; col_index++) + { + /* + * In the above processing-loops, "i" advances only if + * the column is not new, check if this is a new column. 
+ */ + if (colinfo->is_new_col[col_index]) + i++; + } Assert(i == colinfo->num_cols); Assert(j == nnewcolumns); #endif diff --git a/src/backend/distributed/executor/adaptive_executor.c b/src/backend/distributed/executor/adaptive_executor.c index 1b0277f2e..e912f418d 100644 --- a/src/backend/distributed/executor/adaptive_executor.c +++ b/src/backend/distributed/executor/adaptive_executor.c @@ -727,6 +727,11 @@ static uint64 MicrosecondsBetweenTimestamps(instr_time startTime, instr_time end static int WorkerPoolCompare(const void *lhsKey, const void *rhsKey); static void SetAttributeInputMetadata(DistributedExecution *execution, ShardCommandExecution *shardCommandExecution); +static ExecutionParams * CreateDefaultExecutionParams(RowModifyLevel modLevel, + List *taskList, + TupleDestination *tupleDest, + bool expectResults, + ParamListInfo paramListInfo); /* @@ -1013,14 +1018,14 @@ ExecuteTaskListOutsideTransaction(RowModifyLevel modLevel, List *taskList, /* - * ExecuteTaskListIntoTupleDestWithParam is a proxy to ExecuteTaskListExtended() which uses - * bind params from executor state, and with defaults for some of the arguments. + * CreateDefaultExecutionParams returns execution params based on given (possibly null) + * bind params (presumably from executor state) with defaults for some of the arguments. */ -uint64 -ExecuteTaskListIntoTupleDestWithParam(RowModifyLevel modLevel, List *taskList, - TupleDestination *tupleDest, - bool expectResults, - ParamListInfo paramListInfo) +static ExecutionParams * +CreateDefaultExecutionParams(RowModifyLevel modLevel, List *taskList, + TupleDestination *tupleDest, + bool expectResults, + ParamListInfo paramListInfo) { int targetPoolSize = MaxAdaptiveExecutorPoolSize; bool localExecutionSupported = true; @@ -1034,6 +1039,24 @@ ExecuteTaskListIntoTupleDestWithParam(RowModifyLevel modLevel, List *taskList, executionParams->tupleDestination = tupleDest; executionParams->paramListInfo = paramListInfo; + return executionParams; +} + + +/* + * ExecuteTaskListIntoTupleDestWithParam is a proxy to ExecuteTaskListExtended() which uses + * bind params from executor state, and with defaults for some of the arguments. 
+ */ +uint64 +ExecuteTaskListIntoTupleDestWithParam(RowModifyLevel modLevel, List *taskList, + TupleDestination *tupleDest, + bool expectResults, + ParamListInfo paramListInfo) +{ + ExecutionParams *executionParams = CreateDefaultExecutionParams(modLevel, taskList, + tupleDest, + expectResults, + paramListInfo); return ExecuteTaskListExtended(executionParams); } @@ -1047,17 +1070,11 @@ ExecuteTaskListIntoTupleDest(RowModifyLevel modLevel, List *taskList, TupleDestination *tupleDest, bool expectResults) { - int targetPoolSize = MaxAdaptiveExecutorPoolSize; - bool localExecutionSupported = true; - ExecutionParams *executionParams = CreateBasicExecutionParams( - modLevel, taskList, targetPoolSize, localExecutionSupported - ); - - executionParams->xactProperties = DecideTransactionPropertiesForTaskList( - modLevel, taskList, false); - executionParams->expectResults = expectResults; - executionParams->tupleDestination = tupleDest; - + ParamListInfo paramListInfo = NULL; + ExecutionParams *executionParams = CreateDefaultExecutionParams(modLevel, taskList, + tupleDest, + expectResults, + paramListInfo); return ExecuteTaskListExtended(executionParams); } diff --git a/src/backend/distributed/metadata/distobject.c b/src/backend/distributed/metadata/distobject.c index 1d07be8c3..007d07bdc 100644 --- a/src/backend/distributed/metadata/distobject.c +++ b/src/backend/distributed/metadata/distobject.c @@ -67,7 +67,8 @@ PG_FUNCTION_INFO_V1(master_unmark_object_distributed); /* * mark_object_distributed adds an object to pg_dist_object - * in all of the nodes. + * in all of the nodes, for the connections to the other nodes this function + * uses the user passed. */ Datum mark_object_distributed(PG_FUNCTION_ARGS) @@ -81,6 +82,8 @@ mark_object_distributed(PG_FUNCTION_ARGS) Oid objectId = PG_GETARG_OID(2); ObjectAddress *objectAddress = palloc0(sizeof(ObjectAddress)); ObjectAddressSet(*objectAddress, classId, objectId); + text *connectionUserText = PG_GETARG_TEXT_P(3); + char *connectionUser = text_to_cstring(connectionUserText); /* * This function is called when a query is run from a Citus non-main database. @@ -88,7 +91,8 @@ mark_object_distributed(PG_FUNCTION_ARGS) * 2PC still works. 
*/ bool useConnectionForLocalQuery = true; - MarkObjectDistributedWithName(objectAddress, objectName, useConnectionForLocalQuery); + MarkObjectDistributedWithName(objectAddress, objectName, useConnectionForLocalQuery, + connectionUser); PG_RETURN_VOID(); } @@ -193,7 +197,8 @@ void MarkObjectDistributed(const ObjectAddress *distAddress) { bool useConnectionForLocalQuery = false; - MarkObjectDistributedWithName(distAddress, "", useConnectionForLocalQuery); + MarkObjectDistributedWithName(distAddress, "", useConnectionForLocalQuery, + CurrentUserName()); } @@ -204,7 +209,7 @@ MarkObjectDistributed(const ObjectAddress *distAddress) */ void MarkObjectDistributedWithName(const ObjectAddress *distAddress, char *objectName, - bool useConnectionForLocalQuery) + bool useConnectionForLocalQuery, char *connectionUser) { if (!CitusHasBeenLoaded()) { @@ -234,7 +239,8 @@ MarkObjectDistributedWithName(const ObjectAddress *distAddress, char *objectName { char *workerPgDistObjectUpdateCommand = CreatePgDistObjectEntryCommand(distAddress, objectName); - SendCommandToRemoteNodesWithMetadata(workerPgDistObjectUpdateCommand); + SendCommandToRemoteMetadataNodesParams(workerPgDistObjectUpdateCommand, + connectionUser, 0, NULL, NULL); } } diff --git a/src/backend/distributed/metadata/metadata_cache.c b/src/backend/distributed/metadata/metadata_cache.c index 555365e68..402dedb8a 100644 --- a/src/backend/distributed/metadata/metadata_cache.c +++ b/src/backend/distributed/metadata/metadata_cache.c @@ -5723,14 +5723,6 @@ GetPoolinfoViaCatalog(int32 nodeId) char * GetAuthinfoViaCatalog(const char *roleName, int64 nodeId) { - /* - * Citus will not be loaded when we run a global DDL command from a - * Citus non-main database. - */ - if (!CitusHasBeenLoaded()) - { - return ""; - } char *authinfo = ""; Datum nodeIdDatumArray[2] = { Int32GetDatum(nodeId), diff --git a/src/backend/distributed/metadata/metadata_sync.c b/src/backend/distributed/metadata/metadata_sync.c index dd4c81bc3..29569749c 100644 --- a/src/backend/distributed/metadata/metadata_sync.c +++ b/src/backend/distributed/metadata/metadata_sync.c @@ -994,7 +994,7 @@ MarkObjectsDistributedCreateCommand(List *addresses, appendStringInfo(insertDistributedObjectsCommand, ") "); appendStringInfo(insertDistributedObjectsCommand, - "SELECT citus_internal_add_object_metadata(" + "SELECT citus_internal.add_object_metadata(" "typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) " "FROM distributed_object_data;"); @@ -1129,7 +1129,7 @@ DistributionCreateCommand(CitusTableCacheEntry *cacheEntry) } appendStringInfo(insertDistributionCommand, - "SELECT citus_internal_add_partition_metadata " + "SELECT citus_internal.add_partition_metadata " "(%s::regclass, '%c', %s, %d, '%c')", quote_literal_cstr(qualifiedRelationName), distributionMethod, @@ -1171,7 +1171,7 @@ DistributionDeleteMetadataCommand(Oid relationId) char *qualifiedRelationName = generate_qualified_relation_name(relationId); appendStringInfo(deleteCommand, - "SELECT pg_catalog.citus_internal_delete_partition_metadata(%s)", + "SELECT citus_internal.delete_partition_metadata(%s)", quote_literal_cstr(qualifiedRelationName)); return deleteCommand->data; @@ -1254,7 +1254,7 @@ ShardListInsertCommand(List *shardIntervalList) appendStringInfo(insertPlacementCommand, ") "); appendStringInfo(insertPlacementCommand, - "SELECT citus_internal_add_placement_metadata(" + "SELECT citus_internal.add_placement_metadata(" "shardid, shardlength, groupid, placementid) " "FROM placement_data;"); 
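For context on the metadata-sync hunks above: after this rename, the command text that `ShardListInsertCommand()` builds calls functions in the `citus_internal` schema instead of the old `pg_catalog.citus_internal_*` names. A minimal sketch of such a generated placement-insert statement, with made-up shard and placement values, might look like the following (the `WITH ... VALUES` shape is inferred from the `FROM placement_data` reference in the code above):

```sql
-- hypothetical values; the real command batches one row per shard placement
WITH placement_data(shardid, shardlength, groupid, placementid) AS
    (VALUES (102008::bigint, 0::bigint, 1, 201008::bigint))
SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid)
FROM placement_data;
```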
@@ -1310,7 +1310,7 @@ ShardListInsertCommand(List *shardIntervalList) appendStringInfo(insertShardCommand, ") "); appendStringInfo(insertShardCommand, - "SELECT citus_internal_add_shard_metadata(relationname, shardid, " + "SELECT citus_internal.add_shard_metadata(relationname, shardid, " "storagetype, shardminvalue, shardmaxvalue) " "FROM shard_data;"); @@ -1349,7 +1349,7 @@ ShardDeleteCommandList(ShardInterval *shardInterval) StringInfo deleteShardCommand = makeStringInfo(); appendStringInfo(deleteShardCommand, - "SELECT citus_internal_delete_shard_metadata(%ld);", shardId); + "SELECT citus_internal.delete_shard_metadata(%ld);", shardId); return list_make1(deleteShardCommand->data); } @@ -4099,7 +4099,7 @@ citus_internal_database_command(PG_FUNCTION_ARGS) } else { - ereport(ERROR, (errmsg("citus_internal_database_command() can only be used " + ereport(ERROR, (errmsg("citus_internal.database_command() can only be used " "for CREATE DATABASE command by Citus."))); } @@ -4140,7 +4140,7 @@ ColocationGroupCreateCommand(uint32 colocationId, int shardCount, int replicatio StringInfo insertColocationCommand = makeStringInfo(); appendStringInfo(insertColocationCommand, - "SELECT pg_catalog.citus_internal_add_colocation_metadata(" + "SELECT citus_internal.add_colocation_metadata(" "%d, %d, %d, %s, %s)", colocationId, shardCount, @@ -4252,7 +4252,7 @@ ColocationGroupDeleteCommand(uint32 colocationId) StringInfo deleteColocationCommand = makeStringInfo(); appendStringInfo(deleteColocationCommand, - "SELECT pg_catalog.citus_internal_delete_colocation_metadata(%d)", + "SELECT citus_internal.delete_colocation_metadata(%d)", colocationId); return deleteColocationCommand->data; @@ -4268,7 +4268,7 @@ TenantSchemaInsertCommand(Oid schemaId, uint32 colocationId) { StringInfo command = makeStringInfo(); appendStringInfo(command, - "SELECT pg_catalog.citus_internal_add_tenant_schema(%s, %u)", + "SELECT citus_internal.add_tenant_schema(%s, %u)", RemoteSchemaIdExpressionById(schemaId), colocationId); return command->data; @@ -4284,7 +4284,7 @@ TenantSchemaDeleteCommand(char *schemaName) { StringInfo command = makeStringInfo(); appendStringInfo(command, - "SELECT pg_catalog.citus_internal_delete_tenant_schema(%s)", + "SELECT citus_internal.delete_tenant_schema(%s)", RemoteSchemaIdExpressionByName(schemaName)); return command->data; @@ -4319,7 +4319,7 @@ AddPlacementMetadataCommand(uint64 shardId, uint64 placementId, { StringInfo command = makeStringInfo(); appendStringInfo(command, - "SELECT citus_internal_add_placement_metadata(%ld, %ld, %d, %ld)", + "SELECT citus_internal.add_placement_metadata(%ld, %ld, %d, %ld)", shardId, shardLength, groupId, placementId); return command->data; } @@ -4334,7 +4334,7 @@ DeletePlacementMetadataCommand(uint64 placementId) { StringInfo command = makeStringInfo(); appendStringInfo(command, - "SELECT pg_catalog.citus_internal_delete_placement_metadata(%ld)", + "SELECT citus_internal.delete_placement_metadata(%ld)", placementId); return command->data; } @@ -4953,7 +4953,7 @@ SendColocationMetadataCommands(MetadataSyncContext *context) } appendStringInfo(colocationGroupCreateCommand, - ") SELECT pg_catalog.citus_internal_add_colocation_metadata(" + ") SELECT citus_internal.add_colocation_metadata(" "colocationid, shardcount, replicationfactor, " "distributioncolumntype, coalesce(c.oid, 0)) " "FROM colocation_group_data d LEFT JOIN pg_collation c " @@ -5004,7 +5004,7 @@ SendTenantSchemaMetadataCommands(MetadataSyncContext *context) StringInfo insertTenantSchemaCommand = 
makeStringInfo(); appendStringInfo(insertTenantSchemaCommand, - "SELECT pg_catalog.citus_internal_add_tenant_schema(%s, %u)", + "SELECT citus_internal.add_tenant_schema(%s, %u)", RemoteSchemaIdExpressionById(tenantSchemaForm->schemaid), tenantSchemaForm->colocationid); diff --git a/src/backend/distributed/operations/shard_split.c b/src/backend/distributed/operations/shard_split.c index cf9f301b7..ac7ed6bf3 100644 --- a/src/backend/distributed/operations/shard_split.c +++ b/src/backend/distributed/operations/shard_split.c @@ -1314,7 +1314,7 @@ DropShardListMetadata(List *shardIntervalList) { ListCell *commandCell = NULL; - /* send the commands one by one (calls citus_internal_delete_shard_metadata internally) */ + /* send the commands one by one (calls citus_internal.delete_shard_metadata internally) */ List *shardMetadataDeleteCommandList = ShardDeleteCommandList(shardInterval); foreach(commandCell, shardMetadataDeleteCommandList) { diff --git a/src/backend/distributed/planner/query_colocation_checker.c b/src/backend/distributed/planner/query_colocation_checker.c index 827a0286c..bef91618e 100644 --- a/src/backend/distributed/planner/query_colocation_checker.c +++ b/src/backend/distributed/planner/query_colocation_checker.c @@ -433,7 +433,7 @@ CreateTargetEntryForColumn(Form_pg_attribute attributeTuple, Index rteIndex, attributeTuple->atttypmod, attributeTuple->attcollation, 0); TargetEntry *targetEntry = makeTargetEntry((Expr *) targetColumn, resno, - strdup(attributeTuple->attname.data), false); + pstrdup(attributeTuple->attname.data), false); return targetEntry; } @@ -449,7 +449,7 @@ CreateTargetEntryForNullCol(Form_pg_attribute attributeTuple, int resno) attributeTuple->attcollation); char *resName = attributeTuple->attname.data; TargetEntry *targetEntry = - makeTargetEntry(nullExpr, resno, strdup(resName), false); + makeTargetEntry(nullExpr, resno, pstrdup(resName), false); return targetEntry; } diff --git a/src/backend/distributed/planner/recursive_planning.c b/src/backend/distributed/planner/recursive_planning.c index 6c42046ff..9f520fa5f 100644 --- a/src/backend/distributed/planner/recursive_planning.c +++ b/src/backend/distributed/planner/recursive_planning.c @@ -1097,8 +1097,8 @@ RecursivelyPlanCTEs(Query *query, RecursivePlanningContext *planningContext) if (query->hasRecursive) { return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED, - "recursive CTEs are not supported in distributed " - "queries", + "recursive CTEs are only supported when they " + "contain a filter on the distribution column", NULL, NULL); } diff --git a/src/backend/distributed/serialize_distributed_ddls.c b/src/backend/distributed/serialize_distributed_ddls.c index fa8d5ebc2..11d10905b 100644 --- a/src/backend/distributed/serialize_distributed_ddls.c +++ b/src/backend/distributed/serialize_distributed_ddls.c @@ -170,7 +170,7 @@ SerializeDistributedDDLsOnObjectClassInternal(ObjectClass objectClass, /* * AcquireCitusAdvisoryObjectClassLockCommand returns a command to call - * pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(). + * citus_internal.acquire_citus_advisory_object_class_lock(). 
*/ static char * AcquireCitusAdvisoryObjectClassLockCommand(ObjectClass objectClass, @@ -185,7 +185,7 @@ AcquireCitusAdvisoryObjectClassLockCommand(ObjectClass objectClass, StringInfo command = makeStringInfo(); appendStringInfo(command, - "SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(%d, %s)", + "SELECT citus_internal.acquire_citus_advisory_object_class_lock(%d, %s)", objectClassInt, quotedObjectName); return command->data; diff --git a/src/backend/distributed/sql/citus--12.1-1--12.2-1.sql b/src/backend/distributed/sql/citus--12.1-1--12.2-1.sql index 72ef46e6f..0042fdaa1 100644 --- a/src/backend/distributed/sql/citus--12.1-1--12.2-1.sql +++ b/src/backend/distributed/sql/citus--12.1-1--12.2-1.sql @@ -12,3 +12,29 @@ ALTER TABLE pg_catalog.pg_dist_transaction ADD COLUMN outer_xid xid8; #include "udfs/citus_internal_acquire_citus_advisory_object_class_lock/12.2-1.sql" + +GRANT USAGE ON SCHEMA citus_internal TO PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.commit_management_command_2pc FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.execute_command_on_remote_nodes_as_user FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.find_groupid_for_node FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.mark_object_distributed FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.pg_dist_node_trigger_func FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.pg_dist_rebalance_strategy_trigger_func FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.pg_dist_shard_placement_trigger_func FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.refresh_isolation_tester_prepared_statement FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.replace_isolation_tester_func FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.restore_isolation_tester_func FROM PUBLIC; +REVOKE ALL ON FUNCTION citus_internal.start_management_transaction FROM PUBLIC; + +#include "udfs/citus_internal_add_colocation_metadata/12.2-1.sql" +#include "udfs/citus_internal_add_object_metadata/12.2-1.sql" +#include "udfs/citus_internal_add_partition_metadata/12.2-1.sql" +#include "udfs/citus_internal_add_placement_metadata/12.2-1.sql" +#include "udfs/citus_internal_add_shard_metadata/12.2-1.sql" +#include "udfs/citus_internal_add_tenant_schema/12.2-1.sql" +#include "udfs/citus_internal_adjust_local_clock_to_remote/12.2-1.sql" +#include "udfs/citus_internal_delete_colocation_metadata/12.2-1.sql" +#include "udfs/citus_internal_delete_partition_metadata/12.2-1.sql" +#include "udfs/citus_internal_delete_placement_metadata/12.2-1.sql" +#include "udfs/citus_internal_delete_shard_metadata/12.2-1.sql" +#include "udfs/citus_internal_delete_tenant_schema/12.2-1.sql" diff --git a/src/backend/distributed/sql/downgrades/citus--12.2-1--12.1-1.sql b/src/backend/distributed/sql/downgrades/citus--12.2-1--12.1-1.sql index f889a0095..337e93b98 100644 --- a/src/backend/distributed/sql/downgrades/citus--12.2-1--12.1-1.sql +++ b/src/backend/distributed/sql/downgrades/citus--12.2-1--12.1-1.sql @@ -1,7 +1,7 @@ -- citus--12.2-1--12.1-1 -DROP FUNCTION pg_catalog.citus_internal_database_command(text); -DROP FUNCTION pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(int, cstring); +DROP FUNCTION citus_internal.database_command(text); +DROP FUNCTION citus_internal.acquire_citus_advisory_object_class_lock(int, cstring); #include "../udfs/citus_add_rebalance_strategy/10.1-1.sql" @@ -15,9 +15,23 @@ DROP FUNCTION citus_internal.execute_command_on_remote_nodes_as_user( ); DROP FUNCTION citus_internal.mark_object_distributed( - 
classId Oid, objectName text, objectId Oid + classId Oid, objectName text, objectId Oid, connectionUser text ); DROP FUNCTION citus_internal.commit_management_command_2pc(); ALTER TABLE pg_catalog.pg_dist_transaction DROP COLUMN outer_xid; +REVOKE USAGE ON SCHEMA citus_internal FROM PUBLIC; + +DROP FUNCTION citus_internal.add_colocation_metadata(int, int, int, regtype, oid); +DROP FUNCTION citus_internal.add_object_metadata(text, text[], text[], integer, integer, boolean); +DROP FUNCTION citus_internal.add_partition_metadata(regclass, "char", text, integer, "char"); +DROP FUNCTION citus_internal.add_placement_metadata(bigint, bigint, integer, bigint); +DROP FUNCTION citus_internal.add_shard_metadata(regclass, bigint, "char", text, text); +DROP FUNCTION citus_internal.add_tenant_schema(oid, integer); +DROP FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock); +DROP FUNCTION citus_internal.delete_colocation_metadata(int); +DROP FUNCTION citus_internal.delete_partition_metadata(regclass); +DROP FUNCTION citus_internal.delete_placement_metadata(bigint); +DROP FUNCTION citus_internal.delete_shard_metadata(bigint); +DROP FUNCTION citus_internal.delete_tenant_schema(oid); diff --git a/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/12.2-1.sql index 8dc6f940e..9f32b67d4 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/12.2-1.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/12.2-1.sql @@ -1,4 +1,4 @@ -CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring) +CREATE OR REPLACE FUNCTION citus_internal.acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring) RETURNS void LANGUAGE C VOLATILE diff --git a/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/latest.sql index 8dc6f940e..9f32b67d4 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_acquire_citus_advisory_object_class_lock/latest.sql @@ -1,4 +1,4 @@ -CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring) +CREATE OR REPLACE FUNCTION citus_internal.acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring) RETURNS void LANGUAGE C VOLATILE diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/12.2-1.sql new file mode 100644 index 000000000..e05444801 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/12.2-1.sql @@ -0,0 +1,27 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_colocation_metadata( + colocation_id int, + shard_count int, + replication_factor int, + distribution_column_type regtype, + distribution_column_collation oid) + RETURNS void + LANGUAGE C + STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_colocation_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_colocation_metadata(int,int,int,regtype,oid) IS + 'Inserts a co-location group 
into pg_dist_colocation'; + +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_colocation_metadata( + colocation_id int, + shard_count int, + replication_factor int, + distribution_column_type regtype, + distribution_column_collation oid) + RETURNS void + LANGUAGE C + STRICT + AS 'MODULE_PATHNAME'; + +COMMENT ON FUNCTION pg_catalog.citus_internal_add_colocation_metadata(int,int,int,regtype,oid) IS + 'Inserts a co-location group into pg_dist_colocation'; diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/latest.sql index 823f45569..e05444801 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/latest.sql @@ -1,3 +1,17 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_colocation_metadata( + colocation_id int, + shard_count int, + replication_factor int, + distribution_column_type regtype, + distribution_column_collation oid) + RETURNS void + LANGUAGE C + STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_colocation_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_colocation_metadata(int,int,int,regtype,oid) IS + 'Inserts a co-location group into pg_dist_colocation'; + CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_colocation_metadata( colocation_id int, shard_count int, diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/12.2-1.sql new file mode 100644 index 000000000..560bce690 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/12.2-1.sql @@ -0,0 +1,29 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_object_metadata( + typeText text, + objNames text[], + objArgs text[], + distribution_argument_index int, + colocationid int, + force_delegation bool) + RETURNS void + LANGUAGE C + STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_object_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_object_metadata(text,text[],text[],int,int,bool) IS + 'Inserts distributed object into pg_dist_object'; + +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_object_metadata( + typeText text, + objNames text[], + objArgs text[], + distribution_argument_index int, + colocationid int, + force_delegation bool) + RETURNS void + LANGUAGE C + STRICT + AS 'MODULE_PATHNAME'; + +COMMENT ON FUNCTION pg_catalog.citus_internal_add_object_metadata(text,text[],text[],int,int,bool) IS + 'Inserts distributed object into pg_dist_object'; diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/latest.sql index d35198f90..560bce690 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/latest.sql @@ -1,3 +1,18 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_object_metadata( + typeText text, + objNames text[], + objArgs text[], + distribution_argument_index int, + colocationid int, + force_delegation bool) + RETURNS void + LANGUAGE C + STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_object_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_object_metadata(text,text[],text[],int,int,bool) IS + 'Inserts distributed object into pg_dist_object'; + CREATE OR REPLACE FUNCTION 
pg_catalog.citus_internal_add_object_metadata( typeText text, objNames text[], diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/12.2-1.sql new file mode 100644 index 000000000..511a5b1c1 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/12.2-1.sql @@ -0,0 +1,22 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_partition_metadata( + relation_id regclass, distribution_method "char", + distribution_column text, colocation_id integer, + replication_model "char") + RETURNS void + LANGUAGE C + AS 'MODULE_PATHNAME', $$citus_internal_add_partition_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_partition_metadata(regclass, "char", text, integer, "char") IS + 'Inserts into pg_dist_partition with user checks'; + + +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_partition_metadata( + relation_id regclass, distribution_method "char", + distribution_column text, colocation_id integer, + replication_model "char") + RETURNS void + LANGUAGE C + AS 'MODULE_PATHNAME'; + +COMMENT ON FUNCTION pg_catalog.citus_internal_add_partition_metadata(regclass, "char", text, integer, "char") IS + 'Inserts into pg_dist_partition with user checks'; diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/latest.sql index ed4f853a6..511a5b1c1 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/latest.sql @@ -1,3 +1,15 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_partition_metadata( + relation_id regclass, distribution_method "char", + distribution_column text, colocation_id integer, + replication_model "char") + RETURNS void + LANGUAGE C + AS 'MODULE_PATHNAME', $$citus_internal_add_partition_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_partition_metadata(regclass, "char", text, integer, "char") IS + 'Inserts into pg_dist_partition with user checks'; + + CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_partition_metadata( relation_id regclass, distribution_method "char", distribution_column text, colocation_id integer, diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/12.2-1.sql new file mode 100644 index 000000000..339fc2948 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/12.2-1.sql @@ -0,0 +1,36 @@ +-- create a new function, without shardstate +CREATE OR REPLACE FUNCTION citus_internal.add_placement_metadata( + shard_id bigint, + shard_length bigint, group_id integer, + placement_id bigint) + RETURNS void + LANGUAGE C STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_placement_metadata(bigint, bigint, integer, bigint) IS + 'Inserts into pg_dist_shard_placement with user checks'; + +-- create a new function, without shardstate +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_placement_metadata( + shard_id bigint, + shard_length bigint, group_id integer, + placement_id bigint) + RETURNS void + LANGUAGE C STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata$$; + +COMMENT ON FUNCTION pg_catalog.citus_internal_add_placement_metadata(bigint, bigint, integer, 
bigint) IS + 'Inserts into pg_dist_shard_placement with user checks'; + +-- replace the old one so it would call the old C function with shard_state +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_placement_metadata( + shard_id bigint, shard_state integer, + shard_length bigint, group_id integer, + placement_id bigint) + RETURNS void + LANGUAGE C STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata_legacy$$; + +COMMENT ON FUNCTION pg_catalog.citus_internal_add_placement_metadata(bigint, integer, bigint, integer, bigint) IS + 'Inserts into pg_dist_shard_placement with user checks'; + diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/latest.sql index 9d1dd4ffa..339fc2948 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/latest.sql @@ -1,3 +1,15 @@ +-- create a new function, without shardstate +CREATE OR REPLACE FUNCTION citus_internal.add_placement_metadata( + shard_id bigint, + shard_length bigint, group_id integer, + placement_id bigint) + RETURNS void + LANGUAGE C STRICT + AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata$$; + +COMMENT ON FUNCTION citus_internal.add_placement_metadata(bigint, bigint, integer, bigint) IS + 'Inserts into pg_dist_shard_placement with user checks'; + -- create a new function, without shardstate CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_placement_metadata( shard_id bigint, diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/12.2-1.sql new file mode 100644 index 000000000..82c29f054 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/12.2-1.sql @@ -0,0 +1,21 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_shard_metadata( + relation_id regclass, shard_id bigint, + storage_type "char", shard_min_value text, + shard_max_value text + ) + RETURNS void + LANGUAGE C + AS 'MODULE_PATHNAME', $$citus_internal_add_shard_metadata$$; +COMMENT ON FUNCTION citus_internal.add_shard_metadata(regclass, bigint, "char", text, text) IS + 'Inserts into pg_dist_shard with user checks'; + +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_shard_metadata( + relation_id regclass, shard_id bigint, + storage_type "char", shard_min_value text, + shard_max_value text + ) + RETURNS void + LANGUAGE C + AS 'MODULE_PATHNAME'; +COMMENT ON FUNCTION pg_catalog.citus_internal_add_shard_metadata(regclass, bigint, "char", text, text) IS + 'Inserts into pg_dist_shard with user checks'; diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/latest.sql index 7411d9179..82c29f054 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/latest.sql @@ -1,3 +1,14 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_shard_metadata( + relation_id regclass, shard_id bigint, + storage_type "char", shard_min_value text, + shard_max_value text + ) + RETURNS void + LANGUAGE C + AS 'MODULE_PATHNAME', $$citus_internal_add_shard_metadata$$; +COMMENT ON FUNCTION citus_internal.add_shard_metadata(regclass, bigint, "char", text, text) IS + 'Inserts into pg_dist_shard with user 
checks'; + CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_shard_metadata( relation_id regclass, shard_id bigint, storage_type "char", shard_min_value text, diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/12.2-1.sql new file mode 100644 index 000000000..028848f90 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/12.2-1.sql @@ -0,0 +1,17 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_tenant_schema(schema_id Oid, colocation_id int) + RETURNS void + LANGUAGE C + VOLATILE + AS 'MODULE_PATHNAME', $$citus_internal_add_tenant_schema$$; + +COMMENT ON FUNCTION citus_internal.add_tenant_schema(Oid, int) IS + 'insert given tenant schema into pg_dist_schema with given colocation id'; + +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_tenant_schema(schema_id Oid, colocation_id int) + RETURNS void + LANGUAGE C + VOLATILE + AS 'MODULE_PATHNAME'; + +COMMENT ON FUNCTION pg_catalog.citus_internal_add_tenant_schema(Oid, int) IS + 'insert given tenant schema into pg_dist_schema with given colocation id'; diff --git a/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/latest.sql index 56b3cae84..028848f90 100644 --- a/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/latest.sql @@ -1,3 +1,12 @@ +CREATE OR REPLACE FUNCTION citus_internal.add_tenant_schema(schema_id Oid, colocation_id int) + RETURNS void + LANGUAGE C + VOLATILE + AS 'MODULE_PATHNAME', $$citus_internal_add_tenant_schema$$; + +COMMENT ON FUNCTION citus_internal.add_tenant_schema(Oid, int) IS + 'insert given tenant schema into pg_dist_schema with given colocation id'; + CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_tenant_schema(schema_id Oid, colocation_id int) RETURNS void LANGUAGE C diff --git a/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/12.2-1.sql new file mode 100644 index 000000000..36d37a9e6 --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/12.2-1.sql @@ -0,0 +1,17 @@ +CREATE OR REPLACE FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock) + RETURNS void + LANGUAGE C STABLE PARALLEL SAFE STRICT + AS 'MODULE_PATHNAME', $$citus_internal_adjust_local_clock_to_remote$$; +COMMENT ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock) + IS 'Internal UDF used to adjust the local clock to the maximum of nodes in the cluster'; + +REVOKE ALL ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock) FROM PUBLIC; + +CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock) + RETURNS void + LANGUAGE C STABLE PARALLEL SAFE STRICT + AS 'MODULE_PATHNAME', $$citus_internal_adjust_local_clock_to_remote$$; +COMMENT ON FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock) + IS 'Internal UDF used to adjust the local clock to the maximum of nodes in the cluster'; + +REVOKE ALL ON FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock) FROM PUBLIC; diff --git 
a/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/latest.sql
index 240f7a9b7..36d37a9e6 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/latest.sql
@@ -1,3 +1,12 @@
+CREATE OR REPLACE FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    RETURNS void
+    LANGUAGE C STABLE PARALLEL SAFE STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_adjust_local_clock_to_remote$$;
+COMMENT ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    IS 'Internal UDF used to adjust the local clock to the maximum of nodes in the cluster';
+
+REVOKE ALL ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock) FROM PUBLIC;
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock)
     RETURNS void
     LANGUAGE C STABLE PARALLEL SAFE STRICT
diff --git a/src/backend/distributed/sql/udfs/citus_internal_database_command/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_database_command/12.2-1.sql
index 9f6d873cc..2c6e916c0 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_database_command/12.2-1.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_database_command/12.2-1.sql
@@ -1,10 +1,10 @@
 --
--- citus_internal_database_command run given database command without transaction block restriction.
+-- citus_internal.database_command run given database command without transaction block restriction.

-CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_database_command(command text)
+CREATE OR REPLACE FUNCTION citus_internal.database_command(command text)
     RETURNS void
     LANGUAGE C
     VOLATILE
     AS 'MODULE_PATHNAME', $$citus_internal_database_command$$;
-COMMENT ON FUNCTION pg_catalog.citus_internal_database_command(text) IS
+COMMENT ON FUNCTION citus_internal.database_command(text) IS
     'run a database command without transaction block restrictions';
diff --git a/src/backend/distributed/sql/udfs/citus_internal_database_command/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_database_command/latest.sql
index 9f6d873cc..2c6e916c0 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_database_command/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_database_command/latest.sql
@@ -1,10 +1,10 @@
 --
--- citus_internal_database_command run given database command without transaction block restriction.
+-- citus_internal.database_command run given database command without transaction block restriction.

-CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_database_command(command text)
+CREATE OR REPLACE FUNCTION citus_internal.database_command(command text)
     RETURNS void
     LANGUAGE C
     VOLATILE
     AS 'MODULE_PATHNAME', $$citus_internal_database_command$$;
-COMMENT ON FUNCTION pg_catalog.citus_internal_database_command(text) IS
+COMMENT ON FUNCTION citus_internal.database_command(text) IS
     'run a database command without transaction block restrictions';
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/12.2-1.sql
new file mode 100644
index 000000000..cb56a25cd
--- /dev/null
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/12.2-1.sql
@@ -0,0 +1,19 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_colocation_metadata(
+    colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_colocation_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.delete_colocation_metadata(int) IS
+    'deletes a co-location group from pg_dist_colocation';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_colocation_metadata(
+    colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_colocation_metadata(int) IS
+    'deletes a co-location group from pg_dist_colocation';
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/latest.sql
index d4c3f1be9..cb56a25cd 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/latest.sql
@@ -1,3 +1,13 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_colocation_metadata(
+    colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_colocation_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.delete_colocation_metadata(int) IS
+    'deletes a co-location group from pg_dist_colocation';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_colocation_metadata(
     colocation_id int)
     RETURNS void
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/12.2-1.sql
new file mode 100644
index 000000000..693815abf
--- /dev/null
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/12.2-1.sql
@@ -0,0 +1,14 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_partition_metadata(table_name regclass)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_partition_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_partition_metadata(regclass) IS
+    'Deletes a row from pg_dist_partition with table ownership checks';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_partition_metadata(table_name regclass)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME';
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_partition_metadata(regclass) IS
+    'Deletes a row from pg_dist_partition with table ownership checks';
+
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/latest.sql
index c7cb5455d..693815abf 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/latest.sql
@@ -1,3 +1,10 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_partition_metadata(table_name regclass)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_partition_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_partition_metadata(regclass) IS
+    'Deletes a row from pg_dist_partition with table ownership checks';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_partition_metadata(table_name regclass)
     RETURNS void
     LANGUAGE C STRICT
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/12.2-1.sql
new file mode 100644
index 000000000..f78c9a08e
--- /dev/null
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/12.2-1.sql
@@ -0,0 +1,19 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_placement_metadata(
+    placement_id bigint)
+RETURNS void
+LANGUAGE C
+VOLATILE
+AS 'MODULE_PATHNAME',
+$$citus_internal_delete_placement_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_placement_metadata(bigint)
+    IS 'Delete placement with given id from pg_dist_placement metadata table.';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_placement_metadata(
+    placement_id bigint)
+RETURNS void
+LANGUAGE C
+VOLATILE
+AS 'MODULE_PATHNAME',
+$$citus_internal_delete_placement_metadata$$;
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_placement_metadata(bigint)
+    IS 'Delete placement with given id from pg_dist_placement metadata table.';
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/latest.sql
index 5af65f0be..f78c9a08e 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/latest.sql
@@ -1,3 +1,13 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_placement_metadata(
+    placement_id bigint)
+RETURNS void
+LANGUAGE C
+VOLATILE
+AS 'MODULE_PATHNAME',
+$$citus_internal_delete_placement_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_placement_metadata(bigint)
+    IS 'Delete placement with given id from pg_dist_placement metadata table.';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_placement_metadata(
     placement_id bigint)
 RETURNS void
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/12.2-1.sql
new file mode 100644
index 000000000..bcd121b0d
--- /dev/null
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/12.2-1.sql
@@ -0,0 +1,14 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_shard_metadata(shard_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_shard_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_shard_metadata(bigint) IS
+    'Deletes rows from pg_dist_shard and pg_dist_shard_placement with user checks';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_shard_metadata(shard_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME';
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_shard_metadata(bigint) IS
+    'Deletes rows from pg_dist_shard and pg_dist_shard_placement with user checks';
+
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/latest.sql
index 7bfd86bdd..bcd121b0d 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/latest.sql
@@ -1,3 +1,10 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_shard_metadata(shard_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_shard_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_shard_metadata(bigint) IS
+    'Deletes rows from pg_dist_shard and pg_dist_shard_placement with user checks';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_shard_metadata(shard_id bigint)
     RETURNS void
     LANGUAGE C STRICT
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/12.2-1.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/12.2-1.sql
new file mode 100644
index 000000000..2c36108b4
--- /dev/null
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/12.2-1.sql
@@ -0,0 +1,17 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_tenant_schema(schema_id Oid)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_tenant_schema$$;
+
+COMMENT ON FUNCTION citus_internal.delete_tenant_schema(Oid) IS
+    'delete given tenant schema from pg_dist_schema';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_tenant_schema(schema_id Oid)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_tenant_schema(Oid) IS
+    'delete given tenant schema from pg_dist_schema';
diff --git a/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/latest.sql b/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/latest.sql
index 4a2bf0067..2c36108b4 100644
--- a/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/latest.sql
+++ b/src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/latest.sql
@@ -1,3 +1,12 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_tenant_schema(schema_id Oid)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_tenant_schema$$;
+
+COMMENT ON FUNCTION citus_internal.delete_tenant_schema(Oid) IS
+    'delete given tenant schema from pg_dist_schema';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_tenant_schema(schema_id Oid)
     RETURNS void
     LANGUAGE C
diff --git a/src/backend/distributed/sql/udfs/mark_object_distributed/12.2-1.sql b/src/backend/distributed/sql/udfs/mark_object_distributed/12.2-1.sql
index ee2c5e7e8..25d35c028 100644
--- a/src/backend/distributed/sql/udfs/mark_object_distributed/12.2-1.sql
+++ b/src/backend/distributed/sql/udfs/mark_object_distributed/12.2-1.sql
@@ -1,7 +1,7 @@
-CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid)
+CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text)
     RETURNS VOID
     LANGUAGE C
     AS 'MODULE_PATHNAME', $$mark_object_distributed$$;

-COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid)
+COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid,
connectionUser text) IS 'adds an object to pg_dist_object on all nodes'; diff --git a/src/backend/distributed/sql/udfs/mark_object_distributed/latest.sql b/src/backend/distributed/sql/udfs/mark_object_distributed/latest.sql index ee2c5e7e8..25d35c028 100644 --- a/src/backend/distributed/sql/udfs/mark_object_distributed/latest.sql +++ b/src/backend/distributed/sql/udfs/mark_object_distributed/latest.sql @@ -1,7 +1,7 @@ -CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid) +CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text) RETURNS VOID LANGUAGE C AS 'MODULE_PATHNAME', $$mark_object_distributed$$; -COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid) +COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text) IS 'adds an object to pg_dist_object on all nodes'; diff --git a/src/backend/distributed/test/metadata_sync.c b/src/backend/distributed/test/metadata_sync.c index d6aeb842c..dec20c772 100644 --- a/src/backend/distributed/test/metadata_sync.c +++ b/src/backend/distributed/test/metadata_sync.c @@ -125,7 +125,7 @@ wait_until_metadata_sync(PG_FUNCTION_ARGS) /* First we start listening. */ MultiConnection *connection = GetNodeConnection(FORCE_NEW_CONNECTION, - LOCAL_HOST_NAME, PostPortNumber); + LocalHostName, PostPortNumber); ExecuteCriticalRemoteCommand(connection, "LISTEN " METADATA_SYNC_CHANNEL); /* diff --git a/src/backend/distributed/test/run_from_same_connection.c b/src/backend/distributed/test/run_from_same_connection.c index 5663a42fa..52b2e0b18 100644 --- a/src/backend/distributed/test/run_from_same_connection.c +++ b/src/backend/distributed/test/run_from_same_connection.c @@ -155,7 +155,7 @@ run_commands_on_session_level_connection_to_node(PG_FUNCTION_ARGS) StringInfo processStringInfo = makeStringInfo(); StringInfo workerProcessStringInfo = makeStringInfo(); - MultiConnection *localConnection = GetNodeConnection(0, LOCAL_HOST_NAME, + MultiConnection *localConnection = GetNodeConnection(0, LocalHostName, PostPortNumber); if (!singleConnection) diff --git a/src/backend/distributed/transaction/worker_transaction.c b/src/backend/distributed/transaction/worker_transaction.c index 9c8563de0..c6fcee107 100644 --- a/src/backend/distributed/transaction/worker_transaction.c +++ b/src/backend/distributed/transaction/worker_transaction.c @@ -36,10 +36,6 @@ #include "distributed/worker_manager.h" #include "distributed/worker_transaction.h" -static void SendCommandToRemoteMetadataNodesParams(const char *command, - const char *user, int parameterCount, - const Oid *parameterTypes, - const char *const *parameterValues); static void SendBareCommandListToMetadataNodesInternal(List *commandList, TargetWorkerSet targetWorkerSet); static void SendCommandToMetadataWorkersParams(const char *command, @@ -209,7 +205,7 @@ SendCommandListToRemoteNodesWithMetadata(List *commands) * SendCommandToWorkersParamsInternal() that can be used to send commands * to remote metadata nodes. 
*/ -static void +void SendCommandToRemoteMetadataNodesParams(const char *command, const char *user, int parameterCount, const Oid *parameterTypes, diff --git a/src/include/distributed/connection_management.h b/src/include/distributed/connection_management.h index 9eadbde9d..d93e4483a 100644 --- a/src/include/distributed/connection_management.h +++ b/src/include/distributed/connection_management.h @@ -61,14 +61,6 @@ */ #define LOCAL_NODE_ID UINT32_MAX -/* - * If you want to connect to the current node use `LocalHostName`, which is a GUC, instead - * of the hardcoded loopback hostname. Only if you really need the loopback hostname use - * this define. - */ -#define LOCAL_HOST_NAME "localhost" - - /* forward declare, to avoid forcing large headers on everyone */ struct pg_conn; /* target of the PGconn typedef */ struct MemoryContextData; diff --git a/src/include/distributed/metadata/distobject.h b/src/include/distributed/metadata/distobject.h index 13f38178b..e98e6ee86 100644 --- a/src/include/distributed/metadata/distobject.h +++ b/src/include/distributed/metadata/distobject.h @@ -24,7 +24,8 @@ extern bool IsAnyObjectDistributed(const List *addresses); extern bool ClusterHasDistributedFunctionWithDistArgument(void); extern void MarkObjectDistributed(const ObjectAddress *distAddress); extern void MarkObjectDistributedWithName(const ObjectAddress *distAddress, char *name, - bool useConnectionForLocalQuery); + bool useConnectionForLocalQuery, + char *connectionUser); extern void MarkObjectDistributedViaSuperUser(const ObjectAddress *distAddress); extern void MarkObjectDistributedLocally(const ObjectAddress *distAddress); extern void UnmarkObjectDistributed(const ObjectAddress *address); diff --git a/src/include/distributed/worker_transaction.h b/src/include/distributed/worker_transaction.h index b9a855828..1b3809a0e 100644 --- a/src/include/distributed/worker_transaction.h +++ b/src/include/distributed/worker_transaction.h @@ -68,6 +68,10 @@ extern void SendCommandToWorkersAsUser(TargetWorkerSet targetWorkerSet, const char *nodeUser, const char *command); extern void SendCommandToWorkerAsUser(const char *nodeName, int32 nodePort, const char *nodeUser, const char *command); +extern void SendCommandToRemoteMetadataNodesParams(const char *command, + const char *user, int parameterCount, + const Oid *parameterTypes, + const char *const *parameterValues); extern bool SendOptionalCommandListToWorkerOutsideTransaction(const char *nodeName, int32 nodePort, const char *nodeUser, diff --git a/src/test/regress/citus_tests/run_test.py b/src/test/regress/citus_tests/run_test.py index d84462221..9a648c0ab 100755 --- a/src/test/regress/citus_tests/run_test.py +++ b/src/test/regress/citus_tests/run_test.py @@ -212,6 +212,18 @@ DEPS = { ["columnar_create", "columnar_load"], repeatable=False, ), + "multi_metadata_sync": TestDeps( + None, + [ + "multi_sequence_default", + "alter_database_propagation", + "alter_role_propagation", + "grant_on_schema_propagation", + "multi_test_catalog_views", + "multi_drop_extension", + ], + repeatable=False, + ), } diff --git a/src/test/regress/citus_tests/test/test_other_databases.py b/src/test/regress/citus_tests/test/test_other_databases.py index 925b065a7..494301692 100644 --- a/src/test/regress/citus_tests/test/test_other_databases.py +++ b/src/test/regress/citus_tests/test/test_other_databases.py @@ -22,7 +22,7 @@ def test_main_commited_outer_not_yet(cluster): "SELECT citus_internal.execute_command_on_remote_nodes_as_user('CREATE USER u1;', 'postgres')" ) cur2.execute( - 
"SELECT citus_internal.mark_object_distributed(1260, 'u1', 123123)" + "SELECT citus_internal.mark_object_distributed(1260, 'u1', 123123, 'postgres')" ) cur2.execute("COMMIT") @@ -133,7 +133,7 @@ def test_main_commited_outer_aborted(cluster): "SELECT citus_internal.execute_command_on_remote_nodes_as_user('CREATE USER u2;', 'postgres')" ) cur2.execute( - "SELECT citus_internal.mark_object_distributed(1260, 'u2', 321321)" + "SELECT citus_internal.mark_object_distributed(1260, 'u2', 321321, 'postgres')" ) cur2.execute("COMMIT") diff --git a/src/test/regress/expected/citus_internal_access.out b/src/test/regress/expected/citus_internal_access.out new file mode 100644 index 000000000..21464b38f --- /dev/null +++ b/src/test/regress/expected/citus_internal_access.out @@ -0,0 +1,10 @@ +--- Create a non-superuser role and check if it can access citus_internal schema functions +CREATE USER nonsuperuser CREATEROLE; +SET ROLE nonsuperuser; +--- The non-superuser role should not be able to access citus_internal functions +SELECT citus_internal.commit_management_command_2pc(); +ERROR: permission denied for function commit_management_command_2pc +SELECT citus_internal.replace_isolation_tester_func(); +ERROR: permission denied for function replace_isolation_tester_func +RESET ROLE; +DROP USER nonsuperuser; \ No newline at end of file diff --git a/src/test/regress/expected/create_drop_database_propagation.out b/src/test/regress/expected/create_drop_database_propagation.out index 6c9be95bc..da4ec4eb7 100644 --- a/src/test/regress/expected/create_drop_database_propagation.out +++ b/src/test/regress/expected/create_drop_database_propagation.out @@ -3,7 +3,7 @@ -- For versions >= 15, pg15_create_drop_database_propagation.sql is used. -- For versions >= 16, pg16_create_drop_database_propagation.sql is used. -- Test the UDF that we use to issue database command during metadata sync. -SELECT pg_catalog.citus_internal_database_command(null); +SELECT citus_internal.database_command(null); ERROR: This is an internal Citus function can only be used in a distributed transaction CREATE ROLE test_db_commands WITH LOGIN; ALTER SYSTEM SET citus.enable_manual_metadata_changes_for_user TO 'test_db_commands'; @@ -21,22 +21,22 @@ SELECT pg_sleep(0.1); SET ROLE test_db_commands; -- fails on null input -SELECT pg_catalog.citus_internal_database_command(null); +SELECT citus_internal.database_command(null); ERROR: command cannot be NULL -- fails on non create / drop db command -SELECT pg_catalog.citus_internal_database_command('CREATE TABLE foo_bar(a int)'); -ERROR: citus_internal_database_command() can only be used for CREATE DATABASE command by Citus. -SELECT pg_catalog.citus_internal_database_command('SELECT 1'); -ERROR: citus_internal_database_command() can only be used for CREATE DATABASE command by Citus. -SELECT pg_catalog.citus_internal_database_command('asfsfdsg'); +SELECT citus_internal.database_command('CREATE TABLE foo_bar(a int)'); +ERROR: citus_internal.database_command() can only be used for CREATE DATABASE command by Citus. +SELECT citus_internal.database_command('SELECT 1'); +ERROR: citus_internal.database_command() can only be used for CREATE DATABASE command by Citus. 
+SELECT citus_internal.database_command('asfsfdsg'); ERROR: syntax error at or near "asfsfdsg" -SELECT pg_catalog.citus_internal_database_command(''); +SELECT citus_internal.database_command(''); ERROR: cannot execute multiple utility events RESET ROLE; ALTER ROLE test_db_commands nocreatedb; SET ROLE test_db_commands; --- make sure that pg_catalog.citus_internal_database_command doesn't cause privilege escalation -SELECT pg_catalog.citus_internal_database_command('CREATE DATABASE no_permissions'); +-- make sure that citus_internal.database_command doesn't cause privilege escalation +SELECT citus_internal.database_command('CREATE DATABASE no_permissions'); ERROR: permission denied to create database RESET ROLE; DROP USER test_db_commands; @@ -1095,46 +1095,46 @@ SELECT * FROM public.check_database_on_all_nodes('test_db') ORDER BY node_type; REVOKE CONNECT ON DATABASE test_db FROM propagated_role; DROP DATABASE test_db; DROP ROLE propagated_role, non_propagated_role; --- test pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock with null input -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(null, 'regression'); +-- test citus_internal.acquire_citus_advisory_object_class_lock with null input +SELECT citus_internal.acquire_citus_advisory_object_class_lock(null, 'regression'); ERROR: object_class cannot be NULL -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), null); - citus_internal_acquire_citus_advisory_object_class_lock +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), null); + acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -- OCLASS_DATABASE -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), NULL); - citus_internal_acquire_citus_advisory_object_class_lock +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), NULL); + acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'regression'); - citus_internal_acquire_citus_advisory_object_class_lock +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'regression'); + acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), ''); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), ''); ERROR: database "" does not exist -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS 
oclass_database), 'no_such_db'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'no_such_db'); ERROR: database "no_such_db" does not exist -- invalid OCLASS -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(-1, NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(-1, NULL); ERROR: unsupported object class: -1 -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(-1, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(-1, 'regression'); ERROR: unsupported object class: -1 -- invalid OCLASS -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(100, NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(100, NULL); ERROR: unsupported object class: 100 -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(100, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(100, 'regression'); ERROR: unsupported object class: 100 -- another valid OCLASS, but not implemented yet -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(10, NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(10, NULL); ERROR: unsupported object class: 10 -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(10, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(10, 'regression'); ERROR: unsupported object class: 10 SELECT 1 FROM run_command_on_all_nodes('ALTER SYSTEM SET citus.enable_create_database_propagation TO ON'); ?column? diff --git a/src/test/regress/expected/create_ref_dist_from_citus_local.out b/src/test/regress/expected/create_ref_dist_from_citus_local.out index dc67400e0..f38e5c5a3 100644 --- a/src/test/regress/expected/create_ref_dist_from_citus_local.out +++ b/src/test/regress/expected/create_ref_dist_from_citus_local.out @@ -371,7 +371,7 @@ ROLLBACK; -- reference tables. 
SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', 1, true); ERROR: This is an internal Citus function can only be used in a distributed transaction -SELECT pg_catalog.citus_internal_delete_placement_metadata(1); +SELECT citus_internal.delete_placement_metadata(1); ERROR: This is an internal Citus function can only be used in a distributed transaction CREATE ROLE test_user_create_ref_dist WITH LOGIN; GRANT ALL ON SCHEMA create_ref_dist_from_citus_local TO test_user_create_ref_dist; @@ -401,7 +401,7 @@ SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', null, t ERROR: colocation_id cannot be NULL SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', 1, null); ERROR: auto_converted cannot be NULL -SELECT pg_catalog.citus_internal_delete_placement_metadata(null); +SELECT citus_internal.delete_placement_metadata(null); ERROR: placement_id cannot be NULL CREATE TABLE udf_test (col_1 int); SELECT citus_add_local_table_to_metadata('udf_test'); @@ -426,8 +426,8 @@ BEGIN; SELECT placementid AS udf_test_placementid FROM pg_dist_shard_placement WHERE shardid = get_shard_id_for_distribution_column('create_ref_dist_from_citus_local.udf_test') \gset - SELECT pg_catalog.citus_internal_delete_placement_metadata(:udf_test_placementid); - citus_internal_delete_placement_metadata + SELECT citus_internal.delete_placement_metadata(:udf_test_placementid); + delete_placement_metadata --------------------------------------------------------------------- (1 row) diff --git a/src/test/regress/expected/detect_conn_close.out b/src/test/regress/expected/detect_conn_close.out index ad758f32e..60973de76 100644 --- a/src/test/regress/expected/detect_conn_close.out +++ b/src/test/regress/expected/detect_conn_close.out @@ -128,7 +128,7 @@ BEGIN; (1 row) SELECT count(*) FROM socket_test_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open ROLLBACK; -- repartition joins also can recover SET citus.enable_repartition_joins TO on; diff --git a/src/test/regress/expected/drop_partitioned_table.out b/src/test/regress/expected/drop_partitioned_table.out index 660adb89c..a92dee711 100644 --- a/src/test/regress/expected/drop_partitioned_table.out +++ b/src/test/regress/expected/drop_partitioned_table.out @@ -354,8 +354,8 @@ NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.pa NOTICE: issuing DROP TABLE IF EXISTS drop_partitioned_table.parent_xxxxx CASCADE NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1') NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1') -NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400) -NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400) +NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400) +NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400) ROLLBACK; NOTICE: issuing ROLLBACK NOTICE: issuing ROLLBACK @@ -377,8 +377,8 @@ NOTICE: issuing DROP TABLE IF EXISTS drop_partitioned_table.parent_xxxxx CASCAD NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1') NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1') NOTICE: issuing DROP TABLE IF EXISTS drop_partitioned_table.child1_xxxxx CASCADE -NOTICE: issuing SELECT 
pg_catalog.citus_internal_delete_colocation_metadata(1344400) -NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400) +NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400) +NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400) ROLLBACK; NOTICE: issuing ROLLBACK NOTICE: issuing ROLLBACK diff --git a/src/test/regress/expected/failure_connection_establishment.out b/src/test/regress/expected/failure_connection_establishment.out index d032755dd..f23f11d2b 100644 --- a/src/test/regress/expected/failure_connection_establishment.out +++ b/src/test/regress/expected/failure_connection_establishment.out @@ -84,7 +84,7 @@ SELECT citus.mitmproxy('conn.connect_delay(1400)'); ALTER TABLE products ADD CONSTRAINT p_key PRIMARY KEY(product_no); WARNING: could not establish connection after 900 ms -ERROR: connection to the remote node localhost:xxxxx failed +ERROR: connection to the remote node postgres@localhost:xxxxx failed RESET citus.node_connection_timeout; SELECT citus.mitmproxy('conn.allow()'); mitmproxy diff --git a/src/test/regress/expected/failure_copy_on_hash.out b/src/test/regress/expected/failure_copy_on_hash.out index 24350f707..424ab0da8 100644 --- a/src/test/regress/expected/failure_copy_on_hash.out +++ b/src/test/regress/expected/failure_copy_on_hash.out @@ -36,7 +36,7 @@ SELECT citus.mitmproxy('conn.kill()'); (1 row) \COPY test_table FROM stdin delimiter ','; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open CONTEXT: COPY test_table, line 1: "1,2" SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -271,7 +271,7 @@ SELECT citus.mitmproxy('conn.kill()'); (1 row) \COPY test_table_2 FROM stdin delimiter ','; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_create_distributed_table_non_empty.out b/src/test/regress/expected/failure_create_distributed_table_non_empty.out index 0e4b85701..109d3686f 100644 --- a/src/test/regress/expected/failure_create_distributed_table_non_empty.out +++ b/src/test/regress/expected/failure_create_distributed_table_non_empty.out @@ -28,7 +28,7 @@ SELECT citus.mitmproxy('conn.kill()'); (1 row) SELECT create_distributed_table('test_table', 'id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass; count --------------------------------------------------------------------- @@ -164,7 +164,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()'); (1 row) SELECT create_distributed_table('test_table', 'id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT count(*) FROM pg_dist_shard WHERE 
logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass; count --------------------------------------------------------------------- @@ -436,7 +436,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL R (1 row) SELECT create_distributed_table('test_table', 'id', colocate_with => 'colocated_table'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass; count --------------------------------------------------------------------- @@ -519,7 +519,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^SELECT worker_apply_shard_ddl_comma (1 row) SELECT create_distributed_table('test_table', 'id', colocate_with => 'colocated_table'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass; count --------------------------------------------------------------------- @@ -680,7 +680,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()'); (1 row) SELECT create_distributed_table('test_table', 'id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass; count --------------------------------------------------------------------- @@ -901,7 +901,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL R (1 row) SELECT create_distributed_table('test_table', 'id', colocate_with => 'colocated_table'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass; count --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_create_index_concurrently.out b/src/test/regress/expected/failure_create_index_concurrently.out index 784c91aec..947d5711e 100644 --- a/src/test/regress/expected/failure_create_index_concurrently.out +++ b/src/test/regress/expected/failure_create_index_concurrently.out @@ -29,7 +29,7 @@ CREATE INDEX CONCURRENTLY idx_index_test ON index_test(id, value_1); WARNING: Commands that are not transaction-safe may result in partial failure, potentially leading to an inconsistent state. If the problematic command is a CREATE operation, consider using the 'IF EXISTS' syntax to drop the object, if applicable, and then re-attempt the original command. 
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -63,7 +63,7 @@ CREATE INDEX CONCURRENTLY idx_index_test ON index_test(id, value_1); WARNING: Commands that are not transaction-safe may result in partial failure, potentially leading to an inconsistent state. If the problematic command is a CREATE operation, consider using the 'IF EXISTS' syntax to drop the object, if applicable, and then re-attempt the original command. -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -144,7 +144,7 @@ DROP INDEX CONCURRENTLY IF EXISTS idx_index_test; WARNING: Commands that are not transaction-safe may result in partial failure, potentially leading to an inconsistent state. If the problematic command is a CREATE operation, consider using the 'IF EXISTS' syntax to drop the object, if applicable, and then re-attempt the original command. -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_create_table.out b/src/test/regress/expected/failure_create_table.out index 5d022d678..956bdb2b2 100644 --- a/src/test/regress/expected/failure_create_table.out +++ b/src/test/regress/expected/failure_create_table.out @@ -22,7 +22,7 @@ SELECT citus.mitmproxy('conn.kill()'); (1 row) SELECT create_distributed_table('test_table','id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -116,7 +116,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_shard_ddl_comman (1 row) SELECT create_distributed_table('test_table','id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -147,7 +147,7 @@ BEGIN; (1 row) SELECT create_distributed_table('test_table', 'id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open COMMIT; SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -215,7 +215,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()'); (1 row) SELECT create_distributed_table('test_table','id',colocate_with=>'temp_table'); -ERROR: 
connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -484,7 +484,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()'); BEGIN; SELECT create_distributed_table('test_table','id'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open ROLLBACK; SELECT citus.mitmproxy('conn.allow()'); mitmproxy diff --git a/src/test/regress/expected/failure_cte_subquery.out b/src/test/regress/expected/failure_cte_subquery.out index 19fb11f37..89e3e1489 100644 --- a/src/test/regress/expected/failure_cte_subquery.out +++ b/src/test/regress/expected/failure_cte_subquery.out @@ -86,7 +86,7 @@ FROM ORDER BY 1 DESC LIMIT 5 ) as foo WHERE foo.user_id = cte.user_id; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- kill at the third copy (pull) SELECT citus.mitmproxy('conn.onQuery(query="SELECT DISTINCT users_table.user").kill()'); mitmproxy @@ -117,7 +117,7 @@ FROM ORDER BY 1 DESC LIMIT 5 ) as foo WHERE foo.user_id = cte.user_id; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancel at the first copy (push) SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')'); mitmproxy @@ -254,7 +254,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^DELETE FROM").kill()'); WITH cte_delete AS MATERIALIZED (DELETE FROM users_table WHERE user_name in ('A', 'D') RETURNING *) INSERT INTO users_table SELECT * FROM cte_delete; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify contents are the same SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -365,7 +365,7 @@ BEGIN; SET LOCAL citus.multi_shard_modify_mode = 'sequential'; WITH cte_delete AS MATERIALIZED (DELETE FROM users_table WHERE user_name in ('A', 'D') RETURNING *) INSERT INTO users_table SELECT * FROM cte_delete; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open END; RESET SEARCH_PATH; SELECT citus.mitmproxy('conn.allow()'); diff --git a/src/test/regress/expected/failure_ddl.out b/src/test/regress/expected/failure_ddl.out index 77b134a72..2f55663a0 100644 --- a/src/test/regress/expected/failure_ddl.out +++ b/src/test/regress/expected/failure_ddl.out @@ -36,7 +36,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()'); (1 row) ALTER TABLE test_table ADD COLUMN new_column INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection 
not open SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass; array_agg --------------------------------------------------------------------- @@ -99,7 +99,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil (1 row) ALTER TABLE test_table ADD COLUMN new_column INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- show that we've never commited the changes SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass; array_agg @@ -300,7 +300,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()'); (1 row) ALTER TABLE test_table DROP COLUMN new_column; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass; array_agg --------------------------------------------------------------------- @@ -361,7 +361,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil (1 row) ALTER TABLE test_table DROP COLUMN new_column; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass; array_agg --------------------------------------------------------------------- @@ -661,7 +661,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()'); (1 row) ALTER TABLE test_table ADD COLUMN new_column INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass; array_agg --------------------------------------------------------------------- @@ -722,7 +722,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil (1 row) ALTER TABLE test_table ADD COLUMN new_column INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass; array_agg --------------------------------------------------------------------- @@ -1010,7 +1010,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil (1 row) ALTER TABLE test_table ADD COLUMN new_column INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- kill as soon as the coordinator after it sends worker_apply_shard_ddl_command 2nd time SELECT 
citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").after(2).kill()'); mitmproxy @@ -1019,7 +1019,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").aft (1 row) ALTER TABLE test_table ADD COLUMN new_column INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancel as soon as the coordinator after it sends worker_apply_shard_ddl_command 2nd time SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").after(2).cancel(' || pg_backend_pid() || ')'); mitmproxy diff --git a/src/test/regress/expected/failure_distributed_results.out b/src/test/regress/expected/failure_distributed_results.out index a316763e3..5a2461057 100644 --- a/src/test/regress/expected/failure_distributed_results.out +++ b/src/test/regress/expected/failure_distributed_results.out @@ -88,7 +88,7 @@ CREATE TABLE distributed_result_info AS SELECT resultId, nodeport, rowcount, targetShardId, targetShardIndex FROM partition_task_list_results('test', $$ SELECT * FROM source_table $$, 'target_table') NATURAL JOIN pg_dist_node; -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT * FROM distributed_result_info ORDER BY resultId; resultid | nodeport | rowcount | targetshardid | targetshardindex --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_failover_to_local_execution.out b/src/test/regress/expected/failure_failover_to_local_execution.out index 56518141a..20ad2a6df 100644 --- a/src/test/regress/expected/failure_failover_to_local_execution.out +++ b/src/test/regress/expected/failure_failover_to_local_execution.out @@ -101,7 +101,7 @@ NOTICE: issuing SELECT count(*) AS count FROM failure_failover_to_local_executi DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SELECT count(*) AS count FROM failure_failover_to_local_execution.failover_to_local_1980003 failover_to_local WHERE true DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open NOTICE: executing the command locally: SELECT count(*) AS count FROM failure_failover_to_local_execution.failover_to_local_1980000 failover_to_local WHERE true NOTICE: executing the command locally: SELECT count(*) AS count FROM failure_failover_to_local_execution.failover_to_local_1980002 failover_to_local WHERE true count diff --git a/src/test/regress/expected/failure_insert_select_pushdown.out b/src/test/regress/expected/failure_insert_select_pushdown.out index ed461d040..570bf22f9 100644 --- a/src/test/regress/expected/failure_insert_select_pushdown.out +++ b/src/test/regress/expected/failure_insert_select_pushdown.out @@ -44,7 +44,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown" (1 row) INSERT INTO events_summary SELECT user_id, event_id, count(*) FROM events_table GROUP BY 1,2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the 
remote node postgres@localhost:xxxxx failed with the following error: connection not open --verify nothing is modified SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -95,7 +95,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown" (1 row) INSERT INTO events_table SELECT * FROM events_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open --verify nothing is modified SELECT citus.mitmproxy('conn.allow()'); mitmproxy diff --git a/src/test/regress/expected/failure_insert_select_repartition.out b/src/test/regress/expected/failure_insert_select_repartition.out index d45318208..ca36f7e88 100644 --- a/src/test/regress/expected/failure_insert_select_repartition.out +++ b/src/test/regress/expected/failure_insert_select_repartition.out @@ -55,7 +55,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_partition_query_result").kill (1 row) INSERT INTO target_table SELECT * FROM source_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT * FROM target_table ORDER BY a; a | b --------------------------------------------------------------------- @@ -68,7 +68,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_partition_query_result").kill (1 row) INSERT INTO target_table SELECT * FROM replicated_source_table; -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT * FROM target_table ORDER BY a; a | b --------------------------------------------------------------------- @@ -138,7 +138,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="read_intermediate_results").kill()') (1 row) INSERT INTO target_table SELECT * FROM source_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT * FROM target_table ORDER BY a; a | b --------------------------------------------------------------------- @@ -151,7 +151,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="read_intermediate_results").kill()') (1 row) INSERT INTO target_table SELECT * FROM replicated_source_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT * FROM target_table ORDER BY a; a | b --------------------------------------------------------------------- @@ -168,7 +168,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="read_intermediate_results").kill()') (1 row) INSERT INTO replicated_target_table SELECT * FROM source_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT * FROM replicated_target_table; a | b --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_multi_dml.out 
b/src/test/regress/expected/failure_multi_dml.out index bbea2c999..7757f574c 100644 --- a/src/test/regress/expected/failure_multi_dml.out +++ b/src/test/regress/expected/failure_multi_dml.out @@ -33,7 +33,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE").kill()'); BEGIN; DELETE FROM dml_test WHERE id = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open DELETE FROM dml_test WHERE id = 2; ERROR: current transaction is aborted, commands ignored until end of transaction block INSERT INTO dml_test VALUES (5, 'Epsilon'); @@ -93,7 +93,7 @@ BEGIN; DELETE FROM dml_test WHERE id = 1; DELETE FROM dml_test WHERE id = 2; INSERT INTO dml_test VALUES (5, 'Epsilon'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open UPDATE dml_test SET name = 'alpha' WHERE id = 1; ERROR: current transaction is aborted, commands ignored until end of transaction block UPDATE dml_test SET name = 'gamma' WHERE id = 3; @@ -148,7 +148,7 @@ DELETE FROM dml_test WHERE id = 1; DELETE FROM dml_test WHERE id = 2; INSERT INTO dml_test VALUES (5, 'Epsilon'); UPDATE dml_test SET name = 'alpha' WHERE id = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open UPDATE dml_test SET name = 'gamma' WHERE id = 3; ERROR: current transaction is aborted, commands ignored until end of transaction block COMMIT; diff --git a/src/test/regress/expected/failure_multi_row_insert.out b/src/test/regress/expected/failure_multi_row_insert.out index f3cd4919a..8feffbaeb 100644 --- a/src/test/regress/expected/failure_multi_row_insert.out +++ b/src/test/regress/expected/failure_multi_row_insert.out @@ -43,7 +43,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()'); (1 row) INSERT INTO distributed_table VALUES (1,1), (1,2), (1,3); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- this test is broken, see https://github.com/citusdata/citus/issues/2460 -- SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')'); -- INSERT INTO distributed_table VALUES (1,4), (1,5), (1,6); @@ -55,7 +55,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()'); (1 row) INSERT INTO distributed_table VALUES (1,7), (5,8); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- this test is broken, see https://github.com/citusdata/citus/issues/2460 -- SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')'); -- INSERT INTO distributed_table VALUES (1,9), (5,10); @@ -67,7 +67,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()'); (1 row) INSERT INTO distributed_table VALUES (1,11), (6,12); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node 
postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- @@ -84,7 +84,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).kill()'); (1 row) INSERT INTO distributed_table VALUES (1,15), (6,16); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- @@ -101,7 +101,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()'); (1 row) INSERT INTO distributed_table VALUES (2,19),(1,20); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_multi_shard_update_delete.out b/src/test/regress/expected/failure_multi_shard_update_delete.out index 24cb895ea..27284ec38 100644 --- a/src/test/regress/expected/failure_multi_shard_update_delete.out +++ b/src/test/regress/expected/failure_multi_shard_update_delete.out @@ -58,7 +58,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()'); -- issue a multi shard delete DELETE FROM t2 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is deleted SELECT count(*) FROM t2; count @@ -74,7 +74,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005"). 
(1 row) DELETE FROM t2 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is deleted SELECT count(*) FROM t2; count @@ -134,7 +134,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()'); -- issue a multi shard update UPDATE t2 SET c = 4 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2; b2 | c4 @@ -150,7 +150,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").kill( (1 row) UPDATE t2 SET c = 4 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2; b2 | c4 @@ -202,7 +202,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()'); -- issue a multi shard delete DELETE FROM t2 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is deleted SELECT count(*) FROM t2; count @@ -218,7 +218,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005"). 
(1 row) DELETE FROM t2 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is deleted SELECT count(*) FROM t2; count @@ -278,7 +278,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()'); -- issue a multi shard update UPDATE t2 SET c = 4 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2; b2 | c4 @@ -294,7 +294,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").kill( (1 row) UPDATE t2 SET c = 4 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2; b2 | c4 @@ -364,7 +364,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()'); (1 row) DELETE FROM r1 WHERE a = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is deleted SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2; b2 @@ -379,7 +379,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()'); (1 row) DELETE FROM t2 WHERE b = 2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is deleted SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2; b2 @@ -459,7 +459,7 @@ UPDATE t3 SET c = q.c FROM ( SELECT b, max(c) as c FROM t2 GROUP BY b) q WHERE t3.b = q.b RETURNING *; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open --- verify nothing is updated SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -515,7 +515,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t3_201013").kill( (1 row) UPDATE t3 SET b = 2 WHERE b = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3; b1 | b2 @@ -547,7 +547,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO -- following will fail UPDATE t3 SET b = 2 WHERE b = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open END; -- verify everything is rolled back SELECT count(*) FILTER (WHERE b = 1) b1, count(*) 
FILTER (WHERE b = 2) AS b2 FROM t2; @@ -563,7 +563,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO (1 row) UPDATE t3 SET b = 1 WHERE b = 2 RETURNING *; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3; b1 | b2 @@ -578,7 +578,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO (1 row) UPDATE t3 SET b = 2 WHERE b = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- verify nothing is updated SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3; b1 | b2 @@ -610,7 +610,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO -- following will fail UPDATE t3 SET b = 2 WHERE b = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open END; -- verify everything is rolled back SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2; diff --git a/src/test/regress/expected/failure_mx_metadata_sync.out b/src/test/regress/expected/failure_mx_metadata_sync.out index 7b4c04ff8..c2418e9ab 100644 --- a/src/test/regress/expected/failure_mx_metadata_sync.out +++ b/src/test/regress/expected/failure_mx_metadata_sync.out @@ -132,7 +132,7 @@ SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port; -- Check failures on DDL command propagation CREATE TABLE t2 (id int PRIMARY KEY); -SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_placement_metadata").kill()'); +SELECT citus.mitmproxy('conn.onParse(query="citus_internal.add_placement_metadata").kill()'); mitmproxy --------------------------------------------------------------------- @@ -140,7 +140,7 @@ SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_placement_metadat SELECT create_distributed_table('t2', 'id'); ERROR: connection not open -SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_shard_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onParse(query="citus_internal.add_shard_metadata").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_mx_metadata_sync_multi_trans.out b/src/test/regress/expected/failure_mx_metadata_sync_multi_trans.out index 541bce5c5..e71f092c3 100644 --- a/src/test/regress/expected/failure_mx_metadata_sync_multi_trans.out +++ b/src/test/regress/expected/failure_mx_metadata_sync_multi_trans.out @@ -155,7 +155,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE pg_dist_local_group SET group (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to drop node metadata SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM 
pg_dist_node").cancel(' || :pid || ')'); mitmproxy @@ -172,7 +172,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_node").kill()'); (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to send node metadata SELECT citus.mitmproxy('conn.onQuery(query="INSERT INTO pg_dist_node").cancel(' || :pid || ')'); mitmproxy @@ -189,7 +189,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT INTO pg_dist_node").kill()'); (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to drop sequence dependency for all tables SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*FROM pg_dist_partition").cancel(' || :pid || ')'); mitmproxy @@ -206,7 +206,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequen (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to drop shell table SELECT citus.mitmproxy('conn.onQuery(query="CALL pg_catalog.worker_drop_all_shell_tables").cancel(' || :pid || ')'); mitmproxy @@ -223,7 +223,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CALL pg_catalog.worker_drop_all_shel (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to delete all pg_dist_partition metadata SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_partition").cancel(' || :pid || ')'); mitmproxy @@ -240,7 +240,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_partition").kill (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to delete all pg_dist_shard metadata SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_shard").cancel(' || :pid || ')'); mitmproxy @@ -257,7 +257,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_shard").kill()') (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to delete all pg_dist_placement metadata SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_placement").cancel(' || :pid || ')'); mitmproxy @@ -274,7 +274,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_placement").kill (1 row) SELECT 
citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to delete all pg_dist_object metadata SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_object").cancel(' || :pid || ')'); mitmproxy @@ -291,7 +291,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_objec (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to delete all pg_dist_colocation metadata SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_colocation").cancel(' || :pid || ')'); mitmproxy @@ -308,7 +308,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_coloc (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to alter or create role SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_alter_role").cancel(' || :pid || ')'); mitmproxy @@ -325,7 +325,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_alter_role") (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to set database owner SELECT citus.mitmproxy('conn.onQuery(query="ALTER DATABASE.*OWNER TO").cancel(' || :pid || ')'); mitmproxy @@ -342,7 +342,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER DATABASE.*OWNER TO").kill()'); (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create schema SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metadata_sync_multi_trans AUTHORIZATION").cancel(' || :pid || ')'); mitmproxy @@ -359,7 +359,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metad (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create collation SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*CREATE COLLATION mx_metadata_sync_multi_trans.german_phonebook").cancel(' || :pid || ')'); mitmproxy @@ -376,7 +376,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: 
connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create function SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metadata_sync_multi_trans.one_as_result").cancel(' || :pid || ')'); mitmproxy @@ -393,7 +393,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metada (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create text search dictionary SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_german_dict").cancel(' || :pid || ')'); mitmproxy @@ -410,7 +410,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create text search config SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_ts_config").cancel(' || :pid || ')'); mitmproxy @@ -427,7 +427,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create type SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*pair_type").cancel(' || :pid || ')'); mitmproxy @@ -444,7 +444,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create publication SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").cancel(' || :pid || ')'); mitmproxy @@ -461,7 +461,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").kill() (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create sequence SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command").cancel(' || :pid || ')'); mitmproxy @@ -478,7 +478,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to drop sequence dependency for distributed table 
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')'); mitmproxy @@ -495,7 +495,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequen (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to drop distributed table if exists SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')'); mitmproxy @@ -512,7 +512,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_syn (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create distributed table SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')'); mitmproxy @@ -529,7 +529,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to record sequence dependency for table SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequence_dependency").cancel(' || :pid || ')'); mitmproxy @@ -546,7 +546,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create index for table SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx ON mx_metadata_sync_multi_trans.dist1 USING gin").cancel(' || :pid || ')'); mitmproxy @@ -563,7 +563,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create reference table SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.ref").cancel(' || :pid || ')'); mitmproxy @@ -580,7 +580,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create local table SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE 
mx_metadata_sync_multi_trans.loc1").cancel(' || :pid || ')'); mitmproxy @@ -597,7 +597,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create distributed partitioned table SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.orders").cancel(' || :pid || ')'); mitmproxy @@ -614,7 +614,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to create distributed partition table SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.orders_p2020_01_05").cancel(' || :pid || ')'); mitmproxy @@ -631,7 +631,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to attach partition SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE mx_metadata_sync_multi_trans.orders ATTACH PARTITION mx_metadata_sync_multi_trans.orders_p2020_01_05").cancel(' || :pid || ')'); mitmproxy @@ -648,9 +648,9 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE mx_metadata_sync_multi_t (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to add partition metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_partition_metadata").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- @@ -658,16 +658,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_ SELECT citus_activate_node('localhost', :worker_2_proxy_port); ERROR: canceling statement due to user request -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_partition_metadata").kill()'); mitmproxy --------------------------------------------------------------------- (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to add shard metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_metadata").cancel(' || :pid || ')'); +SELECT 
citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_shard_metadata").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- @@ -675,16 +675,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_meta SELECT citus_activate_node('localhost', :worker_2_proxy_port); ERROR: canceling statement due to user request -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_shard_metadata").kill()'); mitmproxy --------------------------------------------------------------------- (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to add placement metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_placement_metadata").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- @@ -692,16 +692,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_ SELECT citus_activate_node('localhost', :worker_2_proxy_port); ERROR: canceling statement due to user request -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_placement_metadata").kill()'); mitmproxy --------------------------------------------------------------------- (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to add colocation metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add_colocation_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_colocation_metadata").cancel(' || :pid || ')'); mitmproxy --------------------------------------------------------------------- @@ -709,16 +709,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add SELECT citus_activate_node('localhost', :worker_2_proxy_port); ERROR: canceling statement due to user request -SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add_colocation_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_colocation_metadata").kill()'); mitmproxy --------------------------------------------------------------------- (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to add distributed object metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_object_metadata").cancel(' || :pid || 
')'); mitmproxy --------------------------------------------------------------------- @@ -726,14 +726,14 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_met SELECT citus_activate_node('localhost', :worker_2_proxy_port); ERROR: canceling statement due to user request -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_object_metadata").kill()'); mitmproxy --------------------------------------------------------------------- (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark function as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as_result").cancel(' || :pid || ')'); mitmproxy @@ -750,7 +750,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark collation as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german_phonebook").cancel(' || :pid || ')'); mitmproxy @@ -767,7 +767,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark text search dictionary as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_german_dict").cancel(' || :pid || ')'); mitmproxy @@ -784,7 +784,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ger (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark text search configuration as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_config").cancel(' || :pid || ')'); mitmproxy @@ -801,7 +801,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_ (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark type as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_type").cancel(' || :pid || ')'); mitmproxy @@ -818,7 +818,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_t (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx 
failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark sequence as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_owned").cancel(' || :pid || ')'); mitmproxy @@ -835,7 +835,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_ow (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to mark publication as distributed SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_all").cancel(' || :pid || ')'); mitmproxy @@ -852,7 +852,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_al (1 row) SELECT citus_activate_node('localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Failure to set isactive to true SELECT citus.mitmproxy('conn.onQuery(query="UPDATE pg_dist_node SET isactive = TRUE").cancel(' || :pid || ')'); mitmproxy diff --git a/src/test/regress/expected/failure_on_create_subscription.out b/src/test/regress/expected/failure_on_create_subscription.out index 19df82d3e..a42df24d2 100644 --- a/src/test/regress/expected/failure_on_create_subscription.out +++ b/src/test/regress/expected/failure_on_create_subscription.out @@ -43,9 +43,9 @@ SELECT * FROM shards_in_workers; -- Failure on creating the subscription -- Failing exactly on CREATE SUBSCRIPTION is causing flaky test where we fail with either: --- 1) ERROR: connection to the remote node localhost:xxxxx failed with the following error: ERROR: subscription "citus_shard_move_subscription_xxxxxxx" does not exist +-- 1) ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: ERROR: subscription "citus_shard_move_subscription_xxxxxxx" does not exist -- another command is already in progress --- 2) ERROR: connection to the remote node localhost:xxxxx failed with the following error: another command is already in progress +-- 2) ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: another command is already in progress -- Instead fail on the next step (ALTER SUBSCRIPTION) instead which is also required logically as part of uber CREATE SUBSCRIPTION operation. 
SELECT citus.mitmproxy('conn.onQuery(query="ALTER SUBSCRIPTION").kill()'); mitmproxy diff --git a/src/test/regress/expected/failure_online_move_shard_placement.out b/src/test/regress/expected/failure_online_move_shard_placement.out index cf5890f35..0a881fe42 100644 --- a/src/test/regress/expected/failure_online_move_shard_placement.out +++ b/src/test/regress/expected/failure_online_move_shard_placement.out @@ -407,7 +407,7 @@ SELECT citus.mitmproxy('conn.matches(b"CREATE INDEX").killall()'); (1 row) SELECT master_move_shard_placement(101, 'localhost', :worker_1_port, 'localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cleanup leftovers SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -442,7 +442,7 @@ SELECT citus.mitmproxy('conn.matches(b"CREATE INDEX").killall()'); (1 row) SELECT master_move_shard_placement(101, 'localhost', :worker_1_port, 'localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- failure on parallel create index ALTER SYSTEM RESET citus.max_adaptive_executor_pool_size; SELECT pg_reload_conf(); @@ -458,7 +458,7 @@ SELECT citus.mitmproxy('conn.matches(b"CREATE INDEX").killall()'); (1 row) SELECT master_move_shard_placement(101, 'localhost', :worker_1_port, 'localhost', :worker_2_proxy_port); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- Verify that the shard is not moved and the number of rows are still 100k SELECT citus.mitmproxy('conn.allow()'); mitmproxy diff --git a/src/test/regress/expected/failure_ref_tables.out b/src/test/regress/expected/failure_ref_tables.out index 4984cc1bf..e9a7e4571 100644 --- a/src/test/regress/expected/failure_ref_tables.out +++ b/src/test/regress/expected/failure_ref_tables.out @@ -33,7 +33,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()'); (1 row) INSERT INTO ref_table VALUES (5, 6); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT COUNT(*) FROM ref_table WHERE key=5; count --------------------------------------------------------------------- @@ -48,7 +48,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE").kill()'); (1 row) UPDATE ref_table SET key=7 RETURNING value; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT COUNT(*) FROM ref_table WHERE key=7; count --------------------------------------------------------------------- @@ -65,7 +65,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE").kill()'); BEGIN; DELETE FROM ref_table WHERE key=5; UPDATE ref_table SET key=value; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: 
connection not open COMMIT; SELECT COUNT(*) FROM ref_table WHERE key=value; count diff --git a/src/test/regress/expected/failure_replicated_partitions.out b/src/test/regress/expected/failure_replicated_partitions.out index 7294df98b..67dda269c 100644 --- a/src/test/regress/expected/failure_replicated_partitions.out +++ b/src/test/regress/expected/failure_replicated_partitions.out @@ -28,7 +28,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()'); (1 row) INSERT INTO partitioned_table VALUES (0, 0); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- use both placements SET citus.task_assignment_policy TO "round-robin"; -- the results should be the same diff --git a/src/test/regress/expected/failure_savepoints.out b/src/test/regress/expected/failure_savepoints.out index 9b155e90e..ca5cb91f6 100644 --- a/src/test/regress/expected/failure_savepoints.out +++ b/src/test/regress/expected/failure_savepoints.out @@ -312,7 +312,7 @@ SELECT * FROM ref; ROLLBACK TO SAVEPOINT start; SELECT * FROM ref; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open END; -- clean up RESET client_min_messages; diff --git a/src/test/regress/expected/failure_single_mod.out b/src/test/regress/expected/failure_single_mod.out index 2a6ed2d77..aa6c10e66 100644 --- a/src/test/regress/expected/failure_single_mod.out +++ b/src/test/regress/expected/failure_single_mod.out @@ -27,7 +27,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()'); (1 row) INSERT INTO mod_test VALUES (2, 6); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT COUNT(*) FROM mod_test WHERE key=2; count --------------------------------------------------------------------- @@ -59,7 +59,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE").kill()'); (1 row) UPDATE mod_test SET value='ok' WHERE key=2 RETURNING key; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT COUNT(*) FROM mod_test WHERE value='ok'; count --------------------------------------------------------------------- @@ -89,7 +89,7 @@ INSERT INTO mod_test VALUES (2, 6); INSERT INTO mod_test VALUES (2, 7); DELETE FROM mod_test WHERE key=2 AND value = '7'; UPDATE mod_test SET value='ok' WHERE key=2; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open COMMIT; SELECT COUNT(*) FROM mod_test WHERE key=2; count diff --git a/src/test/regress/expected/failure_single_select.out b/src/test/regress/expected/failure_single_select.out index 1b60f3125..586dd4756 100644 --- a/src/test/regress/expected/failure_single_select.out +++ b/src/test/regress/expected/failure_single_select.out @@ -30,14 +30,14 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*select_test").kill()'); (1 row) SELECT * FROM select_test WHERE key = 3; 
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open key | value --------------------------------------------------------------------- 3 | test data (1 row) SELECT * FROM select_test WHERE key = 3; -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open key | value --------------------------------------------------------------------- 3 | test data @@ -54,7 +54,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*select_test").kill()'); BEGIN; INSERT INTO select_test VALUES (3, 'more data'); SELECT * FROM select_test WHERE key = 3; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open COMMIT; SELECT citus.mitmproxy('conn.allow()'); mitmproxy @@ -142,7 +142,7 @@ SELECT * FROM select_test WHERE key = 3; INSERT INTO select_test VALUES (3, 'even more data'); SELECT * FROM select_test WHERE key = 3; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open COMMIT; SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*pg_prepared_xacts").after(2).kill()'); mitmproxy @@ -186,7 +186,7 @@ SELECT * FROM select_test WHERE key = 1; (1 row) SELECT * FROM select_test WHERE key = 1; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- now the same test with query cancellation SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*select_test").after(1).cancel(' || pg_backend_pid() || ')'); mitmproxy diff --git a/src/test/regress/expected/failure_split_cleanup.out b/src/test/regress/expected/failure_split_cleanup.out index f9eacd47c..ba8624725 100644 --- a/src/test/regress/expected/failure_split_cleanup.out +++ b/src/test/regress/expected/failure_split_cleanup.out @@ -627,10 +627,10 @@ WARNING: connection not open CONTEXT: while executing command on localhost:xxxxx WARNING: connection not open CONTEXT: while executing command on localhost:xxxxx -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open -WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open +WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open ERROR: connection not open CONTEXT: 
while executing command on localhost:xxxxx SELECT operation_id, object_type, object_name, node_group_id, policy_type diff --git a/src/test/regress/expected/failure_tenant_isolation.out b/src/test/regress/expected/failure_tenant_isolation.out index 6be4580be..b406aa2a3 100644 --- a/src/test/regress/expected/failure_tenant_isolation.out +++ b/src/test/regress/expected/failure_tenant_isolation.out @@ -76,7 +76,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").kill()'); (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancellation on colocated table population SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").cancel(' || :pid || ')'); mitmproxy @@ -94,7 +94,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_2 (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancellation on colocated table constraints SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_2 ADD CONSTRAINT").after(2).cancel(' || :pid || ')'); mitmproxy @@ -131,7 +131,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").kill()'); (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancellation on table population SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").cancel(' || :pid || ')'); mitmproxy @@ -149,7 +149,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_1 (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancellation on table constraints SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_1 ADD CONSTRAINT").after(2).cancel(' || :pid || ')'); mitmproxy diff --git a/src/test/regress/expected/failure_tenant_isolation_nonblocking.out b/src/test/regress/expected/failure_tenant_isolation_nonblocking.out index e40842e2a..aecde71c0 100644 --- a/src/test/regress/expected/failure_tenant_isolation_nonblocking.out +++ b/src/test/regress/expected/failure_tenant_isolation_nonblocking.out @@ -159,7 +159,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SET TRANSACTION SNAPSHOT").kill()'); (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode := 'force_logical'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- 
cancellation on setting snapshot SELECT citus.mitmproxy('conn.onQuery(query="SET TRANSACTION SNAPSHOT").cancel(' || :pid || ')'); mitmproxy @@ -177,7 +177,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").kill()'); (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode := 'force_logical'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancellation on table population SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").cancel(' || :pid || ')'); mitmproxy @@ -195,7 +195,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").kill()'); (1 row) SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode := 'force_logical'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open -- cancellation on colocated table population SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").cancel(' || :pid || ')'); mitmproxy diff --git a/src/test/regress/expected/failure_truncate.out b/src/test/regress/expected/failure_truncate.out index 4e332252e..253314ee9 100644 --- a/src/test/regress/expected/failure_truncate.out +++ b/src/test/regress/expected/failure_truncate.out @@ -43,7 +43,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()'); (1 row) TRUNCATE test_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -152,7 +152,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="TRUNCATE TABLE truncate_failure.test (1 row) TRUNCATE test_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -414,7 +414,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^TRUNCATE TABLE").after(2).kill()'); (1 row) TRUNCATE reference_table CASCADE; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -553,7 +553,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()'); (1 row) TRUNCATE test_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -662,7 +662,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^TRUNCATE TABLE truncate_failure.tes (1 row) TRUNCATE test_table; -ERROR: connection 
to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -922,7 +922,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()'); (1 row) TRUNCATE test_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- @@ -1031,7 +1031,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="TRUNCATE TABLE truncate_failure.test (1 row) TRUNCATE test_table; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.allow()'); mitmproxy --------------------------------------------------------------------- diff --git a/src/test/regress/expected/failure_vacuum.out b/src/test/regress/expected/failure_vacuum.out index 617d40d3a..b438f413b 100644 --- a/src/test/regress/expected/failure_vacuum.out +++ b/src/test/regress/expected/failure_vacuum.out @@ -30,7 +30,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM").kill()'); (1 row) VACUUM vacuum_test; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").kill()'); mitmproxy --------------------------------------------------------------------- @@ -38,7 +38,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").kill()'); (1 row) ANALYZE vacuum_test; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SET client_min_messages TO ERROR; SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()'); mitmproxy @@ -113,7 +113,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").kill()'); (1 row) VACUUM vacuum_test, other_vacuum_test; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").cancel(' || pg_backend_pid() || ')'); mitmproxy --------------------------------------------------------------------- diff --git a/src/test/regress/expected/isolation_database_cmd_from_any_node.out b/src/test/regress/expected/isolation_database_cmd_from_any_node.out index 5ca30fb1e..771c67fe8 100644 --- a/src/test/regress/expected/isolation_database_cmd_from_any_node.out +++ b/src/test/regress/expected/isolation_database_cmd_from_any_node.out @@ -3,16 +3,16 @@ Parsed test spec with 2 sessions starting permutation: s1-begin s2-begin s1-acquire-citus-adv-oclass-lock s2-acquire-citus-adv-oclass-lock s1-commit s2-commit step s1-begin: BEGIN; step s2-begin: BEGIN; -step s1-acquire-citus-adv-oclass-lock: SELECT 
citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; -citus_internal_acquire_citus_advisory_object_class_lock +step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -step s2-acquire-citus-adv-oclass-lock: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; +step s2-acquire-citus-adv-oclass-lock: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; step s1-commit: COMMIT; step s2-acquire-citus-adv-oclass-lock: <... completed> -citus_internal_acquire_citus_advisory_object_class_lock +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) @@ -23,16 +23,16 @@ starting permutation: s1-create-testdb1 s1-begin s2-begin s1-acquire-citus-adv-o step s1-create-testdb1: CREATE DATABASE testdb1; step s1-begin: BEGIN; step s2-begin: BEGIN; -step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; -citus_internal_acquire_citus_advisory_object_class_lock +step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -step s2-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; +step s2-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; step s1-commit: COMMIT; step s2-acquire-citus-adv-oclass-lock-with-oid-testdb1: <... 
completed> -citus_internal_acquire_citus_advisory_object_class_lock +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) @@ -45,14 +45,14 @@ step s1-create-testdb1: CREATE DATABASE testdb1; step s2-create-testdb2: CREATE DATABASE testdb2; step s1-begin: BEGIN; step s2-begin: BEGIN; -step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; -citus_internal_acquire_citus_advisory_object_class_lock +step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database; -citus_internal_acquire_citus_advisory_object_class_lock +step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database; +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) @@ -66,14 +66,14 @@ starting permutation: s2-create-testdb2 s1-begin s2-begin s1-acquire-citus-adv-o step s2-create-testdb2: CREATE DATABASE testdb2; step s1-begin: BEGIN; step s2-begin: BEGIN; -step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; -citus_internal_acquire_citus_advisory_object_class_lock +step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) -step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database; -citus_internal_acquire_citus_advisory_object_class_lock +step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database; +acquire_citus_advisory_object_class_lock --------------------------------------------------------------------- (1 row) diff --git a/src/test/regress/expected/isolation_update_node.out b/src/test/regress/expected/isolation_update_node.out index 53d792e61..703fcc427 100644 --- a/src/test/regress/expected/isolation_update_node.out +++ b/src/test/regress/expected/isolation_update_node.out @@ -250,7 +250,7 @@ count step s1-commit-prepared: COMMIT prepared 'label'; -s2: WARNING: connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: +s2: WARNING: connection to the remote node postgres@non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: step s2-execute-prepared: EXECUTE foo; diff --git a/src/test/regress/expected/local_shard_execution.out b/src/test/regress/expected/local_shard_execution.out index f6e4db7ee..58293a2d6 100644 --- a/src/test/regress/expected/local_shard_execution.out +++ b/src/test/regress/expected/local_shard_execution.out @@ -3281,9 +3281,9 @@ SELECT pg_sleep(0.1); -- wait to make sure the config has changed before 
running SET citus.enable_local_execution TO false; -- force a connection to the dummy placements -- run queries that use dummy placements for local execution SELECT * FROM event_responses WHERE FALSE; -ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: +ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: WITH cte_1 AS (SELECT * FROM event_responses LIMIT 1) SELECT count(*) FROM cte_1; -ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: +ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: ALTER SYSTEM RESET citus.local_hostname; SELECT pg_reload_conf(); pg_reload_conf diff --git a/src/test/regress/expected/local_shard_execution_0.out b/src/test/regress/expected/local_shard_execution_0.out index 8c4fbfd74..948941aad 100644 --- a/src/test/regress/expected/local_shard_execution_0.out +++ b/src/test/regress/expected/local_shard_execution_0.out @@ -3281,9 +3281,9 @@ SELECT pg_sleep(0.1); -- wait to make sure the config has changed before running SET citus.enable_local_execution TO false; -- force a connection to the dummy placements -- run queries that use dummy placements for local execution SELECT * FROM event_responses WHERE FALSE; -ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: +ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: WITH cte_1 AS (SELECT * FROM event_responses LIMIT 1) SELECT count(*) FROM cte_1; -ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: +ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: ALTER SYSTEM RESET citus.local_hostname; SELECT pg_reload_conf(); pg_reload_conf diff --git a/src/test/regress/expected/metadata_sync_helpers.out b/src/test/regress/expected/metadata_sync_helpers.out index a41ac9d5f..13dd70939 100644 --- a/src/test/regress/expected/metadata_sync_helpers.out +++ b/src/test/regress/expected/metadata_sync_helpers.out @@ -12,7 +12,7 @@ RESET client_min_messages; SET search_path TO metadata_sync_helpers; CREATE TABLE test(col_1 int); -- not in a distributed transaction -SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); +SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction SELECT citus_internal_update_relation_colocation ('test'::regclass, 1); ERROR: This is an internal Citus function can only be used in a distributed transaction @@ -24,7 +24,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) - SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction ROLLBACK; -- in a distributed transaction and the application name is Citus, allowed. 
@@ -36,8 +36,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -61,7 +61,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); ERROR: must be owner of table test ROLLBACK; -- we do not own the relation @@ -87,8 +87,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -109,8 +109,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_rebalancer gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -125,7 +125,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=not a correct gpid'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction ROLLBACK; -- also faills if done by the rebalancer @@ -137,7 +137,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_rebalancer gpid=not a correct gpid'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction ROLLBACK; -- application_name with suffix is ok (e.g. 
pgbouncer might add this) @@ -149,8 +149,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001 - from 10.12.14.16:10370'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -165,7 +165,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid='; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction ROLLBACK; -- empty application_name @@ -177,7 +177,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to ''; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction ROLLBACK; -- application_name with incorrect prefix @@ -189,7 +189,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ERROR: This is an internal Citus function can only be used in a distributed transaction ROLLBACK; -- fails because there is no X distribution method @@ -201,7 +201,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); ERROR: Metadata syncing is only allowed for hash, reference and local tables:X ROLLBACK; -- fails because there is the column does not exist @@ -213,7 +213,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'non_existing_col', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'non_existing_col', 0, 's'); ERROR: column "non_existing_col" of relation "test_2" does not exist ROLLBACK; --- fails because we do not allow NULL parameters @@ -225,7 +225,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata (NULL, 'h', 'non_existing_col', 0, 's'); + SELECT citus_internal.add_partition_metadata (NULL, 'h', 'non_existing_col', 0, 's'); ERROR: relation cannot be NULL ROLLBACK; -- fails because colocationId cannot be negative @@ -237,7 +237,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', -1, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', -1, 's'); ERROR: Metadata syncing is only allowed for valid colocation id 
values. ROLLBACK; -- fails because there is no X replication model @@ -249,7 +249,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 'X'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 'X'); ERROR: Metadata syncing is only allowed for hash, reference and local tables:X ROLLBACK; -- the same table cannot be added twice, that is enforced by a primary key @@ -262,13 +262,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ERROR: duplicate key value violates unique constraint "pg_dist_partition_logical_relid_index" ROLLBACK; -- the same table cannot be added twice, that is enforced by a primary key even if distribution key changes @@ -281,13 +281,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_2', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_2', 0, 's'); ERROR: duplicate key value violates unique constraint "pg_dist_partition_logical_relid_index" ROLLBACK; -- hash distributed table cannot have NULL distribution key @@ -300,7 +300,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', NULL, 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', NULL, 0, 's'); ERROR: Distribution column cannot be NULL for relation "test_2" ROLLBACK; -- even if metadata_sync_helper_role is not owner of the table test @@ -332,8 +332,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -378,7 +378,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); ERROR: role "non_existing_user" does not exist ROLLBACK; \c - 
postgres - :worker_1_port @@ -409,7 +409,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', 'col_1', 0, 's'); ERROR: Reference or local tables cannot have distribution columns ROLLBACK; -- non-valid replication model @@ -421,7 +421,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'A'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'A'); ERROR: Metadata syncing is only allowed for known replication models. ROLLBACK; -- not-matching replication model for reference table @@ -433,7 +433,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'c'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'c'); ERROR: Local or references tables can only have 's' or 't' as the replication model. ROLLBACK; -- add entry for super user table @@ -448,8 +448,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('super_user_table'::regclass, 'h', 'col_1', 0, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('super_user_table'::regclass, 'h', 'col_1', 0, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -470,7 +470,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('super_user_table'::regclass, 1420000::bigint, 't'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: must be owner of table super_user_table ROLLBACK; -- the user is only allowed to add a shard for add a table which is in pg_dist_partition @@ -485,7 +485,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: The relation "test_2" does not have a valid entry in pg_dist_partition. 
ROLLBACK; -- ok, now add the table to the pg_dist_partition @@ -497,20 +497,20 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1 row) SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 250, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 250, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) - SELECT citus_internal_add_partition_metadata ('test_3'::regclass, 'h', 'col_1', 251, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_3'::regclass, 'h', 'col_1', 251, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 't'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 't'); + add_partition_metadata --------------------------------------------------------------------- (1 row) @@ -544,7 +544,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, -1, 't'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Invalid shard id: -1 ROLLBACK; -- invalid storage types are not allowed @@ -559,7 +559,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000, 'X'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Invalid shard storage type: X ROLLBACK; -- NULL shard ranges are not allowed for hash distributed tables @@ -574,7 +574,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000, 't'::"char", NULL, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Shards of has distributed table "test_2" cannot have NULL shard ranges ROLLBACK; -- non-integer shard ranges are not allowed @@ -589,7 +589,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", 'non-int'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, 
shardmaxvalue) FROM shard_data; ERROR: invalid input syntax for type integer: "non-int" ROLLBACK; -- shardMinValue should be smaller than shardMaxValue @@ -604,7 +604,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '-1610612737'::text, '-2147483648'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: shardMinValue=-1610612737 is greater than shardMaxValue=-2147483648 for table "test_2", which is not allowed ROLLBACK; -- we do not allow overlapping shards for the same table @@ -621,7 +621,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text), ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text), ('test_2'::regclass, 1420002::bigint, 't'::"char", '10'::text, '50'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Shard intervals overlap for table "test_2": 1420001 and 1420000 ROLLBACK; -- Now let's check valid pg_dist_object updates @@ -637,7 +637,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('non_existing_type', ARRAY['non_existing_user']::text[], ARRAY[]::text[], -1, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: unrecognized object type "non_existing_type" ROLLBACK; -- check the sanity of distributionArgumentIndex and colocationId @@ -652,7 +652,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], -100, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: distribution_argument_index must be between 0 and 100 ROLLBACK; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; @@ -666,7 +666,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], -1, -1, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, 
distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: colocationId must be a positive number ROLLBACK; -- check with non-existing object @@ -681,10 +681,10 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['non_existing_user']::text[], ARRAY[]::text[], -1, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: role "non_existing_user" does not exist ROLLBACK; --- since citus_internal_add_object_metadata is strict function returns NULL +-- since citus_internal.add_object_metadata is strict function returns NULL -- if any parameter is NULL BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); @@ -697,15 +697,15 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], 0, NULL::int, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; - citus_internal_add_object_metadata + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + add_object_metadata --------------------------------------------------------------------- (1 row) ROLLBACK; \c - postgres - :worker_1_port --- Show that citus_internal_add_object_metadata only works for object types +-- Show that citus_internal.add_object_metadata only works for object types -- which is known how to distribute BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); @@ -724,10 +724,10 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET ROLE metadata_sync_helper_role; WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('operator', ARRAY['===']::text[], ARRAY['int','int']::text[], -1, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: operator object can not be distributed by Citus ROLLBACK; --- Show that citus_internal_add_object_metadata checks the priviliges +-- Show that citus_internal.add_object_metadata checks the priviliges BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); assign_distributed_transaction_id @@ -744,7 +744,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET ROLE metadata_sync_helper_role; WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('function', ARRAY['distribution_test_function']::text[], ARRAY['integer']::text[], -1, 0, 
false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: must be owner of function distribution_test_function ROLLBACK; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; @@ -761,7 +761,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET ROLE metadata_sync_helper_role; WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('type', ARRAY['distributed_test_type']::text[], ARRAY[]::text[], -1, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ERROR: must be owner of type distributed_test_type ROLLBACK; -- we do not allow wrong partmethod @@ -780,7 +780,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text), ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Metadata syncing is only allowed for hash, reference and local tables: X ROLLBACK; -- we do not allow NULL shardMinMax values @@ -797,8 +797,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - citus_internal_add_shard_metadata + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + add_shard_metadata --------------------------------------------------------------------- (1 row) @@ -807,7 +807,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; UPDATE pg_dist_shard SET shardminvalue = NULL WHERE shardid = 1420000; WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Shards of has distributed table "test_2" cannot have NULL shard ranges ROLLBACK; \c - metadata_sync_helper_role - :worker_1_port @@ -830,8 +830,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; ('test_2'::regclass, 1420004::bigint, 't'::"char", '51'::text, '60'::text), ('test_2'::regclass, 1420005::bigint, 't'::"char", '61'::text, '70'::text), ('test_3'::regclass, 1420008::bigint, 't'::"char", '11'::text, '20'::text)) - SELECT citus_internal_add_shard_metadata(relationname, 
shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - citus_internal_add_shard_metadata + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + add_shard_metadata --------------------------------------------------------------------- @@ -871,8 +871,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; ('test_3'::regclass, 1420011::bigint, 't'::"char", '41'::text, '50'::text), ('test_3'::regclass, 1420012::bigint, 't'::"char", '51'::text, '60'::text), ('test_3'::regclass, 1420013::bigint, 't'::"char", '61'::text, '70'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - citus_internal_add_shard_metadata + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + add_shard_metadata --------------------------------------------------------------------- @@ -894,7 +894,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_ref'::regclass, 1420003::bigint, 't'::"char", '-1610612737'::text, NULL)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: Shards of reference or local table "test_ref" should have NULL shard ranges ROLLBACK; -- reference tables cannot have multiple shards @@ -910,7 +910,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_ref'::regclass, 1420006::bigint, 't'::"char", NULL, NULL), ('test_ref'::regclass, 1420007::bigint, 't'::"char", NULL, NULL)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ERROR: relation "test_ref" has already at least one shard, adding more is not allowed ROLLBACK; -- finally, add a shard for reference tables @@ -925,8 +925,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_ref'::regclass, 1420006::bigint, 't'::"char", NULL, NULL)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - citus_internal_add_shard_metadata + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + add_shard_metadata --------------------------------------------------------------------- (1 row) @@ -946,8 +946,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('super_user_table'::regclass, 1420007::bigint, 't'::"char", '11'::text, '20'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - citus_internal_add_shard_metadata + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + 
add_shard_metadata --------------------------------------------------------------------- (1 row) @@ -966,9 +966,9 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - WITH placement_data(shardid, shardstate, shardlength, groupid, placementid) AS - (VALUES (-10, 1, 0::bigint, 1::int, 1500000::bigint)) - SELECT citus_internal_add_placement_metadata(shardid, shardstate, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS + (VALUES (-10, 0::bigint, 1::int, 1500000::bigint)) + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; ERROR: could not find valid entry for shard xxxxx ROLLBACK; -- invalid placementid @@ -983,7 +983,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1420000, 0::bigint, 1::int, -10)) - SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; ERROR: Shard placement has invalid placement id (-10) for shard(1420000) ROLLBACK; -- non-existing shard @@ -998,7 +998,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1430100, 0::bigint, 1::int, 10)) - SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; ERROR: could not find valid entry for shard xxxxx ROLLBACK; -- non-existing node with non-existing node-id 123123123 @@ -1013,7 +1013,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES ( 1420000, 0::bigint, 123123123::int, 1500000)) - SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; ERROR: Node with group id 123123123 for shard placement xxxxx does not exist ROLLBACK; -- create a volatile function that returns the local node id @@ -1044,7 +1044,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1420000, 0::bigint, get_node_id(), 1500000), (1420000, 0::bigint, get_node_id(), 1500001)) - SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; ERROR: duplicate key value violates unique constraint "placement_shardid_groupid_unique_index" ROLLBACK; -- shard is not owned by us @@ -1059,7 +1059,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1420007, 0::bigint, get_node_id(), 1500000)) - SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; ERROR: must be owner of table 
super_user_table ROLLBACK; -- sucessfully add placements @@ -1085,8 +1085,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; (1420011, 0::bigint, get_node_id(), 1500009), (1420012, 0::bigint, get_node_id(), 1500010), (1420013, 0::bigint, get_node_id(), 1500011)) - SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - citus_internal_add_placement_metadata + SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + add_placement_metadata --------------------------------------------------------------------- @@ -1197,7 +1197,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(shardid) AS (VALUES (1420007)) - SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data; + SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data; ERROR: must be owner of table super_user_table ROLLBACK; -- the user cannot delete non-existing shards @@ -1212,7 +1212,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(shardid) AS (VALUES (1420100)) - SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data; + SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data; ERROR: Shard id does not exists: 1420100 ROLLBACK; -- sucessfully delete shards @@ -1239,8 +1239,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(shardid) AS (VALUES (1420000)) - SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data; - citus_internal_delete_shard_metadata + SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data; + delete_shard_metadata --------------------------------------------------------------------- (1 row) @@ -1343,13 +1343,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_5'::regclass, 'h', 'int_col', 500, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_5'::regclass, 'h', 'int_col', 500, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) - SELECT citus_internal_add_partition_metadata ('test_6'::regclass, 'h', 'text_col', 500, 's'); + SELECT citus_internal.add_partition_metadata ('test_6'::regclass, 'h', 'text_col', 500, 's'); ERROR: cannot colocate tables test_6 and test_5 ROLLBACK; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; @@ -1367,13 +1367,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_7'::regclass, 'h', 'text_col', 500, 's'); - citus_internal_add_partition_metadata + SELECT citus_internal.add_partition_metadata ('test_7'::regclass, 'h', 'text_col', 500, 's'); + add_partition_metadata --------------------------------------------------------------------- (1 row) - SELECT citus_internal_add_partition_metadata ('test_8'::regclass, 'h', 'text_col', 500, 's'); + SELECT citus_internal.add_partition_metadata ('test_8'::regclass, 'h', 'text_col', 500, 's'); ERROR: cannot colocate tables test_8 and test_7 ROLLBACK; -- we don't need the table/schema anymore diff --git a/src/test/regress/expected/multi_citus_tools.out b/src/test/regress/expected/multi_citus_tools.out index eef7a98ca..b47763686 100644 --- 
a/src/test/regress/expected/multi_citus_tools.out +++ b/src/test/regress/expected/multi_citus_tools.out @@ -587,7 +587,7 @@ SET client_min_messages TO DEBUG; -- verify that we can create connections only with users with login privileges. SET ROLE role_without_login; SELECT citus_check_connection_to_node('localhost', :worker_1_port); -WARNING: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "role_without_login" is not permitted to log in +WARNING: connection to the remote node role_without_login@localhost:xxxxx failed with the following error: FATAL: role "role_without_login" is not permitted to log in citus_check_connection_to_node --------------------------------------------------------------------- f diff --git a/src/test/regress/expected/multi_copy.out b/src/test/regress/expected/multi_copy.out index abd58eb1d..ff4cbdd2c 100644 --- a/src/test/regress/expected/multi_copy.out +++ b/src/test/regress/expected/multi_copy.out @@ -730,7 +730,7 @@ ALTER USER test_user WITH nologin; \c - test_user - :master_port -- reissue copy, and it should fail COPY numbers_hash FROM STDIN WITH (FORMAT 'csv'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in -- verify shards in the none of the workers as marked invalid SELECT shardid, shardstate, nodename, nodeport FROM pg_dist_shard_placement join pg_dist_shard using(shardid) @@ -749,7 +749,7 @@ SELECT shardid, shardstate, nodename, nodeport -- try to insert into a reference table copy should fail COPY numbers_reference FROM STDIN WITH (FORMAT 'csv'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in -- verify shards for reference table are still valid SELECT shardid, shardstate, nodename, nodeport FROM pg_dist_shard_placement join pg_dist_shard using(shardid) @@ -765,7 +765,7 @@ SELECT shardid, shardstate, nodename, nodeport -- since it can not insert into either copies of a shard. shards are expected to -- stay valid since the operation is rolled back. 
COPY numbers_hash_other FROM STDIN WITH (FORMAT 'csv'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in -- verify shards for numbers_hash_other are still valid -- since copy has failed altogether SELECT shardid, shardstate, nodename, nodeport diff --git a/src/test/regress/expected/multi_extension.out b/src/test/regress/expected/multi_extension.out index 60e283800..dcd325367 100644 --- a/src/test/regress/expected/multi_extension.out +++ b/src/test/regress/expected/multi_extension.out @@ -1420,15 +1420,27 @@ SELECT * FROM multi_extension.print_extension_changes(); -- Snapshot of state at 12.2-1 ALTER EXTENSION citus UPDATE TO '12.2-1'; SELECT * FROM multi_extension.print_extension_changes(); - previous_object | current_object + previous_object | current_object --------------------------------------------------------------------- - | function citus_internal.commit_management_command_2pc() void - | function citus_internal.execute_command_on_remote_nodes_as_user(text,text) void - | function citus_internal.mark_object_distributed(oid,text,oid) void - | function citus_internal.start_management_transaction(xid8) void - | function citus_internal_acquire_citus_advisory_object_class_lock(integer,cstring) void - | function citus_internal_database_command(text) void -(6 rows) + | function citus_internal.acquire_citus_advisory_object_class_lock(integer,cstring) void + | function citus_internal.add_colocation_metadata(integer,integer,integer,regtype,oid) void + | function citus_internal.add_object_metadata(text,text[],text[],integer,integer,boolean) void + | function citus_internal.add_partition_metadata(regclass,"char",text,integer,"char") void + | function citus_internal.add_placement_metadata(bigint,bigint,integer,bigint) void + | function citus_internal.add_shard_metadata(regclass,bigint,"char",text,text) void + | function citus_internal.add_tenant_schema(oid,integer) void + | function citus_internal.adjust_local_clock_to_remote(cluster_clock) void + | function citus_internal.commit_management_command_2pc() void + | function citus_internal.database_command(text) void + | function citus_internal.delete_colocation_metadata(integer) void + | function citus_internal.delete_partition_metadata(regclass) void + | function citus_internal.delete_placement_metadata(bigint) void + | function citus_internal.delete_shard_metadata(bigint) void + | function citus_internal.delete_tenant_schema(oid) void + | function citus_internal.execute_command_on_remote_nodes_as_user(text,text) void + | function citus_internal.mark_object_distributed(oid,text,oid,text) void + | function citus_internal.start_management_transaction(xid8) void +(18 rows) DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff; -- show running version diff --git a/src/test/regress/expected/multi_fix_partition_shard_index_names.out b/src/test/regress/expected/multi_fix_partition_shard_index_names.out index e6795317c..975a49351 100644 --- a/src/test/regress/expected/multi_fix_partition_shard_index_names.out +++ b/src/test/regress/expected/multi_fix_partition_shard_index_names.out @@ -658,9 +658,9 @@ NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_ DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ 
COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; +NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; +NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'CREATE TABLE fix_idx_names.p2 (dist_col integer NOT NULL, another_col integer, partition_col timestamp without time zone NOT NULL, name text) USING columnar') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx @@ -692,25 +692,25 @@ NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing SELECT citus_internal_add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') +NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing SELECT citus_internal_add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') +NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; +NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT 
citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; +NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (xxxxxx, xxxxxx, xxxxxx, xxxxxx)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; +NOTICE: issuing WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (xxxxxx, xxxxxx, xxxxxx, xxxxxx)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (xxxxxx, xxxxxx, xxxxxx, xxxxxx)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; +NOTICE: issuing WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (xxxxxx, xxxxxx, xxxxxx, xxxxxx)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['fix_idx_names', 'p2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; +NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['fix_idx_names', 'p2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['fix_idx_names', 'p2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; +NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS 
(VALUES ('table', ARRAY['fix_idx_names', 'p2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx diff --git a/src/test/regress/expected/multi_metadata_sync.out b/src/test/regress/expected/multi_metadata_sync.out index cb8f0c0e1..d15e7516c 100644 --- a/src/test/regress/expected/multi_metadata_sync.out +++ b/src/test/regress/expected/multi_metadata_sync.out @@ -101,9 +101,9 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; (33 rows) -- Create a test table with constraints and SERIAL and default from user defined sequence @@ -162,8 +162,8 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') - SELECT citus_internal_add_partition_metadata 
('public.single_shard_tbl'::regclass, 'n', NULL, 3, 's') + SELECT citus_internal.add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('public.single_shard_tbl'::regclass, 'n', NULL, 3, 's') SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency('public.single_shard_tbl'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition @@ -183,19 +183,19 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (3, 1, 1, 0, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT 
citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'single_shard_tbl']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310008, 0, 2, 100008)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.single_shard_tbl'::regclass, 1310008, 't'::"char", NULL, NULL)) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (3, 1, 1, 0, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, 
coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'single_shard_tbl']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310008, 0, 2, 100008)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", 
'-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.single_shard_tbl'::regclass, 1310008, 't'::"char", NULL, NULL)) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (61 rows) -- Drop single shard table @@ -230,7 +230,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('public.mx_test_table_col_3_seq'::regclass,'public.mx_test_table'::regclass,'col_3') @@ -248,15 +248,15 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, 
objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES 
('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (52 rows) -- Show that schema changes are included in the activate node snapshot @@ -291,7 +291,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT 
pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3') @@ -309,16 +309,16 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], 
ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + 
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (54 rows) -- Show that append distributed tables are not included in the activate node snapshot @@ -359,7 +359,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3') @@ -377,16 +377,16 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = 
TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT 
citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, 
colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (54 rows) -- Show that range distributed tables are not included in the activate node snapshot @@ -420,7 +420,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3') @@ -438,16 +438,16 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM 
colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', 
'-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], 
ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (54 rows) -- Test start_metadata_sync_to_node and citus_activate_node UDFs @@ -1996,12 +1996,12 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_test_schema_1.mx_table_1'::regclass, 'h', 'col1', 7, 's') - SELECT citus_internal_add_partition_metadata ('mx_test_schema_2.mx_table_2'::regclass, 'h', 'col1', 7, 's') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') - SELECT citus_internal_add_partition_metadata ('public.dist_table_1'::regclass, 'h', 'a', 10010, 's') - SELECT citus_internal_add_partition_metadata ('public.mx_ref'::regclass, 'n', NULL, 10009, 't') - SELECT citus_internal_add_partition_metadata ('public.test_table'::regclass, 'h', 'id', 10010, 's') + SELECT citus_internal.add_partition_metadata ('mx_test_schema_1.mx_table_1'::regclass, 'h', 'col1', 7, 's') + SELECT citus_internal.add_partition_metadata ('mx_test_schema_2.mx_table_2'::regclass, 'h', 'col1', 7, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('public.dist_table_1'::regclass, 'h', 'a', 10010, 's') + SELECT citus_internal.add_partition_metadata ('public.mx_ref'::regclass, 'n', NULL, 10009, 't') + SELECT citus_internal.add_partition_metadata ('public.test_table'::regclass, 'h', 'id', 10010, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_1.mx_table_1'); SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_2.mx_table_2'); SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); @@ -2031,37 +2031,37 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE 
pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10009, 1, -1, 0, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10010, 4, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT 
citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_0']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_1', 'mx_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_2', 'mx_table_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'dist_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_ref']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, 
force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 5, 100001), (1310002, 0, 1, 100002), (1310003, 0, 5, 100003), (1310004, 0, 1, 100004), (1310005, 0, 5, 100005), (1310006, 0, 1, 100006), (1310007, 0, 5, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310022, 0, 1, 100022), (1310023, 0, 5, 100023), (1310024, 0, 1, 100024), (1310025, 0, 5, 100025), (1310026, 0, 1, 100026)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310027, 0, 1, 100027), (1310028, 0, 5, 100028), (1310029, 0, 1, 100029), (1310030, 0, 5, 100030), (1310031, 0, 1, 100031)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310075, 0, 0, 100077), (1310075, 0, 1, 100078), (1310075, 0, 5, 100079)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310076, 0, 1, 100080), (1310077, 0, 5, 100081), (1310078, 0, 1, 100082), (1310079, 0, 5, 100083)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310085, 0, 1, 100091), (1310086, 0, 5, 100092), (1310087, 0, 1, 100093), (1310088, 0, 5, 100094)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_1.mx_table_1'::regclass, 1310022, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_1.mx_table_1'::regclass, 1310023, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_1.mx_table_1'::regclass, 1310024, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_1.mx_table_1'::regclass, 1310025, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_1.mx_table_1'::regclass, 1310026, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_2.mx_table_2'::regclass, 1310027, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_2.mx_table_2'::regclass, 1310028, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_2.mx_table_2'::regclass, 1310029, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_2.mx_table_2'::regclass, 1310030, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_2.mx_table_2'::regclass, 1310031, 't'::"char", '1288490188', '2147483647')) SELECT 
citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.dist_table_1'::regclass, 1310076, 't'::"char", '-2147483648', '-1073741825'), ('public.dist_table_1'::regclass, 1310077, 't'::"char", '-1073741824', '-1'), ('public.dist_table_1'::regclass, 1310078, 't'::"char", '0', '1073741823'), ('public.dist_table_1'::regclass, 1310079, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_ref'::regclass, 1310075, 't'::"char", NULL, NULL)) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.test_table'::regclass, 1310085, 't'::"char", '-2147483648', '-1073741825'), ('public.test_table'::regclass, 1310086, 't'::"char", '-1073741824', '-1'), ('public.test_table'::regclass, 1310087, 't'::"char", '0', '1073741823'), ('public.test_table'::regclass, 1310088, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10009, 1, -1, 0, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10010, 4, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH 
distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_0']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT 
citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_1', 'mx_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_2', 'mx_table_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'dist_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_ref']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 5, 100001), (1310002, 0, 1, 100002), (1310003, 0, 5, 100003), (1310004, 0, 1, 100004), (1310005, 0, 5, 100005), (1310006, 0, 1, 100006), (1310007, 0, 5, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310022, 0, 1, 100022), (1310023, 0, 5, 100023), (1310024, 0, 1, 100024), (1310025, 0, 5, 100025), (1310026, 0, 1, 100026)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, 
placementid) AS (VALUES (1310027, 0, 1, 100027), (1310028, 0, 5, 100028), (1310029, 0, 1, 100029), (1310030, 0, 5, 100030), (1310031, 0, 1, 100031)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310075, 0, 0, 100077), (1310075, 0, 1, 100078), (1310075, 0, 5, 100079)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310076, 0, 1, 100080), (1310077, 0, 5, 100081), (1310078, 0, 1, 100082), (1310079, 0, 5, 100083)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310085, 0, 1, 100091), (1310086, 0, 5, 100092), (1310087, 0, 1, 100093), (1310088, 0, 5, 100094)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_1.mx_table_1'::regclass, 1310022, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_1.mx_table_1'::regclass, 1310023, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_1.mx_table_1'::regclass, 1310024, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_1.mx_table_1'::regclass, 1310025, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_1.mx_table_1'::regclass, 1310026, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_2.mx_table_2'::regclass, 1310027, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_2.mx_table_2'::regclass, 1310028, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_2.mx_table_2'::regclass, 1310029, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_2.mx_table_2'::regclass, 1310030, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_2.mx_table_2'::regclass, 1310031, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES 
('public.dist_table_1'::regclass, 1310076, 't'::"char", '-2147483648', '-1073741825'), ('public.dist_table_1'::regclass, 1310077, 't'::"char", '-1073741824', '-1'), ('public.dist_table_1'::regclass, 1310078, 't'::"char", '0', '1073741823'), ('public.dist_table_1'::regclass, 1310079, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_ref'::regclass, 1310075, 't'::"char", NULL, NULL)) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.test_table'::regclass, 1310085, 't'::"char", '-2147483648', '-1073741825'), ('public.test_table'::regclass, 1310086, 't'::"char", '-1073741824', '-1'), ('public.test_table'::regclass, 1310087, 't'::"char", '0', '1073741823'), ('public.test_table'::regclass, 1310088, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (118 rows) -- shouldn't work since test_table is MX diff --git a/src/test/regress/expected/multi_metadata_sync_0.out b/src/test/regress/expected/multi_metadata_sync_0.out index c81462e6f..bc1775ada 100644 --- a/src/test/regress/expected/multi_metadata_sync_0.out +++ b/src/test/regress/expected/multi_metadata_sync_0.out @@ -101,9 +101,9 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, 
distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; (33 rows) -- Create a test table with constraints and SERIAL and default from user defined sequence @@ -162,8 +162,8 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') - SELECT citus_internal_add_partition_metadata ('public.single_shard_tbl'::regclass, 'n', NULL, 3, 's') + SELECT citus_internal.add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('public.single_shard_tbl'::regclass, 'n', NULL, 3, 's') SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency('public.single_shard_tbl'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition @@ -183,19 +183,19 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (3, 1, 1, 0, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, 
force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'single_shard_tbl']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310008, 0, 2, 100008)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.single_shard_tbl'::regclass, 1310008, 't'::"char", NULL, NULL)) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM 
shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (3, 1, 1, 0, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'single_shard_tbl']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, 
objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310008, 0, 2, 100008)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.single_shard_tbl'::regclass, 1310008, 't'::"char", NULL, NULL)) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (61 rows) -- Drop single shard table @@ -230,7 +230,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('public.mx_test_table_col_3_seq'::regclass,'public.mx_test_table'::regclass,'col_3') @@ -248,15 +248,15 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES 
('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS 
(VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', 
'1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (52 rows) -- Show that schema changes are included in the activate node snapshot @@ -291,7 +291,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3') @@ -309,16 +309,16 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', 
ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, 
distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (54 rows) -- Show that append distributed tables are not included in the activate node snapshot @@ -359,7 +359,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT 
alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3') @@ -377,16 +377,16 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT 
citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH 
distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (54 rows) -- Show that range distributed tables are not included in the activate node snapshot @@ -420,7 +420,7 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) 
FROM pg_dist_partition SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3') @@ -438,16 +438,16 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, 
force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (2, 8, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', 
ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (54 rows) -- Test start_metadata_sync_to_node and citus_activate_node UDFs @@ -1996,12 +1996,12 @@ SELECT unnest(activate_node_snapshot()) order by 1; RESET ROLE RESET ROLE SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''') - SELECT citus_internal_add_partition_metadata ('mx_test_schema_1.mx_table_1'::regclass, 'h', 'col1', 7, 's') - SELECT citus_internal_add_partition_metadata ('mx_test_schema_2.mx_table_2'::regclass, 'h', 'col1', 7, 's') - SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') - SELECT citus_internal_add_partition_metadata ('public.dist_table_1'::regclass, 'h', 'a', 10010, 's') - SELECT citus_internal_add_partition_metadata ('public.mx_ref'::regclass, 'n', NULL, 10009, 't') - SELECT citus_internal_add_partition_metadata ('public.test_table'::regclass, 'h', 'id', 10010, 's') + SELECT citus_internal.add_partition_metadata ('mx_test_schema_1.mx_table_1'::regclass, 'h', 'col1', 7, 's') + SELECT citus_internal.add_partition_metadata ('mx_test_schema_2.mx_table_2'::regclass, 'h', 'col1', 7, 's') + SELECT 
citus_internal.add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's') + SELECT citus_internal.add_partition_metadata ('public.dist_table_1'::regclass, 'h', 'a', 10010, 's') + SELECT citus_internal.add_partition_metadata ('public.mx_ref'::regclass, 'n', NULL, 10009, 't') + SELECT citus_internal.add_partition_metadata ('public.test_table'::regclass, 'h', 'id', 10010, 's') SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_1.mx_table_1'); SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_2.mx_table_2'); SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table'); @@ -2031,37 +2031,37 @@ SELECT unnest(activate_node_snapshot()) order by 1; UPDATE pg_dist_node SET hasmetadata = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET isactive = TRUE WHERE nodeid = 2 UPDATE pg_dist_node SET metadatasynced = TRUE WHERE nodeid = 2 - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10009, 1, -1, 0, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10010, 4, 1, 'integer'::regtype, NULL, NULL)) SELECT pg_catalog.citus_internal_add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, 
force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_0']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_1', 'mx_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_2', 'mx_table_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT 
citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'dist_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_ref']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 5, 100001), (1310002, 0, 1, 100002), (1310003, 0, 5, 100003), (1310004, 0, 1, 100004), (1310005, 0, 5, 100005), (1310006, 0, 1, 100006), (1310007, 0, 5, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310022, 0, 1, 100022), (1310023, 0, 5, 100023), (1310024, 0, 1, 100024), (1310025, 0, 5, 100025), (1310026, 0, 1, 100026)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310027, 0, 1, 100027), (1310028, 0, 5, 100028), (1310029, 0, 1, 100029), (1310030, 0, 5, 100030), (1310031, 0, 1, 100031)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310075, 0, 0, 100077), (1310075, 0, 1, 100078), (1310075, 0, 5, 100079)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310076, 0, 1, 100080), (1310077, 0, 5, 100081), (1310078, 0, 1, 100082), (1310079, 0, 5, 100083)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310085, 0, 1, 100091), (1310086, 0, 5, 100092), (1310087, 0, 1, 100093), (1310088, 0, 5, 100094)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_1.mx_table_1'::regclass, 1310022, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_1.mx_table_1'::regclass, 1310023, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_1.mx_table_1'::regclass, 1310024, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_1.mx_table_1'::regclass, 1310025, 
't'::"char", '429496729', '1288490187'), ('mx_test_schema_1.mx_table_1'::regclass, 1310026, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_2.mx_table_2'::regclass, 1310027, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_2.mx_table_2'::regclass, 1310028, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_2.mx_table_2'::regclass, 1310029, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_2.mx_table_2'::regclass, 1310030, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_2.mx_table_2'::regclass, 1310031, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.dist_table_1'::regclass, 1310076, 't'::"char", '-2147483648', '-1073741825'), ('public.dist_table_1'::regclass, 1310077, 't'::"char", '-1073741824', '-1'), ('public.dist_table_1'::regclass, 1310078, 't'::"char", '0', '1073741823'), ('public.dist_table_1'::regclass, 1310079, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_ref'::regclass, 1310075, 't'::"char", NULL, NULL)) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; - WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.test_table'::regclass, 1310085, 't'::"char", '-2147483648', '-1073741825'), ('public.test_table'::regclass, 1310086, 't'::"char", '-1073741824', '-1'), ('public.test_table'::regclass, 1310087, 't'::"char", '0', '1073741823'), ('public.test_table'::regclass, 1310088, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10009, 1, -1, 0, NULL, NULL)) SELECT 
citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH colocation_group_data (colocationid, shardcount, replicationfactor, distributioncolumntype, distributioncolumncollationname, distributioncolumncollationschema) AS (VALUES (10010, 4, 1, 'integer'::regtype, NULL, NULL)) SELECT citus_internal.add_colocation_metadata(colocationid, shardcount, replicationfactor, distributioncolumntype, coalesce(c.oid, 0)) FROM colocation_group_data d LEFT JOIN pg_collation c ON (d.distributioncolumncollationname = c.collname AND d.distributioncolumncollationschema::regnamespace = c.collnamespace) + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('database', ARRAY['regression']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['postgres']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_test_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['mx_testing_schema_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', 
ARRAY['mx_testing_schema', 'mx_test_table_col_3_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_0']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'mx_test_sequence_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('sequence', ARRAY['public', 'user_defined_seq']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_1', 'mx_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_test_schema_2', 'mx_table_2']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'dist_table_1']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_ref']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, 
objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 5, 100001), (1310002, 0, 1, 100002), (1310003, 0, 5, 100003), (1310004, 0, 1, 100004), (1310005, 0, 5, 100005), (1310006, 0, 1, 100006), (1310007, 0, 5, 100007)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310022, 0, 1, 100022), (1310023, 0, 5, 100023), (1310024, 0, 1, 100024), (1310025, 0, 5, 100025), (1310026, 0, 1, 100026)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310027, 0, 1, 100027), (1310028, 0, 5, 100028), (1310029, 0, 1, 100029), (1310030, 0, 5, 100030), (1310031, 0, 1, 100031)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310075, 0, 0, 100077), (1310075, 0, 1, 100078), (1310075, 0, 5, 100079)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310076, 0, 1, 100080), (1310077, 0, 5, 100081), (1310078, 0, 1, 100082), (1310079, 0, 5, 100083)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310085, 0, 1, 100091), (1310086, 0, 5, 100092), (1310087, 0, 1, 100093), (1310088, 0, 5, 100094)) SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_1.mx_table_1'::regclass, 1310022, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_1.mx_table_1'::regclass, 1310023, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_1.mx_table_1'::regclass, 1310024, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_1.mx_table_1'::regclass, 1310025, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_1.mx_table_1'::regclass, 1310026, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_test_schema_2.mx_table_2'::regclass, 1310027, 't'::"char", '-2147483648', '-1288490190'), ('mx_test_schema_2.mx_table_2'::regclass, 1310028, 't'::"char", '-1288490189', '-429496731'), ('mx_test_schema_2.mx_table_2'::regclass, 1310029, 't'::"char", '-429496730', '429496728'), ('mx_test_schema_2.mx_table_2'::regclass, 1310030, 't'::"char", '429496729', '1288490187'), ('mx_test_schema_2.mx_table_2'::regclass, 1310031, 't'::"char", '1288490188', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), 
('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.dist_table_1'::regclass, 1310076, 't'::"char", '-2147483648', '-1073741825'), ('public.dist_table_1'::regclass, 1310077, 't'::"char", '-1073741824', '-1'), ('public.dist_table_1'::regclass, 1310078, 't'::"char", '0', '1073741823'), ('public.dist_table_1'::regclass, 1310079, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_ref'::regclass, 1310075, 't'::"char", NULL, NULL)) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.test_table'::regclass, 1310085, 't'::"char", '-2147483648', '-1073741825'), ('public.test_table'::regclass, 1310086, 't'::"char", '-1073741824', '-1'), ('public.test_table'::regclass, 1310087, 't'::"char", '0', '1073741823'), ('public.test_table'::regclass, 1310088, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; (118 rows) -- shouldn't work since test_table is MX diff --git a/src/test/regress/expected/multi_modifying_xacts.out b/src/test/regress/expected/multi_modifying_xacts.out index 7c3344ee5..849a28c73 100644 --- a/src/test/regress/expected/multi_modifying_xacts.out +++ b/src/test/regress/expected/multi_modifying_xacts.out @@ -1208,15 +1208,15 @@ set citus.enable_alter_role_propagation=true; SET search_path TO multi_modifying_xacts; -- should fail since the worker doesn't have test_user anymore INSERT INTO reference_failure_test VALUES (1, '1'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist -- the same as the above, but wrapped within a transaction BEGIN; INSERT INTO reference_failure_test VALUES (1, '1'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist COMMIT; BEGIN; COPY reference_failure_test FROM STDIN WITH (FORMAT 'csv'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +ERROR: 
connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist COMMIT; -- show that no data go through the table and shard states are good SET client_min_messages to 'ERROR'; @@ -1242,7 +1242,7 @@ ORDER BY s.logicalrelid, sp.shardstate; -- any failure rollbacks the transaction BEGIN; COPY numbers_hash_failure_test FROM STDIN WITH (FORMAT 'csv'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist ABORT; -- none of placements are invalid after abort SELECT shardid, shardstate, nodename, nodeport @@ -1263,8 +1263,8 @@ ORDER BY shardid, nodeport; -- verify nothing is inserted SELECT count(*) FROM numbers_hash_failure_test; -WARNING: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist -WARNING: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +WARNING: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +WARNING: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist count --------------------------------------------------------------------- 0 @@ -1290,7 +1290,7 @@ ORDER BY shardid, nodeport; -- all failures roll back the transaction BEGIN; COPY numbers_hash_failure_test FROM STDIN WITH (FORMAT 'csv'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist COMMIT; -- expect none of the placements to be market invalid after commit SELECT shardid, shardstate, nodename, nodeport @@ -1311,8 +1311,8 @@ ORDER BY shardid, nodeport; -- verify no data is inserted SELECT count(*) FROM numbers_hash_failure_test; -WARNING: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist -WARNING: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +WARNING: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +WARNING: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist count --------------------------------------------------------------------- 0 @@ -1328,7 +1328,7 @@ set citus.enable_alter_role_propagation=true; SET search_path TO multi_modifying_xacts; -- fails on all shard placements INSERT INTO numbers_hash_failure_test VALUES (2,2); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist +ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" does not exist -- connect back to the master with the proper user to continue the tests \c - :default_user - :master_port SET search_path TO multi_modifying_xacts; diff --git a/src/test/regress/expected/multi_multiuser_auth.out b/src/test/regress/expected/multi_multiuser_auth.out index 7a72eeba1..6b0e85b67 100644 
--- a/src/test/regress/expected/multi_multiuser_auth.out +++ b/src/test/regress/expected/multi_multiuser_auth.out @@ -72,7 +72,7 @@ GRANT ALL ON TABLE lineitem, orders, lineitem, customer, nation, part, supplier \c :alice_conninfo -- router query (should break because of bad password) INSERT INTO customer VALUES (12345, 'name', NULL, 5, 'phone', 123.45, 'segment', 'comment'); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: password authentication failed for user "alice" +ERROR: connection to the remote node alice@localhost:xxxxx failed with the following error: FATAL: password authentication failed for user "alice" -- fix alice's worker1 password ... UPDATE pg_dist_authinfo SET authinfo = ('password=' || :'alice_worker_1_pw') diff --git a/src/test/regress/expected/multi_mx_router_planner.out b/src/test/regress/expected/multi_mx_router_planner.out index e7855a898..5ac6093cb 100644 --- a/src/test/regress/expected/multi_mx_router_planner.out +++ b/src/test/regress/expected/multi_mx_router_planner.out @@ -370,7 +370,7 @@ WITH RECURSIVE hierarchy as ( h.company_id = ce.company_id)) SELECT * FROM hierarchy WHERE LEVEL <= 2; DEBUG: Router planner cannot handle multi-shard select queries -ERROR: recursive CTEs are not supported in distributed queries +ERROR: recursive CTEs are only supported when they contain a filter on the distribution column -- logically wrong query, query involves different shards -- from the same table, but still router plannable due to -- shard being placed on the same worker. @@ -386,7 +386,7 @@ WITH RECURSIVE hierarchy as ( ce.company_id = 2)) SELECT * FROM hierarchy WHERE LEVEL <= 2; DEBUG: router planner does not support queries that reference non-colocated distributed tables -ERROR: recursive CTEs are not supported in distributed queries +ERROR: recursive CTEs are only supported when they contain a filter on the distribution column -- grouping sets are supported on single shard SELECT id, substring(title, 2, 1) AS subtitle, count(*) diff --git a/src/test/regress/expected/multi_router_planner.out b/src/test/regress/expected/multi_router_planner.out index c6d46ccc9..fee821a7d 100644 --- a/src/test/regress/expected/multi_router_planner.out +++ b/src/test/regress/expected/multi_router_planner.out @@ -436,7 +436,7 @@ WITH RECURSIVE hierarchy as MATERIALIZED ( h.company_id = ce.company_id)) SELECT * FROM hierarchy WHERE LEVEL <= 2; DEBUG: Router planner cannot handle multi-shard select queries -ERROR: recursive CTEs are not supported in distributed queries +ERROR: recursive CTEs are only supported when they contain a filter on the distribution column -- logically wrong query, query involves different shards -- from the same table WITH RECURSIVE hierarchy as MATERIALIZED ( @@ -451,7 +451,7 @@ WITH RECURSIVE hierarchy as MATERIALIZED ( ce.company_id = 2)) SELECT * FROM hierarchy WHERE LEVEL <= 2; DEBUG: router planner does not support queries that reference non-colocated distributed tables -ERROR: recursive CTEs are not supported in distributed queries +ERROR: recursive CTEs are only supported when they contain a filter on the distribution column -- Test router modifying CTEs WITH new_article AS MATERIALIZED( INSERT INTO articles_hash VALUES (1, 1, 'arsenous', 9) RETURNING * @@ -2703,10 +2703,10 @@ SET search_path TO multi_router_planner; -- still, we never mark placements inactive. 
Instead, fail the transaction BEGIN; INSERT INTO failure_test VALUES (1, 1); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "router_user" does not exist +ERROR: connection to the remote node router_user@localhost:xxxxx failed with the following error: FATAL: role "router_user" does not exist ROLLBACK; INSERT INTO failure_test VALUES (2, 1); -ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "router_user" does not exist +ERROR: connection to the remote node router_user@localhost:xxxxx failed with the following error: FATAL: role "router_user" does not exist SELECT shardid, shardstate, nodename, nodeport FROM pg_dist_shard_placement WHERE shardid IN ( SELECT shardid FROM pg_dist_shard diff --git a/src/test/regress/expected/node_conninfo_reload.out b/src/test/regress/expected/node_conninfo_reload.out index d2e33d950..785e3e1b1 100644 --- a/src/test/regress/expected/node_conninfo_reload.out +++ b/src/test/regress/expected/node_conninfo_reload.out @@ -47,7 +47,7 @@ show citus.node_conninfo; -- Should give a connection error because of bad sslmode select count(*) from test where a = 0; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; select pg_reload_conf(); @@ -118,7 +118,7 @@ select count(*) from test where a = 0; COMMIT; -- Should fail now with connection error, when transaction is finished select count(*) from test where a = 0; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; select pg_reload_conf(); @@ -181,7 +181,7 @@ COMMIT; -- Should fail now, when transaction is finished SET client_min_messages TO ERROR; select count(*) from test where a = 0; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" RESET client_min_messages; -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; @@ -235,11 +235,11 @@ show citus.node_conninfo; -- Should fail since a different shard is accessed and thus a new connection -- will to be created. 
select count(*) from test where a = 0; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" COMMIT; -- Should still fail now, when transaction is finished select count(*) from test where a = 0; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; select pg_reload_conf(); @@ -301,7 +301,7 @@ COMMIT; -- Should fail now, when transaction is finished SET client_min_messages TO ERROR; select count(*) from test; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" RESET client_min_messages; -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; @@ -359,7 +359,7 @@ ROLLBACK; -- Should fail now, when transaction is finished SET client_min_messages TO ERROR; select count(*) from test; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" RESET client_min_messages; -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; @@ -497,7 +497,7 @@ ALTER TABLE test ADD COLUMN c INT; COMMIT; -- Should fail now, when transaction is finished ALTER TABLE test ADD COLUMN d INT; -ERROR: connection to the remote node localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" +ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: invalid sslmode value: "doesnotexist" -- Reset it again ALTER SYSTEM RESET citus.node_conninfo; select pg_reload_conf(); diff --git a/src/test/regress/expected/other_databases.out b/src/test/regress/expected/other_databases.out index 1b81af3b7..a15c4bb50 100644 --- a/src/test/regress/expected/other_databases.out +++ b/src/test/regress/expected/other_databases.out @@ -68,11 +68,17 @@ CREATE USER nonsuperuser CREATEROLE; GRANT ALL ON SCHEMA citus_internal TO nonsuperuser; SET ROLE nonsuperuser; SELECT citus_internal.execute_command_on_remote_nodes_as_user($$SELECT 'dangerous query'$$, 'postgres'); -ERROR: operation is not allowed -HINT: Run the command with a superuser. +ERROR: permission denied for function execute_command_on_remote_nodes_as_user \c other_db1 +SET citus.local_hostname TO '127.0.0.1'; +SET ROLE nonsuperuser; +-- Make sure that we don't try to access pg_dist_node. +-- Otherwise, we would get the following error: +-- ERROR: cache lookup failed for pg_dist_node, called too early? 
 CREATE USER other_db_user9;
 RESET ROLE;
+RESET citus.local_hostname;
+RESET ROLE;
 \c regression
 SELECT usename FROM pg_user WHERE usename LIKE 'other\_db\_user%' ORDER BY 1;
 usename
diff --git a/src/test/regress/expected/pg14.out b/src/test/regress/expected/pg14.out
index badd23240..bbfd5dafa 100644
--- a/src/test/regress/expected/pg14.out
+++ b/src/test/regress/expected/pg14.out
@@ -1142,7 +1142,7 @@ WITH RECURSIVE search_graph(f, t, label) AS (
 WHERE g.f = sg.t and g.f = 1
 ) SEARCH DEPTH FIRST BY f, t SET seq
 SELECT * FROM search_graph ORDER BY seq;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 WITH RECURSIVE search_graph(f, t, label) AS (
 SELECT * FROM graph0 g WHERE f = 1
 UNION ALL
@@ -1151,7 +1151,7 @@ WITH RECURSIVE search_graph(f, t, label) AS (
 WHERE g.f = sg.t and g.f = 1
 ) SEARCH DEPTH FIRST BY f, t SET seq
 DELETE FROM graph0 WHERE t IN (SELECT t FROM search_graph ORDER BY seq);
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 CREATE TABLE graph1(f INT, t INT, label TEXT);
 SELECT create_reference_table('graph1');
 create_reference_table
@@ -1170,7 +1170,7 @@ WITH RECURSIVE search_graph(f, t, label) AS (
 WHERE g.f = sg.t and g.f = 1
 ) SEARCH DEPTH FIRST BY f, t SET seq
 SELECT * FROM search_graph ORDER BY seq;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 WITH RECURSIVE search_graph(f, t, label) AS (
 SELECT * FROM graph1 g WHERE f = 1
 UNION ALL
@@ -1179,7 +1179,7 @@ WITH RECURSIVE search_graph(f, t, label) AS (
 WHERE g.f = sg.t and g.f = 1
 ) SEARCH DEPTH FIRST BY f, t SET seq
 DELETE FROM graph1 WHERE t IN (SELECT t FROM search_graph ORDER BY seq);
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 SELECT * FROM (
 WITH RECURSIVE search_graph(f, t, label) AS (
 SELECT *
@@ -1191,7 +1191,7 @@ SELECT * FROM (
 ) SEARCH DEPTH FIRST BY f, t SET seq
 SELECT * FROM search_graph ORDER BY seq
 ) as foo;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 --
 -- https://github.com/citusdata/citus/issues/5258
 --
diff --git a/src/test/regress/expected/query_single_shard_table.out b/src/test/regress/expected/query_single_shard_table.out
index ad6037b65..5f551a988 100644
--- a/src/test/regress/expected/query_single_shard_table.out
+++ b/src/test/regress/expected/query_single_shard_table.out
@@ -1529,7 +1529,7 @@ DEBUG: Router planner cannot handle multi-shard select queries
 DEBUG: Router planner cannot handle multi-shard select queries
 DEBUG: generating subplan XXX_1 for CTE level_1: WITH RECURSIVE level_2_recursive(x) AS (VALUES (1) UNION ALL SELECT (nullkey_c1_t1.a OPERATOR(pg_catalog.+) 1) FROM (query_single_shard_table.nullkey_c1_t1 JOIN level_2_recursive level_2_recursive_1 ON ((nullkey_c1_t1.a OPERATOR(pg_catalog.=) level_2_recursive_1.x))) WHERE (nullkey_c1_t1.a OPERATOR(pg_catalog.<) 100)) SELECT level_2_recursive.x, distributed_table.a, distributed_table.b FROM (level_2_recursive JOIN query_single_shard_table.distributed_table ON ((level_2_recursive.x OPERATOR(pg_catalog.=) distributed_table.a)))
 DEBUG: Router planner cannot handle multi-shard select queries
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- grouping set
 SELECT
 id, substring(title, 2, 1) AS subtitle, count(*)
diff --git a/src/test/regress/expected/schema_based_sharding.out b/src/test/regress/expected/schema_based_sharding.out
index 28cb45688..5204d60d5 100644
--- a/src/test/regress/expected/schema_based_sharding.out
+++ b/src/test/regress/expected/schema_based_sharding.out
@@ -13,11 +13,11 @@ SELECT 1 FROM citus_add_node('localhost', :master_port, groupid => 0);
 SET client_min_messages TO NOTICE;
 -- Verify that the UDFs used to sync tenant schema metadata to workers
 -- fail on NULL input.
-SELECT citus_internal_add_tenant_schema(NULL, 1);
+SELECT citus_internal.add_tenant_schema(NULL, 1);
 ERROR: schema_id cannot be NULL
-SELECT citus_internal_add_tenant_schema(1, NULL);
+SELECT citus_internal.add_tenant_schema(1, NULL);
 ERROR: colocation_id cannot be NULL
-SELECT citus_internal_delete_tenant_schema(NULL);
+SELECT citus_internal.delete_tenant_schema(NULL);
 ERROR: schema_id cannot be NULL
 SELECT citus_internal_unregister_tenant_schema_globally(1, NULL);
 ERROR: schema_name cannot be NULL
diff --git a/src/test/regress/expected/shard_rebalancer.out b/src/test/regress/expected/shard_rebalancer.out
index 2c399f24a..a7cd6b38c 100644
--- a/src/test/regress/expected/shard_rebalancer.out
+++ b/src/test/regress/expected/shard_rebalancer.out
@@ -127,7 +127,7 @@ SELECT pg_sleep(.1); -- wait to make sure the config has changed before running
 (1 row)
 SELECT master_drain_node('localhost', :master_port);
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address:
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address:
 CALL citus_cleanup_orphaned_resources();
 ALTER SYSTEM RESET citus.local_hostname;
 SELECT pg_reload_conf();
@@ -197,7 +197,7 @@ SELECT pg_sleep(.1); -- wait to make sure the config has changed before running
 (1 row)
 SELECT replicate_table_shards('dist_table_test_2', max_shard_copies := 4, shard_transfer_mode:='block_writes');
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address:
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address:
 ALTER SYSTEM RESET citus.local_hostname;
 SELECT pg_reload_conf();
 pg_reload_conf
@@ -681,7 +681,7 @@ FROM (
 FROM pg_dist_shard
 WHERE logicalrelid = 'rebalance_test_table'::regclass
 ) T;
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address:
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address:
 CALL citus_cleanup_orphaned_resources();
 ALTER SYSTEM RESET citus.local_hostname;
 SELECT pg_reload_conf();
diff --git a/src/test/regress/expected/subquery_and_cte.out b/src/test/regress/expected/subquery_and_cte.out
index c15e9b9d7..896860865 100644
--- a/src/test/regress/expected/subquery_and_cte.out
+++ b/src/test/regress/expected/subquery_and_cte.out
@@ -527,7 +527,7 @@ FROM
 ) as bar
 WHERE foo.user_id = bar.user_id
 ORDER BY 1 DESC;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 CREATE TABLE ref_table_1 (a int);
 SELECT create_reference_table('ref_table_1');
 create_reference_table
diff --git a/src/test/regress/expected/upgrade_list_citus_objects.out b/src/test/regress/expected/upgrade_list_citus_objects.out
index 97e5c0928..a4f948ee6 100644
--- a/src/test/regress/expected/upgrade_list_citus_objects.out
+++ b/src/test/regress/expected/upgrade_list_citus_objects.out
@@ -56,10 +56,24 @@ ORDER BY 1;
 function citus_get_active_worker_nodes()
 function citus_get_node_clock()
 function citus_get_transaction_clock()
+ function citus_internal.acquire_citus_advisory_object_class_lock(integer,cstring)
+ function citus_internal.add_colocation_metadata(integer,integer,integer,regtype,oid)
+ function citus_internal.add_object_metadata(text,text[],text[],integer,integer,boolean)
+ function citus_internal.add_partition_metadata(regclass,"char",text,integer,"char")
+ function citus_internal.add_placement_metadata(bigint,bigint,integer,bigint)
+ function citus_internal.add_shard_metadata(regclass,bigint,"char",text,text)
+ function citus_internal.add_tenant_schema(oid,integer)
+ function citus_internal.adjust_local_clock_to_remote(cluster_clock)
 function citus_internal.commit_management_command_2pc()
+ function citus_internal.database_command(text)
+ function citus_internal.delete_colocation_metadata(integer)
+ function citus_internal.delete_partition_metadata(regclass)
+ function citus_internal.delete_placement_metadata(bigint)
+ function citus_internal.delete_shard_metadata(bigint)
+ function citus_internal.delete_tenant_schema(oid)
 function citus_internal.execute_command_on_remote_nodes_as_user(text,text)
 function citus_internal.find_groupid_for_node(text,integer)
- function citus_internal.mark_object_distributed(oid,text,oid)
+ function citus_internal.mark_object_distributed(oid,text,oid,text)
 function citus_internal.pg_dist_node_trigger_func()
 function citus_internal.pg_dist_rebalance_strategy_trigger_func()
 function citus_internal.pg_dist_shard_placement_trigger_func()
@@ -67,7 +81,6 @@ ORDER BY 1;
 function citus_internal.replace_isolation_tester_func()
 function citus_internal.restore_isolation_tester_func()
 function citus_internal.start_management_transaction(xid8)
- function citus_internal_acquire_citus_advisory_object_class_lock(integer,cstring)
 function citus_internal_add_colocation_metadata(integer,integer,integer,regtype,oid)
 function citus_internal_add_object_metadata(text,text[],text[],integer,integer,boolean)
 function citus_internal_add_partition_metadata(regclass,"char",text,integer,"char")
@@ -76,7 +89,6 @@ ORDER BY 1;
 function citus_internal_add_shard_metadata(regclass,bigint,"char",text,text)
 function citus_internal_add_tenant_schema(oid,integer)
 function citus_internal_adjust_local_clock_to_remote(cluster_clock)
- function citus_internal_database_command(text)
 function citus_internal_delete_colocation_metadata(integer)
 function citus_internal_delete_partition_metadata(regclass)
 function citus_internal_delete_placement_metadata(bigint)
@@ -349,5 +361,5 @@ ORDER BY 1;
 view citus_stat_tenants_local
 view pg_dist_shard_placement
 view time_partitions
-(339 rows)
+(351 rows)
diff --git a/src/test/regress/expected/with_basics.out b/src/test/regress/expected/with_basics.out
index 4eefb8837..78ed17317 100644
--- a/src/test/regress/expected/with_basics.out
+++ b/src/test/regress/expected/with_basics.out
@@ -664,14 +664,14 @@ WITH RECURSIVE basic_recursive(x) AS (
 SELECT user_id + 1 FROM users_table JOIN basic_recursive ON (user_id = x) WHERE user_id < 100
 )
 SELECT sum(x) FROM basic_recursive;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 WITH RECURSIVE basic_recursive AS (
 SELECT -1 as user_id, '2017-11-22 20:16:16.614779'::timestamp, -1, -1, -1, -1
 UNION ALL
 SELECT basic_recursive.* FROM users_table JOIN basic_recursive USING (user_id) WHERE user_id>1
 )
 SELECT * FROM basic_recursive ORDER BY user_id LIMIT 1;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- basic_recursive in FROM should error out
 SELECT
 *
@@ -682,7 +682,7 @@ FROM
 SELECT basic_recursive.* FROM users_table JOIN basic_recursive USING (user_id) WHERE user_id>1
 )
 SELECT * FROM basic_recursive ORDER BY user_id LIMIT 1) cte_rec;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- basic_recursive in WHERE with UNION ALL
 SELECT
 *
@@ -696,7 +696,7 @@ WHERE
 SELECT basic_recursive.* FROM users_table JOIN basic_recursive USING (user_id) WHERE user_id>1
 )
 SELECT * FROM basic_recursive ORDER BY user_id LIMIT 1);
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- one recursive one regular CTE should error out
 WITH RECURSIVE basic_recursive(x) AS(
 VALUES (1)
@@ -707,7 +707,7 @@ basic AS (
 SELECT count(user_id) FROM users_table
 )
 SELECT x FROM basic, basic_recursive;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- one recursive one regular which SELECTs from the recursive CTE under a simple SELECT
 WITH RECURSIVE basic_recursive(x) AS(
 VALUES (1)
@@ -718,7 +718,7 @@ basic AS (
 SELECT count(x) FROM basic_recursive
 )
 SELECT * FROM basic;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- recursive CTE in a NESTED manner
 WITH regular_cte AS (
 WITH regular_2 AS (
@@ -732,7 +732,7 @@ WITH regular_cte AS (
 SELECT * FROM regular_2
 )
 SELECT * FROM regular_cte;
-ERROR: recursive CTEs are not supported in distributed queries
+ERROR: recursive CTEs are only supported when they contain a filter on the distribution column
 -- CTEs should work with VIEWs as well
 CREATE VIEW basic_view AS
 SELECT * FROM users_table;
diff --git a/src/test/regress/multi_schedule b/src/test/regress/multi_schedule
index 5c9d8a45c..f599363a9 100644
--- a/src/test/regress/multi_schedule
+++ b/src/test/regress/multi_schedule
@@ -109,6 +109,7 @@ test: undistribute_table
 test: run_command_on_all_nodes
 test: background_task_queue_monitor
 test: other_databases
+test: citus_internal_access
 # Causal clock test
 test: clock
diff --git a/src/test/regress/spec/isolation_database_cmd_from_any_node.spec b/src/test/regress/spec/isolation_database_cmd_from_any_node.spec
index 85dc27433..1e004cb33 100644
--- a/src/test/regress/spec/isolation_database_cmd_from_any_node.spec
+++ b/src/test/regress/spec/isolation_database_cmd_from_any_node.spec
@@ -20,8 +20,8 @@ step "s1-rollback" { ROLLBACK; }
 step "s1-create-user-dbuser" { CREATE USER dbuser; }
 step "s1-drop-user-dbuser" { DROP USER dbuser; }
-step "s1-acquire-citus-adv-oclass-lock" { SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; }
-step "s1-acquire-citus-adv-oclass-lock-with-oid-testdb1" { SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; }
+step "s1-acquire-citus-adv-oclass-lock" { SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; }
+step "s1-acquire-citus-adv-oclass-lock-with-oid-testdb1" { SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; }
 step "s1-create-testdb1" { CREATE DATABASE testdb1; }
 step "s1-drop-testdb1" { DROP DATABASE testdb1; }
@@ -42,9 +42,9 @@ step "s2-begin" { BEGIN; }
 step "s2-commit" { COMMIT; }
 step "s2-rollback" { ROLLBACK; }
-step "s2-acquire-citus-adv-oclass-lock" { SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; }
-step "s2-acquire-citus-adv-oclass-lock-with-oid-testdb1" { SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; }
-step "s2-acquire-citus-adv-oclass-lock-with-oid-testdb2" { SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database; }
+step "s2-acquire-citus-adv-oclass-lock" { SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; }
+step "s2-acquire-citus-adv-oclass-lock-with-oid-testdb1" { SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; }
+step "s2-acquire-citus-adv-oclass-lock-with-oid-testdb2" { SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database; }
 step "s2-alter-testdb1-rename-to-db1" { ALTER DATABASE testdb1 RENAME TO db1; }
diff --git a/src/test/regress/sql/citus_internal_access.sql b/src/test/regress/sql/citus_internal_access.sql
new file mode 100644
index 000000000..8e97448f3
--- /dev/null
+++ b/src/test/regress/sql/citus_internal_access.sql
@@ -0,0 +1,10 @@
+--- Create a non-superuser role and check if it can access citus_internal schema functions
+CREATE USER nonsuperuser CREATEROLE;
+
+SET ROLE nonsuperuser;
+--- The non-superuser role should not be able to access citus_internal functions
+SELECT citus_internal.commit_management_command_2pc();
+SELECT citus_internal.replace_isolation_tester_func();
+
+RESET ROLE;
+DROP USER nonsuperuser;
diff --git a/src/test/regress/sql/create_drop_database_propagation.sql b/src/test/regress/sql/create_drop_database_propagation.sql
index 26e61c7b8..329f48612 100644
--- a/src/test/regress/sql/create_drop_database_propagation.sql
+++ b/src/test/regress/sql/create_drop_database_propagation.sql
@@ -5,7 +5,7 @@
 -- For versions >= 16, pg16_create_drop_database_propagation.sql is used.
 -- Test the UDF that we use to issue database command during metadata sync.
-SELECT pg_catalog.citus_internal_database_command(null); +SELECT citus_internal.database_command(null); CREATE ROLE test_db_commands WITH LOGIN; ALTER SYSTEM SET citus.enable_manual_metadata_changes_for_user TO 'test_db_commands'; @@ -14,20 +14,20 @@ SELECT pg_sleep(0.1); SET ROLE test_db_commands; -- fails on null input -SELECT pg_catalog.citus_internal_database_command(null); +SELECT citus_internal.database_command(null); -- fails on non create / drop db command -SELECT pg_catalog.citus_internal_database_command('CREATE TABLE foo_bar(a int)'); -SELECT pg_catalog.citus_internal_database_command('SELECT 1'); -SELECT pg_catalog.citus_internal_database_command('asfsfdsg'); -SELECT pg_catalog.citus_internal_database_command(''); +SELECT citus_internal.database_command('CREATE TABLE foo_bar(a int)'); +SELECT citus_internal.database_command('SELECT 1'); +SELECT citus_internal.database_command('asfsfdsg'); +SELECT citus_internal.database_command(''); RESET ROLE; ALTER ROLE test_db_commands nocreatedb; SET ROLE test_db_commands; --- make sure that pg_catalog.citus_internal_database_command doesn't cause privilege escalation -SELECT pg_catalog.citus_internal_database_command('CREATE DATABASE no_permissions'); +-- make sure that citus_internal.database_command doesn't cause privilege escalation +SELECT citus_internal.database_command('CREATE DATABASE no_permissions'); RESET ROLE; DROP USER test_db_commands; @@ -660,27 +660,27 @@ REVOKE CONNECT ON DATABASE test_db FROM propagated_role; DROP DATABASE test_db; DROP ROLE propagated_role, non_propagated_role; --- test pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock with null input -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(null, 'regression'); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), null); +-- test citus_internal.acquire_citus_advisory_object_class_lock with null input +SELECT citus_internal.acquire_citus_advisory_object_class_lock(null, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), null); -- OCLASS_DATABASE -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), NULL); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'regression'); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), ''); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'no_such_db'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS 
oclass_database), ''); +SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'no_such_db'); -- invalid OCLASS -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(-1, NULL); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(-1, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(-1, NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(-1, 'regression'); -- invalid OCLASS -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(100, NULL); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(100, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(100, NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(100, 'regression'); -- another valid OCLASS, but not implemented yet -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(10, NULL); -SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(10, 'regression'); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(10, NULL); +SELECT citus_internal.acquire_citus_advisory_object_class_lock(10, 'regression'); SELECT 1 FROM run_command_on_all_nodes('ALTER SYSTEM SET citus.enable_create_database_propagation TO ON'); SELECT 1 FROM run_command_on_all_nodes('SELECT pg_reload_conf()'); diff --git a/src/test/regress/sql/create_ref_dist_from_citus_local.sql b/src/test/regress/sql/create_ref_dist_from_citus_local.sql index 7c10abce6..2b78ab29e 100644 --- a/src/test/regress/sql/create_ref_dist_from_citus_local.sql +++ b/src/test/regress/sql/create_ref_dist_from_citus_local.sql @@ -220,7 +220,7 @@ ROLLBACK; -- reference tables. 
SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', 1, true); -SELECT pg_catalog.citus_internal_delete_placement_metadata(1); +SELECT citus_internal.delete_placement_metadata(1); CREATE ROLE test_user_create_ref_dist WITH LOGIN; GRANT ALL ON SCHEMA create_ref_dist_from_citus_local TO test_user_create_ref_dist; @@ -239,7 +239,7 @@ SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, null, 1, tru SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', null, true); SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', 1, null); -SELECT pg_catalog.citus_internal_delete_placement_metadata(null); +SELECT citus_internal.delete_placement_metadata(null); CREATE TABLE udf_test (col_1 int); SELECT citus_add_local_table_to_metadata('udf_test'); @@ -253,7 +253,7 @@ BEGIN; SELECT placementid AS udf_test_placementid FROM pg_dist_shard_placement WHERE shardid = get_shard_id_for_distribution_column('create_ref_dist_from_citus_local.udf_test') \gset - SELECT pg_catalog.citus_internal_delete_placement_metadata(:udf_test_placementid); + SELECT citus_internal.delete_placement_metadata(:udf_test_placementid); SELECT COUNT(*)=0 FROM pg_dist_placement WHERE placementid = :udf_test_placementid; ROLLBACK; diff --git a/src/test/regress/sql/failure_mx_metadata_sync.sql b/src/test/regress/sql/failure_mx_metadata_sync.sql index 90e882fe5..d8f82296f 100644 --- a/src/test/regress/sql/failure_mx_metadata_sync.sql +++ b/src/test/regress/sql/failure_mx_metadata_sync.sql @@ -56,10 +56,10 @@ SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port; -- Check failures on DDL command propagation CREATE TABLE t2 (id int PRIMARY KEY); -SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_placement_metadata").kill()'); +SELECT citus.mitmproxy('conn.onParse(query="citus_internal.add_placement_metadata").kill()'); SELECT create_distributed_table('t2', 'id'); -SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_shard_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onParse(query="citus_internal.add_shard_metadata").cancel(' || :pid || ')'); SELECT create_distributed_table('t2', 'id'); -- Verify that the table was not distributed diff --git a/src/test/regress/sql/failure_mx_metadata_sync_multi_trans.sql b/src/test/regress/sql/failure_mx_metadata_sync_multi_trans.sql index c0e575c14..afe6a64e9 100644 --- a/src/test/regress/sql/failure_mx_metadata_sync_multi_trans.sql +++ b/src/test/regress/sql/failure_mx_metadata_sync_multi_trans.sql @@ -279,33 +279,33 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE mx_metadata_sync_multi_t SELECT citus_activate_node('localhost', :worker_2_proxy_port); -- Failure to add partition metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_partition_metadata").cancel(' || :pid || ')'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_partition_metadata").kill()'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -- Failure to add shard metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT 
citus_internal.add_shard_metadata").cancel(' || :pid || ')'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_shard_metadata").kill()'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -- Failure to add placement metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_placement_metadata").cancel(' || :pid || ')'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_placement_metadata").kill()'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -- Failure to add colocation metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add_colocation_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_colocation_metadata").cancel(' || :pid || ')'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add_colocation_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_colocation_metadata").kill()'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -- Failure to add distributed object metadata -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").cancel(' || :pid || ')'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_object_metadata").cancel(' || :pid || ')'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").kill()'); +SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_object_metadata").kill()'); SELECT citus_activate_node('localhost', :worker_2_proxy_port); -- Failure to mark function as distributed diff --git a/src/test/regress/sql/failure_on_create_subscription.sql b/src/test/regress/sql/failure_on_create_subscription.sql index 3a0ae3b5e..60af71e47 100644 --- a/src/test/regress/sql/failure_on_create_subscription.sql +++ b/src/test/regress/sql/failure_on_create_subscription.sql @@ -34,9 +34,9 @@ SELECT * FROM shards_in_workers; -- Failure on creating the subscription -- Failing exactly on CREATE SUBSCRIPTION is causing flaky test where we fail with either: --- 1) ERROR: connection to the remote node localhost:xxxxx failed with the following error: ERROR: subscription "citus_shard_move_subscription_xxxxxxx" does not exist +-- 1) ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: ERROR: subscription "citus_shard_move_subscription_xxxxxxx" does not exist -- another command is already in progress --- 2) ERROR: connection to the remote node localhost:xxxxx failed with the following error: another command is already in progress +-- 2) ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: another command is already in progress -- Instead fail on the next step (ALTER SUBSCRIPTION) instead which is also required logically as part of uber CREATE SUBSCRIPTION operation. 
SELECT citus.mitmproxy('conn.onQuery(query="ALTER SUBSCRIPTION").kill()'); diff --git a/src/test/regress/sql/metadata_sync_helpers.sql b/src/test/regress/sql/metadata_sync_helpers.sql index 642b2f708..c669e9069 100644 --- a/src/test/regress/sql/metadata_sync_helpers.sql +++ b/src/test/regress/sql/metadata_sync_helpers.sql @@ -15,20 +15,20 @@ SET search_path TO metadata_sync_helpers; CREATE TABLE test(col_1 int); -- not in a distributed transaction -SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); +SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); SELECT citus_internal_update_relation_colocation ('test'::regclass, 1); -- in a distributed transaction, but the application name is not Citus BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); - SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- in a distributed transaction and the application name is Citus, allowed. BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; \c - postgres - \c - - - :worker_1_port @@ -47,7 +47,7 @@ SET search_path TO metadata_sync_helpers; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- we do not own the relation @@ -63,7 +63,7 @@ CREATE TABLE test_3(col_1 int, col_2 int); BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); SELECT count(*) FROM pg_dist_partition WHERE logicalrelid = 'metadata_sync_helpers.test_2'::regclass; ROLLBACK; @@ -71,84 +71,84 @@ ROLLBACK; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_rebalancer gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- application_name with incorrect gpid BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=not a correct gpid'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- also faills if done by the rebalancer BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT 
assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_rebalancer gpid=not a correct gpid'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- application_name with suffix is ok (e.g. pgbouncer might add this) BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001 - from 10.12.14.16:10370'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- application_name with empty gpid BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid='; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- empty application_name BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to ''; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- application_name with incorrect prefix BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- fails because there is no X distribution method BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); ROLLBACK; -- fails because there is the column does not exist BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'non_existing_col', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'non_existing_col', 0, 's'); ROLLBACK; --- fails because we do not allow NULL parameters BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata (NULL, 'h', 'non_existing_col', 0, 's'); + SELECT citus_internal.add_partition_metadata (NULL, 'h', 'non_existing_col', 0, 's'); ROLLBACK; -- fails because colocationId cannot be negative BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal 
gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', -1, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', -1, 's'); ROLLBACK; -- fails because there is no X replication model BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 'X'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 'X'); ROLLBACK; -- the same table cannot be added twice, that is enforced by a primary key @@ -156,8 +156,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); ROLLBACK; -- the same table cannot be added twice, that is enforced by a primary key even if distribution key changes @@ -165,8 +165,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_2', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_2', 0, 's'); ROLLBACK; -- hash distributed table cannot have NULL distribution key @@ -174,7 +174,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; \set VERBOSITY terse - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', NULL, 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', NULL, 0, 's'); ROLLBACK; -- even if metadata_sync_helper_role is not owner of the table test @@ -194,7 +194,7 @@ SET search_path TO metadata_sync_helpers; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); ROLLBACK; -- should throw error even if we skip the checks, there are no such nodes @@ -218,7 +218,7 @@ SET search_path TO metadata_sync_helpers; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's'); ROLLBACK; \c - 
postgres - :worker_1_port @@ -236,21 +236,21 @@ CREATE TABLE test_ref(col_1 int, col_2 int); BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', 'col_1', 0, 's'); ROLLBACK; -- non-valid replication model BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'A'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'A'); ROLLBACK; -- not-matching replication model for reference table BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'c'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'c'); ROLLBACK; -- add entry for super user table @@ -260,7 +260,7 @@ CREATE TABLE super_user_table(col_1 int); BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('super_user_table'::regclass, 'h', 'col_1', 0, 's'); + SELECT citus_internal.add_partition_metadata ('super_user_table'::regclass, 'h', 'col_1', 0, 's'); COMMIT; -- now, lets check shard metadata @@ -276,7 +276,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('super_user_table'::regclass, 1420000::bigint, 't'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- the user is only allowed to add a shard for add a table which is in pg_dist_partition @@ -286,16 +286,16 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- ok, now add the table to the pg_dist_partition BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); SET application_name to 'citus_internal gpid=10000000001'; - SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 250, 's'); - SELECT citus_internal_add_partition_metadata ('test_3'::regclass, 'h', 'col_1', 251, 's'); - SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 't'); + 
SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 250, 's'); + SELECT citus_internal.add_partition_metadata ('test_3'::regclass, 'h', 'col_1', 251, 's'); + SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 't'); COMMIT; -- we can update to a non-existing colocation group (e.g., colocate_with:=none) @@ -312,7 +312,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, -1, 't'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- invalid storage types are not allowed @@ -322,7 +322,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000, 'X'::"char", '-2147483648'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- NULL shard ranges are not allowed for hash distributed tables @@ -332,7 +332,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000, 't'::"char", NULL, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- non-integer shard ranges are not allowed @@ -342,7 +342,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", 'non-int'::text, '-1610612737'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- shardMinValue should be smaller than shardMaxValue @@ -352,7 +352,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '-1610612737'::text, '-2147483648'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- we do not allow overlapping shards for the same table @@ -364,7 +364,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text), ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text), ('test_2'::regclass, 1420002::bigint, 
't'::"char", '10'::text, '50'::text)) - SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; + SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; ROLLBACK; -- Now let's check valid pg_dist_object updates @@ -376,7 +376,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('non_existing_type', ARRAY['non_existing_user']::text[], ARRAY[]::text[], -1, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ROLLBACK; -- check the sanity of distributionArgumentIndex and colocationId @@ -386,7 +386,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], -100, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ROLLBACK; BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; @@ -395,7 +395,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], -1, -1, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ROLLBACK; -- check with non-existing object @@ -405,10 +405,10 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['non_existing_user']::text[], ARRAY[]::text[], -1, 0, false)) - SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; + SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data; ROLLBACK; --- since citus_internal_add_object_metadata is strict function returns NULL +-- since citus_internal.add_object_metadata is strict function returns NULL -- if any parameter is NULL BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02'); @@ -416,12 +416,12 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED; \set VERBOSITY terse WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], 0, 
 		NULL::int, false))
-	SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+	SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ROLLBACK;

 \c - postgres - :worker_1_port

--- Show that citus_internal_add_object_metadata only works for object types
+-- Show that citus_internal.add_object_metadata only works for object types
 -- which is known how to distribute
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
@@ -437,10 +437,10 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SET ROLE metadata_sync_helper_role;
 	WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 		AS (VALUES ('operator', ARRAY['===']::text[], ARRAY['int','int']::text[], -1, 0, false))
-	SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+	SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ROLLBACK;

--- Show that citus_internal_add_object_metadata checks the priviliges
+-- Show that citus_internal.add_object_metadata checks the priviliges
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
 	SET application_name to 'citus_internal gpid=10000000001';
@@ -454,7 +454,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SET ROLE metadata_sync_helper_role;
 	WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 		AS (VALUES ('function', ARRAY['distribution_test_function']::text[], ARRAY['integer']::text[], -1, 0, false))
-	SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+	SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ROLLBACK;

 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
@@ -468,7 +468,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SET ROLE metadata_sync_helper_role;
 	WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 		AS (VALUES ('type', ARRAY['distributed_test_type']::text[], ARRAY[]::text[], -1, 0, false))
-	SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+	SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ROLLBACK;

 -- we do not allow wrong partmethod
@@ -482,7 +482,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text),
 			('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ROLLBACK;

 -- we do not allow NULL shardMinMax values
@@ -494,12 +494,12 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 	-- manually ingest NULL values, otherwise not likely unless metadata is corrupted
 	UPDATE pg_dist_shard SET shardminvalue = NULL WHERE shardid = 1420000;
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ROLLBACK;

 \c - metadata_sync_helper_role - :worker_1_port
@@ -518,7 +518,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 			('test_2'::regclass, 1420004::bigint, 't'::"char", '51'::text, '60'::text),
 			('test_2'::regclass, 1420005::bigint, 't'::"char", '61'::text, '70'::text),
 			('test_3'::regclass, 1420008::bigint, 't'::"char", '11'::text, '20'::text))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 COMMIT;

 -- we cannot mark these two tables colocated because they are not colocated
@@ -539,7 +539,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 			('test_3'::regclass, 1420011::bigint, 't'::"char", '41'::text, '50'::text),
 			('test_3'::regclass, 1420012::bigint, 't'::"char", '51'::text, '60'::text),
 			('test_3'::regclass, 1420013::bigint, 't'::"char", '61'::text, '70'::text))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 COMMIT;

 -- shardMin/MaxValues should be NULL for reference tables
@@ -549,7 +549,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('test_ref'::regclass, 1420003::bigint, 't'::"char", '-1610612737'::text, NULL))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ROLLBACK;

 -- reference tables cannot have multiple shards
@@ -560,7 +560,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('test_ref'::regclass, 1420006::bigint, 't'::"char", NULL, NULL),
 			('test_ref'::regclass, 1420007::bigint, 't'::"char", NULL, NULL))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ROLLBACK;

 -- finally, add a shard for reference tables
@@ -570,7 +570,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('test_ref'::regclass, 1420006::bigint, 't'::"char", NULL, NULL))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 COMMIT;

 \c - postgres - :worker_1_port
@@ -583,7 +583,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 		AS (VALUES ('super_user_table'::regclass, 1420007::bigint, 't'::"char", '11'::text, '20'::text))
-	SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+	SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 COMMIT;

 \c - metadata_sync_helper_role - :worker_1_port
@@ -596,9 +596,9 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
 	SET application_name to 'citus_internal gpid=10000000001';
 	\set VERBOSITY terse
-	WITH placement_data(shardid, shardstate, shardlength, groupid, placementid) AS
-		(VALUES (-10, 1, 0::bigint, 1::int, 1500000::bigint))
-	SELECT citus_internal_add_placement_metadata(shardid, shardstate, shardlength, groupid, placementid) FROM placement_data;
+	WITH placement_data(shardid, shardlength, groupid, placementid) AS
+		(VALUES (-10, 0::bigint, 1::int, 1500000::bigint))
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ROLLBACK;

 -- invalid placementid
@@ -608,7 +608,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH placement_data(shardid, shardlength, groupid, placementid) AS
 		(VALUES (1420000, 0::bigint, 1::int, -10))
-	SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ROLLBACK;

 -- non-existing shard
@@ -618,7 +618,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH placement_data(shardid, shardlength, groupid, placementid) AS
 		(VALUES (1430100, 0::bigint, 1::int, 10))
-	SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ROLLBACK;

 -- non-existing node with non-existing node-id 123123123
@@ -628,7 +628,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH placement_data(shardid, shardlength, groupid, placementid) AS
 		(VALUES ( 1420000, 0::bigint, 123123123::int, 1500000))
-	SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ROLLBACK;

 -- create a volatile function that returns the local node id
@@ -655,7 +655,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	WITH placement_data(shardid, shardlength, groupid, placementid) AS
 		(VALUES (1420000, 0::bigint, get_node_id(), 1500000),
 			(1420000, 0::bigint, get_node_id(), 1500001))
-	SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ROLLBACK;

 -- shard is not owned by us
@@ -665,7 +665,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH placement_data(shardid, shardlength, groupid, placementid) AS
 		(VALUES (1420007, 0::bigint, get_node_id(), 1500000))
-	SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ROLLBACK;

 -- sucessfully add placements
@@ -686,7 +686,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 			(1420011, 0::bigint, get_node_id(), 1500009),
 			(1420012, 0::bigint, get_node_id(), 1500010),
 			(1420013, 0::bigint, get_node_id(), 1500011))
-	SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+	SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 COMMIT;

 -- we should be able to colocate both tables now
@@ -745,7 +745,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(shardid) AS
 		(VALUES (1420007))
-	SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data;
+	SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data;
 ROLLBACK;

 -- the user cannot delete non-existing shards
@@ -755,7 +755,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(shardid) AS
 		(VALUES (1420100))
-	SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data;
+	SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data;
 ROLLBACK;

@@ -770,7 +770,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	\set VERBOSITY terse
 	WITH shard_data(shardid) AS
 		(VALUES (1420000))
-	SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data;
+	SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data;

 	SELECT count(*) FROM pg_dist_shard WHERE shardid = 1420000;
 	SELECT count(*) FROM pg_dist_placement WHERE shardid = 1420000;
@@ -841,8 +841,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
 	SET application_name to 'citus_internal gpid=10000000001';
 	\set VERBOSITY terse
-	SELECT citus_internal_add_partition_metadata ('test_5'::regclass, 'h', 'int_col', 500, 's');
-	SELECT citus_internal_add_partition_metadata ('test_6'::regclass, 'h', 'text_col', 500, 's');
+	SELECT citus_internal.add_partition_metadata ('test_5'::regclass, 'h', 'int_col', 500, 's');
+	SELECT citus_internal.add_partition_metadata ('test_6'::regclass, 'h', 'text_col', 500, 's');
 ROLLBACK;

@@ -858,8 +858,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 	SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
 	SET application_name to 'citus_internal gpid=10000000001';
 	\set VERBOSITY terse
-	SELECT citus_internal_add_partition_metadata ('test_7'::regclass, 'h', 'text_col', 500, 's');
-	SELECT citus_internal_add_partition_metadata ('test_8'::regclass, 'h', 'text_col', 500, 's');
+	SELECT citus_internal.add_partition_metadata ('test_7'::regclass, 'h', 'text_col', 500, 's');
+	SELECT citus_internal.add_partition_metadata ('test_8'::regclass, 'h', 'text_col', 500, 's');
 ROLLBACK;

 -- we don't need the table/schema anymore
diff --git a/src/test/regress/sql/other_databases.sql b/src/test/regress/sql/other_databases.sql
index 629f74f45..8cd54f354 100644
--- a/src/test/regress/sql/other_databases.sql
+++ b/src/test/regress/sql/other_databases.sql
@@ -51,8 +51,16 @@ SET ROLE nonsuperuser;
 SELECT citus_internal.execute_command_on_remote_nodes_as_user($$SELECT 'dangerous query'$$, 'postgres');

 \c other_db1
+SET citus.local_hostname TO '127.0.0.1';
+SET ROLE nonsuperuser;
+
+-- Make sure that we don't try to access pg_dist_node.
+-- Otherwise, we would get the following error:
+-- ERROR: cache lookup failed for pg_dist_node, called too early?
 CREATE USER other_db_user9;

+RESET ROLE;
+RESET citus.local_hostname;
 RESET ROLE;
 \c regression
 SELECT usename FROM pg_user WHERE usename LIKE 'other\_db\_user%' ORDER BY 1;
diff --git a/src/test/regress/sql/schema_based_sharding.sql b/src/test/regress/sql/schema_based_sharding.sql
index bd8065ab9..af5c201f4 100644
--- a/src/test/regress/sql/schema_based_sharding.sql
+++ b/src/test/regress/sql/schema_based_sharding.sql
@@ -12,9 +12,9 @@ SET client_min_messages TO NOTICE;

 -- Verify that the UDFs used to sync tenant schema metadata to workers
 -- fail on NULL input.
-SELECT citus_internal_add_tenant_schema(NULL, 1);
-SELECT citus_internal_add_tenant_schema(1, NULL);
-SELECT citus_internal_delete_tenant_schema(NULL);
+SELECT citus_internal.add_tenant_schema(NULL, 1);
+SELECT citus_internal.add_tenant_schema(1, NULL);
+SELECT citus_internal.delete_tenant_schema(NULL);
 SELECT citus_internal_unregister_tenant_schema_globally(1, NULL);
 SELECT citus_internal_unregister_tenant_schema_globally(NULL, 'text');