mirror of https://github.com/citusdata/citus.git
Merge branch 'main' into grant_parameter_propagation
commit dcfb386a09
@@ -175,7 +175,7 @@ that are missing in earlier minor versions.

 ### Following our coding conventions

-CircleCI will automatically reject any PRs which do not follow our coding
+CI pipeline will automatically reject any PRs which do not follow our coding
 conventions. The easiest way to ensure your PR adheres to those conventions is
 to use the [citus_indent](https://github.com/citusdata/tools/tree/develop/uncrustify)
 tool. This tool uses `uncrustify` under the hood.

@@ -4,7 +4,22 @@ set -euo pipefail

 # shellcheck disable=SC1091
 source ci/ci_helpers.sh

-# extract citus gucs in the form of "citus.X"
-grep -o -E "(\.*\"citus\.\w+\")," src/backend/distributed/shared_library_init.c > gucs.out
+# Find the line that exactly matches "RegisterCitusConfigVariables(void)" in
+# shared_library_init.c. grep command returns something like
+# "934:RegisterCitusConfigVariables(void)" and we extract the line number
+# with cut.
+RegisterCitusConfigVariables_begin_linenumber=$(grep -n "^RegisterCitusConfigVariables(void)$" src/backend/distributed/shared_library_init.c | cut -d: -f1)
+
+# Consider the lines starting from $RegisterCitusConfigVariables_begin_linenumber,
+# grep the first line that starts with "}" and extract the line number with cut
+# as in the previous step.
+RegisterCitusConfigVariables_length=$(tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | grep -n -m 1 "^}$" | cut -d: -f1)
+
+# extract the function definition of RegisterCitusConfigVariables into a temp file
+tail -n +$RegisterCitusConfigVariables_begin_linenumber src/backend/distributed/shared_library_init.c | head -n $(($RegisterCitusConfigVariables_length)) > RegisterCitusConfigVariables_func_def.out
+
+# extract citus gucs in the form of <tab><tab>"citus.X"
+grep -P "^[\t][\t]\"citus\.[a-zA-Z_0-9]+\"" RegisterCitusConfigVariables_func_def.out > gucs.out
 sort -c gucs.out
 rm gucs.out
+rm RegisterCitusConfigVariables_func_def.out

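The CI script above enforces that Citus GUC definitions stay alphabetically sorted: it extracts the body of `RegisterCitusConfigVariables()`, greps out the `"citus.X"` names, and runs `sort -c`. A minimal Python sketch of the same check (the sample function body and helper names here are illustrative assumptions, not the real source):

```python
import re

def extract_gucs(func_def: str) -> list[str]:
    # Mirror of the grep -P step: GUC names appear as "citus.X" string
    # literals indented with two tabs inside RegisterCitusConfigVariables().
    return re.findall(r'^\t\t"(citus\.[A-Za-z_0-9]+)"', func_def, re.MULTILINE)

def gucs_are_sorted(gucs: list[str]) -> bool:
    # Equivalent of `sort -c`: succeed only if the input is already in order.
    return gucs == sorted(gucs)

# Hypothetical excerpt standing in for the real function body.
sample = (
    '\t\t"citus.enable_ddl_propagation",\n'
    '\t\t"citus.max_adaptive_executor_pool_size",\n'
)
```

The shell version relies on `sort -c` exiting non-zero on unsorted input, which fails the CI job thanks to `set -euo pipefail`.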
@@ -1749,8 +1749,6 @@ The reason for handling dependencies and deparsing in post-process step is that

 Not all table DDL is currently deparsed. In that case, the original command sent by the client is used. That is a shortcoming in our DDL logic that causes user-facing issues and should be addressed. We do not directly construct a separate DDL command for each shard. Instead, we call the `worker_apply_shard_ddl_command(shardid bigint, ddl_command text)` function, which parses the DDL command, replaces the table names with shard names in the parse tree according to the shard ID, and then executes the command. That also has some shortcomings, because we cannot support more complex DDL commands in this manner (e.g. adding multiple foreign keys). Ideally, all DDL would be deparsed, and for table DDL the deparsed query string would have shard names, similar to regular queries.

-`markDistributed` is used to indicate whether we add a record to `pg_dist_object` to mark the object as "distributed".
-
 ## Defining a new DDL command

 All commands that are propagated by Citus should be defined in the DistributeObjectOps struct. Below is a sample DistributeObjectOps for the ALTER DATABASE command that is defined in the [distribute_object_ops.c](commands/distribute_object_ops.c) file.

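As the paragraph above describes, `worker_apply_shard_ddl_command()` parses a DDL command and substitutes shard names for table names according to the shard ID. A toy Python sketch of the renaming idea (Citus does this on the parse tree, not via string substitution, and the names below are hypothetical):

```python
def shard_name(table_name: str, shard_id: int) -> str:
    # Citus shard names append the shard ID to the table name.
    return f"{table_name}_{shard_id}"

def apply_shard_ddl(ddl_command: str, table_name: str, shard_id: int) -> str:
    # Toy string-level stand-in for the parse-tree rewrite performed by
    # worker_apply_shard_ddl_command(); real code handles quoting, schemas,
    # and multiple relation references correctly.
    return ddl_command.replace(table_name, shard_name(table_name, shard_id))
```

This also hints at why string substitution is insufficient in general (ambiguous identifiers, quoting), which is why the real function operates on the parse tree.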
@@ -1810,6 +1808,14 @@ GetDistributeObjectOps(Node *node)
 ...
 ```

+Finally, when adding support for propagation of a new DDL command, you also need to make sure that:
+* Use `quote_identifier()` or `quote_literal_cstr()` for the fields that might need escaping some characters or bare quotes when deparsing a DDL command.
+* The code is tolerant to nullable fields within the given `Stmt *` object, i.e., the ones that Postgres allows not specifying at all.
+* You register the object into `pg_dist_object` if it's a CREATE command and you delete the object from `pg_dist_object` if it's a DROP command.
+* Node activation (e.g., `citus_add_node()`) properly propagates the object and its dependencies to new nodes.
+* Add test cases for all the scenarios noted above.
+* Add test cases for different options that can be specified for the settings. For example, `CREATE DATABASE .. IS_TEMPLATE = TRUE` and `CREATE DATABASE .. IS_TEMPLATE = FALSE` should be tested separately.
+
 ## Object & dependency propagation

 These two topics are closely related, so we'll discuss them together. You can start by reading [Nils' blog](https://www.citusdata.com/blog/2020/06/25/using-custom-types-with-citus-and-postgres/) on the topic.

@@ -1885,7 +1891,7 @@ Generally, the process is straightforward: When a new object is created, Citus a

 Citus employs a universal strategy for dealing with objects. Every object creation, alteration, or deletion event (like custom types, tables, or extensions) is represented by the C struct `DistributeObjectOps`. You can find a list of all supported object types in [`distribute_object_ops.c`](https://github.com/citusdata/citus/blob/2c190d068918d1c457894adf97f550e5b3739184/src/backend/distributed/commands/distribute_object_ops.c#L4). As of Citus 12.1, most Postgres objects are supported, although there are a few exceptions.

-Whenever `DistributeObjectOps->markDistributed` is set to true—usually during `CREATE` operations—Citus calls `MarkObjectDistributed()`. Citus also labels the same objects as distributed across all nodes via the `citus_internal_add_object_metadata()` UDF.
+Whenever `DistributeObjectOps->markDistributed` is set to true—usually during `CREATE` operations—Citus calls `MarkObjectDistributed()`. Citus also labels the same objects as distributed across all nodes via the `citus_internal.add_object_metadata()` UDF.

 Here's a simple example:

@@ -1895,7 +1901,7 @@ CREATE TYPE type_test AS (a int, b int);
 ...
 NOTICE: issuing SELECT worker_create_or_replace_object('CREATE TYPE public.type_test AS (a integer, b integer);');
 ....
-WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
+WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['public']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
 ...

 -- Then, check pg_dist_object. This should be consistent across all nodes.

@@ -397,7 +397,7 @@ AdjustClocksToTransactionHighest(List *nodeConnectionList,

 	/* Set the clock value on participating worker nodes */
 	appendStringInfo(queryToSend,
-					 "SELECT pg_catalog.citus_internal_adjust_local_clock_to_remote"
+					 "SELECT citus_internal.adjust_local_clock_to_remote"
 					 "('(%lu, %u)'::pg_catalog.cluster_clock);",
 					 transactionClockValue->logical, transactionClockValue->counter);

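The hunk above pushes the transaction's highest observed `(logical, counter)` clock value to the participating workers. The underlying merge rule of such a hybrid clock can be sketched as follows (a simplification of Citus's `cluster_clock` type, not its actual implementation):

```python
def adjust_clock(local: tuple, remote: tuple) -> tuple:
    # A cluster_clock is a (logical, counter) pair; Python tuple comparison
    # gives the lexicographic order the clock uses, so adopting the max keeps
    # every node at or above the transaction's highest observed value.
    return max(local, remote)
```

Because each node only ever moves its clock forward to the maximum it has seen, all participants of a transaction end up with a clock at least as large as the transaction's commit value.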
@@ -890,7 +890,7 @@ CreateDatabaseDDLCommand(Oid dbId)

 	/* Generate the CREATE DATABASE statement */
 	appendStringInfo(outerDbStmt,
-					 "SELECT pg_catalog.citus_internal_database_command(%s)",
+					 "SELECT citus_internal.database_command(%s)",
 					 quote_literal_cstr(createStmt));

 	ReleaseSysCache(tuple);

@@ -776,7 +776,7 @@ PreprocessCreateExtensionStmtForCitusColumnar(Node *parsetree)
 	/*create extension citus version xxx*/
 	if (newVersionValue)
 	{
-		char *newVersion = strdup(defGetString(newVersionValue));
+		char *newVersion = pstrdup(defGetString(newVersionValue));
 		versionNumber = GetExtensionVersionNumber(newVersion);
 	}

@@ -796,7 +796,7 @@ PreprocessCreateExtensionStmtForCitusColumnar(Node *parsetree)
 	Oid citusOid = get_extension_oid("citus", true);
 	if (citusOid != InvalidOid)
 	{
-		char *curCitusVersion = strdup(get_extension_version(citusOid));
+		char *curCitusVersion = pstrdup(get_extension_version(citusOid));
 		int curCitusVersionNum = GetExtensionVersionNumber(curCitusVersion);
 		if (curCitusVersionNum < 1110)
 		{

@@ -891,7 +891,7 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
 	if (newVersionValue)
 	{
 		char *newVersion = defGetString(newVersionValue);
-		double newVersionNumber = GetExtensionVersionNumber(strdup(newVersion));
+		double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion));

 		/*alter extension citus update to version >= 11.1-1, and no citus_columnar installed */
 		if (newVersionNumber >= 1110 && citusColumnarOid == InvalidOid)

@@ -935,7 +935,7 @@ PostprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
 	if (newVersionValue)
 	{
 		char *newVersion = defGetString(newVersionValue);
-		double newVersionNumber = GetExtensionVersionNumber(strdup(newVersion));
+		double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion));
 		if (newVersionNumber >= 1110 && citusColumnarOid != InvalidOid)
 		{
 			/*upgrade citus, after "ALTER EXTENSION citus update to xxx" updates citus_columnar Y to version Z. */

@@ -92,7 +92,7 @@
 #define START_MANAGEMENT_TRANSACTION \
 	"SELECT citus_internal.start_management_transaction('%lu')"
 #define MARK_OBJECT_DISTRIBUTED \
-	"SELECT citus_internal.mark_object_distributed(%d, %s, %d)"
+	"SELECT citus_internal.mark_object_distributed(%d, %s, %d, %s)"


 bool EnableDDLPropagation = true; /* ddl propagation is enabled */

@@ -1636,7 +1636,8 @@ RunPostprocessMainDBCommand(Node *parsetree)
 						 MARK_OBJECT_DISTRIBUTED,
 						 AuthIdRelationId,
 						 quote_literal_cstr(createRoleStmt->role),
-						 roleOid);
+						 roleOid,
+						 quote_literal_cstr(CurrentUserName()));
 		RunCitusMainDBQuery(mainDBQuery->data);
 	}
 }

@@ -123,6 +123,10 @@ AddConnParam(const char *keyword, const char *value)
 						errmsg("ConnParams arrays bound check failed")));
 	}

+	/*
+	 * Don't use pstrdup here to avoid being tied to a memory context, we free
+	 * these later using ResetConnParams
+	 */
 	ConnParams.keywords[ConnParams.size] = strdup(keyword);
 	ConnParams.values[ConnParams.size] = strdup(value);
 	ConnParams.size++;

@@ -441,7 +445,7 @@ GetEffectiveConnKey(ConnectionHashKey *key)
 	if (!IsTransactionState())
 	{
 		/* we're in the task tracker, so should only see loopback */
-		Assert(strncmp(LOCAL_HOST_NAME, key->hostname, MAX_NODE_LENGTH) == 0 &&
+		Assert(strncmp(LocalHostName, key->hostname, MAX_NODE_LENGTH) == 0 &&
 			   PostPortNumber == key->port);
 		return key;
 	}

@@ -517,9 +521,23 @@ char *
 GetAuthinfo(char *hostname, int32 port, char *user)
 {
 	char *authinfo = NULL;
-	bool isLoopback = (strncmp(LOCAL_HOST_NAME, hostname, MAX_NODE_LENGTH) == 0 &&
+	bool isLoopback = (strncmp(LocalHostName, hostname, MAX_NODE_LENGTH) == 0 &&
 					   PostPortNumber == port);

+	/*
+	 * Citus will not be loaded when we run a global DDL command from a
+	 * Citus non-main database.
+	 */
+	if (!CitusHasBeenLoaded())
+	{
+		/*
+		 * We don't expect non-main databases to connect to a node other than
+		 * the local one.
+		 */
+		Assert(isLoopback);
+		return "";
+	}
+
 	if (IsTransactionState())
 	{
 		int64 nodeId = WILDCARD_NODE_ID;

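In `GetAuthinfo()` above, loopback detection compares the hostname against the configured local host name (`LocalHostName` after this change, instead of the `LOCAL_HOST_NAME` constant) and the port against `PostPortNumber`. A Python sketch of that predicate, with the default values as assumptions for illustration:

```python
def is_loopback(hostname: str, port: int,
                local_host_name: str = "localhost",
                post_port_number: int = 5432) -> bool:
    # Mirrors the C condition:
    #   strncmp(LocalHostName, hostname, MAX_NODE_LENGTH) == 0 &&
    #   PostPortNumber == port
    return hostname == local_host_name and port == post_port_number
```

Switching from the fixed `LOCAL_HOST_NAME` constant to the `LocalHostName` setting means the check respects the user-configured local host name rather than assuming a hard-coded one.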
@@ -246,6 +246,7 @@ ClearResultsIfReady(MultiConnection *connection)
 void
 ReportConnectionError(MultiConnection *connection, int elevel)
 {
+	char *userName = connection->user;
 	char *nodeName = connection->hostname;
 	int nodePort = connection->port;
 	PGconn *pgConn = connection->pgConn;

@@ -264,15 +265,15 @@ ReportConnectionError(MultiConnection *connection, int elevel)
 	if (messageDetail)
 	{
 		ereport(elevel, (errcode(ERRCODE_CONNECTION_FAILURE),
-						 errmsg("connection to the remote node %s:%d failed with the "
-								"following error: %s", nodeName, nodePort,
+						 errmsg("connection to the remote node %s@%s:%d failed with the "
+								"following error: %s", userName, nodeName, nodePort,
 								messageDetail)));
 	}
 	else
 	{
 		ereport(elevel, (errcode(ERRCODE_CONNECTION_FAILURE),
-						 errmsg("connection to the remote node %s:%d failed",
-								nodeName, nodePort)));
+						 errmsg("connection to the remote node %s@%s:%d failed",
+								userName, nodeName, nodePort)));
 	}
 }

@@ -1526,8 +1526,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte,

 	/* Assert we processed the right number of columns */
 #ifdef USE_ASSERT_CHECKING
-	while (i < colinfo->num_cols && colinfo->colnames[i] == NULL)
-		i++;
+	for (int col_index = 0; col_index < colinfo->num_cols; col_index++)
+	{
+		/*
+		 * In the above processing-loops, "i" advances only if
+		 * the column is not new, check if this is a new column.
+		 */
+		if (colinfo->is_new_col[col_index])
+			i++;
+	}
 	Assert(i == colinfo->num_cols);
 	Assert(j == nnewcolumns);
 #endif

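The replacement loop above makes the assertion arithmetic explicit: during the main processing loops, `i` only counted columns that are not new, so advancing it once per `is_new_col` entry must land it exactly on `num_cols`, while `j` must equal the number of new columns. A Python sketch of that invariant check (function and parameter names are illustrative, not from the source):

```python
def check_column_counts(is_new_col: list, i: int, j: int) -> bool:
    # "i" has counted the non-new columns; advance it once per new column,
    # mirroring the for-loop in the diff. It must then equal the total
    # column count, and "j" must equal the number of new columns.
    num_new = sum(1 for is_new in is_new_col if is_new)
    return i + num_new == len(is_new_col) and j == num_new
```

The old `while` loop skipped trailing NULL column names instead, which did not account for new columns interleaved in the middle; the rewritten loop counts them positionally.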
@@ -1580,8 +1580,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte,

 	/* Assert we processed the right number of columns */
 #ifdef USE_ASSERT_CHECKING
-	while (i < colinfo->num_cols && colinfo->colnames[i] == NULL)
-		i++;
+	for (int col_index = 0; col_index < colinfo->num_cols; col_index++)
+	{
+		/*
+		 * In the above processing-loops, "i" advances only if
+		 * the column is not new, check if this is a new column.
+		 */
+		if (colinfo->is_new_col[col_index])
+			i++;
+	}
 	Assert(i == colinfo->num_cols);
 	Assert(j == nnewcolumns);
 #endif

@@ -727,6 +727,11 @@ static uint64 MicrosecondsBetweenTimestamps(instr_time startTime, instr_time end
 static int WorkerPoolCompare(const void *lhsKey, const void *rhsKey);
 static void SetAttributeInputMetadata(DistributedExecution *execution,
 									  ShardCommandExecution *shardCommandExecution);
+static ExecutionParams * CreateDefaultExecutionParams(RowModifyLevel modLevel,
+													  List *taskList,
+													  TupleDestination *tupleDest,
+													  bool expectResults,
+													  ParamListInfo paramListInfo);


 /*

@@ -1013,14 +1018,14 @@ ExecuteTaskListOutsideTransaction(RowModifyLevel modLevel, List *taskList,


 /*
- * ExecuteTaskListIntoTupleDestWithParam is a proxy to ExecuteTaskListExtended() which uses
- * bind params from executor state, and with defaults for some of the arguments.
+ * CreateDefaultExecutionParams returns execution params based on given (possibly null)
+ * bind params (presumably from executor state) with defaults for some of the arguments.
  */
-uint64
-ExecuteTaskListIntoTupleDestWithParam(RowModifyLevel modLevel, List *taskList,
-									  TupleDestination *tupleDest,
-									  bool expectResults,
-									  ParamListInfo paramListInfo)
+static ExecutionParams *
+CreateDefaultExecutionParams(RowModifyLevel modLevel, List *taskList,
+							 TupleDestination *tupleDest,
+							 bool expectResults,
+							 ParamListInfo paramListInfo)
 {
 	int targetPoolSize = MaxAdaptiveExecutorPoolSize;
 	bool localExecutionSupported = true;

@@ -1034,6 +1039,24 @@ ExecuteTaskListIntoTupleDestWithParam(RowModifyLevel modLevel, List *taskList,
 	executionParams->tupleDestination = tupleDest;
 	executionParams->paramListInfo = paramListInfo;

+	return executionParams;
+}
+
+
+/*
+ * ExecuteTaskListIntoTupleDestWithParam is a proxy to ExecuteTaskListExtended() which uses
+ * bind params from executor state, and with defaults for some of the arguments.
+ */
+uint64
+ExecuteTaskListIntoTupleDestWithParam(RowModifyLevel modLevel, List *taskList,
+									  TupleDestination *tupleDest,
+									  bool expectResults,
+									  ParamListInfo paramListInfo)
+{
+	ExecutionParams *executionParams = CreateDefaultExecutionParams(modLevel, taskList,
+																	tupleDest,
+																	expectResults,
+																	paramListInfo);
 	return ExecuteTaskListExtended(executionParams);
 }

@@ -1047,17 +1070,11 @@ ExecuteTaskListIntoTupleDest(RowModifyLevel modLevel, List *taskList,
 							 TupleDestination *tupleDest,
 							 bool expectResults)
 {
-	int targetPoolSize = MaxAdaptiveExecutorPoolSize;
-	bool localExecutionSupported = true;
-	ExecutionParams *executionParams = CreateBasicExecutionParams(
-		modLevel, taskList, targetPoolSize, localExecutionSupported
-		);
-
-	executionParams->xactProperties = DecideTransactionPropertiesForTaskList(
-		modLevel, taskList, false);
-	executionParams->expectResults = expectResults;
-	executionParams->tupleDestination = tupleDest;
-
+	ParamListInfo paramListInfo = NULL;
+	ExecutionParams *executionParams = CreateDefaultExecutionParams(modLevel, taskList,
+																	tupleDest,
+																	expectResults,
+																	paramListInfo);
 	return ExecuteTaskListExtended(executionParams);
 }

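The refactor in the hunks above extracts the shared parameter setup into a single `CreateDefaultExecutionParams()` helper, so both entry points build identical defaults instead of duplicating them. The shape of that pattern, sketched in Python with hypothetical field names standing in for the C struct:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ExecutionParams:
    # Hypothetical subset of the C ExecutionParams struct's fields.
    task_list: list
    tuple_destination: Any
    expect_results: bool
    param_list_info: Optional[Any]

def create_default_execution_params(task_list, tuple_dest, expect_results,
                                    param_list_info=None) -> ExecutionParams:
    # Single place that fills in defaults, mirroring the C refactor.
    return ExecutionParams(task_list, tuple_dest, expect_results, param_list_info)

def execute_task_list_into_tuple_dest(task_list, tuple_dest, expect_results):
    # The no-params entry point delegates instead of duplicating the setup.
    return create_default_execution_params(task_list, tuple_dest, expect_results)
```

The design benefit is the usual one for this kind of extraction: any future default (pool size, transaction properties) is changed in one function rather than in every caller.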
@@ -67,7 +67,8 @@ PG_FUNCTION_INFO_V1(master_unmark_object_distributed);

 /*
  * mark_object_distributed adds an object to pg_dist_object
- * in all of the nodes.
+ * in all of the nodes, for the connections to the other nodes this function
+ * uses the user passed.
  */
 Datum
 mark_object_distributed(PG_FUNCTION_ARGS)

@@ -81,6 +82,8 @@ mark_object_distributed(PG_FUNCTION_ARGS)
 	Oid objectId = PG_GETARG_OID(2);
 	ObjectAddress *objectAddress = palloc0(sizeof(ObjectAddress));
 	ObjectAddressSet(*objectAddress, classId, objectId);
+	text *connectionUserText = PG_GETARG_TEXT_P(3);
+	char *connectionUser = text_to_cstring(connectionUserText);

 	/*
 	 * This function is called when a query is run from a Citus non-main database.

@@ -88,7 +91,8 @@ mark_object_distributed(PG_FUNCTION_ARGS)
 	 * 2PC still works.
 	 */
 	bool useConnectionForLocalQuery = true;
-	MarkObjectDistributedWithName(objectAddress, objectName, useConnectionForLocalQuery);
+	MarkObjectDistributedWithName(objectAddress, objectName, useConnectionForLocalQuery,
+								  connectionUser);
 	PG_RETURN_VOID();
 }

@@ -193,7 +197,8 @@ void
 MarkObjectDistributed(const ObjectAddress *distAddress)
 {
 	bool useConnectionForLocalQuery = false;
-	MarkObjectDistributedWithName(distAddress, "", useConnectionForLocalQuery);
+	MarkObjectDistributedWithName(distAddress, "", useConnectionForLocalQuery,
+								  CurrentUserName());
 }

@@ -204,7 +209,7 @@ MarkObjectDistributed(const ObjectAddress *distAddress)
  */
 void
 MarkObjectDistributedWithName(const ObjectAddress *distAddress, char *objectName,
-							  bool useConnectionForLocalQuery)
+							  bool useConnectionForLocalQuery, char *connectionUser)
 {
 	if (!CitusHasBeenLoaded())
 	{

@@ -234,7 +239,8 @@ MarkObjectDistributedWithName(const ObjectAddress *distAddress, char *objectName
 	{
 		char *workerPgDistObjectUpdateCommand =
 			CreatePgDistObjectEntryCommand(distAddress, objectName);
-		SendCommandToRemoteNodesWithMetadata(workerPgDistObjectUpdateCommand);
+		SendCommandToRemoteMetadataNodesParams(workerPgDistObjectUpdateCommand,
+											   connectionUser, 0, NULL, NULL);
 	}
 }

@@ -5723,14 +5723,6 @@ GetPoolinfoViaCatalog(int32 nodeId)
 char *
 GetAuthinfoViaCatalog(const char *roleName, int64 nodeId)
 {
-	/*
-	 * Citus will not be loaded when we run a global DDL command from a
-	 * Citus non-main database.
-	 */
-	if (!CitusHasBeenLoaded())
-	{
-		return "";
-	}
 	char *authinfo = "";
 	Datum nodeIdDatumArray[2] = {
 		Int32GetDatum(nodeId),

@@ -994,7 +994,7 @@ MarkObjectsDistributedCreateCommand(List *addresses,
 	appendStringInfo(insertDistributedObjectsCommand, ") ");

 	appendStringInfo(insertDistributedObjectsCommand,
-					 "SELECT citus_internal_add_object_metadata("
+					 "SELECT citus_internal.add_object_metadata("
 					 "typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) "
 					 "FROM distributed_object_data;");

@@ -1129,7 +1129,7 @@ DistributionCreateCommand(CitusTableCacheEntry *cacheEntry)
 	}

 	appendStringInfo(insertDistributionCommand,
-					 "SELECT citus_internal_add_partition_metadata "
+					 "SELECT citus_internal.add_partition_metadata "
 					 "(%s::regclass, '%c', %s, %d, '%c')",
 					 quote_literal_cstr(qualifiedRelationName),
 					 distributionMethod,

@@ -1171,7 +1171,7 @@ DistributionDeleteMetadataCommand(Oid relationId)
 	char *qualifiedRelationName = generate_qualified_relation_name(relationId);

 	appendStringInfo(deleteCommand,
-					 "SELECT pg_catalog.citus_internal_delete_partition_metadata(%s)",
+					 "SELECT citus_internal.delete_partition_metadata(%s)",
 					 quote_literal_cstr(qualifiedRelationName));

 	return deleteCommand->data;

@@ -1254,7 +1254,7 @@ ShardListInsertCommand(List *shardIntervalList)
 	appendStringInfo(insertPlacementCommand, ") ");

 	appendStringInfo(insertPlacementCommand,
-					 "SELECT citus_internal_add_placement_metadata("
+					 "SELECT citus_internal.add_placement_metadata("
 					 "shardid, shardlength, groupid, placementid) "
 					 "FROM placement_data;");

@@ -1310,7 +1310,7 @@ ShardListInsertCommand(List *shardIntervalList)
 	appendStringInfo(insertShardCommand, ") ");

 	appendStringInfo(insertShardCommand,
-					 "SELECT citus_internal_add_shard_metadata(relationname, shardid, "
+					 "SELECT citus_internal.add_shard_metadata(relationname, shardid, "
 					 "storagetype, shardminvalue, shardmaxvalue) "
 					 "FROM shard_data;");

@@ -1349,7 +1349,7 @@ ShardDeleteCommandList(ShardInterval *shardInterval)

 	StringInfo deleteShardCommand = makeStringInfo();
 	appendStringInfo(deleteShardCommand,
-					 "SELECT citus_internal_delete_shard_metadata(%ld);", shardId);
+					 "SELECT citus_internal.delete_shard_metadata(%ld);", shardId);

 	return list_make1(deleteShardCommand->data);
 }

@@ -4099,7 +4099,7 @@ citus_internal_database_command(PG_FUNCTION_ARGS)
 	}
 	else
 	{
-		ereport(ERROR, (errmsg("citus_internal_database_command() can only be used "
+		ereport(ERROR, (errmsg("citus_internal.database_command() can only be used "
							   "for CREATE DATABASE command by Citus.")));
 	}

@@ -4140,7 +4140,7 @@ ColocationGroupCreateCommand(uint32 colocationId, int shardCount, int replicatio
 	StringInfo insertColocationCommand = makeStringInfo();

 	appendStringInfo(insertColocationCommand,
-					 "SELECT pg_catalog.citus_internal_add_colocation_metadata("
+					 "SELECT citus_internal.add_colocation_metadata("
 					 "%d, %d, %d, %s, %s)",
 					 colocationId,
 					 shardCount,

@@ -4252,7 +4252,7 @@ ColocationGroupDeleteCommand(uint32 colocationId)
 	StringInfo deleteColocationCommand = makeStringInfo();

 	appendStringInfo(deleteColocationCommand,
-					 "SELECT pg_catalog.citus_internal_delete_colocation_metadata(%d)",
+					 "SELECT citus_internal.delete_colocation_metadata(%d)",
 					 colocationId);

 	return deleteColocationCommand->data;

@@ -4268,7 +4268,7 @@ TenantSchemaInsertCommand(Oid schemaId, uint32 colocationId)
 {
 	StringInfo command = makeStringInfo();
 	appendStringInfo(command,
-					 "SELECT pg_catalog.citus_internal_add_tenant_schema(%s, %u)",
+					 "SELECT citus_internal.add_tenant_schema(%s, %u)",
 					 RemoteSchemaIdExpressionById(schemaId), colocationId);

 	return command->data;

@@ -4284,7 +4284,7 @@ TenantSchemaDeleteCommand(char *schemaName)
 {
 	StringInfo command = makeStringInfo();
 	appendStringInfo(command,
-					 "SELECT pg_catalog.citus_internal_delete_tenant_schema(%s)",
+					 "SELECT citus_internal.delete_tenant_schema(%s)",
 					 RemoteSchemaIdExpressionByName(schemaName));

 	return command->data;

@@ -4319,7 +4319,7 @@ AddPlacementMetadataCommand(uint64 shardId, uint64 placementId,
 {
 	StringInfo command = makeStringInfo();
 	appendStringInfo(command,
-					 "SELECT citus_internal_add_placement_metadata(%ld, %ld, %d, %ld)",
+					 "SELECT citus_internal.add_placement_metadata(%ld, %ld, %d, %ld)",
 					 shardId, shardLength, groupId, placementId);
 	return command->data;
 }

@@ -4334,7 +4334,7 @@ DeletePlacementMetadataCommand(uint64 placementId)
 {
 	StringInfo command = makeStringInfo();
 	appendStringInfo(command,
-					 "SELECT pg_catalog.citus_internal_delete_placement_metadata(%ld)",
+					 "SELECT citus_internal.delete_placement_metadata(%ld)",
 					 placementId);
 	return command->data;
 }

@@ -4953,7 +4953,7 @@ SendColocationMetadataCommands(MetadataSyncContext *context)
 		}

 		appendStringInfo(colocationGroupCreateCommand,
-						 ") SELECT pg_catalog.citus_internal_add_colocation_metadata("
+						 ") SELECT citus_internal.add_colocation_metadata("
 						 "colocationid, shardcount, replicationfactor, "
 						 "distributioncolumntype, coalesce(c.oid, 0)) "
 						 "FROM colocation_group_data d LEFT JOIN pg_collation c "

@@ -5004,7 +5004,7 @@ SendTenantSchemaMetadataCommands(MetadataSyncContext *context)

 		StringInfo insertTenantSchemaCommand = makeStringInfo();
 		appendStringInfo(insertTenantSchemaCommand,
-						 "SELECT pg_catalog.citus_internal_add_tenant_schema(%s, %u)",
+						 "SELECT citus_internal.add_tenant_schema(%s, %u)",
 						 RemoteSchemaIdExpressionById(tenantSchemaForm->schemaid),
 						 tenantSchemaForm->colocationid);

@@ -1314,7 +1314,7 @@ DropShardListMetadata(List *shardIntervalList)
 	{
 		ListCell *commandCell = NULL;

-		/* send the commands one by one (calls citus_internal_delete_shard_metadata internally) */
+		/* send the commands one by one (calls citus_internal.delete_shard_metadata internally) */
 		List *shardMetadataDeleteCommandList = ShardDeleteCommandList(shardInterval);
 		foreach(commandCell, shardMetadataDeleteCommandList)
 		{

@@ -433,7 +433,7 @@ CreateTargetEntryForColumn(Form_pg_attribute attributeTuple, Index rteIndex,
 						   attributeTuple->atttypmod, attributeTuple->attcollation, 0);
 	TargetEntry *targetEntry =
 		makeTargetEntry((Expr *) targetColumn, resno,
-						strdup(attributeTuple->attname.data), false);
+						pstrdup(attributeTuple->attname.data), false);
 	return targetEntry;
 }

@@ -449,7 +449,7 @@ CreateTargetEntryForNullCol(Form_pg_attribute attributeTuple, int resno)
 										attributeTuple->attcollation);
 	char *resName = attributeTuple->attname.data;
 	TargetEntry *targetEntry =
-		makeTargetEntry(nullExpr, resno, strdup(resName), false);
+		makeTargetEntry(nullExpr, resno, pstrdup(resName), false);
 	return targetEntry;
 }

@@ -1097,8 +1097,8 @@ RecursivelyPlanCTEs(Query *query, RecursivePlanningContext *planningContext)
 	if (query->hasRecursive)
 	{
 		return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
-							 "recursive CTEs are not supported in distributed "
-							 "queries",
+							 "recursive CTEs are only supported when they "
+							 "contain a filter on the distribution column",
 							 NULL, NULL);
 	}

@@ -170,7 +170,7 @@ SerializeDistributedDDLsOnObjectClassInternal(ObjectClass objectClass,

 /*
  * AcquireCitusAdvisoryObjectClassLockCommand returns a command to call
- * pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock().
+ * citus_internal.acquire_citus_advisory_object_class_lock().
  */
 static char *
 AcquireCitusAdvisoryObjectClassLockCommand(ObjectClass objectClass,

@@ -185,7 +185,7 @@ AcquireCitusAdvisoryObjectClassLockCommand(ObjectClass objectClass,

 	StringInfo command = makeStringInfo();
 	appendStringInfo(command,
-					 "SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(%d, %s)",
+					 "SELECT citus_internal.acquire_citus_advisory_object_class_lock(%d, %s)",
 					 objectClassInt, quotedObjectName);

 	return command->data;

@@ -12,3 +12,29 @@
 ALTER TABLE pg_catalog.pg_dist_transaction ADD COLUMN outer_xid xid8;

 #include "udfs/citus_internal_acquire_citus_advisory_object_class_lock/12.2-1.sql"
+
+GRANT USAGE ON SCHEMA citus_internal TO PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.commit_management_command_2pc FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.execute_command_on_remote_nodes_as_user FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.find_groupid_for_node FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.mark_object_distributed FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.pg_dist_node_trigger_func FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.pg_dist_rebalance_strategy_trigger_func FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.pg_dist_shard_placement_trigger_func FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.refresh_isolation_tester_prepared_statement FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.replace_isolation_tester_func FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.restore_isolation_tester_func FROM PUBLIC;
+REVOKE ALL ON FUNCTION citus_internal.start_management_transaction FROM PUBLIC;
+
+#include "udfs/citus_internal_add_colocation_metadata/12.2-1.sql"
+#include "udfs/citus_internal_add_object_metadata/12.2-1.sql"
+#include "udfs/citus_internal_add_partition_metadata/12.2-1.sql"
+#include "udfs/citus_internal_add_placement_metadata/12.2-1.sql"
+#include "udfs/citus_internal_add_shard_metadata/12.2-1.sql"
+#include "udfs/citus_internal_add_tenant_schema/12.2-1.sql"
+#include "udfs/citus_internal_adjust_local_clock_to_remote/12.2-1.sql"
+#include "udfs/citus_internal_delete_colocation_metadata/12.2-1.sql"
+#include "udfs/citus_internal_delete_partition_metadata/12.2-1.sql"
+#include "udfs/citus_internal_delete_placement_metadata/12.2-1.sql"
+#include "udfs/citus_internal_delete_shard_metadata/12.2-1.sql"
+#include "udfs/citus_internal_delete_tenant_schema/12.2-1.sql"

@@ -1,7 +1,7 @@
 -- citus--12.2-1--12.1-1

-DROP FUNCTION pg_catalog.citus_internal_database_command(text);
-DROP FUNCTION pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(int, cstring);
+DROP FUNCTION citus_internal.database_command(text);
+DROP FUNCTION citus_internal.acquire_citus_advisory_object_class_lock(int, cstring);

 #include "../udfs/citus_add_rebalance_strategy/10.1-1.sql"

@ -15,9 +15,23 @@ DROP FUNCTION citus_internal.execute_command_on_remote_nodes_as_user(
|
|||
);
|
||||
|
||||
DROP FUNCTION citus_internal.mark_object_distributed(
|
||||
classId Oid, objectName text, objectId Oid
|
||||
classId Oid, objectName text, objectId Oid, connectionUser text
|
||||
);
|
||||
|
||||
DROP FUNCTION citus_internal.commit_management_command_2pc();
|
||||
|
||||
ALTER TABLE pg_catalog.pg_dist_transaction DROP COLUMN outer_xid;
|
||||
REVOKE USAGE ON SCHEMA citus_internal FROM PUBLIC;
|
||||
|
||||
DROP FUNCTION citus_internal.add_colocation_metadata(int, int, int, regtype, oid);
|
||||
DROP FUNCTION citus_internal.add_object_metadata(text, text[], text[], integer, integer, boolean);
|
||||
DROP FUNCTION citus_internal.add_partition_metadata(regclass, "char", text, integer, "char");
|
||||
DROP FUNCTION citus_internal.add_placement_metadata(bigint, bigint, integer, bigint);
|
||||
DROP FUNCTION citus_internal.add_shard_metadata(regclass, bigint, "char", text, text);
|
||||
DROP FUNCTION citus_internal.add_tenant_schema(oid, integer);
|
||||
DROP FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock);
|
||||
DROP FUNCTION citus_internal.delete_colocation_metadata(int);
|
||||
DROP FUNCTION citus_internal.delete_partition_metadata(regclass);
|
||||
DROP FUNCTION citus_internal.delete_placement_metadata(bigint);
|
||||
DROP FUNCTION citus_internal.delete_shard_metadata(bigint);
|
||||
DROP FUNCTION citus_internal.delete_tenant_schema(oid);
|
||||
|
|
|
@@ -1,4 +1,4 @@
-CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring)
+CREATE OR REPLACE FUNCTION citus_internal.acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring)
 RETURNS void
 LANGUAGE C
 VOLATILE

@@ -1,4 +1,4 @@
-CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring)
+CREATE OR REPLACE FUNCTION citus_internal.acquire_citus_advisory_object_class_lock(objectClass int, qualifiedObjectName cstring)
 RETURNS void
 LANGUAGE C
 VOLATILE
src/backend/distributed/sql/udfs/citus_internal_add_colocation_metadata/12.2-1.sql (generated, new file, 27 lines)

@@ -0,0 +1,27 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_colocation_metadata(
+    colocation_id int,
+    shard_count int,
+    replication_factor int,
+    distribution_column_type regtype,
+    distribution_column_collation oid)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_colocation_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_colocation_metadata(int,int,int,regtype,oid) IS
+    'Inserts a co-location group into pg_dist_colocation';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_colocation_metadata(
+    colocation_id int,
+    shard_count int,
+    replication_factor int,
+    distribution_column_type regtype,
+    distribution_column_collation oid)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_colocation_metadata(int,int,int,regtype,oid) IS
+    'Inserts a co-location group into pg_dist_colocation';

@@ -1,3 +1,17 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_colocation_metadata(
+    colocation_id int,
+    shard_count int,
+    replication_factor int,
+    distribution_column_type regtype,
+    distribution_column_collation oid)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_colocation_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_colocation_metadata(int,int,int,regtype,oid) IS
+    'Inserts a co-location group into pg_dist_colocation';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_colocation_metadata(
     colocation_id int,
     shard_count int,
src/backend/distributed/sql/udfs/citus_internal_add_object_metadata/12.2-1.sql (generated, new file, 29 lines)

@@ -0,0 +1,29 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_object_metadata(
+    typeText text,
+    objNames text[],
+    objArgs text[],
+    distribution_argument_index int,
+    colocationid int,
+    force_delegation bool)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_object_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_object_metadata(text,text[],text[],int,int,bool) IS
+    'Inserts distributed object into pg_dist_object';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_object_metadata(
+    typeText text,
+    objNames text[],
+    objArgs text[],
+    distribution_argument_index int,
+    colocationid int,
+    force_delegation bool)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_object_metadata(text,text[],text[],int,int,bool) IS
+    'Inserts distributed object into pg_dist_object';

@@ -1,3 +1,18 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_object_metadata(
+    typeText text,
+    objNames text[],
+    objArgs text[],
+    distribution_argument_index int,
+    colocationid int,
+    force_delegation bool)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_object_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_object_metadata(text,text[],text[],int,int,bool) IS
+    'Inserts distributed object into pg_dist_object';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_object_metadata(
     typeText text,
     objNames text[],
src/backend/distributed/sql/udfs/citus_internal_add_partition_metadata/12.2-1.sql (generated, new file, 22 lines)

@@ -0,0 +1,22 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_partition_metadata(
+    relation_id regclass, distribution_method "char",
+    distribution_column text, colocation_id integer,
+    replication_model "char")
+    RETURNS void
+    LANGUAGE C
+    AS 'MODULE_PATHNAME', $$citus_internal_add_partition_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_partition_metadata(regclass, "char", text, integer, "char") IS
+    'Inserts into pg_dist_partition with user checks';
+
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_partition_metadata(
+    relation_id regclass, distribution_method "char",
+    distribution_column text, colocation_id integer,
+    replication_model "char")
+    RETURNS void
+    LANGUAGE C
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_partition_metadata(regclass, "char", text, integer, "char") IS
+    'Inserts into pg_dist_partition with user checks';

@@ -1,3 +1,15 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_partition_metadata(
+    relation_id regclass, distribution_method "char",
+    distribution_column text, colocation_id integer,
+    replication_model "char")
+    RETURNS void
+    LANGUAGE C
+    AS 'MODULE_PATHNAME', $$citus_internal_add_partition_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_partition_metadata(regclass, "char", text, integer, "char") IS
+    'Inserts into pg_dist_partition with user checks';
+
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_partition_metadata(
     relation_id regclass, distribution_method "char",
     distribution_column text, colocation_id integer,
src/backend/distributed/sql/udfs/citus_internal_add_placement_metadata/12.2-1.sql (generated, new file, 36 lines)

@@ -0,0 +1,36 @@
+-- create a new function, without shardstate
+CREATE OR REPLACE FUNCTION citus_internal.add_placement_metadata(
+    shard_id bigint,
+    shard_length bigint, group_id integer,
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_placement_metadata(bigint, bigint, integer, bigint) IS
+    'Inserts into pg_dist_shard_placement with user checks';
+
+-- create a new function, without shardstate
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_placement_metadata(
+    shard_id bigint,
+    shard_length bigint, group_id integer,
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata$$;
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_placement_metadata(bigint, bigint, integer, bigint) IS
+    'Inserts into pg_dist_shard_placement with user checks';
+
+-- replace the old one so it would call the old C function with shard_state
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_placement_metadata(
+    shard_id bigint, shard_state integer,
+    shard_length bigint, group_id integer,
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata_legacy$$;
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_placement_metadata(bigint, integer, bigint, integer, bigint) IS
+    'Inserts into pg_dist_shard_placement with user checks';
+

@@ -1,3 +1,15 @@
+-- create a new function, without shardstate
+CREATE OR REPLACE FUNCTION citus_internal.add_placement_metadata(
+    shard_id bigint,
+    shard_length bigint, group_id integer,
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_add_placement_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.add_placement_metadata(bigint, bigint, integer, bigint) IS
+    'Inserts into pg_dist_shard_placement with user checks';
+
 -- create a new function, without shardstate
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_placement_metadata(
     shard_id bigint,
src/backend/distributed/sql/udfs/citus_internal_add_shard_metadata/12.2-1.sql (generated, new file, 21 lines)

@@ -0,0 +1,21 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_shard_metadata(
+    relation_id regclass, shard_id bigint,
+    storage_type "char", shard_min_value text,
+    shard_max_value text
+    )
+    RETURNS void
+    LANGUAGE C
+    AS 'MODULE_PATHNAME', $$citus_internal_add_shard_metadata$$;
+COMMENT ON FUNCTION citus_internal.add_shard_metadata(regclass, bigint, "char", text, text) IS
+    'Inserts into pg_dist_shard with user checks';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_shard_metadata(
+    relation_id regclass, shard_id bigint,
+    storage_type "char", shard_min_value text,
+    shard_max_value text
+    )
+    RETURNS void
+    LANGUAGE C
+    AS 'MODULE_PATHNAME';
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_shard_metadata(regclass, bigint, "char", text, text) IS
+    'Inserts into pg_dist_shard with user checks';

@@ -1,3 +1,14 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_shard_metadata(
+    relation_id regclass, shard_id bigint,
+    storage_type "char", shard_min_value text,
+    shard_max_value text
+    )
+    RETURNS void
+    LANGUAGE C
+    AS 'MODULE_PATHNAME', $$citus_internal_add_shard_metadata$$;
+COMMENT ON FUNCTION citus_internal.add_shard_metadata(regclass, bigint, "char", text, text) IS
+    'Inserts into pg_dist_shard with user checks';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_shard_metadata(
     relation_id regclass, shard_id bigint,
     storage_type "char", shard_min_value text,
src/backend/distributed/sql/udfs/citus_internal_add_tenant_schema/12.2-1.sql (generated, new file, 17 lines)

@@ -0,0 +1,17 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_tenant_schema(schema_id Oid, colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME', $$citus_internal_add_tenant_schema$$;
+
+COMMENT ON FUNCTION citus_internal.add_tenant_schema(Oid, int) IS
+    'insert given tenant schema into pg_dist_schema with given colocation id';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_tenant_schema(schema_id Oid, colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_add_tenant_schema(Oid, int) IS
+    'insert given tenant schema into pg_dist_schema with given colocation id';

@@ -1,3 +1,12 @@
+CREATE OR REPLACE FUNCTION citus_internal.add_tenant_schema(schema_id Oid, colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME', $$citus_internal_add_tenant_schema$$;
+
+COMMENT ON FUNCTION citus_internal.add_tenant_schema(Oid, int) IS
+    'insert given tenant schema into pg_dist_schema with given colocation id';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_add_tenant_schema(schema_id Oid, colocation_id int)
 RETURNS void
 LANGUAGE C
src/backend/distributed/sql/udfs/citus_internal_adjust_local_clock_to_remote/12.2-1.sql (generated, new file, 17 lines)

@@ -0,0 +1,17 @@
+CREATE OR REPLACE FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    RETURNS void
+    LANGUAGE C STABLE PARALLEL SAFE STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_adjust_local_clock_to_remote$$;
+COMMENT ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    IS 'Internal UDF used to adjust the local clock to the maximum of nodes in the cluster';
+
+REVOKE ALL ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock) FROM PUBLIC;
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    RETURNS void
+    LANGUAGE C STABLE PARALLEL SAFE STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_adjust_local_clock_to_remote$$;
+COMMENT ON FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    IS 'Internal UDF used to adjust the local clock to the maximum of nodes in the cluster';
+
+REVOKE ALL ON FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock) FROM PUBLIC;

@@ -1,3 +1,12 @@
+CREATE OR REPLACE FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    RETURNS void
+    LANGUAGE C STABLE PARALLEL SAFE STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_adjust_local_clock_to_remote$$;
+COMMENT ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock)
+    IS 'Internal UDF used to adjust the local clock to the maximum of nodes in the cluster';
+
+REVOKE ALL ON FUNCTION citus_internal.adjust_local_clock_to_remote(pg_catalog.cluster_clock) FROM PUBLIC;
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_adjust_local_clock_to_remote(pg_catalog.cluster_clock)
 RETURNS void
 LANGUAGE C STABLE PARALLEL SAFE STRICT
@@ -1,10 +1,10 @@
 --
--- citus_internal_database_command run given database command without transaction block restriction.
+-- citus_internal.database_command run given database command without transaction block restriction.

-CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_database_command(command text)
+CREATE OR REPLACE FUNCTION citus_internal.database_command(command text)
 RETURNS void
 LANGUAGE C
 VOLATILE
 AS 'MODULE_PATHNAME', $$citus_internal_database_command$$;
-COMMENT ON FUNCTION pg_catalog.citus_internal_database_command(text) IS
+COMMENT ON FUNCTION citus_internal.database_command(text) IS
     'run a database command without transaction block restrictions';

@@ -1,10 +1,10 @@
 --
--- citus_internal_database_command run given database command without transaction block restriction.
+-- citus_internal.database_command run given database command without transaction block restriction.

-CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_database_command(command text)
+CREATE OR REPLACE FUNCTION citus_internal.database_command(command text)
 RETURNS void
 LANGUAGE C
 VOLATILE
 AS 'MODULE_PATHNAME', $$citus_internal_database_command$$;
-COMMENT ON FUNCTION pg_catalog.citus_internal_database_command(text) IS
+COMMENT ON FUNCTION citus_internal.database_command(text) IS
     'run a database command without transaction block restrictions';
src/backend/distributed/sql/udfs/citus_internal_delete_colocation_metadata/12.2-1.sql (generated, new file, 19 lines)

@@ -0,0 +1,19 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_colocation_metadata(
+    colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_colocation_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.delete_colocation_metadata(int) IS
+    'deletes a co-location group from pg_dist_colocation';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_colocation_metadata(
+    colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_colocation_metadata(int) IS
+    'deletes a co-location group from pg_dist_colocation';

@@ -1,3 +1,13 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_colocation_metadata(
+    colocation_id int)
+    RETURNS void
+    LANGUAGE C
+    STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_colocation_metadata$$;
+
+COMMENT ON FUNCTION citus_internal.delete_colocation_metadata(int) IS
+    'deletes a co-location group from pg_dist_colocation';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_colocation_metadata(
     colocation_id int)
 RETURNS void
src/backend/distributed/sql/udfs/citus_internal_delete_partition_metadata/12.2-1.sql (generated, new file, 14 lines)

@@ -0,0 +1,14 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_partition_metadata(table_name regclass)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_partition_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_partition_metadata(regclass) IS
+    'Deletes a row from pg_dist_partition with table ownership checks';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_partition_metadata(table_name regclass)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME';
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_partition_metadata(regclass) IS
+    'Deletes a row from pg_dist_partition with table ownership checks';
+

@@ -1,3 +1,10 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_partition_metadata(table_name regclass)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_partition_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_partition_metadata(regclass) IS
+    'Deletes a row from pg_dist_partition with table ownership checks';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_partition_metadata(table_name regclass)
 RETURNS void
 LANGUAGE C STRICT
src/backend/distributed/sql/udfs/citus_internal_delete_placement_metadata/12.2-1.sql (generated, new file, 19 lines)

@@ -0,0 +1,19 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_placement_metadata(
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME',
+    $$citus_internal_delete_placement_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_placement_metadata(bigint)
+    IS 'Delete placement with given id from pg_dist_placement metadata table.';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_placement_metadata(
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME',
+    $$citus_internal_delete_placement_metadata$$;
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_placement_metadata(bigint)
+    IS 'Delete placement with given id from pg_dist_placement metadata table.';

@@ -1,3 +1,13 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_placement_metadata(
+    placement_id bigint)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME',
+    $$citus_internal_delete_placement_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_placement_metadata(bigint)
+    IS 'Delete placement with given id from pg_dist_placement metadata table.';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_placement_metadata(
     placement_id bigint)
 RETURNS void
src/backend/distributed/sql/udfs/citus_internal_delete_shard_metadata/12.2-1.sql (generated, new file, 14 lines)

@@ -0,0 +1,14 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_shard_metadata(shard_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_shard_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_shard_metadata(bigint) IS
+    'Deletes rows from pg_dist_shard and pg_dist_shard_placement with user checks';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_shard_metadata(shard_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME';
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_shard_metadata(bigint) IS
+    'Deletes rows from pg_dist_shard and pg_dist_shard_placement with user checks';
+

@@ -1,3 +1,10 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_shard_metadata(shard_id bigint)
+    RETURNS void
+    LANGUAGE C STRICT
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_shard_metadata$$;
+COMMENT ON FUNCTION citus_internal.delete_shard_metadata(bigint) IS
+    'Deletes rows from pg_dist_shard and pg_dist_shard_placement with user checks';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_shard_metadata(shard_id bigint)
 RETURNS void
 LANGUAGE C STRICT
src/backend/distributed/sql/udfs/citus_internal_delete_tenant_schema/12.2-1.sql (generated, new file, 17 lines)

@@ -0,0 +1,17 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_tenant_schema(schema_id Oid)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_tenant_schema$$;
+
+COMMENT ON FUNCTION citus_internal.delete_tenant_schema(Oid) IS
+    'delete given tenant schema from pg_dist_schema';
+
+CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_tenant_schema(schema_id Oid)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME';
+
+COMMENT ON FUNCTION pg_catalog.citus_internal_delete_tenant_schema(Oid) IS
+    'delete given tenant schema from pg_dist_schema';

@@ -1,3 +1,12 @@
+CREATE OR REPLACE FUNCTION citus_internal.delete_tenant_schema(schema_id Oid)
+    RETURNS void
+    LANGUAGE C
+    VOLATILE
+    AS 'MODULE_PATHNAME', $$citus_internal_delete_tenant_schema$$;
+
+COMMENT ON FUNCTION citus_internal.delete_tenant_schema(Oid) IS
+    'delete given tenant schema from pg_dist_schema';
+
 CREATE OR REPLACE FUNCTION pg_catalog.citus_internal_delete_tenant_schema(schema_id Oid)
 RETURNS void
 LANGUAGE C
@@ -1,7 +1,7 @@
-CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid)
+CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text)
 RETURNS VOID
 LANGUAGE C
 AS 'MODULE_PATHNAME', $$mark_object_distributed$$;

-COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid)
+COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text)
 IS 'adds an object to pg_dist_object on all nodes';

@@ -1,7 +1,7 @@
-CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid)
+CREATE OR REPLACE FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text)
 RETURNS VOID
 LANGUAGE C
 AS 'MODULE_PATHNAME', $$mark_object_distributed$$;

-COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid)
+COMMENT ON FUNCTION citus_internal.mark_object_distributed(classId Oid, objectName text, objectId Oid, connectionUser text)
 IS 'adds an object to pg_dist_object on all nodes';
@@ -125,7 +125,7 @@ wait_until_metadata_sync(PG_FUNCTION_ARGS)

 	/* First we start listening. */
 	MultiConnection *connection = GetNodeConnection(FORCE_NEW_CONNECTION,
-													LOCAL_HOST_NAME, PostPortNumber);
+													LocalHostName, PostPortNumber);
 	ExecuteCriticalRemoteCommand(connection, "LISTEN " METADATA_SYNC_CHANNEL);

 	/*
@@ -155,7 +155,7 @@ run_commands_on_session_level_connection_to_node(PG_FUNCTION_ARGS)

 	StringInfo processStringInfo = makeStringInfo();
 	StringInfo workerProcessStringInfo = makeStringInfo();
-	MultiConnection *localConnection = GetNodeConnection(0, LOCAL_HOST_NAME,
+	MultiConnection *localConnection = GetNodeConnection(0, LocalHostName,
														 PostPortNumber);

 	if (!singleConnection)
@@ -36,10 +36,6 @@
 #include "distributed/worker_manager.h"
 #include "distributed/worker_transaction.h"

-static void SendCommandToRemoteMetadataNodesParams(const char *command,
-												   const char *user, int parameterCount,
-												   const Oid *parameterTypes,
-												   const char *const *parameterValues);
 static void SendBareCommandListToMetadataNodesInternal(List *commandList,
													   TargetWorkerSet targetWorkerSet);
 static void SendCommandToMetadataWorkersParams(const char *command,
@@ -209,7 +205,7 @@ SendCommandListToRemoteNodesWithMetadata(List *commands)
 * SendCommandToWorkersParamsInternal() that can be used to send commands
 * to remote metadata nodes.
 */
-static void
+void
 SendCommandToRemoteMetadataNodesParams(const char *command,
									   const char *user, int parameterCount,
									   const Oid *parameterTypes,
@@ -61,14 +61,6 @@
 */
 #define LOCAL_NODE_ID UINT32_MAX

-/*
- * If you want to connect to the current node use `LocalHostName`, which is a GUC, instead
- * of the hardcoded loopback hostname. Only if you really need the loopback hostname use
- * this define.
- */
-#define LOCAL_HOST_NAME "localhost"
-
-
 /* forward declare, to avoid forcing large headers on everyone */
 struct pg_conn; /* target of the PGconn typedef */
 struct MemoryContextData;
@@ -24,7 +24,8 @@ extern bool IsAnyObjectDistributed(const List *addresses);
 extern bool ClusterHasDistributedFunctionWithDistArgument(void);
 extern void MarkObjectDistributed(const ObjectAddress *distAddress);
 extern void MarkObjectDistributedWithName(const ObjectAddress *distAddress, char *name,
-										  bool useConnectionForLocalQuery);
+										  bool useConnectionForLocalQuery,
+										  char *connectionUser);
 extern void MarkObjectDistributedViaSuperUser(const ObjectAddress *distAddress);
 extern void MarkObjectDistributedLocally(const ObjectAddress *distAddress);
 extern void UnmarkObjectDistributed(const ObjectAddress *address);
@@ -68,6 +68,10 @@ extern void SendCommandToWorkersAsUser(TargetWorkerSet targetWorkerSet,
									   const char *nodeUser, const char *command);
 extern void SendCommandToWorkerAsUser(const char *nodeName, int32 nodePort,
									  const char *nodeUser, const char *command);
+extern void SendCommandToRemoteMetadataNodesParams(const char *command,
+												   const char *user, int parameterCount,
+												   const Oid *parameterTypes,
+												   const char *const *parameterValues);
 extern bool SendOptionalCommandListToWorkerOutsideTransaction(const char *nodeName,
															  int32 nodePort,
															  const char *nodeUser,
@@ -212,6 +212,18 @@ DEPS = {
         ["columnar_create", "columnar_load"],
         repeatable=False,
     ),
+    "multi_metadata_sync": TestDeps(
+        None,
+        [
+            "multi_sequence_default",
+            "alter_database_propagation",
+            "alter_role_propagation",
+            "grant_on_schema_propagation",
+            "multi_test_catalog_views",
+            "multi_drop_extension",
+        ],
+        repeatable=False,
+    ),
 }

@@ -22,7 +22,7 @@ def test_main_commited_outer_not_yet(cluster):
         "SELECT citus_internal.execute_command_on_remote_nodes_as_user('CREATE USER u1;', 'postgres')"
     )
     cur2.execute(
-        "SELECT citus_internal.mark_object_distributed(1260, 'u1', 123123)"
+        "SELECT citus_internal.mark_object_distributed(1260, 'u1', 123123, 'postgres')"
     )
     cur2.execute("COMMIT")

@@ -133,7 +133,7 @@ def test_main_commited_outer_aborted(cluster):
         "SELECT citus_internal.execute_command_on_remote_nodes_as_user('CREATE USER u2;', 'postgres')"
     )
     cur2.execute(
-        "SELECT citus_internal.mark_object_distributed(1260, 'u2', 321321)"
+        "SELECT citus_internal.mark_object_distributed(1260, 'u2', 321321, 'postgres')"
     )
     cur2.execute("COMMIT")
@@ -0,0 +1,10 @@
--- Create a non-superuser role and check if it can access citus_internal schema functions
CREATE USER nonsuperuser CREATEROLE;
SET ROLE nonsuperuser;
--- The non-superuser role should not be able to access citus_internal functions
SELECT citus_internal.commit_management_command_2pc();
ERROR: permission denied for function commit_management_command_2pc
SELECT citus_internal.replace_isolation_tester_func();
ERROR: permission denied for function replace_isolation_tester_func
RESET ROLE;
DROP USER nonsuperuser;
@@ -3,7 +3,7 @@
-- For versions >= 15, pg15_create_drop_database_propagation.sql is used.
-- For versions >= 16, pg16_create_drop_database_propagation.sql is used.
-- Test the UDF that we use to issue database command during metadata sync.
SELECT pg_catalog.citus_internal_database_command(null);
SELECT citus_internal.database_command(null);
ERROR: This is an internal Citus function can only be used in a distributed transaction
CREATE ROLE test_db_commands WITH LOGIN;
ALTER SYSTEM SET citus.enable_manual_metadata_changes_for_user TO 'test_db_commands';

@@ -21,22 +21,22 @@ SELECT pg_sleep(0.1);

SET ROLE test_db_commands;
-- fails on null input
SELECT pg_catalog.citus_internal_database_command(null);
SELECT citus_internal.database_command(null);
ERROR: command cannot be NULL
-- fails on non create / drop db command
SELECT pg_catalog.citus_internal_database_command('CREATE TABLE foo_bar(a int)');
ERROR: citus_internal_database_command() can only be used for CREATE DATABASE command by Citus.
SELECT pg_catalog.citus_internal_database_command('SELECT 1');
ERROR: citus_internal_database_command() can only be used for CREATE DATABASE command by Citus.
SELECT pg_catalog.citus_internal_database_command('asfsfdsg');
SELECT citus_internal.database_command('CREATE TABLE foo_bar(a int)');
ERROR: citus_internal.database_command() can only be used for CREATE DATABASE command by Citus.
SELECT citus_internal.database_command('SELECT 1');
ERROR: citus_internal.database_command() can only be used for CREATE DATABASE command by Citus.
SELECT citus_internal.database_command('asfsfdsg');
ERROR: syntax error at or near "asfsfdsg"
SELECT pg_catalog.citus_internal_database_command('');
SELECT citus_internal.database_command('');
ERROR: cannot execute multiple utility events
RESET ROLE;
ALTER ROLE test_db_commands nocreatedb;
SET ROLE test_db_commands;
-- make sure that pg_catalog.citus_internal_database_command doesn't cause privilege escalation
SELECT pg_catalog.citus_internal_database_command('CREATE DATABASE no_permissions');
-- make sure that citus_internal.database_command doesn't cause privilege escalation
SELECT citus_internal.database_command('CREATE DATABASE no_permissions');
ERROR: permission denied to create database
RESET ROLE;
DROP USER test_db_commands;
@@ -1095,46 +1095,46 @@ SELECT * FROM public.check_database_on_all_nodes('test_db') ORDER BY node_type;
REVOKE CONNECT ON DATABASE test_db FROM propagated_role;
DROP DATABASE test_db;
DROP ROLE propagated_role, non_propagated_role;
-- test pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock with null input
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(null, 'regression');
-- test citus_internal.acquire_citus_advisory_object_class_lock with null input
SELECT citus_internal.acquire_citus_advisory_object_class_lock(null, 'regression');
ERROR: object_class cannot be NULL
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), null);
citus_internal_acquire_citus_advisory_object_class_lock
SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), null);
acquire_citus_advisory_object_class_lock
---------------------------------------------------------------------

(1 row)

-- OCLASS_DATABASE
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), NULL);
citus_internal_acquire_citus_advisory_object_class_lock
SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), NULL);
acquire_citus_advisory_object_class_lock
---------------------------------------------------------------------

(1 row)

SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'regression');
citus_internal_acquire_citus_advisory_object_class_lock
SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'regression');
acquire_citus_advisory_object_class_lock
---------------------------------------------------------------------

(1 row)

SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), '');
SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), '');
ERROR: database "" does not exist
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'no_such_db');
SELECT citus_internal.acquire_citus_advisory_object_class_lock((SELECT CASE WHEN substring(version(), '\d+')::integer < 16 THEN 25 ELSE 26 END AS oclass_database), 'no_such_db');
ERROR: database "no_such_db" does not exist
-- invalid OCLASS
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(-1, NULL);
SELECT citus_internal.acquire_citus_advisory_object_class_lock(-1, NULL);
ERROR: unsupported object class: -1
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(-1, 'regression');
SELECT citus_internal.acquire_citus_advisory_object_class_lock(-1, 'regression');
ERROR: unsupported object class: -1
-- invalid OCLASS
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(100, NULL);
SELECT citus_internal.acquire_citus_advisory_object_class_lock(100, NULL);
ERROR: unsupported object class: 100
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(100, 'regression');
SELECT citus_internal.acquire_citus_advisory_object_class_lock(100, 'regression');
ERROR: unsupported object class: 100
-- another valid OCLASS, but not implemented yet
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(10, NULL);
SELECT citus_internal.acquire_citus_advisory_object_class_lock(10, NULL);
ERROR: unsupported object class: 10
SELECT pg_catalog.citus_internal_acquire_citus_advisory_object_class_lock(10, 'regression');
SELECT citus_internal.acquire_citus_advisory_object_class_lock(10, 'regression');
ERROR: unsupported object class: 10
SELECT 1 FROM run_command_on_all_nodes('ALTER SYSTEM SET citus.enable_create_database_propagation TO ON');
?column?
@@ -371,7 +371,7 @@ ROLLBACK;
-- reference tables.
SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', 1, true);
ERROR: This is an internal Citus function can only be used in a distributed transaction
SELECT pg_catalog.citus_internal_delete_placement_metadata(1);
SELECT citus_internal.delete_placement_metadata(1);
ERROR: This is an internal Citus function can only be used in a distributed transaction
CREATE ROLE test_user_create_ref_dist WITH LOGIN;
GRANT ALL ON SCHEMA create_ref_dist_from_citus_local TO test_user_create_ref_dist;

@@ -401,7 +401,7 @@ SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', null, t
ERROR: colocation_id cannot be NULL
SELECT pg_catalog.citus_internal_update_none_dist_table_metadata(1, 't', 1, null);
ERROR: auto_converted cannot be NULL
SELECT pg_catalog.citus_internal_delete_placement_metadata(null);
SELECT citus_internal.delete_placement_metadata(null);
ERROR: placement_id cannot be NULL
CREATE TABLE udf_test (col_1 int);
SELECT citus_add_local_table_to_metadata('udf_test');

@@ -426,8 +426,8 @@ BEGIN;

SELECT placementid AS udf_test_placementid FROM pg_dist_shard_placement
WHERE shardid = get_shard_id_for_distribution_column('create_ref_dist_from_citus_local.udf_test') \gset
SELECT pg_catalog.citus_internal_delete_placement_metadata(:udf_test_placementid);
citus_internal_delete_placement_metadata
SELECT citus_internal.delete_placement_metadata(:udf_test_placementid);
delete_placement_metadata
---------------------------------------------------------------------

(1 row)
@@ -128,7 +128,7 @@ BEGIN;
(1 row)

SELECT count(*) FROM socket_test_table;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
ROLLBACK;
-- repartition joins also can recover
SET citus.enable_repartition_joins TO on;
@@ -354,8 +354,8 @@ NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.pa
NOTICE: issuing DROP TABLE IF EXISTS drop_partitioned_table.parent_xxxxx CASCADE
NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1')
NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1')
NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400)
NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400)
NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400)
NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400)
ROLLBACK;
NOTICE: issuing ROLLBACK
NOTICE: issuing ROLLBACK

@@ -377,8 +377,8 @@ NOTICE: issuing DROP TABLE IF EXISTS drop_partitioned_table.parent_xxxxx CASCAD
NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1')
NOTICE: issuing SELECT worker_drop_distributed_table('drop_partitioned_table.child1')
NOTICE: issuing DROP TABLE IF EXISTS drop_partitioned_table.child1_xxxxx CASCADE
NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400)
NOTICE: issuing SELECT pg_catalog.citus_internal_delete_colocation_metadata(1344400)
NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400)
NOTICE: issuing SELECT citus_internal.delete_colocation_metadata(1344400)
ROLLBACK;
NOTICE: issuing ROLLBACK
NOTICE: issuing ROLLBACK
@@ -84,7 +84,7 @@ SELECT citus.mitmproxy('conn.connect_delay(1400)');

ALTER TABLE products ADD CONSTRAINT p_key PRIMARY KEY(product_no);
WARNING: could not establish connection after 900 ms
ERROR: connection to the remote node localhost:xxxxx failed
ERROR: connection to the remote node postgres@localhost:xxxxx failed
RESET citus.node_connection_timeout;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
@@ -36,7 +36,7 @@ SELECT citus.mitmproxy('conn.kill()');
(1 row)

\COPY test_table FROM stdin delimiter ',';
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
CONTEXT: COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy

@@ -271,7 +271,7 @@ SELECT citus.mitmproxy('conn.kill()');
(1 row)

\COPY test_table_2 FROM stdin delimiter ',';
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------
@@ -28,7 +28,7 @@ SELECT citus.mitmproxy('conn.kill()');
(1 row)

SELECT create_distributed_table('test_table', 'id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass;
count
---------------------------------------------------------------------

@@ -164,7 +164,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
(1 row)

SELECT create_distributed_table('test_table', 'id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass;
count
---------------------------------------------------------------------

@@ -436,7 +436,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL R
(1 row)

SELECT create_distributed_table('test_table', 'id', colocate_with => 'colocated_table');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass;
count
---------------------------------------------------------------------

@@ -519,7 +519,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^SELECT worker_apply_shard_ddl_comma
(1 row)

SELECT create_distributed_table('test_table', 'id', colocate_with => 'colocated_table');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass;
count
---------------------------------------------------------------------

@@ -680,7 +680,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
(1 row)

SELECT create_distributed_table('test_table', 'id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass;
count
---------------------------------------------------------------------

@@ -901,7 +901,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL R
(1 row)

SELECT create_distributed_table('test_table', 'id', colocate_with => 'colocated_table');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT count(*) FROM pg_dist_shard WHERE logicalrelid='create_distributed_table_non_empty_failure.test_table'::regclass;
count
---------------------------------------------------------------------
@@ -29,7 +29,7 @@ CREATE INDEX CONCURRENTLY idx_index_test ON index_test(id, value_1);
WARNING: Commands that are not transaction-safe may result in partial failure, potentially leading to an inconsistent state.
If the problematic command is a CREATE operation, consider using the 'IF EXISTS' syntax to drop the object,
if applicable, and then re-attempt the original command.
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------

@@ -63,7 +63,7 @@ CREATE INDEX CONCURRENTLY idx_index_test ON index_test(id, value_1);
WARNING: Commands that are not transaction-safe may result in partial failure, potentially leading to an inconsistent state.
If the problematic command is a CREATE operation, consider using the 'IF EXISTS' syntax to drop the object,
if applicable, and then re-attempt the original command.
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------

@@ -144,7 +144,7 @@ DROP INDEX CONCURRENTLY IF EXISTS idx_index_test;
WARNING: Commands that are not transaction-safe may result in partial failure, potentially leading to an inconsistent state.
If the problematic command is a CREATE operation, consider using the 'IF EXISTS' syntax to drop the object,
if applicable, and then re-attempt the original command.
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------
@@ -22,7 +22,7 @@ SELECT citus.mitmproxy('conn.kill()');
(1 row)

SELECT create_distributed_table('test_table','id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------

@@ -116,7 +116,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_shard_ddl_comman
(1 row)

SELECT create_distributed_table('test_table','id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------

@@ -147,7 +147,7 @@ BEGIN;
(1 row)

SELECT create_distributed_table('test_table', 'id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
COMMIT;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy

@@ -215,7 +215,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
(1 row)

SELECT create_distributed_table('test_table','id',colocate_with=>'temp_table');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------

@@ -484,7 +484,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');

BEGIN;
SELECT create_distributed_table('test_table','id');
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
ROLLBACK;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
@@ -86,7 +86,7 @@ FROM
ORDER BY 1 DESC LIMIT 5
) as foo
WHERE foo.user_id = cte.user_id;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
-- kill at the third copy (pull)
SELECT citus.mitmproxy('conn.onQuery(query="SELECT DISTINCT users_table.user").kill()');
mitmproxy

@@ -117,7 +117,7 @@ FROM
ORDER BY 1 DESC LIMIT 5
) as foo
WHERE foo.user_id = cte.user_id;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
-- cancel at the first copy (push)
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')');
mitmproxy
@@ -254,7 +254,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^DELETE FROM").kill()');

WITH cte_delete AS MATERIALIZED (DELETE FROM users_table WHERE user_name in ('A', 'D') RETURNING *)
INSERT INTO users_table SELECT * FROM cte_delete;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
-- verify contents are the same
SELECT citus.mitmproxy('conn.allow()');
mitmproxy

@@ -365,7 +365,7 @@ BEGIN;
SET LOCAL citus.multi_shard_modify_mode = 'sequential';
WITH cte_delete AS MATERIALIZED (DELETE FROM users_table WHERE user_name in ('A', 'D') RETURNING *)
INSERT INTO users_table SELECT * FROM cte_delete;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
END;
RESET SEARCH_PATH;
SELECT citus.mitmproxy('conn.allow()');
@@ -36,7 +36,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()');
(1 row)

ALTER TABLE test_table ADD COLUMN new_column INT;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass;
array_agg
---------------------------------------------------------------------

@@ -99,7 +99,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil
(1 row)

ALTER TABLE test_table ADD COLUMN new_column INT;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
-- show that we've never commited the changes
SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass;
array_agg

@@ -300,7 +300,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()');
(1 row)

ALTER TABLE test_table DROP COLUMN new_column;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass;
array_agg
---------------------------------------------------------------------

@@ -361,7 +361,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil
(1 row)

ALTER TABLE test_table DROP COLUMN new_column;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass;
array_agg
---------------------------------------------------------------------
@@ -661,7 +661,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()');
(1 row)

ALTER TABLE test_table ADD COLUMN new_column INT;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass;
array_agg
---------------------------------------------------------------------

@@ -722,7 +722,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil
(1 row)

ALTER TABLE test_table ADD COLUMN new_column INT;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT array_agg(name::text ORDER BY name::text) FROM public.table_attrs where relid = 'test_table'::regclass;
array_agg
---------------------------------------------------------------------

@@ -1010,7 +1010,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kil
(1 row)

ALTER TABLE test_table ADD COLUMN new_column INT;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
-- kill as soon as the coordinator after it sends worker_apply_shard_ddl_command 2nd time
SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").after(2).kill()');
mitmproxy

@@ -1019,7 +1019,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").aft
(1 row)

ALTER TABLE test_table ADD COLUMN new_column INT;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
-- cancel as soon as the coordinator after it sends worker_apply_shard_ddl_command 2nd time
SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").after(2).cancel(' || pg_backend_pid() || ')');
mitmproxy
@@ -88,7 +88,7 @@ CREATE TABLE distributed_result_info AS
SELECT resultId, nodeport, rowcount, targetShardId, targetShardIndex
FROM partition_task_list_results('test', $$ SELECT * FROM source_table $$, 'target_table')
NATURAL JOIN pg_dist_node;
WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
SELECT * FROM distributed_result_info ORDER BY resultId;
resultid | nodeport | rowcount | targetshardid | targetshardindex
---------------------------------------------------------------------

@@ -101,7 +101,7 @@ NOTICE: issuing SELECT count(*) AS count FROM failure_failover_to_local_executi
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT count(*) AS count FROM failure_failover_to_local_execution.failover_to_local_1980003 failover_to_local WHERE true
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
NOTICE: executing the command locally: SELECT count(*) AS count FROM failure_failover_to_local_execution.failover_to_local_1980000 failover_to_local WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM failure_failover_to_local_execution.failover_to_local_1980002 failover_to_local WHERE true
count
@@ -44,7 +44,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown"
(1 row)

INSERT INTO events_summary SELECT user_id, event_id, count(*) FROM events_table GROUP BY 1,2;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy

@@ -95,7 +95,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown"
(1 row)

INSERT INTO events_table SELECT * FROM events_table;
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
|
@@ -55,7 +55,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_partition_query_result").kill
 (1 row)

 INSERT INTO target_table SELECT * FROM source_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT * FROM target_table ORDER BY a;
  a | b
 ---------------------------------------------------------------------
@@ -68,7 +68,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_partition_query_result").kill
 (1 row)

 INSERT INTO target_table SELECT * FROM replicated_source_table;
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT * FROM target_table ORDER BY a;
  a | b
 ---------------------------------------------------------------------
@@ -138,7 +138,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="read_intermediate_results").kill()')
 (1 row)

 INSERT INTO target_table SELECT * FROM source_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT * FROM target_table ORDER BY a;
  a | b
 ---------------------------------------------------------------------
@@ -151,7 +151,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="read_intermediate_results").kill()')
 (1 row)

 INSERT INTO target_table SELECT * FROM replicated_source_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT * FROM target_table ORDER BY a;
  a | b
 ---------------------------------------------------------------------
@@ -168,7 +168,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="read_intermediate_results").kill()')
 (1 row)

 INSERT INTO replicated_target_table SELECT * FROM source_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT * FROM replicated_target_table;
  a | b
 ---------------------------------------------------------------------
@@ -33,7 +33,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE").kill()');

 BEGIN;
 DELETE FROM dml_test WHERE id = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 DELETE FROM dml_test WHERE id = 2;
 ERROR: current transaction is aborted, commands ignored until end of transaction block
 INSERT INTO dml_test VALUES (5, 'Epsilon');
@@ -93,7 +93,7 @@ BEGIN;
 DELETE FROM dml_test WHERE id = 1;
 DELETE FROM dml_test WHERE id = 2;
 INSERT INTO dml_test VALUES (5, 'Epsilon');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 UPDATE dml_test SET name = 'alpha' WHERE id = 1;
 ERROR: current transaction is aborted, commands ignored until end of transaction block
 UPDATE dml_test SET name = 'gamma' WHERE id = 3;
@@ -148,7 +148,7 @@ DELETE FROM dml_test WHERE id = 1;
 DELETE FROM dml_test WHERE id = 2;
 INSERT INTO dml_test VALUES (5, 'Epsilon');
 UPDATE dml_test SET name = 'alpha' WHERE id = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 UPDATE dml_test SET name = 'gamma' WHERE id = 3;
 ERROR: current transaction is aborted, commands ignored until end of transaction block
 COMMIT;
@@ -43,7 +43,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()');
 (1 row)

 INSERT INTO distributed_table VALUES (1,1), (1,2), (1,3);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- this test is broken, see https://github.com/citusdata/citus/issues/2460
 -- SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
 -- INSERT INTO distributed_table VALUES (1,4), (1,5), (1,6);
@@ -55,7 +55,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
 (1 row)

 INSERT INTO distributed_table VALUES (1,7), (5,8);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- this test is broken, see https://github.com/citusdata/citus/issues/2460
 -- SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
 -- INSERT INTO distributed_table VALUES (1,9), (5,10);
@@ -67,7 +67,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
 (1 row)

 INSERT INTO distributed_table VALUES (1,11), (6,12);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
  mitmproxy
 ---------------------------------------------------------------------
@@ -84,7 +84,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).kill()');
 (1 row)

 INSERT INTO distributed_table VALUES (1,15), (6,16);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).cancel(' || :pid || ')');
  mitmproxy
 ---------------------------------------------------------------------
@@ -101,7 +101,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
 (1 row)

 INSERT INTO distributed_table VALUES (2,19),(1,20);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
  mitmproxy
 ---------------------------------------------------------------------
@@ -58,7 +58,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()');

 -- issue a multi shard delete
 DELETE FROM t2 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is deleted
 SELECT count(*) FROM t2;
  count
@@ -74,7 +74,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005").
 (1 row)

 DELETE FROM t2 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is deleted
 SELECT count(*) FROM t2;
  count
@@ -134,7 +134,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');

 -- issue a multi shard update
 UPDATE t2 SET c = 4 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
  b2 | c4
@@ -150,7 +150,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").kill(
 (1 row)

 UPDATE t2 SET c = 4 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
  b2 | c4
@@ -202,7 +202,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()');

 -- issue a multi shard delete
 DELETE FROM t2 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is deleted
 SELECT count(*) FROM t2;
  count
@@ -218,7 +218,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005").
 (1 row)

 DELETE FROM t2 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is deleted
 SELECT count(*) FROM t2;
  count
@@ -278,7 +278,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');

 -- issue a multi shard update
 UPDATE t2 SET c = 4 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
  b2 | c4
@@ -294,7 +294,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").kill(
 (1 row)

 UPDATE t2 SET c = 4 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
  b2 | c4
@@ -364,7 +364,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()')
 (1 row)

 DELETE FROM r1 WHERE a = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is deleted
 SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
  b2
@@ -379,7 +379,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()')
 (1 row)

 DELETE FROM t2 WHERE b = 2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is deleted
 SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
  b2
@@ -459,7 +459,7 @@ UPDATE t3 SET c = q.c FROM (
 SELECT b, max(c) as c FROM t2 GROUP BY b) q
 WHERE t3.b = q.b
 RETURNING *;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 --- verify nothing is updated
 SELECT citus.mitmproxy('conn.allow()');
  mitmproxy
@@ -515,7 +515,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t3_201013").kill(
 (1 row)

 UPDATE t3 SET b = 2 WHERE b = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
  b1 | b2
@@ -547,7 +547,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO

 -- following will fail
 UPDATE t3 SET b = 2 WHERE b = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 END;
 -- verify everything is rolled back
 SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
@@ -563,7 +563,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO
 (1 row)

 UPDATE t3 SET b = 1 WHERE b = 2 RETURNING *;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
  b1 | b2
@@ -578,7 +578,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO
 (1 row)

 UPDATE t3 SET b = 2 WHERE b = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- verify nothing is updated
 SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
  b1 | b2
@@ -610,7 +610,7 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO

 -- following will fail
 UPDATE t3 SET b = 2 WHERE b = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 END;
 -- verify everything is rolled back
 SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
@@ -132,7 +132,7 @@ SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port;

 -- Check failures on DDL command propagation
 CREATE TABLE t2 (id int PRIMARY KEY);
-SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_placement_metadata").kill()');
+SELECT citus.mitmproxy('conn.onParse(query="citus_internal.add_placement_metadata").kill()');
  mitmproxy
 ---------------------------------------------------------------------

@@ -140,7 +140,7 @@ SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_placement_metadat

 SELECT create_distributed_table('t2', 'id');
 ERROR: connection not open
-SELECT citus.mitmproxy('conn.onParse(query="citus_internal_add_shard_metadata").cancel(' || :pid || ')');
+SELECT citus.mitmproxy('conn.onParse(query="citus_internal.add_shard_metadata").cancel(' || :pid || ')');
  mitmproxy
 ---------------------------------------------------------------------

@@ -155,7 +155,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE pg_dist_local_group SET group
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to drop node metadata
 SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_node").cancel(' || :pid || ')');
  mitmproxy
@@ -172,7 +172,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_node").kill()');
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to send node metadata
 SELECT citus.mitmproxy('conn.onQuery(query="INSERT INTO pg_dist_node").cancel(' || :pid || ')');
  mitmproxy
@@ -189,7 +189,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT INTO pg_dist_node").kill()');
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to drop sequence dependency for all tables
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*FROM pg_dist_partition").cancel(' || :pid || ')');
  mitmproxy
@@ -206,7 +206,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequen
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to drop shell table
 SELECT citus.mitmproxy('conn.onQuery(query="CALL pg_catalog.worker_drop_all_shell_tables").cancel(' || :pid || ')');
  mitmproxy
@@ -223,7 +223,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CALL pg_catalog.worker_drop_all_shel
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to delete all pg_dist_partition metadata
 SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_partition").cancel(' || :pid || ')');
  mitmproxy
@@ -240,7 +240,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_partition").kill
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to delete all pg_dist_shard metadata
 SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_shard").cancel(' || :pid || ')');
  mitmproxy
@@ -257,7 +257,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_shard").kill()')
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to delete all pg_dist_placement metadata
 SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_placement").cancel(' || :pid || ')');
  mitmproxy
@@ -274,7 +274,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_dist_placement").kill
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to delete all pg_dist_object metadata
 SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_object").cancel(' || :pid || ')');
  mitmproxy
@@ -291,7 +291,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_objec
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to delete all pg_dist_colocation metadata
 SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_colocation").cancel(' || :pid || ')');
  mitmproxy
@@ -308,7 +308,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM pg_catalog.pg_dist_coloc
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to alter or create role
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_alter_role").cancel(' || :pid || ')');
  mitmproxy
@@ -325,7 +325,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_alter_role")
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to set database owner
 SELECT citus.mitmproxy('conn.onQuery(query="ALTER DATABASE.*OWNER TO").cancel(' || :pid || ')');
  mitmproxy
@@ -342,7 +342,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER DATABASE.*OWNER TO").kill()');
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create schema
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metadata_sync_multi_trans AUTHORIZATION").cancel(' || :pid || ')');
  mitmproxy
@@ -359,7 +359,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metad
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create collation
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*CREATE COLLATION mx_metadata_sync_multi_trans.german_phonebook").cancel(' || :pid || ')');
  mitmproxy
@@ -376,7 +376,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create function
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metadata_sync_multi_trans.one_as_result").cancel(' || :pid || ')');
  mitmproxy
@@ -393,7 +393,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metada
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create text search dictionary
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_german_dict").cancel(' || :pid || ')');
  mitmproxy
@@ -410,7 +410,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create text search config
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_ts_config").cancel(' || :pid || ')');
  mitmproxy
@@ -427,7 +427,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create type
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*pair_type").cancel(' || :pid || ')');
  mitmproxy
@@ -444,7 +444,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_obje
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create publication
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").cancel(' || :pid || ')');
  mitmproxy
@@ -461,7 +461,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").kill()
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create sequence
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command").cancel(' || :pid || ')');
  mitmproxy
@@ -478,7 +478,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to drop sequence dependency for distributed table
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
  mitmproxy
@@ -495,7 +495,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequen
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to drop distributed table if exists
 SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
  mitmproxy
@@ -512,7 +512,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_syn
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create distributed table
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
  mitmproxy
@@ -529,7 +529,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to record sequence dependency for table
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequence_dependency").cancel(' || :pid || ')');
  mitmproxy
@@ -546,7 +546,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequ
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create index for table
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx ON mx_metadata_sync_multi_trans.dist1 USING gin").cancel(' || :pid || ')');
  mitmproxy
@@ -563,7 +563,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create reference table
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.ref").cancel(' || :pid || ')');
  mitmproxy
@@ -580,7 +580,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Failure to create local table
 SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.loc1").cancel(' || :pid || ')');
  mitmproxy
@@ -597,7 +597,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_
 (1 row)

 SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to create distributed partitioned table
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.orders").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -614,7 +614,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to create distributed partition table
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.orders_p2020_01_05").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -631,7 +631,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to attach partition
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE mx_metadata_sync_multi_trans.orders ATTACH PARTITION mx_metadata_sync_multi_trans.orders_p2020_01_05").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -648,9 +648,9 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE mx_metadata_sync_multi_t
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to add partition metadata
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_metadata").cancel(' || :pid || ')');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_partition_metadata").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
|
@ -658,16 +658,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_
|
|||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: canceling statement due to user request
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_partition_metadata").kill()');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_partition_metadata").kill()');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to add shard metadata
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_metadata").cancel(' || :pid || ')');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_shard_metadata").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
|
@ -675,16 +675,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_meta
|
|||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: canceling statement due to user request
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_shard_metadata").kill()');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_shard_metadata").kill()');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to add placement metadata
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_metadata").cancel(' || :pid || ')');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_placement_metadata").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
|
@ -692,16 +692,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_
|
|||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: canceling statement due to user request
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_placement_metadata").kill()');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_placement_metadata").kill()');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to add colocation metadata
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add_colocation_metadata").cancel(' || :pid || ')');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_colocation_metadata").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
|
@ -709,16 +709,16 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add
|
|||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: canceling statement due to user request
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.citus_internal_add_colocation_metadata").kill()');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_colocation_metadata").kill()');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to add distributed object metadata
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").cancel(' || :pid || ')');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_object_metadata").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
|
@ -726,14 +726,14 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_met
|
|||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: canceling statement due to user request
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").kill()');
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal.add_object_metadata").kill()');
|
||||
mitmproxy
|
||||
---------------------------------------------------------------------
|
||||
|
||||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark function as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as_result").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -750,7 +750,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark collation as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german_phonebook").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -767,7 +767,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark text search dictionary as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_german_dict").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -784,7 +784,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ger
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark text search configuration as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_config").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -801,7 +801,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark type as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_type").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -818,7 +818,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_t
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark sequence as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_owned").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -835,7 +835,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_ow
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to mark publication as distributed
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_all").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
@ -852,7 +852,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_al
|
|||
(1 row)
|
||||
|
||||
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
|
||||
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
|
||||
ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
|
||||
-- Failure to set isactive to true
|
||||
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE pg_dist_node SET isactive = TRUE").cancel(' || :pid || ')');
|
||||
mitmproxy
|
||||
|
|
|
@@ -43,9 +43,9 @@ SELECT * FROM shards_in_workers;
 
 -- Failure on creating the subscription
 -- Failing exactly on CREATE SUBSCRIPTION is causing flaky test where we fail with either:
--- 1) ERROR: connection to the remote node localhost:xxxxx failed with the following error: ERROR: subscription "citus_shard_move_subscription_xxxxxxx" does not exist
+-- 1) ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: ERROR: subscription "citus_shard_move_subscription_xxxxxxx" does not exist
 -- another command is already in progress
--- 2) ERROR: connection to the remote node localhost:xxxxx failed with the following error: another command is already in progress
+-- 2) ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: another command is already in progress
 -- Instead fail on the next step (ALTER SUBSCRIPTION) instead which is also required logically as part of uber CREATE SUBSCRIPTION operation.
 SELECT citus.mitmproxy('conn.onQuery(query="ALTER SUBSCRIPTION").kill()');
 mitmproxy

@@ -407,7 +407,7 @@ SELECT citus.mitmproxy('conn.matches(b"CREATE INDEX").killall()');
 (1 row)
 
 SELECT master_move_shard_placement(101, 'localhost', :worker_1_port, 'localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cleanup leftovers
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
@@ -442,7 +442,7 @@ SELECT citus.mitmproxy('conn.matches(b"CREATE INDEX").killall()');
 (1 row)
 
 SELECT master_move_shard_placement(101, 'localhost', :worker_1_port, 'localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- failure on parallel create index
 ALTER SYSTEM RESET citus.max_adaptive_executor_pool_size;
 SELECT pg_reload_conf();
@@ -458,7 +458,7 @@ SELECT citus.mitmproxy('conn.matches(b"CREATE INDEX").killall()');
 (1 row)
 
 SELECT master_move_shard_placement(101, 'localhost', :worker_1_port, 'localhost', :worker_2_proxy_port);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- Verify that the shard is not moved and the number of rows are still 100k
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy

@@ -33,7 +33,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()');
 (1 row)
 
 INSERT INTO ref_table VALUES (5, 6);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT COUNT(*) FROM ref_table WHERE key=5;
 count
 ---------------------------------------------------------------------
@@ -48,7 +48,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE").kill()');
 (1 row)
 
 UPDATE ref_table SET key=7 RETURNING value;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT COUNT(*) FROM ref_table WHERE key=7;
 count
 ---------------------------------------------------------------------
@@ -65,7 +65,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE").kill()');
 BEGIN;
 DELETE FROM ref_table WHERE key=5;
 UPDATE ref_table SET key=value;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 COMMIT;
 SELECT COUNT(*) FROM ref_table WHERE key=value;
 count

@@ -28,7 +28,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()');
 (1 row)
 
 INSERT INTO partitioned_table VALUES (0, 0);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- use both placements
 SET citus.task_assignment_policy TO "round-robin";
 -- the results should be the same

@@ -312,7 +312,7 @@ SELECT * FROM ref;
 
 ROLLBACK TO SAVEPOINT start;
 SELECT * FROM ref;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 END;
 -- clean up
 RESET client_min_messages;

@@ -27,7 +27,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT").kill()');
 (1 row)
 
 INSERT INTO mod_test VALUES (2, 6);
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT COUNT(*) FROM mod_test WHERE key=2;
 count
 ---------------------------------------------------------------------
@@ -59,7 +59,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="UPDATE").kill()');
 (1 row)
 
 UPDATE mod_test SET value='ok' WHERE key=2 RETURNING key;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT COUNT(*) FROM mod_test WHERE value='ok';
 count
 ---------------------------------------------------------------------
@@ -89,7 +89,7 @@ INSERT INTO mod_test VALUES (2, 6);
 INSERT INTO mod_test VALUES (2, 7);
 DELETE FROM mod_test WHERE key=2 AND value = '7';
 UPDATE mod_test SET value='ok' WHERE key=2;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 COMMIT;
 SELECT COUNT(*) FROM mod_test WHERE key=2;
 count

@@ -30,14 +30,14 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*select_test").kill()');
 (1 row)
 
 SELECT * FROM select_test WHERE key = 3;
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 key | value
 ---------------------------------------------------------------------
 3 | test data
 (1 row)
 
 SELECT * FROM select_test WHERE key = 3;
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 key | value
 ---------------------------------------------------------------------
 3 | test data
@@ -54,7 +54,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*select_test").kill()');
 BEGIN;
 INSERT INTO select_test VALUES (3, 'more data');
 SELECT * FROM select_test WHERE key = 3;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 COMMIT;
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
@@ -142,7 +142,7 @@ SELECT * FROM select_test WHERE key = 3;
 
 INSERT INTO select_test VALUES (3, 'even more data');
 SELECT * FROM select_test WHERE key = 3;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 COMMIT;
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*pg_prepared_xacts").after(2).kill()');
 mitmproxy
@@ -186,7 +186,7 @@ SELECT * FROM select_test WHERE key = 1;
 (1 row)
 
 SELECT * FROM select_test WHERE key = 1;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- now the same test with query cancellation
 SELECT citus.mitmproxy('conn.onQuery(query="SELECT.*select_test").after(1).cancel(' || pg_backend_pid() || ')');
 mitmproxy

@@ -627,10 +627,10 @@ WARNING: connection not open
 CONTEXT: while executing command on localhost:xxxxx
 WARNING: connection not open
 CONTEXT: while executing command on localhost:xxxxx
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
+WARNING: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 ERROR: connection not open
 CONTEXT: while executing command on localhost:xxxxx
 SELECT operation_id, object_type, object_name, node_group_id, policy_type

@@ -76,7 +76,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").kill()');
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on colocated table population
 SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").cancel(' || :pid || ')');
 mitmproxy
@@ -94,7 +94,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_2
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on colocated table constraints
 SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_2 ADD CONSTRAINT").after(2).cancel(' || :pid || ')');
 mitmproxy
@@ -131,7 +131,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").kill()');
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on table population
 SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").cancel(' || :pid || ')');
 mitmproxy
@@ -149,7 +149,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_1
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode => 'block_writes');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on table constraints
 SELECT citus.mitmproxy('conn.onQuery(query="ALTER TABLE tenant_isolation.table_1 ADD CONSTRAINT").after(2).cancel(' || :pid || ')');
 mitmproxy

@@ -159,7 +159,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SET TRANSACTION SNAPSHOT").kill()');
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode := 'force_logical');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on setting snapshot
 SELECT citus.mitmproxy('conn.onQuery(query="SET TRANSACTION SNAPSHOT").cancel(' || :pid || ')');
 mitmproxy
@@ -177,7 +177,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").kill()');
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode := 'force_logical');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on table population
 SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(300").cancel(' || :pid || ')');
 mitmproxy
@@ -195,7 +195,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").kill()');
 (1 row)
 
 SELECT isolate_tenant_to_new_shard('table_1', 5, 'CASCADE', shard_transfer_mode := 'force_logical');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 -- cancellation on colocated table population
 SELECT citus.mitmproxy('conn.onQuery(query="worker_split_copy\(302").cancel(' || :pid || ')');
 mitmproxy

@@ -43,7 +43,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()');
 (1 row)
 
 TRUNCATE test_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -152,7 +152,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="TRUNCATE TABLE truncate_failure.test
 (1 row)
 
 TRUNCATE test_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -414,7 +414,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^TRUNCATE TABLE").after(2).kill()');
 (1 row)
 
 TRUNCATE reference_table CASCADE;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -553,7 +553,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()');
 (1 row)
 
 TRUNCATE test_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -662,7 +662,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^TRUNCATE TABLE truncate_failure.tes
 (1 row)
 
 TRUNCATE test_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -922,7 +922,7 @@ SELECT citus.mitmproxy('conn.onAuthenticationOk().kill()');
 (1 row)
 
 TRUNCATE test_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -1031,7 +1031,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="TRUNCATE TABLE truncate_failure.test
 (1 row)
 
 TRUNCATE test_table;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.allow()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -30,7 +30,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM").kill()');
 (1 row)
 
 VACUUM vacuum_test;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").kill()');
 mitmproxy
 ---------------------------------------------------------------------
@@ -38,7 +38,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").kill()');
 (1 row)
 
 ANALYZE vacuum_test;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SET client_min_messages TO ERROR;
 SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
 mitmproxy
@@ -113,7 +113,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").kill()');
 (1 row)
 
 VACUUM vacuum_test, other_vacuum_test;
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR: connection to the remote node postgres@localhost:xxxxx failed with the following error: connection not open
 SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").cancel(' || pg_backend_pid() || ')');
 mitmproxy
 ---------------------------------------------------------------------
@@ -3,16 +3,16 @@ Parsed test spec with 2 sessions
 starting permutation: s1-begin s2-begin s1-acquire-citus-adv-oclass-lock s2-acquire-citus-adv-oclass-lock s1-commit s2-commit
 step s1-begin: BEGIN;
 step s2-begin: BEGIN;
-step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database;
-citus_internal_acquire_citus_advisory_object_class_lock
+step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database;
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
 
-step s2-acquire-citus-adv-oclass-lock: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; <waiting ...>
+step s2-acquire-citus-adv-oclass-lock: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database; <waiting ...>
 step s1-commit: COMMIT;
 step s2-acquire-citus-adv-oclass-lock: <... completed>
-citus_internal_acquire_citus_advisory_object_class_lock
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
@@ -23,16 +23,16 @@ starting permutation: s1-create-testdb1 s1-begin s2-begin s1-acquire-citus-adv-o
 step s1-create-testdb1: CREATE DATABASE testdb1;
 step s1-begin: BEGIN;
 step s2-begin: BEGIN;
-step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database;
-citus_internal_acquire_citus_advisory_object_class_lock
+step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database;
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
 
-step s2-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; <waiting ...>
+step s2-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database; <waiting ...>
 step s1-commit: COMMIT;
 step s2-acquire-citus-adv-oclass-lock-with-oid-testdb1: <... completed>
-citus_internal_acquire_citus_advisory_object_class_lock
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
@@ -45,14 +45,14 @@ step s1-create-testdb1: CREATE DATABASE testdb1;
 step s2-create-testdb2: CREATE DATABASE testdb2;
 step s1-begin: BEGIN;
 step s2-begin: BEGIN;
-step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database;
-citus_internal_acquire_citus_advisory_object_class_lock
+step s1-acquire-citus-adv-oclass-lock-with-oid-testdb1: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb1') FROM oclass_database;
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
 
-step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database;
-citus_internal_acquire_citus_advisory_object_class_lock
+step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database;
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
@@ -66,14 +66,14 @@ starting permutation: s2-create-testdb2 s1-begin s2-begin s1-acquire-citus-adv-o
 step s2-create-testdb2: CREATE DATABASE testdb2;
 step s1-begin: BEGIN;
 step s2-begin: BEGIN;
-step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database;
-citus_internal_acquire_citus_advisory_object_class_lock
+step s1-acquire-citus-adv-oclass-lock: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, NULL) FROM oclass_database;
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
 
-step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal_acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database;
-citus_internal_acquire_citus_advisory_object_class_lock
+step s2-acquire-citus-adv-oclass-lock-with-oid-testdb2: SELECT citus_internal.acquire_citus_advisory_object_class_lock(value, 'testdb2') FROM oclass_database;
+acquire_citus_advisory_object_class_lock
 ---------------------------------------------------------------------
 
 (1 row)
@@ -250,7 +250,7 @@ count
 step s1-commit-prepared:
     COMMIT prepared 'label';
 
-s2: WARNING: connection to the remote node non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: <system specific error>
+s2: WARNING: connection to the remote node postgres@non-existent:57637 failed with the following error: could not translate host name "non-existent" to address: <system specific error>
 step s2-execute-prepared:
     EXECUTE foo;
 
@@ -3281,9 +3281,9 @@ SELECT pg_sleep(0.1); -- wait to make sure the config has changed before running
 SET citus.enable_local_execution TO false; -- force a connection to the dummy placements
 -- run queries that use dummy placements for local execution
 SELECT * FROM event_responses WHERE FALSE;
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
 WITH cte_1 AS (SELECT * FROM event_responses LIMIT 1) SELECT count(*) FROM cte_1;
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
 ALTER SYSTEM RESET citus.local_hostname;
 SELECT pg_reload_conf();
 pg_reload_conf
@@ -3281,9 +3281,9 @@ SELECT pg_sleep(0.1); -- wait to make sure the config has changed before running
 SET citus.enable_local_execution TO false; -- force a connection to the dummy placements
 -- run queries that use dummy placements for local execution
 SELECT * FROM event_responses WHERE FALSE;
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
 WITH cte_1 AS (SELECT * FROM event_responses LIMIT 1) SELECT count(*) FROM cte_1;
-ERROR: connection to the remote node foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
+ERROR: connection to the remote node postgres@foobar:57636 failed with the following error: could not translate host name "foobar" to address: <system specific error>
 ALTER SYSTEM RESET citus.local_hostname;
 SELECT pg_reload_conf();
 pg_reload_conf
@@ -12,7 +12,7 @@ RESET client_min_messages;
 SET search_path TO metadata_sync_helpers;
 CREATE TABLE test(col_1 int);
 -- not in a distributed transaction
-SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 SELECT citus_internal_update_relation_colocation ('test'::regclass, 1);
 ERROR: This is an internal Citus function can only be used in a distributed transaction
@@ -24,7 +24,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 ROLLBACK;
 -- in a distributed transaction and the application name is Citus, allowed.
@@ -36,8 +36,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -61,7 +61,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test'::regclass, 'h', 'col_1', 0, 's');
 ERROR: must be owner of table test
 ROLLBACK;
 -- we do not own the relation
@@ -87,8 +87,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -109,8 +109,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_rebalancer gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -125,7 +125,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=not a correct gpid';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 ROLLBACK;
 -- also faills if done by the rebalancer
@@ -137,7 +137,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_rebalancer gpid=not a correct gpid';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 ROLLBACK;
 -- application_name with suffix is ok (e.g. pgbouncer might add this)
@@ -149,8 +149,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001 - from 10.12.14.16:10370';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -165,7 +165,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 ROLLBACK;
 -- empty application_name
@@ -177,7 +177,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to '';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 ROLLBACK;
 -- application_name with incorrect prefix
@@ -189,7 +189,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
 ERROR: This is an internal Citus function can only be used in a distributed transaction
 ROLLBACK;
 -- fails because there is no X distribution method
@@ -201,7 +201,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's');
 ERROR: Metadata syncing is only allowed for hash, reference and local tables:X
 ROLLBACK;
 -- fails because there is the column does not exist
@@ -213,7 +213,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'non_existing_col', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'non_existing_col', 0, 's');
 ERROR: column "non_existing_col" of relation "test_2" does not exist
 ROLLBACK;
 --- fails because we do not allow NULL parameters
@@ -225,7 +225,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata (NULL, 'h', 'non_existing_col', 0, 's');
+SELECT citus_internal.add_partition_metadata (NULL, 'h', 'non_existing_col', 0, 's');
 ERROR: relation cannot be NULL
 ROLLBACK;
 -- fails because colocationId cannot be negative
@@ -237,7 +237,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', -1, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', -1, 's');
 ERROR: Metadata syncing is only allowed for valid colocation id values.
 ROLLBACK;
 -- fails because there is no X replication model
@@ -249,7 +249,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 'X');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 'X');
 ERROR: Metadata syncing is only allowed for hash, reference and local tables:X
 ROLLBACK;
 -- the same table cannot be added twice, that is enforced by a primary key
@@ -262,13 +262,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 SET application_name to 'citus_internal gpid=10000000001';
 \set VERBOSITY terse
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
 ERROR: duplicate key value violates unique constraint "pg_dist_partition_logical_relid_index"
 ROLLBACK;
 -- the same table cannot be added twice, that is enforced by a primary key even if distribution key changes
@@ -281,13 +281,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 SET application_name to 'citus_internal gpid=10000000001';
 \set VERBOSITY terse
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_2', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_2', 0, 's');
 ERROR: duplicate key value violates unique constraint "pg_dist_partition_logical_relid_index"
 ROLLBACK;
 -- hash distributed table cannot have NULL distribution key
@@ -300,7 +300,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 SET application_name to 'citus_internal gpid=10000000001';
 \set VERBOSITY terse
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', NULL, 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', NULL, 0, 's');
 ERROR: Distribution column cannot be NULL for relation "test_2"
 ROLLBACK;
 -- even if metadata_sync_helper_role is not owner of the table test
@@ -332,8 +332,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -378,7 +378,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'X', 'col_1', 0, 's');
 ERROR: role "non_existing_user" does not exist
 ROLLBACK;
 \c - postgres - :worker_1_port
@@ -409,7 +409,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', 'col_1', 0, 's');
+SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', 'col_1', 0, 's');
 ERROR: Reference or local tables cannot have distribution columns
 ROLLBACK;
 -- non-valid replication model
@@ -421,7 +421,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'A');
+SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'A');
 ERROR: Metadata syncing is only allowed for known replication models.
 ROLLBACK;
 -- not-matching replication model for reference table
@@ -433,7 +433,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'c');
+SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 'c');
 ERROR: Local or references tables can only have 's' or 't' as the replication model.
 ROLLBACK;
 -- add entry for super user table
@@ -448,8 +448,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('super_user_table'::regclass, 'h', 'col_1', 0, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('super_user_table'::regclass, 'h', 'col_1', 0, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -470,7 +470,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('super_user_table'::regclass, 1420000::bigint, 't'::"char", '-2147483648'::text, '-1610612737'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: must be owner of table super_user_table
 ROLLBACK;
 -- the user is only allowed to add a shard for add a table which is in pg_dist_partition
@@ -485,7 +485,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '-2147483648'::text, '-1610612737'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: The relation "test_2" does not have a valid entry in pg_dist_partition.
 ROLLBACK;
 -- ok, now add the table to the pg_dist_partition
@@ -497,20 +497,20 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 (1 row)
 
 SET application_name to 'citus_internal gpid=10000000001';
-SELECT citus_internal_add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 250, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_2'::regclass, 'h', 'col_1', 250, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test_3'::regclass, 'h', 'col_1', 251, 's');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_3'::regclass, 'h', 'col_1', 251, 's');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 't');
-citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_ref'::regclass, 'n', NULL, 0, 't');
+add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@ -544,7 +544,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
|
|||
\set VERBOSITY terse
|
||||
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
|
||||
AS (VALUES ('test_2'::regclass, -1, 't'::"char", '-2147483648'::text, '-1610612737'::text))
|
||||
SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
|
||||
SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
|
||||
ERROR: Invalid shard id: -1
|
||||
ROLLBACK;
|
||||
-- invalid storage types are not allowed
|
||||
|
@ -559,7 +559,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
|
|||
\set VERBOSITY terse
|
||||
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
|
||||
AS (VALUES ('test_2'::regclass, 1420000, 'X'::"char", '-2147483648'::text, '-1610612737'::text))
|
||||
SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
|
||||
SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
|
||||
ERROR: Invalid shard storage type: X
|
||||
ROLLBACK;
|
||||
-- NULL shard ranges are not allowed for hash distributed tables
|
||||
|
@ -574,7 +574,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
|
|||
\set VERBOSITY terse
|
||||
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
|
||||
AS (VALUES ('test_2'::regclass, 1420000, 't'::"char", NULL, '-1610612737'::text))
|
||||
SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
|
||||
SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
|
||||
ERROR: Shards of has distributed table "test_2" cannot have NULL shard ranges
|
||||
ROLLBACK;
|
||||
-- non-integer shard ranges are not allowed
|
||||
|
@@ -589,7 +589,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", 'non-int'::text, '-1610612737'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: invalid input syntax for type integer: "non-int"
 ROLLBACK;
 -- shardMinValue should be smaller than shardMaxValue
@@ -604,7 +604,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '-1610612737'::text, '-2147483648'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: shardMinValue=-1610612737 is greater than shardMaxValue=-2147483648 for table "test_2", which is not allowed
 ROLLBACK;
 -- we do not allow overlapping shards for the same table
@@ -621,7 +621,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text),
            ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text),
            ('test_2'::regclass, 1420002::bigint, 't'::"char", '10'::text, '50'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: Shard intervals overlap for table "test_2": 1420001 and 1420000
 ROLLBACK;
 -- Now let's check valid pg_dist_object updates
@@ -637,7 +637,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('non_existing_type', ARRAY['non_existing_user']::text[], ARRAY[]::text[], -1, 0, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: unrecognized object type "non_existing_type"
 ROLLBACK;
 -- check the sanity of distributionArgumentIndex and colocationId
@@ -652,7 +652,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], -100, 0, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: distribution_argument_index must be between 0 and 100
 ROLLBACK;
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
@@ -666,7 +666,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], -1, -1, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: colocationId must be a positive number
 ROLLBACK;
 -- check with non-existing object
@@ -681,10 +681,10 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('role', ARRAY['non_existing_user']::text[], ARRAY[]::text[], -1, 0, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: role "non_existing_user" does not exist
 ROLLBACK;
--- since citus_internal_add_object_metadata is strict function returns NULL
+-- since citus_internal.add_object_metadata is strict function returns NULL
 -- if any parameter is NULL
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
@@ -697,15 +697,15 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('role', ARRAY['metadata_sync_helper_role']::text[], ARRAY[]::text[], 0, NULL::int, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
- citus_internal_add_object_metadata
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+ add_object_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
 ROLLBACK;
 \c - postgres - :worker_1_port
--- Show that citus_internal_add_object_metadata only works for object types
+-- Show that citus_internal.add_object_metadata only works for object types
 -- which is known how to distribute
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
@@ -724,10 +724,10 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 SET ROLE metadata_sync_helper_role;
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('operator', ARRAY['===']::text[], ARRAY['int','int']::text[], -1, 0, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: operator object can not be distributed by Citus
 ROLLBACK;
--- Show that citus_internal_add_object_metadata checks the priviliges
+-- Show that citus_internal.add_object_metadata checks the priviliges
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 SELECT assign_distributed_transaction_id(0, 8, '2021-07-09 15:41:55.542377+02');
 assign_distributed_transaction_id
@@ -744,7 +744,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 SET ROLE metadata_sync_helper_role;
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('function', ARRAY['distribution_test_function']::text[], ARRAY['integer']::text[], -1, 0, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: must be owner of function distribution_test_function
 ROLLBACK;
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
@@ -761,7 +761,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 SET ROLE metadata_sync_helper_role;
 WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation)
 AS (VALUES ('type', ARRAY['distributed_test_type']::text[], ARRAY[]::text[], -1, 0, false))
-SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
+SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) FROM distributed_object_data;
 ERROR: must be owner of type distributed_test_type
 ROLLBACK;
 -- we do not allow wrong partmethod
@@ -780,7 +780,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text),
            ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: Metadata syncing is only allowed for hash, reference and local tables: X
 ROLLBACK;
 -- we do not allow NULL shardMinMax values
@@ -797,8 +797,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_2'::regclass, 1420000::bigint, 't'::"char", '10'::text, '20'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
- citus_internal_add_shard_metadata
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+ add_shard_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -807,7 +807,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 UPDATE pg_dist_shard SET shardminvalue = NULL WHERE shardid = 1420000;
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_2'::regclass, 1420001::bigint, 't'::"char", '20'::text, '30'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: Shards of has distributed table "test_2" cannot have NULL shard ranges
 ROLLBACK;
 \c - metadata_sync_helper_role - :worker_1_port
@@ -830,8 +830,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
            ('test_2'::regclass, 1420004::bigint, 't'::"char", '51'::text, '60'::text),
            ('test_2'::regclass, 1420005::bigint, 't'::"char", '61'::text, '70'::text),
            ('test_3'::regclass, 1420008::bigint, 't'::"char", '11'::text, '20'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
- citus_internal_add_shard_metadata
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+ add_shard_metadata
 ---------------------------------------------------------------------
 
 
@@ -871,8 +871,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
            ('test_3'::regclass, 1420011::bigint, 't'::"char", '41'::text, '50'::text),
            ('test_3'::regclass, 1420012::bigint, 't'::"char", '51'::text, '60'::text),
            ('test_3'::regclass, 1420013::bigint, 't'::"char", '61'::text, '70'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
- citus_internal_add_shard_metadata
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+ add_shard_metadata
 ---------------------------------------------------------------------
 
 
@@ -894,7 +894,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_ref'::regclass, 1420003::bigint, 't'::"char", '-1610612737'::text, NULL))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: Shards of reference or local table "test_ref" should have NULL shard ranges
 ROLLBACK;
 -- reference tables cannot have multiple shards
@@ -910,7 +910,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_ref'::regclass, 1420006::bigint, 't'::"char", NULL, NULL),
            ('test_ref'::regclass, 1420007::bigint, 't'::"char", NULL, NULL))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
 ERROR: relation "test_ref" has already at least one shard, adding more is not allowed
 ROLLBACK;
 -- finally, add a shard for reference tables
@@ -925,8 +925,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('test_ref'::regclass, 1420006::bigint, 't'::"char", NULL, NULL))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
- citus_internal_add_shard_metadata
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+ add_shard_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -946,8 +946,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue)
 AS (VALUES ('super_user_table'::regclass, 1420007::bigint, 't'::"char", '11'::text, '20'::text))
-SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
- citus_internal_add_shard_metadata
+SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
+ add_shard_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -966,9 +966,9 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 SET application_name to 'citus_internal gpid=10000000001';
 \set VERBOSITY terse
-WITH placement_data(shardid, shardstate, shardlength, groupid, placementid) AS
-(VALUES (-10, 1, 0::bigint, 1::int, 1500000::bigint))
-SELECT citus_internal_add_placement_metadata(shardid, shardstate, shardlength, groupid, placementid) FROM placement_data;
+WITH placement_data(shardid, shardlength, groupid, placementid) AS
+(VALUES (-10, 0::bigint, 1::int, 1500000::bigint))
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ERROR: could not find valid entry for shard xxxxx
 ROLLBACK;
 -- invalid placementid
@@ -983,7 +983,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH placement_data(shardid, shardlength, groupid, placementid) AS
 (VALUES (1420000, 0::bigint, 1::int, -10))
-SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ERROR: Shard placement has invalid placement id (-10) for shard(1420000)
 ROLLBACK;
 -- non-existing shard
@@ -998,7 +998,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH placement_data(shardid, shardlength, groupid, placementid) AS
 (VALUES (1430100, 0::bigint, 1::int, 10))
-SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ERROR: could not find valid entry for shard xxxxx
 ROLLBACK;
 -- non-existing node with non-existing node-id 123123123
@@ -1013,7 +1013,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH placement_data(shardid, shardlength, groupid, placementid) AS
 (VALUES ( 1420000, 0::bigint, 123123123::int, 1500000))
-SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ERROR: Node with group id 123123123 for shard placement xxxxx does not exist
 ROLLBACK;
 -- create a volatile function that returns the local node id
@@ -1044,7 +1044,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 WITH placement_data(shardid, shardlength, groupid, placementid) AS
 (VALUES (1420000, 0::bigint, get_node_id(), 1500000),
         (1420000, 0::bigint, get_node_id(), 1500001))
-SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ERROR: duplicate key value violates unique constraint "placement_shardid_groupid_unique_index"
 ROLLBACK;
 -- shard is not owned by us
@@ -1059,7 +1059,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH placement_data(shardid, shardlength, groupid, placementid) AS
 (VALUES (1420007, 0::bigint, get_node_id(), 1500000))
-SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
 ERROR: must be owner of table super_user_table
 ROLLBACK;
 -- sucessfully add placements
@@ -1085,8 +1085,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
         (1420011, 0::bigint, get_node_id(), 1500009),
         (1420012, 0::bigint, get_node_id(), 1500010),
         (1420013, 0::bigint, get_node_id(), 1500011))
-SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
- citus_internal_add_placement_metadata
+SELECT citus_internal.add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
+ add_placement_metadata
 ---------------------------------------------------------------------
 
 
@@ -1197,7 +1197,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(shardid)
 AS (VALUES (1420007))
-SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data;
+SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data;
 ERROR: must be owner of table super_user_table
 ROLLBACK;
 -- the user cannot delete non-existing shards
@@ -1212,7 +1212,7 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(shardid)
 AS (VALUES (1420100))
-SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data;
+SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data;
 ERROR: Shard id does not exists: 1420100
 ROLLBACK;
 -- sucessfully delete shards
@@ -1239,8 +1239,8 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 \set VERBOSITY terse
 WITH shard_data(shardid)
 AS (VALUES (1420000))
-SELECT citus_internal_delete_shard_metadata(shardid) FROM shard_data;
- citus_internal_delete_shard_metadata
+SELECT citus_internal.delete_shard_metadata(shardid) FROM shard_data;
+ delete_shard_metadata
 ---------------------------------------------------------------------
 
 (1 row)
@@ -1343,13 +1343,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 SET application_name to 'citus_internal gpid=10000000001';
 \set VERBOSITY terse
-SELECT citus_internal_add_partition_metadata ('test_5'::regclass, 'h', 'int_col', 500, 's');
- citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_5'::regclass, 'h', 'int_col', 500, 's');
+ add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test_6'::regclass, 'h', 'text_col', 500, 's');
+SELECT citus_internal.add_partition_metadata ('test_6'::regclass, 'h', 'text_col', 500, 's');
 ERROR: cannot colocate tables test_6 and test_5
 ROLLBACK;
 BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
@@ -1367,13 +1367,13 @@ BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
 SET application_name to 'citus_internal gpid=10000000001';
 \set VERBOSITY terse
-SELECT citus_internal_add_partition_metadata ('test_7'::regclass, 'h', 'text_col', 500, 's');
- citus_internal_add_partition_metadata
+SELECT citus_internal.add_partition_metadata ('test_7'::regclass, 'h', 'text_col', 500, 's');
+ add_partition_metadata
 ---------------------------------------------------------------------
 
 (1 row)
 
-SELECT citus_internal_add_partition_metadata ('test_8'::regclass, 'h', 'text_col', 500, 's');
+SELECT citus_internal.add_partition_metadata ('test_8'::regclass, 'h', 'text_col', 500, 's');
 ERROR: cannot colocate tables test_8 and test_7
 ROLLBACK;
 -- we don't need the table/schema anymore
@@ -587,7 +587,7 @@ SET client_min_messages TO DEBUG;
 -- verify that we can create connections only with users with login privileges.
 SET ROLE role_without_login;
 SELECT citus_check_connection_to_node('localhost', :worker_1_port);
-WARNING: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "role_without_login" is not permitted to log in
+WARNING: connection to the remote node role_without_login@localhost:xxxxx failed with the following error: FATAL: role "role_without_login" is not permitted to log in
 citus_check_connection_to_node
 ---------------------------------------------------------------------
 f
@@ -730,7 +730,7 @@ ALTER USER test_user WITH nologin;
 \c - test_user - :master_port
 -- reissue copy, and it should fail
 COPY numbers_hash FROM STDIN WITH (FORMAT 'csv');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in
+ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in
 -- verify shards in the none of the workers as marked invalid
 SELECT shardid, shardstate, nodename, nodeport
 FROM pg_dist_shard_placement join pg_dist_shard using(shardid)
@@ -749,7 +749,7 @@ SELECT shardid, shardstate, nodename, nodeport
 
 -- try to insert into a reference table copy should fail
 COPY numbers_reference FROM STDIN WITH (FORMAT 'csv');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in
+ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in
 -- verify shards for reference table are still valid
 SELECT shardid, shardstate, nodename, nodeport
 FROM pg_dist_shard_placement join pg_dist_shard using(shardid)
@@ -765,7 +765,7 @@ SELECT shardid, shardstate, nodename, nodeport
 -- since it can not insert into either copies of a shard. shards are expected to
 -- stay valid since the operation is rolled back.
 COPY numbers_hash_other FROM STDIN WITH (FORMAT 'csv');
-ERROR: connection to the remote node localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in
+ERROR: connection to the remote node test_user@localhost:xxxxx failed with the following error: FATAL: role "test_user" is not permitted to log in
 -- verify shards for numbers_hash_other are still valid
 -- since copy has failed altogether
 SELECT shardid, shardstate, nodename, nodeport
@@ -1420,15 +1420,27 @@ SELECT * FROM multi_extension.print_extension_changes();
 -- Snapshot of state at 12.2-1
 ALTER EXTENSION citus UPDATE TO '12.2-1';
 SELECT * FROM multi_extension.print_extension_changes();
-previous_object | current_object
+previous_object | current_object
 ---------------------------------------------------------------------
- | function citus_internal.commit_management_command_2pc() void
- | function citus_internal.execute_command_on_remote_nodes_as_user(text,text) void
- | function citus_internal.mark_object_distributed(oid,text,oid) void
- | function citus_internal.start_management_transaction(xid8) void
- | function citus_internal_acquire_citus_advisory_object_class_lock(integer,cstring) void
- | function citus_internal_database_command(text) void
-(6 rows)
+ | function citus_internal.acquire_citus_advisory_object_class_lock(integer,cstring) void
+ | function citus_internal.add_colocation_metadata(integer,integer,integer,regtype,oid) void
+ | function citus_internal.add_object_metadata(text,text[],text[],integer,integer,boolean) void
+ | function citus_internal.add_partition_metadata(regclass,"char",text,integer,"char") void
+ | function citus_internal.add_placement_metadata(bigint,bigint,integer,bigint) void
+ | function citus_internal.add_shard_metadata(regclass,bigint,"char",text,text) void
+ | function citus_internal.add_tenant_schema(oid,integer) void
+ | function citus_internal.adjust_local_clock_to_remote(cluster_clock) void
+ | function citus_internal.commit_management_command_2pc() void
+ | function citus_internal.database_command(text) void
+ | function citus_internal.delete_colocation_metadata(integer) void
+ | function citus_internal.delete_partition_metadata(regclass) void
+ | function citus_internal.delete_placement_metadata(bigint) void
+ | function citus_internal.delete_shard_metadata(bigint) void
+ | function citus_internal.delete_tenant_schema(oid) void
+ | function citus_internal.execute_command_on_remote_nodes_as_user(text,text) void
+ | function citus_internal.mark_object_distributed(oid,text,oid,text) void
+ | function citus_internal.start_management_transaction(xid8) void
+(18 rows)
 
 DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
 -- show running version