Compare commits

...

11 Commits

Author SHA1 Message Date
Hanefi Onaldi 4886473f35 Bump Citus version to 11.2.1
This commit is slightly smaller than similar commits that bump the Citus
version. This is mainly because #6811 attempted to bump the version to
11.2.1 but missed some portion of the generated scripts. This commit
contains only the changes that were missing from commit with sha
5153986ae4.
2023-04-25 13:21:41 +03:00
Hanefi Onaldi abb9da60fe Add changelog entries for 11.2.1
(cherry picked from commit f7fd0dbae7)
2023-04-25 13:21:41 +03:00
Emel Şimşek d883c96098 When creating a HTAB we need to use HASH_COMPARE flag in order to set a user defined comparison function. (#6845)
DESCRIPTION: Fixes memory errors, caught by valgrind, of type
"conditional jump or move depends on uninitialized value"

When running Citus tests under Postgres with valgrind, the test cases
that call into the `NonBlockingShardSplit` function produce valgrind
errors of type "conditional jump or move depends on uninitialized value".

The issue is caused by creating the HTAB incorrectly. The HASH_COMPARE
flag should have been used when creating an HTAB with a user-defined
comparison function. In the absence of the HASH_COMPARE flag, the HTAB
falls back to the built-in string comparison function, and valgrind then
flags that the match function was never assigned to the user-defined
function as intended.

Fixes #6835

(cherry picked from commit e7a25d82c9)
2023-04-24 14:46:31 +03:00
Gokhan Gulbiz 5153986ae4
Backport identity column improvements to v11.2 (#6811)
(Had to cherry-pick [48d8f53](48d8f53b6c) so that bookworm packaging does not fail)

---------

Co-authored-by: Gürkan İndibay <gindibay@microsoft.com>
2023-04-06 09:46:00 +03:00
Emel Şimşek 19b5be65c9 Exclude-Generated-Columns-In-Copy (#6721)
DESCRIPTION: Fixes a bug in shard copy operations.

For copying shards in both shard move and shard split operations, Citus
uses the COPY statement.

A COPY statement of the form
`COPY target_shard FROM STDIN;`
throws an error when the shard table has a GENERATED column.

To fix this issue, we need to exclude the GENERATED columns from the
COPY statement and from the matching SELECT statement. Hence this fix
converts them to the following form:
```
COPY target_shard (col1, col2, ..., coln) FROM STDIN;
SELECT col1, col2, ..., coln FROM source_shard;
```
where the column list (col1, col2, ..., coln) does not include any
GENERATED columns. GENERATED column values are computed in the
target_shard as the rows are inserted.

Fixes #6705.

---------

Co-authored-by: Teja Mupparti <temuppar@microsoft.com>
Co-authored-by: aykut-bozkurt <51649454+aykut-bozkurt@users.noreply.github.com>
Co-authored-by: Jelte Fennema <jelte.fennema@microsoft.com>
Co-authored-by: Gürkan İndibay <gindibay@microsoft.com>
(cherry picked from commit 4043abd5aa)
2023-03-07 20:05:41 +03:00
Jelte Fennema 2cf7ec5e26 Use pg_total_relation_size in citus_shards (#6748)
DESCRIPTION: Correctly report shard size in citus_shards view

When looking at citus_shards, people are interested in the actual size
that all the data related to the shard takes up on disk.
`pg_total_relation_size` is the function to use for that purpose. The
previously used `pg_relation_size` does not include indexes or TOAST
data. Especially the missing TOAST data can have an enormous impact on
the reported shard size.

(cherry picked from commit b489d763e1)
2023-03-06 11:37:48 +01:00
aykut-bozkurt 7619b50b7f fix memory leak during altering distributed table with a lot of partitions and shards (#6726)
Two improvements to prevent memory leaks when altering or undistributing
distributed tables with many partitions and shards:

1. Free memory after each call to ConvertTable so that colocated and partition tables handled by
`AlterDistributedTable`, `UndistributeTable`, or
`AlterTableSetAccessMethod` do not cause memory usage
to grow.
2. Free memory while executing the attach partition commands for each partition table in
`AlterDistributedTable`.

DESCRIPTION: Fixes a memory leak when altering a distributed table
with many partitions and shards.

Fixes https://github.com/citusdata/citus/issues/6503.
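
As a rough sketch, the first improvement follows the standard Postgres
memory-context pattern below; this mirrors the new `ConvertTable` wrapper
shown in the diff further down, so the names come from that diff rather
than being new API.

```c
/* Run the conversion in a short-lived memory context, keep only the result. */
MemoryContext convertTableContext =
	AllocSetContextCreate(CurrentMemoryContext,
						  "citus_convert_table_context",
						  ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(convertTableContext);

/* every scratch allocation made during the conversion lands in convertTableContext */
TableConversionReturn *tableConversionReturn = ConvertTableInternal(con);

MemoryContextSwitchTo(oldContext);

/* copy the few strings that must outlive the conversion into the caller's context */
TableConversionReturn *tableConversionReturnCopy =
	CopyTableConversionReturnIntoCurrentContext(tableConversionReturn);

/* release everything else at once */
MemoryContextDelete(convertTableContext);
```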

(cherry picked from commit e2654deeae)
2023-02-28 21:25:39 +03:00
aykut-bozkurt 0626f366c1 fix single tuple result memory leak (#6724)
We should not forget to free the PGresult when we receive a single-tuple
result from an internal backend. Single-tuple results are normally freed
by our ReceiveResults in the `tupleDescriptor != NULL` flow, but not in
the `tupleDescriptor == NULL` flow. See PR #6722 for details.
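
A hedged sketch of the leaking path (the actual fix, a single `PQclear`
call, appears in the executor diff further below); `pgConn` here stands in
for the session's libpq connection:

```c
/* Fragment sketching the per-result loop of a ReceiveResults-like function. */
PGresult *result = PQgetResult(pgConn);
TupleDesc tupleDescriptor = tupleDest->tupleDescForQuery(tupleDest, queryIndex);
if (tupleDescriptor == NULL)
{
	/* previously missing: without this, the single-tuple PGresult leaked */
	PQclear(result);
	continue;
}
```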

DESCRIPTION: Fixes a memory leak with query results that return a
single row.

(cherry picked from commit 9e69dd0e7f)
2023-02-17 14:31:34 +03:00
Jelte Fennema e7ac6fd0c1 Fix dubious ownership error from git (#6703)
We started getting this error in CI:
```
Summary coverage rate:
  lines......: 43.4% (28347 of 65321 lines)
  functions..: 53.2% (2544 of 4786 functions)
  branches...: no data found
fatal: detected dubious ownership in repository at '/home/circleci/project'
To add an exception for this directory, call:

	git config --global --add safe.directory /home/circleci/project
Error: exit status 128
```

This fixes that by running the proposed command in CI. This error is
related to a CVE that does not apply to this case, since this is not a
multi-user system.

Commit on git itself that fixed the CVE:
8959555cee

(cherry picked from commit 69b7f23932)
2023-02-10 16:40:11 +01:00
Jelte Fennema 341fdb32fc Support compilation and run tests on latest PG versions (#6711)
Postgres got minor updates; this starts using the images with the latest
versions for our tests.

These new Postgres versions caused a compilation issue on PG14 and PG13,
because a function that we had already backported ourselves is now also
backported by Postgres. Since this backport is a static inline function,
it does not matter who provides it, and there will be no linkage errors
when either running old Citus packages on new PG versions or the other
way around.
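
Concretely, the fix is a version-guarded `static inline` backport, roughly
as sketched below (the actual guard is in the compat header diff further
down). Because the function is `static inline`, it has no exported symbol,
so an old Citus build keeps using its own copy on a newer Postgres and
vice versa; the guard only avoids a compile-time redefinition against the
newer PG headers.

```c
/* Sketch: provide our own RelationGetSmgr only on minor releases that predate
 * the upstream backport (PG 13.10 / 14.7); the body matches the Postgres version. */
#if (PG_VERSION_NUM >= 130000 && PG_VERSION_NUM < 130010) || \
	(PG_VERSION_NUM >= 140000 && PG_VERSION_NUM < 140007)
static inline SMgrRelation
RelationGetSmgr(Relation rel)
{
	if (unlikely(rel->rd_smgr == NULL))
		smgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_node, rel->rd_backend));
	return rel->rd_smgr;
}
#endif
```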

(cherry picked from commit 3200187757)
2023-02-10 16:34:46 +01:00
Onur Tirtir c173b13b73 Bump citus version to 11.2.0 2023-02-03 11:13:35 +03:00
48 changed files with 1915 additions and 1193 deletions

View File

@ -6,19 +6,19 @@ orbs:
parameters:
image_suffix:
type: string
default: '-v7e4468f'
default: '-vc4b1573'
pg13_version:
type: string
default: '13.9'
default: '13.10'
pg14_version:
type: string
default: '14.6'
default: '14.7'
pg15_version:
type: string
default: '15.1'
default: '15.2'
upgrade_pg_versions:
type: string
default: '13.9-14.6-15.1'
default: '13.10-14.7-15.2'
style_checker_tools_version:
type: string
default: '0.8.18'
@ -114,6 +114,9 @@ commands:
lcov --remove lcov.info -o lcov.info '/usr/*'
sed "s=^SF:$PWD/=SF:=g" -i lcov.info # relative pats are required by codeclimate
mkdir -p /tmp/codeclimate
# We started getting a permissions error. This fixes it, and since
# we're not on a multi-user system, this is safe to do.
git config --global --add safe.directory /home/circleci/project
cc-test-reporter format-coverage -t lcov -o /tmp/codeclimate/$CIRCLE_JOB.json lcov.info
- persist_to_workspace:
root: /tmp

View File

@ -157,7 +157,6 @@ jobs:
apt-get update -y
## Install required packages to execute packaging tools for deb based distros
apt install python3-dev python3-pip -y
sudo apt-get purge -y python3-yaml
python3 -m pip install --upgrade pip setuptools==57.5.0
apt-get install python3-dev python3-pip -y
apt-get purge -y python3-yaml
./.github/packaging/validate_build_output.sh "deb"

View File

@ -1,3 +1,25 @@
### citus v11.2.1 (April 20, 2023) ###
* Correctly reports shard size in `citus_shards` view (#6748)
* Fixes a bug in shard copy operations (#6721)
* Fixes a bug with `INSERT .. SELECT` queries with identity columns (#6802)
* Fixes an uninitialized memory access in shard split API (#6845)
* Fixes compilation for PG13.10 and PG14.7 (#6711)
* Fixes memory leak in `alter_distributed_table` (#6726)
* Fixes memory leak issue with query results that returns single row (#6724)
* Prevents using `alter_distributed_table` and `undistribute_table` UDFs when a
table has identity columns (#6738)
* Prevents using identity columns on data types other than `bigint` on
distributed tables (#6738)
### citus v11.2.0 (January 30, 2023) ###
* Adds support for outer joins with reference tables / complex subquery-CTEs

configure (vendored)
View File

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 11.2devel.
# Generated by GNU Autoconf 2.69 for Citus 11.2.1.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='11.2devel'
PACKAGE_STRING='Citus 11.2devel'
PACKAGE_VERSION='11.2.1'
PACKAGE_STRING='Citus 11.2.1'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -1262,7 +1262,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 11.2devel to adapt to many kinds of systems.
\`configure' configures Citus 11.2.1 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1324,7 +1324,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 11.2devel:";;
short | recursive ) echo "Configuration of Citus 11.2.1:";;
esac
cat <<\_ACEOF
@ -1429,7 +1429,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 11.2devel
Citus configure 11.2.1
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1912,7 +1912,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 11.2devel, which was
It was created by Citus $as_me 11.2.1, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -5393,7 +5393,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 11.2devel, which was
This file was extended by Citus $as_me 11.2.1, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5455,7 +5455,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 11.2devel
Citus config.status 11.2.1
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [11.2devel])
AC_INIT([Citus], [11.2.1])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands

View File

@ -1,6 +1,6 @@
# Citus extension
comment = 'Citus distributed database'
default_version = '11.2-1'
default_version = '11.2-2'
module_pathname = '$libdir/citus'
relocatable = false
schema = pg_catalog

View File

@ -183,6 +183,7 @@ static TableConversionReturn * AlterDistributedTable(TableConversionParameters *
static TableConversionReturn * AlterTableSetAccessMethod(
TableConversionParameters *params);
static TableConversionReturn * ConvertTable(TableConversionState *con);
static TableConversionReturn * ConvertTableInternal(TableConversionState *con);
static bool SwitchToSequentialAndLocalExecutionIfShardNameTooLong(char *relationName,
char *longestShardName);
static void DropIndexesNotSupportedByColumnar(Oid relationId,
@ -216,6 +217,8 @@ static bool WillRecreateForeignKeyToReferenceTable(Oid relationId,
static void WarningsForDroppingForeignKeysWithDistributedTables(Oid relationId);
static void ErrorIfUnsupportedCascadeObjects(Oid relationId);
static bool DoesCascadeDropUnsupportedObject(Oid classId, Oid id, HTAB *nodeMap);
static TableConversionReturn * CopyTableConversionReturnIntoCurrentContext(
TableConversionReturn *tableConversionReturn);
PG_FUNCTION_INFO_V1(undistribute_table);
PG_FUNCTION_INFO_V1(alter_distributed_table);
@ -402,6 +405,7 @@ UndistributeTable(TableConversionParameters *params)
params->conversionType = UNDISTRIBUTE_TABLE;
params->shardCountIsNull = true;
TableConversionState *con = CreateTableConversion(params);
return ConvertTable(con);
}
@ -441,6 +445,7 @@ AlterDistributedTable(TableConversionParameters *params)
ereport(DEBUG1, (errmsg("setting multi shard modify mode to sequential")));
SetLocalMultiShardModifyModeToSequential();
}
return ConvertTable(con);
}
@ -511,9 +516,9 @@ AlterTableSetAccessMethod(TableConversionParameters *params)
/*
* ConvertTable is used for converting a table into a new table with different properties.
* The conversion is done by creating a new table, moving everything to the new table and
* dropping the old one. So the oid of the table is not preserved.
* ConvertTableInternal is used for converting a table into a new table with different
* properties. The conversion is done by creating a new table, moving everything to the
* new table and dropping the old one. So the oid of the table is not preserved.
*
* The new table will have the same name, columns and rows. It will also have partitions,
* views, sequences of the old table. Finally it will have everything created by
@ -532,7 +537,7 @@ AlterTableSetAccessMethod(TableConversionParameters *params)
* in case you add a new way to return from this function.
*/
TableConversionReturn *
ConvertTable(TableConversionState *con)
ConvertTableInternal(TableConversionState *con)
{
InTableTypeConversionFunctionCall = true;
@ -869,10 +874,77 @@ ConvertTable(TableConversionState *con)
SetLocalEnableLocalReferenceForeignKeys(oldEnableLocalReferenceForeignKeys);
InTableTypeConversionFunctionCall = false;
return ret;
}
/*
* CopyTableConversionReturnIntoCurrentContext copies given tableConversionReturn
* into CurrentMemoryContext.
*/
static TableConversionReturn *
CopyTableConversionReturnIntoCurrentContext(TableConversionReturn *tableConversionReturn)
{
TableConversionReturn *tableConversionReturnCopy = NULL;
if (tableConversionReturn)
{
tableConversionReturnCopy = palloc0(sizeof(TableConversionReturn));
List *copyForeignKeyCommands = NIL;
char *foreignKeyCommand = NULL;
foreach_ptr(foreignKeyCommand, tableConversionReturn->foreignKeyCommands)
{
char *copyForeignKeyCommand = MemoryContextStrdup(CurrentMemoryContext,
foreignKeyCommand);
copyForeignKeyCommands = lappend(copyForeignKeyCommands,
copyForeignKeyCommand);
}
tableConversionReturnCopy->foreignKeyCommands = copyForeignKeyCommands;
}
return tableConversionReturnCopy;
}
/*
* ConvertTable is a wrapper for ConvertTableInternal to persist only
* TableConversionReturn and delete all other allocations.
*/
static TableConversionReturn *
ConvertTable(TableConversionState *con)
{
/*
* We do not allow alter_distributed_table and undistribute_table operations
* for tables with identity columns. This is because we do not have a proper way
* of keeping sequence states consistent across the cluster.
*/
ErrorIfTableHasIdentityColumn(con->relationId);
/*
* when there are many partitions or colocated tables, memory usage is
* accumulated. Free context for each call to ConvertTable.
*/
MemoryContext convertTableContext =
AllocSetContextCreate(CurrentMemoryContext,
"citus_convert_table_context",
ALLOCSET_DEFAULT_SIZES);
MemoryContext oldContext = MemoryContextSwitchTo(convertTableContext);
TableConversionReturn *tableConversionReturn = ConvertTableInternal(con);
MemoryContextSwitchTo(oldContext);
/* persist TableConversionReturn in oldContext */
TableConversionReturn *tableConversionReturnCopy =
CopyTableConversionReturnIntoCurrentContext(tableConversionReturn);
/* delete convertTableContext */
MemoryContextDelete(convertTableContext);
return tableConversionReturnCopy;
}
/*
* DropIndexesNotSupportedByColumnar is a helper function used during accces
* method conversion to drop the indexes that are not supported by columnarAM.
@ -1523,96 +1595,6 @@ CreateMaterializedViewDDLCommand(Oid matViewOid)
}
/*
* This function marks all the identity sequences as distributed on the given table.
*/
static void
MarkIdentitiesAsDistributed(Oid targetRelationId)
{
Relation relation = relation_open(targetRelationId, AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(relation);
relation_close(relation, NoLock);
bool missingSequenceOk = false;
for (int attributeIndex = 0; attributeIndex < tupleDescriptor->natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, attributeIndex);
if (attributeForm->attidentity)
{
Oid seqOid = getIdentitySequence(targetRelationId, attributeForm->attnum,
missingSequenceOk);
ObjectAddress seqAddress = { 0 };
ObjectAddressSet(seqAddress, RelationRelationId, seqOid);
MarkObjectDistributed(&seqAddress);
}
}
}
/*
* This function returns sql statements to rename identites on the given table
*/
static void
PrepareRenameIdentitiesCommands(Oid sourceRelationId, Oid targetRelationId,
List **outCoordinatorCommands, List **outWorkerCommands)
{
Relation targetRelation = relation_open(targetRelationId, AccessShareLock);
TupleDesc targetTupleDescriptor = RelationGetDescr(targetRelation);
relation_close(targetRelation, NoLock);
bool missingSequenceOk = false;
for (int attributeIndex = 0; attributeIndex < targetTupleDescriptor->natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(targetTupleDescriptor,
attributeIndex);
if (attributeForm->attidentity)
{
char *columnName = NameStr(attributeForm->attname);
Oid targetSequenceOid = getIdentitySequence(targetRelationId,
attributeForm->attnum,
missingSequenceOk);
char *targetSequenceName = generate_relation_name(targetSequenceOid, NIL);
Oid sourceSequenceOid = getIdentitySequence(sourceRelationId,
attributeForm->attnum,
missingSequenceOk);
char *sourceSequenceName = generate_relation_name(sourceSequenceOid, NIL);
/* to rename sequence on the coordinator */
*outCoordinatorCommands = lappend(*outCoordinatorCommands, psprintf(
"SET citus.enable_ddl_propagation TO OFF; ALTER SEQUENCE %s RENAME TO %s; RESET citus.enable_ddl_propagation;",
quote_identifier(
targetSequenceName),
quote_identifier(
sourceSequenceName)));
/* update workers to use existing sequence and drop the new one generated by PG */
bool missingTableOk = true;
*outWorkerCommands = lappend(*outWorkerCommands,
GetAlterColumnWithNextvalDefaultCmd(
sourceSequenceOid, sourceRelationId,
columnName,
missingTableOk));
/* drop the sequence generated by identity column */
*outWorkerCommands = lappend(*outWorkerCommands, psprintf(
"DROP SEQUENCE IF EXISTS %s",
quote_identifier(
targetSequenceName)));
}
}
}
/*
* ReplaceTable replaces the source table with the target table.
* It moves all the rows of the source table to target table with INSERT SELECT.
@ -1671,24 +1653,6 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
ExecuteQueryViaSPI(query->data, SPI_OK_INSERT);
}
/*
* Drop identity dependencies (sequences marked as DEPENDENCY_INTERNAL) on the workers
* to keep their states after the source table is dropped.
*/
List *ownedIdentitySequences = getOwnedSequences_internal(sourceId, 0,
DEPENDENCY_INTERNAL);
if (ownedIdentitySequences != NIL && ShouldSyncTableMetadata(sourceId))
{
char *qualifiedTableName = quote_qualified_identifier(schemaName, sourceName);
StringInfo command = makeStringInfo();
appendStringInfo(command,
"SELECT pg_catalog.worker_drop_sequence_dependency(%s);",
quote_literal_cstr(qualifiedTableName));
SendCommandToWorkersWithMetadata(command->data);
}
/*
* Modify regular sequence dependencies (sequences marked as DEPENDENCY_AUTO)
*/
@ -1748,23 +1712,6 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
quote_qualified_identifier(schemaName, sourceName))));
}
/*
* We need to prepare rename identities commands before dropping the original table,
* otherwise we can't find the original names of the identity sequences.
* We prepare separate commands for the coordinator and the workers because:
* In the coordinator, we simply need to rename the identity sequences
* to their names on the old table, because right now the identity
* sequences have default names generated by Postgres with the creation of the new table
* In the workers, we have not dropped the original identity sequences,
* so what we do is we alter the columns and set their default to the
* original identity sequences, and after that we drop the new sequences.
*/
List *coordinatorCommandsToRenameIdentites = NIL;
List *workerCommandsToRenameIdentites = NIL;
PrepareRenameIdentitiesCommands(sourceId, targetId,
&coordinatorCommandsToRenameIdentites,
&workerCommandsToRenameIdentites);
resetStringInfo(query);
appendStringInfo(query, "DROP %sTABLE %s CASCADE",
IsForeignTable(sourceId) ? "FOREIGN " : "",
@ -1782,27 +1729,6 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
quote_qualified_identifier(schemaName, targetName),
quote_identifier(sourceName));
ExecuteQueryViaSPI(query->data, SPI_OK_UTILITY);
char *coordinatorCommand = NULL;
foreach_ptr(coordinatorCommand, coordinatorCommandsToRenameIdentites)
{
ExecuteQueryViaSPI(coordinatorCommand, SPI_OK_UTILITY);
}
char *workerCommand = NULL;
foreach_ptr(workerCommand, workerCommandsToRenameIdentites)
{
SendCommandToWorkersWithMetadata(workerCommand);
}
/*
* To preserve identity sequences states in case of redistributing the table again,
* we don't drop them when we undistribute a table. To maintain consistency and
* avoid future problems if we redistribute the table, we want to apply all changes happening to
* the identity sequence in the coordinator to their corresponding sequences in the workers as well.
* That's why we have to mark identity sequences as distributed
*/
MarkIdentitiesAsDistributed(targetId);
}

View File

@ -1131,7 +1131,7 @@ DropIdentitiesOnTable(Oid relationId)
{
Relation relation = relation_open(relationId, AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(relation);
relation_close(relation, NoLock);
List *dropCommandList = NIL;
for (int attributeIndex = 0; attributeIndex < tupleDescriptor->natts;
attributeIndex++)
@ -1151,16 +1151,24 @@ DropIdentitiesOnTable(Oid relationId)
qualifiedTableName,
columnName);
dropCommandList = lappend(dropCommandList, dropCommand->data);
}
}
relation_close(relation, NoLock);
char *dropCommand = NULL;
foreach_ptr(dropCommand, dropCommandList)
{
/*
* We need to disable/enable ddl propagation for this command, to prevent
* sending unnecessary ALTER COLUMN commands for partitions, to MX workers.
*/
ExecuteAndLogUtilityCommandList(list_make3(DISABLE_DDL_PROPAGATION,
dropCommand->data,
dropCommand,
ENABLE_DDL_PROPAGATION));
}
}
}
/*

View File

@ -1190,7 +1190,7 @@ EnsureSequenceTypeSupported(Oid seqOid, Oid attributeTypeId, Oid ownerRelationId
foreach_oid(citusTableId, citusTableIdList)
{
List *seqInfoList = NIL;
GetDependentSequencesWithRelation(citusTableId, &seqInfoList, 0);
GetDependentSequencesWithRelation(citusTableId, &seqInfoList, 0, DEPENDENCY_AUTO);
SequenceInfo *seqInfo = NULL;
foreach_ptr(seqInfo, seqInfoList)
@ -1267,7 +1267,7 @@ EnsureRelationHasCompatibleSequenceTypes(Oid relationId)
{
List *seqInfoList = NIL;
GetDependentSequencesWithRelation(relationId, &seqInfoList, 0);
GetDependentSequencesWithRelation(relationId, &seqInfoList, 0, DEPENDENCY_AUTO);
EnsureDistributedSequencesHaveOneType(relationId, seqInfoList);
}
@ -1608,6 +1608,8 @@ EnsureRelationCanBeDistributed(Oid relationId, Var *distributionColumn,
{
Oid parentRelationId = InvalidOid;
ErrorIfTableHasUnsupportedIdentityColumn(relationId);
EnsureLocalTableEmptyIfNecessary(relationId, distributionMethod);
/* user really wants triggers? */

View File

@ -370,7 +370,7 @@ GetDependencyCreateDDLCommands(const ObjectAddress *dependency)
bool creatingShellTableOnRemoteNode = true;
List *tableDDLCommands = GetFullTableCreationCommands(relationId,
WORKER_NEXTVAL_SEQUENCE_DEFAULTS,
INCLUDE_IDENTITY_AS_SEQUENCE_DEFAULTS,
INCLUDE_IDENTITY,
creatingShellTableOnRemoteNode);
TableDDLCommand *tableDDLCommand = NULL;
foreach_ptr(tableDDLCommand, tableDDLCommands)

View File

@ -33,7 +33,8 @@
/* Local functions forward declarations for helper functions */
static bool OptionsSpecifyOwnedBy(List *optionList, Oid *ownedByTableId);
static Oid SequenceUsedInDistributedTable(const ObjectAddress *sequenceAddress);
static Oid SequenceUsedInDistributedTable(const ObjectAddress *sequenceAddress, char
depType);
static List * FilterDistributedSequences(GrantStmt *stmt);
@ -183,7 +184,7 @@ ExtractDefaultColumnsAndOwnedSequences(Oid relationId, List **columnNameList,
char *columnName = NameStr(attributeForm->attname);
List *columnOwnedSequences =
getOwnedSequences_internal(relationId, attributeIndex + 1, 0);
getOwnedSequences_internal(relationId, attributeIndex + 1, DEPENDENCY_AUTO);
if (attributeForm->atthasdef && list_length(columnOwnedSequences) == 0)
{
@ -453,21 +454,22 @@ PreprocessAlterSequenceStmt(Node *node, const char *queryString,
/* the code-path only supports a single object */
Assert(list_length(addresses) == 1);
/* We have already asserted that we have exactly 1 address in the addresses. */
ObjectAddress *address = linitial(addresses);
/* error out if the sequence is distributed */
if (IsAnyObjectDistributed(addresses))
if (IsAnyObjectDistributed(addresses) || SequenceUsedInDistributedTable(address,
DEPENDENCY_INTERNAL))
{
ereport(ERROR, (errmsg(
"Altering a distributed sequence is currently not supported.")));
}
/* We have already asserted that we have exactly 1 address in the addresses. */
ObjectAddress *address = linitial(addresses);
/*
* error out if the sequence is used in a distributed table
* and this is an ALTER SEQUENCE .. AS .. statement
*/
Oid citusTableId = SequenceUsedInDistributedTable(address);
Oid citusTableId = SequenceUsedInDistributedTable(address, DEPENDENCY_AUTO);
if (citusTableId != InvalidOid)
{
List *options = stmt->options;
@ -497,16 +499,19 @@ PreprocessAlterSequenceStmt(Node *node, const char *queryString,
* SequenceUsedInDistributedTable returns true if the argument sequence
* is used as the default value of a column in a distributed table.
* Returns false otherwise
* See DependencyType for the possible values of depType.
* We use DEPENDENCY_INTERNAL for sequences created by identity column.
* DEPENDENCY_AUTO for regular sequences.
*/
static Oid
SequenceUsedInDistributedTable(const ObjectAddress *sequenceAddress)
SequenceUsedInDistributedTable(const ObjectAddress *sequenceAddress, char depType)
{
List *citusTableIdList = CitusTableTypeIdList(ANY_CITUS_TABLE_TYPE);
Oid citusTableId = InvalidOid;
foreach_oid(citusTableId, citusTableIdList)
{
List *seqInfoList = NIL;
GetDependentSequencesWithRelation(citusTableId, &seqInfoList, 0);
GetDependentSequencesWithRelation(citusTableId, &seqInfoList, 0, depType);
SequenceInfo *seqInfo = NULL;
foreach_ptr(seqInfo, seqInfoList)
{

View File

@ -1378,29 +1378,6 @@ PreprocessAlterTableStmt(Node *node, const char *alterTableCommand,
}
}
/*
* We check for ADD COLUMN .. GENERATED .. AS IDENTITY expr
* since it uses a sequence as an internal dependency
* we should deparse the statement
*/
constraint = NULL;
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_IDENTITY)
{
deparseAT = true;
useInitialDDLCommandString = false;
/*
* Since we don't support constraints for AT_AddColumn
* we have to set is_not_null to true explicitly for identity columns
*/
ColumnDef *newColDef = copyObject(columnDefinition);
newColDef->constraints = NULL;
newColDef->is_not_null = true;
newCmd->def = (Node *) newColDef;
}
}
/*
* We check for ADD COLUMN .. SERIAL pseudo-type
@ -2539,34 +2516,6 @@ PostprocessAlterTableStmt(AlterTableStmt *alterTableStatement)
}
}
}
/*
* We check for ADD COLUMN .. GENERATED AS IDENTITY expr
* since it uses a seqeunce as an internal dependency
*/
constraint = NULL;
foreach_ptr(constraint, columnConstraints)
{
if (constraint->contype == CONSTR_IDENTITY)
{
AttrNumber attnum = get_attnum(relationId,
columnDefinition->colname);
bool missing_ok = false;
Oid seqOid = getIdentitySequence(relationId, attnum, missing_ok);
if (ShouldSyncTableMetadata(relationId))
{
needMetadataSyncForNewSequences = true;
alterTableDefaultNextvalCmd =
GetAddColumnWithNextvalDefaultCmd(seqOid,
relationId,
columnDefinition
->colname,
columnDefinition
->typeName);
}
}
}
}
/*
* We check for ALTER COLUMN .. SET DEFAULT nextval('user_defined_seq')
@ -3222,6 +3171,17 @@ ErrorIfUnsupportedAlterTableStmt(AlterTableStmt *alterTableStatement)
{
if (columnConstraint->contype == CONSTR_IDENTITY)
{
/*
* We currently don't support adding an identity column for an MX table
*/
if (ShouldSyncTableMetadata(relationId))
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"cannot execute ADD COLUMN commands involving identity"
" columns when metadata is synchronized to workers")));
}
/*
* Currently we don't support backfilling the new identity column with default values
* if the table is not empty
@ -3352,7 +3312,8 @@ ErrorIfUnsupportedAlterTableStmt(AlterTableStmt *alterTableStatement)
*/
AttrNumber attnum = get_attnum(relationId, command->name);
List *seqInfoList = NIL;
GetDependentSequencesWithRelation(relationId, &seqInfoList, attnum);
GetDependentSequencesWithRelation(relationId, &seqInfoList, attnum,
DEPENDENCY_AUTO);
if (seqInfoList != NIL)
{
ereport(ERROR, (errmsg("cannot execute ALTER COLUMN TYPE .. command "
@ -4011,3 +3972,59 @@ MakeNameListFromRangeVar(const RangeVar *rel)
return list_make1(makeString(rel->relname));
}
}
/*
* ErrorIfTableHasUnsupportedIdentityColumn errors out if the given table has any identity column other than bigint identity column.
*/
void
ErrorIfTableHasUnsupportedIdentityColumn(Oid relationId)
{
Relation relation = relation_open(relationId, AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(relation);
for (int attributeIndex = 0; attributeIndex < tupleDescriptor->natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, attributeIndex);
if (attributeForm->attidentity && attributeForm->atttypid != INT8OID)
{
char *qualifiedRelationName = generate_qualified_relation_name(relationId);
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"cannot complete operation on %s with smallint/int identity column",
qualifiedRelationName),
errhint(
"Use bigint identity column instead.")));
}
}
relation_close(relation, NoLock);
}
/*
* ErrorIfTableHasIdentityColumn errors out if the given table has identity column
*/
void
ErrorIfTableHasIdentityColumn(Oid relationId)
{
Relation relation = relation_open(relationId, AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(relation);
for (int attributeIndex = 0; attributeIndex < tupleDescriptor->natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, attributeIndex);
if (attributeForm->attidentity)
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg(
"cannot complete operation on a table with identity column")));
}
}
relation_close(relation, NoLock);
}

View File

@ -304,10 +304,7 @@ pg_get_sequencedef(Oid sequenceRelationId)
* When it's WORKER_NEXTVAL_SEQUENCE_DEFAULTS, the function creates the DEFAULT
* clause using worker_nextval('sequence') and not nextval('sequence')
* When IncludeIdentities is NO_IDENTITY, the function does not include identity column
* specifications. When it's INCLUDE_IDENTITY_AS_SEQUENCE_DEFAULTS, the function
* uses sequences and set them as default values for identity columns by using exactly
* the same approach with worker_nextval('sequence') & nextval('sequence') logic
* desribed above. When it's INCLUDE_IDENTITY it creates GENERATED .. AS IDENTIY clauses.
specifications. When it's INCLUDE_IDENTITY it creates GENERATED .. AS IDENTITY clauses.
*/
char *
pg_get_tableschemadef_string(Oid tableRelationId, IncludeSequenceDefaults
@ -403,26 +400,9 @@ pg_get_tableschemadef_string(Oid tableRelationId, IncludeSequenceDefaults
Oid seqOid = getIdentitySequence(RelationGetRelid(relation),
attributeForm->attnum, missing_ok);
char *sequenceName = generate_qualified_relation_name(seqOid);
if (includeIdentityDefaults == INCLUDE_IDENTITY_AS_SEQUENCE_DEFAULTS)
{
if (pg_get_sequencedef(seqOid)->seqtypid != INT8OID)
{
appendStringInfo(&buffer,
" DEFAULT worker_nextval(%s::regclass)",
quote_literal_cstr(sequenceName));
}
else
{
appendStringInfo(&buffer, " DEFAULT nextval(%s::regclass)",
quote_literal_cstr(sequenceName));
}
}
else if (includeIdentityDefaults == INCLUDE_IDENTITY)
if (includeIdentityDefaults == INCLUDE_IDENTITY)
{
Form_pg_sequence pgSequenceForm = pg_get_sequencedef(seqOid);
uint64 sequenceStart = nextval_internal(seqOid, false);
char *sequenceDef = psprintf(
" GENERATED %s AS IDENTITY (INCREMENT BY " INT64_FORMAT \
" MINVALUE " INT64_FORMAT " MAXVALUE "
@ -433,7 +413,8 @@ pg_get_tableschemadef_string(Oid tableRelationId, IncludeSequenceDefaults
"ALWAYS" : "BY DEFAULT",
pgSequenceForm->seqincrement,
pgSequenceForm->seqmin,
pgSequenceForm->seqmax, sequenceStart,
pgSequenceForm->seqmax,
pgSequenceForm->seqstart,
pgSequenceForm->seqcache,
pgSequenceForm->seqcycle ? "" : "NO ");
@ -1391,7 +1372,7 @@ convert_aclright_to_string(int aclright)
/*
* contain_nextval_expression_walker walks over expression tree and returns
* true if it contains call to 'nextval' function.
* true if it contains call to 'nextval' function or it has an identity column.
*/
bool
contain_nextval_expression_walker(Node *node, void *context)
@ -1401,6 +1382,13 @@ contain_nextval_expression_walker(Node *node, void *context)
return false;
}
/* check if the node contains an identity column */
if (IsA(node, NextValueExpr))
{
return true;
}
/* check if the node contains call to 'nextval' */
if (IsA(node, FuncExpr))
{
FuncExpr *funcExpr = (FuncExpr *) node;

View File

@ -4777,6 +4777,7 @@ ReceiveResults(WorkerSession *session, bool storeRows)
TupleDesc tupleDescriptor = tupleDest->tupleDescForQuery(tupleDest, queryIndex);
if (tupleDescriptor == NULL)
{
PQclear(result);
continue;
}

View File

@ -1834,7 +1834,7 @@ static List *
GetRelationSequenceDependencyList(Oid relationId)
{
List *seqInfoList = NIL;
GetDependentSequencesWithRelation(relationId, &seqInfoList, 0);
GetDependentSequencesWithRelation(relationId, &seqInfoList, 0, DEPENDENCY_AUTO);
List *seqIdList = NIL;
SequenceInfo *seqInfo = NULL;

View File

@ -1586,10 +1586,13 @@ GetAttributeTypeOid(Oid relationId, AttrNumber attnum)
* For both cases, we use the intermediate AttrDefault object from pg_depend.
* If attnum is specified, we only return the sequences related to that
* attribute of the relationId.
* See DependencyType for the possible values of depType.
* We use DEPENDENCY_INTERNAL for sequences created by identity column.
* DEPENDENCY_AUTO for regular sequences.
*/
void
GetDependentSequencesWithRelation(Oid relationId, List **seqInfoList,
AttrNumber attnum)
AttrNumber attnum, char depType)
{
Assert(*seqInfoList == NIL);
@ -1626,7 +1629,7 @@ GetDependentSequencesWithRelation(Oid relationId, List **seqInfoList,
if (deprec->classid == AttrDefaultRelationId &&
deprec->objsubid == 0 &&
deprec->refobjsubid != 0 &&
deprec->deptype == DEPENDENCY_AUTO)
deprec->deptype == depType)
{
/*
* We are going to generate corresponding SequenceInfo
@ -1635,8 +1638,7 @@ GetDependentSequencesWithRelation(Oid relationId, List **seqInfoList,
attrdefResult = lappend_oid(attrdefResult, deprec->objid);
attrdefAttnumResult = lappend_int(attrdefAttnumResult, deprec->refobjsubid);
}
else if ((deprec->deptype == DEPENDENCY_AUTO || deprec->deptype ==
DEPENDENCY_INTERNAL) &&
else if (deprec->deptype == depType &&
deprec->refobjsubid != 0 &&
deprec->classid == RelationRelationId &&
get_rel_relkind(deprec->objid) == RELKIND_SEQUENCE)
@ -1883,6 +1885,53 @@ SequenceDependencyCommandList(Oid relationId)
}
/*
* IdentitySequenceDependencyCommandList generate a command to execute
* a UDF (WORKER_ADJUST_IDENTITY_COLUMN_SEQ_RANGES) on workers to modify the identity
* columns min/max values to produce unique values on workers.
*/
List *
IdentitySequenceDependencyCommandList(Oid targetRelationId)
{
List *commandList = NIL;
Relation relation = relation_open(targetRelationId, AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(relation);
bool tableHasIdentityColumn = false;
for (int attributeIndex = 0; attributeIndex < tupleDescriptor->natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, attributeIndex);
if (attributeForm->attidentity)
{
tableHasIdentityColumn = true;
break;
}
}
relation_close(relation, NoLock);
if (tableHasIdentityColumn)
{
StringInfo stringInfo = makeStringInfo();
char *tableName = generate_qualified_relation_name(targetRelationId);
appendStringInfo(stringInfo,
WORKER_ADJUST_IDENTITY_COLUMN_SEQ_RANGES,
quote_literal_cstr(tableName));
commandList = lappend(commandList,
makeTableDDLCommandString(
stringInfo->data));
}
return commandList;
}
/*
* CreateSequenceDependencyCommand generates a query string for calling
* worker_record_sequence_dependency on the worker to recreate a sequence->table
@ -2605,8 +2654,7 @@ CreateShellTableOnWorkers(Oid relationId)
List *commandList = list_make1(DISABLE_DDL_PROPAGATION);
IncludeSequenceDefaults includeSequenceDefaults = WORKER_NEXTVAL_SEQUENCE_DEFAULTS;
IncludeIdentities includeIdentityDefaults =
INCLUDE_IDENTITY_AS_SEQUENCE_DEFAULTS;
IncludeIdentities includeIdentityDefaults = INCLUDE_IDENTITY;
bool creatingShellTableOnRemoteNode = true;
List *tableDDLCommands = GetFullTableCreationCommands(relationId,

View File

@ -985,7 +985,7 @@ AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval)
appendStringInfo(selectQuery, "SELECT " UINT64_FORMAT " AS shard_id, ", shardId);
appendStringInfo(selectQuery, "%s AS shard_name, ", quotedShardName);
appendStringInfo(selectQuery, PG_RELATION_SIZE_FUNCTION, quotedShardName);
appendStringInfo(selectQuery, PG_TOTAL_RELATION_SIZE_FUNCTION, quotedShardName);
}

View File

@ -461,10 +461,7 @@ ResolveRelationId(text *relationName, bool missingOk)
* definition, optional column storage and statistics definitions, and index
* constraint and trigger definitions.
* When IncludeIdentities is NO_IDENTITY, the function does not include identity column
* specifications. When it's INCLUDE_IDENTITY_AS_SEQUENCE_DEFAULTS, the function
* uses sequences and set them as default values for identity columns by using exactly
* the same approach with worker_nextval('sequence') & nextval('sequence') logic
* desribed above. When it's INCLUDE_IDENTITY it creates GENERATED .. AS IDENTIY clauses.
specifications. When it's INCLUDE_IDENTITY it creates GENERATED .. AS IDENTITY clauses.
*/
List *
GetFullTableCreationCommands(Oid relationId,
@ -500,6 +497,15 @@ GetFullTableCreationCommands(Oid relationId,
tableDDLEventList = lappend(tableDDLEventList,
truncateTriggerCommand);
}
/*
* For identity column sequences, we only need to modify
* their min/max values to produce unique values on the worker nodes.
*/
List *identitySequenceDependencyCommandList =
IdentitySequenceDependencyCommandList(relationId);
tableDDLEventList = list_concat(tableDDLEventList,
identitySequenceDependencyCommandList);
}
tableDDLEventList = list_concat(tableDDLEventList, postLoadCreationCommandList);

View File

@ -1810,7 +1810,7 @@ CreateWorkerForPlacementSet(List *workersForPlacementList)
/* we don't have value field as it's a set */
info.entrysize = info.keysize;
uint32 hashFlags = (HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
uint32 hashFlags = (HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT | HASH_COMPARE);
HTAB *workerForPlacementSet = hash_create("worker placement set", 32, &info,
hashFlags);

View File

@ -53,8 +53,14 @@ worker_copy_table_to_node(PG_FUNCTION_ARGS)
targetNodeId);
StringInfo selectShardQueryForCopy = makeStringInfo();
/*
* Even though we do COPY (SELECT ...) for all the columns, we can't just do SELECT * because we must not COPY generated columns.
*/
const char *columnList = CopyableColumnNamesFromRelationName(relationSchemaName,
relationName);
appendStringInfo(selectShardQueryForCopy,
"SELECT * FROM %s;", relationQualifiedName);
"SELECT %s FROM %s;", columnList, relationQualifiedName);
ParamListInfo params = NULL;
ExecuteQueryStringIntoDestReceiver(selectShardQueryForCopy->data, params,

View File

@ -73,7 +73,7 @@ static void ShardCopyDestReceiverDestroy(DestReceiver *destReceiver);
static bool CanUseLocalCopy(uint32_t destinationNodeId);
static StringInfo ConstructShardCopyStatement(List *destinationShardFullyQualifiedName,
bool
useBinaryFormat);
useBinaryFormat, TupleDesc tupleDesc);
static void WriteLocalTuple(TupleTableSlot *slot, ShardCopyDestReceiver *copyDest);
static int ReadFromLocalBufferCallback(void *outBuf, int minRead, int maxRead);
static void LocalCopyToShard(ShardCopyDestReceiver *copyDest, CopyOutState
@ -105,7 +105,8 @@ ConnectToRemoteAndStartCopy(ShardCopyDestReceiver *copyDest)
StringInfo copyStatement = ConstructShardCopyStatement(
copyDest->destinationShardFullyQualifiedName,
copyDest->copyOutState->binary);
copyDest->copyOutState->binary,
copyDest->tupleDescriptor);
if (!SendRemoteCommand(copyDest->connection, copyStatement->data))
{
@ -344,21 +345,80 @@ ShardCopyDestReceiverDestroy(DestReceiver *dest)
}
/*
* CopyableColumnNamesFromTupleDesc creates and returns a comma-separated column name string to be used in COPY
* and SELECT statements when copying a table. The COPY and SELECT statements should filter out the GENERATED columns since the
* COPY statement fails to handle them. While iterating over the attributes of the table we also need to skip dropped columns.
*/
const char *
CopyableColumnNamesFromTupleDesc(TupleDesc tupDesc)
{
StringInfo columnList = makeStringInfo();
bool firstInList = true;
for (int i = 0; i < tupDesc->natts; i++)
{
Form_pg_attribute att = TupleDescAttr(tupDesc, i);
if (att->attgenerated || att->attisdropped)
{
continue;
}
if (!firstInList)
{
appendStringInfo(columnList, ",");
}
firstInList = false;
appendStringInfo(columnList, "%s", quote_identifier(NameStr(att->attname)));
}
return columnList->data;
}
/*
* CopyableColumnNamesFromRelationName function is a wrapper for CopyableColumnNamesFromTupleDesc.
*/
const char *
CopyableColumnNamesFromRelationName(const char *schemaName, const char *relationName)
{
Oid namespaceOid = get_namespace_oid(schemaName, true);
Oid relationId = get_relname_relid(relationName, namespaceOid);
Relation relation = relation_open(relationId, AccessShareLock);
TupleDesc tupleDesc = RelationGetDescr(relation);
const char *columnList = CopyableColumnNamesFromTupleDesc(tupleDesc);
relation_close(relation, NoLock);
return columnList;
}
/*
* ConstructShardCopyStatement constructs the text of a COPY statement
* for copying into a result table
*/
static StringInfo
ConstructShardCopyStatement(List *destinationShardFullyQualifiedName, bool
useBinaryFormat)
useBinaryFormat,
TupleDesc tupleDesc)
{
char *destinationShardSchemaName = linitial(destinationShardFullyQualifiedName);
char *destinationShardRelationName = lsecond(destinationShardFullyQualifiedName);
StringInfo command = makeStringInfo();
appendStringInfo(command, "COPY %s.%s FROM STDIN",
const char *columnList = CopyableColumnNamesFromTupleDesc(tupleDesc);
appendStringInfo(command, "COPY %s.%s (%s) FROM STDIN",
quote_identifier(destinationShardSchemaName), quote_identifier(
destinationShardRelationName));
destinationShardRelationName), columnList);
if (useBinaryFormat)
{

View File

@ -110,8 +110,13 @@ worker_split_copy(PG_FUNCTION_ARGS)
splitCopyInfoList))));
StringInfo selectShardQueryForCopy = makeStringInfo();
const char *columnList = CopyableColumnNamesFromRelationName(
sourceShardToCopySchemaName,
sourceShardToCopyName);
appendStringInfo(selectShardQueryForCopy,
"SELECT * FROM %s;", sourceShardToCopyQualifiedName);
"SELECT %s FROM %s;", columnList,
sourceShardToCopyQualifiedName);
ParamListInfo params = NULL;
ExecuteQueryStringIntoDestReceiver(selectShardQueryForCopy->data, params,

View File

@ -0,0 +1,5 @@
-- citus--11.2-1--11.2-2
-- Since we backported the UDF below from version 11.3,
-- the version portion of this file does not match with
-- the version of the included file.
#include "udfs/worker_adjust_identity_column_seq_ranges/11.3-1.sql"

View File

@ -0,0 +1,2 @@
-- citus--11.2-2--11.2-1
DROP FUNCTION IF EXISTS pg_catalog.worker_adjust_identity_column_seq_ranges(regclass);

View File

@ -0,0 +1,7 @@
CREATE OR REPLACE FUNCTION pg_catalog.worker_adjust_identity_column_seq_ranges(regclass)
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$worker_adjust_identity_column_seq_ranges$$;
COMMENT ON FUNCTION pg_catalog.worker_adjust_identity_column_seq_ranges(regclass)
IS 'modify identity column seq ranges to produce globally unique values';

View File

@ -0,0 +1,7 @@
CREATE OR REPLACE FUNCTION pg_catalog.worker_adjust_identity_column_seq_ranges(regclass)
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$worker_adjust_identity_column_seq_ranges$$;
COMMENT ON FUNCTION pg_catalog.worker_adjust_identity_column_seq_ranges(regclass)
IS 'modify identity column seq ranges to produce globally unique values';

View File

@ -70,6 +70,7 @@ static void AlterSequenceMinMax(Oid sequenceId, char *schemaName, char *sequence
PG_FUNCTION_INFO_V1(worker_apply_shard_ddl_command);
PG_FUNCTION_INFO_V1(worker_apply_inter_shard_ddl_command);
PG_FUNCTION_INFO_V1(worker_apply_sequence_command);
PG_FUNCTION_INFO_V1(worker_adjust_identity_column_seq_ranges);
PG_FUNCTION_INFO_V1(worker_append_table_to_shard);
PG_FUNCTION_INFO_V1(worker_nextval);
@ -133,6 +134,60 @@ worker_apply_inter_shard_ddl_command(PG_FUNCTION_ARGS)
}
/*
* worker_adjust_identity_column_seq_ranges takes a table oid, runs an ALTER SEQUENCE statement
* for each identity column to adjust the minvalue and maxvalue of the sequence owned by
* identity column such that the sequence creates globally unique values.
* We use table oid instead of sequence name to avoid any potential conflicts between sequences of different tables. This way, we can safely iterate through identity columns on a specific table without any issues. While this may introduce a small amount of business logic to workers, it's a much safer approach overall.
*/
Datum
worker_adjust_identity_column_seq_ranges(PG_FUNCTION_ARGS)
{
CheckCitusVersion(ERROR);
Oid tableRelationId = PG_GETARG_OID(0);
EnsureTableOwner(tableRelationId);
Relation tableRelation = relation_open(tableRelationId, AccessShareLock);
TupleDesc tableTupleDesc = RelationGetDescr(tableRelation);
bool missingSequenceOk = false;
for (int attributeIndex = 0; attributeIndex < tableTupleDesc->natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(tableTupleDesc,
attributeIndex);
/* skip dropped columns */
if (attributeForm->attisdropped)
{
continue;
}
if (attributeForm->attidentity)
{
Oid sequenceOid = getIdentitySequence(tableRelationId,
attributeForm->attnum,
missingSequenceOk);
Oid sequenceSchemaOid = get_rel_namespace(sequenceOid);
char *sequenceSchemaName = get_namespace_name(sequenceSchemaOid);
char *sequenceName = get_rel_name(sequenceOid);
Oid sequenceTypeId = pg_get_sequencedef(sequenceOid)->seqtypid;
AlterSequenceMinMax(sequenceOid, sequenceSchemaName, sequenceName,
sequenceTypeId);
}
}
relation_close(tableRelation, NoLock);
PG_RETURN_VOID();
}
/*
* worker_apply_sequence_command takes a CREATE SEQUENCE command string, runs the
* CREATE SEQUENCE command then creates and runs an ALTER SEQUENCE statement

View File

@ -566,6 +566,9 @@ extern bool ConstrTypeCitusCanDefaultName(ConstrType constrType);
extern char * GetAlterColumnWithNextvalDefaultCmd(Oid sequenceOid, Oid relationId,
char *colname, bool missingTableOk);
extern void ErrorIfTableHasUnsupportedIdentityColumn(Oid relationId);
extern void ErrorIfTableHasIdentityColumn(Oid relationId);
/* text_search.c - forward declarations */
extern List * GetCreateTextSearchConfigStatements(const ObjectAddress *address);
extern List * GetCreateTextSearchDictionaryStatements(const ObjectAddress *address);

View File

@ -124,8 +124,7 @@ typedef enum IncludeSequenceDefaults
typedef enum IncludeIdentities
{
NO_IDENTITY = 0, /* don't include identities */
INCLUDE_IDENTITY_AS_SEQUENCE_DEFAULTS = 1, /* include identities as sequences */
INCLUDE_IDENTITY = 2 /* include identities as-is*/
INCLUDE_IDENTITY = 1 /* include identities as-is*/
} IncludeIdentities;

View File

@ -101,11 +101,12 @@ extern void SyncNodeMetadataToNodesMain(Datum main_arg);
extern void SignalMetadataSyncDaemon(Oid database, int sig);
extern bool ShouldInitiateMetadataSync(bool *lockFailure);
extern List * SequenceDependencyCommandList(Oid relationId);
extern List * IdentitySequenceDependencyCommandList(Oid targetRelationId);
extern List * DDLCommandsForSequence(Oid sequenceOid, char *ownerName);
extern List * GetSequencesFromAttrDef(Oid attrdefOid);
extern void GetDependentSequencesWithRelation(Oid relationId, List **seqInfoList,
AttrNumber attnum);
AttrNumber attnum, char depType);
extern List * GetDependentFunctionsWithRelation(Oid relationId);
extern Oid GetAttributeTypeOid(Oid relationId, AttrNumber attnum);
extern void SetLocalEnableMetadataSync(bool state);
@ -146,6 +147,8 @@ extern void SyncDeleteColocationGroupToNodes(uint32 colocationId);
"placementid = EXCLUDED.placementid"
#define METADATA_SYNC_CHANNEL "metadata_sync"
#define WORKER_ADJUST_IDENTITY_COLUMN_SEQ_RANGES \
"SELECT pg_catalog.worker_adjust_identity_column_seq_ranges(%s)"
/* controlled via GUC */
extern char *EnableManualMetadataChangesForUser;

View File

@ -19,4 +19,9 @@ extern DestReceiver * CreateShardCopyDestReceiver(EState *executorState,
List *destinationShardFullyQualifiedName,
uint32_t destinationNodeId);
extern const char * CopyableColumnNamesFromRelationName(const char *schemaName, const
char *relationName);
extern const char * CopyableColumnNamesFromTupleDesc(TupleDesc tupdesc);
#endif /* WORKER_SHARD_COPY_H_ */

View File

@ -55,6 +55,14 @@ pg_strtoint64(char *s)
}
/*
* RelationGetSmgr got backported in 13.10 and 14.7 so redefining it for any
* version higher causes compilation errors due to redefining of the function.
* We want to use it in all versions. So we backport it ourselves in earlier
* versions, and rely on the Postgres provided version in the later versions.
*/
#if PG_VERSION_NUM >= PG_VERSION_13 && PG_VERSION_NUM < 130010 \
|| PG_VERSION_NUM >= PG_VERSION_14 && PG_VERSION_NUM < 140007
static inline SMgrRelation
RelationGetSmgr(Relation rel)
{
@ -66,6 +74,9 @@ RelationGetSmgr(Relation rel)
}
#endif
#define CREATE_SEQUENCE_COMMAND \
"CREATE SEQUENCE IF NOT EXISTS %s AS %s INCREMENT BY " INT64_FORMAT \
" MINVALUE " INT64_FORMAT " MAXVALUE " INT64_FORMAT \

View File

@ -60,7 +60,7 @@ SELECT create_reference_table('reference_table');
(1 row)
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY);
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY, genid integer GENERATED ALWAYS AS ( measureid + 3 ) stored, value varchar(44), col_todrop integer);
CLUSTER colocated_dist_table USING colocated_dist_table_pkey;
SELECT create_distributed_table('colocated_dist_table', 'measureid', colocate_with:='sensors');
create_distributed_table
@ -84,8 +84,9 @@ ALTER TABLE sensors ADD CONSTRAINT fkey_table_to_dist FOREIGN KEY (measureid) RE
-- END : Create Foreign key constraints.
-- BEGIN : Load data into tables.
INSERT INTO reference_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table(measureid, value, col_todrop) SELECT i,'Value',i FROM generate_series(0,1000)i;
INSERT INTO sensors SELECT i, '2020-01-05', '{}', 11011.10, 'A', 'I <3 Citus' FROM generate_series(0,1000)i;
ALTER TABLE colocated_dist_table DROP COLUMN col_todrop;
SELECT COUNT(*) FROM sensors;
count
---------------------------------------------------------------------

View File

@ -56,7 +56,7 @@ SELECT create_reference_table('reference_table');
(1 row)
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY);
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY, genid integer GENERATED ALWAYS AS ( measureid + 3 ) stored, value varchar(44), col_todrop integer);
CLUSTER colocated_dist_table USING colocated_dist_table_pkey;
SELECT create_distributed_table('colocated_dist_table', 'measureid', colocate_with:='sensors');
create_distributed_table
@ -80,8 +80,9 @@ ALTER TABLE sensors ADD CONSTRAINT fkey_table_to_dist FOREIGN KEY (measureid) RE
-- END : Create Foreign key constraints.
-- BEGIN : Load data into tables.
INSERT INTO reference_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table(measureid, value, col_todrop) SELECT i,'Value',i FROM generate_series(0,1000)i;
INSERT INTO sensors SELECT i, '2020-01-05', '{}', 11011.10, 'A', 'I <3 Citus' FROM generate_series(0,1000)i;
ALTER TABLE colocated_dist_table DROP COLUMN col_todrop;
SELECT COUNT(*) FROM sensors;
count
---------------------------------------------------------------------

View File

@ -64,11 +64,11 @@ SET citus.multi_shard_modify_mode TO sequential;
SELECT citus_update_table_statistics('test_table_statistics_hash');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, 'public.test_table_statistics_hash_981000' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981000') UNION ALL SELECT 981001 AS shard_id, 'public.test_table_statistics_hash_981001' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981001') UNION ALL SELECT 981002 AS shard_id, 'public.test_table_statistics_hash_981002' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981002') UNION ALL SELECT 981003 AS shard_id, 'public.test_table_statistics_hash_981003' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981003') UNION ALL SELECT 981004 AS shard_id, 'public.test_table_statistics_hash_981004' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981004') UNION ALL SELECT 981005 AS shard_id, 'public.test_table_statistics_hash_981005' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981005') UNION ALL SELECT 981006 AS shard_id, 'public.test_table_statistics_hash_981006' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981006') UNION ALL SELECT 981007 AS shard_id, 'public.test_table_statistics_hash_981007' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981007') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT 981000 AS shard_id, 'public.test_table_statistics_hash_981000' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981000') UNION ALL SELECT 981001 AS shard_id, 'public.test_table_statistics_hash_981001' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981001') UNION ALL SELECT 981002 AS shard_id, 'public.test_table_statistics_hash_981002' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981002') UNION ALL SELECT 981003 AS shard_id, 'public.test_table_statistics_hash_981003' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981003') UNION ALL SELECT 981004 AS shard_id, 'public.test_table_statistics_hash_981004' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981004') UNION ALL SELECT 981005 AS shard_id, 'public.test_table_statistics_hash_981005' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981005') UNION ALL SELECT 981006 AS shard_id, 'public.test_table_statistics_hash_981006' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981006') UNION ALL SELECT 981007 AS shard_id, 'public.test_table_statistics_hash_981007' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981007') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, 'public.test_table_statistics_hash_981000' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981000') UNION ALL SELECT 981001 AS shard_id, 'public.test_table_statistics_hash_981001' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981001') UNION ALL SELECT 981002 AS shard_id, 'public.test_table_statistics_hash_981002' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981002') UNION ALL SELECT 981003 AS shard_id, 'public.test_table_statistics_hash_981003' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981003') UNION ALL SELECT 981004 AS shard_id, 'public.test_table_statistics_hash_981004' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981004') UNION ALL SELECT 981005 AS shard_id, 'public.test_table_statistics_hash_981005' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981005') UNION ALL SELECT 981006 AS shard_id, 'public.test_table_statistics_hash_981006' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981006') UNION ALL SELECT 981007 AS shard_id, 'public.test_table_statistics_hash_981007' AS shard_name, pg_relation_size('public.test_table_statistics_hash_981007') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT 981000 AS shard_id, 'public.test_table_statistics_hash_981000' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981000') UNION ALL SELECT 981001 AS shard_id, 'public.test_table_statistics_hash_981001' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981001') UNION ALL SELECT 981002 AS shard_id, 'public.test_table_statistics_hash_981002' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981002') UNION ALL SELECT 981003 AS shard_id, 'public.test_table_statistics_hash_981003' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981003') UNION ALL SELECT 981004 AS shard_id, 'public.test_table_statistics_hash_981004' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981004') UNION ALL SELECT 981005 AS shard_id, 'public.test_table_statistics_hash_981005' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981005') UNION ALL SELECT 981006 AS shard_id, 'public.test_table_statistics_hash_981006' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981006') UNION ALL SELECT 981007 AS shard_id, 'public.test_table_statistics_hash_981007' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981007') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@@ -152,11 +152,11 @@ SET citus.multi_shard_modify_mode TO sequential;
SELECT citus_update_table_statistics('test_table_statistics_append');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, 'public.test_table_statistics_append_981008' AS shard_name, pg_relation_size('public.test_table_statistics_append_981008') UNION ALL SELECT 981009 AS shard_id, 'public.test_table_statistics_append_981009' AS shard_name, pg_relation_size('public.test_table_statistics_append_981009') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT 981008 AS shard_id, 'public.test_table_statistics_append_981008' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981008') UNION ALL SELECT 981009 AS shard_id, 'public.test_table_statistics_append_981009' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981009') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, 'public.test_table_statistics_append_981008' AS shard_name, pg_relation_size('public.test_table_statistics_append_981008') UNION ALL SELECT 981009 AS shard_id, 'public.test_table_statistics_append_981009' AS shard_name, pg_relation_size('public.test_table_statistics_append_981009') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT 981008 AS shard_id, 'public.test_table_statistics_append_981008' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981008') UNION ALL SELECT 981009 AS shard_id, 'public.test_table_statistics_append_981009' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981009') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
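The NOTICE lines above show the shard size query switching from pg_relation_size to pg_total_relation_size. As a hedged, minimal illustration (the shard name below is a placeholder, not one of the shards in this test), the two functions differ in what they count:

```
-- Sketch only; 'public.some_shard_102008' is a hypothetical shard name.
-- pg_relation_size counts just the main fork of the relation, while
-- pg_total_relation_size also includes its indexes and TOAST data.
SELECT pg_relation_size('public.some_shard_102008')       AS main_fork_bytes,
       pg_total_relation_size('public.some_shard_102008') AS total_bytes;
```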

View File

@@ -1,525 +1,431 @@
-- This test file has an alternative output because error messages vary for PG13
SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int <= 13 AS server_version_le_13;
server_version_le_13
---------------------------------------------------------------------
f
(1 row)
CREATE SCHEMA generated_identities;
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SET citus.shard_replication_factor TO 1;
SELECT 1 from citus_add_node('localhost', :master_port, groupId=>0);
?column?
---------------------------------------------------------------------
1
(1 row)
DROP TABLE IF EXISTS generated_identities_test;
-- create a partitioned table for testing.
CREATE TABLE generated_identities_test (
a int CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY,
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c smallint GENERATED BY DEFAULT AS IDENTITY,
d serial,
e bigserial,
f smallserial,
g int
)
PARTITION BY RANGE (a);
CREATE TABLE generated_identities_test_1_5 PARTITION OF generated_identities_test FOR VALUES FROM (1) TO (5);
CREATE TABLE generated_identities_test_5_50 PARTITION OF generated_identities_test FOR VALUES FROM (5) TO (50);
-- local tables
SELECT citus_add_local_table_to_metadata('generated_identities_test');
-- smallint identity column can not be distributed
CREATE TABLE smallint_identity_column (
a smallint GENERATED BY DEFAULT AS IDENTITY
);
SELECT create_distributed_table('smallint_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.smallint_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_distributed_table_concurrently('smallint_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.smallint_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_reference_table('smallint_identity_column');
ERROR: cannot complete operation on a table with identity column
SELECT citus_add_local_table_to_metadata('smallint_identity_column');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
\d generated_identities_test
Partitioned table "generated_identities.generated_identities_test"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | smallint | | not null | generated by default as identity
d | integer | | not null | nextval('generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities_test_e_seq'::regclass)
f | smallint | | not null | nextval('generated_identities_test_f_seq'::regclass)
g | integer | | |
Partition key: RANGE (a)
Number of partitions: 2 (Use \d+ to list them.)
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
Partitioned table "generated_identities.generated_identities_test"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | worker_nextval('generated_identities.generated_identities_test_a_seq'::regclass)
b | bigint | | not null | nextval('generated_identities.generated_identities_test_b_seq'::regclass)
c | smallint | | not null | worker_nextval('generated_identities.generated_identities_test_c_seq'::regclass)
d | integer | | not null | worker_nextval('generated_identities.generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities.generated_identities_test_e_seq'::regclass)
f | smallint | | not null | worker_nextval('generated_identities.generated_identities_test_f_seq'::regclass)
g | integer | | |
Partition key: RANGE (a)
Number of partitions: 2 (Use \d+ to list them.)
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT undistribute_table('generated_identities_test');
undistribute_table
DROP TABLE smallint_identity_column;
-- int identity column can not be distributed
CREATE TABLE int_identity_column (
a int GENERATED BY DEFAULT AS IDENTITY
);
SELECT create_distributed_table('int_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.int_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_distributed_table_concurrently('int_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.int_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_reference_table('int_identity_column');
ERROR: cannot complete operation on a table with identity column
SELECT citus_add_local_table_to_metadata('int_identity_column');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
SELECT citus_remove_node('localhost', :master_port);
citus_remove_node
DROP TABLE int_identity_column;
RESET citus.shard_replication_factor;
CREATE TABLE bigint_identity_column (
a bigint GENERATED BY DEFAULT AS IDENTITY,
b int
);
SELECT citus_add_local_table_to_metadata('bigint_identity_column');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('generated_identities_test', 'a');
DROP TABLE bigint_identity_column;
CREATE TABLE bigint_identity_column (
a bigint GENERATED BY DEFAULT AS IDENTITY,
b int
);
SELECT create_distributed_table('bigint_identity_column', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\d generated_identities_test
Partitioned table "generated_identities.generated_identities_test"
\d bigint_identity_column
Table "generated_identities.bigint_identity_column"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | smallint | | not null | generated by default as identity
d | integer | | not null | nextval('generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities_test_e_seq'::regclass)
f | smallint | | not null | nextval('generated_identities_test_f_seq'::regclass)
g | integer | | |
Partition key: RANGE (a)
Number of partitions: 2 (Use \d+ to list them.)
a | bigint | | not null | generated by default as identity
b | integer | | |
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
Partitioned table "generated_identities.generated_identities_test"
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO bigint_identity_column (b)
SELECT s FROM generate_series(1,10) s;
\d generated_identities.bigint_identity_column
Table "generated_identities.bigint_identity_column"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | worker_nextval('generated_identities.generated_identities_test_a_seq'::regclass)
b | bigint | | not null | nextval('generated_identities.generated_identities_test_b_seq'::regclass)
c | smallint | | not null | worker_nextval('generated_identities.generated_identities_test_c_seq'::regclass)
d | integer | | not null | worker_nextval('generated_identities.generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities.generated_identities_test_e_seq'::regclass)
f | smallint | | not null | worker_nextval('generated_identities.generated_identities_test_f_seq'::regclass)
g | integer | | |
Partition key: RANGE (a)
Number of partitions: 2 (Use \d+ to list them.)
a | bigint | | not null | generated by default as identity
b | integer | | |
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
insert into generated_identities_test (g) values (1);
insert into generated_identities_test (g) SELECT 2;
INSERT INTO generated_identities_test (g)
INSERT INTO bigint_identity_column (b)
SELECT s FROM generate_series(11,20) s;
SELECT * FROM bigint_identity_column ORDER BY B ASC;
a | b
---------------------------------------------------------------------
3940649673949185 | 1
3940649673949186 | 2
3940649673949187 | 3
3940649673949188 | 4
3940649673949189 | 5
3940649673949190 | 6
3940649673949191 | 7
3940649673949192 | 8
3940649673949193 | 9
3940649673949194 | 10
1 | 11
2 | 12
3 | 13
4 | 14
5 | 15
6 | 16
7 | 17
8 | 18
9 | 19
10 | 20
(20 rows)
-- table with identity column cannot be altered.
SELECT alter_distributed_table('bigint_identity_column', 'b');
ERROR: cannot complete operation on a table with identity column
-- table with identity column cannot be undistributed.
SELECT undistribute_table('bigint_identity_column');
ERROR: cannot complete operation on a table with identity column
DROP TABLE bigint_identity_column;
-- create a partitioned table for testing.
CREATE TABLE partitioned_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c int
)
PARTITION BY RANGE (c);
CREATE TABLE partitioned_table_1_50 PARTITION OF partitioned_table FOR VALUES FROM (1) TO (50);
CREATE TABLE partitioned_table_50_500 PARTITION OF partitioned_table FOR VALUES FROM (50) TO (1000);
SELECT create_distributed_table('partitioned_table', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\d partitioned_table
Partitioned table "generated_identities.partitioned_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Partition key: RANGE (c)
Number of partitions: 2 (Use \d+ to list them.)
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\d generated_identities.partitioned_table
Partitioned table "generated_identities.partitioned_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Partition key: RANGE (c)
Number of partitions: 2 (Use \d+ to list them.)
insert into partitioned_table (c) values (1);
insert into partitioned_table (c) SELECT 2;
INSERT INTO partitioned_table (c)
SELECT s FROM generate_series(3,7) s;
SELECT * FROM generated_identities_test ORDER BY 1;
a | b | c | d | e | f | g
---------------------------------------------------------------------
1 | 10 | 1 | 1 | 1 | 1 | 1
2 | 20 | 2 | 2 | 2 | 2 | 2
3 | 30 | 3 | 3 | 3 | 3 | 3
4 | 40 | 4 | 4 | 4 | 4 | 4
5 | 50 | 5 | 5 | 5 | 5 | 5
6 | 60 | 6 | 6 | 6 | 6 | 6
7 | 70 | 7 | 7 | 7 | 7 | 7
(7 rows)
SELECT undistribute_table('generated_identities_test');
undistribute_table
---------------------------------------------------------------------
(1 row)
SELECT * FROM generated_identities_test ORDER BY 1;
a | b | c | d | e | f | g
---------------------------------------------------------------------
1 | 10 | 1 | 1 | 1 | 1 | 1
2 | 20 | 2 | 2 | 2 | 2 | 2
3 | 30 | 3 | 3 | 3 | 3 | 3
4 | 40 | 4 | 4 | 4 | 4 | 4
5 | 50 | 5 | 5 | 5 | 5 | 5
6 | 60 | 6 | 6 | 6 | 6 | 6
7 | 70 | 7 | 7 | 7 | 7 | 7
(7 rows)
\d generated_identities_test
Partitioned table "generated_identities.generated_identities_test"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | smallint | | not null | generated by default as identity
d | integer | | not null | nextval('generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities_test_e_seq'::regclass)
f | smallint | | not null | nextval('generated_identities_test_f_seq'::regclass)
g | integer | | |
Partition key: RANGE (a)
Number of partitions: 2 (Use \d+ to list them.)
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO generated_identities_test (g)
SELECT s FROM generate_series(8,10) s;
SELECT * FROM generated_identities_test ORDER BY 1;
a | b | c | d | e | f | g
INSERT INTO partitioned_table (c)
SELECT s FROM generate_series(10,20) s;
INSERT INTO partitioned_table (a,c) VALUES (998,998);
INSERT INTO partitioned_table (a,b,c) OVERRIDING SYSTEM VALUE VALUES (999,999,999);
SELECT * FROM partitioned_table ORDER BY c ASC;
a | b | c
---------------------------------------------------------------------
1 | 10 | 1 | 1 | 1 | 1 | 1
2 | 20 | 2 | 2 | 2 | 2 | 2
3 | 30 | 3 | 3 | 3 | 3 | 3
4 | 40 | 4 | 4 | 4 | 4 | 4
5 | 50 | 5 | 5 | 5 | 5 | 5
6 | 60 | 6 | 6 | 6 | 6 | 6
7 | 70 | 7 | 7 | 7 | 7 | 7
8 | 80 | 8 | 8 | 8 | 8 | 8
9 | 90 | 9 | 9 | 9 | 9 | 9
10 | 100 | 10 | 10 | 10 | 10 | 10
(10 rows)
-- distributed table
SELECT create_distributed_table('generated_identities_test', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
3940649673949185 | 3940649673949185 | 1
3940649673949195 | 3940649673949195 | 2
3940649673949205 | 3940649673949205 | 3
3940649673949215 | 3940649673949215 | 4
3940649673949225 | 3940649673949225 | 5
3940649673949235 | 3940649673949235 | 6
3940649673949245 | 3940649673949245 | 7
10 | 10 | 10
20 | 20 | 11
30 | 30 | 12
40 | 40 | 13
50 | 50 | 14
60 | 60 | 15
70 | 70 | 16
80 | 80 | 17
90 | 90 | 18
100 | 100 | 19
110 | 110 | 20
998 | 120 | 998
999 | 999 | 999
(20 rows)
-- alter table .. alter column .. add is unsupported
ALTER TABLE generated_identities_test ALTER COLUMN g ADD GENERATED ALWAYS AS IDENTITY;
ALTER TABLE partitioned_table ALTER COLUMN g ADD GENERATED ALWAYS AS IDENTITY;
ERROR: alter table command is currently unsupported
DETAIL: Only ADD|DROP COLUMN, SET|DROP NOT NULL, SET|DROP DEFAULT, ADD|DROP|VALIDATE CONSTRAINT, SET (), RESET (), ENABLE|DISABLE|NO FORCE|FORCE ROW LEVEL SECURITY, ATTACH|DETACH PARTITION and TYPE subcommands are supported.
-- alter table .. alter column is unsupported
ALTER TABLE generated_identities_test ALTER COLUMN b TYPE int;
ALTER TABLE partitioned_table ALTER COLUMN b TYPE int;
ERROR: cannot execute ALTER COLUMN command involving identity column
SELECT alter_distributed_table('generated_identities_test', 'g');
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('generated_identities_test', 'b');
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('generated_identities_test', 'c');
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT undistribute_table('generated_identities_test');
undistribute_table
---------------------------------------------------------------------
(1 row)
SELECT * FROM generated_identities_test ORDER BY g;
a | b | c | d | e | f | g
---------------------------------------------------------------------
1 | 10 | 1 | 1 | 1 | 1 | 1
2 | 20 | 2 | 2 | 2 | 2 | 2
3 | 30 | 3 | 3 | 3 | 3 | 3
4 | 40 | 4 | 4 | 4 | 4 | 4
5 | 50 | 5 | 5 | 5 | 5 | 5
6 | 60 | 6 | 6 | 6 | 6 | 6
7 | 70 | 7 | 7 | 7 | 7 | 7
8 | 80 | 8 | 8 | 8 | 8 | 8
9 | 90 | 9 | 9 | 9 | 9 | 9
10 | 100 | 10 | 10 | 10 | 10 | 10
(10 rows)
-- reference table
DROP TABLE generated_identities_test;
CREATE TABLE generated_identities_test (
a int GENERATED BY DEFAULT AS IDENTITY,
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c smallint GENERATED BY DEFAULT AS IDENTITY,
d serial,
e bigserial,
f smallserial,
g int
DROP TABLE partitioned_table;
-- create a table for reference table testing.
CREATE TABLE reference_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10) UNIQUE,
c int
);
SELECT create_reference_table('generated_identities_test');
SELECT create_reference_table('reference_table');
create_reference_table
---------------------------------------------------------------------
(1 row)
\d generated_identities_test
Table "generated_identities.generated_identities_test"
\d reference_table
Table "generated_identities.reference_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | generated by default as identity
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | smallint | | not null | generated by default as identity
d | integer | | not null | nextval('generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities_test_e_seq'::regclass)
f | smallint | | not null | nextval('generated_identities_test_f_seq'::regclass)
g | integer | | |
c | integer | | |
Indexes:
"reference_table_b_key" UNIQUE CONSTRAINT, btree (b)
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
Table "generated_identities.generated_identities_test"
SET search_path TO generated_identities;
\d generated_identities.reference_table
Table "generated_identities.reference_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | integer | | not null | worker_nextval('generated_identities.generated_identities_test_a_seq'::regclass)
b | bigint | | not null | nextval('generated_identities.generated_identities_test_b_seq'::regclass)
c | smallint | | not null | worker_nextval('generated_identities.generated_identities_test_c_seq'::regclass)
d | integer | | not null | worker_nextval('generated_identities.generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities.generated_identities_test_e_seq'::regclass)
f | smallint | | not null | worker_nextval('generated_identities.generated_identities_test_f_seq'::regclass)
g | integer | | |
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Indexes:
"reference_table_b_key" UNIQUE CONSTRAINT, btree (b)
INSERT INTO reference_table (c)
SELECT s FROM generate_series(1,10) s;
--on master
select * from reference_table;
a | b | c
---------------------------------------------------------------------
3940649673949185 | 3940649673949185 | 1
3940649673949195 | 3940649673949195 | 2
3940649673949205 | 3940649673949205 | 3
3940649673949215 | 3940649673949215 | 4
3940649673949225 | 3940649673949225 | 5
3940649673949235 | 3940649673949235 | 6
3940649673949245 | 3940649673949245 | 7
3940649673949255 | 3940649673949255 | 8
3940649673949265 | 3940649673949265 | 9
3940649673949275 | 3940649673949275 | 10
(10 rows)
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO generated_identities_test (g)
INSERT INTO reference_table (c)
SELECT s FROM generate_series(11,20) s;
SELECT * FROM generated_identities_test ORDER BY g;
a | b | c | d | e | f | g
SELECT * FROM reference_table ORDER BY c ASC;
a | b | c
---------------------------------------------------------------------
1 | 10 | 1 | 1 | 1 | 1 | 11
2 | 20 | 2 | 2 | 2 | 2 | 12
3 | 30 | 3 | 3 | 3 | 3 | 13
4 | 40 | 4 | 4 | 4 | 4 | 14
5 | 50 | 5 | 5 | 5 | 5 | 15
6 | 60 | 6 | 6 | 6 | 6 | 16
7 | 70 | 7 | 7 | 7 | 7 | 17
8 | 80 | 8 | 8 | 8 | 8 | 18
9 | 90 | 9 | 9 | 9 | 9 | 19
10 | 100 | 10 | 10 | 10 | 10 | 20
(10 rows)
3940649673949185 | 3940649673949185 | 1
3940649673949195 | 3940649673949195 | 2
3940649673949205 | 3940649673949205 | 3
3940649673949215 | 3940649673949215 | 4
3940649673949225 | 3940649673949225 | 5
3940649673949235 | 3940649673949235 | 6
3940649673949245 | 3940649673949245 | 7
3940649673949255 | 3940649673949255 | 8
3940649673949265 | 3940649673949265 | 9
3940649673949275 | 3940649673949275 | 10
10 | 10 | 11
20 | 20 | 12
30 | 30 | 13
40 | 40 | 14
50 | 50 | 15
60 | 60 | 16
70 | 70 | 17
80 | 80 | 18
90 | 90 | 19
100 | 100 | 20
(20 rows)
SELECT undistribute_table('generated_identities_test');
undistribute_table
DROP TABLE reference_table;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
-- https://github.com/citusdata/citus/issues/6694
CREATE USER identity_test_user;
GRANT INSERT ON color TO identity_test_user;
GRANT USAGE ON SCHEMA generated_identities TO identity_test_user;
SET ROLE identity_test_user;
SELECT create_distributed_table('color', 'color_id');
ERROR: must be owner of table color
SET ROLE postgres;
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table_concurrently('color', 'color_id');
create_distributed_table_concurrently
---------------------------------------------------------------------
(1 row)
\d generated_identities_test
Table "generated_identities.generated_identities_test"
Column | Type | Collation | Nullable | Default
RESET citus.shard_replication_factor;
\c - identity_test_user - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Blue');
\c - postgres - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SET citus.next_shard_id TO 12400000;
DROP TABLE Color;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
) USING columnar;
SELECT create_distributed_table('color', 'color_id');
create_distributed_table
---------------------------------------------------------------------
a | integer | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | smallint | | not null | generated by default as identity
d | integer | | not null | nextval('generated_identities_test_d_seq'::regclass)
e | bigint | | not null | nextval('generated_identities_test_e_seq'::regclass)
f | smallint | | not null | nextval('generated_identities_test_f_seq'::regclass)
g | integer | | |
(1 row)
INSERT INTO color(color_name) VALUES ('Blue');
\d+ color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity | plain | |
color_name | character varying | | not null | | extended | |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
\c - - - :master_port
SET search_path TO generated_identities;
\d+ color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity | plain | |
color_name | character varying | | not null | | extended | |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
INSERT INTO color(color_name) VALUES ('Red');
-- alter sequence .. restart
ALTER SEQUENCE color_color_id_seq RESTART WITH 1000;
ERROR: Altering a distributed sequence is currently not supported.
-- override system value
INSERT INTO color(color_id, color_name) VALUES (1, 'Red');
ERROR: cannot insert a non-DEFAULT value into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) VALUES (NULL, 'Red');
ERROR: cannot insert a non-DEFAULT value into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red');
ERROR: duplicate key value violates unique constraint "color_color_id_key_12400000"
DETAIL: Key (color_id)=(1) already exists.
CONTEXT: while executing command on localhost:xxxxx
-- update null or custom value
UPDATE color SET color_id = NULL;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
UPDATE color SET color_id = 1;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
\c - postgres - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
-- alter table .. add column .. GENERATED .. AS IDENTITY
DROP TABLE IF EXISTS color;
CREATE TABLE color (
color_name VARCHAR NOT NULL
);
SELECT create_distributed_table('color', 'color_name');
create_distributed_table
---------------------------------------------------------------------
(1 row)
ALTER TABLE color ADD COLUMN color_id BIGINT GENERATED ALWAYS AS IDENTITY;
INSERT INTO color(color_name) VALUES ('Red');
ALTER TABLE color ADD COLUMN color_id_1 BIGINT GENERATED ALWAYS AS IDENTITY;
ERROR: Cannot add an identity column because the table is not empty
DROP TABLE color;
-- insert data from workers
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
SELECT create_distributed_table('color', 'color_id');
ERROR: cannot execute ADD COLUMN commands involving identity columns when metadata is synchronized to workers
-- alter sequence .. restart
ALTER SEQUENCE color_color_id_seq RESTART WITH 1000;
ERROR: Altering a distributed sequence is currently not supported.
-- override system value
INSERT INTO color(color_id, color_name) VALUES (1, 'Red');
ERROR: cannot insert a non-DEFAULT value into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) VALUES (NULL, 'Red');
ERROR: cannot insert a non-DEFAULT value into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red');
ERROR: duplicate key value violates unique constraint "color_color_id_key_12400000"
DETAIL: Key (color_id)=(1) already exists.
CONTEXT: while executing command on localhost:xxxxx
-- update null or custom value
UPDATE color SET color_id = NULL;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
UPDATE color SET color_id = 1;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
DROP TABLE IF EXISTS test;
CREATE TABLE test (x int, y int, z bigint generated by default as identity);
SELECT create_distributed_table('test', 'x', colocate_with := 'none');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT undistribute_table('color');
undistribute_table
INSERT INTO test VALUES (1,2);
INSERT INTO test SELECT x, y FROM test WHERE x = 1;
SELECT * FROM test;
x | y | z
---------------------------------------------------------------------
1 | 2 | 1
1 | 2 | 2
(2 rows)
(1 row)
SELECT create_distributed_table('color', 'color_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
SELECT count(*) from color;
count
---------------------------------------------------------------------
3
(1 row)
-- modify sequence & alter table
DROP TABLE color;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
SELECT create_distributed_table('color', 'color_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT undistribute_table('color');
undistribute_table
---------------------------------------------------------------------
(1 row)
ALTER SEQUENCE color_color_id_seq RENAME TO myseq;
SELECT create_distributed_table('color', 'color_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\ds+ myseq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
generated_identities | myseq | sequence | postgres | permanent | 8192 bytes |
(1 row)
\ds+ color_color_id_seq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
(0 rows)
\d color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity
color_name | character varying | | not null |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\ds+ myseq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
generated_identities | myseq | sequence | postgres | permanent | 8192 bytes |
(1 row)
\ds+ color_color_id_seq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
(0 rows)
\d color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
color_id | bigint | | not null | nextval('myseq'::regclass)
color_name | character varying | | not null |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
ALTER SEQUENCE myseq RENAME TO color_color_id_seq;
\ds+ myseq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
(0 rows)
\ds+ color_color_id_seq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
generated_identities | color_color_id_seq | sequence | postgres | permanent | 8192 bytes |
(1 row)
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\ds+ myseq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
(0 rows)
\ds+ color_color_id_seq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
generated_identities | color_color_id_seq | sequence | postgres | permanent | 8192 bytes |
(1 row)
\d color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
color_id | bigint | | not null | nextval('color_color_id_seq'::regclass)
color_name | character varying | | not null |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT alter_distributed_table('co23423lor', shard_count := 6);
ERROR: relation "co23423lor" does not exist
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\ds+ color_color_id_seq
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
---------------------------------------------------------------------
generated_identities | color_color_id_seq | sequence | postgres | permanent | 8192 bytes |
(1 row)
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
DROP SCHEMA generated_identities CASCADE;
DROP USER identity_test_user;
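A hedged recap of the identity-column restriction exercised above (the table names below are invented for illustration and do not appear in the test): create_distributed_table rejects smallint/int identity columns and accepts bigint ones.

```
-- Sketch only; 'items_int' and 'items_bigint' are hypothetical tables.
CREATE TABLE items_int    (id int    GENERATED BY DEFAULT AS IDENTITY, payload text);
CREATE TABLE items_bigint (id bigint GENERATED BY DEFAULT AS IDENTITY, payload text);
SELECT create_distributed_table('items_int', 'id');
-- ERROR: cannot complete operation on ... with smallint/int identity column
-- HINT:  Use bigint identity column instead.
SELECT create_distributed_table('items_bigint', 'id');   -- succeeds
```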

View File

@@ -0,0 +1,431 @@
-- This test file has an alternative output because error messages vary for PG13
SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int <= 13 AS server_version_le_13;
server_version_le_13
---------------------------------------------------------------------
t
(1 row)
CREATE SCHEMA generated_identities;
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SET citus.shard_replication_factor TO 1;
SELECT 1 from citus_add_node('localhost', :master_port, groupId=>0);
?column?
---------------------------------------------------------------------
1
(1 row)
-- smallint identity column can not be distributed
CREATE TABLE smallint_identity_column (
a smallint GENERATED BY DEFAULT AS IDENTITY
);
SELECT create_distributed_table('smallint_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.smallint_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_distributed_table_concurrently('smallint_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.smallint_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_reference_table('smallint_identity_column');
ERROR: cannot complete operation on a table with identity column
SELECT citus_add_local_table_to_metadata('smallint_identity_column');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
DROP TABLE smallint_identity_column;
-- int identity column can not be distributed
CREATE TABLE int_identity_column (
a int GENERATED BY DEFAULT AS IDENTITY
);
SELECT create_distributed_table('int_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.int_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_distributed_table_concurrently('int_identity_column', 'a');
ERROR: cannot complete operation on generated_identities.int_identity_column with smallint/int identity column
HINT: Use bigint identity column instead.
SELECT create_reference_table('int_identity_column');
ERROR: cannot complete operation on a table with identity column
SELECT citus_add_local_table_to_metadata('int_identity_column');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
DROP TABLE int_identity_column;
RESET citus.shard_replication_factor;
CREATE TABLE bigint_identity_column (
a bigint GENERATED BY DEFAULT AS IDENTITY,
b int
);
SELECT citus_add_local_table_to_metadata('bigint_identity_column');
citus_add_local_table_to_metadata
---------------------------------------------------------------------
(1 row)
DROP TABLE bigint_identity_column;
CREATE TABLE bigint_identity_column (
a bigint GENERATED BY DEFAULT AS IDENTITY,
b int
);
SELECT create_distributed_table('bigint_identity_column', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\d bigint_identity_column
Table "generated_identities.bigint_identity_column"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | integer | | |
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO bigint_identity_column (b)
SELECT s FROM generate_series(1,10) s;
\d generated_identities.bigint_identity_column
Table "generated_identities.bigint_identity_column"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | integer | | |
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO bigint_identity_column (b)
SELECT s FROM generate_series(11,20) s;
SELECT * FROM bigint_identity_column ORDER BY B ASC;
a | b
---------------------------------------------------------------------
3940649673949185 | 1
3940649673949186 | 2
3940649673949187 | 3
3940649673949188 | 4
3940649673949189 | 5
3940649673949190 | 6
3940649673949191 | 7
3940649673949192 | 8
3940649673949193 | 9
3940649673949194 | 10
1 | 11
2 | 12
3 | 13
4 | 14
5 | 15
6 | 16
7 | 17
8 | 18
9 | 19
10 | 20
(20 rows)
-- table with identity column cannot be altered.
SELECT alter_distributed_table('bigint_identity_column', 'b');
ERROR: cannot complete operation on a table with identity column
-- table with identity column cannot be undistributed.
SELECT undistribute_table('bigint_identity_column');
ERROR: cannot complete operation on a table with identity column
DROP TABLE bigint_identity_column;
-- create a partitioned table for testing.
CREATE TABLE partitioned_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c int
)
PARTITION BY RANGE (c);
CREATE TABLE partitioned_table_1_50 PARTITION OF partitioned_table FOR VALUES FROM (1) TO (50);
CREATE TABLE partitioned_table_50_500 PARTITION OF partitioned_table FOR VALUES FROM (50) TO (1000);
SELECT create_distributed_table('partitioned_table', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
\d partitioned_table
Partitioned table "generated_identities.partitioned_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Partition key: RANGE (c)
Number of partitions: 2 (Use \d+ to list them.)
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\d generated_identities.partitioned_table
Partitioned table "generated_identities.partitioned_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Partition key: RANGE (c)
Number of partitions: 2 (Use \d+ to list them.)
insert into partitioned_table (c) values (1);
insert into partitioned_table (c) SELECT 2;
INSERT INTO partitioned_table (c)
SELECT s FROM generate_series(3,7) s;
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO partitioned_table (c)
SELECT s FROM generate_series(10,20) s;
INSERT INTO partitioned_table (a,c) VALUES (998,998);
INSERT INTO partitioned_table (a,b,c) OVERRIDING SYSTEM VALUE VALUES (999,999,999);
SELECT * FROM partitioned_table ORDER BY c ASC;
a | b | c
---------------------------------------------------------------------
3940649673949185 | 3940649673949185 | 1
3940649673949195 | 3940649673949195 | 2
3940649673949205 | 3940649673949205 | 3
3940649673949215 | 3940649673949215 | 4
3940649673949225 | 3940649673949225 | 5
3940649673949235 | 3940649673949235 | 6
3940649673949245 | 3940649673949245 | 7
10 | 10 | 10
20 | 20 | 11
30 | 30 | 12
40 | 40 | 13
50 | 50 | 14
60 | 60 | 15
70 | 70 | 16
80 | 80 | 17
90 | 90 | 18
100 | 100 | 19
110 | 110 | 20
998 | 120 | 998
999 | 999 | 999
(20 rows)
-- alter table .. alter column .. add is unsupported
ALTER TABLE partitioned_table ALTER COLUMN g ADD GENERATED ALWAYS AS IDENTITY;
ERROR: alter table command is currently unsupported
DETAIL: Only ADD|DROP COLUMN, SET|DROP NOT NULL, SET|DROP DEFAULT, ADD|DROP|VALIDATE CONSTRAINT, SET (), RESET (), ENABLE|DISABLE|NO FORCE|FORCE ROW LEVEL SECURITY, ATTACH|DETACH PARTITION and TYPE subcommands are supported.
-- alter table .. alter column is unsupported
ALTER TABLE partitioned_table ALTER COLUMN b TYPE int;
ERROR: cannot execute ALTER COLUMN command involving identity column
DROP TABLE partitioned_table;
-- create a table for reference table testing.
CREATE TABLE reference_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10) UNIQUE,
c int
);
SELECT create_reference_table('reference_table');
create_reference_table
---------------------------------------------------------------------
(1 row)
\d reference_table
Table "generated_identities.reference_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Indexes:
"reference_table_b_key" UNIQUE CONSTRAINT, btree (b)
\c - - - :worker_1_port
SET search_path TO generated_identities;
\d generated_identities.reference_table
Table "generated_identities.reference_table"
Column | Type | Collation | Nullable | Default
---------------------------------------------------------------------
a | bigint | | not null | generated by default as identity
b | bigint | | not null | generated always as identity
c | integer | | |
Indexes:
"reference_table_b_key" UNIQUE CONSTRAINT, btree (b)
INSERT INTO reference_table (c)
SELECT s FROM generate_series(1,10) s;
--on master
select * from reference_table;
a | b | c
---------------------------------------------------------------------
3940649673949185 | 3940649673949185 | 1
3940649673949195 | 3940649673949195 | 2
3940649673949205 | 3940649673949205 | 3
3940649673949215 | 3940649673949215 | 4
3940649673949225 | 3940649673949225 | 5
3940649673949235 | 3940649673949235 | 6
3940649673949245 | 3940649673949245 | 7
3940649673949255 | 3940649673949255 | 8
3940649673949265 | 3940649673949265 | 9
3940649673949275 | 3940649673949275 | 10
(10 rows)
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO reference_table (c)
SELECT s FROM generate_series(11,20) s;
SELECT * FROM reference_table ORDER BY c ASC;
a | b | c
---------------------------------------------------------------------
3940649673949185 | 3940649673949185 | 1
3940649673949195 | 3940649673949195 | 2
3940649673949205 | 3940649673949205 | 3
3940649673949215 | 3940649673949215 | 4
3940649673949225 | 3940649673949225 | 5
3940649673949235 | 3940649673949235 | 6
3940649673949245 | 3940649673949245 | 7
3940649673949255 | 3940649673949255 | 8
3940649673949265 | 3940649673949265 | 9
3940649673949275 | 3940649673949275 | 10
10 | 10 | 11
20 | 20 | 12
30 | 30 | 13
40 | 40 | 14
50 | 50 | 15
60 | 60 | 16
70 | 70 | 17
80 | 80 | 18
90 | 90 | 19
100 | 100 | 20
(20 rows)
DROP TABLE reference_table;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
-- https://github.com/citusdata/citus/issues/6694
CREATE USER identity_test_user;
GRANT INSERT ON color TO identity_test_user;
GRANT USAGE ON SCHEMA generated_identities TO identity_test_user;
SET ROLE identity_test_user;
SELECT create_distributed_table('color', 'color_id');
ERROR: must be owner of table color
SET ROLE postgres;
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table_concurrently('color', 'color_id');
create_distributed_table_concurrently
---------------------------------------------------------------------
(1 row)
RESET citus.shard_replication_factor;
\c - identity_test_user - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Blue');
\c - postgres - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SET citus.next_shard_id TO 12400000;
DROP TABLE Color;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
) USING columnar;
SELECT create_distributed_table('color', 'color_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO color(color_name) VALUES ('Blue');
\d+ color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity | plain | |
color_name | character varying | | not null | | extended | |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
\c - - - :worker_1_port
SET search_path TO generated_identities;
\d+ color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity | plain | |
color_name | character varying | | not null | | extended | |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
INSERT INTO color(color_name) VALUES ('Red');
-- alter sequence .. restart
ALTER SEQUENCE color_color_id_seq RESTART WITH 1000;
ERROR: Altering a distributed sequence is currently not supported.
-- override system value
INSERT INTO color(color_id, color_name) VALUES (1, 'Red');
ERROR: cannot insert into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) VALUES (NULL, 'Red');
ERROR: cannot insert into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red');
ERROR: duplicate key value violates unique constraint "color_color_id_key_12400000"
DETAIL: Key (color_id)=(1) already exists.
CONTEXT: while executing command on localhost:xxxxx
-- update null or custom value
UPDATE color SET color_id = NULL;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
UPDATE color SET color_id = 1;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
\c - postgres - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
-- alter table .. add column .. GENERATED .. AS IDENTITY
ALTER TABLE color ADD COLUMN color_id BIGINT GENERATED ALWAYS AS IDENTITY;
ERROR: cannot execute ADD COLUMN commands involving identity columns when metadata is synchronized to workers
-- alter sequence .. restart
ALTER SEQUENCE color_color_id_seq RESTART WITH 1000;
ERROR: Altering a distributed sequence is currently not supported.
-- override system value
INSERT INTO color(color_id, color_name) VALUES (1, 'Red');
ERROR: cannot insert into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) VALUES (NULL, 'Red');
ERROR: cannot insert into column "color_id"
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
HINT: Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO color(color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red');
ERROR: duplicate key value violates unique constraint "color_color_id_key_12400000"
DETAIL: Key (color_id)=(1) already exists.
CONTEXT: while executing command on localhost:xxxxx
-- update null or custom value
UPDATE color SET color_id = NULL;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
UPDATE color SET color_id = 1;
ERROR: column "color_id" can only be updated to DEFAULT
DETAIL: Column "color_id" is an identity column defined as GENERATED ALWAYS.
DROP TABLE IF EXISTS test;
CREATE TABLE test (x int, y int, z bigint generated by default as identity);
SELECT create_distributed_table('test', 'x', colocate_with := 'none');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO test VALUES (1,2);
INSERT INTO test SELECT x, y FROM test WHERE x = 1;
SELECT * FROM test;
x | y | z
---------------------------------------------------------------------
1 | 2 | 1
1 | 2 | 2
(2 rows)
DROP SCHEMA generated_identities CASCADE;
DROP USER identity_test_user;
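As the errors in both expected outputs show (the exact wording varies by PostgreSQL version), a GENERATED ALWAYS identity column only accepts an explicit value when OVERRIDING SYSTEM VALUE is given. A minimal sketch with a hypothetical table:

```
-- Sketch only; 'palette' is a hypothetical table.
CREATE TABLE palette (id bigint GENERATED ALWAYS AS IDENTITY, name text);
INSERT INTO palette (id, name) VALUES (100, 'Teal');
-- ERROR: cannot insert a non-DEFAULT value into column "id"
-- HINT:  Use OVERRIDING SYSTEM VALUE to override.
INSERT INTO palette (id, name) OVERRIDING SYSTEM VALUE VALUES (100, 'Teal');  -- accepted
```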

View File

@@ -1303,14 +1303,23 @@ SELECT * FROM multi_extension.print_extension_changes();
| type cluster_clock
(38 rows)
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
11.2devel
11.2.1
(1 row)
-- Snapshot of state at 11.2-2
ALTER EXTENSION citus UPDATE TO '11.2-2';
SELECT * FROM multi_extension.print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
| function worker_adjust_identity_column_seq_ranges(regclass) void
(1 row)
-- Test downgrade to 11.2-1 from 11.2-2
ALTER EXTENSION citus UPDATE TO '11.2-1';
-- ensure no unexpected objects were created outside pg_catalog
SELECT pgio.type, pgio.identity
FROM pg_depend AS pgd,
@@ -1326,6 +1335,7 @@ ORDER BY 1, 2;
view | public.citus_tables
(1 row)
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- see incompatible version errors out
RESET citus.enable_version_checks;
RESET columnar.enable_version_checks;

View File

@@ -238,8 +238,40 @@ ORDER BY
LIMIT 1 OFFSET 1;
ERROR: operation is not allowed on this node
HINT: Connect to the coordinator and run it again.
-- Check that shards of a table with GENERATED columns can be moved.
\c - - - :master_port
SET citus.shard_count TO 4;
SET citus.shard_replication_factor TO 1;
CREATE TABLE mx_table_with_generated_column (a int, b int GENERATED ALWAYS AS ( a + 3 ) STORED, c int);
SELECT create_distributed_table('mx_table_with_generated_column', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- Check that dropped columns are handled properly in a move.
ALTER TABLE mx_table_with_generated_column DROP COLUMN c;
-- Move a shard from worker 1 to worker 2
SELECT
citus_move_shard_placement(shardid, 'localhost', :worker_1_port, 'localhost', :worker_2_port, 'force_logical')
FROM
pg_dist_shard NATURAL JOIN pg_dist_shard_placement
WHERE
logicalrelid = 'mx_table_with_generated_column'::regclass
AND nodeport = :worker_1_port
ORDER BY
shardid
LIMIT 1;
citus_move_shard_placement
---------------------------------------------------------------------
(1 row)
-- Cleanup
\c - - - :master_port
SET client_min_messages TO WARNING;
CALL citus_cleanup_orphaned_resources();
DROP TABLE mx_table_with_generated_column;
DROP TABLE mx_table_1;
DROP TABLE mx_table_2;
DROP TABLE mx_table_3;
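The moved table above defines b as GENERATED ALWAYS AS (a + 3) STORED. As a hedged aside on plain PostgreSQL semantics (the table below is hypothetical and not part of the test), such a column is computed and cannot be assigned directly:

```
-- Sketch only; 't' is a hypothetical table mirroring the definition above.
CREATE TABLE t (a int, b int GENERATED ALWAYS AS (a + 3) STORED);
INSERT INTO t (a) VALUES (1);        -- b is computed as 4
INSERT INTO t (a, b) VALUES (2, 0);
-- ERROR: cannot insert a non-DEFAULT value into column "b"
-- DETAIL: Column "b" is a generated column.
```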

View File

@@ -497,22 +497,22 @@ ORDER BY table_name::text;
SELECT shard_name, table_name, citus_table_type, shard_size FROM citus_shards ORDER BY shard_name::text;
shard_name | table_name | citus_table_type | shard_size
---------------------------------------------------------------------
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220097 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220099 | app_analytics_events_mx | distributed | 0
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220096 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220097 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220098 | app_analytics_events_mx | distributed | 8192
app_analytics_events_mx_1220099 | app_analytics_events_mx | distributed | 8192
articles_hash_mx_1220104 | articles_hash_mx | distributed | 0
articles_hash_mx_1220104 | articles_hash_mx | distributed | 0
articles_hash_mx_1220104 | articles_hash_mx | distributed | 0
@@ -608,22 +608,22 @@ SELECT shard_name, table_name, citus_table_type, shard_size FROM citus_shards OR
citus_mx_test_schema.nation_hash_collation_search_path_1220046 | citus_mx_test_schema.nation_hash_collation_search_path | distributed | 0
citus_mx_test_schema.nation_hash_collation_search_path_1220046 | citus_mx_test_schema.nation_hash_collation_search_path | distributed | 0
citus_mx_test_schema.nation_hash_collation_search_path_1220047 | citus_mx_test_schema.nation_hash_collation_search_path | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220049 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 0
citus_mx_test_schema.nation_hash_composite_types_1220051 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220048 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220049 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220050 | citus_mx_test_schema.nation_hash_composite_types | distributed | 8192
citus_mx_test_schema.nation_hash_composite_types_1220051 | citus_mx_test_schema.nation_hash_composite_types | distributed | 16384
citus_mx_test_schema_join_1.nation_hash_1220032 | citus_mx_test_schema_join_1.nation_hash | distributed | 0
citus_mx_test_schema_join_1.nation_hash_1220032 | citus_mx_test_schema_join_1.nation_hash | distributed | 0
citus_mx_test_schema_join_1.nation_hash_1220032 | citus_mx_test_schema_join_1.nation_hash | distributed | 0
@@ -696,109 +696,109 @@ SELECT shard_name, table_name, citus_table_type, shard_size FROM citus_shards OR
customer_mx_1220084 | customer_mx | reference | 0
customer_mx_1220084 | customer_mx | reference | 0
customer_mx_1220084 | customer_mx | reference | 0
labs_mx_1220102 | labs_mx | distributed | 0
labs_mx_1220102 | labs_mx | distributed | 0
labs_mx_1220102 | labs_mx | distributed | 0
labs_mx_1220102 | labs_mx | distributed | 0
labs_mx_1220102 | labs_mx | distributed | 0
labs_mx_1220102 | labs_mx | distributed | 0
labs_mx_1220102 | labs_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220092 | limit_orders_mx | distributed | 0
limit_orders_mx_1220093 | limit_orders_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220052 | lineitem_mx | distributed | 0
lineitem_mx_1220053 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220054 | lineitem_mx | distributed | 0
lineitem_mx_1220055 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220056 | lineitem_mx | distributed | 0
lineitem_mx_1220057 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220058 | lineitem_mx | distributed | 0
lineitem_mx_1220059 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220060 | lineitem_mx | distributed | 0
lineitem_mx_1220061 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220062 | lineitem_mx | distributed | 0
lineitem_mx_1220063 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220064 | lineitem_mx | distributed | 0
lineitem_mx_1220065 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220066 | lineitem_mx | distributed | 0
lineitem_mx_1220067 | lineitem_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 0
multiple_hash_mx_1220095 | multiple_hash_mx | distributed | 0
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220089 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220090 | mx_ddl_table | distributed | 8192
mx_ddl_table_1220091 | mx_ddl_table | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
labs_mx_1220102 | labs_mx | distributed | 8192
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220092 | limit_orders_mx | distributed | 16384
limit_orders_mx_1220093 | limit_orders_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220052 | lineitem_mx | distributed | 16384
lineitem_mx_1220053 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220054 | lineitem_mx | distributed | 16384
lineitem_mx_1220055 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220056 | lineitem_mx | distributed | 16384
lineitem_mx_1220057 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220058 | lineitem_mx | distributed | 16384
lineitem_mx_1220059 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220060 | lineitem_mx | distributed | 16384
lineitem_mx_1220061 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220062 | lineitem_mx | distributed | 16384
lineitem_mx_1220063 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220064 | lineitem_mx | distributed | 16384
lineitem_mx_1220065 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220066 | lineitem_mx | distributed | 16384
lineitem_mx_1220067 | lineitem_mx | distributed | 16384
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220094 | multiple_hash_mx | distributed | 8192
multiple_hash_mx_1220095 | multiple_hash_mx | distributed | 8192
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220088 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220089 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220090 | mx_ddl_table | distributed | 24576
mx_ddl_table_1220091 | mx_ddl_table | distributed | 24576
nation_hash_1220000 | nation_hash | distributed | 0
nation_hash_1220000 | nation_hash | distributed | 0
nation_hash_1220000 | nation_hash | distributed | 0
@@ -871,77 +871,77 @@ SELECT shard_name, table_name, citus_table_type, shard_size FROM citus_shards OR
nation_mx_1220085 | nation_mx | reference | 0
nation_mx_1220085 | nation_mx | reference | 0
nation_mx_1220085 | nation_mx | reference | 0
objects_mx_1220103 | objects_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220068 | orders_mx | distributed | 0
orders_mx_1220069 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220070 | orders_mx | distributed | 0
orders_mx_1220071 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220072 | orders_mx | distributed | 0
orders_mx_1220073 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220074 | orders_mx | distributed | 0
orders_mx_1220075 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220076 | orders_mx | distributed | 0
orders_mx_1220077 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220078 | orders_mx | distributed | 0
orders_mx_1220079 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220080 | orders_mx | distributed | 0
orders_mx_1220081 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220082 | orders_mx | distributed | 0
orders_mx_1220083 | orders_mx | distributed | 0
objects_mx_1220103 | objects_mx | distributed | 16384
objects_mx_1220103 | objects_mx | distributed | 16384
objects_mx_1220103 | objects_mx | distributed | 16384
objects_mx_1220103 | objects_mx | distributed | 16384
objects_mx_1220103 | objects_mx | distributed | 16384
objects_mx_1220103 | objects_mx | distributed | 16384
objects_mx_1220103 | objects_mx | distributed | 16384
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220068 | orders_mx | distributed | 8192
orders_mx_1220069 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220070 | orders_mx | distributed | 8192
orders_mx_1220071 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220072 | orders_mx | distributed | 8192
orders_mx_1220073 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220074 | orders_mx | distributed | 8192
orders_mx_1220075 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220076 | orders_mx | distributed | 8192
orders_mx_1220077 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220078 | orders_mx | distributed | 8192
orders_mx_1220079 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220080 | orders_mx | distributed | 8192
orders_mx_1220081 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220082 | orders_mx | distributed | 8192
orders_mx_1220083 | orders_mx | distributed | 8192
part_mx_1220086 | part_mx | reference | 0
part_mx_1220086 | part_mx | reference | 0
part_mx_1220086 | part_mx | reference | 0
@@ -950,14 +950,14 @@ SELECT shard_name, table_name, citus_table_type, shard_size FROM citus_shards OR
part_mx_1220086 | part_mx | reference | 0
part_mx_1220086 | part_mx | reference | 0
part_mx_1220086 | part_mx | reference | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 0
researchers_mx_1220101 | researchers_mx | distributed | 0
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220100 | researchers_mx | distributed | 8192
researchers_mx_1220101 | researchers_mx | distributed | 8192
supplier_mx_1220087 | supplier_mx | reference | 0
supplier_mx_1220087 | supplier_mx | reference | 0
supplier_mx_1220087 | supplier_mx | reference | 0
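All of the updated shard_size values above are whole multiples of the 8 kB PostgreSQL block size. As an aside, the two stock sizing functions report different things, which is easy to check by hand against one of the shard names listed above; the query below is illustrative only (run on the worker that holds the shard) and is not part of the test:
-- pg_relation_size counts only the main data fork, while pg_total_relation_size also
-- includes indexes and TOAST, so even a lightly populated shard reports whole 8192-byte pages.
SELECT pg_relation_size('mx_ddl_table_1220088')       AS main_fork_bytes,
       pg_total_relation_size('mx_ddl_table_1220088') AS total_bytes;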


@@ -230,6 +230,7 @@ ORDER BY 1;
function truncate_local_data_after_distributing_table(regclass)
function undistribute_table(regclass,boolean)
function update_distributed_table_colocation(regclass,text)
function worker_adjust_identity_column_seq_ranges(regclass)
function worker_apply_inter_shard_ddl_command(bigint,text,bigint,text,text)
function worker_apply_sequence_command(text)
function worker_apply_sequence_command(text,regtype)
@@ -318,5 +319,5 @@ ORDER BY 1;
view citus_stat_statements
view pg_dist_shard_placement
view time_partitions
(310 rows)
(311 rows)


@@ -142,8 +142,90 @@ SELECT COUNT(*) FROM worker_split_copy_test."test !/ \n _""dist_123_table_810700
(1 row)
-- END: List updated row count for local target shards.
-- Check that GENERATED columns are handled properly in a shard split operation.
\c - - - :master_port
SET search_path TO worker_split_copy_test;
SET citus.shard_count TO 2;
SET citus.shard_replication_factor TO 1;
SET citus.next_shard_id TO 81080000;
-- BEGIN: Create distributed table and insert data.
CREATE TABLE worker_split_copy_test.dist_table_with_generated_col(id int primary key, new_id int GENERATED ALWAYS AS ( id + 3 ) stored, value char, col_todrop int);
SELECT create_distributed_table('dist_table_with_generated_col', 'id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- Check that dropped columns are filtered out in COPY command.
ALTER TABLE dist_table_with_generated_col DROP COLUMN col_todrop;
INSERT INTO dist_table_with_generated_col (id, value) (SELECT g.id, 'N' FROM generate_series(1, 1000) AS g(id));
-- END: Create distributed table and insert data.
-- BEGIN: Create target shards in Worker1 and Worker2 for a 2-way split copy.
\c - - - :worker_1_port
CREATE TABLE worker_split_copy_test.dist_table_with_generated_col_81080015(id int primary key, new_id int GENERATED ALWAYS AS ( id + 3 ) stored, value char);
\c - - - :worker_2_port
CREATE TABLE worker_split_copy_test.dist_table_with_generated_col_81080016(id int primary key, new_id int GENERATED ALWAYS AS ( id + 3 ) stored, value char);
-- BEGIN: List row count for source shard and target shard in Worker1.
\c - - - :worker_1_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080000;
count
---------------------------------------------------------------------
510
(1 row)
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080015;
count
---------------------------------------------------------------------
0
(1 row)
-- BEGIN: List row count for target shard in Worker2.
\c - - - :worker_2_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080016;
count
---------------------------------------------------------------------
0
(1 row)
\c - - - :worker_1_port
SELECT * from worker_split_copy(
81080000, -- source shard id to copy
'id',
ARRAY[
-- split copy info for split child 1
ROW(81080015, -- destination shard id
-2147483648, -- split range begin
-1073741824, -- split range end
:worker_1_node)::pg_catalog.split_copy_info,
-- split copy info for split child 2
ROW(81080016, -- destination shard id
-1073741823, -- split range begin
-1, -- split range end
:worker_2_node)::pg_catalog.split_copy_info
]
);
worker_split_copy
---------------------------------------------------------------------
(1 row)
\c - - - :worker_1_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080015;
count
---------------------------------------------------------------------
247
(1 row)
\c - - - :worker_2_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080016;
count
---------------------------------------------------------------------
263
(1 row)
-- BEGIN: CLEANUP.
\c - - - :master_port
SET client_min_messages TO WARNING;
CALL citus_cleanup_orphaned_resources();
DROP SCHEMA worker_split_copy_test CASCADE;
-- END: CLEANUP.
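The 247 and 263 rows reported above add up to the 510 rows of source shard 81080000; each row lands in whichever child range contains its hashed distribution key. A rough manual re-count is sketched below, assuming worker_hash() is the hash Citus applies to the distribution column of hash-distributed tables; it is illustrative only and would have to run on worker 1 before the cleanup:
SELECT count(*) FILTER (WHERE worker_hash(id) BETWEEN -2147483648 AND -1073741824) AS rows_for_81080015,
       count(*) FILTER (WHERE worker_hash(id) BETWEEN -1073741823 AND -1)          AS rows_for_81080016
FROM worker_split_copy_test.dist_table_with_generated_col_81080000;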


@@ -53,7 +53,7 @@ SELECT create_distributed_table('sensors', 'measureid', colocate_with:='none');
CREATE TABLE reference_table (measureid integer PRIMARY KEY);
SELECT create_reference_table('reference_table');
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY);
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY, genid integer GENERATED ALWAYS AS ( measureid + 3 ) stored, value varchar(44), col_todrop integer);
CLUSTER colocated_dist_table USING colocated_dist_table_pkey;
SELECT create_distributed_table('colocated_dist_table', 'measureid', colocate_with:='sensors');
@@ -70,9 +70,11 @@ ALTER TABLE sensors ADD CONSTRAINT fkey_table_to_dist FOREIGN KEY (measureid) RE
-- BEGIN : Load data into tables.
INSERT INTO reference_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table(measureid, value, col_todrop) SELECT i,'Value',i FROM generate_series(0,1000)i;
INSERT INTO sensors SELECT i, '2020-01-05', '{}', 11011.10, 'A', 'I <3 Citus' FROM generate_series(0,1000)i;
ALTER TABLE colocated_dist_table DROP COLUMN col_todrop;
SELECT COUNT(*) FROM sensors;
SELECT COUNT(*) FROM reference_table;
SELECT COUNT(*) FROM colocated_dist_table;


@@ -49,7 +49,7 @@ SELECT create_distributed_table('sensors', 'measureid', colocate_with:='none');
CREATE TABLE reference_table (measureid integer PRIMARY KEY);
SELECT create_reference_table('reference_table');
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY);
CREATE TABLE colocated_dist_table (measureid integer PRIMARY KEY, genid integer GENERATED ALWAYS AS ( measureid + 3 ) stored, value varchar(44), col_todrop integer);
CLUSTER colocated_dist_table USING colocated_dist_table_pkey;
SELECT create_distributed_table('colocated_dist_table', 'measureid', colocate_with:='sensors');
@@ -66,9 +66,11 @@ ALTER TABLE sensors ADD CONSTRAINT fkey_table_to_dist FOREIGN KEY (measureid) RE
-- BEGIN : Load data into tables.
INSERT INTO reference_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table SELECT i FROM generate_series(0,1000)i;
INSERT INTO colocated_dist_table(measureid, value, col_todrop) SELECT i,'Value',i FROM generate_series(0,1000)i;
INSERT INTO sensors SELECT i, '2020-01-05', '{}', 11011.10, 'A', 'I <3 Citus' FROM generate_series(0,1000)i;
ALTER TABLE colocated_dist_table DROP COLUMN col_todrop;
SELECT COUNT(*) FROM sensors;
SELECT COUNT(*) FROM reference_table;
SELECT COUNT(*) FROM colocated_dist_table;


@@ -1,266 +1,235 @@
-- This test file has an alternative output because error messages vary for PG13
SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int <= 13 AS server_version_le_13;
CREATE SCHEMA generated_identities;
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SET citus.shard_replication_factor TO 1;
SELECT 1 from citus_add_node('localhost', :master_port, groupId=>0);
DROP TABLE IF EXISTS generated_identities_test;
-- create a partitioned table for testing.
CREATE TABLE generated_identities_test (
a int CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY,
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c smallint GENERATED BY DEFAULT AS IDENTITY,
d serial,
e bigserial,
f smallserial,
g int
)
PARTITION BY RANGE (a);
CREATE TABLE generated_identities_test_1_5 PARTITION OF generated_identities_test FOR VALUES FROM (1) TO (5);
CREATE TABLE generated_identities_test_5_50 PARTITION OF generated_identities_test FOR VALUES FROM (5) TO (50);
-- local tables
SELECT citus_add_local_table_to_metadata('generated_identities_test');
\d generated_identities_test
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT undistribute_table('generated_identities_test');
SELECT citus_remove_node('localhost', :master_port);
SELECT create_distributed_table('generated_identities_test', 'a');
\d generated_identities_test
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
insert into generated_identities_test (g) values (1);
insert into generated_identities_test (g) SELECT 2;
INSERT INTO generated_identities_test (g)
SELECT s FROM generate_series(3,7) s;
SELECT * FROM generated_identities_test ORDER BY 1;
SELECT undistribute_table('generated_identities_test');
SELECT * FROM generated_identities_test ORDER BY 1;
\d generated_identities_test
\c - - - :worker_1_port
\d generated_identities.generated_identities_test
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO generated_identities_test (g)
SELECT s FROM generate_series(8,10) s;
SELECT * FROM generated_identities_test ORDER BY 1;
-- distributed table
SELECT create_distributed_table('generated_identities_test', 'a');
-- alter table .. alter column .. add is unsupported
ALTER TABLE generated_identities_test ALTER COLUMN g ADD GENERATED ALWAYS AS IDENTITY;
-- alter table .. alter column is unsupported
ALTER TABLE generated_identities_test ALTER COLUMN b TYPE int;
SELECT alter_distributed_table('generated_identities_test', 'g');
SELECT alter_distributed_table('generated_identities_test', 'b');
SELECT alter_distributed_table('generated_identities_test', 'c');
SELECT undistribute_table('generated_identities_test');
SELECT * FROM generated_identities_test ORDER BY g;
-- reference table
DROP TABLE generated_identities_test;
CREATE TABLE generated_identities_test (
a int GENERATED BY DEFAULT AS IDENTITY,
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c smallint GENERATED BY DEFAULT AS IDENTITY,
d serial,
e bigserial,
f smallserial,
g int
-- smallint identity column can not be distributed
CREATE TABLE smallint_identity_column (
a smallint GENERATED BY DEFAULT AS IDENTITY
);
SELECT create_distributed_table('smallint_identity_column', 'a');
SELECT create_distributed_table_concurrently('smallint_identity_column', 'a');
SELECT create_reference_table('smallint_identity_column');
SELECT citus_add_local_table_to_metadata('smallint_identity_column');
SELECT create_reference_table('generated_identities_test');
DROP TABLE smallint_identity_column;
\d generated_identities_test
-- int identity column can not be distributed
CREATE TABLE int_identity_column (
a int GENERATED BY DEFAULT AS IDENTITY
);
SELECT create_distributed_table('int_identity_column', 'a');
SELECT create_distributed_table_concurrently('int_identity_column', 'a');
SELECT create_reference_table('int_identity_column');
SELECT citus_add_local_table_to_metadata('int_identity_column');
DROP TABLE int_identity_column;
RESET citus.shard_replication_factor;
CREATE TABLE bigint_identity_column (
a bigint GENERATED BY DEFAULT AS IDENTITY,
b int
);
SELECT citus_add_local_table_to_metadata('bigint_identity_column');
DROP TABLE bigint_identity_column;
CREATE TABLE bigint_identity_column (
a bigint GENERATED BY DEFAULT AS IDENTITY,
b int
);
SELECT create_distributed_table('bigint_identity_column', 'a');
\d bigint_identity_column
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\d generated_identities.generated_identities_test
INSERT INTO bigint_identity_column (b)
SELECT s FROM generate_series(1,10) s;
\d generated_identities.bigint_identity_column
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO generated_identities_test (g)
INSERT INTO bigint_identity_column (b)
SELECT s FROM generate_series(11,20) s;
SELECT * FROM generated_identities_test ORDER BY g;
SELECT * FROM bigint_identity_column ORDER BY B ASC;
SELECT undistribute_table('generated_identities_test');
-- table with identity column cannot be altered.
SELECT alter_distributed_table('bigint_identity_column', 'b');
\d generated_identities_test
-- table with identity column cannot be undistributed.
SELECT undistribute_table('bigint_identity_column');
DROP TABLE bigint_identity_column;
-- create a partitioned table for testing.
CREATE TABLE partitioned_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10),
c int
)
PARTITION BY RANGE (c);
CREATE TABLE partitioned_table_1_50 PARTITION OF partitioned_table FOR VALUES FROM (1) TO (50);
CREATE TABLE partitioned_table_50_500 PARTITION OF partitioned_table FOR VALUES FROM (50) TO (1000);
SELECT create_distributed_table('partitioned_table', 'a');
\d partitioned_table
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\d generated_identities.generated_identities_test
\d generated_identities.partitioned_table
insert into partitioned_table (c) values (1);
insert into partitioned_table (c) SELECT 2;
INSERT INTO partitioned_table (c)
SELECT s FROM generate_series(3,7) s;
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO partitioned_table (c)
SELECT s FROM generate_series(10,20) s;
INSERT INTO partitioned_table (a,c) VALUES (998,998);
INSERT INTO partitioned_table (a,b,c) OVERRIDING SYSTEM VALUE VALUES (999,999,999);
SELECT * FROM partitioned_table ORDER BY c ASC;
-- alter table .. alter column .. add is unsupported
ALTER TABLE partitioned_table ALTER COLUMN g ADD GENERATED ALWAYS AS IDENTITY;
-- alter table .. alter column is unsupported
ALTER TABLE partitioned_table ALTER COLUMN b TYPE int;
DROP TABLE partitioned_table;
-- create a table for reference table testing.
CREATE TABLE reference_table (
a bigint CONSTRAINT myconname GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
b bigint GENERATED ALWAYS AS IDENTITY (START WITH 10 INCREMENT BY 10) UNIQUE,
c int
);
SELECT create_reference_table('reference_table');
\d reference_table
\c - - - :worker_1_port
SET search_path TO generated_identities;
\d generated_identities.reference_table
INSERT INTO reference_table (c)
SELECT s FROM generate_series(1,10) s;
--on master
select * from reference_table;
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO reference_table (c)
SELECT s FROM generate_series(11,20) s;
SELECT * FROM reference_table ORDER BY c ASC;
DROP TABLE reference_table;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
-- https://github.com/citusdata/citus/issues/6694
CREATE USER identity_test_user;
GRANT INSERT ON color TO identity_test_user;
GRANT USAGE ON SCHEMA generated_identities TO identity_test_user;
SET ROLE identity_test_user;
SELECT create_distributed_table('color', 'color_id');
SET ROLE postgres;
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table_concurrently('color', 'color_id');
RESET citus.shard_replication_factor;
\c - identity_test_user - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Blue');
\c - postgres - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SET citus.next_shard_id TO 12400000;
DROP TABLE Color;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
) USING columnar;
SELECT create_distributed_table('color', 'color_id');
INSERT INTO color(color_name) VALUES ('Blue');
\d+ color
\c - - - :worker_1_port
SET search_path TO generated_identities;
\d+ color
INSERT INTO color(color_name) VALUES ('Red');
-- alter sequence .. restart
ALTER SEQUENCE color_color_id_seq RESTART WITH 1000;
-- override system value
INSERT INTO color(color_id, color_name) VALUES (1, 'Red');
INSERT INTO color(color_id, color_name) VALUES (NULL, 'Red');
INSERT INTO color(color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red');
-- update null or custom value
UPDATE color SET color_id = NULL;
UPDATE color SET color_id = 1;
\c - postgres - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
-- alter table .. add column .. GENERATED .. AS IDENTITY
DROP TABLE IF EXISTS color;
CREATE TABLE color (
color_name VARCHAR NOT NULL
);
SELECT create_distributed_table('color', 'color_name');
ALTER TABLE color ADD COLUMN color_id BIGINT GENERATED ALWAYS AS IDENTITY;
INSERT INTO color(color_name) VALUES ('Red');
ALTER TABLE color ADD COLUMN color_id_1 BIGINT GENERATED ALWAYS AS IDENTITY;
DROP TABLE color;
-- insert data from workers
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
SELECT create_distributed_table('color', 'color_id');
-- alter sequence .. restart
ALTER SEQUENCE color_color_id_seq RESTART WITH 1000;
-- override system value
INSERT INTO color(color_id, color_name) VALUES (1, 'Red');
INSERT INTO color(color_id, color_name) VALUES (NULL, 'Red');
INSERT INTO color(color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red');
-- update null or custom value
UPDATE color SET color_id = NULL;
UPDATE color SET color_id = 1;
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT undistribute_table('color');
SELECT create_distributed_table('color', 'color_id');
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
SELECT count(*) from color;
-- modify sequence & alter table
DROP TABLE color;
CREATE TABLE color (
color_id BIGINT GENERATED ALWAYS AS IDENTITY UNIQUE,
color_name VARCHAR NOT NULL
);
SELECT create_distributed_table('color', 'color_id');
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT undistribute_table('color');
ALTER SEQUENCE color_color_id_seq RENAME TO myseq;
SELECT create_distributed_table('color', 'color_id');
\ds+ myseq
\ds+ color_color_id_seq
\d color
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\ds+ myseq
\ds+ color_color_id_seq
\d color
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
ALTER SEQUENCE myseq RENAME TO color_color_id_seq;
\ds+ myseq
\ds+ color_color_id_seq
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\ds+ myseq
\ds+ color_color_id_seq
\d color
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
SELECT alter_distributed_table('co23423lor', shard_count := 6);
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :worker_1_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
\ds+ color_color_id_seq
INSERT INTO color(color_name) VALUES ('Red');
\c - - - :master_port
SET search_path TO generated_identities;
SET client_min_messages to ERROR;
DROP TABLE IF EXISTS test;
CREATE TABLE test (x int, y int, z bigint generated by default as identity);
SELECT create_distributed_table('test', 'x', colocate_with := 'none');
INSERT INTO test VALUES (1,2);
INSERT INTO test SELECT x, y FROM test WHERE x = 1;
SELECT * FROM test;
DROP SCHEMA generated_identities CASCADE;
DROP USER identity_test_user;
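As background for the OVERRIDING SYSTEM VALUE inserts exercised above: PostgreSQL rejects an explicit value for a GENERATED ALWAYS AS IDENTITY column unless the statement opts in, which is exactly what those statements probe. A minimal standalone sketch in plain PostgreSQL (the table name color_demo is made up for illustration):
CREATE TABLE color_demo (
    color_id bigint GENERATED ALWAYS AS IDENTITY,
    color_name varchar NOT NULL
);
INSERT INTO color_demo (color_id, color_name) VALUES (1, 'Red');                         -- rejected: column is GENERATED ALWAYS
INSERT INTO color_demo (color_id, color_name) OVERRIDING SYSTEM VALUE VALUES (1, 'Red'); -- accepted with the explicit override
INSERT INTO color_demo (color_name) VALUES ('Blue');                                     -- color_id comes from the identity sequence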


@@ -563,11 +563,17 @@ RESET client_min_messages;
SELECT * FROM multi_extension.print_extension_changes();
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- show running version
SHOW citus.version;
-- Snapshot of state at 11.2-2
ALTER EXTENSION citus UPDATE TO '11.2-2';
SELECT * FROM multi_extension.print_extension_changes();
-- Test downgrade to 11.2-1 from 11.2-2
ALTER EXTENSION citus UPDATE TO '11.2-1';
-- ensure no unexpected objects were created outside pg_catalog
SELECT pgio.type, pgio.identity
FROM pg_depend AS pgd,
@@ -579,6 +585,8 @@ WHERE pgd.refclassid = 'pg_extension'::regclass AND
pgio.schema NOT IN ('pg_catalog', 'citus', 'citus_internal', 'test', 'columnar', 'columnar_internal')
ORDER BY 1, 2;
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- see incompatible version errors out
RESET citus.enable_version_checks;
RESET columnar.enable_version_checks;


@@ -151,8 +151,34 @@ ORDER BY
shardid
LIMIT 1 OFFSET 1;
-- Check that shards of a table with GENERATED columns can be moved.
\c - - - :master_port
SET citus.shard_count TO 4;
SET citus.shard_replication_factor TO 1;
CREATE TABLE mx_table_with_generated_column (a int, b int GENERATED ALWAYS AS ( a + 3 ) STORED, c int);
SELECT create_distributed_table('mx_table_with_generated_column', 'a');
-- Check that dropped columns are handled properly in a move.
ALTER TABLE mx_table_with_generated_column DROP COLUMN c;
-- Move a shard from worker 1 to worker 2
SELECT
citus_move_shard_placement(shardid, 'localhost', :worker_1_port, 'localhost', :worker_2_port, 'force_logical')
FROM
pg_dist_shard NATURAL JOIN pg_dist_shard_placement
WHERE
logicalrelid = 'mx_table_with_generated_column'::regclass
AND nodeport = :worker_1_port
ORDER BY
shardid
LIMIT 1;
-- Cleanup
\c - - - :master_port
SET client_min_messages TO WARNING;
CALL citus_cleanup_orphaned_resources();
DROP TABLE mx_table_with_generated_column;
DROP TABLE mx_table_1;
DROP TABLE mx_table_2;
DROP TABLE mx_table_3;


@@ -110,8 +110,66 @@ SELECT COUNT(*) FROM worker_split_copy_test."test !/ \n _""dist_123_table_810700
SELECT COUNT(*) FROM worker_split_copy_test."test !/ \n _""dist_123_table_81070016";
-- END: List updated row count for local target shards.
-- Check that GENERATED columns are handled properly in a shard split operation.
\c - - - :master_port
SET search_path TO worker_split_copy_test;
SET citus.shard_count TO 2;
SET citus.shard_replication_factor TO 1;
SET citus.next_shard_id TO 81080000;
-- BEGIN: Create distributed table and insert data.
CREATE TABLE worker_split_copy_test.dist_table_with_generated_col(id int primary key, new_id int GENERATED ALWAYS AS ( id + 3 ) stored, value char, col_todrop int);
SELECT create_distributed_table('dist_table_with_generated_col', 'id');
-- Check that dropped columns are filtered out in COPY command.
ALTER TABLE dist_table_with_generated_col DROP COLUMN col_todrop;
INSERT INTO dist_table_with_generated_col (id, value) (SELECT g.id, 'N' FROM generate_series(1, 1000) AS g(id));
-- END: Create distributed table and insert data.
-- BEGIN: Create target shards in Worker1 and Worker2 for a 2-way split copy.
\c - - - :worker_1_port
CREATE TABLE worker_split_copy_test.dist_table_with_generated_col_81080015(id int primary key, new_id int GENERATED ALWAYS AS ( id + 3 ) stored, value char);
\c - - - :worker_2_port
CREATE TABLE worker_split_copy_test.dist_table_with_generated_col_81080016(id int primary key, new_id int GENERATED ALWAYS AS ( id + 3 ) stored, value char);
-- BEGIN: List row count for source shard and target shard in Worker1.
\c - - - :worker_1_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080000;
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080015;
-- BEGIN: List row count for target shard in Worker2.
\c - - - :worker_2_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080016;
\c - - - :worker_1_port
SELECT * from worker_split_copy(
81080000, -- source shard id to copy
'id',
ARRAY[
-- split copy info for split child 1
ROW(81080015, -- destination shard id
-2147483648, -- split range begin
-1073741824, -- split range end
:worker_1_node)::pg_catalog.split_copy_info,
-- split copy info for split child 2
ROW(81080016, -- destination shard id
-1073741823, -- split range begin
-1, -- split range end
:worker_2_node)::pg_catalog.split_copy_info
]
);
\c - - - :worker_1_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080015;
\c - - - :worker_2_port
SELECT COUNT(*) FROM worker_split_copy_test.dist_table_with_generated_col_81080016;
-- BEGIN: CLEANUP.
\c - - - :master_port
SET client_min_messages TO WARNING;
CALL citus_cleanup_orphaned_resources();
DROP SCHEMA worker_split_copy_test CASCADE;
-- END: CLEANUP.