Compare commits

...

23 Commits

Author SHA1 Message Date
Halil Ozan Akgül 85a87af11c Update CHANGELOG for 10.0.2
(cherry picked from commit c2a9706203)

 Conflicts:
	CHANGELOG.md
2021-03-03 17:26:26 +03:00
Hanefi Onaldi 115fa950d3 Do not use security flags by default (#4770)
(cherry picked from commit 697bbbd3c6)
2021-03-03 13:20:05 +03:00
Naisila Puka 445291d94b Reimplement citus_update_table_statistics to detect dist. deadlocks (#4752)
* Reimplement citus_update_table_statistics

* Update stats for the given table, not the colocation group

* Add tests for reimplemented citus_update_table_statistics

* Use coordinated transaction, merge with citus_shard_sizes functions

* Update the old master_update_table_statistics as well

(cherry picked from commit 2f30614fe3)
2021-03-03 11:41:31 +03:00
Hanefi Onaldi 28f1c2129d Add security flags in configure scripts (#4760)
(cherry picked from commit f87107eb6b)
2021-03-03 11:41:00 +03:00
Marco Slot 205b8ec70a Normalize the ConvertTable notices
(cherry picked from commit dca615c5aa)
2021-03-03 11:40:38 +03:00
Halil Ozan Akgul 6fa25d73be Bump version to 10.0.2 2021-03-01 17:04:24 +03:00
SaitTalhaNisanci bfb1ca6d0d Use translated vars in postgres 13 as well (#4746)
* Use translated vars in postgres 13 as well

Postgres 13 removed translated vars, so we had special logic for PG 13.
However, that logic had a bug, so now we copy the translated vars before
Postgres deletes them. This also simplifies the logic.

* fix rtoffset with pg >= 13

(cherry picked from commit feee25dfbd)
2021-03-01 15:18:32 +03:00
Halil Ozan Akgul b355f0d9a2 Adds GRANT for public to citus_tables
(cherry picked from commit 5c5cb200f7)
2021-03-01 15:15:34 +03:00
Önder Kalacı fdcb6ead43 Prevent cross join without any target list entries (#4750)
/*
 * The physical planner assumes that all worker queries have
 * target list entries, based on the fact that at least the columns
 * in the JOINs have to be on the target list. However, there is
 * an exception: a cartesian product join where no additional
 * target list entries belong to one side of the JOIN. Once we
 * support cartesian product joins, we should remove this error.
 */

(cherry picked from commit 0fe26a216c)
2021-03-01 15:13:26 +03:00
Onur Tirtir 3fcb011b67 Grant read access for columnar metadata tables to unprivileged user
(cherry picked from commit 54ac924bef)
2021-03-01 15:02:57 +03:00
Halil Ozan Akgul 8228815b38 Add 10.0-2 schema version
(cherry-picked from dcc0207605)
2021-03-01 14:58:41 +03:00
Onur Tirtir 270234c7ff Ensure table owner when using alter_columnar_table_set/alter_columnar_table_reset (#4748)
(cherry picked from commit 5ed954844c)
2021-03-01 14:38:19 +03:00
Naisila Puka 3131d3e3c5 Preserve colocation with procedures in alter_distributed_table (#4743)
(cherry picked from commit 5ebd4eac7f)
2021-03-01 14:36:52 +03:00
Hanefi Onaldi a7f9dfc3f0 Fix flaky test
(cherry picked from commit 5aff18b573)
2021-03-01 13:18:22 +03:00
Hanefi Onaldi 049cd55346 Remove length limitations for table renames
(cherry picked from commit 9a792ef841)
2021-03-01 13:18:05 +03:00
Hanefi Onaldi 27ecb5cde2 Failing long table name tests
(cherry picked from commit 7bebeb872d)
2021-03-01 13:17:48 +03:00
Naisila Puka fc08ec203f Fix insert query with CTEs/sublinks/subqueries etc (#4700)
* Fix insert query with CTE

* Add more cases with deferred pruning but false fast path

* Add more tests

* Better readability with if statements

(cherry picked from commit dbb88f6f8b)
2021-03-01 12:16:40 +03:00
Hadi Moshayedi 495470d291 Fix alignment issue in DatumToBytea
(cherry picked from commit 2fca5ff3b5)
2021-03-01 12:07:46 +03:00
SaitTalhaNisanci 39a142b4d9 Use PROCESS_UTILITY_QUERY in utility calls
When we use PROCESS_UTILITY_TOPLEVEL, it causes problems when
combined with other extensions such as pg_audit. With this commit we use
PROCESS_UTILITY_QUERY in the codebase to fix those problems.

(cherry picked from commit dcf54eaf2a)
2021-03-01 11:49:44 +03:00
Onur Tirtir ca4b529751 Bump version to 10.0.1 2021-02-19 12:05:56 +03:00
Onur Tirtir e48f5d804d Update CHANGELOG for 10.0.1
(cherry picked from commit 9031a22e20)

 Conflicts:
	CHANGELOG.md
2021-02-19 12:05:49 +03:00
Marco Slot 85e2c6b523 Rewrite time_partitions join clause to avoid smallint[] operator
(cherry picked from commit 972a8bc0b7)
2021-02-19 11:25:00 +03:00
Onur Tirtir 2a390b4c1d Bump Citus to 10.0.0 2021-02-16 14:39:24 +03:00
89 changed files with 3526 additions and 601 deletions


@ -1,3 +1,33 @@
### citus v10.0.2 (March 3, 2021) ###
* Adds a configure flag to enforce security
* Fixes a bug due to cross join without target list
* Fixes a bug with `UNION ALL` on PG 13
* Fixes a compatibility issue with pg_audit in utility calls
* Fixes insert query with CTEs/sublinks/subqueries etc
* Grants `SELECT` permission on `citus_tables` view to `public`
* Grants `SELECT` permission on columnar metadata tables to `public`
* Improves `citus_update_table_statistics` and provides distributed deadlock
detection
* Preserves colocation with procedures in `alter_distributed_table`
* Prevents using `alter_columnar_table_set` and `alter_columnar_table_reset`
on a columnar table not owned by the user
* Removes limits around long table names
### citus v10.0.1 (February 19, 2021) ###
* Fixes an issue in creation of `pg_catalog.time_partitions` view
### citus v10.0.0 (February 16, 2021) ###
* Adds support for per-table option for columnar storage


@ -86,6 +86,7 @@ endif
# Add options passed to configure or computed therein, to CFLAGS/CPPFLAGS/...
override CFLAGS += @CFLAGS@ @CITUS_CFLAGS@
override BITCODE_CFLAGS := $(BITCODE_CFLAGS) @CITUS_BITCODE_CFLAGS@
ifneq ($(GIT_VERSION),)
override CFLAGS += -DGIT_VERSION=\"$(GIT_VERSION)\"
endif

configure (vendored)

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 10.0devel.
# Generated by GNU Autoconf 2.69 for Citus 10.0.2.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='10.0devel'
PACKAGE_STRING='Citus 10.0devel'
PACKAGE_VERSION='10.0.2'
PACKAGE_STRING='Citus 10.0.2'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -628,8 +628,10 @@ POSTGRES_BUILDDIR
POSTGRES_SRCDIR
CITUS_LDFLAGS
CITUS_CPPFLAGS
CITUS_BITCODE_CFLAGS
CITUS_CFLAGS
GIT_BIN
with_security_flags
with_zstd
with_lz4
EGREP
@ -696,6 +698,7 @@ with_libcurl
with_reports_hostname
with_lz4
with_zstd
with_security_flags
'
ac_precious_vars='build_alias
host_alias
@ -1258,7 +1261,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 10.0devel to adapt to many kinds of systems.
\`configure' configures Citus 10.0.2 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1320,7 +1323,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 10.0devel:";;
short | recursive ) echo "Configuration of Citus 10.0.2:";;
esac
cat <<\_ACEOF
@ -1342,6 +1345,7 @@ Optional Packages:
and update checks
--without-lz4 do not use lz4
--without-zstd do not use zstd
--with-security-flags use security flags
Some influential environment variables:
PG_CONFIG Location to find pg_config for target PostgreSQL installation
@ -1422,7 +1426,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 10.0devel
Citus configure 10.0.2
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1905,7 +1909,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 10.0devel, which was
It was created by Citus $as_me 10.0.2, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -4346,6 +4350,48 @@ if test x"$citusac_cv_prog_cc_cflags__Werror_return_type" = x"yes"; then
CITUS_CFLAGS="$CITUS_CFLAGS -Werror=return-type"
fi
# Security flags
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We do not enforce the following flag because it is only available on GCC>=8
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC supports -fstack-clash-protection" >&5
$as_echo_n "checking whether $CC supports -fstack-clash-protection... " >&6; }
if ${citusac_cv_prog_cc_cflags__fstack_clash_protection+:} false; then :
$as_echo_n "(cached) " >&6
else
citusac_save_CFLAGS=$CFLAGS
flag=-fstack-clash-protection
case $flag in -Wno*)
flag=-W$(echo $flag | cut -c 6-)
esac
CFLAGS="$citusac_save_CFLAGS $flag"
ac_save_c_werror_flag=$ac_c_werror_flag
ac_c_werror_flag=yes
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
citusac_cv_prog_cc_cflags__fstack_clash_protection=yes
else
citusac_cv_prog_cc_cflags__fstack_clash_protection=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
ac_c_werror_flag=$ac_save_c_werror_flag
CFLAGS="$citusac_save_CFLAGS"
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $citusac_cv_prog_cc_cflags__fstack_clash_protection" >&5
$as_echo "$citusac_cv_prog_cc_cflags__fstack_clash_protection" >&6; }
if test x"$citusac_cv_prog_cc_cflags__fstack_clash_protection" = x"yes"; then
CITUS_CFLAGS="$CITUS_CFLAGS -fstack-clash-protection"
fi
#
# --enable-coverage enables generation of code coverage metrics with gcov
@ -4493,8 +4539,8 @@ if test "$version_num" != '11'; then
$as_echo "#define HAS_TABLEAM 1" >>confdefs.h
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: postgres version does not support table access methodds" >&5
$as_echo "$as_me: postgres version does not support table access methodds" >&6;}
{ $as_echo "$as_me:${as_lineno-$LINENO}: postgres version does not support table access methods" >&5
$as_echo "$as_me: postgres version does not support table access methods" >&6;}
fi;
# Require lz4 & zstd only if we are compiling columnar
@ -4687,6 +4733,55 @@ fi
fi # test "$HAS_TABLEAM" == 'yes'
# Check whether --with-security-flags was given.
if test "${with_security_flags+set}" = set; then :
withval=$with_security_flags;
case $withval in
yes)
:
;;
no)
:
;;
*)
as_fn_error $? "no argument expected for --with-security-flags option" "$LINENO" 5
;;
esac
else
with_security_flags=no
fi
if test "$with_security_flags" = yes; then
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We always want to have some compiler flags for security concerns.
SECURITY_CFLAGS="-fstack-protector-strong -D_FORTIFY_SOURCE=2 -O2 -z noexecstack -fpic -shared -Wl,-z,relro -Wl,-z,now -Wformat -Wformat-security -Werror=format-security"
CITUS_CFLAGS="$CITUS_CFLAGS $SECURITY_CFLAGS"
{ $as_echo "$as_me:${as_lineno-$LINENO}: Blindly added security flags for linker: $SECURITY_CFLAGS" >&5
$as_echo "$as_me: Blindly added security flags for linker: $SECURITY_CFLAGS" >&6;}
# We always want to have some clang flags for security concerns.
# This doesn't include "-Wl,-z,relro -Wl,-z,now" on purpose, because bitcode is not linked.
# This doesn't include -fsanitize=cfi because it breaks builds on many distros including
# Debian/Buster, Debian/Stretch, Ubuntu/Bionic, Ubuntu/Xenial and EL7.
SECURITY_BITCODE_CFLAGS="-fsanitize=safe-stack -fstack-protector-strong -flto -fPIC -Wformat -Wformat-security -Werror=format-security"
CITUS_BITCODE_CFLAGS="$CITUS_BITCODE_CFLAGS $SECURITY_BITCODE_CFLAGS"
{ $as_echo "$as_me:${as_lineno-$LINENO}: Blindly added security flags for llvm: $SECURITY_BITCODE_CFLAGS" >&5
$as_echo "$as_me: Blindly added security flags for llvm: $SECURITY_BITCODE_CFLAGS" >&6;}
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: If you run into issues during linking or bitcode compilation, you can use --without-security-flags." >&5
$as_echo "$as_me: WARNING: If you run into issues during linking or bitcode compilation, you can use --without-security-flags." >&2;}
fi
# Check if git is installed, when installed the gitref of the checkout will be baked in the application
# Extract the first word of "git", so it can be a program name with args.
set dummy git; ac_word=$2
@ -4752,6 +4847,8 @@ fi
CITUS_CFLAGS="$CITUS_CFLAGS"
CITUS_BITCODE_CFLAGS="$CITUS_BITCODE_CFLAGS"
CITUS_CPPFLAGS="$CITUS_CPPFLAGS"
CITUS_LDFLAGS="$LIBS $CITUS_LDFLAGS"
@ -5276,7 +5373,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 10.0devel, which was
This file was extended by Citus $as_me 10.0.2, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5338,7 +5435,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 10.0devel
Citus config.status 10.0.2
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"


@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [10.0devel])
AC_INIT([Citus], [10.0.2])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands
@ -174,6 +174,10 @@ CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=vla]) # visual studio does not support thes
CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=implicit-int])
CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=implicit-function-declaration])
CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=return-type])
# Security flags
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We do not enforce the following flag because it is only available on GCC>=8
CITUSAC_PROG_CC_CFLAGS_OPT([-fstack-clash-protection])
#
# --enable-coverage enables generation of code coverage metrics with gcov
@ -216,7 +220,7 @@ if test "$version_num" != '11'; then
HAS_TABLEAM=yes
AC_DEFINE([HAS_TABLEAM], 1, [Define to 1 to build with table access method support, pg12 and up])
else
AC_MSG_NOTICE([postgres version does not support table access methodds])
AC_MSG_NOTICE([postgres version does not support table access methods])
fi;
# Require lz4 & zstd only if we are compiling columnar
@ -261,11 +265,36 @@ if test "$HAS_TABLEAM" == 'yes'; then
fi # test "$HAS_TABLEAM" == 'yes'
PGAC_ARG_BOOL(with, security-flags, no,
[use security flags])
AC_SUBST(with_security_flags)
if test "$with_security_flags" = yes; then
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We always want to have some compiler flags for security concerns.
SECURITY_CFLAGS="-fstack-protector-strong -D_FORTIFY_SOURCE=2 -O2 -z noexecstack -fpic -shared -Wl,-z,relro -Wl,-z,now -Wformat -Wformat-security -Werror=format-security"
CITUS_CFLAGS="$CITUS_CFLAGS $SECURITY_CFLAGS"
AC_MSG_NOTICE([Blindly added security flags for linker: $SECURITY_CFLAGS])
# We always want to have some clang flags for security concerns.
# This doesn't include "-Wl,-z,relro -Wl,-z,now" on purpose, because bitcode is not linked.
# This doesn't include -fsanitize=cfi because it breaks builds on many distros including
# Debian/Buster, Debian/Stretch, Ubuntu/Bionic, Ubuntu/Xenial and EL7.
SECURITY_BITCODE_CFLAGS="-fsanitize=safe-stack -fstack-protector-strong -flto -fPIC -Wformat -Wformat-security -Werror=format-security"
CITUS_BITCODE_CFLAGS="$CITUS_BITCODE_CFLAGS $SECURITY_BITCODE_CFLAGS"
AC_MSG_NOTICE([Blindly added security flags for llvm: $SECURITY_BITCODE_CFLAGS])
AC_MSG_WARN([If you run into issues during linking or bitcode compilation, you can use --without-security-flags.])
fi
# Check if git is installed, when installed the gitref of the checkout will be baked in the application
AC_PATH_PROG(GIT_BIN, git)
AC_CHECK_FILE(.git,[HAS_DOTGIT=yes], [HAS_DOTGIT=])
AC_SUBST(CITUS_CFLAGS, "$CITUS_CFLAGS")
AC_SUBST(CITUS_BITCODE_CFLAGS, "$CITUS_BITCODE_CFLAGS")
AC_SUBST(CITUS_CPPFLAGS, "$CITUS_CPPFLAGS")
AC_SUBST(CITUS_LDFLAGS, "$LIBS $CITUS_LDFLAGS")
AC_SUBST(POSTGRES_SRCDIR, "$POSTGRES_SRCDIR")


@ -1087,7 +1087,11 @@ DatumToBytea(Datum value, Form_pg_attribute attrForm)
{
if (attrForm->attbyval)
{
store_att_byval(VARDATA(result), value, attrForm->attlen);
Datum tmp;
store_att_byval(&tmp, value, attrForm->attlen);
memcpy_s(VARDATA(result), datumLength + VARHDRSZ,
&tmp, attrForm->attlen);
}
else
{


@ -1662,6 +1662,8 @@ alter_columnar_table_set(PG_FUNCTION_ARGS)
quote_identifier(RelationGetRelationName(rel)))));
}
EnsureTableOwner(relationId);
ColumnarOptions options = { 0 };
if (!ReadColumnarOptions(relationId, &options))
{
@ -1769,6 +1771,8 @@ alter_columnar_table_reset(PG_FUNCTION_ARGS)
quote_identifier(RelationGetRelationName(rel)))));
}
EnsureTableOwner(relationId);
ColumnarOptions options = { 0 };
if (!ReadColumnarOptions(relationId, &options))
{
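For illustration, a sketch of what the EnsureTableOwner checks above enforce; the roles and table are hypothetical, and the named parameter assumes the 10.0 signature of alter_columnar_table_set:

-- 'alice' owns the table, 'bob' does not (both hypothetical roles).
SET ROLE alice;
CREATE TABLE measurements (ts timestamptz, value float8) USING columnar;
SELECT alter_columnar_table_set('measurements', compression := 'pglz');
SET ROLE bob;
SELECT alter_columnar_table_set('measurements', compression := 'none');
-- ERROR:  must be owner of table measurements
RESET ROLE;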


@ -0,0 +1,5 @@
/* columnar--10.0-1--10.0-2.sql */
-- grant read access for columnar metadata tables to unprivileged user
GRANT USAGE ON SCHEMA columnar TO PUBLIC;
GRANT SELECT ON ALL tables IN SCHEMA columnar TO PUBLIC;
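For context, a sketch of what these grants enable, assuming the 10.0 metadata tables columnar.options and columnar.stripe and a hypothetical unprivileged role named reader:

-- Before 10.0-2, these SELECTs failed with a permission error.
CREATE ROLE reader LOGIN;
SET ROLE reader;
SELECT * FROM columnar.options;   -- per-table columnar settings
SELECT * FROM columnar.stripe;    -- stripe-level metadata
RESET ROLE;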


@ -0,0 +1,5 @@
/* columnar--10.0-2--10.0-1.sql */
-- revoke read access for columnar metadata tables from unprivileged user
REVOKE USAGE ON SCHEMA columnar FROM PUBLIC;
REVOKE SELECT ON ALL tables IN SCHEMA columnar FROM PUBLIC;


@ -1,6 +1,6 @@
# Citus extension
comment = 'Citus distributed database'
default_version = '10.0-1'
default_version = '10.0-2'
module_pathname = '$libdir/citus'
relocatable = false
schema = pg_catalog


@ -43,12 +43,15 @@
#include "distributed/listutils.h"
#include "distributed/local_executor.h"
#include "distributed/metadata/dependency.h"
#include "distributed/metadata/distobject.h"
#include "distributed/metadata_cache.h"
#include "distributed/metadata_sync.h"
#include "distributed/multi_executor.h"
#include "distributed/multi_logical_planner.h"
#include "distributed/multi_partitioning_utils.h"
#include "distributed/reference_table_utils.h"
#include "distributed/relation_access_tracking.h"
#include "distributed/shard_utils.h"
#include "distributed/worker_protocol.h"
#include "distributed/worker_transaction.h"
#include "executor/spi.h"
@ -175,6 +178,8 @@ static TableConversionReturn * AlterDistributedTable(TableConversionParameters *
static TableConversionReturn * AlterTableSetAccessMethod(
TableConversionParameters *params);
static TableConversionReturn * ConvertTable(TableConversionState *con);
static bool SwitchToSequentialAndLocalExecutionIfShardNameTooLong(char *relationName,
char *longestShardName);
static void EnsureTableNotReferencing(Oid relationId, char conversionType);
static void EnsureTableNotReferenced(Oid relationId, char conversionType);
static void EnsureTableNotForeign(Oid relationId);
@ -511,6 +516,10 @@ ConvertTable(TableConversionState *con)
bool oldEnableLocalReferenceForeignKeys = EnableLocalReferenceForeignKeys;
SetLocalEnableLocalReferenceForeignKeys(false);
/* switch to sequential execution if shard names will be too long */
SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(con->relationId,
con->relationName);
if (con->conversionType == UNDISTRIBUTE_TABLE && con->cascadeViaForeignKeys &&
(TableReferencing(con->relationId) || TableReferenced(con->relationId)))
{
@ -673,7 +682,7 @@ ConvertTable(TableConversionState *con)
Node *parseTree = ParseTreeNode(tableCreationSql);
RelayEventExtendNames(parseTree, con->schemaName, con->hashOfName);
ProcessUtilityParseTree(parseTree, tableCreationSql, PROCESS_UTILITY_TOPLEVEL,
ProcessUtilityParseTree(parseTree, tableCreationSql, PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}
@ -711,6 +720,32 @@ ConvertTable(TableConversionState *con)
CreateCitusTableLike(con);
}
/* preserve colocation with procedures/functions */
if (con->conversionType == ALTER_DISTRIBUTED_TABLE)
{
/*
* Updating the colocationId of functions is always desirable for
* the following scenario:
* we have shardCount or colocateWith change
* AND entire co-location group is altered
* The reason for the second condition is because we currently don't
* remember the original table specified in the colocateWith when
* distributing the function. We only remember the colocationId in
* pg_dist_object table.
*/
if ((!con->shardCountIsNull || con->colocateWith != NULL) &&
(con->cascadeToColocated == CASCADE_TO_COLOCATED_YES || list_length(
con->colocatedTableList) == 1) && con->distributionColumn == NULL)
{
/*
* Update the colocationId from the one of the old relation to the one
* of the new relation for all tuples in citus.pg_dist_object
*/
UpdateDistributedObjectColocationId(TableColocationId(con->relationId),
TableColocationId(con->newRelationId));
}
}
ReplaceTable(con->relationId, con->newRelationId, justBeforeDropCommands,
con->suppressNoticeMessages);
@ -728,7 +763,7 @@ ConvertTable(TableConversionState *con)
Node *parseTree = ParseTreeNode(attachPartitionCommand);
ProcessUtilityParseTree(parseTree, attachPartitionCommand,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}
@ -1134,7 +1169,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
{
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("Moving the data of %s",
ereport(NOTICE, (errmsg("moving the data of %s",
quote_qualified_identifier(schemaName, sourceName))));
}
@ -1207,7 +1242,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("Dropping the old %s",
ereport(NOTICE, (errmsg("dropping the old %s",
quote_qualified_identifier(schemaName, sourceName))));
}
@ -1218,7 +1253,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("Renaming the new table to %s",
ereport(NOTICE, (errmsg("renaming the new table to %s",
quote_qualified_identifier(schemaName, sourceName))));
}
@ -1572,3 +1607,104 @@ ExecuteQueryViaSPI(char *query, int SPIOK)
ereport(ERROR, (errmsg("could not finish SPI connection")));
}
}
/*
* SwitchToSequentialAndLocalExecutionIfRelationNameTooLong generates the longest shard name
* on the shards of a distributed table and, if it exceeds the limit, switches to sequential and
* local execution to prevent self-deadlocks.
*
* In case of a RENAME, the relation name parameter should store the new table name, so
* that the function can generate shard names of the renamed relations
*/
void
SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(Oid relationId,
char *finalRelationName)
{
if (!IsCitusTable(relationId))
{
return;
}
if (ShardIntervalCount(relationId) == 0)
{
/*
* Relation has no shards, so we cannot run into "long shard relation
* name" issue.
*/
return;
}
char *longestShardName = GetLongestShardName(relationId, finalRelationName);
bool switchedToSequentialAndLocalExecution =
SwitchToSequentialAndLocalExecutionIfShardNameTooLong(finalRelationName,
longestShardName);
if (switchedToSequentialAndLocalExecution)
{
return;
}
if (PartitionedTable(relationId))
{
Oid longestNamePartitionId = PartitionWithLongestNameRelationId(relationId);
if (!OidIsValid(longestNamePartitionId))
{
/* no partitions have been created yet */
return;
}
char *longestPartitionName = get_rel_name(longestNamePartitionId);
char *longestPartitionShardName = GetLongestShardName(longestNamePartitionId,
longestPartitionName);
SwitchToSequentialAndLocalExecutionIfShardNameTooLong(longestPartitionName,
longestPartitionShardName);
}
}
/*
* SwitchToSequentialAndLocalExecutionIfShardNameTooLong switches to sequential and local
* execution if the shard name is too long.
*
* returns true if switched to sequential and local execution.
*/
static bool
SwitchToSequentialAndLocalExecutionIfShardNameTooLong(char *relationName,
char *longestShardName)
{
if (strlen(longestShardName) >= NAMEDATALEN - 1)
{
if (ParallelQueryExecutedInTransaction())
{
/*
* If there has already been a parallel query executed, the sequential mode
* would still use the already opened parallel connections to the workers,
* thus contradicting our purpose of using sequential mode.
*/
ereport(ERROR, (errmsg(
"Shard name (%s) for table (%s) is too long and could "
"lead to deadlocks when executed in a transaction "
"block after a parallel query", longestShardName,
relationName),
errhint("Try re-running the transaction with "
"\"SET LOCAL citus.multi_shard_modify_mode TO "
"\'sequential\';\"")));
}
else
{
elog(DEBUG1, "the name of the shard (%s) for relation (%s) is too long, "
"switching to sequential and local execution mode to prevent "
"self deadlocks",
longestShardName, relationName);
SetLocalMultiShardModifyModeToSequential();
SetLocalExecutionStatus(LOCAL_EXECUTION_REQUIRED);
return true;
}
}
return false;
}
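The hint in the error above corresponds to the following session-level workaround, sketched with hypothetical relation names:

BEGIN;
-- run this before any parallel query touches the shards
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
ALTER TABLE some_distributed_table
    RENAME TO a_new_name_long_enough_to_push_shard_names_past_the_limit;
COMMIT;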


@ -510,6 +510,6 @@ ExecuteForeignKeyCreateCommand(const char *commandString, bool skip_validation)
"command \"%s\"", commandString)));
}
ProcessUtilityParseTree(parseTree, commandString, PROCESS_UTILITY_TOPLEVEL,
ProcessUtilityParseTree(parseTree, commandString, PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}


@ -411,15 +411,16 @@ static char *
GenerateLongestShardPartitionIndexName(IndexStmt *createIndexStatement)
{
Oid relationId = CreateIndexStmtGetRelationId(createIndexStatement);
char *longestPartitionName = LongestPartitionName(relationId);
if (longestPartitionName == NULL)
Oid longestNamePartitionId = PartitionWithLongestNameRelationId(relationId);
if (!OidIsValid(longestNamePartitionId))
{
/* no partitions have been created yet */
return NULL;
}
char *longestPartitionShardName = pstrdup(longestPartitionName);
ShardInterval *shardInterval = LoadShardIntervalWithLongestShardName(relationId);
char *longestPartitionShardName = get_rel_name(longestNamePartitionId);
ShardInterval *shardInterval = LoadShardIntervalWithLongestShardName(
longestNamePartitionId);
AppendShardIdToName(&longestPartitionShardName, shardInterval->shardId);
IndexStmt *createLongestShardIndexStmt = copyObject(createIndexStatement);


@ -109,6 +109,13 @@ PreprocessRenameStmt(Node *node, const char *renameCommand,
*/
ErrorIfUnsupportedRenameStmt(renameStmt);
if (renameStmt->renameType == OBJECT_TABLE ||
renameStmt->renameType == OBJECT_FOREIGN_TABLE)
{
SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(tableRelationId,
renameStmt->newname);
}
DDLJob *ddlJob = palloc0(sizeof(DDLJob));
ddlJob->targetRelationId = tableRelationId;
ddlJob->concurrentIndexCmd = false;


@ -342,9 +342,12 @@ CitusBeginModifyScan(CustomScanState *node, EState *estate, int eflags)
/*
* At this point, we're about to do the shard pruning for fast-path queries.
* Given that pruning is deferred always for INSERTs, we get here
* !EnableFastPathRouterPlanner as well.
* !EnableFastPathRouterPlanner as well. Given that INSERT statements with
* CTEs/sublinks etc are not eligible for fast-path router plan, we get here
* jobQuery->commandType == CMD_INSERT as well.
*/
Assert(currentPlan->fastPathRouterPlan || !EnableFastPathRouterPlanner);
Assert(currentPlan->fastPathRouterPlan || !EnableFastPathRouterPlanner ||
jobQuery->commandType == CMD_INSERT);
/*
* We can only now decide which shard to use, so we need to build a new task


@ -406,7 +406,7 @@ ExecuteUtilityCommand(const char *taskQueryCommand)
* process utility.
*/
ProcessUtilityParseTree(taskRawParseTree, taskQueryCommand,
PROCESS_UTILITY_TOPLEVEL, NULL, None_Receiver,
PROCESS_UTILITY_QUERY, NULL, None_Receiver,
NULL);
}
}


@ -373,3 +373,56 @@ GetDistributedObjectAddressList(void)
return objectAddressList;
}
/*
* UpdateDistributedObjectColocationId gets an old and a new colocationId
* and updates the colocationId of all tuples in citus.pg_dist_object which
* have the old colocationId to the new colocationId.
*/
void
UpdateDistributedObjectColocationId(uint32 oldColocationId,
uint32 newColocationId)
{
const bool indexOK = false;
ScanKeyData scanKey[1];
Relation pgDistObjectRel = table_open(DistObjectRelationId(),
RowExclusiveLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistObjectRel);
/* scan pg_dist_object for colocationId equal to old colocationId */
ScanKeyInit(&scanKey[0], Anum_pg_dist_object_colocationid,
BTEqualStrategyNumber,
F_INT4EQ, UInt32GetDatum(oldColocationId));
SysScanDesc scanDescriptor = systable_beginscan(pgDistObjectRel,
InvalidOid,
indexOK,
NULL, 1, scanKey);
HeapTuple heapTuple;
while (HeapTupleIsValid(heapTuple = systable_getnext(scanDescriptor)))
{
Datum values[Natts_pg_dist_object];
bool isnull[Natts_pg_dist_object];
bool replace[Natts_pg_dist_object];
memset(replace, 0, sizeof(replace));
replace[Anum_pg_dist_object_colocationid - 1] = true;
/* update the colocationId to the new one */
values[Anum_pg_dist_object_colocationid - 1] = UInt32GetDatum(newColocationId);
isnull[Anum_pg_dist_object_colocationid - 1] = false;
heapTuple = heap_modify_tuple(heapTuple, tupleDescriptor, values, isnull,
replace);
CatalogTupleUpdate(pgDistObjectRel, &heapTuple->t_self, heapTuple);
CitusInvalidateRelcacheByRelid(DistObjectRelationId());
}
systable_endscan(scanDescriptor);
table_close(pgDistObjectRel, NoLock);
CommandCounterIncrement();
}
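A sketch of the scenario UpdateDistributedObjectColocationId serves, with hypothetical table and function names (UDF signatures as of Citus 10.0):

SELECT create_distributed_table('orders', 'customer_id');
SELECT create_distributed_function('process_order(bigint)', '$1',
                                   colocate_with := 'orders');
-- Altering the whole colocation group now also updates the function's
-- colocationId in citus.pg_dist_object, preserving the colocation.
SELECT alter_distributed_table('orders', shard_count := 64,
                               cascade_to_colocated := true);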


@ -79,14 +79,24 @@ static bool DistributedTableSizeOnWorker(WorkerNode *workerNode, Oid relationId,
char *sizeQuery, bool failOnError,
uint64 *tableSize);
static List * ShardIntervalsOnWorkerGroup(WorkerNode *workerNode, Oid relationId);
static char * GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList);
static char * GenerateAllShardNameAndSizeQueryForNode(WorkerNode *workerNode);
static List * GenerateShardSizesQueryList(List *workerNodeList);
static char * GenerateShardStatisticsQueryForShardList(List *shardIntervalList, bool
useShardMinMaxQuery);
static char * GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode,
List *citusTableIds, bool
useShardMinMaxQuery);
static List * GenerateShardStatisticsQueryList(List *workerNodeList, List *citusTableIds,
bool useShardMinMaxQuery);
static void ErrorIfNotSuitableToGetSize(Oid relationId);
static List * OpenConnectionToNodes(List *workerNodeList);
static void ReceiveShardNameAndSizeResults(List *connectionList,
Tuplestorestate *tupleStore,
TupleDesc tupleDescriptor);
static void AppendShardSizeMinMaxQuery(StringInfo selectQuery, uint64 shardId,
ShardInterval *
shardInterval, char *shardName,
char *quotedShardName);
static void AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval,
char *quotedShardName);
/* exports for SQL callable functions */
PG_FUNCTION_INFO_V1(citus_table_size);
@ -102,25 +112,16 @@ citus_shard_sizes(PG_FUNCTION_ARGS)
{
CheckCitusVersion(ERROR);
List *workerNodeList = ActivePrimaryNodeList(NoLock);
List *allCitusTableIds = AllCitusTableIds();
List *shardSizesQueryList = GenerateShardSizesQueryList(workerNodeList);
/* we don't need a distributed transaction here */
bool useDistributedTransaction = false;
List *connectionList = OpenConnectionToNodes(workerNodeList);
FinishConnectionListEstablishment(connectionList);
/* send commands in parallel */
for (int i = 0; i < list_length(connectionList); i++)
{
MultiConnection *connection = (MultiConnection *) list_nth(connectionList, i);
char *shardSizesQuery = (char *) list_nth(shardSizesQueryList, i);
int querySent = SendRemoteCommand(connection, shardSizesQuery);
if (querySent == 0)
{
ReportConnectionError(connection, WARNING);
}
}
/* we only want the shard sizes here so useShardMinMaxQuery parameter is false */
bool useShardMinMaxQuery = false;
List *connectionList = SendShardStatisticsQueriesInParallel(allCitusTableIds,
useDistributedTransaction,
useShardMinMaxQuery);
TupleDesc tupleDescriptor = NULL;
Tuplestorestate *tupleStore = SetupTuplestore(fcinfo, &tupleDescriptor);
@ -225,6 +226,59 @@ citus_relation_size(PG_FUNCTION_ARGS)
}
/*
* SendShardStatisticsQueriesInParallel generates query lists for obtaining shard
* statistics and then sends the commands in parallel by opening connections
* to available nodes. It returns the connection list.
*/
List *
SendShardStatisticsQueriesInParallel(List *citusTableIds, bool useDistributedTransaction,
bool
useShardMinMaxQuery)
{
List *workerNodeList = ActivePrimaryNodeList(NoLock);
List *shardSizesQueryList = GenerateShardStatisticsQueryList(workerNodeList,
citusTableIds,
useShardMinMaxQuery);
List *connectionList = OpenConnectionToNodes(workerNodeList);
FinishConnectionListEstablishment(connectionList);
if (useDistributedTransaction)
{
/*
* For now, in the case we want to include shard min and max values, we also
* want to update the entries in pg_dist_placement and pg_dist_shard with the
* latest statistics. In order to detect distributed deadlocks, we assign a
* distributed transaction ID to the current transaction
*/
UseCoordinatedTransaction();
}
/* send commands in parallel */
for (int i = 0; i < list_length(connectionList); i++)
{
MultiConnection *connection = (MultiConnection *) list_nth(connectionList, i);
char *shardSizesQuery = (char *) list_nth(shardSizesQueryList, i);
if (useDistributedTransaction)
{
/* run the size query in a distributed transaction */
RemoteTransactionBeginIfNecessary(connection);
}
int querySent = SendRemoteCommand(connection, shardSizesQuery);
if (querySent == 0)
{
ReportConnectionError(connection, WARNING);
}
}
return connectionList;
}
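These parallel statistics queries back the user-facing UDFs. A minimal usage sketch:

-- plain shard sizes; no distributed transaction required
SELECT * FROM citus_shard_sizes();  -- one row per shard across the cluster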
/*
* OpenConnectionToNodes opens a single connection per node
* for the given workerNodeList.
@ -250,20 +304,25 @@ OpenConnectionToNodes(List *workerNodeList)
/*
* GenerateShardSizesQueryList generates a query per node that
* will return all shard_name, shard_size pairs from the node.
* GenerateShardStatisticsQueryList generates a query per node that will return:
* - all shard_name, shard_size pairs from the node (if useShardMinMaxQuery is false)
* - all shard_id, shard_minvalue, shard_maxvalue, shard_size quadruples from the node (if true)
*/
static List *
GenerateShardSizesQueryList(List *workerNodeList)
GenerateShardStatisticsQueryList(List *workerNodeList, List *citusTableIds, bool
useShardMinMaxQuery)
{
List *shardSizesQueryList = NIL;
List *shardStatisticsQueryList = NIL;
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)
{
char *shardSizesQuery = GenerateAllShardNameAndSizeQueryForNode(workerNode);
shardSizesQueryList = lappend(shardSizesQueryList, shardSizesQuery);
char *shardStatisticsQuery = GenerateAllShardStatisticsQueryForNode(workerNode,
citusTableIds,
useShardMinMaxQuery);
shardStatisticsQueryList = lappend(shardStatisticsQueryList,
shardStatisticsQuery);
}
return shardSizesQueryList;
return shardStatisticsQueryList;
}
@ -572,37 +631,50 @@ GenerateSizeQueryOnMultiplePlacements(List *shardIntervalList, char *sizeQuery)
/*
* GenerateAllShardNameAndSizeQueryForNode generates a query that returns all
* shard_name, shard_size pairs for the given node.
* GenerateAllShardStatisticsQueryForNode generates a query that returns:
* - all shard_name, shard_size pairs for the given node (if useShardMinMaxQuery is false)
* - all shard_id, shard_minvalue, shard_maxvalue, shard_size quadruples (if true)
*/
static char *
GenerateAllShardNameAndSizeQueryForNode(WorkerNode *workerNode)
GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode, List *citusTableIds, bool
useShardMinMaxQuery)
{
List *allCitusTableIds = AllCitusTableIds();
StringInfo allShardNameAndSizeQuery = makeStringInfo();
StringInfo allShardStatisticsQuery = makeStringInfo();
Oid relationId = InvalidOid;
foreach_oid(relationId, allCitusTableIds)
foreach_oid(relationId, citusTableIds)
{
List *shardIntervalsOnNode = ShardIntervalsOnWorkerGroup(workerNode, relationId);
char *shardNameAndSizeQuery =
GenerateShardNameAndSizeQueryForShardList(shardIntervalsOnNode);
appendStringInfoString(allShardNameAndSizeQuery, shardNameAndSizeQuery);
char *shardStatisticsQuery =
GenerateShardStatisticsQueryForShardList(shardIntervalsOnNode,
useShardMinMaxQuery);
appendStringInfoString(allShardStatisticsQuery, shardStatisticsQuery);
}
/* Add a dummy entry so that UNION ALL doesn't complain */
appendStringInfo(allShardNameAndSizeQuery, "SELECT NULL::text, 0::bigint;");
return allShardNameAndSizeQuery->data;
if (useShardMinMaxQuery)
{
/* 0 for shard_id, NULL for min, NULL for max, 0 for shard_size */
appendStringInfo(allShardStatisticsQuery,
"SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;");
}
else
{
/* NULL for shard_name, 0 for shard_size */
appendStringInfo(allShardStatisticsQuery, "SELECT NULL::text, 0::bigint;");
}
return allShardStatisticsQuery->data;
}
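For illustration, the min/max variant of a generated per-node statement looks roughly like this (shard id and relation name invented):

SELECT 102008 AS shard_id, min(user_id)::text AS shard_minvalue,
       max(user_id)::text AS shard_maxvalue,
       pg_relation_size('public.events_102008') AS shard_size
FROM public.events_102008
UNION ALL
-- the dummy terminal entry keeps the trailing UNION ALL valid
SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;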
/*
* GenerateShardNameAndSizeQueryForShardList generates a SELECT shard_name - shard_size query to get
* size of multiple tables.
* GenerateShardStatisticsQueryForShardList generates one of the two types of queries:
* - SELECT shard_name - shard_size (if useShardMinMaxQuery is false)
* - SELECT shard_id, shard_minvalue, shard_maxvalue, shard_size (if true)
*/
static char *
GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList)
GenerateShardStatisticsQueryForShardList(List *shardIntervalList, bool
useShardMinMaxQuery)
{
StringInfo selectQuery = makeStringInfo();
@ -618,8 +690,15 @@ GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList)
char *shardQualifiedName = quote_qualified_identifier(schemaName, shardName);
char *quotedShardName = quote_literal_cstr(shardQualifiedName);
appendStringInfo(selectQuery, "SELECT %s AS shard_name, ", quotedShardName);
appendStringInfo(selectQuery, PG_RELATION_SIZE_FUNCTION, quotedShardName);
if (useShardMinMaxQuery)
{
AppendShardSizeMinMaxQuery(selectQuery, shardId, shardInterval, shardName,
quotedShardName);
}
else
{
AppendShardSizeQuery(selectQuery, shardInterval, quotedShardName);
}
appendStringInfo(selectQuery, " UNION ALL ");
}
@ -627,6 +706,54 @@ GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList)
}
/*
* AppendShardSizeMinMaxQuery appends a query in the following form to selectQuery
* SELECT shard_id, shard_minvalue, shard_maxvalue, shard_size
*/
static void
AppendShardSizeMinMaxQuery(StringInfo selectQuery, uint64 shardId,
ShardInterval *shardInterval, char *shardName,
char *quotedShardName)
{
if (IsCitusTableType(shardInterval->relationId, APPEND_DISTRIBUTED))
{
/* fill in the partition column name */
const uint32 unusedTableId = 1;
Var *partitionColumn = PartitionColumn(shardInterval->relationId,
unusedTableId);
char *partitionColumnName = get_attname(shardInterval->relationId,
partitionColumn->varattno, false);
appendStringInfo(selectQuery,
"SELECT " UINT64_FORMAT
" AS shard_id, min(%s)::text AS shard_minvalue, max(%s)::text AS shard_maxvalue, pg_relation_size(%s) AS shard_size FROM %s ",
shardId, partitionColumnName,
partitionColumnName,
quotedShardName, shardName);
}
else
{
/* we don't need to update min/max for non-append distributed tables because they don't change */
appendStringInfo(selectQuery,
"SELECT " UINT64_FORMAT
" AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size(%s) AS shard_size ",
shardId, quotedShardName);
}
}
/*
* AppendShardSizeQuery appends a query in the following form to selectQuery
* SELECT shard_name, shard_size
*/
static void
AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval,
char *quotedShardName)
{
appendStringInfo(selectQuery, "SELECT %s AS shard_name, ", quotedShardName);
appendStringInfo(selectQuery, PG_RELATION_SIZE_FUNCTION, quotedShardName);
}
/*
* ErrorIfNotSuitableToGetSize determines whether the table is suitable to find
* its size with internal functions.


@ -32,7 +32,9 @@
#include "distributed/connection_management.h"
#include "distributed/deparse_shard_query.h"
#include "distributed/distributed_planner.h"
#include "distributed/foreign_key_relationship.h"
#include "distributed/listutils.h"
#include "distributed/lock_graph.h"
#include "distributed/multi_client_executor.h"
#include "distributed/multi_executor.h"
#include "distributed/metadata_utility.h"
@ -65,12 +67,22 @@ static List * RelationShardListForShardCreate(ShardInterval *shardInterval);
static bool WorkerShardStats(ShardPlacement *placement, Oid relationId,
const char *shardName, uint64 *shardSize,
text **shardMinValue, text **shardMaxValue);
static void UpdateTableStatistics(Oid relationId);
static void ReceiveAndUpdateShardsSizeAndMinMax(List *connectionList);
static void UpdateShardSizeAndMinMax(uint64 shardId, ShardInterval *shardInterval, Oid
relationId, List *shardPlacementList, uint64
shardSize, text *shardMinValue,
text *shardMaxValue);
static bool ProcessShardStatisticsRow(PGresult *result, int64 rowIndex, uint64 *shardId,
text **shardMinValue, text **shardMaxValue,
uint64 *shardSize);
/* exports for SQL callable functions */
PG_FUNCTION_INFO_V1(master_create_empty_shard);
PG_FUNCTION_INFO_V1(master_append_table_to_shard);
PG_FUNCTION_INFO_V1(citus_update_shard_statistics);
PG_FUNCTION_INFO_V1(master_update_shard_statistics);
PG_FUNCTION_INFO_V1(citus_update_table_statistics);
/*
@ -361,6 +373,23 @@ citus_update_shard_statistics(PG_FUNCTION_ARGS)
}
/*
* citus_update_table_statistics updates metadata (shard size and shard min/max
* values) of the shards of the given table
*/
Datum
citus_update_table_statistics(PG_FUNCTION_ARGS)
{
Oid distributedTableId = PG_GETARG_OID(0);
CheckCitusVersion(ERROR);
UpdateTableStatistics(distributedTableId);
PG_RETURN_VOID();
}
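A usage sketch, with a hypothetical append-distributed table named events:

SELECT citus_update_table_statistics('events'::regclass);
-- the refreshed shard min/max values are visible in the catalog:
SELECT shardid, shardminvalue, shardmaxvalue
FROM pg_dist_shard
WHERE logicalrelid = 'events'::regclass;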
/*
* master_update_shard_statistics is a wrapper function for old UDF name.
*/
@ -782,7 +811,6 @@ UpdateShardStatistics(int64 shardId)
{
ShardInterval *shardInterval = LoadShardInterval(shardId);
Oid relationId = shardInterval->relationId;
char storageType = shardInterval->storageType;
bool statsOK = false;
uint64 shardSize = 0;
text *minValue = NULL;
@ -825,17 +853,166 @@ UpdateShardStatistics(int64 shardId)
errdetail("Setting shard statistics to NULL")));
}
/* make sure we don't process cancel signals */
HOLD_INTERRUPTS();
UpdateShardSizeAndMinMax(shardId, shardInterval, relationId, shardPlacementList,
shardSize, minValue, maxValue);
return shardSize;
}
/* update metadata for each shard placement we appended to */
/*
* UpdateTableStatistics updates metadata (shard size and shard min/max values)
* of the shards of the given table. Follows logic similar to the citus_shard_sizes function.
*/
static void
UpdateTableStatistics(Oid relationId)
{
List *citusTableIds = NIL;
citusTableIds = lappend_oid(citusTableIds, relationId);
/* we want to use a distributed transaction here to detect distributed deadlocks */
bool useDistributedTransaction = true;
/* we also want shard min/max values for append distributed tables */
bool useShardMinMaxQuery = true;
List *connectionList = SendShardStatisticsQueriesInParallel(citusTableIds,
useDistributedTransaction,
useShardMinMaxQuery);
ReceiveAndUpdateShardsSizeAndMinMax(connectionList);
}
/*
* ReceiveAndUpdateShardsSizeAndMinMax receives shard id, size
* and min max results from the given connection list, and updates
* respective entries in pg_dist_placement and pg_dist_shard
*/
static void
ReceiveAndUpdateShardsSizeAndMinMax(List *connectionList)
{
/*
* From the connection list, we will not get all the shards, but
* all the placements. We use a hash table to remember already visited shard ids
* since we update all the different placements of a shard id at once.
*/
HTAB *alreadyVisitedShardPlacements = CreateOidVisitedHashSet();
MultiConnection *connection = NULL;
foreach_ptr(connection, connectionList)
{
if (PQstatus(connection->pgConn) != CONNECTION_OK)
{
continue;
}
bool raiseInterrupts = true;
PGresult *result = GetRemoteCommandResult(connection, raiseInterrupts);
if (!IsResponseOK(result))
{
ReportResultError(connection, result, WARNING);
continue;
}
int64 rowCount = PQntuples(result);
int64 colCount = PQnfields(result);
/* Although it is not expected */
if (colCount != UPDATE_SHARD_STATISTICS_COLUMN_COUNT)
{
ereport(WARNING, (errmsg("unexpected number of columns from "
"citus_update_table_statistics")));
continue;
}
for (int64 rowIndex = 0; rowIndex < rowCount; rowIndex++)
{
uint64 shardId = 0;
text *shardMinValue = NULL;
text *shardMaxValue = NULL;
uint64 shardSize = 0;
if (!ProcessShardStatisticsRow(result, rowIndex, &shardId, &shardMinValue,
&shardMaxValue, &shardSize))
{
/* this row has no valid shard statistics */
continue;
}
if (OidVisited(alreadyVisitedShardPlacements, shardId))
{
/* We have already updated this placement list */
continue;
}
VisitOid(alreadyVisitedShardPlacements, shardId);
ShardInterval *shardInterval = LoadShardInterval(shardId);
Oid relationId = shardInterval->relationId;
List *shardPlacementList = ActiveShardPlacementList(shardId);
UpdateShardSizeAndMinMax(shardId, shardInterval, relationId,
shardPlacementList, shardSize, shardMinValue,
shardMaxValue);
}
PQclear(result);
ForgetResults(connection);
}
hash_destroy(alreadyVisitedShardPlacements);
}
/*
* ProcessShardStatisticsRow processes a row of shard statistics of the input PGresult
* - it returns true if this row belongs to a valid shard
* - it returns false if this row has no valid shard statistics (shardId = INVALID_SHARD_ID)
*/
static bool
ProcessShardStatisticsRow(PGresult *result, int64 rowIndex, uint64 *shardId,
text **shardMinValue, text **shardMaxValue, uint64 *shardSize)
{
*shardId = ParseIntField(result, rowIndex, 0);
/* check for the dummy entries we put so that UNION ALL wouldn't complain */
if (*shardId == INVALID_SHARD_ID)
{
/* this row has no valid shard statistics */
return false;
}
char *minValueResult = PQgetvalue(result, rowIndex, 1);
char *maxValueResult = PQgetvalue(result, rowIndex, 2);
*shardMinValue = cstring_to_text(minValueResult);
*shardMaxValue = cstring_to_text(maxValueResult);
*shardSize = ParseIntField(result, rowIndex, 3);
return true;
}
/*
* UpdateShardSizeAndMinMax updates the shardlength (shard size) of the given
* shard and its placements in pg_dist_placement, and updates the shard min value
* and shard max value of the given shard in pg_dist_shard if the relationId belongs
* to an append-distributed table
*/
static void
UpdateShardSizeAndMinMax(uint64 shardId, ShardInterval *shardInterval, Oid relationId,
List *shardPlacementList, uint64 shardSize, text *shardMinValue,
text *shardMaxValue)
{
char storageType = shardInterval->storageType;
ShardPlacement *placement = NULL;
/* update metadata for each shard placement */
foreach_ptr(placement, shardPlacementList)
{
uint64 placementId = placement->placementId;
int32 groupId = placement->groupId;
DeleteShardPlacementRow(placementId);
InsertShardPlacementRow(shardId, placementId, SHARD_STATE_ACTIVE, shardSize,
InsertShardPlacementRow(shardId, placementId, SHARD_STATE_ACTIVE,
shardSize,
groupId);
}
@ -843,18 +1020,9 @@ UpdateShardStatistics(int64 shardId)
if (IsCitusTableType(relationId, APPEND_DISTRIBUTED))
{
DeleteShardRow(shardId);
InsertShardRow(relationId, shardId, storageType, minValue, maxValue);
InsertShardRow(relationId, shardId, storageType, shardMinValue,
shardMaxValue);
}
if (QueryCancelPending)
{
ereport(WARNING, (errmsg("cancel requests are ignored during metadata update")));
QueryCancelPending = false;
}
RESUME_INTERRUPTS();
return shardSize;
}


@ -49,6 +49,7 @@
#include "executor/executor.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
#include "nodes/pg_list.h"
#include "parser/parsetree.h"
#include "parser/parse_type.h"
#if PG_VERSION_NUM >= PG_VERSION_12
@ -98,6 +99,7 @@ static PlannedStmt * FinalizeNonRouterPlan(PlannedStmt *localPlan,
DistributedPlan *distributedPlan,
CustomScan *customScan);
static PlannedStmt * FinalizeRouterPlan(PlannedStmt *localPlan, CustomScan *customScan);
static AppendRelInfo * FindTargetAppendRelInfo(PlannerInfo *root, int relationRteIndex);
static List * makeTargetListFromCustomScanList(List *custom_scan_tlist);
static List * makeCustomScanTargetlistFromExistingTargetList(List *existingTargetlist);
static int32 BlessRecordExpressionList(List *exprs);
@ -124,6 +126,7 @@ static PlannedStmt * PlanFastPathDistributedStmt(DistributedPlanningContext *pla
static PlannedStmt * PlanDistributedStmt(DistributedPlanningContext *planContext,
int rteIdCounter);
static RTEListProperties * GetRTEListProperties(List *rangeTableList);
static List * TranslatedVars(PlannerInfo *root, int relationIndex);
/* Distributed planner hook */
@ -1814,6 +1817,8 @@ multi_relation_restriction_hook(PlannerInfo *root, RelOptInfo *relOptInfo,
/* see comments on GetVarFromAssignedParam() */
relationRestriction->outerPlanParamsList = OuterPlanParamsList(root);
relationRestriction->translatedVars = TranslatedVars(root,
relationRestriction->index);
RelationRestrictionContext *relationRestrictionContext =
plannerRestrictionContext->relationRestrictionContext;
@ -1837,6 +1842,61 @@ multi_relation_restriction_hook(PlannerInfo *root, RelOptInfo *relOptInfo,
}
/*
* TranslatedVars deep copies the translated vars for the given relation index
* if there is any append rel list.
*/
static List *
TranslatedVars(PlannerInfo *root, int relationIndex)
{
List *translatedVars = NIL;
if (root->append_rel_list != NIL)
{
AppendRelInfo *targetAppendRelInfo =
FindTargetAppendRelInfo(root, relationIndex);
if (targetAppendRelInfo != NULL)
{
/* postgres deletes translated_vars after pg13, hence we deep copy them here */
Node *targetNode = NULL;
foreach_ptr(targetNode, targetAppendRelInfo->translated_vars)
{
translatedVars =
lappend(translatedVars, copyObject(targetNode));
}
}
}
return translatedVars;
}
/*
* FindTargetAppendRelInfo finds the target append rel info for the given
* relation rte index.
*/
static AppendRelInfo *
FindTargetAppendRelInfo(PlannerInfo *root, int relationRteIndex)
{
AppendRelInfo *appendRelInfo = NULL;
/* iterate on the queries that are part of UNION ALL subselects */
foreach_ptr(appendRelInfo, root->append_rel_list)
{
/*
* We're only interested in the child rel that is equal to the
* relation we're investigating. Here we don't need to find the offset
* because postgres adds an offset to child_relid and parent_relid after
* calling multi_relation_restriction_hook.
*/
if (appendRelInfo->child_relid == relationRteIndex)
{
return appendRelInfo;
}
}
return NULL;
}
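A sketch of the query shape this code handles: a UNION ALL whose partition columns must be matched through the append rel's translated vars (table names hypothetical):

SELECT user_id, count(*)
FROM (SELECT user_id FROM events_2020
      UNION ALL
      SELECT user_id FROM events_2021) AS all_events
GROUP BY user_id;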
/*
* AdjustReadIntermediateResultCost adjusts the row count and total cost
* of a read_intermediate_result call based on the file size.


@ -824,7 +824,21 @@ static List *
QueryTargetList(MultiNode *multiNode)
{
List *projectNodeList = FindNodesOfType(multiNode, T_MultiProject);
Assert(list_length(projectNodeList) > 0);
if (list_length(projectNodeList) == 0)
{
/*
* The physical planner assumes that all worker queries have
* target list entries, based on the fact that at least the columns
* in the JOINs have to be on the target list. However, there is
* an exception: a cartesian product join where no additional
* target list entries belong to one side of the JOIN. Once we
* support cartesian product joins, we should remove this error.
*/
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot perform distributed planning on this query"),
errdetail("Cartesian products are currently unsupported")));
}
MultiProject *topProjectNode = (MultiProject *) linitial(projectNodeList);
List *columnList = topProjectNode->columnList;
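A minimal sketch of a query that now raises this error instead of producing a broken worker query (tables hypothetical):

-- cartesian product with no target list entries from t2
SELECT t1.a FROM t1 CROSS JOIN t2;
-- ERROR:  cannot perform distributed planning on this query
-- DETAIL:  Cartesian products are currently unsupported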


@ -555,6 +555,14 @@ ModifyPartialQuerySupported(Query *queryTree, bool multiShardQuery,
{
ListCell *cteCell = NULL;
/* CTEs still not supported for INSERTs. */
if (queryTree->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support common table expressions with INSERT queries.",
NULL, NULL);
}
foreach(cteCell, queryTree->cteList)
{
CommonTableExpr *cte = (CommonTableExpr *) lfirst(cteCell);
@ -562,31 +570,22 @@ ModifyPartialQuerySupported(Query *queryTree, bool multiShardQuery,
if (cteQuery->commandType != CMD_SELECT)
{
/* Modifying CTEs still not supported for INSERTs & multi shard queries. */
if (queryTree->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support non-select common table expressions with non-select queries.",
NULL, NULL);
}
/* Modifying CTEs still not supported for multi shard queries. */
if (multiShardQuery)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support non-select common table expressions with multi shard queries.",
NULL, NULL);
}
/* Modifying CTEs exclude both INSERT CTEs & INSERT queries. */
else if (cteQuery->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support INSERT common table expressions.",
NULL, NULL);
}
}
/* Modifying CTEs exclude both INSERT CTEs & INSERT queries. */
if (cteQuery->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support INSERT common table expressions.",
NULL, NULL);
}
if (cteQuery->hasForUpdate &&
FindNodeMatchingCheckFunctionInRangeTableList(cteQuery->rtable,
IsReferenceTableRTE))
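A sketch of the statement shape caught by the new INSERT check (tables hypothetical); such queries are now deferred away from the fast path instead of being mishandled:

WITH recent AS (
    SELECT user_id FROM events WHERE ts > now() - interval '1 day'
)
INSERT INTO daily_active (user_id)
SELECT DISTINCT user_id FROM recent;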


@ -83,7 +83,8 @@ typedef struct AttributeEquivalenceClassMember
static bool ContextContainsLocalRelation(RelationRestrictionContext *restrictionContext);
static Var * FindUnionAllVar(PlannerInfo *root, List *appendRelList, Oid relationOid,
static int RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo);
static Var * FindUnionAllVar(PlannerInfo *root, List *translatedVars, Oid relationOid,
Index relationRteIndex, Index *partitionKeyIndex);
static bool ContainsMultipleDistributedRelations(PlannerRestrictionContext *
plannerRestrictionContext);
@ -156,9 +157,12 @@ static JoinRestrictionContext * FilterJoinRestrictionContext(
static bool RangeTableArrayContainsAnyRTEIdentities(RangeTblEntry **rangeTableEntries, int
rangeTableArrayLength, Relids
queryRteIdentities);
static int RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo);
static Relids QueryRteIdentities(Query *queryTree);
#if PG_VERSION_NUM >= PG_VERSION_13
static int ParentCountPriorToAppendRel(List *appendRelList, AppendRelInfo *appendRelInfo);
#endif
/*
* AllDistributionKeysInQueryAreEqual returns true if either
* (i) there exists join in the query and all relations joined on their
@ -279,7 +283,8 @@ SafeToPushdownUnionSubquery(PlannerRestrictionContext *plannerRestrictionContext
*/
if (appendRelList != NULL)
{
varToBeAdded = FindUnionAllVar(relationPlannerRoot, appendRelList,
varToBeAdded = FindUnionAllVar(relationPlannerRoot,
relationRestriction->translatedVars,
relationRestriction->relationId,
relationRestriction->index,
&partitionKeyIndex);
@ -373,63 +378,65 @@ SafeToPushdownUnionSubquery(PlannerRestrictionContext *plannerRestrictionContext
}
/*
* RangeTableOffsetCompat returns the range table offset(in glob->finalrtable) for the appendRelInfo.
* For PG < 13 this is a no op.
*/
static int
RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo)
{
#if PG_VERSION_NUM >= PG_VERSION_13
int parentCount = ParentCountPriorToAppendRel(root->append_rel_list, appendRelInfo);
int skipParentCount = parentCount - 1;
int i = 1;
for (; i < root->simple_rel_array_size; i++)
{
RangeTblEntry *rte = root->simple_rte_array[i];
if (rte->inh)
{
/*
* We skip the previous parents because we want to find the offset
* for the given append rel info.
*/
if (skipParentCount > 0)
{
skipParentCount--;
continue;
}
break;
}
}
int indexInRtable = (i - 1);
/*
* Postgres adds the global rte array size to parent_relid as an offset.
* Here we do the reverse operation: Commit on postgres side:
* 6ef77cf46e81f45716ec981cb08781d426181378
*/
int parentRelIndex = appendRelInfo->parent_relid - 1;
return parentRelIndex - indexInRtable;
#else
return 0;
#endif
}
/*
* FindUnionAllVar finds the variable used in union all for the side that has
* relationRteIndex as its index and the same varattno as the partition key of
* the given relation with relationOid.
*/
static Var *
FindUnionAllVar(PlannerInfo *root, List *appendRelList, Oid relationOid,
FindUnionAllVar(PlannerInfo *root, List *translatedVars, Oid relationOid,
Index relationRteIndex, Index *partitionKeyIndex)
{
ListCell *appendRelCell = NULL;
AppendRelInfo *targetAppendRelInfo = NULL;
AttrNumber childAttrNumber = 0;
*partitionKeyIndex = 0;
/* iterate on the queries that are part of UNION ALL subselects */
foreach(appendRelCell, appendRelList)
{
AppendRelInfo *appendRelInfo = (AppendRelInfo *) lfirst(appendRelCell);
int rtoffset = RangeTableOffsetCompat(root, appendRelInfo);
/*
* We're only interested in the child rel that is equal to the
* relation we're investigating.
*/
if (appendRelInfo->child_relid - rtoffset == relationRteIndex)
{
targetAppendRelInfo = appendRelInfo;
break;
}
}
if (!targetAppendRelInfo)
{
return NULL;
}
Var *relationPartitionKey = DistPartitionKeyOrError(relationOid);
#if PG_VERSION_NUM >= PG_VERSION_13
for (; childAttrNumber < targetAppendRelInfo->num_child_cols; childAttrNumber++)
{
int curAttNo = targetAppendRelInfo->parent_colnos[childAttrNumber];
if (curAttNo == relationPartitionKey->varattno)
{
*partitionKeyIndex = (childAttrNumber + 1);
int rtoffset = RangeTableOffsetCompat(root, targetAppendRelInfo);
relationPartitionKey->varno = targetAppendRelInfo->child_relid - rtoffset;
return relationPartitionKey;
}
}
#else
AttrNumber childAttrNumber = 0;
*partitionKeyIndex = 0;
ListCell *translatedVarCell;
List *translaterVars = targetAppendRelInfo->translated_vars;
foreach(translatedVarCell, translaterVars)
foreach(translatedVarCell, translatedVars)
{
Node *targetNode = (Node *) lfirst(translatedVarCell);
@ -449,7 +456,6 @@ FindUnionAllVar(PlannerInfo *root, List *appendRelList, Oid relationOid,
return targetVar;
}
}
#endif
return NULL;
}
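
For orientation, a minimal sketch (hypothetical tables and column names, both hash-distributed on tenant_id) of the kind of UNION ALL subquery whose partition-key Vars this function matches up:

-- two hypothetical distributed tables with the same distribution column
CREATE TABLE events_2020 (tenant_id int, value int);
CREATE TABLE events_2021 (tenant_id int, value int);
SELECT create_distributed_table('events_2020', 'tenant_id');
SELECT create_distributed_table('events_2021', 'tenant_id');

-- safe to push down: every UNION ALL branch projects the distribution
-- column, so the planner can equate the partition keys across branches
SELECT tenant_id, count(*)
FROM (SELECT tenant_id FROM events_2020
      UNION ALL
      SELECT tenant_id FROM events_2021) u
GROUP BY tenant_id;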
@ -1387,31 +1393,32 @@ AddUnionAllSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass **
}
#if PG_VERSION_NUM >= PG_VERSION_13
/*
* RangeTableOffsetCompat returns the range table offset(in glob->finalrtable) for the appendRelInfo.
* For PG < 13 this is a no op.
* ParentCountPriorToAppendRel returns the number of parents that come before
* the given append rel info.
*/
static int
RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo)
ParentCountPriorToAppendRel(List *appendRelList, AppendRelInfo *targetAppendRelInfo)
{
#if PG_VERSION_NUM >= PG_VERSION_13
int i = 1;
for (; i < root->simple_rel_array_size; i++)
int targetParentIndex = targetAppendRelInfo->parent_relid;
Bitmapset *parent_ids = NULL;
AppendRelInfo *appendRelInfo = NULL;
foreach_ptr(appendRelInfo, appendRelList)
{
RangeTblEntry *rte = root->simple_rte_array[i];
if (rte->inh)
int curParentIndex = appendRelInfo->parent_relid;
if (curParentIndex <= targetParentIndex)
{
break;
parent_ids = bms_add_member(parent_ids, curParentIndex);
}
}
int indexInRtable = (i - 1);
return appendRelInfo->parent_relid - 1 - (indexInRtable);
#else
return 0;
#endif
return bms_num_members(parent_ids);
}
#endif
/*
* AddUnionSetOperationsToAttributeEquivalenceClass recursively iterates on all the
* setOperations and adds each corresponding target entry to the given equivalence

View File

@ -556,30 +556,6 @@ RelayEventExtendNames(Node *parseTree, char *schemaName, uint64 shardId)
AppendShardIdToName(oldRelationName, shardId);
AppendShardIdToName(newRelationName, shardId);
/*
* PostgreSQL creates array types for each ordinary table, with
* the same name plus a prefix of '_'.
*
* ALTER TABLE ... RENAME TO ... also renames the underlying
* array type, and the DDL is run in parallel connections over
* all the placements and shards at once. Concurrent access
* here deadlocks.
*
* Let's provide an easier to understand error message here
* than the deadlock one.
*
* See also https://github.com/citusdata/citus/issues/1664
*/
int newRelationNameLength = strlen(*newRelationName);
if (newRelationNameLength >= (NAMEDATALEN - 1))
{
ereport(ERROR,
(errcode(ERRCODE_NAME_TOO_LONG),
errmsg(
"shard name %s exceeds %d characters",
*newRelationName, NAMEDATALEN - 1)));
}
}
else if (objectType == OBJECT_COLUMN)
{

View File

@ -0,0 +1,5 @@
-- citus--10.0-1--10.0-2
#include "../../columnar/sql/columnar--10.0-1--10.0-2.sql"
GRANT SELECT ON public.citus_tables TO public;
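
With this grant in place, unprivileged roles can read the citus_tables view as well; a quick sketch (the role name is hypothetical):

-- hypothetical read-only role
CREATE ROLE reporting_user LOGIN;
SET ROLE reporting_user;
SELECT table_name, access_method FROM public.citus_tables;
RESET ROLE;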

View File

@ -1394,22 +1394,11 @@ COMMENT ON FUNCTION master_update_node(node_id int, new_node_name text, new_node
-- shard statistics
CREATE OR REPLACE FUNCTION master_update_table_statistics(relation regclass)
RETURNS VOID AS $$
DECLARE
colocated_tables regclass[];
BEGIN
SELECT get_colocated_table_array(relation) INTO colocated_tables;
PERFORM
master_update_shard_statistics(shardid)
FROM
pg_dist_shard
WHERE
logicalrelid = ANY (colocated_tables);
END;
$$ LANGUAGE 'plpgsql';
COMMENT ON FUNCTION master_update_table_statistics(regclass)
IS 'updates shard statistics of the given table and its colocated tables';
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_update_table_statistics$$;
COMMENT ON FUNCTION pg_catalog.master_update_table_statistics(regclass)
IS 'updates shard statistics of the given table';
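
The caller-facing usage does not change with the move from PL/pgSQL to C; a quick sketch against a hypothetical distributed table:

-- refresh shard statistics for the table's shards
SELECT master_update_table_statistics('events');
-- the refreshed sizes are visible through pg_dist_shard_placement
SELECT shardid, shardlength
FROM pg_dist_shard_placement
WHERE shardid IN (SELECT shardid FROM pg_dist_shard
                  WHERE logicalrelid = 'events'::regclass);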
CREATE OR REPLACE FUNCTION get_colocated_shard_array(bigint)
RETURNS BIGINT[]

View File

@ -0,0 +1,4 @@
/* citus--10.0-2--10.0-1.sql */
#include "../../../columnar/sql/downgrades/columnar--10.0-2--10.0-1.sql"
REVOKE SELECT ON public.citus_tables FROM public;

View File

@ -1,17 +1,6 @@
CREATE FUNCTION pg_catalog.citus_update_table_statistics(relation regclass)
RETURNS VOID AS $$
DECLARE
colocated_tables regclass[];
BEGIN
SELECT get_colocated_table_array(relation) INTO colocated_tables;
PERFORM
master_update_shard_statistics(shardid)
FROM
pg_dist_shard
WHERE
logicalrelid = ANY (colocated_tables);
END;
$$ LANGUAGE 'plpgsql';
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_update_table_statistics$$;
COMMENT ON FUNCTION pg_catalog.citus_update_table_statistics(regclass)
IS 'updates shard statistics of the given table and its colocated tables';
IS 'updates shard statistics of the given table';
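
As with master_update_table_statistics above, usage stays a single call (table name hypothetical):

SELECT citus_update_table_statistics('events');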

View File

@ -1,17 +1,6 @@
CREATE FUNCTION pg_catalog.citus_update_table_statistics(relation regclass)
RETURNS VOID AS $$
DECLARE
colocated_tables regclass[];
BEGIN
SELECT get_colocated_table_array(relation) INTO colocated_tables;
PERFORM
master_update_shard_statistics(shardid)
FROM
pg_dist_shard
WHERE
logicalrelid = ANY (colocated_tables);
END;
$$ LANGUAGE 'plpgsql';
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_update_table_statistics$$;
COMMENT ON FUNCTION pg_catalog.citus_update_table_statistics(regclass)
IS 'updates shard statistics of the given table and its colocated tables';
IS 'updates shard statistics of the given table';

View File

@ -5,12 +5,13 @@ FROM (
FROM pg_class c
JOIN pg_inherits i ON (c.oid = inhrelid)
JOIN pg_partitioned_table p ON (inhparent = partrelid)
JOIN pg_attribute a ON (partrelid = attrelid AND ARRAY[attnum] <@ string_to_array(partattrs::text, ' ')::int2[])
JOIN pg_attribute a ON (partrelid = attrelid)
JOIN pg_type t ON (atttypid = t.oid)
JOIN pg_namespace tn ON (t.typnamespace = tn.oid)
LEFT JOIN pg_am am ON (c.relam = am.oid),
pg_catalog.time_partition_range(c.oid)
WHERE c.relpartbound IS NOT NULL AND p.partstrat = 'r' AND p.partnatts = 1
AND a.attnum = ANY(partattrs::int2[])
) partitions
ORDER BY partrelid::text, lower_bound;
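
The rewritten predicate matches the partition key column directly with ANY over the int2vector instead of round-tripping partattrs through text; a standalone sketch of the same lookup (table name hypothetical):

-- resolve the single range-partition key column of a hypothetical table
SELECT a.attname
FROM pg_partitioned_table p
JOIN pg_attribute a ON (a.attrelid = p.partrelid)
WHERE p.partrelid = 'metrics'::regclass
  AND p.partstrat = 'r' AND p.partnatts = 1
  AND a.attnum = ANY (p.partattrs::int2[]);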

View File

@ -5,12 +5,13 @@ FROM (
FROM pg_class c
JOIN pg_inherits i ON (c.oid = inhrelid)
JOIN pg_partitioned_table p ON (inhparent = partrelid)
JOIN pg_attribute a ON (partrelid = attrelid AND ARRAY[attnum] <@ string_to_array(partattrs::text, ' ')::int2[])
JOIN pg_attribute a ON (partrelid = attrelid)
JOIN pg_type t ON (atttypid = t.oid)
JOIN pg_namespace tn ON (t.typnamespace = tn.oid)
LEFT JOIN pg_am am ON (c.relam = am.oid),
pg_catalog.time_partition_range(c.oid)
WHERE c.relpartbound IS NOT NULL AND p.partstrat = 'r' AND p.partnatts = 1
AND a.attnum = ANY(partattrs::int2[])
) partitions
ORDER BY partrelid::text, lower_bound;

View File

@ -100,9 +100,6 @@ static ForeignConstraintRelationshipNode * CreateOrFindNode(HTAB *adjacencyLists
relid);
static List * GetConnectedListHelper(ForeignConstraintRelationshipNode *node,
bool isReferencing);
static HTAB * CreateOidVisitedHashSet(void);
static bool OidVisited(HTAB *oidVisitedMap, Oid oid);
static void VisitOid(HTAB *oidVisitedMap, Oid oid);
static List * GetForeignConstraintRelationshipHelper(Oid relationId, bool isReferencing);
@ -442,7 +439,7 @@ GetConnectedListHelper(ForeignConstraintRelationshipNode *node, bool isReferenci
* As hash_create allocates memory in heap, callers are responsible to call
* hash_destroy when appropriate.
*/
static HTAB *
HTAB *
CreateOidVisitedHashSet(void)
{
HASHCTL info = { 0 };
@ -464,7 +461,7 @@ CreateOidVisitedHashSet(void)
/*
* OidVisited returns true if given oid is visited according to given oid hash-set.
*/
static bool
bool
OidVisited(HTAB *oidVisitedMap, Oid oid)
{
bool found = false;
@ -476,7 +473,7 @@ OidVisited(HTAB *oidVisitedMap, Oid oid)
/*
* VisitOid sets given oid as visited in given hash-set.
*/
static void
void
VisitOid(HTAB *oidVisitedMap, Oid oid)
{
bool found = false;

View File

@ -548,13 +548,14 @@ PartitionParentOid(Oid partitionOid)
/*
* LongestPartitionName is a uitility function that returns the partition
* name which is the longest in terms of number of characters.
* PartitionWithLongestNameRelationId is a utility function that returns the
* oid of the partition table that has the longest name in terms of number of
* characters.
*/
char *
LongestPartitionName(Oid parentRelationId)
Oid
PartitionWithLongestNameRelationId(Oid parentRelationId)
{
char *longestName = NULL;
Oid longestNamePartitionId = InvalidOid;
int longestNameLength = 0;
List *partitionList = PartitionList(parentRelationId);
@ -565,12 +566,12 @@ LongestPartitionName(Oid parentRelationId)
int partitionNameLength = strnlen(partitionName, NAMEDATALEN);
if (partitionNameLength > longestNameLength)
{
longestName = partitionName;
longestNamePartitionId = partitionRelationId;
longestNameLength = partitionNameLength;
}
}
return longestName;
return longestNamePartitionId;
}
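
A hedged SQL analog of what PartitionWithLongestNameRelationId computes, using the partition_lengths table from the regression tests as an example parent:

SELECT c.oid::regclass
FROM pg_inherits i
JOIN pg_class c ON (c.oid = i.inhrelid)
WHERE i.inhparent = 'partition_lengths'::regclass
ORDER BY length(c.relname) DESC
LIMIT 1;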

View File

@ -41,7 +41,7 @@ alter_role_if_exists(PG_FUNCTION_ARGS)
Node *parseTree = ParseTreeNode(utilityQuery);
ProcessUtilityParseTree(parseTree, utilityQuery, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(parseTree, utilityQuery, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
PG_RETURN_BOOL(true);
@ -98,7 +98,7 @@ worker_create_or_alter_role(PG_FUNCTION_ARGS)
ProcessUtilityParseTree(parseTree,
createRoleUtilityQuery,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL,
None_Receiver, NULL);
@ -126,7 +126,7 @@ worker_create_or_alter_role(PG_FUNCTION_ARGS)
ProcessUtilityParseTree(parseTree,
alterRoleUtilityQuery,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL,
None_Receiver, NULL);

View File

@ -12,6 +12,7 @@
#include "postgres.h"
#include "utils/lsyscache.h"
#include "distributed/metadata_utility.h"
#include "distributed/relay_utility.h"
#include "distributed/shard_utils.h"
@ -36,3 +37,21 @@ GetTableLocalShardOid(Oid citusTableOid, uint64 shardId)
return shardRelationOid;
}
/*
* GetLongestShardName is a utility function that returns the longest shard name
* (in number of characters) among the shards of the given table.
*
* Both the Oid and the name of the table are required so we can build the longest
* shard name even after a RENAME.
*/
char *
GetLongestShardName(Oid citusTableOid, char *finalRelationName)
{
char *longestShardName = pstrdup(finalRelationName);
ShardInterval *shardInterval = LoadShardIntervalWithLongestShardName(citusTableOid);
AppendShardIdToName(&longestShardName, shardInterval->shardId);
return longestShardName;
}
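
Shard names are built by appending '_<shardid>' to the relation name (AppendShardIdToName also hash-truncates overlong results), so the longest shard name comes from the shard id with the most digits; a hedged SQL approximation, ignoring the truncation case:

SELECT 'events_' || shardid
FROM pg_dist_shard
WHERE logicalrelid = 'events'::regclass
ORDER BY length(shardid::text) DESC, shardid DESC
LIMIT 1;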

View File

@ -111,12 +111,12 @@ worker_create_or_replace_object(PG_FUNCTION_ARGS)
RenameStmt *renameStmt = CreateRenameStatement(&address, newName);
const char *sqlRenameStmt = DeparseTreeNode((Node *) renameStmt);
ProcessUtilityParseTree((Node *) renameStmt, sqlRenameStmt,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}
/* apply create statement locally */
ProcessUtilityParseTree(parseTree, sqlStatement, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(parseTree, sqlStatement, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
/* type has been created */

View File

@ -396,7 +396,7 @@ worker_apply_shard_ddl_command(PG_FUNCTION_ARGS)
/* extend names in ddl command and apply extended command */
RelayEventExtendNames(ddlCommandNode, schemaName, shardId);
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
PG_RETURN_VOID();
@ -428,7 +428,7 @@ worker_apply_inter_shard_ddl_command(PG_FUNCTION_ARGS)
RelayEventExtendNamesForInterShardCommands(ddlCommandNode, leftShardId,
leftShardSchemaName, rightShardId,
rightShardSchemaName);
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
PG_RETURN_VOID();
@ -461,7 +461,7 @@ worker_apply_sequence_command(PG_FUNCTION_ARGS)
}
/* run the CREATE SEQUENCE command */
ProcessUtilityParseTree(commandNode, commandString, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(commandNode, commandString, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
CommandCounterIncrement();
@ -669,7 +669,7 @@ worker_append_table_to_shard(PG_FUNCTION_ARGS)
SetUserIdAndSecContext(CitusExtensionOwner(), SECURITY_LOCAL_USERID_CHANGE);
ProcessUtilityParseTree((Node *) localCopyCommand, queryString->data,
PROCESS_UTILITY_TOPLEVEL, NULL, None_Receiver, NULL);
PROCESS_UTILITY_QUERY, NULL, None_Receiver, NULL);
SetUserIdAndSecContext(savedUserId, savedSecurityContext);
@ -782,7 +782,7 @@ AlterSequenceMinMax(Oid sequenceId, char *schemaName, char *sequenceName,
/* since the command is an AlterSeqStmt, a dummy command string works fine */
ProcessUtilityParseTree((Node *) alterSequenceStatement, dummyString,
PROCESS_UTILITY_TOPLEVEL, NULL, None_Receiver, NULL);
PROCESS_UTILITY_QUERY, NULL, None_Receiver, NULL);
}
}

View File

@ -24,6 +24,10 @@
/* controlled via GUC, should be accessed via EnableLocalReferenceForeignKeys() */
extern bool EnableLocalReferenceForeignKeys;
extern void SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(Oid relationId,
char *
finalRelationName);
/*
* DistributeObjectOps specifies handlers for node/object type pairs.

View File

@ -67,6 +67,9 @@ typedef struct RelationRestriction
/* list of RootPlanParams for all outer nodes */
List *outerPlanParamsList;
/* list of translated vars; copied from Postgres since Postgres deletes the original */
List *translatedVars;
} RelationRestriction;
typedef struct JoinRestrictionContext

View File

@ -22,5 +22,8 @@ extern List * ReferencingRelationIdList(Oid relationId);
extern void SetForeignConstraintRelationshipGraphInvalid(void);
extern bool IsForeignConstraintRelationshipGraphValid(void);
extern void ClearForeignConstraintRelationshipGraphContext(void);
extern HTAB * CreateOidVisitedHashSet(void);
extern bool OidVisited(HTAB *oidVisitedMap, Oid oid);
extern void VisitOid(HTAB *oidVisitedMap, Oid oid);
#endif

View File

@ -27,5 +27,6 @@ extern bool IsObjectAddressOwnedByExtension(const ObjectAddress *target,
ObjectAddress *extensionAddress);
extern List * GetDistributedObjectAddressList(void);
extern void UpdateDistributedObjectColocationId(uint32 oldColocationId, uint32
newColocationId);
#endif /* CITUS_METADATA_DISTOBJECT_H */

View File

@ -36,6 +36,7 @@
#define CSTORE_TABLE_SIZE_FUNCTION "cstore_table_size(%s)"
#define SHARD_SIZES_COLUMN_COUNT 2
#define UPDATE_SHARD_STATISTICS_COLUMN_COUNT 4
/* In-memory representation of a typed tuple in pg_dist_shard. */
typedef struct ShardInterval
@ -206,7 +207,6 @@ extern StringInfo GenerateSizeQueryOnMultiplePlacements(List *shardIntervalList,
extern List * RemoveCoordinatorPlacementIfNotSingleNode(List *placementList);
extern ShardPlacement * ShardPlacementOnGroup(uint64 shardId, int groupId);
/* Function declarations to modify shard and shard placement data */
extern void InsertShardRow(Oid relationId, uint64 shardId, char storageType,
text *shardMinValue, text *shardMaxValue);
@ -264,5 +264,8 @@ extern ShardInterval * DeformedDistShardTupleToShardInterval(Datum *datumArray,
int32 intervalTypeMod);
extern void GetIntervalTypeInfo(char partitionMethod, Var *partitionColumn,
Oid *intervalTypeId, int32 *intervalTypeMod);
extern List * SendShardStatisticsQueriesInParallel(List *citusTableIds, bool
useDistributedTransaction, bool
useShardMinMaxQuery);
#endif /* METADATA_UTILITY_H */

View File

@ -19,7 +19,7 @@ extern bool PartitionTableNoLock(Oid relationId);
extern bool IsChildTable(Oid relationId);
extern bool IsParentTable(Oid relationId);
extern Oid PartitionParentOid(Oid partitionOid);
extern char * LongestPartitionName(Oid parentRelationId);
extern Oid PartitionWithLongestNameRelationId(Oid parentRelationId);
extern List * PartitionList(Oid parentRelationId);
extern char * GenerateDetachPartitionCommand(Oid partitionTableId);
extern char * GenerateAttachShardPartitionCommand(ShardInterval *shardInterval);

View File

@ -33,7 +33,6 @@ extern List * GenerateAllAttributeEquivalences(PlannerRestrictionContext *
plannerRestrictionContext);
extern uint32 UniqueRelationCount(RelationRestrictionContext *restrictionContext,
CitusTableType tableType);
extern List * DistributedRelationIdList(Query *query);
extern PlannerRestrictionContext * FilterPlannerRestrictionForQuery(
PlannerRestrictionContext *plannerRestrictionContext,

View File

@ -14,5 +14,6 @@
#include "postgres.h"
extern Oid GetTableLocalShardOid(Oid citusTableOid, uint64 shardId);
extern char * GetLongestShardName(Oid citusTableOid, char *finalRelationName);
#endif /* SHARD_UTILS_H */

View File

@ -191,8 +191,8 @@ s/relation with OID [0-9]+ does not exist/relation with OID XXXX does not exist/
# ignore JIT related messages
/^DEBUG: probing availability of JIT.*/d
/^DEBUG: provider not available, disabling JIT for current session.*/d
/^DEBUG: time to inline:.*/d
/^DEBUG: successfully loaded JIT.*/d
# ignore timing statistics for VACUUM VERBOSE
/CPU: user: .*s, system: .*s, elapsed: .*s/d
@ -223,3 +223,11 @@ s/^(ERROR: child table is missing constraint "\w+)_([0-9])+"/\1_xxxxxx"/g
s/.*//g
}
}
# normalize long table shard name errors for alter_table_set_access_method and alter_distributed_table
s/^(ERROR: child table is missing constraint "\w+)_([0-9])+"/\1_xxxxxx"/g
s/^(DEBUG: the name of the shard \(abcde_01234567890123456789012345678901234567890_f7ff6612)_([0-9])+/\1_xxxxxx/g
# normalize long index name errors for multi_index_statements
s/^(ERROR: The index name \(test_index_creation1_p2020_09_26)_([0-9])+_(tenant_id_timeperiod_idx)/\1_xxxxxx_\3/g
s/^(DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26)_([0-9])+_(tenant_id_timeperiod_idx)/\1_xxxxxx_\3/g

View File

@ -53,9 +53,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering distribution column
SELECT alter_distributed_table('dist_table', distribution_column := 'b');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -82,9 +82,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count
SELECT alter_distributed_table('dist_table', shard_count := 6);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -111,9 +111,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering colocation, note that shard count will also change
SELECT alter_distributed_table('dist_table', colocate_with := 'alter_distributed_table.colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -139,13 +139,13 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count with cascading, note that the colocation will be kept
SELECT alter_distributed_table('dist_table', shard_count := 8, cascade_to_colocated := true);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
NOTICE: creating a new table for alter_distributed_table.colocation_table
NOTICE: Moving the data of alter_distributed_table.colocation_table
NOTICE: Dropping the old alter_distributed_table.colocation_table
NOTICE: Renaming the new table to alter_distributed_table.colocation_table
NOTICE: moving the data of alter_distributed_table.colocation_table
NOTICE: dropping the old alter_distributed_table.colocation_table
NOTICE: renaming the new table to alter_distributed_table.colocation_table
alter_distributed_table
---------------------------------------------------------------------
@ -171,9 +171,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count without cascading, note that the colocation will be broken
SELECT alter_distributed_table('dist_table', shard_count := 10, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -261,16 +261,16 @@ SELECT * FROM partitioned_table_6_10 ORDER BY 1, 2;
SELECT alter_distributed_table('partitioned_table', shard_count := 10, distribution_column := 'a');
NOTICE: converting the partitions of alter_distributed_table.partitioned_table
NOTICE: creating a new table for alter_distributed_table.partitioned_table_1_5
NOTICE: Moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: Dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: creating a new table for alter_distributed_table.partitioned_table_6_10
NOTICE: Moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: Dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: creating a new table for alter_distributed_table.partitioned_table
NOTICE: Dropping the old alter_distributed_table.partitioned_table
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table
NOTICE: dropping the old alter_distributed_table.partitioned_table
NOTICE: renaming the new table to alter_distributed_table.partitioned_table
alter_distributed_table
---------------------------------------------------------------------
@ -658,12 +658,12 @@ ALTER TABLE par_table ATTACH PARTITION par_table_1 FOR VALUES FROM (1) TO (5);
SELECT alter_distributed_table('par_table', distribution_column:='b', colocate_with:='col_table');
NOTICE: converting the partitions of alter_distributed_table.par_table
NOTICE: creating a new table for alter_distributed_table.par_table_1
NOTICE: Moving the data of alter_distributed_table.par_table_1
NOTICE: Dropping the old alter_distributed_table.par_table_1
NOTICE: Renaming the new table to alter_distributed_table.par_table_1
NOTICE: moving the data of alter_distributed_table.par_table_1
NOTICE: dropping the old alter_distributed_table.par_table_1
NOTICE: renaming the new table to alter_distributed_table.par_table_1
NOTICE: creating a new table for alter_distributed_table.par_table
NOTICE: Dropping the old alter_distributed_table.par_table
NOTICE: Renaming the new table to alter_distributed_table.par_table
NOTICE: dropping the old alter_distributed_table.par_table
NOTICE: renaming the new table to alter_distributed_table.par_table
alter_distributed_table
---------------------------------------------------------------------
@ -685,9 +685,9 @@ HINT: check citus_tables view to see current properties of the table
-- first colocate the tables, then try to re-colocate
SELECT alter_distributed_table('dist_table', colocate_with := 'colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -701,9 +701,9 @@ HINT: check citus_tables view to see current properties of the table
SELECT alter_distributed_table('dist_table', distribution_column:='b', shard_count:=4, cascade_to_colocated:=false);
NOTICE: table is already distributed by b
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -712,9 +712,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count:=4, colocate_with:='colocation_table_2');
NOTICE: shard count of the table is already 4
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -723,9 +723,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', colocate_with:='colocation_table_2', distribution_column:='a');
NOTICE: table is already colocated with colocation_table_2
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -750,9 +750,9 @@ HINT: cascade_to_colocated := false will break the current colocation, cascade_
-- test changing shard count of a non-colocated table without cascade_to_colocated, shouldn't error
SELECT alter_distributed_table('dist_table', colocate_with := 'none');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -760,9 +760,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count := 14);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -820,11 +820,11 @@ INSERT INTO mat_view_test VALUES (1,1), (2,2);
CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_distributed_table('mat_view_test', shard_count := 5, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.mat_view_test
NOTICE: Moving the data of alter_distributed_table.mat_view_test
NOTICE: Dropping the old alter_distributed_table.mat_view_test
NOTICE: moving the data of alter_distributed_table.mat_view_test
NOTICE: dropping the old alter_distributed_table.mat_view_test
NOTICE: drop cascades to materialized view mat_view
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.mat_view_test CASCADE"
NOTICE: Renaming the new table to alter_distributed_table.mat_view_test
NOTICE: renaming the new table to alter_distributed_table.mat_view_test
alter_distributed_table
---------------------------------------------------------------------
@ -837,5 +837,85 @@ SELECT * FROM mat_view ORDER BY a;
2 | 2
(2 rows)
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', distribution_column := 'y');
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
NOTICE: creating a new table for alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
NOTICE: moving the data of alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: cannot perform distributed INSERT INTO ... SELECT because the partition columns in the source table and subquery do not match
DETAIL: The target table's partition column should correspond to a partition column in the subquery.
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
DEBUG: performing repartitioned INSERT ... SELECT
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
NOTICE: dropping the old alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456 CASCADE"
NOTICE: renaming the new table to alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
CONTEXT: SQL statement "ALTER TABLE alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 RENAME TO abcde_012345678901234567890123456789012345678901234567890123456"
alter_distributed_table
---------------------------------------------------------------------
(1 row)
RESET client_min_messages;
-- test long partitioned table names
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL,
inserted_utc timestamp without time zone NOT NULL DEFAULT now()
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
-- verify alter_distributed_table works with long partition names
SELECT alter_distributed_table('partition_lengths', shard_count := 29, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: creating a new table for alter_distributed_table.partition_lengths
NOTICE: dropping the old alter_distributed_table.partition_lengths
NOTICE: renaming the new table to alter_distributed_table.partition_lengths
alter_distributed_table
---------------------------------------------------------------------
(1 row)
-- test long partition table names
ALTER TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths_p2020_09_28;
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- verify alter_distributed_table works with long partitioned table names
SELECT alter_distributed_table('partition_lengths_12345678901234567890123456789012345678901234567890', shard_count := 17, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: creating a new table for alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: dropping the old alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO WARNING;
DROP SCHEMA alter_distributed_table CASCADE;

View File

@ -53,9 +53,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering distribution column
SELECT alter_distributed_table('dist_table', distribution_column := 'b');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -82,9 +82,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count
SELECT alter_distributed_table('dist_table', shard_count := 6);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -111,9 +111,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering colocation, note that shard count will also change
SELECT alter_distributed_table('dist_table', colocate_with := 'alter_distributed_table.colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -139,13 +139,13 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count with cascading, note that the colocation will be kept
SELECT alter_distributed_table('dist_table', shard_count := 8, cascade_to_colocated := true);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
NOTICE: creating a new table for alter_distributed_table.colocation_table
NOTICE: Moving the data of alter_distributed_table.colocation_table
NOTICE: Dropping the old alter_distributed_table.colocation_table
NOTICE: Renaming the new table to alter_distributed_table.colocation_table
NOTICE: moving the data of alter_distributed_table.colocation_table
NOTICE: dropping the old alter_distributed_table.colocation_table
NOTICE: renaming the new table to alter_distributed_table.colocation_table
alter_distributed_table
---------------------------------------------------------------------
@ -171,9 +171,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count without cascading, note that the colocation will be broken
SELECT alter_distributed_table('dist_table', shard_count := 10, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -261,16 +261,16 @@ SELECT * FROM partitioned_table_6_10 ORDER BY 1, 2;
SELECT alter_distributed_table('partitioned_table', shard_count := 10, distribution_column := 'a');
NOTICE: converting the partitions of alter_distributed_table.partitioned_table
NOTICE: creating a new table for alter_distributed_table.partitioned_table_1_5
NOTICE: Moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: Dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: creating a new table for alter_distributed_table.partitioned_table_6_10
NOTICE: Moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: Dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: creating a new table for alter_distributed_table.partitioned_table
NOTICE: Dropping the old alter_distributed_table.partitioned_table
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table
NOTICE: dropping the old alter_distributed_table.partitioned_table
NOTICE: renaming the new table to alter_distributed_table.partitioned_table
alter_distributed_table
---------------------------------------------------------------------
@ -637,12 +637,12 @@ ALTER TABLE par_table ATTACH PARTITION par_table_1 FOR VALUES FROM (1) TO (5);
SELECT alter_distributed_table('par_table', distribution_column:='b', colocate_with:='col_table');
NOTICE: converting the partitions of alter_distributed_table.par_table
NOTICE: creating a new table for alter_distributed_table.par_table_1
NOTICE: Moving the data of alter_distributed_table.par_table_1
NOTICE: Dropping the old alter_distributed_table.par_table_1
NOTICE: Renaming the new table to alter_distributed_table.par_table_1
NOTICE: moving the data of alter_distributed_table.par_table_1
NOTICE: dropping the old alter_distributed_table.par_table_1
NOTICE: renaming the new table to alter_distributed_table.par_table_1
NOTICE: creating a new table for alter_distributed_table.par_table
NOTICE: Dropping the old alter_distributed_table.par_table
NOTICE: Renaming the new table to alter_distributed_table.par_table
NOTICE: dropping the old alter_distributed_table.par_table
NOTICE: renaming the new table to alter_distributed_table.par_table
alter_distributed_table
---------------------------------------------------------------------
@ -664,9 +664,9 @@ HINT: check citus_tables view to see current properties of the table
-- first colocate the tables, then try to re-colocate
SELECT alter_distributed_table('dist_table', colocate_with := 'colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -680,9 +680,9 @@ HINT: check citus_tables view to see current properties of the table
SELECT alter_distributed_table('dist_table', distribution_column:='b', shard_count:=4, cascade_to_colocated:=false);
NOTICE: table is already distributed by b
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -691,9 +691,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count:=4, colocate_with:='colocation_table_2');
NOTICE: shard count of the table is already 4
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -702,9 +702,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', colocate_with:='colocation_table_2', distribution_column:='a');
NOTICE: table is already colocated with colocation_table_2
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -729,9 +729,9 @@ HINT: cascade_to_colocated := false will break the current colocation, cascade_
-- test changing shard count of a non-colocated table without cascade_to_colocated, shouldn't error
SELECT alter_distributed_table('dist_table', colocate_with := 'none');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -739,9 +739,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count := 14);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -799,11 +799,11 @@ INSERT INTO mat_view_test VALUES (1,1), (2,2);
CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_distributed_table('mat_view_test', shard_count := 5, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.mat_view_test
NOTICE: Moving the data of alter_distributed_table.mat_view_test
NOTICE: Dropping the old alter_distributed_table.mat_view_test
NOTICE: moving the data of alter_distributed_table.mat_view_test
NOTICE: dropping the old alter_distributed_table.mat_view_test
NOTICE: drop cascades to materialized view mat_view
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.mat_view_test CASCADE"
NOTICE: Renaming the new table to alter_distributed_table.mat_view_test
NOTICE: renaming the new table to alter_distributed_table.mat_view_test
alter_distributed_table
---------------------------------------------------------------------
@ -816,5 +816,85 @@ SELECT * FROM mat_view ORDER BY a;
2 | 2
(2 rows)
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', distribution_column := 'y');
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
NOTICE: creating a new table for alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
NOTICE: moving the data of alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: cannot perform distributed INSERT INTO ... SELECT because the partition columns in the source table and subquery do not match
DETAIL: The target table's partition column should correspond to a partition column in the subquery.
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
DEBUG: performing repartitioned INSERT ... SELECT
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
NOTICE: dropping the old alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456 CASCADE"
NOTICE: renaming the new table to alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
CONTEXT: SQL statement "ALTER TABLE alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 RENAME TO abcde_012345678901234567890123456789012345678901234567890123456"
alter_distributed_table
---------------------------------------------------------------------
(1 row)
RESET client_min_messages;
-- test long partitioned table names
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL,
inserted_utc timestamp without time zone NOT NULL DEFAULT now()
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
-- verify alter_distributed_table works with long partition names
SELECT alter_distributed_table('partition_lengths', shard_count := 29, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: creating a new table for alter_distributed_table.partition_lengths
NOTICE: dropping the old alter_distributed_table.partition_lengths
NOTICE: renaming the new table to alter_distributed_table.partition_lengths
alter_distributed_table
---------------------------------------------------------------------
(1 row)
-- test long partition table names
ALTER TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths_p2020_09_28;
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- verify alter_distributed_table works with long partitioned table names
SELECT alter_distributed_table('partition_lengths_12345678901234567890123456789012345678901234567890', shard_count := 17, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: creating a new table for alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: dropping the old alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO WARNING;
DROP SCHEMA alter_distributed_table CASCADE;
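For context on the DEBUG messages above: PostgreSQL caps identifiers at 63 bytes, so when a shard name (relation name plus a shard id suffix) would overflow that limit, Citus truncates the name, embeds a short hash, and switches to sequential and local execution to avoid self-deadlocks. A minimal sketch of inspecting the generated shard names with the shard_name() helper used later in this changeset; the table name here is hypothetical, not part of the test above:
-- a 60-character name: short enough to survive as-is, but the
-- appended shard id pushes the shard names past 63 bytes
CREATE TABLE long_name_01234567890123456789012345678901234567890123456789 (x int);
SELECT create_distributed_table('long_name_01234567890123456789012345678901234567890123456789', 'x');
-- shard_name() returns the hash-suffixed names used on the workers
SELECT shard_name(logicalrelid, shardid)
FROM pg_dist_shard
WHERE logicalrelid = 'long_name_01234567890123456789012345678901234567890123456789'::regclass;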

View File

@@ -3,9 +3,9 @@
CREATE TABLE alter_am_pg_version_table (a INT);
SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar');
NOTICE: creating a new table for public.alter_am_pg_version_table
NOTICE: Moving the data of public.alter_am_pg_version_table
NOTICE: Dropping the old public.alter_am_pg_version_table
NOTICE: Renaming the new table to public.alter_am_pg_version_table
NOTICE: moving the data of public.alter_am_pg_version_table
NOTICE: dropping the old public.alter_am_pg_version_table
NOTICE: renaming the new table to public.alter_am_pg_version_table
alter_table_set_access_method
---------------------------------------------------------------------
@@ -51,9 +51,9 @@ SELECT table_name, access_method FROM public.citus_tables WHERE table_name::text
SELECT alter_table_set_access_method('dist_table', 'columnar');
NOTICE: creating a new table for alter_table_set_access_method.dist_table
NOTICE: Moving the data of alter_table_set_access_method.dist_table
NOTICE: Dropping the old alter_table_set_access_method.dist_table
NOTICE: Renaming the new table to alter_table_set_access_method.dist_table
NOTICE: moving the data of alter_table_set_access_method.dist_table
NOTICE: dropping the old alter_table_set_access_method.dist_table
NOTICE: renaming the new table to alter_table_set_access_method.dist_table
alter_table_set_access_method
---------------------------------------------------------------------
@@ -131,9 +131,9 @@ ERROR: you cannot alter access method of a partitioned table
-- test altering the partition's access method
SELECT alter_table_set_access_method('partitioned_table_1_5', 'columnar');
NOTICE: creating a new table for alter_table_set_access_method.partitioned_table_1_5
NOTICE: Moving the data of alter_table_set_access_method.partitioned_table_1_5
NOTICE: Dropping the old alter_table_set_access_method.partitioned_table_1_5
NOTICE: Renaming the new table to alter_table_set_access_method.partitioned_table_1_5
NOTICE: moving the data of alter_table_set_access_method.partitioned_table_1_5
NOTICE: dropping the old alter_table_set_access_method.partitioned_table_1_5
NOTICE: renaming the new table to alter_table_set_access_method.partitioned_table_1_5
alter_table_set_access_method
---------------------------------------------------------------------
@@ -228,14 +228,14 @@ SELECT event FROM time_partitioned ORDER BY 1;
CALL alter_old_partitions_set_access_method('time_partitioned', '2021-01-01', 'columnar');
NOTICE: converting time_partitioned_d00 with start time Sat Jan 01 00:00:00 2000 and end time Thu Dec 31 00:00:00 2009
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d00
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: converting time_partitioned_d10 with start time Fri Jan 01 00:00:00 2010 and end time Tue Dec 31 00:00:00 2019
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d10
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d10
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d10
SELECT partition, access_method FROM time_partitions WHERE parent_table = 'time_partitioned'::regclass ORDER BY partition::text;
partition | access_method
---------------------------------------------------------------------
@@ -274,14 +274,14 @@ SELECT event FROM time_partitioned ORDER BY 1;
CALL alter_old_partitions_set_access_method('time_partitioned', '2021-01-01', 'heap');
NOTICE: converting time_partitioned_d00 with start time Sat Jan 01 00:00:00 2000 and end time Thu Dec 31 00:00:00 2009
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d00
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: converting time_partitioned_d10 with start time Fri Jan 01 00:00:00 2010 and end time Tue Dec 31 00:00:00 2019
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d10
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d10
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d10
SELECT partition, access_method FROM time_partitions WHERE parent_table = 'time_partitioned'::regclass ORDER BY partition::text;
partition | access_method
---------------------------------------------------------------------
@@ -324,9 +324,9 @@ SELECT alter_table_set_access_method('index_table', 'columnar');
NOTICE: the index idx1 on table alter_table_set_access_method.index_table will be dropped, because columnar tables cannot have indexes
NOTICE: the index idx2 on table alter_table_set_access_method.index_table will be dropped, because columnar tables cannot have indexes
NOTICE: creating a new table for alter_table_set_access_method.index_table
NOTICE: Moving the data of alter_table_set_access_method.index_table
NOTICE: Dropping the old alter_table_set_access_method.index_table
NOTICE: Renaming the new table to alter_table_set_access_method.index_table
NOTICE: moving the data of alter_table_set_access_method.index_table
NOTICE: dropping the old alter_table_set_access_method.index_table
NOTICE: renaming the new table to alter_table_set_access_method.index_table
alter_table_set_access_method
---------------------------------------------------------------------
@@ -395,9 +395,9 @@ NOTICE: creating a new table for alter_table_set_access_method.table_type_dist
WARNING: fake_scan_getnextslot
CONTEXT: SQL statement "SELECT EXISTS (SELECT 1 FROM alter_table_set_access_method.table_type_dist_1533505599)"
WARNING: fake_scan_getnextslot
NOTICE: Moving the data of alter_table_set_access_method.table_type_dist
NOTICE: Dropping the old alter_table_set_access_method.table_type_dist
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_dist
NOTICE: moving the data of alter_table_set_access_method.table_type_dist
NOTICE: dropping the old alter_table_set_access_method.table_type_dist
NOTICE: renaming the new table to alter_table_set_access_method.table_type_dist
alter_table_set_access_method
---------------------------------------------------------------------
@@ -408,9 +408,9 @@ NOTICE: creating a new table for alter_table_set_access_method.table_type_ref
WARNING: fake_scan_getnextslot
CONTEXT: SQL statement "SELECT EXISTS (SELECT 1 FROM alter_table_set_access_method.table_type_ref_1037855087)"
WARNING: fake_scan_getnextslot
NOTICE: Moving the data of alter_table_set_access_method.table_type_ref
NOTICE: Dropping the old alter_table_set_access_method.table_type_ref
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_ref
NOTICE: moving the data of alter_table_set_access_method.table_type_ref
NOTICE: dropping the old alter_table_set_access_method.table_type_ref
NOTICE: renaming the new table to alter_table_set_access_method.table_type_ref
alter_table_set_access_method
---------------------------------------------------------------------
@@ -418,9 +418,9 @@ NOTICE: Renaming the new table to alter_table_set_access_method.table_type_ref
SELECT alter_table_set_access_method('table_type_pg_local', 'fake_am');
NOTICE: creating a new table for alter_table_set_access_method.table_type_pg_local
NOTICE: Moving the data of alter_table_set_access_method.table_type_pg_local
NOTICE: Dropping the old alter_table_set_access_method.table_type_pg_local
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_pg_local
NOTICE: moving the data of alter_table_set_access_method.table_type_pg_local
NOTICE: dropping the old alter_table_set_access_method.table_type_pg_local
NOTICE: renaming the new table to alter_table_set_access_method.table_type_pg_local
alter_table_set_access_method
---------------------------------------------------------------------
@@ -428,9 +428,9 @@ NOTICE: Renaming the new table to alter_table_set_access_method.table_type_pg_l
SELECT alter_table_set_access_method('table_type_citus_local', 'fake_am');
NOTICE: creating a new table for alter_table_set_access_method.table_type_citus_local
NOTICE: Moving the data of alter_table_set_access_method.table_type_citus_local
NOTICE: Dropping the old alter_table_set_access_method.table_type_citus_local
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_citus_local
NOTICE: moving the data of alter_table_set_access_method.table_type_citus_local
NOTICE: dropping the old alter_table_set_access_method.table_type_citus_local
NOTICE: renaming the new table to alter_table_set_access_method.table_type_citus_local
alter_table_set_access_method
---------------------------------------------------------------------
@@ -459,9 +459,9 @@ create table test_fk_p0 partition of test_fk_p for values from (0) to (10);
create table test_fk_p1 partition of test_fk_p for values from (10) to (20);
select alter_table_set_access_method('test_fk_p1', 'columnar');
NOTICE: creating a new table for alter_table_set_access_method.test_fk_p1
NOTICE: Moving the data of alter_table_set_access_method.test_fk_p1
NOTICE: Dropping the old alter_table_set_access_method.test_fk_p1
NOTICE: Renaming the new table to alter_table_set_access_method.test_fk_p1
NOTICE: moving the data of alter_table_set_access_method.test_fk_p1
NOTICE: dropping the old alter_table_set_access_method.test_fk_p1
NOTICE: renaming the new table to alter_table_set_access_method.test_fk_p1
ERROR: Foreign keys and AFTER ROW triggers are not supported for columnar tables
HINT: Consider an AFTER STATEMENT trigger instead.
CONTEXT: SQL statement "ALTER TABLE alter_table_set_access_method.test_fk_p ATTACH PARTITION alter_table_set_access_method.test_fk_p1 FOR VALUES FROM (10) TO (20);"
@@ -475,11 +475,11 @@ INSERT INTO mat_view_test VALUES (1), (2);
CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_table_set_access_method('mat_view_test','columnar');
NOTICE: creating a new table for alter_table_set_access_method.mat_view_test
NOTICE: Moving the data of alter_table_set_access_method.mat_view_test
NOTICE: Dropping the old alter_table_set_access_method.mat_view_test
NOTICE: moving the data of alter_table_set_access_method.mat_view_test
NOTICE: dropping the old alter_table_set_access_method.mat_view_test
NOTICE: drop cascades to materialized view mat_view
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.mat_view_test CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.mat_view_test
NOTICE: renaming the new table to alter_table_set_access_method.mat_view_test
alter_table_set_access_method
---------------------------------------------------------------------
@@ -519,13 +519,13 @@ create materialized view m_dist as select * from dist;
create view v_dist as select * from dist;
select alter_table_set_access_method('local','columnar');
NOTICE: creating a new table for alter_table_set_access_method.local
NOTICE: Moving the data of alter_table_set_access_method.local
NOTICE: Dropping the old alter_table_set_access_method.local
NOTICE: moving the data of alter_table_set_access_method.local
NOTICE: dropping the old alter_table_set_access_method.local
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_local
drop cascades to view v_local
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.local CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.local
NOTICE: renaming the new table to alter_table_set_access_method.local
alter_table_set_access_method
---------------------------------------------------------------------
@@ -533,13 +533,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.local
select alter_table_set_access_method('ref','columnar');
NOTICE: creating a new table for alter_table_set_access_method.ref
NOTICE: Moving the data of alter_table_set_access_method.ref
NOTICE: Dropping the old alter_table_set_access_method.ref
NOTICE: moving the data of alter_table_set_access_method.ref
NOTICE: dropping the old alter_table_set_access_method.ref
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_ref
drop cascades to view v_ref
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.ref CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.ref
NOTICE: renaming the new table to alter_table_set_access_method.ref
alter_table_set_access_method
---------------------------------------------------------------------
@@ -547,13 +547,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.ref
select alter_table_set_access_method('dist','columnar');
NOTICE: creating a new table for alter_table_set_access_method.dist
NOTICE: Moving the data of alter_table_set_access_method.dist
NOTICE: Dropping the old alter_table_set_access_method.dist
NOTICE: moving the data of alter_table_set_access_method.dist
NOTICE: dropping the old alter_table_set_access_method.dist
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_dist
drop cascades to view v_dist
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.dist CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.dist
NOTICE: renaming the new table to alter_table_set_access_method.dist
alter_table_set_access_method
---------------------------------------------------------------------
@@ -561,13 +561,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.dist
SELECT alter_distributed_table('dist', shard_count:=1, cascade_to_colocated:=false);
NOTICE: creating a new table for alter_table_set_access_method.dist
NOTICE: Moving the data of alter_table_set_access_method.dist
NOTICE: Dropping the old alter_table_set_access_method.dist
NOTICE: moving the data of alter_table_set_access_method.dist
NOTICE: dropping the old alter_table_set_access_method.dist
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_dist
drop cascades to view v_dist
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.dist CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.dist
NOTICE: renaming the new table to alter_table_set_access_method.dist
alter_distributed_table
---------------------------------------------------------------------
@@ -575,13 +575,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.dist
select alter_table_set_access_method('local','heap');
NOTICE: creating a new table for alter_table_set_access_method.local
NOTICE: Moving the data of alter_table_set_access_method.local
NOTICE: Dropping the old alter_table_set_access_method.local
NOTICE: moving the data of alter_table_set_access_method.local
NOTICE: dropping the old alter_table_set_access_method.local
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_local
drop cascades to view v_local
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.local CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.local
NOTICE: renaming the new table to alter_table_set_access_method.local
alter_table_set_access_method
---------------------------------------------------------------------
@@ -589,13 +589,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.local
select alter_table_set_access_method('ref','heap');
NOTICE: creating a new table for alter_table_set_access_method.ref
NOTICE: Moving the data of alter_table_set_access_method.ref
NOTICE: Dropping the old alter_table_set_access_method.ref
NOTICE: moving the data of alter_table_set_access_method.ref
NOTICE: dropping the old alter_table_set_access_method.ref
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_ref
drop cascades to view v_ref
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.ref CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.ref
NOTICE: renaming the new table to alter_table_set_access_method.ref
alter_table_set_access_method
---------------------------------------------------------------------
@@ -603,13 +603,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.ref
select alter_table_set_access_method('dist','heap');
NOTICE: creating a new table for alter_table_set_access_method.dist
NOTICE: Moving the data of alter_table_set_access_method.dist
NOTICE: Dropping the old alter_table_set_access_method.dist
NOTICE: moving the data of alter_table_set_access_method.dist
NOTICE: dropping the old alter_table_set_access_method.dist
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_dist
drop cascades to view v_dist
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.dist CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.dist
NOTICE: renaming the new table to alter_table_set_access_method.dist
alter_table_set_access_method
---------------------------------------------------------------------
@@ -681,6 +681,40 @@ CREATE TABLE identity_cols_test (a int, b int generated by default as identity (
SELECT alter_table_set_access_method('identity_cols_test', 'columnar');
ERROR: cannot complete command because relation alter_table_set_access_method.identity_cols_test has identity column
HINT: Drop the identity columns and re-try the command
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_table_set_access_method('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'columnar');
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
NOTICE: creating a new table for alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: pathlist hook for columnar table am
CONTEXT: SQL statement "SELECT EXISTS (SELECT 1 FROM alter_table_set_access_method.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162)"
NOTICE: moving the data of alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
NOTICE: dropping the old alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456 CASCADE"
NOTICE: renaming the new table to alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
CONTEXT: SQL statement "ALTER TABLE alter_table_set_access_method.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 RENAME TO abcde_012345678901234567890123456789012345678901234567890123456"
alter_table_set_access_method
---------------------------------------------------------------------
(1 row)
SELECT * FROM abcde_0123456789012345678901234567890123456789012345678901234567890123456789;
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
DEBUG: pathlist hook for columnar table am
x | y
---------------------------------------------------------------------
(0 rows)
RESET client_min_messages;
SET client_min_messages TO WARNING;
DROP SCHEMA alter_table_set_access_method CASCADE;
SELECT 1 FROM master_remove_node('localhost', :master_port);
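One way past the identity-column error earlier in this file, assuming the identity property can be given up; a sketch that would have to run before the schema is dropped, and is not part of the test itself:
-- drop the identity property the HINT points at, then retry the conversion
ALTER TABLE alter_table_set_access_method.identity_cols_test ALTER COLUMN b DROP IDENTITY;
SELECT alter_table_set_access_method('alter_table_set_access_method.identity_cols_test', 'columnar');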

View File

@@ -351,11 +351,11 @@ ERROR: Table 'citus_local_table_1' is a local table. Replicating shard of a loc
BEGIN;
SELECT undistribute_table('citus_local_table_1');
NOTICE: creating a new table for citus_local_tables_test_schema.citus_local_table_1
NOTICE: Moving the data of citus_local_tables_test_schema.citus_local_table_1
NOTICE: moving the data of citus_local_tables_test_schema.citus_local_table_1
NOTICE: executing the command locally: SELECT a FROM citus_local_tables_test_schema.citus_local_table_1_1504027 citus_local_table_1
NOTICE: Dropping the old citus_local_tables_test_schema.citus_local_table_1
NOTICE: dropping the old citus_local_tables_test_schema.citus_local_table_1
NOTICE: executing the command locally: DROP TABLE IF EXISTS citus_local_tables_test_schema.citus_local_table_1_xxxxx CASCADE
NOTICE: Renaming the new table to citus_local_tables_test_schema.citus_local_table_1
NOTICE: renaming the new table to citus_local_tables_test_schema.citus_local_table_1
undistribute_table
---------------------------------------------------------------------

View File

@@ -0,0 +1,190 @@
--
-- citus_update_table_statistics.sql
--
-- Test citus_update_table_statistics function on both
-- hash and append distributed tables
-- This function updates shardlength, shardminvalue and shardmaxvalue
--
SET citus.next_shard_id TO 981000;
SET citus.next_placement_id TO 982000;
SET citus.shard_count TO 8;
SET citus.shard_replication_factor TO 2;
-- test with a hash-distributed table
-- here we update only shardlength, not shardminvalue and shardmaxvalue
CREATE TABLE test_table_statistics_hash (id int);
SELECT create_distributed_table('test_table_statistics_hash', 'id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- populate table
INSERT INTO test_table_statistics_hash SELECT i FROM generate_series(0, 10000)i;
-- originally shardlength (size of the shard) is zero
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue AS shardminvalue,
ds.shardmaxvalue AS shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_hash') AND dsp.shardlength = 0
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_hash | 981000 | 982000 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981000 | 982001 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981001 | 982002 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981001 | 982003 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981002 | 982004 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981002 | 982005 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981003 | 982006 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981003 | 982007 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981004 | 982008 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981004 | 982009 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981005 | 982010 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981005 | 982011 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981006 | 982012 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981006 | 982013 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981007 | 982014 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
test_table_statistics_hash | 981007 | 982015 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
(16 rows)
-- setting this to on in order to verify that we use a distributed transaction id
-- to run the size queries from different connections;
-- this helps us detect distributed deadlocks
SET citus.log_remote_commands TO ON;
-- setting this to sequential in order to have a deterministic order
-- in the output of citus.log_remote_commands
SET citus.multi_shard_modify_mode TO sequential;
-- update table statistics and then check that shardlength has changed
-- but shardminvalue and shardmaxvalue stay the same because this is
-- a hash distributed table
SELECT citus_update_table_statistics('test_table_statistics_hash');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981000') AS shard_size UNION ALL SELECT 981001 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981001') AS shard_size UNION ALL SELECT 981002 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981002') AS shard_size UNION ALL SELECT 981003 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981003') AS shard_size UNION ALL SELECT 981004 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981004') AS shard_size UNION ALL SELECT 981005 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981005') AS shard_size UNION ALL SELECT 981006 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981006') AS shard_size UNION ALL SELECT 981007 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981007') AS shard_size UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981000') AS shard_size UNION ALL SELECT 981001 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981001') AS shard_size UNION ALL SELECT 981002 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981002') AS shard_size UNION ALL SELECT 981003 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981003') AS shard_size UNION ALL SELECT 981004 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981004') AS shard_size UNION ALL SELECT 981005 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981005') AS shard_size UNION ALL SELECT 981006 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981006') AS shard_size UNION ALL SELECT 981007 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981007') AS shard_size UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
citus_update_table_statistics
---------------------------------------------------------------------
(1 row)
RESET citus.log_remote_commands;
RESET citus.multi_shard_modify_mode;
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_hash') AND dsp.shardlength > 0
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_hash | 981000 | 982000 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981000 | 982001 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981001 | 982002 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981001 | 982003 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981002 | 982004 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981002 | 982005 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981003 | 982006 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981003 | 982007 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981004 | 982008 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981004 | 982009 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981005 | 982010 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981005 | 982011 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981006 | 982012 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981006 | 982013 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981007 | 982014 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
test_table_statistics_hash | 981007 | 982015 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
(16 rows)
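As an alternative cross-check, the citus_shard_sizes() UDF reports per-shard sizes through the same kind of size queries; a sketch, assuming its (table_name, size) output columns:
SELECT table_name, size
FROM citus_shard_sizes()
WHERE table_name LIKE '%test_table_statistics_hash%'
ORDER BY table_name;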
-- check with an append-distributed table
-- here we update shardlength, shardminvalue and shardmaxvalue
CREATE TABLE test_table_statistics_append (id int);
SELECT create_distributed_table('test_table_statistics_append', 'id', 'append');
create_distributed_table
---------------------------------------------------------------------
(1 row)
COPY test_table_statistics_append FROM PROGRAM 'echo 0 && echo 1 && echo 2 && echo 3' WITH CSV;
COPY test_table_statistics_append FROM PROGRAM 'echo 4 && echo 5 && echo 6 && echo 7' WITH CSV;
-- originally shardminvalue and shardmaxvalue will be (0, 3) and (4, 7)
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_append')
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_append | 981008 | 982016 | test_table_statistics_append_981008 | 0 | 3
test_table_statistics_append | 981008 | 982017 | test_table_statistics_append_981008 | 0 | 3
test_table_statistics_append | 981009 | 982018 | test_table_statistics_append_981009 | 4 | 7
test_table_statistics_append | 981009 | 982019 | test_table_statistics_append_981009 | 4 | 7
(4 rows)
-- delete some data to change the shardminvalue of the shards
DELETE FROM test_table_statistics_append WHERE id = 0 OR id = 4;
SET citus.log_remote_commands TO ON;
SET citus.multi_shard_modify_mode TO sequential;
-- update table statistics and then check that shardminvalue has changed
-- shardlength (shard size) is still 8192 since there is very little data
SELECT citus_update_table_statistics('test_table_statistics_append');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981008') AS shard_size FROM test_table_statistics_append_981008 UNION ALL SELECT 981009 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981009') AS shard_size FROM test_table_statistics_append_981009 UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981008') AS shard_size FROM test_table_statistics_append_981008 UNION ALL SELECT 981009 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981009') AS shard_size FROM test_table_statistics_append_981009 UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
citus_update_table_statistics
---------------------------------------------------------------------
(1 row)
RESET citus.log_remote_commands;
RESET citus.multi_shard_modify_mode;
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_append')
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_append | 981008 | 982016 | test_table_statistics_append_981008 | 1 | 3
test_table_statistics_append | 981008 | 982017 | test_table_statistics_append_981008 | 1 | 3
test_table_statistics_append | 981009 | 982018 | test_table_statistics_append_981009 | 5 | 7
test_table_statistics_append | 981009 | 982019 | test_table_statistics_append_981009 | 5 | 7
(4 rows)
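These refreshed ranges are what the planner prunes append-distributed shards by; a sketch of a point query that, given the statistics above, only needs to scan shard 981008 (it would have to run before the cleanup below):
-- id = 2 falls only within shard 981008's [1, 3] range,
-- so shard 981009 ([5, 7]) can be pruned away
SELECT count(*) FROM test_table_statistics_append WHERE id = 2;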
DROP TABLE test_table_statistics_hash, test_table_statistics_append;
ALTER SYSTEM RESET citus.shard_count;
ALTER SYSTEM RESET citus.shard_replication_factor;

View File

@@ -250,9 +250,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option');
NOTICE: creating a new table for columnar_citus_integration.table_option
NOTICE: Moving the data of columnar_citus_integration.table_option
NOTICE: Dropping the old columnar_citus_integration.table_option
NOTICE: Renaming the new table to columnar_citus_integration.table_option
NOTICE: moving the data of columnar_citus_integration.table_option
NOTICE: dropping the old columnar_citus_integration.table_option
NOTICE: renaming the new table to columnar_citus_integration.table_option
undistribute_table
---------------------------------------------------------------------
@@ -569,9 +569,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option');
NOTICE: creating a new table for columnar_citus_integration.table_option
NOTICE: Moving the data of columnar_citus_integration.table_option
NOTICE: Dropping the old columnar_citus_integration.table_option
NOTICE: Renaming the new table to columnar_citus_integration.table_option
NOTICE: moving the data of columnar_citus_integration.table_option
NOTICE: dropping the old columnar_citus_integration.table_option
NOTICE: renaming the new table to columnar_citus_integration.table_option
undistribute_table
---------------------------------------------------------------------
@@ -808,9 +808,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option_reference');
NOTICE: creating a new table for columnar_citus_integration.table_option_reference
NOTICE: Moving the data of columnar_citus_integration.table_option_reference
NOTICE: Dropping the old columnar_citus_integration.table_option_reference
NOTICE: Renaming the new table to columnar_citus_integration.table_option_reference
NOTICE: moving the data of columnar_citus_integration.table_option_reference
NOTICE: dropping the old columnar_citus_integration.table_option_reference
NOTICE: renaming the new table to columnar_citus_integration.table_option_reference
undistribute_table
---------------------------------------------------------------------
@@ -1041,9 +1041,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option_citus_local');
NOTICE: creating a new table for columnar_citus_integration.table_option_citus_local
NOTICE: Moving the data of columnar_citus_integration.table_option_citus_local
NOTICE: Dropping the old columnar_citus_integration.table_option_citus_local
NOTICE: Renaming the new table to columnar_citus_integration.table_option_citus_local
NOTICE: moving the data of columnar_citus_integration.table_option_citus_local
NOTICE: dropping the old columnar_citus_integration.table_option_citus_local
NOTICE: renaming the new table to columnar_citus_integration.table_option_citus_local
undistribute_table
---------------------------------------------------------------------

View File

@@ -46,9 +46,9 @@ SELECT count(*), sum(i), min(i), max(i) FROM parent;
-- set older partitions as columnar
SELECT alter_table_set_access_method('p0','columnar');
NOTICE: creating a new table for public.p0
NOTICE: Moving the data of public.p0
NOTICE: Dropping the old public.p0
NOTICE: Renaming the new table to public.p0
NOTICE: moving the data of public.p0
NOTICE: dropping the old public.p0
NOTICE: renaming the new table to public.p0
alter_table_set_access_method
---------------------------------------------------------------------
@@ -56,9 +56,9 @@ NOTICE: Renaming the new table to public.p0
SELECT alter_table_set_access_method('p1','columnar');
NOTICE: creating a new table for public.p1
NOTICE: Moving the data of public.p1
NOTICE: Dropping the old public.p1
NOTICE: Renaming the new table to public.p1
NOTICE: moving the data of public.p1
NOTICE: dropping the old public.p1
NOTICE: renaming the new table to public.p1
alter_table_set_access_method
---------------------------------------------------------------------
@@ -66,9 +66,9 @@ NOTICE: Renaming the new table to public.p1
SELECT alter_table_set_access_method('p3','columnar');
NOTICE: creating a new table for public.p3
NOTICE: Moving the data of public.p3
NOTICE: Dropping the old public.p3
NOTICE: Renaming the new table to public.p3
NOTICE: moving the data of public.p3
NOTICE: dropping the old public.p3
NOTICE: renaming the new table to public.p3
alter_table_set_access_method
---------------------------------------------------------------------

View File

@@ -46,9 +46,9 @@ SELECT count(*), sum(i), min(i), max(i) FROM parent;
-- set older partitions as columnar
SELECT alter_table_set_access_method('p0','columnar');
NOTICE: creating a new table for public.p0
NOTICE: Moving the data of public.p0
NOTICE: Dropping the old public.p0
NOTICE: Renaming the new table to public.p0
NOTICE: moving the data of public.p0
NOTICE: dropping the old public.p0
NOTICE: renaming the new table to public.p0
alter_table_set_access_method
---------------------------------------------------------------------
@@ -56,9 +56,9 @@ NOTICE: Renaming the new table to public.p0
SELECT alter_table_set_access_method('p1','columnar');
NOTICE: creating a new table for public.p1
NOTICE: Moving the data of public.p1
NOTICE: Dropping the old public.p1
NOTICE: Renaming the new table to public.p1
NOTICE: moving the data of public.p1
NOTICE: dropping the old public.p1
NOTICE: renaming the new table to public.p1
alter_table_set_access_method
---------------------------------------------------------------------
@@ -66,9 +66,9 @@ NOTICE: Renaming the new table to public.p1
SELECT alter_table_set_access_method('p3','columnar');
NOTICE: creating a new table for public.p3
NOTICE: Moving the data of public.p3
NOTICE: Dropping the old public.p3
NOTICE: Renaming the new table to public.p3
NOTICE: moving the data of public.p3
NOTICE: dropping the old public.p3
NOTICE: renaming the new table to public.p3
alter_table_set_access_method
---------------------------------------------------------------------

View File

@@ -718,9 +718,9 @@ SELECT conrelid::regclass::text AS "Referencing Table", pg_get_constraintdef(oid
SELECT alter_distributed_table('adt_table', distribution_column:='b', colocate_with:='none');
NOTICE: creating a new table for coordinator_shouldhaveshards.adt_table
NOTICE: Moving the data of coordinator_shouldhaveshards.adt_table
NOTICE: Dropping the old coordinator_shouldhaveshards.adt_table
NOTICE: Renaming the new table to coordinator_shouldhaveshards.adt_table
NOTICE: moving the data of coordinator_shouldhaveshards.adt_table
NOTICE: dropping the old coordinator_shouldhaveshards.adt_table
NOTICE: renaming the new table to coordinator_shouldhaveshards.adt_table
alter_distributed_table
---------------------------------------------------------------------

View File

@@ -251,3 +251,48 @@ ORDER BY 1,2,3,4 LIMIT 5;
1 | Wed Nov 22 22:51:43.132261 2017 | 4 | 0 | 3 |
(5 rows)
-- we don't support cross JOINs between distributed tables
-- when no target list entries belong to one side of the join
CREATE TABLE dist1(c0 int);
CREATE TABLE dist2(c0 int);
CREATE TABLE dist3(c0 int , c1 int);
CREATE TABLE dist4(c0 int , c1 int);
SELECT create_distributed_table('dist1', 'c0');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('dist2', 'c0');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('dist3', 'c1');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('dist4', 'c1');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT dist2.c0 FROM dist1, dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT 1 FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT dist2.c0 FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT dist2.* FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
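For contrast, a sketch of a join shape the planner does accept: once the relations are connected through equi-joins on their distribution columns (dist2 on c0; dist3 and dist4 on c1, colocated by default), there is no cartesian product and both sides contribute target list entries:
SELECT dist2.c0
FROM dist2
JOIN dist3 ON dist2.c0 = dist3.c1
JOIN dist4 ON dist3.c1 = dist4.c1;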

View File

@@ -73,9 +73,9 @@ SELECT create_distributed_table('test_collate_pushed_down_aggregate', 'a');
SELECT alter_distributed_table('test_collate_pushed_down_aggregate', shard_count := 2, cascade_to_colocated:=false);
NOTICE: creating a new table for collation_tests.test_collate_pushed_down_aggregate
NOTICE: Moving the data of collation_tests.test_collate_pushed_down_aggregate
NOTICE: Dropping the old collation_tests.test_collate_pushed_down_aggregate
NOTICE: Renaming the new table to collation_tests.test_collate_pushed_down_aggregate
NOTICE: moving the data of collation_tests.test_collate_pushed_down_aggregate
NOTICE: dropping the old collation_tests.test_collate_pushed_down_aggregate
NOTICE: renaming the new table to collation_tests.test_collate_pushed_down_aggregate
alter_distributed_table
---------------------------------------------------------------------

View File

@@ -515,12 +515,28 @@ SELECT * FROM print_extension_changes();
| view time_partitions
(67 rows)
-- Test downgrade to 10.0-1 from 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
ALTER EXTENSION citus UPDATE TO '10.0-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
DROP TABLE prev_objects, extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
10.0devel
10.0.2
(1 row)
-- ensure no unexpected objects were created outside pg_catalog

View File

@@ -511,12 +511,28 @@ SELECT * FROM print_extension_changes();
| view time_partitions
(63 rows)
-- Test downgrade to 10.0-1 from 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
ALTER EXTENSION citus UPDATE TO '10.0-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
DROP TABLE prev_objects, extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
10.0devel
10.0.2
(1 row)
-- ensure no unexpected objects were created outside pg_catalog

View File

@@ -9,7 +9,7 @@ CREATE TABLE simple_table (
id bigint
);
SELECT master_get_table_ddl_events('simple_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.simple_table (first_name text, last_name text, id bigint)
ALTER TABLE public.simple_table OWNER TO postgres
@@ -21,7 +21,7 @@ CREATE TABLE not_null_table (
id bigint not null
);
SELECT master_get_table_ddl_events('not_null_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.not_null_table (city text, id bigint NOT NULL)
ALTER TABLE public.not_null_table OWNER TO postgres
@@ -34,7 +34,7 @@ CREATE TABLE column_constraint_table (
age int CONSTRAINT non_negative_age CHECK (age >= 0)
);
SELECT master_get_table_ddl_events('column_constraint_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.column_constraint_table (first_name text, last_name text, age integer, CONSTRAINT non_negative_age CHECK ((age >= 0)))
ALTER TABLE public.column_constraint_table OWNER TO postgres
@@ -48,7 +48,7 @@ CREATE TABLE table_constraint_table (
CONSTRAINT bids_ordered CHECK (min_bid > max_bid)
);
SELECT master_get_table_ddl_events('table_constraint_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.table_constraint_table (bid_item_id bigint, min_bid numeric NOT NULL, max_bid numeric NOT NULL, CONSTRAINT bids_ordered CHECK ((min_bid > max_bid)))
ALTER TABLE public.table_constraint_table OWNER TO postgres
@@ -67,7 +67,7 @@ SELECT create_distributed_table('check_constraint_table_1', 'id');
(1 row)
SELECT master_get_table_ddl_events('check_constraint_table_1');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.check_constraint_table_1 (id integer, b boolean, CONSTRAINT check_constraint_table_1_b_check CHECK (b))
ALTER TABLE public.check_constraint_table_1 OWNER TO postgres
@@ -84,7 +84,7 @@ SELECT create_distributed_table('check_constraint_table_2', 'id');
(1 row)
SELECT master_get_table_ddl_events('check_constraint_table_2');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.check_constraint_table_2 (id integer, CONSTRAINT check_constraint_table_2_check CHECK (true))
ALTER TABLE public.check_constraint_table_2 OWNER TO postgres
@@ -96,7 +96,7 @@ CREATE TABLE default_value_table (
price decimal default 0.00
);
SELECT master_get_table_ddl_events('default_value_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.default_value_table (name text, price numeric DEFAULT 0.00)
ALTER TABLE public.default_value_table OWNER TO postgres
@@ -109,7 +109,7 @@ CREATE TABLE pkey_table (
id bigint PRIMARY KEY
);
SELECT master_get_table_ddl_events('pkey_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.pkey_table (first_name text, last_name text, id bigint NOT NULL)
ALTER TABLE public.pkey_table OWNER TO postgres
@@ -122,7 +122,7 @@ CREATE TABLE unique_table (
username text UNIQUE not null
);
SELECT master_get_table_ddl_events('unique_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.unique_table (user_id bigint NOT NULL, username text NOT NULL)
ALTER TABLE public.unique_table OWNER TO postgres
@@ -137,7 +137,7 @@ CREATE TABLE clustered_table (
CREATE INDEX clustered_time_idx ON clustered_table (received_at);
CLUSTER clustered_table USING clustered_time_idx;
SELECT master_get_table_ddl_events('clustered_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.clustered_table (data json NOT NULL, received_at timestamp without time zone NOT NULL)
ALTER TABLE public.clustered_table OWNER TO postgres
@@ -178,30 +178,33 @@ NOTICE: foreign-data wrapper "fake_fdw" does not have an extension defined
(1 row)
ALTER FOREIGN TABLE foreign_table rename to renamed_foreign_table;
ALTER FOREIGN TABLE renamed_foreign_table rename full_name to rename_name;
ALTER FOREIGN TABLE renamed_foreign_table alter rename_name type char(8);
ALTER FOREIGN TABLE foreign_table rename to renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
ALTER FOREIGN TABLE renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890 rename full_name to rename_name;
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
ALTER FOREIGN TABLE renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890 alter rename_name type char(8);
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
\c - - :public_worker_1_host :worker_1_port
select table_name, column_name, data_type
from information_schema.columns
where table_schema='public' and table_name like 'renamed_foreign_table_%' and column_name <> 'id'
order by table_name;
table_name | column_name | data_type
table_name | column_name | data_type
---------------------------------------------------------------------
renamed_foreign_table_610008 | rename_name | character
renamed_foreign_table_610009 | rename_name | character
renamed_foreign_table_610010 | rename_name | character
renamed_foreign_table_610011 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610008 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610009 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610010 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610011 | rename_name | character
(4 rows)
\c - - :master_host :master_port
SELECT master_get_table_ddl_events('renamed_foreign_table');
SELECT master_get_table_ddl_events('renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890');
NOTICE: foreign-data wrapper "fake_fdw" does not have an extension defined
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE SERVER IF NOT EXISTS fake_fdw_server FOREIGN DATA WRAPPER fake_fdw
CREATE FOREIGN TABLE public.renamed_foreign_table (id bigint NOT NULL, rename_name character(8) DEFAULT ''::text NOT NULL) SERVER fake_fdw_server OPTIONS (encoding 'utf-8', compression 'true')
ALTER TABLE public.renamed_foreign_table OWNER TO postgres
CREATE FOREIGN TABLE public.renamed_foreign_table_with_long_name_12345678901234567890123456 (id bigint NOT NULL, rename_name character(8) DEFAULT ''::text NOT NULL) SERVER fake_fdw_server OPTIONS (encoding 'utf-8', compression 'true')
ALTER TABLE public.renamed_foreign_table_with_long_name_12345678901234567890123456 OWNER TO postgres
(3 rows)
-- propagating views is not supported
@@ -210,7 +213,8 @@ SELECT master_get_table_ddl_events('local_view');
ERROR: local_view is not a regular, foreign or partitioned table
-- clean up
DROP VIEW IF EXISTS local_view;
DROP FOREIGN TABLE IF EXISTS renamed_foreign_table;
DROP FOREIGN TABLE IF EXISTS renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
\c - - :public_worker_1_host :worker_1_port
select table_name, column_name, data_type
from information_schema.columns


@@ -357,13 +357,13 @@ SET client_min_messages TO DEBUG1;
CREATE INDEX ix_test_index_creation2
ON test_index_creation1 USING btree
(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
-- same test with schema qualified
SET search_path TO public;
CREATE INDEX ix_test_index_creation3
ON multi_index_statements.test_index_creation1 USING btree
(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
SET search_path TO multi_index_statements;
-- we cannot switch to sequential execution
-- after a parallel query
@@ -377,7 +377,7 @@ BEGIN;
CREATE INDEX ix_test_index_creation4
ON test_index_creation1 USING btree
(tenant_id, timeperiod);
ERROR: The index name (test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx) on a shard is too long and could lead to deadlocks when executed in a transaction block after a parallel query
ERROR: The index name (test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx) on a shard is too long and could lead to deadlocks when executed in a transaction block after a parallel query
HINT: Try re-running the transaction with "SET LOCAL citus.multi_shard_modify_mode TO 'sequential';"
ROLLBACK;
-- try inside a sequential block
@@ -392,7 +392,7 @@ BEGIN;
CREATE INDEX ix_test_index_creation4
ON test_index_creation1 USING btree
(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
ROLLBACK;
-- should be able to create indexes with INCLUDE/WHERE
CREATE INDEX ix_test_index_creation5 ON test_index_creation1
@@ -401,7 +401,7 @@ CREATE INDEX ix_test_index_creation5 ON test_index_creation1
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_2_tenant_id_timeperiod_field1_idx
CREATE UNIQUE INDEX ix_test_index_creation6 ON test_index_creation1
USING btree(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
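The INCLUDE/WHERE statement itself falls outside the hunk shown above; for reference, such an index generally takes the shape below (the covered and filtered columns are illustrative only):
-- a sketch, not the test's exact statement
CREATE INDEX ix_include_where_example ON test_index_creation1 USING btree (tenant_id)
    INCLUDE (timeperiod) WHERE (tenant_id > 0);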
-- should be able to create short named indexes in parallel
-- as the table/index name is short
CREATE INDEX f1


@@ -256,6 +256,44 @@ SELECT lock_relation_if_exists('test', 'ACCESS SHARE');
SELECT lock_relation_if_exists('test', 'EXCLUSIVE');
ERROR: permission denied for table test
ABORT;
-- test creating columnar tables and accessing columnar metadata tables as an unprivileged user
-- none of the 5 commands below should throw a permission error
-- read a columnar metadata table
SELECT * FROM columnar.stripe;
storage_id | stripe_num | file_offset | data_length | column_count | chunk_row_count | row_count | chunk_group_count
---------------------------------------------------------------------
(0 rows)
-- alter a columnar setting
SET columnar.chunk_group_row_limit = 1050;
DO $proc$
BEGIN
IF substring(current_Setting('server_version'), '\d+')::int >= 12 THEN
EXECUTE $$
-- create columnar table
CREATE TABLE columnar_table (a int) USING columnar;
-- alter a columnar table that is created by that unprivileged user
SELECT alter_columnar_table_set('columnar_table', chunk_group_row_limit => 100);
-- and drop it
DROP TABLE columnar_table;
$$;
END IF;
END$proc$;
-- cannot modify columnar metadata table as unprivileged user
INSERT INTO columnar.stripe VALUES(99);
ERROR: permission denied for table stripe
-- Cannot drop a columnar metadata table as an unprivileged user.
-- A privileged user cannot drop it either, but gets a different error message
-- (since the citus extension has a dependency on it).
DROP TABLE columnar.chunk;
ERROR: must be owner of table chunk
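For reference, alter_columnar_table_set takes the columnar knobs as named parameters and alter_columnar_table_reset takes boolean flags for the knobs to reset; both require table ownership, which is what the tests above exercise. A sketch on a hypothetical table owned by the current user, assuming the parameter names match the set/reset pair used in this test:
CREATE TABLE my_columnar (a int) USING columnar;  -- hypothetical table
SELECT alter_columnar_table_set('my_columnar', chunk_group_row_limit => 100);
SELECT alter_columnar_table_reset('my_columnar', chunk_group_row_limit => true);
DROP TABLE my_columnar;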
-- test whether a read-only user can read from citus_tables view
SELECT distribution_column FROM citus_tables WHERE table_name = 'test'::regclass;
distribution_column
---------------------------------------------------------------------
id
(1 row)
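citus_tables is a view over the Citus metadata, so a read-only role can query any of its columns once granted access; a sketch listing all distributed tables:
SELECT table_name, citus_table_type, distribution_column
FROM citus_tables
ORDER BY table_name;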
-- check no permission
SET ROLE no_access;
EXECUTE prepare_insert(1);


@@ -1,6 +1,8 @@
CREATE SCHEMA mx_alter_distributed_table;
SET search_path TO mx_alter_distributed_table;
SET citus.shard_replication_factor TO 1;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1410000;
SET citus.replication_model TO 'streaming';
-- test alter_distributed_table UDF
CREATE TABLE adt_table (a INT, b INT);
CREATE TABLE adt_col (a INT UNIQUE, b INT);
@@ -80,9 +82,9 @@ SELECT conrelid::regclass::text AS "Referencing Table", pg_get_constraintdef(oid
SELECT alter_distributed_table('adt_table', distribution_column:='b', colocate_with:='none');
NOTICE: creating a new table for mx_alter_distributed_table.adt_table
NOTICE: Moving the data of mx_alter_distributed_table.adt_table
NOTICE: Dropping the old mx_alter_distributed_table.adt_table
NOTICE: Renaming the new table to mx_alter_distributed_table.adt_table
NOTICE: moving the data of mx_alter_distributed_table.adt_table
NOTICE: dropping the old mx_alter_distributed_table.adt_table
NOTICE: renaming the new table to mx_alter_distributed_table.adt_table
alter_distributed_table
---------------------------------------------------------------------
@@ -138,9 +140,9 @@ BEGIN;
INSERT INTO adt_table SELECT x, x+1 FROM generate_series(1, 1000) x;
SELECT alter_distributed_table('adt_table', distribution_column:='a');
NOTICE: creating a new table for mx_alter_distributed_table.adt_table
NOTICE: Moving the data of mx_alter_distributed_table.adt_table
NOTICE: Dropping the old mx_alter_distributed_table.adt_table
NOTICE: Renaming the new table to mx_alter_distributed_table.adt_table
NOTICE: moving the data of mx_alter_distributed_table.adt_table
NOTICE: dropping the old mx_alter_distributed_table.adt_table
NOTICE: renaming the new table to mx_alter_distributed_table.adt_table
alter_distributed_table
---------------------------------------------------------------------
@@ -159,5 +161,317 @@ SELECT table_name, citus_table_type, distribution_column, shard_count FROM publi
adt_table | distributed | a | 6
(1 row)
-- test procedure colocation is preserved with alter_distributed_table
CREATE TABLE test_proc_colocation_0 (a float8);
SELECT create_distributed_table('test_proc_colocation_0', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE OR REPLACE procedure proc_0(dist_key float8)
LANGUAGE plpgsql
AS $$
DECLARE
res INT := 0;
BEGIN
INSERT INTO test_proc_colocation_0 VALUES (dist_key);
SELECT count(*) INTO res FROM test_proc_colocation_0;
RAISE NOTICE 'Res: %', res;
COMMIT;
END;$$;
SELECT create_distributed_function('proc_0(float8)', 'dist_key', 'test_proc_colocation_0' );
create_distributed_function
---------------------------------------------------------------------
(1 row)
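Assuming the named-parameter form of create_distributed_function, the positional call above is equivalent to the sketch below; colocating the procedure with the table is what allows the CALLs later in this test to be pushed down to the worker holding the matching shard:
SELECT create_distributed_function('proc_0(float8)',
                                   distribution_arg_name := 'dist_key',
                                   colocate_with := 'test_proc_colocation_0');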
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410002
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410002
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 1
DETAIL: from localhost:xxxxx
RESET client_min_messages;
-- shardCount is not null && list_length(colocatedTableList) = 1
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 2
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410003
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
(1 row)
-- colocatewith is not null && list_length(colocatedTableList) = 1
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 4);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_proc_colocation_1 (a float8);
SELECT create_distributed_table('test_proc_colocation_1', 'a', colocate_with := 'none');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('test_proc_colocation_0', colocate_with := 'test_proc_colocation_1');
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 3
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410004
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410004
(1 row)
-- shardCount is not null && cascade_to_colocated is true
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8, cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 4
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410003
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
(1 row)
-- colocatewith is not null && cascade_to_colocated is true
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 4, cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_proc_colocation_2 (a float8);
SELECT create_distributed_table('test_proc_colocation_2', 'a', colocate_with := 'none');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('test_proc_colocation_0', colocate_with := 'test_proc_colocation_2', cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 5
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410005
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410005
(1 row)
-- try a case with more than one procedure
CREATE OR REPLACE procedure proc_1(dist_key float8)
LANGUAGE plpgsql
AS $$
DECLARE
res INT := 0;
BEGIN
INSERT INTO test_proc_colocation_0 VALUES (dist_key);
SELECT count(*) INTO res FROM test_proc_colocation_0;
RAISE NOTICE 'Res: %', res;
COMMIT;
END;$$;
SELECT create_distributed_function('proc_1(float8)', 'dist_key', 'test_proc_colocation_0' );
create_distributed_function
---------------------------------------------------------------------
(1 row)
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410005
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410005
proc_1 | 1410005
(2 rows)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 6
DETAIL: from localhost:xxxxx
CALL proc_1(2.0);
DEBUG: pushing down the procedure
NOTICE: Res: 7
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8, cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_2
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_2
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_2
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_2
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 8
DETAIL: from localhost:xxxxx
CALL proc_1(2.0);
DEBUG: pushing down the procedure
NOTICE: Res: 9
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410003
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
proc_1 | 1410003
(2 rows)
-- case which shouldn't preserve colocation for now
-- shardCount is not null && cascade_to_colocated is false
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 18, cascade_to_colocated := false);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410006
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
proc_1 | 1410003
(2 rows)
SET client_min_messages TO WARNING;
DROP SCHEMA mx_alter_distributed_table CASCADE;


@@ -125,13 +125,112 @@ SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.name_len
(1 row)
\c - - :master_host :master_port
-- Placeholders for RENAME operations
\set VERBOSITY TERSE
-- Rename the table to a too-long name
SET client_min_messages TO DEBUG1;
SET citus.force_max_query_parallelization TO ON;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
ERROR: shard name name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx exceeds 63 characters
DEBUG: the name of the shard (name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx) for relation (name_len_12345678901234567890123456789012345678901234567890) is too long, switching to sequential and local execution mode to prevent self deadlocks
SELECT * FROM name_len_12345678901234567890123456789012345678901234567890;
col1 | col2 | float_col_12345678901234567890123456789012345678901234567890 | date_col_12345678901234567890123456789012345678901234567890 | int_col_12345678901234567890123456789012345678901234567890
---------------------------------------------------------------------
(0 rows)
ALTER TABLE name_len_12345678901234567890123456789012345678901234567890 RENAME TO name_lengths;
SELECT * FROM name_lengths;
col1 | col2 | float_col_12345678901234567890123456789012345678901234567890 | date_col_12345678901234567890123456789012345678901234567890 | int_col_12345678901234567890123456789012345678901234567890
---------------------------------------------------------------------
(0 rows)
-- Test renames on zero shard distributed tables
CREATE TABLE append_zero_shard_table (a int);
SELECT create_distributed_table('append_zero_shard_table', 'a', 'append');
create_distributed_table
---------------------------------------------------------------------
(1 row)
ALTER TABLE append_zero_shard_table rename TO append_zero_shard_table_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "append_zero_shard_table_12345678901234567890123456789012345678901234567890" will be truncated to "append_zero_shard_table_123456789012345678901234567890123456789"
-- Verify that we do not support long renames after parallel queries are executed in a transaction block
BEGIN;
ALTER TABLE name_lengths rename col1 to new_column_name;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
ERROR: Shard name (name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx) for table (name_len_12345678901234567890123456789012345678901234567890) is too long and could lead to deadlocks when executed in a transaction block after a parallel query
HINT: Try re-running the transaction with "SET LOCAL citus.multi_shard_modify_mode TO 'sequential';"
ROLLBACK;
-- The same operation will work when sequential mode is set
BEGIN;
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
ALTER TABLE name_lengths rename col1 to new_column_name;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
DEBUG: the name of the shard (name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx) for relation (name_len_12345678901234567890123456789012345678901234567890) is too long, switching to sequential and local execution mode to prevent self deadlocks
ROLLBACK;
RESET client_min_messages;
-- test long partitioned table renames
SET citus.shard_replication_factor TO 1;
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE partition_lengths_p2020_09_28 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
-- verify that we can rename partitioned tables and partitions to too-long names
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- verify that we can rename partitioned tables and partitions with too-long names
ALTER TABLE partition_lengths_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- Placeholders for unsupported operations
\set VERBOSITY TERSE
-- renaming distributed table partitions
ALTER TABLE partition_lengths_p2020_09_28 RENAME TO partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
-- creating or attaching new partitions with long names creates deadlocks
CREATE TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890 (LIKE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890);
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths
ATTACH PARTITION partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890
FOR VALUES FROM ('2020-09-29 00:00:00') TO ('2020-09-30 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
ERROR: canceling the transaction since it was involved in a distributed deadlock
CREATE TABLE partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890
PARTITION OF partition_lengths
FOR VALUES FROM ('2020-09-30 00:00:00') TO ('2020-10-01 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_30_123456789012345678901234567890123"
ERROR: canceling the transaction since it was involved in a distributed deadlock
DROP TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
-- creating or attaching new partitions with long names works when using the sequential shard modify mode
BEGIN;
SET LOCAL citus.multi_shard_modify_mode = sequential;
CREATE TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890 (LIKE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890);
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths
ATTACH PARTITION partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890
FOR VALUES FROM ('2020-09-29 00:00:00') TO ('2020-09-30 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
CREATE TABLE partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890
PARTITION OF partition_lengths
FOR VALUES FROM ('2020-09-30 00:00:00') TO ('2020-10-01 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_30_123456789012345678901234567890123"
ROLLBACK;
-- renaming distributed table constraints are not supported
ALTER TABLE name_lengths RENAME CONSTRAINT unique_12345678901234567890123456789012345678901234567890 TO unique2_12345678901234567890123456789012345678901234567890;
ERROR: renaming constraints belonging to distributed tables is currently unsupported
DROP TABLE partition_lengths CASCADE;
\set VERBOSITY DEFAULT
-- Verify that we can create indexes with very long names on zero shard tables.
CREATE INDEX append_zero_shard_table_idx_12345678901234567890123456789012345678901234567890 ON append_zero_shard_table_12345678901234567890123456789012345678901234567890(a);
NOTICE: identifier "append_zero_shard_table_idx_12345678901234567890123456789012345678901234567890" will be truncated to "append_zero_shard_table_idx_12345678901234567890123456789012345"
NOTICE: identifier "append_zero_shard_table_12345678901234567890123456789012345678901234567890" will be truncated to "append_zero_shard_table_123456789012345678901234567890123456789"
-- Verify that CREATE INDEX on an already distributed table has proper shard names.
CREATE INDEX tmp_idx_12345678901234567890123456789012345678901234567890 ON name_lengths(col2);
\c - - :public_worker_1_host :worker_1_port
@@ -148,15 +247,19 @@ SELECT "relname", "Column", "Type", "Definition" FROM index_attrs WHERE
-- by the parser/rewriter before further processing, just as in Postgres.
CREATE INDEX tmp_idx_123456789012345678901234567890123456789012345678901234567890 ON name_lengths(col2);
NOTICE: identifier "tmp_idx_123456789012345678901234567890123456789012345678901234567890" will be truncated to "tmp_idx_1234567890123456789012345678901234567890123456789012345"
-- Verify we can rename indexes with long names
ALTER INDEX tmp_idx_123456789012345678901234567890123456789012345678901234567890 RENAME TO tmp_idx_newname_123456789012345678901234567890123456789012345678901234567890;
NOTICE: identifier "tmp_idx_123456789012345678901234567890123456789012345678901234567890" will be truncated to "tmp_idx_1234567890123456789012345678901234567890123456789012345"
NOTICE: identifier "tmp_idx_newname_123456789012345678901234567890123456789012345678901234567890" will be truncated to "tmp_idx_newname_12345678901234567890123456789012345678901234567"
\c - - :public_worker_1_host :worker_1_port
SELECT "relname", "Column", "Type", "Definition" FROM index_attrs WHERE
relname LIKE 'tmp_idx_%' ORDER BY 1 DESC, 2 DESC, 3 DESC, 4 DESC;
relname | Column | Type | Definition
---------------------------------------------------------------------
tmp_idx_newname_1234567890123456789012345678901_c54e849b_225003 | col2 | integer | col2
tmp_idx_newname_1234567890123456789012345678901_c54e849b_225002 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_5e470afa_225003 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_5e470afa_225002 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_599636aa_225003 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_599636aa_225002 | col2 | integer | col2
(4 rows)
\c - - :master_host :master_port
@@ -208,14 +311,14 @@ SELECT master_create_worker_shards('sneaky_name_lengths', '2', '2');
(1 row)
\c - - :public_worker_1_host :worker_1_port
\di public.sneaky*225006
\di public.sneaky*225030
List of relations
Schema | Name | Type | Owner | Table
---------------------------------------------------------------------
public | sneaky_name_lengths_int_col_1234567890123456789_6402d2cd_225006 | index | postgres | sneaky_name_lengths_225006
public | sneaky_name_lengths_int_col_1234567890123456789_6402d2cd_225030 | index | postgres | sneaky_name_lengths_225030
(1 row)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.sneaky_name_lengths_225006'::regclass ORDER BY 1 DESC, 2 DESC;
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.sneaky_name_lengths_225030'::regclass ORDER BY 1 DESC, 2 DESC;
Constraint | Definition
---------------------------------------------------------------------
checky_12345678901234567890123456789012345678901234567890 | CHECK (int_col_123456789012345678901234567890123456789012345678901234 > 100)
@@ -239,11 +342,11 @@ SELECT create_distributed_table('sneaky_name_lengths', 'col1', 'hash');
(1 row)
\c - - :public_worker_1_host :worker_1_port
\di unique*225008
\di unique*225032
List of relations
Schema | Name | Type | Owner | Table
---------------------------------------------------------------------
public | unique_1234567890123456789012345678901234567890_a5986f27_225008 | index | postgres | sneaky_name_lengths_225008
public | unique_1234567890123456789012345678901234567890_a5986f27_225032 | index | postgres | sneaky_name_lengths_225032
(1 row)
\c - - :master_host :master_port
@@ -336,3 +439,4 @@ DROP TABLE multi_name_lengths.too_long_12345678901234567890123456789012345678901
-- Clean up.
DROP TABLE name_lengths CASCADE;
DROP TABLE U&"elephant_!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D" UESCAPE '!' CASCADE;
RESET citus.force_max_query_parallelization;
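The U& identifier above is built from Unicode escape sequences with a custom escape character. A minimal sketch of the same mechanism, with a hypothetical table:
-- U&"d!0061ta" UESCAPE '!' resolves to the identifier "data" (!0061 is 'a')
CREATE TABLE U&"d!0061ta" UESCAPE '!' (a int);
DROP TABLE data;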


@@ -41,7 +41,7 @@ SELECT create_distributed_table('articles_hash', 'author_id');
(1 row)
CREATE TABLE authors_reference ( name varchar(20), id bigint );
CREATE TABLE authors_reference (id int, name text);
SELECT create_reference_table('authors_reference');
create_reference_table
---------------------------------------------------------------------
@@ -2111,6 +2111,26 @@ DEBUG: query has a single distribution column value: 4
0
(1 row)
-- test INSERT using values from generate_series() and repeat() functions
INSERT INTO authors_reference (id, name) VALUES (generate_series(1, 10), repeat('Migjeni', 3));
DEBUG: Creating router plan
SELECT * FROM authors_reference ORDER BY 1, 2;
DEBUG: Distributed planning for a fast-path router query
DEBUG: Creating router plan
id | name
---------------------------------------------------------------------
1 | MigjeniMigjeniMigjeni
2 | MigjeniMigjeniMigjeni
3 | MigjeniMigjeniMigjeni
4 | MigjeniMigjeniMigjeni
5 | MigjeniMigjeniMigjeni
6 | MigjeniMigjeniMigjeni
7 | MigjeniMigjeniMigjeni
8 | MigjeniMigjeniMigjeni
9 | MigjeniMigjeniMigjeni
10 | MigjeniMigjeniMigjeni
(10 rows)
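The single VALUES clause above produced ten rows because a set-returning function in a VALUES list is expanded into one row per element, each paired with the scalar repeat() result. A local sketch of the same expansion:
SELECT generate_series(1, 3) AS id, repeat('Migjeni', 3) AS name;
-- three rows, each with name 'MigjeniMigjeniMigjeni'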
SET client_min_messages to 'NOTICE';
DROP FUNCTION author_articles_max_id();
DROP FUNCTION author_articles_id_word_count();


@@ -3,6 +3,7 @@
CREATE SCHEMA null_parameters;
SET search_path TO null_parameters;
SET citus.next_shard_id TO 1680000;
SET citus.shard_count to 32;
CREATE TABLE text_dist_column (key text, value text);
SELECT create_distributed_table('text_dist_column', 'key');
create_distributed_table


@@ -472,9 +472,9 @@ SELECT * FROM generated_stored_dist ORDER BY 1,2,3;
INSERT INTO generated_stored_dist VALUES (1, 'text_1'), (2, 'text_2');
SELECT alter_distributed_table('generated_stored_dist', shard_count := 5, cascade_to_colocated := false);
NOTICE: creating a new table for test_pg12.generated_stored_dist
NOTICE: Moving the data of test_pg12.generated_stored_dist
NOTICE: Dropping the old test_pg12.generated_stored_dist
NOTICE: Renaming the new table to test_pg12.generated_stored_dist
NOTICE: moving the data of test_pg12.generated_stored_dist
NOTICE: dropping the old test_pg12.generated_stored_dist
NOTICE: renaming the new table to test_pg12.generated_stored_dist
alter_distributed_table
---------------------------------------------------------------------
@@ -533,9 +533,9 @@ create table generated_stored_columnar_p0 partition of generated_stored_columnar
create table generated_stored_columnar_p1 partition of generated_stored_columnar for values from (10) to (20);
SELECT alter_table_set_access_method('generated_stored_columnar_p0', 'columnar');
NOTICE: creating a new table for test_pg12.generated_stored_columnar_p0
NOTICE: Moving the data of test_pg12.generated_stored_columnar_p0
NOTICE: Dropping the old test_pg12.generated_stored_columnar_p0
NOTICE: Renaming the new table to test_pg12.generated_stored_columnar_p0
NOTICE: moving the data of test_pg12.generated_stored_columnar_p0
NOTICE: dropping the old test_pg12.generated_stored_columnar_p0
NOTICE: renaming the new table to test_pg12.generated_stored_columnar_p0
alter_table_set_access_method
---------------------------------------------------------------------
@@ -568,9 +568,9 @@ SELECT * FROM generated_stored_ref ORDER BY 1,2,3,4,5;
BEGIN;
SELECT undistribute_table('generated_stored_ref');
NOTICE: creating a new table for test_pg12.generated_stored_ref
NOTICE: Moving the data of test_pg12.generated_stored_ref
NOTICE: Dropping the old test_pg12.generated_stored_ref
NOTICE: Renaming the new table to test_pg12.generated_stored_ref
NOTICE: moving the data of test_pg12.generated_stored_ref
NOTICE: dropping the old test_pg12.generated_stored_ref
NOTICE: renaming the new table to test_pg12.generated_stored_ref
undistribute_table
---------------------------------------------------------------------
@@ -600,9 +600,9 @@ BEGIN;
-- show that undistribute_table works fine
SELECT undistribute_table('generated_stored_ref');
NOTICE: creating a new table for test_pg12.generated_stored_ref
NOTICE: Moving the data of test_pg12.generated_stored_ref
NOTICE: Dropping the old test_pg12.generated_stored_ref
NOTICE: Renaming the new table to test_pg12.generated_stored_ref
NOTICE: moving the data of test_pg12.generated_stored_ref
NOTICE: dropping the old test_pg12.generated_stored_ref
NOTICE: renaming the new table to test_pg12.generated_stored_ref
undistribute_table
---------------------------------------------------------------------
@@ -630,9 +630,9 @@ BEGIN;
-- show that undistribute_table works fine
SELECT undistribute_table('generated_stored_ref');
NOTICE: creating a new table for test_pg12.generated_stored_ref
NOTICE: Moving the data of test_pg12.generated_stored_ref
NOTICE: Dropping the old test_pg12.generated_stored_ref
NOTICE: Renaming the new table to test_pg12.generated_stored_ref
NOTICE: moving the data of test_pg12.generated_stored_ref
NOTICE: dropping the old test_pg12.generated_stored_ref
NOTICE: renaming the new table to test_pg12.generated_stored_ref
undistribute_table
---------------------------------------------------------------------
@@ -650,8 +650,22 @@ SELECT citus_remove_node('localhost', :master_port);
(1 row)
CREATE TABLE superuser_columnar_table (a int) USING columnar;
CREATE USER read_access;
NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SET ROLE read_access;
-- user shouldn't be able to execute alter_columnar_table_set
-- or alter_columnar_table_reset for a columnar table that it
-- doesn't own
SELECT alter_columnar_table_set('test_pg12.superuser_columnar_table', chunk_group_row_limit => 100);
ERROR: permission denied for schema test_pg12
SELECT alter_columnar_table_reset('test_pg12.superuser_columnar_table');
ERROR: permission denied for schema test_pg12
RESET ROLE;
DROP USER read_access;
\set VERBOSITY terse
drop schema test_pg12 cascade;
NOTICE: drop cascades to 15 other objects
NOTICE: drop cascades to 16 other objects
\set VERBOSITY default
SET citus.shard_replication_factor to 2;


@@ -155,9 +155,9 @@ TRUNCATE with_ties_table_2;
-- test INSERT SELECTs into distributed table with a different distribution column
SELECT undistribute_table('with_ties_table_2');
NOTICE: creating a new table for public.with_ties_table_2
NOTICE: Moving the data of public.with_ties_table_2
NOTICE: Dropping the old public.with_ties_table_2
NOTICE: Renaming the new table to public.with_ties_table_2
NOTICE: moving the data of public.with_ties_table_2
NOTICE: dropping the old public.with_ties_table_2
NOTICE: renaming the new table to public.with_ties_table_2
undistribute_table
---------------------------------------------------------------------


@@ -829,6 +829,57 @@ DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
ERROR: cannot compute aggregate (distinct)
DETAIL: table partitioning is unsuitable for aggregate (distinct)
/* these are not safe to push down as the partition key index is different */
SELECT COUNT(*) FROM ((SELECT x,y FROM test) UNION ALL (SELECT y,x FROM test)) u;
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for subquery SELECT x, y FROM recursive_union.test
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_2 for subquery SELECT y, x FROM recursive_union.test
DEBUG: Creating router plan
DEBUG: generating subplan XXX_3 for subquery SELECT intermediate_result.x, intermediate_result.y FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(x integer, y integer) UNION ALL SELECT intermediate_result.y, intermediate_result.x FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(y integer, x integer)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.x, intermediate_result.y FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(x integer, y integer)) u
DEBUG: Creating router plan
count
---------------------------------------------------------------------
4
(1 row)
/* this is safe to push down since the partition key index is the same */
SELECT COUNT(*) FROM (SELECT x,y FROM test UNION ALL SELECT x,y FROM test) foo;
DEBUG: Router planner cannot handle multi-shard select queries
count
---------------------------------------------------------------------
4
(1 row)
SELECT COUNT(*) FROM
((SELECT x,y FROM test UNION ALL SELECT x,y FROM test)
UNION ALL
(SELECT x,y FROM test UNION ALL SELECT x,y FROM test)) foo;
DEBUG: Router planner cannot handle multi-shard select queries
count
---------------------------------------------------------------------
8
(1 row)
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT x AS user_id
FROM test
UNION ALL SELECT x AS user_id
FROM test) AS bar
UNION ALL SELECT x AS user_id
FROM test) AS fool LIMIT 1;
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: push down of limit count: 1
count
---------------------------------------------------------------------
6
(1 row)
-- one of the leaves is a repartition join
SET citus.enable_repartition_joins TO ON;
-- repartition is recursively planned before the set operation


@@ -289,9 +289,9 @@ COMMIT;
-- to test citus local tables
select undistribute_table('upsert_test');
NOTICE: creating a new table for single_node.upsert_test
NOTICE: Moving the data of single_node.upsert_test
NOTICE: Dropping the old single_node.upsert_test
NOTICE: Renaming the new table to single_node.upsert_test
NOTICE: moving the data of single_node.upsert_test
NOTICE: dropping the old single_node.upsert_test
NOTICE: renaming the new table to single_node.upsert_test
undistribute_table
---------------------------------------------------------------------
@@ -694,6 +694,76 @@ SELECT * FROM collections_list, collections_list_0 WHERE collections_list.key=co
100 | 0 | 10000 | 100 | 0 | 10000
(1 row)
-- test hash distribution using INSERT with generate_series() function
CREATE OR REPLACE FUNCTION part_hashint4_noop(value int4, seed int8)
RETURNS int8 AS $$
SELECT value + seed;
$$ LANGUAGE SQL IMMUTABLE;
CREATE OPERATOR CLASS part_test_int4_ops
FOR TYPE int4
USING HASH AS
operator 1 =,
function 2 part_hashint4_noop(int4, int8);
CREATE TABLE hash_parted (
a int,
b int
) PARTITION BY HASH (a part_test_int4_ops);
CREATE TABLE hpart0 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 0);
CREATE TABLE hpart1 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 1);
CREATE TABLE hpart2 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 2);
CREATE TABLE hpart3 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 3);
SELECT create_distributed_table('hash_parted ', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO hash_parted VALUES (1, generate_series(1, 10));
SELECT * FROM hash_parted ORDER BY 1, 2;
a | b
---------------------------------------------------------------------
1 | 1
1 | 2
1 | 3
1 | 4
1 | 5
1 | 6
1 | 7
1 | 8
1 | 9
1 | 10
(10 rows)
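Because part_hashint4_noop simply returns value + seed, the partition each row lands in follows deterministically from a rather than from a real hash. On a plain (non-distributed) hash-partitioned table built the same way, the routing can be inspected via tableoid; a sketch:
SELECT tableoid::regclass AS partition, a, b
FROM hash_parted
ORDER BY a, b;
-- on the distributed table this reports worker-side shard partitions instead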
ALTER TABLE hash_parted DETACH PARTITION hpart0;
ALTER TABLE hash_parted DETACH PARTITION hpart1;
ALTER TABLE hash_parted DETACH PARTITION hpart2;
ALTER TABLE hash_parted DETACH PARTITION hpart3;
-- test a range-partitioned table without creating partitions and inserting with generate_series()
-- should error out even in plain PG since no partition of relation "parent_tab" is found for the row
-- in Citus it errors out because it fails to evaluate the partition key in the insert
CREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);
SELECT create_distributed_table('parent_tab', 'id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO parent_tab VALUES (generate_series(0, 3));
ERROR: failed to evaluate partition key in insert
HINT: try using constant values for partition column
-- now it should work
CREATE TABLE parent_tab_1_2 PARTITION OF parent_tab FOR VALUES FROM (1) to (2);
ALTER TABLE parent_tab ADD COLUMN b int;
INSERT INTO parent_tab VALUES (1, generate_series(0, 3));
SELECT * FROM parent_tab ORDER BY 1, 2;
id | b
---------------------------------------------------------------------
1 | 0
1 | 1
1 | 2
1 | 3
(4 rows)
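The second INSERT succeeds because the distribution column id is now the constant 1, which Citus can route, while generate_series() only populates the non-distribution column b. Following the earlier HINT, an all-constant equivalent would be, as a sketch:
INSERT INTO parent_tab VALUES (1, 0), (1, 1), (1, 2), (1, 3);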
-- make sure that parallel accesses work as expected
SET citus.force_max_query_parallelization TO ON;
SELECT * FROM test_2 ORDER BY 1 DESC;
@@ -1125,9 +1195,9 @@ RESET citus.task_executor_type;
ALTER TABLE test DROP CONSTRAINT foreign_key;
SELECT undistribute_table('test_2');
NOTICE: creating a new table for single_node.test_2
NOTICE: Moving the data of single_node.test_2
NOTICE: Dropping the old single_node.test_2
NOTICE: Renaming the new table to single_node.test_2
NOTICE: moving the data of single_node.test_2
NOTICE: dropping the old single_node.test_2
NOTICE: renaming the new table to single_node.test_2
undistribute_table
---------------------------------------------------------------------
@@ -1176,28 +1246,28 @@ ALTER TABLE partitioned_table_1 ADD CONSTRAINT fkey_5 FOREIGN KEY (col_1) REFERE
SELECT undistribute_table('partitioned_table_1', cascade_via_foreign_keys=>true);
NOTICE: converting the partitions of single_node.partitioned_table_1
NOTICE: creating a new table for single_node.partitioned_table_1_100_200
NOTICE: Moving the data of single_node.partitioned_table_1_100_200
NOTICE: Dropping the old single_node.partitioned_table_1_100_200
NOTICE: Renaming the new table to single_node.partitioned_table_1_100_200
NOTICE: moving the data of single_node.partitioned_table_1_100_200
NOTICE: dropping the old single_node.partitioned_table_1_100_200
NOTICE: renaming the new table to single_node.partitioned_table_1_100_200
NOTICE: creating a new table for single_node.partitioned_table_1_200_300
NOTICE: Moving the data of single_node.partitioned_table_1_200_300
NOTICE: Dropping the old single_node.partitioned_table_1_200_300
NOTICE: Renaming the new table to single_node.partitioned_table_1_200_300
NOTICE: moving the data of single_node.partitioned_table_1_200_300
NOTICE: dropping the old single_node.partitioned_table_1_200_300
NOTICE: renaming the new table to single_node.partitioned_table_1_200_300
NOTICE: creating a new table for single_node.partitioned_table_1
NOTICE: Dropping the old single_node.partitioned_table_1
NOTICE: Renaming the new table to single_node.partitioned_table_1
NOTICE: dropping the old single_node.partitioned_table_1
NOTICE: renaming the new table to single_node.partitioned_table_1
NOTICE: creating a new table for single_node.reference_table_1
NOTICE: Moving the data of single_node.reference_table_1
NOTICE: Dropping the old single_node.reference_table_1
NOTICE: Renaming the new table to single_node.reference_table_1
NOTICE: moving the data of single_node.reference_table_1
NOTICE: dropping the old single_node.reference_table_1
NOTICE: renaming the new table to single_node.reference_table_1
NOTICE: creating a new table for single_node.distributed_table_1
NOTICE: Moving the data of single_node.distributed_table_1
NOTICE: Dropping the old single_node.distributed_table_1
NOTICE: Renaming the new table to single_node.distributed_table_1
NOTICE: moving the data of single_node.distributed_table_1
NOTICE: dropping the old single_node.distributed_table_1
NOTICE: renaming the new table to single_node.distributed_table_1
NOTICE: creating a new table for single_node.citus_local_table_1
NOTICE: Moving the data of single_node.citus_local_table_1
NOTICE: Dropping the old single_node.citus_local_table_1
NOTICE: Renaming the new table to single_node.citus_local_table_1
NOTICE: moving the data of single_node.citus_local_table_1
NOTICE: dropping the old single_node.citus_local_table_1
NOTICE: renaming the new table to single_node.citus_local_table_1
undistribute_table
---------------------------------------------------------------------


@@ -35,15 +35,15 @@ SELECT * FROM dist_table ORDER BY 1, 2, 3;
-- the name->OID conversion happens at parse time.
SELECT undistribute_table('dist_table'), create_distributed_table('dist_table', 'a');
NOTICE: creating a new table for undistribute_table.dist_table
NOTICE: Moving the data of undistribute_table.dist_table
NOTICE: Dropping the old undistribute_table.dist_table
NOTICE: Renaming the new table to undistribute_table.dist_table
NOTICE: moving the data of undistribute_table.dist_table
NOTICE: dropping the old undistribute_table.dist_table
NOTICE: renaming the new table to undistribute_table.dist_table
ERROR: relation with OID XXXX does not exist
SELECT undistribute_table('dist_table');
NOTICE: creating a new table for undistribute_table.dist_table
NOTICE: Moving the data of undistribute_table.dist_table
NOTICE: Dropping the old undistribute_table.dist_table
NOTICE: Renaming the new table to undistribute_table.dist_table
NOTICE: moving the data of undistribute_table.dist_table
NOTICE: dropping the old undistribute_table.dist_table
NOTICE: renaming the new table to undistribute_table.dist_table
undistribute_table
---------------------------------------------------------------------
@@ -88,9 +88,9 @@ SELECT * FROM pg_indexes WHERE tablename = 'dist_table';
SELECT undistribute_table('dist_table');
NOTICE: creating a new table for undistribute_table.dist_table
NOTICE: Moving the data of undistribute_table.dist_table
NOTICE: Dropping the old undistribute_table.dist_table
NOTICE: Renaming the new table to undistribute_table.dist_table
NOTICE: moving the data of undistribute_table.dist_table
NOTICE: dropping the old undistribute_table.dist_table
NOTICE: renaming the new table to undistribute_table.dist_table
undistribute_table
---------------------------------------------------------------------
@@ -204,16 +204,16 @@ HINT: the parent table is "partitioned_table"
SELECT undistribute_table('partitioned_table');
NOTICE: converting the partitions of undistribute_table.partitioned_table
NOTICE: creating a new table for undistribute_table.partitioned_table_1_5
NOTICE: Moving the data of undistribute_table.partitioned_table_1_5
NOTICE: Dropping the old undistribute_table.partitioned_table_1_5
NOTICE: Renaming the new table to undistribute_table.partitioned_table_1_5
NOTICE: moving the data of undistribute_table.partitioned_table_1_5
NOTICE: dropping the old undistribute_table.partitioned_table_1_5
NOTICE: renaming the new table to undistribute_table.partitioned_table_1_5
NOTICE: creating a new table for undistribute_table.partitioned_table_6_10
NOTICE: Moving the data of undistribute_table.partitioned_table_6_10
NOTICE: Dropping the old undistribute_table.partitioned_table_6_10
NOTICE: Renaming the new table to undistribute_table.partitioned_table_6_10
NOTICE: moving the data of undistribute_table.partitioned_table_6_10
NOTICE: dropping the old undistribute_table.partitioned_table_6_10
NOTICE: renaming the new table to undistribute_table.partitioned_table_6_10
NOTICE: creating a new table for undistribute_table.partitioned_table
NOTICE: Dropping the old undistribute_table.partitioned_table
NOTICE: Renaming the new table to undistribute_table.partitioned_table
NOTICE: dropping the old undistribute_table.partitioned_table
NOTICE: renaming the new table to undistribute_table.partitioned_table
undistribute_table
---------------------------------------------------------------------
@@ -283,9 +283,9 @@ SELECT * FROM seq_table ORDER BY a;
SELECT undistribute_table('seq_table');
NOTICE: creating a new table for undistribute_table.seq_table
NOTICE: Moving the data of undistribute_table.seq_table
NOTICE: Dropping the old undistribute_table.seq_table
NOTICE: Renaming the new table to undistribute_table.seq_table
NOTICE: moving the data of undistribute_table.seq_table
NOTICE: dropping the old undistribute_table.seq_table
NOTICE: renaming the new table to undistribute_table.seq_table
undistribute_table
---------------------------------------------------------------------
@@ -348,14 +348,14 @@ SELECT * FROM another_schema.undis_view3 ORDER BY 1, 2;
SELECT undistribute_table('view_table');
NOTICE: creating a new table for undistribute_table.view_table
NOTICE: Moving the data of undistribute_table.view_table
NOTICE: Dropping the old undistribute_table.view_table
NOTICE: moving the data of undistribute_table.view_table
NOTICE: dropping the old undistribute_table.view_table
NOTICE: drop cascades to 3 other objects
DETAIL: drop cascades to view undis_view1
drop cascades to view undis_view2
drop cascades to view another_schema.undis_view3
CONTEXT: SQL statement "DROP TABLE undistribute_table.view_table CASCADE"
NOTICE: Renaming the new table to undistribute_table.view_table
NOTICE: renaming the new table to undistribute_table.view_table
undistribute_table
---------------------------------------------------------------------


@@ -0,0 +1,293 @@
CREATE SCHEMA union_pushdown;
SET search_path TO union_pushdown;
SET citus.shard_count TO 4;
SET citus.shard_replication_factor TO 1;
CREATE TABLE users_table_part(user_id bigint, value_1 int, value_2 int) PARTITION BY RANGE (value_1);
CREATE TABLE users_table_part_0 PARTITION OF users_table_part FOR VALUES FROM (0) TO (1);
CREATE TABLE users_table_part_1 PARTITION OF users_table_part FOR VALUES FROM (1) TO (2);
CREATE TABLE users_table_part_2 PARTITION OF users_table_part FOR VALUES FROM (2) TO (3);
CREATE TABLE users_table_part_3 PARTITION OF users_table_part FOR VALUES FROM (3) TO (4);
CREATE TABLE users_table_part_4 PARTITION OF users_table_part FOR VALUES FROM (4) TO (5);
CREATE TABLE users_table_part_5 PARTITION OF users_table_part FOR VALUES FROM (5) TO (6);
CREATE TABLE users_table_part_6 PARTITION OF users_table_part FOR VALUES FROM (6) TO (7);
CREATE TABLE users_table_part_7 PARTITION OF users_table_part FOR VALUES FROM (7) TO (8);
CREATE TABLE users_table_part_8 PARTITION OF users_table_part FOR VALUES FROM (8) TO (9);
SELECT create_distributed_table('users_table_part', 'user_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO users_table_part SELECT i, i % 9, i % 50 FROM generate_series(0, 100) i;
CREATE TABLE events_table_part(user_id bigint, value_1 int, value_2 int) PARTITION BY RANGE (value_1);
CREATE TABLE events_table_part_0 PARTITION OF events_table_part FOR VALUES FROM (0) TO (1);
CREATE TABLE events_table_part_1 PARTITION OF events_table_part FOR VALUES FROM (1) TO (2);
CREATE TABLE events_table_part_2 PARTITION OF events_table_part FOR VALUES FROM (2) TO (3);
CREATE TABLE events_table_part_3 PARTITION OF events_table_part FOR VALUES FROM (3) TO (4);
CREATE TABLE events_table_part_4 PARTITION OF events_table_part FOR VALUES FROM (4) TO (5);
CREATE TABLE events_table_part_5 PARTITION OF events_table_part FOR VALUES FROM (5) TO (6);
CREATE TABLE events_table_part_6 PARTITION OF events_table_part FOR VALUES FROM (6) TO (7);
CREATE TABLE events_table_part_7 PARTITION OF events_table_part FOR VALUES FROM (7) TO (8);
CREATE TABLE events_table_part_8 PARTITION OF events_table_part FOR VALUES FROM (8) TO (9);
SELECT create_distributed_table('events_table_part', 'user_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO events_table_part SELECT i, i % 9, i % 50 FROM generate_series(0, 100) i;
set client_min_messages to DEBUG1;
-- a union all query with 2 different levels of UNION ALL
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id
FROM users_table_part
UNION ALL SELECT user_id AS user_id
FROM users_table_part) AS bar
UNION ALL SELECT user_id AS user_id
FROM users_table_part) AS fool LIMIT 1;
DEBUG: push down of limit count: 1
count
---------------------------------------------------------------------
303
(1 row)
-- a union [all] query with 2 different levels of UNION [ALL]
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id
FROM users_table_part
UNION ALL SELECT user_id AS user_id
FROM users_table_part) AS bar
UNION SELECT user_id AS user_id
FROM users_table_part) AS fool LIMIT 1;
DEBUG: push down of limit count: 1
count
---------------------------------------------------------------------
101
(1 row)
-- a union all query with several levels and leaf queries
SELECT DISTINCT user_id
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT DISTINCT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 8)) AS bar
ORDER BY 1 LIMIT 1;
DEBUG: push down of limit count: 1
user_id
---------------------------------------------------------------------
1
(1 row)
-- a union all query with several levels and leaf queries
-- on the partition tables
SELECT DISTINCT user_id
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part_2 WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part_3 WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT DISTINCT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part_5 WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part_4 WHERE value_1 = 8)) AS bar
ORDER BY 1 LIMIT 1;
DEBUG: push down of limit count: 1
user_id
---------------------------------------------------------------------
1
(1 row)
-- a union all query with a combine query on the coordinator
-- can still be pushed down
SELECT COUNT(DISTINCT user_id)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 8)) AS bar;
count
---------------------------------------------------------------------
89
(1 row)
-- a union all query with ORDER BY LIMIT
SELECT COUNT(user_id)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 8)) AS bar
ORDER BY 1 DESC LIMIT 10;
DEBUG: push down of limit count: 10
count
---------------------------------------------------------------------
89
(1 row)
-- a union all query where leaf queries have JOINs on distribution keys
-- can be pushed down
SELECT COUNT(user_id)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 8 GROUP BY user_id)) AS bar
ORDER BY 1 DESC LIMIT 10;
DEBUG: push down of limit count: 10
count
---------------------------------------------------------------------
89
(1 row)
-- a union all query deep down inside a subquery can still be pushed down
SELECT COUNT(user_id) FROM (
SELECT user_id, random() FROM (
SELECT user_id, random() FROM (
SELECT user_id, random()
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT DISTINCT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 7 AND events_table_part.user_id IN (SELECT user_id FROM users_table_part WHERE users_table_part.value_2 = 3 AND events_table_part.user_id IN (SELECT user_id FROM users_table_part WHERE value_2 = 3))) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 8 GROUP BY user_id)) AS bar
WHERE user_id < 2000 ) as level_1 ) as level_2 ) as level_3
ORDER BY 1 DESC LIMIT 10;
DEBUG: push down of limit count: 10
count
---------------------------------------------------------------------
78
(1 row)
-- safe to push down
SELECT DISTINCT user_id FROM (
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
UNION ALL
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
) as foo1 ORDER BY 1 LIMIT 1;
DEBUG: push down of limit count: 1
user_id
---------------------------------------------------------------------
0
(1 row)
-- safe to push down
SELECT DISTINCT user_id FROM (
SELECT * FROM (
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
UNION ALL
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)) as bar
) as foo1 ORDER BY 1 LIMIT 1;
DEBUG: push down of limit count: 1
user_id
---------------------------------------------------------------------
0
(1 row)
-- safe to push down
SELECT DISTINCT user_id FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
ORDER BY 1 LIMIT 1;
DEBUG: push down of limit count: 1
user_id
---------------------------------------------------------------------
0
(1 row)
RESET client_min_messages;
DROP SCHEMA union_pushdown CASCADE;
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to table users_table_part
drop cascades to table events_table_part

View File

@@ -822,6 +822,69 @@ EXECUTE olu(1,ARRAY[1,2],ARRAY[1,2]);
{1} | {1} | {NULL}
(1 row)
-- test insert query with insert CTE
WITH insert_cte AS
(INSERT INTO with_modifying.modify_table VALUES (23, 7))
INSERT INTO with_modifying.anchor_table VALUES (1998);
SELECT * FROM with_modifying.modify_table WHERE id = 23 AND val = 7;
id | val
---------------------------------------------------------------------
23 | 7
(1 row)
SELECT * FROM with_modifying.anchor_table WHERE id = 1998;
id
---------------------------------------------------------------------
1998
(1 row)
-- test insert query with multiple CTEs
WITH select_cte AS (SELECT * FROM with_modifying.anchor_table),
modifying_cte AS (INSERT INTO with_modifying.anchor_table SELECT * FROM select_cte)
INSERT INTO with_modifying.anchor_table VALUES (1995);
SELECT * FROM with_modifying.anchor_table ORDER BY 1;
id
---------------------------------------------------------------------
1
1
2
2
1995
1998
1998
(7 rows)
-- test with returning
WITH returning_cte AS (INSERT INTO with_modifying.anchor_table values (1997) RETURNING *)
INSERT INTO with_modifying.anchor_table VALUES (1996);
SELECT * FROM with_modifying.anchor_table WHERE id IN (1996, 1997) ORDER BY 1;
id
---------------------------------------------------------------------
1996
1997
(2 rows)
-- test insert query with select CTE
WITH select_cte AS
(SELECT * FROM with_modifying.modify_table)
INSERT INTO with_modifying.anchor_table VALUES (1990);
SELECT * FROM with_modifying.anchor_table WHERE id = 1990;
id
---------------------------------------------------------------------
1990
(1 row)
-- even if we do a multi-row insert, it is not fast path router due to the CTE
WITH select_cte AS (SELECT 1 AS col)
INSERT INTO with_modifying.anchor_table VALUES (1991), (1992);
SELECT * FROM with_modifying.anchor_table WHERE id IN (1991, 1992) ORDER BY 1;
id
---------------------------------------------------------------------
1991
1992
(2 rows)
DELETE FROM with_modifying.anchor_table WHERE id IN (1990, 1991, 1992, 1995, 1996, 1997, 1998);
-- Test with replication factor 2
SET citus.shard_replication_factor to 2;
DROP TABLE modify_table;

View File

@@ -72,7 +72,7 @@ test: multi_create_fdw
# ----------
# Tests for recursive subquery planning
# ----------
# NOTE: The next 6 were in parallel originally, but we got "too many
# NOTE: The next 7 were in parallel originally, but we got "too many
# connection" errors on CI. Requires investigation before doing them in
# parallel again.
test: subquery_basics
@@ -80,6 +80,7 @@ test: subquery_local_tables
test: subquery_executors
test: subquery_and_cte
test: set_operations
test: union_pushdown
test: set_operation_and_local_tables
test: subqueries_deep subquery_view subquery_partitioning subqueries_not_supported
@@ -96,6 +97,11 @@ test: tableam
test: propagate_statistics
test: pg13_propagate_statistics
# ----------
# Test for updating table statistics
# ----------
test: citus_update_table_statistics
# ----------
# Miscellaneous tests to check our query planning behavior
# ----------

View File

@@ -280,5 +280,33 @@ CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_distributed_table('mat_view_test', shard_count := 5, cascade_to_colocated := false);
SELECT * FROM mat_view ORDER BY a;
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
SELECT alter_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', distribution_column := 'y');
RESET client_min_messages;
-- test long partitioned table names
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL,
inserted_utc timestamp without time zone NOT NULL DEFAULT now()
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
CREATE TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
-- verify alter_distributed_table works with long partition names
SELECT alter_distributed_table('partition_lengths', shard_count := 29, cascade_to_colocated := false);
-- test long partition table names
ALTER TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths_p2020_09_28;
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
-- verify alter_distributed_table works with long partitioned table names
SELECT alter_distributed_table('partition_lengths_12345678901234567890123456789012345678901234567890', shard_count := 17, cascade_to_colocated := false);
SET client_min_messages TO WARNING;
DROP SCHEMA alter_distributed_table CASCADE;

View File

@@ -213,6 +213,14 @@ CREATE TABLE identity_cols_test (a int, b int generated by default as identity (
-- errors out since we don't support alter_table.* udfs with tables having any identity columns
SELECT alter_table_set_access_method('identity_cols_test', 'columnar');
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
SELECT alter_table_set_access_method('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'columnar');
SELECT * FROM abcde_0123456789012345678901234567890123456789012345678901234567890123456789;
RESET client_min_messages;
SET client_min_messages TO WARNING;
DROP SCHEMA alter_table_set_access_method CASCADE;
SELECT 1 FROM master_remove_node('localhost', :master_port);

View File

@@ -0,0 +1,107 @@
--
-- citus_update_table_statistics.sql
--
-- Test citus_update_table_statistics function on both
-- hash and append distributed tables
-- This function updates shardlength, shardminvalue and shardmaxvalue
--
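-- The updated values land in the pg_dist_shard and pg_dist_shard_placement
-- catalogs, which the verification queries below join on shardid.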
SET citus.next_shard_id TO 981000;
SET citus.next_placement_id TO 982000;
SET citus.shard_count TO 8;
SET citus.shard_replication_factor TO 2;
-- test with a hash-distributed table
-- here we update only shardlength, not shardminvalue and shardmaxvalue
CREATE TABLE test_table_statistics_hash (id int);
SELECT create_distributed_table('test_table_statistics_hash', 'id');
-- populate table
INSERT INTO test_table_statistics_hash SELECT i FROM generate_series(0, 10000) i;
-- originally shardlength (size of the shard) is zero
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue AS shardminvalue,
ds.shardmaxvalue AS shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_hash') AND dsp.shardlength = 0
ORDER BY 2, 3;
-- setting this to on in order to verify that we use a distributed transaction id
-- to run the size queries from different connections
-- this helps us detect distributed deadlocks
SET citus.log_remote_commands TO ON;
-- setting this to sequential in order to have a deterministic order
-- in the output of citus.log_remote_commands
SET citus.multi_shard_modify_mode TO sequential;
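-- (in sequential mode the per-shard commands run one at a time over a single
-- connection per worker, which keeps the logged command order stable)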
-- update table statistics and then check that shardlength has changed
-- but shardminvalue and shardmaxvalue stay the same because this is
-- a hash distributed table
SELECT citus_update_table_statistics('test_table_statistics_hash');
RESET citus.log_remote_commands;
RESET citus.multi_shard_modify_mode;
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_hash') AND dsp.shardlength > 0
ORDER BY 2, 3;
-- check with an append-distributed table
-- here we update shardlength, shardminvalue and shardmaxvalue
CREATE TABLE test_table_statistics_append (id int);
SELECT create_distributed_table('test_table_statistics_append', 'id', 'append');
COPY test_table_statistics_append FROM PROGRAM 'echo 0 && echo 1 && echo 2 && echo 3' WITH CSV;
COPY test_table_statistics_append FROM PROGRAM 'echo 4 && echo 5 && echo 6 && echo 7' WITH CSV;
-- each COPY above created a new shard; initially the (shardminvalue, shardmaxvalue)
-- pairs are (0,3) and (4,7)
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_append')
ORDER BY 2, 3;
-- delete some data to change the shardminvalue of the shards
DELETE FROM test_table_statistics_append WHERE id = 0 OR id = 4;
SET citus.log_remote_commands TO ON;
SET citus.multi_shard_modify_mode TO sequential;
-- update table statistics and then check that shardminvalue has changed
-- shardlength (shard size) is still 8192 since there is very little data
SELECT citus_update_table_statistics('test_table_statistics_append');
RESET citus.log_remote_commands;
RESET citus.multi_shard_modify_mode;
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_append')
ORDER BY 2, 3;
DROP TABLE test_table_statistics_hash, test_table_statistics_append;
ALTER SYSTEM RESET citus.shard_count;
ALTER SYSTEM RESET citus.shard_replication_factor;

View File

@@ -128,4 +128,20 @@ FROM
LEFT JOIN users_table USING (user_id)
ORDER BY 1,2,3,4 LIMIT 5;
-- we don't support cross JOINs between distributed tables
-- when no target list entries reference one side of the join
CREATE TABLE dist1(c0 int);
CREATE TABLE dist2(c0 int);
CREATE TABLE dist3(c0 int , c1 int);
CREATE TABLE dist4(c0 int , c1 int);
SELECT create_distributed_table('dist1', 'c0');
SELECT create_distributed_table('dist2', 'c0');
SELECT create_distributed_table('dist3', 'c1');
SELECT create_distributed_table('dist4', 'c1');
SELECT dist2.c0 FROM dist1, dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
SELECT 1 FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
SELECT FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
SELECT dist2.c0 FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
SELECT dist2.* FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);

View File

@@ -198,6 +198,16 @@ SELECT * FROM print_extension_changes();
ALTER EXTENSION citus UPDATE TO '10.0-1';
SELECT * FROM print_extension_changes();
-- Test downgrade to 10.0-1 from 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
ALTER EXTENSION citus UPDATE TO '10.0-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
-- Snapshot of state at 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
SELECT * FROM print_extension_changes();
DROP TABLE prev_objects, extension_diff;
-- show running version

View File

@@ -123,9 +123,9 @@ CREATE FOREIGN TABLE foreign_table (
) SERVER fake_fdw_server OPTIONS (encoding 'utf-8', compression 'true');
SELECT create_distributed_table('foreign_table', 'id');
ALTER FOREIGN TABLE foreign_table rename to renamed_foreign_table;
ALTER FOREIGN TABLE renamed_foreign_table rename full_name to rename_name;
ALTER FOREIGN TABLE renamed_foreign_table alter rename_name type char(8);
ALTER FOREIGN TABLE foreign_table rename to renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890;
ALTER FOREIGN TABLE renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890 rename full_name to rename_name;
ALTER FOREIGN TABLE renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890 alter rename_name type char(8);
\c - - :public_worker_1_host :worker_1_port
select table_name, column_name, data_type
from information_schema.columns
@@ -133,7 +133,7 @@ where table_schema='public' and table_name like 'renamed_foreign_table_%' and co
order by table_name;
\c - - :master_host :master_port
SELECT master_get_table_ddl_events('renamed_foreign_table');
SELECT master_get_table_ddl_events('renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890');
-- propagating views is not supported
CREATE VIEW local_view AS SELECT * FROM simple_table;
@@ -142,7 +142,7 @@ SELECT master_get_table_ddl_events('local_view');
-- clean up
DROP VIEW IF EXISTS local_view;
DROP FOREIGN TABLE IF EXISTS renamed_foreign_table;
DROP FOREIGN TABLE IF EXISTS renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890;
\c - - :public_worker_1_host :worker_1_port
select table_name, column_name, data_type
from information_schema.columns

View File

@@ -155,6 +155,39 @@ SELECT lock_relation_if_exists('test', 'ACCESS SHARE');
SELECT lock_relation_if_exists('test', 'EXCLUSIVE');
ABORT;
-- test creating columnar tables and accessing columnar metadata tables via an unprivileged user
-- none of the 5 commands below should throw a permission error
-- read columnar metadata table
SELECT * FROM columnar.stripe;
-- alter a columnar setting
SET columnar.chunk_group_row_limit = 1050;
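-- CREATE TABLE ... USING columnar relies on the table access method API,
-- which requires PostgreSQL 12+, hence the version gate in the DO block below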
DO $proc$
BEGIN
IF substring(current_setting('server_version'), '\d+')::int >= 12 THEN
EXECUTE $$
-- create columnar table
CREATE TABLE columnar_table (a int) USING columnar;
-- alter a columnar table that is created by that unprivileged user
SELECT alter_columnar_table_set('columnar_table', chunk_group_row_limit => 100);
-- and drop it
DROP TABLE columnar_table;
$$;
END IF;
END$proc$;
-- cannot modify columnar metadata table as unprivileged user
INSERT INTO columnar.stripe VALUES(99);
-- Cannot drop a columnar metadata table as an unprivileged user.
-- A privileged user cannot drop it either, but gets a different error message
-- (since the citus extension has a dependency on it).
DROP TABLE columnar.chunk;
-- test whether a read-only user can read from citus_tables view
SELECT distribution_column FROM citus_tables WHERE table_name = 'test'::regclass;
-- check no permission
SET ROLE no_access;

View File

@@ -1,6 +1,8 @@
CREATE SCHEMA mx_alter_distributed_table;
SET search_path TO mx_alter_distributed_table;
SET citus.shard_replication_factor TO 1;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1410000;
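-- restarting the colocation id sequence keeps the colocation ids in the
-- expected output deterministic across test runs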
SET citus.replication_model TO 'streaming';
-- test alter_distributed_table UDF
CREATE TABLE adt_table (a INT, b INT);
@@ -48,5 +50,114 @@ END;
SELECT table_name, citus_table_type, distribution_column, shard_count FROM public.citus_tables WHERE table_name::text = 'adt_table';
-- test procedure colocation is preserved with alter_distributed_table
CREATE TABLE test_proc_colocation_0 (a float8);
SELECT create_distributed_table('test_proc_colocation_0', 'a');
CREATE OR REPLACE procedure proc_0(dist_key float8)
LANGUAGE plpgsql
AS $$
DECLARE
res INT := 0;
BEGIN
INSERT INTO test_proc_colocation_0 VALUES (dist_key);
SELECT count(*) INTO res FROM test_proc_colocation_0;
RAISE NOTICE 'Res: %', res;
COMMIT;
END;$$;
SELECT create_distributed_function('proc_0(float8)', 'dist_key', 'test_proc_colocation_0' );
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
RESET client_min_messages;
-- shardCount is not null && list_length(colocatedTableList) = 1
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8);
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
-- colocatewith is not null && list_length(colocatedTableList) = 1
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 4);
CREATE TABLE test_proc_colocation_1 (a float8);
SELECT create_distributed_table('test_proc_colocation_1', 'a', colocate_with := 'none');
SELECT alter_distributed_table('test_proc_colocation_0', colocate_with := 'test_proc_colocation_1');
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
-- shardCount is not null && cascade_to_colocated is true
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8, cascade_to_colocated := true);
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
-- colocatewith is not null && cascade_to_colocated is true
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 4, cascade_to_colocated := true);
CREATE TABLE test_proc_colocation_2 (a float8);
SELECT create_distributed_table('test_proc_colocation_2', 'a', colocate_with := 'none');
SELECT alter_distributed_table('test_proc_colocation_0', colocate_with := 'test_proc_colocation_2', cascade_to_colocated := true);
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
-- try a case with more than one procedure
CREATE OR REPLACE procedure proc_1(dist_key float8)
LANGUAGE plpgsql
AS $$
DECLARE
res INT := 0;
BEGIN
INSERT INTO test_proc_colocation_0 VALUES (dist_key);
SELECT count(*) INTO res FROM test_proc_colocation_0;
RAISE NOTICE 'Res: %', res;
COMMIT;
END;$$;
SELECT create_distributed_function('proc_1(float8)', 'dist_key', 'test_proc_colocation_0' );
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
CALL proc_1(2.0);
RESET client_min_messages;
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8, cascade_to_colocated := true);
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
CALL proc_1(2.0);
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
-- case which shouldn't preserve colocation for now
-- shardCount is not null && cascade_to_colocated is false
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 18, cascade_to_colocated := false);
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
SET client_min_messages TO WARNING;
DROP SCHEMA mx_alter_distributed_table CASCADE;

View File

@@ -84,13 +84,89 @@ ALTER TABLE name_lengths ADD CONSTRAINT nl_checky_123456789012345678901234567890
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.name_lengths_225002'::regclass ORDER BY 1 DESC, 2 DESC;
\c - - :master_host :master_port
-- Placeholders for RENAME operations
\set VERBOSITY TERSE
-- Rename the table to a too-long name
SET client_min_messages TO DEBUG1;
SET citus.force_max_query_parallelization TO ON;
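-- force_max_query_parallelization makes Citus open a separate connection per
-- shard placement instead of reusing connections, so the renames below run
-- against maximally parallel shard access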
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
SELECT * FROM name_len_12345678901234567890123456789012345678901234567890;
ALTER TABLE name_len_12345678901234567890123456789012345678901234567890 RENAME TO name_lengths;
SELECT * FROM name_lengths;
-- Test renames on zero shard distributed tables
CREATE TABLE append_zero_shard_table (a int);
SELECT create_distributed_table('append_zero_shard_table', 'a', 'append');
ALTER TABLE append_zero_shard_table rename TO append_zero_shard_table_12345678901234567890123456789012345678901234567890;
-- Verify that we do not support long renames after parallel queries are executed in a transaction block
BEGIN;
ALTER TABLE name_lengths rename col1 to new_column_name;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
ROLLBACK;
-- The same operation will work when sequential mode is set
BEGIN;
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
ALTER TABLE name_lengths rename col1 to new_column_name;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
ROLLBACK;
RESET client_min_messages;
-- test long partitioned table renames
SET citus.shard_replication_factor TO 1;
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
CREATE TABLE partition_lengths_p2020_09_28 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
-- verify that we can rename partitioned tables and partitions to too-long names
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
-- verify that we can rename partitioned tables and partitions with too-long names
ALTER TABLE partition_lengths_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths;
-- Placeholders for unsupported operations
\set VERBOSITY TERSE
-- renaming distributed table partitions
ALTER TABLE partition_lengths_p2020_09_28 RENAME TO partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890;
-- creating or attaching new partitions with long names creates deadlocks
CREATE TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890 (LIKE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890);
ALTER TABLE partition_lengths
ATTACH PARTITION partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890
FOR VALUES FROM ('2020-09-29 00:00:00') TO ('2020-09-30 00:00:00');
CREATE TABLE partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890
PARTITION OF partition_lengths
FOR VALUES FROM ('2020-09-30 00:00:00') TO ('2020-10-01 00:00:00');
DROP TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890;
-- creating or attaching new partitions with long names works when using sequential shard modify mode
BEGIN;
SET LOCAL citus.multi_shard_modify_mode = sequential;
CREATE TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890 (LIKE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890);
ALTER TABLE partition_lengths
ATTACH PARTITION partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890
FOR VALUES FROM ('2020-09-29 00:00:00') TO ('2020-09-30 00:00:00');
CREATE TABLE partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890
PARTITION OF partition_lengths
FOR VALUES FROM ('2020-09-30 00:00:00') TO ('2020-10-01 00:00:00');
ROLLBACK;
-- renaming distributed table constraints is not supported
ALTER TABLE name_lengths RENAME CONSTRAINT unique_12345678901234567890123456789012345678901234567890 TO unique2_12345678901234567890123456789012345678901234567890;
DROP TABLE partition_lengths CASCADE;
\set VERBOSITY DEFAULT
-- Verify that we can create indexes with very long names on zero shard tables.
CREATE INDEX append_zero_shard_table_idx_12345678901234567890123456789012345678901234567890 ON append_zero_shard_table_12345678901234567890123456789012345678901234567890(a);
-- Verify that CREATE INDEX on already distributed table has proper shard names.
CREATE INDEX tmp_idx_12345678901234567890123456789012345678901234567890 ON name_lengths(col2);
@@ -104,6 +180,9 @@ SELECT "relname", "Column", "Type", "Definition" FROM index_attrs WHERE
-- by the parser/rewriter before further processing, just as in Postgres.
CREATE INDEX tmp_idx_123456789012345678901234567890123456789012345678901234567890 ON name_lengths(col2);
-- Verify we can rename indexes with long names
ALTER INDEX tmp_idx_123456789012345678901234567890123456789012345678901234567890 RENAME TO tmp_idx_newname_123456789012345678901234567890123456789012345678901234567890;
\c - - :public_worker_1_host :worker_1_port
SELECT "relname", "Column", "Type", "Definition" FROM index_attrs WHERE
relname LIKE 'tmp_idx_%' ORDER BY 1 DESC, 2 DESC, 3 DESC, 4 DESC;
@@ -136,8 +215,8 @@ SELECT master_create_distributed_table('sneaky_name_lengths', 'int_col_123456789
SELECT master_create_worker_shards('sneaky_name_lengths', '2', '2');
\c - - :public_worker_1_host :worker_1_port
\di public.sneaky*225006
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.sneaky_name_lengths_225006'::regclass ORDER BY 1 DESC, 2 DESC;
\di public.sneaky*225030
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.sneaky_name_lengths_225030'::regclass ORDER BY 1 DESC, 2 DESC;
\c - - :master_host :master_port
SET citus.shard_count TO 2;
@@ -155,7 +234,7 @@ CREATE TABLE sneaky_name_lengths (
SELECT create_distributed_table('sneaky_name_lengths', 'col1', 'hash');
\c - - :public_worker_1_host :worker_1_port
\di unique*225008
\di unique*225032
\c - - :master_host :master_port
SET citus.shard_count TO 2;
@@ -215,3 +294,4 @@ DROP TABLE multi_name_lengths.too_long_12345678901234567890123456789012345678901
-- Clean up.
DROP TABLE name_lengths CASCADE;
DROP TABLE U&"elephant_!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D" UESCAPE '!' CASCADE;
RESET citus.force_max_query_parallelization;

View File

@@ -49,7 +49,7 @@ SET citus.shard_replication_factor TO 1;
SET citus.shard_count TO 2;
SELECT create_distributed_table('articles_hash', 'author_id');
CREATE TABLE authors_reference ( name varchar(20), id bigint );
CREATE TABLE authors_reference (id int, name text);
SELECT create_reference_table('authors_reference');
-- create a bunch of test data
@@ -850,6 +850,10 @@ SELECT count(*) FILTER (where value = 15) FROM collections_list WHERE key = 4;
SELECT count(*) FILTER (where value = 15) FROM collections_list_1 WHERE key = 4;
SELECT count(*) FILTER (where value = 15) FROM collections_list_2 WHERE key = 4;
-- test INSERT using values from generate_series() and repeat() functions
INSERT INTO authors_reference (id, name) VALUES (generate_series(1, 10), repeat('Migjeni', 3));
SELECT * FROM authors_reference ORDER BY 1, 2;
SET client_min_messages to 'NOTICE';
DROP FUNCTION author_articles_max_id();

View File

@@ -4,6 +4,7 @@ CREATE SCHEMA null_parameters;
SET search_path TO null_parameters;
SET citus.next_shard_id TO 1680000;
SET citus.shard_count to 32;
CREATE TABLE text_dist_column (key text, value text);
SELECT create_distributed_table('text_dist_column', 'key');

View File

@@ -383,6 +383,20 @@ ROLLBACK;
RESET citus.replicate_reference_tables_on_activate;
SELECT citus_remove_node('localhost', :master_port);
CREATE TABLE superuser_columnar_table (a int) USING columnar;
CREATE USER read_access;
SET ROLE read_access;
-- user shouldn't be able to execute alter_columnar_table_set
-- or alter_columnar_table_reset for a columnar table that it
-- doesn't own
SELECT alter_columnar_table_set('test_pg12.superuser_columnar_table', chunk_group_row_limit => 100);
SELECT alter_columnar_table_reset('test_pg12.superuser_columnar_table');
RESET ROLE;
DROP USER read_access;
\set VERBOSITY terse
drop schema test_pg12 cascade;
\set VERBOSITY default

View File

@@ -74,6 +74,7 @@ SELECT * FROM ((SELECT x, y FROM test) EXCEPT (SELECT y, x FROM test)) u ORDER B
SELECT * FROM ((SELECT * FROM test) EXCEPT (SELECT * FROM ref)) u ORDER BY 1,2;
SELECT * FROM ((SELECT * FROM ref) EXCEPT (SELECT * FROM ref)) u ORDER BY 1,2;
-- unions can even be pushed down within a join
SELECT * FROM ((SELECT * FROM test) UNION (SELECT * FROM test)) u JOIN test USING (x) ORDER BY 1,2;
SELECT * FROM ((SELECT * FROM test) UNION ALL (SELECT * FROM test)) u LEFT JOIN test USING (x) ORDER BY 1,2;
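-- pushing the union through the join is safe here because every UNION leaf
-- exposes the distribution column x at the same target list position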
@@ -148,6 +149,27 @@ select avg(DISTINCT t.x) FROM ((SELECT avg(DISTINCT y) FROM test GROUP BY x) UNI
-- other agg. distincts are not supported when group by doesn't include partition key
select count(DISTINCT t.x) FROM ((SELECT avg(DISTINCT y) FROM test GROUP BY y) UNION (SELECT avg(DISTINCT y) FROM test GROUP BY y)) as t(x) ORDER BY 1;
/* this is not safe to push down as the partition key index is different */
SELECT COUNT(*) FROM ((SELECT x,y FROM test) UNION ALL (SELECT y,x FROM test)) u;
/* this is safe to push down since the partition key index is the same */
SELECT COUNT(*) FROM (SELECT x,y FROM test UNION ALL SELECT x,y FROM test) foo;
SELECT COUNT(*) FROM
((SELECT x,y FROM test UNION ALL SELECT x,y FROM test)
UNION ALL
(SELECT x,y FROM test UNION ALL SELECT x,y FROM test)) foo;
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT x AS user_id
FROM test
UNION ALL SELECT x AS user_id
FROM test) AS bar
UNION ALL SELECT x AS user_id
FROM test) AS fool LIMIT 1;
-- one of the leaves is a repartition join
SET citus.enable_repartition_joins TO ON;

View File

@@ -409,6 +409,50 @@ SELECT count(*) FROM collections_list_1 WHERE key = 11;
ALTER TABLE collections_list DROP COLUMN ts;
SELECT * FROM collections_list, collections_list_0 WHERE collections_list.key=collections_list_0.key ORDER BY 1 DESC,2 DESC,3 DESC,4 DESC LIMIT 1;
-- test hash distribution using INSERT with generate_series() function
CREATE OR REPLACE FUNCTION part_hashint4_noop(value int4, seed int8)
RETURNS int8 AS $$
SELECT value + seed;
$$ LANGUAGE SQL IMMUTABLE;
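-- value + seed is a deliberately trivial "hash": it makes the row-to-partition
-- assignment predictable, mirroring the operator class trick used in
-- PostgreSQL's own partitioning regression tests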
CREATE OPERATOR CLASS part_test_int4_ops
FOR TYPE int4
USING HASH AS
operator 1 =,
function 2 part_hashint4_noop(int4, int8);
CREATE TABLE hash_parted (
a int,
b int
) PARTITION BY HASH (a part_test_int4_ops);
CREATE TABLE hpart0 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 0);
CREATE TABLE hpart1 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 1);
CREATE TABLE hpart2 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 2);
CREATE TABLE hpart3 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 3);
SELECT create_distributed_table('hash_parted', 'a');
INSERT INTO hash_parted VALUES (1, generate_series(1, 10));
SELECT * FROM hash_parted ORDER BY 1, 2;
ALTER TABLE hash_parted DETACH PARTITION hpart0;
ALTER TABLE hash_parted DETACH PARTITION hpart1;
ALTER TABLE hash_parted DETACH PARTITION hpart2;
ALTER TABLE hash_parted DETACH PARTITION hpart3;
-- test range partition without creating partitions and inserting with generate_series()
-- should error out even in plain PG since no partition of relation "parent_tab" is found for row
-- in Citus it errors out because it fails to evaluate partition key in insert
CREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);
SELECT create_distributed_table('parent_tab', 'id');
INSERT INTO parent_tab VALUES (generate_series(0, 3));
-- now it should work
CREATE TABLE parent_tab_1_2 PARTITION OF parent_tab FOR VALUES FROM (1) to (2);
ALTER TABLE parent_tab ADD COLUMN b int;
INSERT INTO parent_tab VALUES (1, generate_series(0, 3));
SELECT * FROM parent_tab ORDER BY 1, 2;
-- make sure that parallel accesses work correctly
SET citus.force_max_query_parallelization TO ON;
SELECT * FROM test_2 ORDER BY 1 DESC;

View File

@@ -0,0 +1,234 @@
CREATE SCHEMA union_pushdown;
SET search_path TO union_pushdown;
SET citus.shard_count TO 4;
SET citus.shard_replication_factor TO 1;
CREATE TABLE users_table_part(user_id bigint, value_1 int, value_2 int) PARTITION BY RANGE (value_1);
CREATE TABLE users_table_part_0 PARTITION OF users_table_part FOR VALUES FROM (0) TO (1);
CREATE TABLE users_table_part_1 PARTITION OF users_table_part FOR VALUES FROM (1) TO (2);
CREATE TABLE users_table_part_2 PARTITION OF users_table_part FOR VALUES FROM (2) TO (3);
CREATE TABLE users_table_part_3 PARTITION OF users_table_part FOR VALUES FROM (3) TO (4);
CREATE TABLE users_table_part_4 PARTITION OF users_table_part FOR VALUES FROM (4) TO (5);
CREATE TABLE users_table_part_5 PARTITION OF users_table_part FOR VALUES FROM (5) TO (6);
CREATE TABLE users_table_part_6 PARTITION OF users_table_part FOR VALUES FROM (6) TO (7);
CREATE TABLE users_table_part_7 PARTITION OF users_table_part FOR VALUES FROM (7) TO (8);
CREATE TABLE users_table_part_8 PARTITION OF users_table_part FOR VALUES FROM (8) TO (9);
SELECT create_distributed_table('users_table_part', 'user_id');
INSERT INTO users_table_part SELECT i, i % 9, i % 50 FROM generate_series(0, 100) i;
CREATE TABLE events_table_part(user_id bigint, value_1 int, value_2 int) PARTITION BY RANGE (value_1);
CREATE TABLE events_table_part_0 PARTITION OF events_table_part FOR VALUES FROM (0) TO (1);
CREATE TABLE events_table_part_1 PARTITION OF events_table_part FOR VALUES FROM (1) TO (2);
CREATE TABLE events_table_part_2 PARTITION OF events_table_part FOR VALUES FROM (2) TO (3);
CREATE TABLE events_table_part_3 PARTITION OF events_table_part FOR VALUES FROM (3) TO (4);
CREATE TABLE events_table_part_4 PARTITION OF events_table_part FOR VALUES FROM (4) TO (5);
CREATE TABLE events_table_part_5 PARTITION OF events_table_part FOR VALUES FROM (5) TO (6);
CREATE TABLE events_table_part_6 PARTITION OF events_table_part FOR VALUES FROM (6) TO (7);
CREATE TABLE events_table_part_7 PARTITION OF events_table_part FOR VALUES FROM (7) TO (8);
CREATE TABLE events_table_part_8 PARTITION OF events_table_part FOR VALUES FROM (8) TO (9);
SELECT create_distributed_table('events_table_part', 'user_id');
INSERT INTO events_table_part SELECT i, i % 9, i % 50 FROM generate_series(0, 100) i;
set client_min_messages to DEBUG1;
-- a union all query with 2 different levels of UNION ALL
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id
FROM users_table_part
UNION ALL SELECT user_id AS user_id
FROM users_table_part) AS bar
UNION ALL SELECT user_id AS user_id
FROM users_table_part) AS fool LIMIT 1;
-- a union [all] query with 2 different levels of UNION [ALL]
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id
FROM users_table_part
UNION ALL SELECT user_id AS user_id
FROM users_table_part) AS bar
UNION SELECT user_id AS user_id
FROM users_table_part) AS fool LIMIT 1;
-- a union all query with several levels and leaf queries
SELECT DISTINCT user_id
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT DISTINCT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 8)) AS bar
ORDER BY 1 LIMIT 1;
-- a union all query with several levels and leaf queries
-- on the partition tables
SELECT DISTINCT user_id
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part_2 WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part_3 WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT DISTINCT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part_5 WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part_4 WHERE value_1 = 8)) AS bar
ORDER BY 1 LIMIT 1;
-- a union all query with a combine query on the coordinator
-- can still be pushed down
SELECT COUNT(DISTINCT user_id)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 8)) AS bar;
-- a union all query with ORDER BY LIMIT
SELECT COUNT(user_id)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part WHERE value_1 = 8)) AS bar
ORDER BY 1 DESC LIMIT 10;
-- a union all query where leaf queries have JOINs on distribution keys
-- can be pushed down
SELECT COUNT(user_id)
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 7) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 8 GROUP BY user_id)) AS bar
ORDER BY 1 DESC LIMIT 10;
-- a union all query deep down inside a subquery can still be pushed down
SELECT COUNT(user_id) FROM (
SELECT user_id, random() FROM (
SELECT user_id, random() FROM (
SELECT user_id, random()
FROM
(SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 1
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 2) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 3
UNION ALL
SELECT user_id AS user_id
FROM
(SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 4
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 5) AS bar
UNION ALL
(SELECT user_id AS user_id
FROM
(SELECT DISTINCT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 6
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 7 AND events_table_part.user_id IN (SELECT user_id FROM users_table_part WHERE users_table_part.value_2 = 3 AND events_table_part.user_id IN (SELECT user_id FROM users_table_part WHERE value_2 = 3))) AS bar
UNION ALL SELECT user_id AS user_id FROM users_table_part JOIN events_table_part USING (user_id) WHERE users_table_part.value_1 = 8 GROUP BY user_id)) AS bar
WHERE user_id < 2000 ) as level_1 ) as level_2 ) as level_3
ORDER BY 1 DESC LIMIT 10;
-- safe to push down
SELECT DISTINCT user_id FROM (
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
UNION ALL
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
) as foo1 ORDER BY 1 LIMIT 1;
-- safe to push down
SELECT DISTINCT user_id FROM (
SELECT * FROM (
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
UNION ALL
SELECT * FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)) as bar
) as foo1 ORDER BY 1 LIMIT 1;
-- safe to push down
SELECT DISTINCT user_id FROM
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as foo
JOIN
(SELECT user_id FROM users_table_part UNION ALL SELECT user_id FROM users_table_part) as bar
USING (user_id)
ORDER BY 1 LIMIT 1;
RESET client_min_messages;
DROP SCHEMA union_pushdown CASCADE;

View File

@@ -491,6 +491,37 @@ EXECUTE olu(1,ARRAY[1,2],ARRAY[1,2]);
EXECUTE olu(1,ARRAY[1,2],ARRAY[1,2]);
EXECUTE olu(1,ARRAY[1,2],ARRAY[1,2]);
-- test insert query with insert CTE
WITH insert_cte AS
(INSERT INTO with_modifying.modify_table VALUES (23, 7))
INSERT INTO with_modifying.anchor_table VALUES (1998);
SELECT * FROM with_modifying.modify_table WHERE id = 23 AND val = 7;
SELECT * FROM with_modifying.anchor_table WHERE id = 1998;
-- test insert query with multiple CTEs
WITH select_cte AS (SELECT * FROM with_modifying.anchor_table),
modifying_cte AS (INSERT INTO with_modifying.anchor_table SELECT * FROM select_cte)
INSERT INTO with_modifying.anchor_table VALUES (1995);
SELECT * FROM with_modifying.anchor_table ORDER BY 1;
-- test with returning
WITH returning_cte AS (INSERT INTO with_modifying.anchor_table values (1997) RETURNING *)
INSERT INTO with_modifying.anchor_table VALUES (1996);
SELECT * FROM with_modifying.anchor_table WHERE id IN (1996, 1997) ORDER BY 1;
-- test insert query with select CTE
WITH select_cte AS
(SELECT * FROM with_modifying.modify_table)
INSERT INTO with_modifying.anchor_table VALUES (1990);
SELECT * FROM with_modifying.anchor_table WHERE id = 1990;
-- even if we do a multi-row insert, it is not fast path router due to the CTE
WITH select_cte AS (SELECT 1 AS col)
INSERT INTO with_modifying.anchor_table VALUES (1991), (1992);
SELECT * FROM with_modifying.anchor_table WHERE id IN (1991, 1992) ORDER BY 1;
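-- (without the CTE, the same multi-row INSERT would be eligible for the
-- fast path router)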
DELETE FROM with_modifying.anchor_table WHERE id IN (1990, 1991, 1992, 1995, 1996, 1997, 1998);
-- Test with replication factor 2
SET citus.shard_replication_factor to 2;