Compare commits

...

41 Commits

Author SHA1 Message Date
Sait Talha Nisanci fcb932268a Bump version to 10.0.3 2021-03-17 18:02:01 +03:00
Sait Talha Nisanci 1200c8fd1c Update CHANGELOG for 10.0.3
(cherry picked from commit 92130ae2a2)
2021-03-17 18:01:57 +03:00
Önder Kalacı 0237d826d5 Make sure that single task local executions start coordinated transaction (#4831)
With https://github.com/citusdata/citus/pull/4806 we enabled
2PC for any non-read-only local task. However, if the execution
is a single task, enabling 2PC (CoordinatedTransactionShouldUse2PC)
hits an assertion as we are not in a coordinated transaction.

There is no downside to using a coordinated transaction for single-task
local queries.
2021-03-17 14:56:28 +03:00
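A minimal sketch of the case this commit covers, assuming a hypothetical distributed table `test` with a shard placed on the coordinator:
```SQL
BEGIN;
-- a single-task modification executed locally on the coordinator; it now
-- starts a coordinated transaction before the 2PC flag is set, so the
-- CoordinatedTransactionShouldUse2PC assertion no longer trips
DELETE FROM test WHERE a = 1;
COMMIT;
```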
Ahmet Gedemenli e54b253713 Add udf citus_get_active_worker_nodes
(cherry picked from commit 5e5db9eefa)
2021-03-17 14:56:28 +03:00
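For reference, the new UDF is a drop-in for the old name, which is kept as a wrapper (see the `master_get_active_worker_nodes` change further down in this diff):
```SQL
-- returns active worker host names and port numbers
SELECT * FROM citus_get_active_worker_nodes();
-- the old name still works and simply forwards to the new function
SELECT * FROM master_get_active_worker_nodes();
```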
Marco Slot 61efc87c53 Replace MAX_PUT_COPY_DATA_BUFFER_SIZE by citus.remote_copy_flush_threshold GUC
(cherry picked from commit fbc2147e11)
2021-03-17 07:35:46 +03:00
Marco Slot f5608c2769 Add GUC to set maximum connection lifetime
(cherry picked from commit 1646fca445)
2021-03-17 07:35:46 +03:00
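Both new settings are plain GUCs; a sketch for inspecting them (the flush threshold defaults to 8MB and the lifetime to 10 minutes, and a negative lifetime disables expiry, per the `MaxCachedConnectionLifetime >= 0` check in the diff below):
```SQL
-- inspect the new settings and their defaults
SHOW citus.remote_copy_flush_threshold;
SHOW citus.max_cached_connection_lifetime;
-- assumption: a negative value disables cached-connection expiry entirely
SET citus.max_cached_connection_lifetime TO -1;
```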
Marco Slot ecf0f2fdbf Remove unnecessary AtEOXact_Files call
(cherry picked from commit 6c5d263b7a)
2021-03-16 10:01:14 +03:00
Onder Kalaci 0a09551dab Rename use -> shouldUse
Because setting the flag doesn't necessarily mean that we'll
use 2PC. If connections are read-only, we will not use 2PC.
In other words, we'll use 2PC only for connections that modified
any placements.

(cherry picked from commit e65e72130d)
2021-03-16 10:01:14 +03:00
Onder Kalaci 0805ef9c79 Do not trigger 2PC for reads on local execution
Before this commit, Citus used 2PC no matter what kind of local query
execution happened.

For example, if the coordinator has shards (and the workers as well),
even a simple SELECT query could start 2PC:
```SQL
WITH cte_1 AS (SELECT * FROM test LIMIT 10) SELECT count(*) FROM cte_1;
```

In this query, the local execution of the shards (and also intermediate
result reads) triggers the 2PC.

To prevent that, Citus now distinguishes local reads from local writes
and switches to 2PC only if a modification happens. This may still lead
to unnecessary 2PCs when there is a local modification and only remote
SELECTs; that case is handled separately via #4587.

(cherry picked from commit 6a7ed7b309)
2021-03-16 10:01:14 +03:00
Naisila Puka a6435b7f6b Fix upgrade and downgrade paths for master/citus_update_table_statistics (#4805)
(cherry picked from commit 71a9f45513)
2021-03-16 10:01:09 +03:00
Marco Slot f13cf336f2 Add tests for modifying CTE and SELECT without FROM
(cherry picked from commit 9c0d7f5c26)
2021-03-16 09:44:00 +03:00
Marco Slot 46e316881b Fixes a crash in queries with a modifying CTE and a SELECT without FROM
(cherry picked from commit 58f85f55c0)
2021-03-16 09:43:24 +03:00
Onur Tirtir 18ab327c6c Add tests for concurrent index deadlock issue (#4775)
(cherry picked from commit 9728ce1167)
2021-03-16 09:42:21 +03:00
Hadi Moshayedi 61a89c69cd Populate DATABASEOID cache before CREATE INDEX CONCURRENTLY
(cherry picked from commit affe38eac6)
2021-03-16 09:41:19 +03:00
Marco Slot ad9469b351 Try to return earlier in idempotent master_add_node
(cherry picked from commit f25de6a0e3)
2021-03-16 09:40:43 +03:00
Onder Kalaci 4121788848 Pass pointer of AttributeEquivalenceClass instead of pointer of pointer
AttributeEquivalenceClass was unnecessarily passed around as a pointer to
a pointer. Use a single pointer for readability.

(cherry picked from commit 54ee96470e)
2021-03-16 09:40:07 +03:00
Onder Kalaci e9bf5fa235 Prevent infinite recursion for queries that involve UNION ALL and JOIN
With this commit, we make sure to prevent infinite recursion for queries
in the format: [subquery with a UNION ALL] JOIN [table or subquery]

Also fixes a bug where we pushed down UNION ALL below a JOIN even if the
UNION ALL was not safe to push down.

(cherry picked from commit d1cd198655)
2021-03-16 09:39:59 +03:00
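A sketch of the query shape from the commit message, with hypothetical distributed tables `t1`, `t2`, and `t3`:
```SQL
-- [subquery with a UNION ALL] JOIN [table]: planning previously could recurse
-- forever, and the UNION ALL could be pushed below the JOIN even when unsafe
SELECT count(*)
FROM (SELECT a FROM t1 UNION ALL SELECT a FROM t2) AS u
JOIN t3 ON (u.a = t3.a);
```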
Naisila Puka 18c7a3c188 Skip 2PC for readonly connections in a transaction (#4587)
* Skip 2PC for readonly connections in a transaction

* Use ConnectionModifiedPlacement() function

* Remove the second check of ConnectionModifiedPlacement()

* Add order by to prevent flaky output

* Test using pg_dist_transaction

(cherry picked from commit 196064836c)
2021-03-16 09:31:18 +03:00
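A hedged sketch of how the change can be observed, assuming hypothetical distributed tables `events` and `users`: after a read-only transaction, no new 2PC records should show up in `pg_dist_transaction`.
```SQL
BEGIN;
-- read-only multi-shard queries keep the worker connections read-only
SELECT count(*) FROM events;
SELECT count(*) FROM users;
COMMIT;  -- no PREPARE TRANSACTION is sent over read-only connections

-- the PR's tests verify this via pg_dist_transaction
SELECT count(*) FROM pg_dist_transaction;
```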
Halil Ozan Akgül 85a87af11c Update CHANGELOG for 10.0.2
(cherry picked from commit c2a9706203)

 Conflicts:
	CHANGELOG.md
2021-03-03 17:26:26 +03:00
Hanefi Onaldi 115fa950d3 Do not use security flags by default (#4770)
(cherry picked from commit 697bbbd3c6)
2021-03-03 13:20:05 +03:00
Naisila Puka 445291d94b Reimplement citus_update_table_statistics to detect dist. deadlocks (#4752)
* Reimplement citus_update_table_statistics

* Update stats for the given table not colocation group

* Add tests for reimplemented citus_update_table_statistics

* Use coordinated transaction, merge with citus_shard_sizes functions

* Update the old master_update_table_statistics as well

(cherry picked from commit 2f30614fe3)
2021-03-03 11:41:31 +03:00
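Usage is unchanged; a sketch with a hypothetical distributed table `events`:
```SQL
-- refresh shard sizes (and min/max values for append-distributed tables) in
-- pg_dist_placement / pg_dist_shard; now runs in a coordinated transaction so
-- distributed deadlocks can be detected
SELECT citus_update_table_statistics('events');
-- the old name remains available
SELECT master_update_table_statistics('events');
```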
Hanefi Onaldi 28f1c2129d Add security flags in configure scripts (#4760)
(cherry picked from commit f87107eb6b)
2021-03-03 11:41:00 +03:00
Marco Slot 205b8ec70a Normalize the ConvertTable notices
(cherry picked from commit dca615c5aa)
2021-03-03 11:40:38 +03:00
Halil Ozan Akgul 6fa25d73be Bump version to 10.0.2 2021-03-01 17:04:24 +03:00
SaitTalhaNisanci bfb1ca6d0d Use translated vars in postgres 13 as well (#4746)
* Use translated vars in postgres 13 as well

Postgres 13 removed translated vars, so we had special logic for PG 13.
However, that logic had a bug, so now we copy the translated vars before
Postgres deletes them. This also simplifies the logic.

* fix rtoffset with pg >= 13

(cherry picked from commit feee25dfbd)
2021-03-01 15:18:32 +03:00
Halil Ozan Akgul b355f0d9a2 Adds GRANT for public to citus_tables
(cherry picked from commit 5c5cb200f7)
2021-03-01 15:15:34 +03:00
Önder Kalacı fdcb6ead43 Prevent cross join without any target list entries (#4750)
/*
 * The physical planner assumes that all worker queries would have
 * target list entries based on the fact that at least the column
 * on the JOINs have to be on the target list. However, there is
 * an exception to that if there is a cartesian product join and
 * no additional target list entries belong to one side
 * of the JOIN. Once we support cartesian product join, we should
 * remove this error.
 */

(cherry picked from commit 0fe26a216c)
2021-03-01 15:13:26 +03:00
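A sketch of the query shape described in the comment, with hypothetical distributed tables `t1` and `t2`:
```SQL
-- a cartesian product join where no target list entry belongs to one side;
-- until cartesian product joins are supported, this now raises an error
SELECT t1.a
FROM t1 CROSS JOIN t2;
```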
Onur Tirtir 3fcb011b67 Grant read access for columnar metadata tables to unprivileged user
(cherry picked from commit 54ac924bef)
2021-03-01 15:02:57 +03:00
Halil Ozan Akgul 8228815b38 Add 10.0-2 schema version
(cherry-picked from dcc0207605)
2021-03-01 14:58:41 +03:00
Onur Tirtir 270234c7ff Ensure table owner when using alter_columnar_table_set/alter_columnar_table_reset (#4748)
(cherry picked from commit 5ed954844c)
2021-03-01 14:38:19 +03:00
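For reference, a sketch of the calls that now require table ownership (the table name, and the assumption that the 10.0 UDF signatures accept a `compression` argument, are mine):
```SQL
-- both UDFs now call EnsureTableOwner() first, so they error out unless the
-- caller owns the columnar table
SELECT alter_columnar_table_set('my_columnar_table', compression => 'pglz');
SELECT alter_columnar_table_reset('my_columnar_table', compression => true);
```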
Naisila Puka 3131d3e3c5 Preserve colocation with procedures in alter_distributed_table (#4743)
(cherry picked from commit 5ebd4eac7f)
2021-03-01 14:36:52 +03:00
Hanefi Onaldi a7f9dfc3f0 Fix flaky test
(cherry picked from commit 5aff18b573)
2021-03-01 13:18:22 +03:00
Hanefi Onaldi 049cd55346 Remove length limitations for table renames
(cherry picked from commit 9a792ef841)
2021-03-01 13:18:05 +03:00
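The error hint added later in this diff suggests the workaround when a generated shard name would exceed `NAMEDATALEN - 1` and a parallel query has already run in the transaction; a sketch with hypothetical table names:
```SQL
BEGIN;
-- force sequential (and local) execution so long shard names cannot
-- self-deadlock; Citus also switches automatically when it can
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
ALTER TABLE a_distributed_table_with_a_very_long_name_0123456789
    RENAME TO an_even_longer_replacement_name_0123456789_0123456789;
COMMIT;
```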
Hanefi Onaldi 27ecb5cde2 Failing long table name tests
(cherry picked from commit 7bebeb872d)
2021-03-01 13:17:48 +03:00
Naisila Puka fc08ec203f Fix insert query with CTEs/sublinks/subqueries etc (#4700)
* Fix insert query with CTE

* Add more cases with deferred pruning but false fast path

* Add more tests

* Better readability with if statements

(cherry picked from commit dbb88f6f8b)
2021-03-01 12:16:40 +03:00
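A sketch of one statement shape in the area this commit touches (the exact regressions live in the PR's tests); the table name is hypothetical:
```SQL
-- an INSERT fed from a CTE is not eligible for the fast-path router planner,
-- and its shard pruning is deferred; this combination was previously misplanned
WITH new_row AS (SELECT 42 AS id)
INSERT INTO events (event_id, payload)
SELECT id, 'x' FROM new_row;
```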
Hadi Moshayedi 495470d291 Fix alignment issue in DatumToBytea
(cherry picked from commit 2fca5ff3b5)
2021-03-01 12:07:46 +03:00
SaitTalhaNisanci 39a142b4d9 Use PROCESS_UTILITY_QUERY in utility calls
Using PROCESS_UTILITY_TOPLEVEL causes problems when combined with other
extensions such as pg_audit. With this commit we use PROCESS_UTILITY_QUERY
throughout the codebase to fix those problems.

(cherry picked from commit dcf54eaf2a)
2021-03-01 11:49:44 +03:00
Onur Tirtir ca4b529751 Bump version to 10.0.1 2021-02-19 12:05:56 +03:00
Onur Tirtir e48f5d804d Update CHANGELOG for 10.0.1
(cherry picked from commit 9031a22e20)

 Conflicts:
	CHANGELOG.md
2021-02-19 12:05:49 +03:00
Marco Slot 85e2c6b523 Rewrite time_partitions join clause to avoid smallint[] operator
(cherry picked from commit 972a8bc0b7)
2021-02-19 11:25:00 +03:00
Onur Tirtir 2a390b4c1d Bump Citus to 10.0.0 2021-02-16 14:39:24 +03:00
124 changed files with 5933 additions and 745 deletions

View File

@ -1,3 +1,66 @@
### citus v10.0.3 (March 16, 2021) ###
* Prevents infinite recursion for queries that involve `UNION ALL`
below `JOIN`
* Fixes a crash in queries with a modifying `CTE` and a `SELECT`
without `FROM`
* Fixes upgrade and downgrade paths for `citus_update_table_statistics`
* Fixes a bug that causes `SELECT` queries to use 2PC unnecessarily
* Fixes a bug that might cause self-deadlocks with
`CREATE INDEX` / `REINDEX CONCURRENTLY` commands
* Adds `citus.max_cached_connection_lifetime` GUC to set maximum connection
lifetime
* Adds `citus.remote_copy_flush_threshold` GUC that controls
per-shard memory usages by `COPY`
* Adds `citus_get_active_worker_nodes` UDF to deprecate
`master_get_active_worker_nodes`
* Skips 2PC for readonly connections in a transaction
* Makes sure that local execution starts coordinated transaction
* Removes open temporary file warning when cancelling a query with
an open tuple store
* Relaxes the locks when adding an existing node
### citus v10.0.2 (March 3, 2021) ###
* Adds a configure flag to enforce security
* Fixes a bug due to cross join without target list
* Fixes a bug with `UNION ALL` on PG 13
* Fixes a compatibility issue with pg_audit in utility calls
* Fixes insert query with CTEs/sublinks/subqueries etc
* Grants `SELECT` permission on `citus_tables` view to `public`
* Grants `SELECT` permission on columnar metadata tables to `public`
* Improves `citus_update_table_statistics` and provides distributed deadlock
detection
* Preserves colocation with procedures in `alter_distributed_table`
* Prevents using `alter_columnar_table_set` and `alter_columnar_table_reset`
on a columnar table not owned by the user
* Removes limits around long table names
### citus v10.0.1 (February 19, 2021) ###
* Fixes an issue in creation of `pg_catalog.time_partitions` view
### citus v10.0.0 (February 16, 2021) ###
* Adds support for per-table option for columnar storage

View File

@ -86,6 +86,7 @@ endif
# Add options passed to configure or computed therein, to CFLAGS/CPPFLAGS/...
override CFLAGS += @CFLAGS@ @CITUS_CFLAGS@
override BITCODE_CFLAGS := $(BITCODE_CFLAGS) @CITUS_BITCODE_CFLAGS@
ifneq ($(GIT_VERSION),)
override CFLAGS += -DGIT_VERSION=\"$(GIT_VERSION)\"
endif

119
configure vendored
View File

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 10.0devel.
# Generated by GNU Autoconf 2.69 for Citus 10.0.3.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='10.0devel'
PACKAGE_STRING='Citus 10.0devel'
PACKAGE_VERSION='10.0.3'
PACKAGE_STRING='Citus 10.0.3'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -628,8 +628,10 @@ POSTGRES_BUILDDIR
POSTGRES_SRCDIR
CITUS_LDFLAGS
CITUS_CPPFLAGS
CITUS_BITCODE_CFLAGS
CITUS_CFLAGS
GIT_BIN
with_security_flags
with_zstd
with_lz4
EGREP
@ -696,6 +698,7 @@ with_libcurl
with_reports_hostname
with_lz4
with_zstd
with_security_flags
'
ac_precious_vars='build_alias
host_alias
@ -1258,7 +1261,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 10.0devel to adapt to many kinds of systems.
\`configure' configures Citus 10.0.3 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1320,7 +1323,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 10.0devel:";;
short | recursive ) echo "Configuration of Citus 10.0.3:";;
esac
cat <<\_ACEOF
@ -1342,6 +1345,7 @@ Optional Packages:
and update checks
--without-lz4 do not use lz4
--without-zstd do not use zstd
--with-security-flags use security flags
Some influential environment variables:
PG_CONFIG Location to find pg_config for target PostgreSQL instalation
@ -1422,7 +1426,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 10.0devel
Citus configure 10.0.3
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1905,7 +1909,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 10.0devel, which was
It was created by Citus $as_me 10.0.3, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -4346,6 +4350,48 @@ if test x"$citusac_cv_prog_cc_cflags__Werror_return_type" = x"yes"; then
CITUS_CFLAGS="$CITUS_CFLAGS -Werror=return-type"
fi
# Security flags
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We do not enforce the following flag because it is only available on GCC>=8
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC supports -fstack-clash-protection" >&5
$as_echo_n "checking whether $CC supports -fstack-clash-protection... " >&6; }
if ${citusac_cv_prog_cc_cflags__fstack_clash_protection+:} false; then :
$as_echo_n "(cached) " >&6
else
citusac_save_CFLAGS=$CFLAGS
flag=-fstack-clash-protection
case $flag in -Wno*)
flag=-W$(echo $flag | cut -c 6-)
esac
CFLAGS="$citusac_save_CFLAGS $flag"
ac_save_c_werror_flag=$ac_c_werror_flag
ac_c_werror_flag=yes
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
citusac_cv_prog_cc_cflags__fstack_clash_protection=yes
else
citusac_cv_prog_cc_cflags__fstack_clash_protection=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
ac_c_werror_flag=$ac_save_c_werror_flag
CFLAGS="$citusac_save_CFLAGS"
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $citusac_cv_prog_cc_cflags__fstack_clash_protection" >&5
$as_echo "$citusac_cv_prog_cc_cflags__fstack_clash_protection" >&6; }
if test x"$citusac_cv_prog_cc_cflags__fstack_clash_protection" = x"yes"; then
CITUS_CFLAGS="$CITUS_CFLAGS -fstack-clash-protection"
fi
#
# --enable-coverage enables generation of code coverage metrics with gcov
@ -4493,8 +4539,8 @@ if test "$version_num" != '11'; then
$as_echo "#define HAS_TABLEAM 1" >>confdefs.h
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: postgres version does not support table access methodds" >&5
$as_echo "$as_me: postgres version does not support table access methodds" >&6;}
{ $as_echo "$as_me:${as_lineno-$LINENO}: postgres version does not support table access methods" >&5
$as_echo "$as_me: postgres version does not support table access methods" >&6;}
fi;
# Require lz4 & zstd only if we are compiling columnar
@ -4687,6 +4733,55 @@ fi
fi # test "$HAS_TABLEAM" == 'yes'
# Check whether --with-security-flags was given.
if test "${with_security_flags+set}" = set; then :
withval=$with_security_flags;
case $withval in
yes)
:
;;
no)
:
;;
*)
as_fn_error $? "no argument expected for --with-security-flags option" "$LINENO" 5
;;
esac
else
with_security_flags=no
fi
if test "$with_security_flags" = yes; then
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We always want to have some compiler flags for security concerns.
SECURITY_CFLAGS="-fstack-protector-strong -D_FORTIFY_SOURCE=2 -O2 -z noexecstack -fpic -shared -Wl,-z,relro -Wl,-z,now -Wformat -Wformat-security -Werror=format-security"
CITUS_CFLAGS="$CITUS_CFLAGS $SECURITY_CFLAGS"
{ $as_echo "$as_me:${as_lineno-$LINENO}: Blindly added security flags for linker: $SECURITY_CFLAGS" >&5
$as_echo "$as_me: Blindly added security flags for linker: $SECURITY_CFLAGS" >&6;}
# We always want to have some clang flags for security concerns.
# This doesn't include "-Wl,-z,relro -Wl,-z,now" on purpuse, because bitcode is not linked.
# This doesn't include -fsanitize=cfi because it breaks builds on many distros including
# Debian/Buster, Debian/Stretch, Ubuntu/Bionic, Ubuntu/Xenial and EL7.
SECURITY_BITCODE_CFLAGS="-fsanitize=safe-stack -fstack-protector-strong -flto -fPIC -Wformat -Wformat-security -Werror=format-security"
CITUS_BITCODE_CFLAGS="$CITUS_BITCODE_CFLAGS $SECURITY_BITCODE_CFLAGS"
{ $as_echo "$as_me:${as_lineno-$LINENO}: Blindly added security flags for llvm: $SECURITY_BITCODE_CFLAGS" >&5
$as_echo "$as_me: Blindly added security flags for llvm: $SECURITY_BITCODE_CFLAGS" >&6;}
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: If you run into issues during linking or bitcode compilation, you can use --without-security-flags." >&5
$as_echo "$as_me: WARNING: If you run into issues during linking or bitcode compilation, you can use --without-security-flags." >&2;}
fi
# Check if git is installed, when installed the gitref of the checkout will be baked in the application
# Extract the first word of "git", so it can be a program name with args.
set dummy git; ac_word=$2
@ -4752,6 +4847,8 @@ fi
CITUS_CFLAGS="$CITUS_CFLAGS"
CITUS_BITCODE_CFLAGS="$CITUS_BITCODE_CFLAGS"
CITUS_CPPFLAGS="$CITUS_CPPFLAGS"
CITUS_LDFLAGS="$LIBS $CITUS_LDFLAGS"
@ -5276,7 +5373,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 10.0devel, which was
This file was extended by Citus $as_me 10.0.3, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5338,7 +5435,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 10.0devel
Citus config.status 10.0.3
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [10.0devel])
AC_INIT([Citus], [10.0.3])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands
@ -174,6 +174,10 @@ CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=vla]) # visual studio does not support thes
CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=implicit-int])
CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=implicit-function-declaration])
CITUSAC_PROG_CC_CFLAGS_OPT([-Werror=return-type])
# Security flags
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We do not enforce the following flag because it is only available on GCC>=8
CITUSAC_PROG_CC_CFLAGS_OPT([-fstack-clash-protection])
#
# --enable-coverage enables generation of code coverage metrics with gcov
@ -216,7 +220,7 @@ if test "$version_num" != '11'; then
HAS_TABLEAM=yes
AC_DEFINE([HAS_TABLEAM], 1, [Define to 1 to build with table access method support, pg12 and up])
else
AC_MSG_NOTICE([postgres version does not support table access methodds])
AC_MSG_NOTICE([postgres version does not support table access methods])
fi;
# Require lz4 & zstd only if we are compiling columnar
@ -261,11 +265,36 @@ if test "$HAS_TABLEAM" == 'yes'; then
fi # test "$HAS_TABLEAM" == 'yes'
PGAC_ARG_BOOL(with, security-flags, no,
[use security flags])
AC_SUBST(with_security_flags)
if test "$with_security_flags" = yes; then
# Flags taken from: https://liquid.microsoft.com/Web/Object/Read/ms.security/Requirements/Microsoft.Security.SystemsADM.10203#guide
# We always want to have some compiler flags for security concerns.
SECURITY_CFLAGS="-fstack-protector-strong -D_FORTIFY_SOURCE=2 -O2 -z noexecstack -fpic -shared -Wl,-z,relro -Wl,-z,now -Wformat -Wformat-security -Werror=format-security"
CITUS_CFLAGS="$CITUS_CFLAGS $SECURITY_CFLAGS"
AC_MSG_NOTICE([Blindly added security flags for linker: $SECURITY_CFLAGS])
# We always want to have some clang flags for security concerns.
# This doesn't include "-Wl,-z,relro -Wl,-z,now" on purpuse, because bitcode is not linked.
# This doesn't include -fsanitize=cfi because it breaks builds on many distros including
# Debian/Buster, Debian/Stretch, Ubuntu/Bionic, Ubuntu/Xenial and EL7.
SECURITY_BITCODE_CFLAGS="-fsanitize=safe-stack -fstack-protector-strong -flto -fPIC -Wformat -Wformat-security -Werror=format-security"
CITUS_BITCODE_CFLAGS="$CITUS_BITCODE_CFLAGS $SECURITY_BITCODE_CFLAGS"
AC_MSG_NOTICE([Blindly added security flags for llvm: $SECURITY_BITCODE_CFLAGS])
AC_MSG_WARN([If you run into issues during linking or bitcode compilation, you can use --without-security-flags.])
fi
# Check if git is installed, when installed the gitref of the checkout will be baked in the application
AC_PATH_PROG(GIT_BIN, git)
AC_CHECK_FILE(.git,[HAS_DOTGIT=yes], [HAS_DOTGIT=])
AC_SUBST(CITUS_CFLAGS, "$CITUS_CFLAGS")
AC_SUBST(CITUS_BITCODE_CFLAGS, "$CITUS_BITCODE_CFLAGS")
AC_SUBST(CITUS_CPPFLAGS, "$CITUS_CPPFLAGS")
AC_SUBST(CITUS_LDFLAGS, "$LIBS $CITUS_LDFLAGS")
AC_SUBST(POSTGRES_SRCDIR, "$POSTGRES_SRCDIR")

View File

@ -1087,7 +1087,11 @@ DatumToBytea(Datum value, Form_pg_attribute attrForm)
{
if (attrForm->attbyval)
{
store_att_byval(VARDATA(result), value, attrForm->attlen);
Datum tmp;
store_att_byval(&tmp, value, attrForm->attlen);
memcpy_s(VARDATA(result), datumLength + VARHDRSZ,
&tmp, attrForm->attlen);
}
else
{

View File

@ -1662,6 +1662,8 @@ alter_columnar_table_set(PG_FUNCTION_ARGS)
quote_identifier(RelationGetRelationName(rel)))));
}
EnsureTableOwner(relationId);
ColumnarOptions options = { 0 };
if (!ReadColumnarOptions(relationId, &options))
{
@ -1769,6 +1771,8 @@ alter_columnar_table_reset(PG_FUNCTION_ARGS)
quote_identifier(RelationGetRelationName(rel)))));
}
EnsureTableOwner(relationId);
ColumnarOptions options = { 0 };
if (!ReadColumnarOptions(relationId, &options))
{

View File

@ -0,0 +1,5 @@
/* columnar--10.0-1--10.0-2.sql */
-- grant read access for columnar metadata tables to unprivileged user
GRANT USAGE ON SCHEMA columnar TO PUBLIC;
GRANT SELECT ON ALL tables IN SCHEMA columnar TO PUBLIC ;

View File

@ -0,0 +1,5 @@
/* columnar--10.0-2--10.0-1.sql */
-- revoke read access for columnar metadata tables from unprivileged user
REVOKE USAGE ON SCHEMA columnar FROM PUBLIC;
REVOKE SELECT ON ALL tables IN SCHEMA columnar FROM PUBLIC;

View File

@ -1,6 +1,6 @@
# Citus extension
comment = 'Citus distributed database'
default_version = '10.0-1'
default_version = '10.0-3'
module_pathname = '$libdir/citus'
relocatable = false
schema = pg_catalog

View File

@ -43,12 +43,15 @@
#include "distributed/listutils.h"
#include "distributed/local_executor.h"
#include "distributed/metadata/dependency.h"
#include "distributed/metadata/distobject.h"
#include "distributed/metadata_cache.h"
#include "distributed/metadata_sync.h"
#include "distributed/multi_executor.h"
#include "distributed/multi_logical_planner.h"
#include "distributed/multi_partitioning_utils.h"
#include "distributed/reference_table_utils.h"
#include "distributed/relation_access_tracking.h"
#include "distributed/shard_utils.h"
#include "distributed/worker_protocol.h"
#include "distributed/worker_transaction.h"
#include "executor/spi.h"
@ -175,6 +178,8 @@ static TableConversionReturn * AlterDistributedTable(TableConversionParameters *
static TableConversionReturn * AlterTableSetAccessMethod(
TableConversionParameters *params);
static TableConversionReturn * ConvertTable(TableConversionState *con);
static bool SwitchToSequentialAndLocalExecutionIfShardNameTooLong(char *relationName,
char *longestShardName);
static void EnsureTableNotReferencing(Oid relationId, char conversionType);
static void EnsureTableNotReferenced(Oid relationId, char conversionType);
static void EnsureTableNotForeign(Oid relationId);
@ -511,6 +516,10 @@ ConvertTable(TableConversionState *con)
bool oldEnableLocalReferenceForeignKeys = EnableLocalReferenceForeignKeys;
SetLocalEnableLocalReferenceForeignKeys(false);
/* switch to sequential execution if shard names will be too long */
SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(con->relationId,
con->relationName);
if (con->conversionType == UNDISTRIBUTE_TABLE && con->cascadeViaForeignKeys &&
(TableReferencing(con->relationId) || TableReferenced(con->relationId)))
{
@ -673,7 +682,7 @@ ConvertTable(TableConversionState *con)
Node *parseTree = ParseTreeNode(tableCreationSql);
RelayEventExtendNames(parseTree, con->schemaName, con->hashOfName);
ProcessUtilityParseTree(parseTree, tableCreationSql, PROCESS_UTILITY_TOPLEVEL,
ProcessUtilityParseTree(parseTree, tableCreationSql, PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}
@ -711,6 +720,32 @@ ConvertTable(TableConversionState *con)
CreateCitusTableLike(con);
}
/* preserve colocation with procedures/functions */
if (con->conversionType == ALTER_DISTRIBUTED_TABLE)
{
/*
* Updating the colocationId of functions is always desirable for
* the following scenario:
* we have shardCount or colocateWith change
* AND entire co-location group is altered
* The reason for the second condition is because we currently don't
* remember the original table specified in the colocateWith when
* distributing the function. We only remember the colocationId in
* pg_dist_object table.
*/
if ((!con->shardCountIsNull || con->colocateWith != NULL) &&
(con->cascadeToColocated == CASCADE_TO_COLOCATED_YES || list_length(
con->colocatedTableList) == 1) && con->distributionColumn == NULL)
{
/*
* Update the colocationId from the one of the old relation to the one
* of the new relation for all tuples in citus.pg_dist_object
*/
UpdateDistributedObjectColocationId(TableColocationId(con->relationId),
TableColocationId(con->newRelationId));
}
}
ReplaceTable(con->relationId, con->newRelationId, justBeforeDropCommands,
con->suppressNoticeMessages);
@ -728,7 +763,7 @@ ConvertTable(TableConversionState *con)
Node *parseTree = ParseTreeNode(attachPartitionCommand);
ProcessUtilityParseTree(parseTree, attachPartitionCommand,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}
@ -1134,7 +1169,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
{
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("Moving the data of %s",
ereport(NOTICE, (errmsg("moving the data of %s",
quote_qualified_identifier(schemaName, sourceName))));
}
@ -1207,7 +1242,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("Dropping the old %s",
ereport(NOTICE, (errmsg("dropping the old %s",
quote_qualified_identifier(schemaName, sourceName))));
}
@ -1218,7 +1253,7 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
if (!suppressNoticeMessages)
{
ereport(NOTICE, (errmsg("Renaming the new table to %s",
ereport(NOTICE, (errmsg("renaming the new table to %s",
quote_qualified_identifier(schemaName, sourceName))));
}
@ -1572,3 +1607,104 @@ ExecuteQueryViaSPI(char *query, int SPIOK)
ereport(ERROR, (errmsg("could not finish SPI connection")));
}
}
/*
* SwitchToSequentialAndLocalExecutionIfRelationNameTooLong generates the longest shard name
* on the shards of a distributed table, and if exceeds the limit switches to sequential and
* local execution to prevent self-deadlocks.
*
* In case of a RENAME, the relation name parameter should store the new table name, so
* that the function can generate shard names of the renamed relations
*/
void
SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(Oid relationId,
char *finalRelationName)
{
if (!IsCitusTable(relationId))
{
return;
}
if (ShardIntervalCount(relationId) == 0)
{
/*
* Relation has no shards, so we cannot run into "long shard relation
* name" issue.
*/
return;
}
char *longestShardName = GetLongestShardName(relationId, finalRelationName);
bool switchedToSequentialAndLocalExecution =
SwitchToSequentialAndLocalExecutionIfShardNameTooLong(finalRelationName,
longestShardName);
if (switchedToSequentialAndLocalExecution)
{
return;
}
if (PartitionedTable(relationId))
{
Oid longestNamePartitionId = PartitionWithLongestNameRelationId(relationId);
if (!OidIsValid(longestNamePartitionId))
{
/* no partitions have been created yet */
return;
}
char *longestPartitionName = get_rel_name(longestNamePartitionId);
char *longestPartitionShardName = GetLongestShardName(longestNamePartitionId,
longestPartitionName);
SwitchToSequentialAndLocalExecutionIfShardNameTooLong(longestPartitionName,
longestPartitionShardName);
}
}
/*
* SwitchToSequentialAndLocalExecutionIfShardNameTooLong switches to sequential and local
* execution if the shard name is too long.
*
* returns true if switched to sequential and local execution.
*/
static bool
SwitchToSequentialAndLocalExecutionIfShardNameTooLong(char *relationName,
char *longestShardName)
{
if (strlen(longestShardName) >= NAMEDATALEN - 1)
{
if (ParallelQueryExecutedInTransaction())
{
/*
* If there has already been a parallel query executed, the sequential mode
* would still use the already opened parallel connections to the workers,
* thus contradicting our purpose of using sequential mode.
*/
ereport(ERROR, (errmsg(
"Shard name (%s) for table (%s) is too long and could "
"lead to deadlocks when executed in a transaction "
"block after a parallel query", longestShardName,
relationName),
errhint("Try re-running the transaction with "
"\"SET LOCAL citus.multi_shard_modify_mode TO "
"\'sequential\';\"")));
}
else
{
elog(DEBUG1, "the name of the shard (%s) for relation (%s) is too long, "
"switching to sequential and local execution mode to prevent "
"self deadlocks",
longestShardName, relationName);
SetLocalMultiShardModifyModeToSequential();
SetLocalExecutionStatus(LOCAL_EXECUTION_REQUIRED);
return true;
}
}
return false;
}

View File

@ -510,6 +510,6 @@ ExecuteForeignKeyCreateCommand(const char *commandString, bool skip_validation)
"command \"%s\"", commandString)));
}
ProcessUtilityParseTree(parseTree, commandString, PROCESS_UTILITY_TOPLEVEL,
ProcessUtilityParseTree(parseTree, commandString, PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}

View File

@ -411,15 +411,16 @@ static char *
GenerateLongestShardPartitionIndexName(IndexStmt *createIndexStatement)
{
Oid relationId = CreateIndexStmtGetRelationId(createIndexStatement);
char *longestPartitionName = LongestPartitionName(relationId);
if (longestPartitionName == NULL)
Oid longestNamePartitionId = PartitionWithLongestNameRelationId(relationId);
if (!OidIsValid(longestNamePartitionId))
{
/* no partitions have been created yet */
return NULL;
}
char *longestPartitionShardName = pstrdup(longestPartitionName);
ShardInterval *shardInterval = LoadShardIntervalWithLongestShardName(relationId);
char *longestPartitionShardName = get_rel_name(longestNamePartitionId);
ShardInterval *shardInterval = LoadShardIntervalWithLongestShardName(
longestNamePartitionId);
AppendShardIdToName(&longestPartitionShardName, shardInterval->shardId);
IndexStmt *createLongestShardIndexStmt = copyObject(createIndexStatement);

View File

@ -2244,7 +2244,7 @@ CitusCopyDestReceiverStartup(DestReceiver *dest, int operation,
if (cacheEntry->replicationModel == REPLICATION_MODEL_2PC ||
MultiShardCommitProtocol == COMMIT_PROTOCOL_2PC)
{
CoordinatedTransactionUse2PC();
CoordinatedTransactionShouldUse2PC();
}
/* define how tuples will be serialised */

View File

@ -109,6 +109,13 @@ PreprocessRenameStmt(Node *node, const char *renameCommand,
*/
ErrorIfUnsupportedRenameStmt(renameStmt);
if (renameStmt->renameType == OBJECT_TABLE ||
renameStmt->renameType == OBJECT_FOREIGN_TABLE)
{
SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(tableRelationId,
renameStmt->newname);
}
DDLJob *ddlJob = palloc0(sizeof(DDLJob));
ddlJob->targetRelationId = tableRelationId;
ddlJob->concurrentIndexCmd = false;

View File

@ -910,6 +910,19 @@ ExecuteDistributedDDLJob(DDLJob *ddlJob)
*/
if (ddlJob->startNewTransaction)
{
/*
* If cache is not populated, system catalog lookups will cause
* the xmin of current backend to change. Then the last phase
* of CREATE INDEX CONCURRENTLY, which is in a separate backend,
* will hang waiting for our backend and result in a deadlock.
*
* We populate the cache before starting the next transaction to
* avoid this. Most of the metadata has already been resolved in
* planning phase, we only need to lookup metadata needed for
* connection establishment.
*/
(void) CurrentDatabaseName();
CommitTransactionCommand();
StartTransactionCommand();
}

View File

@ -32,6 +32,7 @@
#include "distributed/shared_connection_stats.h"
#include "distributed/cancel_utils.h"
#include "distributed/remote_commands.h"
#include "distributed/time_constants.h"
#include "distributed/version_compat.h"
#include "distributed/worker_log_messages.h"
#include "mb/pg_wchar.h"
@ -43,6 +44,7 @@
int NodeConnectionTimeout = 30000;
int MaxCachedConnectionsPerWorker = 1;
int MaxCachedConnectionLifetime = 10 * MS_PER_MINUTE;
HTAB *ConnectionHash = NULL;
HTAB *ConnParamsHash = NULL;
@ -1288,6 +1290,7 @@ AfterXactHostConnectionHandling(ConnectionHashEntry *entry, bool isCommit)
* - Connection is forced to close at the end of transaction
* - Connection is not in OK state
* - A transaction is still in progress (usually because we are cancelling a distributed transaction)
* - A connection reached its maximum lifetime
*/
static bool
ShouldShutdownConnection(MultiConnection *connection, const int cachedConnectionCount)
@ -1303,7 +1306,10 @@ ShouldShutdownConnection(MultiConnection *connection, const int cachedConnection
cachedConnectionCount >= MaxCachedConnectionsPerWorker ||
connection->forceCloseAtTransactionEnd ||
PQstatus(connection->pgConn) != CONNECTION_OK ||
!RemoteTransactionIdle(connection);
!RemoteTransactionIdle(connection) ||
(MaxCachedConnectionLifetime >= 0 &&
TimestampDifferenceExceeds(connection->connectionStart, GetCurrentTimestamp(),
MaxCachedConnectionLifetime));
}

View File

@ -25,7 +25,11 @@
#include "utils/palloc.h"
#define MAX_PUT_COPY_DATA_BUFFER_SIZE (8 * 1024 * 1024)
/*
* Setting that controls how many bytes of COPY data libpq is allowed to buffer
* internally before we force a flush.
*/
int RemoteCopyFlushThreshold = 8 * 1024 * 1024;
/* GUC, determining whether statements sent to remote nodes are logged */
@ -620,7 +624,7 @@ PutRemoteCopyData(MultiConnection *connection, const char *buffer, int nbytes)
*/
connection->copyBytesWrittenSinceLastFlush += nbytes;
if (connection->copyBytesWrittenSinceLastFlush > MAX_PUT_COPY_DATA_BUFFER_SIZE)
if (connection->copyBytesWrittenSinceLastFlush > RemoteCopyFlushThreshold)
{
connection->copyBytesWrittenSinceLastFlush = 0;
return FinishConnectionIO(connection, allowInterrupts);

View File

@ -1165,23 +1165,6 @@ DecideTransactionPropertiesForTaskList(RowModifyLevel modLevel, List *taskList,
return xactProperties;
}
if (GetCurrentLocalExecutionStatus() == LOCAL_EXECUTION_REQUIRED)
{
/*
* In case localExecutionHappened, we force the executor to use 2PC.
* The primary motivation is that at this point we're definitely expanding
* the nodes participated in the transaction. And, by re-generating the
* remote task lists during local query execution, we might prevent the adaptive
* executor to kick-in 2PC (or even start coordinated transaction, that's why
* we prefer adding this check here instead of
* Activate2PCIfModifyingTransactionExpandsToNewNode()).
*/
xactProperties.errorOnAnyFailure = true;
xactProperties.useRemoteTransactionBlocks = TRANSACTION_BLOCKS_REQUIRED;
xactProperties.requires2PC = true;
return xactProperties;
}
if (DistributedExecutionRequiresRollback(taskList))
{
/* transaction blocks are required if the task list needs to roll back */
@ -1240,7 +1223,7 @@ StartDistributedExecution(DistributedExecution *execution)
if (xactProperties->requires2PC)
{
CoordinatedTransactionUse2PC();
CoordinatedTransactionShouldUse2PC();
}
/*
@ -3197,7 +3180,7 @@ Activate2PCIfModifyingTransactionExpandsToNewNode(WorkerSession *session)
* just opened, which means we're now going to make modifications
* over multiple connections. Activate 2PC!
*/
CoordinatedTransactionUse2PC();
CoordinatedTransactionShouldUse2PC();
}
}

View File

@ -342,9 +342,12 @@ CitusBeginModifyScan(CustomScanState *node, EState *estate, int eflags)
/*
* At this point, we're about to do the shard pruning for fast-path queries.
* Given that pruning is deferred always for INSERTs, we get here
* !EnableFastPathRouterPlanner as well.
* !EnableFastPathRouterPlanner as well. Given that INSERT statements with
* CTEs/sublinks etc are not eligible for fast-path router plan, we get here
* jobQuery->commandType == CMD_INSERT as well.
*/
Assert(currentPlan->fastPathRouterPlan || !EnableFastPathRouterPlanner);
Assert(currentPlan->fastPathRouterPlan || !EnableFastPathRouterPlanner ||
jobQuery->commandType == CMD_INSERT);
/*
* We can only now decide which shard to use, so we need to build a new task

View File

@ -209,6 +209,19 @@ ExecuteLocalTaskListExtended(List *taskList,
Oid *parameterTypes = NULL;
uint64 totalRowsProcessed = 0;
/*
* Even if we are executing local tasks, we still enable
* coordinated transaction. This is because
* (a) we might be in a transaction, and the next commands may
* require coordinated transaction
* (b) we might be executing some tasks locally and the others
* via remote execution
*
* Also, there is no harm enabling coordinated transaction even if
* we only deal with local tasks in the transaction.
*/
UseCoordinatedTransaction();
if (paramListInfo != NULL)
{
/* not used anywhere, so declare here */
@ -236,6 +249,17 @@ ExecuteLocalTaskListExtended(List *taskList,
{
SetLocalExecutionStatus(LOCAL_EXECUTION_REQUIRED);
}
if (!ReadOnlyTask(task->taskType))
{
/*
* Any modification on the local execution should enable 2PC. If remote
* queries are also ReadOnly, our 2PC logic is smart enough to skip sending
* PREPARE to those connections.
*/
CoordinatedTransactionShouldUse2PC();
}
LogLocalCommand(task);
if (isUtilityCommand)
@ -406,7 +430,7 @@ ExecuteUtilityCommand(const char *taskQueryCommand)
* process utility.
*/
ProcessUtilityParseTree(taskRawParseTree, taskQueryCommand,
PROCESS_UTILITY_TOPLEVEL, NULL, None_Receiver,
PROCESS_UTILITY_QUERY, NULL, None_Receiver,
NULL);
}
}

View File

@ -373,3 +373,56 @@ GetDistributedObjectAddressList(void)
return objectAddressList;
}
/*
* UpdateDistributedObjectColocationId gets an old and a new colocationId
* and updates the colocationId of all tuples in citus.pg_dist_object which
* have the old colocationId to the new colocationId.
*/
void
UpdateDistributedObjectColocationId(uint32 oldColocationId,
uint32 newColocationId)
{
const bool indexOK = false;
ScanKeyData scanKey[1];
Relation pgDistObjectRel = table_open(DistObjectRelationId(),
RowExclusiveLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistObjectRel);
/* scan pg_dist_object for colocationId equal to old colocationId */
ScanKeyInit(&scanKey[0], Anum_pg_dist_object_colocationid,
BTEqualStrategyNumber,
F_INT4EQ, UInt32GetDatum(oldColocationId));
SysScanDesc scanDescriptor = systable_beginscan(pgDistObjectRel,
InvalidOid,
indexOK,
NULL, 1, scanKey);
HeapTuple heapTuple;
while (HeapTupleIsValid(heapTuple = systable_getnext(scanDescriptor)))
{
Datum values[Natts_pg_dist_object];
bool isnull[Natts_pg_dist_object];
bool replace[Natts_pg_dist_object];
memset(replace, 0, sizeof(replace));
replace[Anum_pg_dist_object_colocationid - 1] = true;
/* update the colocationId to the new one */
values[Anum_pg_dist_object_colocationid - 1] = UInt32GetDatum(newColocationId);
isnull[Anum_pg_dist_object_colocationid - 1] = false;
heapTuple = heap_modify_tuple(heapTuple, tupleDescriptor, values, isnull,
replace);
CatalogTupleUpdate(pgDistObjectRel, &heapTuple->t_self, heapTuple);
CitusInvalidateRelcacheByRelid(DistObjectRelationId());
}
systable_endscan(scanDescriptor);
table_close(pgDistObjectRel, NoLock);
CommandCounterIncrement();
}

View File

@ -79,14 +79,24 @@ static bool DistributedTableSizeOnWorker(WorkerNode *workerNode, Oid relationId,
char *sizeQuery, bool failOnError,
uint64 *tableSize);
static List * ShardIntervalsOnWorkerGroup(WorkerNode *workerNode, Oid relationId);
static char * GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList);
static char * GenerateAllShardNameAndSizeQueryForNode(WorkerNode *workerNode);
static List * GenerateShardSizesQueryList(List *workerNodeList);
static char * GenerateShardStatisticsQueryForShardList(List *shardIntervalList, bool
useShardMinMaxQuery);
static char * GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode,
List *citusTableIds, bool
useShardMinMaxQuery);
static List * GenerateShardStatisticsQueryList(List *workerNodeList, List *citusTableIds,
bool useShardMinMaxQuery);
static void ErrorIfNotSuitableToGetSize(Oid relationId);
static List * OpenConnectionToNodes(List *workerNodeList);
static void ReceiveShardNameAndSizeResults(List *connectionList,
Tuplestorestate *tupleStore,
TupleDesc tupleDescriptor);
static void AppendShardSizeMinMaxQuery(StringInfo selectQuery, uint64 shardId,
ShardInterval *
shardInterval, char *shardName,
char *quotedShardName);
static void AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval,
char *quotedShardName);
/* exports for SQL callable functions */
PG_FUNCTION_INFO_V1(citus_table_size);
@ -102,25 +112,16 @@ citus_shard_sizes(PG_FUNCTION_ARGS)
{
CheckCitusVersion(ERROR);
List *workerNodeList = ActivePrimaryNodeList(NoLock);
List *allCitusTableIds = AllCitusTableIds();
List *shardSizesQueryList = GenerateShardSizesQueryList(workerNodeList);
/* we don't need a distributed transaction here */
bool useDistributedTransaction = false;
List *connectionList = OpenConnectionToNodes(workerNodeList);
FinishConnectionListEstablishment(connectionList);
/* send commands in parallel */
for (int i = 0; i < list_length(connectionList); i++)
{
MultiConnection *connection = (MultiConnection *) list_nth(connectionList, i);
char *shardSizesQuery = (char *) list_nth(shardSizesQueryList, i);
int querySent = SendRemoteCommand(connection, shardSizesQuery);
if (querySent == 0)
{
ReportConnectionError(connection, WARNING);
}
}
/* we only want the shard sizes here so useShardMinMaxQuery parameter is false */
bool useShardMinMaxQuery = false;
List *connectionList = SendShardStatisticsQueriesInParallel(allCitusTableIds,
useDistributedTransaction,
useShardMinMaxQuery);
TupleDesc tupleDescriptor = NULL;
Tuplestorestate *tupleStore = SetupTuplestore(fcinfo, &tupleDescriptor);
@ -225,6 +226,59 @@ citus_relation_size(PG_FUNCTION_ARGS)
}
/*
* SendShardStatisticsQueriesInParallel generates query lists for obtaining shard
* statistics and then sends the commands in parallel by opening connections
* to available nodes. It returns the connection list.
*/
List *
SendShardStatisticsQueriesInParallel(List *citusTableIds, bool useDistributedTransaction,
bool
useShardMinMaxQuery)
{
List *workerNodeList = ActivePrimaryNodeList(NoLock);
List *shardSizesQueryList = GenerateShardStatisticsQueryList(workerNodeList,
citusTableIds,
useShardMinMaxQuery);
List *connectionList = OpenConnectionToNodes(workerNodeList);
FinishConnectionListEstablishment(connectionList);
if (useDistributedTransaction)
{
/*
* For now, in the case we want to include shard min and max values, we also
* want to update the entries in pg_dist_placement and pg_dist_shard with the
* latest statistics. In order to detect distributed deadlocks, we assign a
* distributed transaction ID to the current transaction
*/
UseCoordinatedTransaction();
}
/* send commands in parallel */
for (int i = 0; i < list_length(connectionList); i++)
{
MultiConnection *connection = (MultiConnection *) list_nth(connectionList, i);
char *shardSizesQuery = (char *) list_nth(shardSizesQueryList, i);
if (useDistributedTransaction)
{
/* run the size query in a distributed transaction */
RemoteTransactionBeginIfNecessary(connection);
}
int querySent = SendRemoteCommand(connection, shardSizesQuery);
if (querySent == 0)
{
ReportConnectionError(connection, WARNING);
}
}
return connectionList;
}
/*
* OpenConnectionToNodes opens a single connection per node
* for the given workerNodeList.
@ -250,20 +304,25 @@ OpenConnectionToNodes(List *workerNodeList)
/*
* GenerateShardSizesQueryList generates a query per node that
* will return all shard_name, shard_size pairs from the node.
* GenerateShardStatisticsQueryList generates a query per node that will return:
* - all shard_name, shard_size pairs from the node (if includeShardMinMax is false)
* - all shard_id, shard_minvalue, shard_maxvalue, shard_size quartuples from the node (if true)
*/
static List *
GenerateShardSizesQueryList(List *workerNodeList)
GenerateShardStatisticsQueryList(List *workerNodeList, List *citusTableIds, bool
useShardMinMaxQuery)
{
List *shardSizesQueryList = NIL;
List *shardStatisticsQueryList = NIL;
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)
{
char *shardSizesQuery = GenerateAllShardNameAndSizeQueryForNode(workerNode);
shardSizesQueryList = lappend(shardSizesQueryList, shardSizesQuery);
char *shardStatisticsQuery = GenerateAllShardStatisticsQueryForNode(workerNode,
citusTableIds,
useShardMinMaxQuery);
shardStatisticsQueryList = lappend(shardStatisticsQueryList,
shardStatisticsQuery);
}
return shardSizesQueryList;
return shardStatisticsQueryList;
}
@ -572,37 +631,50 @@ GenerateSizeQueryOnMultiplePlacements(List *shardIntervalList, char *sizeQuery)
/*
* GenerateAllShardNameAndSizeQueryForNode generates a query that returns all
* shard_name, shard_size pairs for the given node.
* GenerateAllShardStatisticsQueryForNode generates a query that returns:
* - all shard_name, shard_size pairs for the given node (if useShardMinMaxQuery is false)
* - all shard_id, shard_minvalue, shard_maxvalue, shard_size quartuples (if true)
*/
static char *
GenerateAllShardNameAndSizeQueryForNode(WorkerNode *workerNode)
GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode, List *citusTableIds, bool
useShardMinMaxQuery)
{
List *allCitusTableIds = AllCitusTableIds();
StringInfo allShardNameAndSizeQuery = makeStringInfo();
StringInfo allShardStatisticsQuery = makeStringInfo();
Oid relationId = InvalidOid;
foreach_oid(relationId, allCitusTableIds)
foreach_oid(relationId, citusTableIds)
{
List *shardIntervalsOnNode = ShardIntervalsOnWorkerGroup(workerNode, relationId);
char *shardNameAndSizeQuery =
GenerateShardNameAndSizeQueryForShardList(shardIntervalsOnNode);
appendStringInfoString(allShardNameAndSizeQuery, shardNameAndSizeQuery);
char *shardStatisticsQuery =
GenerateShardStatisticsQueryForShardList(shardIntervalsOnNode,
useShardMinMaxQuery);
appendStringInfoString(allShardStatisticsQuery, shardStatisticsQuery);
}
/* Add a dummy entry so that UNION ALL doesn't complain */
appendStringInfo(allShardNameAndSizeQuery, "SELECT NULL::text, 0::bigint;");
return allShardNameAndSizeQuery->data;
if (useShardMinMaxQuery)
{
/* 0 for shard_id, NULL for min, NULL for text, 0 for shard_size */
appendStringInfo(allShardStatisticsQuery,
"SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;");
}
else
{
/* NULL for shard_name, 0 for shard_size */
appendStringInfo(allShardStatisticsQuery, "SELECT NULL::text, 0::bigint;");
}
return allShardStatisticsQuery->data;
}
/*
* GenerateShardNameAndSizeQueryForShardList generates a SELECT shard_name - shard_size query to get
* size of multiple tables.
* GenerateShardStatisticsQueryForShardList generates one of the two types of queries:
* - SELECT shard_name - shard_size (if useShardMinMaxQuery is false)
* - SELECT shard_id, shard_minvalue, shard_maxvalue, shard_size (if true)
*/
static char *
GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList)
GenerateShardStatisticsQueryForShardList(List *shardIntervalList, bool
useShardMinMaxQuery)
{
StringInfo selectQuery = makeStringInfo();
@ -618,8 +690,15 @@ GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList)
char *shardQualifiedName = quote_qualified_identifier(schemaName, shardName);
char *quotedShardName = quote_literal_cstr(shardQualifiedName);
appendStringInfo(selectQuery, "SELECT %s AS shard_name, ", quotedShardName);
appendStringInfo(selectQuery, PG_RELATION_SIZE_FUNCTION, quotedShardName);
if (useShardMinMaxQuery)
{
AppendShardSizeMinMaxQuery(selectQuery, shardId, shardInterval, shardName,
quotedShardName);
}
else
{
AppendShardSizeQuery(selectQuery, shardInterval, quotedShardName);
}
appendStringInfo(selectQuery, " UNION ALL ");
}
@ -627,6 +706,54 @@ GenerateShardNameAndSizeQueryForShardList(List *shardIntervalList)
}
/*
* AppendShardSizeMinMaxQuery appends a query in the following form to selectQuery
* SELECT shard_id, shard_minvalue, shard_maxvalue, shard_size
*/
static void
AppendShardSizeMinMaxQuery(StringInfo selectQuery, uint64 shardId,
ShardInterval *shardInterval, char *shardName,
char *quotedShardName)
{
if (IsCitusTableType(shardInterval->relationId, APPEND_DISTRIBUTED))
{
/* fill in the partition column name */
const uint32 unusedTableId = 1;
Var *partitionColumn = PartitionColumn(shardInterval->relationId,
unusedTableId);
char *partitionColumnName = get_attname(shardInterval->relationId,
partitionColumn->varattno, false);
appendStringInfo(selectQuery,
"SELECT " UINT64_FORMAT
" AS shard_id, min(%s)::text AS shard_minvalue, max(%s)::text AS shard_maxvalue, pg_relation_size(%s) AS shard_size FROM %s ",
shardId, partitionColumnName,
partitionColumnName,
quotedShardName, shardName);
}
else
{
/* we don't need to update min/max for non-append distributed tables because they don't change */
appendStringInfo(selectQuery,
"SELECT " UINT64_FORMAT
" AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size(%s) AS shard_size ",
shardId, quotedShardName);
}
}
/*
* AppendShardSizeQuery appends a query in the following form to selectQuery
* SELECT shard_name, shard_size
*/
static void
AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval,
char *quotedShardName)
{
appendStringInfo(selectQuery, "SELECT %s AS shard_name, ", quotedShardName);
appendStringInfo(selectQuery, PG_RELATION_SIZE_FUNCTION, quotedShardName);
}
/*
* ErrorIfNotSuitableToGetSize determines whether the table is suitable to find
* its' size with internal functions.

View File

@ -1384,16 +1384,34 @@ AddNodeMetadata(char *nodeName, int32 nodePort,
*nodeAlreadyExists = false;
/*
* Take an exclusive lock on pg_dist_node to serialize node changes.
* Prevent / wait for concurrent modification before checking whether
* the worker already exists in pg_dist_node.
*/
LockRelationOid(DistNodeRelationId(), RowShareLock);
WorkerNode *workerNode = FindWorkerNodeAnyCluster(nodeName, nodePort);
if (workerNode != NULL)
{
/* return early without holding locks when the node already exists */
*nodeAlreadyExists = true;
return workerNode->nodeId;
}
/*
* We are going to change pg_dist_node, prevent any concurrent reads that
* are not tolerant to concurrent node addition by taking an exclusive
* lock (conflicts with all but AccessShareLock).
*
* We may want to relax or have more fine-grained locking in the future
* to allow users to add multiple nodes concurrently.
*/
LockRelationOid(DistNodeRelationId(), ExclusiveLock);
WorkerNode *workerNode = FindWorkerNodeAnyCluster(nodeName, nodePort);
/* recheck in case 2 node additions pass the first check concurrently */
workerNode = FindWorkerNodeAnyCluster(nodeName, nodePort);
if (workerNode != NULL)
{
/* fill return data and return */
*nodeAlreadyExists = true;
return workerNode->nodeId;

View File

@ -332,7 +332,7 @@ DropShards(Oid relationId, char *schemaName, char *relationName,
*/
if (MultiShardCommitProtocol == COMMIT_PROTOCOL_2PC)
{
CoordinatedTransactionUse2PC();
CoordinatedTransactionShouldUse2PC();
}
List *dropTaskList = DropTaskList(relationId, schemaName, relationName,

View File

@ -85,6 +85,7 @@ PG_FUNCTION_INFO_V1(master_get_table_ddl_events);
PG_FUNCTION_INFO_V1(master_get_new_shardid);
PG_FUNCTION_INFO_V1(master_get_new_placementid);
PG_FUNCTION_INFO_V1(master_get_active_worker_nodes);
PG_FUNCTION_INFO_V1(citus_get_active_worker_nodes);
PG_FUNCTION_INFO_V1(master_get_round_robin_candidate_nodes);
PG_FUNCTION_INFO_V1(master_stage_shard_row);
PG_FUNCTION_INFO_V1(master_stage_shard_placement_row);
@ -442,12 +443,12 @@ master_stage_shard_placement_row(PG_FUNCTION_ARGS)
/*
* master_get_active_worker_nodes returns a set of active worker host names and
* citus_get_active_worker_nodes returns a set of active worker host names and
* port numbers in deterministic order. Currently we assume that all worker
* nodes in pg_dist_node are active.
*/
Datum
master_get_active_worker_nodes(PG_FUNCTION_ARGS)
citus_get_active_worker_nodes(PG_FUNCTION_ARGS)
{
FuncCallContext *functionContext = NULL;
uint32 workerNodeIndex = 0;
@ -512,6 +513,16 @@ master_get_active_worker_nodes(PG_FUNCTION_ARGS)
}
/*
* master_get_active_worker_nodes is a wrapper function for old UDF name.
*/
Datum
master_get_active_worker_nodes(PG_FUNCTION_ARGS)
{
return citus_get_active_worker_nodes(fcinfo);
}
/* Finds the relationId from a potentially qualified relation name. */
Oid
ResolveRelationId(text *relationName, bool missingOk)

View File

@ -32,7 +32,9 @@
#include "distributed/connection_management.h"
#include "distributed/deparse_shard_query.h"
#include "distributed/distributed_planner.h"
#include "distributed/foreign_key_relationship.h"
#include "distributed/listutils.h"
#include "distributed/lock_graph.h"
#include "distributed/multi_client_executor.h"
#include "distributed/multi_executor.h"
#include "distributed/metadata_utility.h"
@ -65,12 +67,22 @@ static List * RelationShardListForShardCreate(ShardInterval *shardInterval);
static bool WorkerShardStats(ShardPlacement *placement, Oid relationId,
const char *shardName, uint64 *shardSize,
text **shardMinValue, text **shardMaxValue);
static void UpdateTableStatistics(Oid relationId);
static void ReceiveAndUpdateShardsSizeAndMinMax(List *connectionList);
static void UpdateShardSizeAndMinMax(uint64 shardId, ShardInterval *shardInterval, Oid
relationId, List *shardPlacementList, uint64
shardSize, text *shardMinValue,
text *shardMaxValue);
static bool ProcessShardStatisticsRow(PGresult *result, int64 rowIndex, uint64 *shardId,
text **shardMinValue, text **shardMaxValue,
uint64 *shardSize);
/* exports for SQL callable functions */
PG_FUNCTION_INFO_V1(master_create_empty_shard);
PG_FUNCTION_INFO_V1(master_append_table_to_shard);
PG_FUNCTION_INFO_V1(citus_update_shard_statistics);
PG_FUNCTION_INFO_V1(master_update_shard_statistics);
PG_FUNCTION_INFO_V1(citus_update_table_statistics);
/*
@ -361,6 +373,23 @@ citus_update_shard_statistics(PG_FUNCTION_ARGS)
}
/*
* citus_update_table_statistics updates metadata (shard size and shard min/max
* values) of the shards of the given table
*/
Datum
citus_update_table_statistics(PG_FUNCTION_ARGS)
{
Oid distributedTableId = PG_GETARG_OID(0);
CheckCitusVersion(ERROR);
UpdateTableStatistics(distributedTableId);
PG_RETURN_VOID();
}
/*
* master_update_shard_statistics is a wrapper function for old UDF name.
*/
@ -782,7 +811,6 @@ UpdateShardStatistics(int64 shardId)
{
ShardInterval *shardInterval = LoadShardInterval(shardId);
Oid relationId = shardInterval->relationId;
char storageType = shardInterval->storageType;
bool statsOK = false;
uint64 shardSize = 0;
text *minValue = NULL;
@ -825,17 +853,166 @@ UpdateShardStatistics(int64 shardId)
errdetail("Setting shard statistics to NULL")));
}
/* make sure we don't process cancel signals */
HOLD_INTERRUPTS();
UpdateShardSizeAndMinMax(shardId, shardInterval, relationId, shardPlacementList,
shardSize, minValue, maxValue);
return shardSize;
}
/* update metadata for each shard placement we appended to */
/*
* UpdateTableStatistics updates metadata (shard size and shard min/max values)
* of the shards of the given table. Follows a similar logic to citus_shard_sizes function.
*/
static void
UpdateTableStatistics(Oid relationId)
{
List *citusTableIds = NIL;
citusTableIds = lappend_oid(citusTableIds, relationId);
/* we want to use a distributed transaction here to detect distributed deadlocks */
bool useDistributedTransaction = true;
/* we also want shard min/max values for append distributed tables */
bool useShardMinMaxQuery = true;
List *connectionList = SendShardStatisticsQueriesInParallel(citusTableIds,
useDistributedTransaction,
useShardMinMaxQuery);
ReceiveAndUpdateShardsSizeAndMinMax(connectionList);
}
/*
* ReceiveAndUpdateShardsSizeAndMinMax receives shard id, size
* and min max results from the given connection list, and updates
* respective entries in pg_dist_placement and pg_dist_shard
*/
static void
ReceiveAndUpdateShardsSizeAndMinMax(List *connectionList)
{
/*
* From the connection list, we will not get all the shards, but
* all the placements. We use a hash table to remember already visited shard ids
* since we update all the different placements of a shard id at once.
*/
HTAB *alreadyVisitedShardPlacements = CreateOidVisitedHashSet();
MultiConnection *connection = NULL;
foreach_ptr(connection, connectionList)
{
if (PQstatus(connection->pgConn) != CONNECTION_OK)
{
continue;
}
bool raiseInterrupts = true;
PGresult *result = GetRemoteCommandResult(connection, raiseInterrupts);
if (!IsResponseOK(result))
{
ReportResultError(connection, result, WARNING);
continue;
}
int64 rowCount = PQntuples(result);
int64 colCount = PQnfields(result);
/* not expected, but guard against a mismatched column count */
if (colCount != UPDATE_SHARD_STATISTICS_COLUMN_COUNT)
{
ereport(WARNING, (errmsg("unexpected number of columns from "
"citus_update_table_statistics")));
continue;
}
for (int64 rowIndex = 0; rowIndex < rowCount; rowIndex++)
{
uint64 shardId = 0;
text *shardMinValue = NULL;
text *shardMaxValue = NULL;
uint64 shardSize = 0;
if (!ProcessShardStatisticsRow(result, rowIndex, &shardId, &shardMinValue,
&shardMaxValue, &shardSize))
{
/* this row has no valid shard statistics */
continue;
}
if (OidVisited(alreadyVisitedShardPlacements, shardId))
{
/* We have already updated this placement list */
continue;
}
VisitOid(alreadyVisitedShardPlacements, shardId);
ShardInterval *shardInterval = LoadShardInterval(shardId);
Oid relationId = shardInterval->relationId;
List *shardPlacementList = ActiveShardPlacementList(shardId);
UpdateShardSizeAndMinMax(shardId, shardInterval, relationId,
shardPlacementList, shardSize, shardMinValue,
shardMaxValue);
}
PQclear(result);
ForgetResults(connection);
}
hash_destroy(alreadyVisitedShardPlacements);
}
/*
* ProcessShardStatisticsRow processes a row of shard statistics of the input PGresult
* - it returns true if this row belongs to a valid shard
* - it returns false if this row has no valid shard statistics (shardId = INVALID_SHARD_ID)
*/
static bool
ProcessShardStatisticsRow(PGresult *result, int64 rowIndex, uint64 *shardId,
text **shardMinValue, text **shardMaxValue, uint64 *shardSize)
{
*shardId = ParseIntField(result, rowIndex, 0);
/* check for the dummy entries we put so that UNION ALL wouldn't complain */
if (*shardId == INVALID_SHARD_ID)
{
/* this row has no valid shard statistics */
return false;
}
char *minValueResult = PQgetvalue(result, rowIndex, 1);
char *maxValueResult = PQgetvalue(result, rowIndex, 2);
*shardMinValue = cstring_to_text(minValueResult);
*shardMaxValue = cstring_to_text(maxValueResult);
*shardSize = ParseIntField(result, rowIndex, 3);
return true;
}
/*
* UpdateShardSizeAndMinMax updates the shardlength (shard size) of the given
* shard and its placements in pg_dist_placement, and updates the shard min value
* and shard max value of the given shard in pg_dist_shard if the relationId belongs
* to an append-distributed table
*/
static void
UpdateShardSizeAndMinMax(uint64 shardId, ShardInterval *shardInterval, Oid relationId,
List *shardPlacementList, uint64 shardSize, text *shardMinValue,
text *shardMaxValue)
{
char storageType = shardInterval->storageType;
ShardPlacement *placement = NULL;
/* update metadata for each shard placement */
foreach_ptr(placement, shardPlacementList)
{
uint64 placementId = placement->placementId;
int32 groupId = placement->groupId;
DeleteShardPlacementRow(placementId);
InsertShardPlacementRow(shardId, placementId, SHARD_STATE_ACTIVE, shardSize,
InsertShardPlacementRow(shardId, placementId, SHARD_STATE_ACTIVE,
shardSize,
groupId);
}
@ -843,18 +1020,9 @@ UpdateShardStatistics(int64 shardId)
if (IsCitusTableType(relationId, APPEND_DISTRIBUTED))
{
DeleteShardRow(shardId);
InsertShardRow(relationId, shardId, storageType, minValue, maxValue);
InsertShardRow(relationId, shardId, storageType, shardMinValue,
shardMaxValue);
}
if (QueryCancelPending)
{
ereport(WARNING, (errmsg("cancel requests are ignored during metadata update")));
QueryCancelPending = false;
}
RESUME_INTERRUPTS();
return shardSize;
}

View File

@ -38,8 +38,9 @@
#include "utils/rel.h"
#include "utils/syscache.h"
static void UpdateTaskQueryString(Query *query, Oid distributedTableId,
RangeTblEntry *valuesRTE, Task *task);
static void AddInsertAliasIfNeeded(Query *query);
static void UpdateTaskQueryString(Query *query, Task *task);
static bool ReplaceRelationConstraintByShardConstraint(List *relationShardList,
OnConflictExpr *onConflict);
static RelationShard * FindRelationShard(Oid inputRelationId, List *relationShardList);
@ -57,27 +58,43 @@ RebuildQueryStrings(Job *workerJob)
{
Query *originalQuery = workerJob->jobQuery;
List *taskList = workerJob->taskList;
Oid relationId = ((RangeTblEntry *) linitial(originalQuery->rtable))->relid;
RangeTblEntry *valuesRTE = ExtractDistributedInsertValuesRTE(originalQuery);
Task *task = NULL;
if (originalQuery->commandType == CMD_INSERT)
{
AddInsertAliasIfNeeded(originalQuery);
}
foreach_ptr(task, taskList)
{
Query *query = originalQuery;
if (UpdateOrDeleteQuery(query) && list_length(taskList) > 1)
/*
* Copy the query if there are multiple tasks. If there is a single
* task, we scribble on the original query to avoid the copying
* overhead.
*/
if (list_length(taskList) > 1)
{
query = copyObject(originalQuery);
}
if (UpdateOrDeleteQuery(query))
{
/*
* For UPDATE and DELETE queries, we may have subqueries and joins, so
* we use relation shard list to update shard names and call
* pg_get_query_def() directly.
*/
List *relationShardList = task->relationShardList;
UpdateRelationToShardNames((Node *) query, relationShardList);
}
else if (query->commandType == CMD_INSERT && task->modifyWithSubquery)
{
/* for INSERT..SELECT, adjust shard names in SELECT part */
List *relationShardList = task->relationShardList;
ShardInterval *shardInterval = LoadShardInterval(task->anchorShardId);
query = copyObject(originalQuery);
RangeTblEntry *copiedInsertRte = ExtractResultRelationRTEOrError(query);
RangeTblEntry *copiedSubqueryRte = ExtractSelectRangeTableEntry(query);
Query *copiedSubquery = copiedSubqueryRte->subquery;
@ -90,29 +107,18 @@ RebuildQueryStrings(Job *workerJob)
ReorderInsertSelectTargetLists(query, copiedInsertRte, copiedSubqueryRte);
/* setting an alias simplifies deparsing of RETURNING */
if (copiedInsertRte->alias == NULL)
{
Alias *alias = makeAlias(CITUS_TABLE_ALIAS, NIL);
copiedInsertRte->alias = alias;
}
UpdateRelationToShardNames((Node *) copiedSubquery, relationShardList);
}
else if (query->commandType == CMD_INSERT && (query->onConflict != NULL ||
valuesRTE != NULL))
if (query->commandType == CMD_INSERT)
{
RangeTblEntry *modifiedRelationRTE = linitial(originalQuery->rtable);
/*
* Always add an alias in UPSERTs and multi-row INSERTs to avoid
* deparsing issues (e.g. RETURNING might reference the original
* table name, which has been replaced by a shard name).
* We store the modified relation ID in the task so we can lazily call
* deparse_shard_query when the string is needed
*/
RangeTblEntry *rangeTableEntry = linitial(query->rtable);
if (rangeTableEntry->alias == NULL)
{
Alias *alias = makeAlias(CITUS_TABLE_ALIAS, NIL);
rangeTableEntry->alias = alias;
}
task->anchorDistributedTableId = modifiedRelationRTE->relid;
}
bool isQueryObjectOrText = GetTaskQueryType(task) == TASK_QUERY_TEXT ||
@ -122,7 +128,7 @@ RebuildQueryStrings(Job *workerJob)
? "(null)"
: ApplyLogRedaction(TaskQueryString(task)))));
UpdateTaskQueryString(query, relationId, valuesRTE, task);
UpdateTaskQueryString(query, task);
/*
* If parameters were resolved in the job query, then they are now also
@ -137,53 +143,68 @@ RebuildQueryStrings(Job *workerJob)
/*
* UpdateTaskQueryString updates the query string stored within the provided
* Task. If the Task has row values from a multi-row INSERT, those are injected
* into the provided query (using the provided valuesRTE, which must belong to
* the query) before deparse occurs (the query's full VALUES list will be
* restored before this function returns).
* AddInsertAliasIfNeeded adds an alias in UPSERTs and multi-row INSERTs to avoid
* deparsing issues (e.g. RETURNING might reference the original table name,
* which has been replaced by a shard name).
*/
static void
UpdateTaskQueryString(Query *query, Oid distributedTableId, RangeTblEntry *valuesRTE,
Task *task)
AddInsertAliasIfNeeded(Query *query)
{
Assert(query->commandType == CMD_INSERT);
if (query->onConflict == NULL &&
ExtractDistributedInsertValuesRTE(query) == NULL)
{
/* simple single-row insert does not need an alias */
return;
}
RangeTblEntry *rangeTableEntry = linitial(query->rtable);
if (rangeTableEntry->alias != NULL)
{
/* INSERT already has an alias */
return;
}
Alias *alias = makeAlias(CITUS_TABLE_ALIAS, NIL);
rangeTableEntry->alias = alias;
}
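For illustration, the INSERT shapes this helper distinguishes, per the checks above (table and columns are hypothetical):
```SQL
-- alias added: multi-row INSERT
INSERT INTO events (id, payload) VALUES (1, 'a'), (2, 'b');

-- alias added: UPSERT, so RETURNING deparses against citus_table_alias
INSERT INTO events (id, payload) VALUES (1, 'a')
ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload
RETURNING id;

-- no alias needed: simple single-row INSERT without ON CONFLICT
INSERT INTO events (id, payload) VALUES (3, 'c');
```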
/*
* UpdateTaskQueryString updates the query string stored within the provided
* Task. If the Task has row values from a multi-row INSERT, those are injected
* into the provided query before deparse occurs (the query's full VALUES list
* will be restored before this function returns).
*/
static void
UpdateTaskQueryString(Query *query, Task *task)
{
List *oldValuesLists = NIL;
if (valuesRTE != NULL)
{
Assert(valuesRTE->rtekind == RTE_VALUES);
Assert(task->rowValuesLists != NULL);
oldValuesLists = valuesRTE->values_lists;
valuesRTE->values_lists = task->rowValuesLists;
}
if (query->commandType != CMD_INSERT)
{
/*
* For UPDATE and DELETE queries, we may have subqueries and joins, so
* we use relation shard list to update shard names and call
* pg_get_query_def() directly.
*/
List *relationShardList = task->relationShardList;
UpdateRelationToShardNames((Node *) query, relationShardList);
}
else if (ShouldLazyDeparseQuery(task))
{
/*
* not all insert queries are copied before calling this
* function, so we do it here
*/
query = copyObject(query);
}
RangeTblEntry *valuesRTE = NULL;
if (query->commandType == CMD_INSERT)
{
/*
* We store this in the task so we can lazily call
* deparse_shard_query when the string is needed
*/
task->anchorDistributedTableId = distributedTableId;
/* extract the VALUES from the INSERT */
valuesRTE = ExtractDistributedInsertValuesRTE(query);
if (valuesRTE != NULL)
{
Assert(valuesRTE->rtekind == RTE_VALUES);
Assert(task->rowValuesLists != NULL);
oldValuesLists = valuesRTE->values_lists;
valuesRTE->values_lists = task->rowValuesLists;
}
if (ShouldLazyDeparseQuery(task))
{
/*
* not all insert queries are copied before calling this
* function, so we do it here
*/
query = copyObject(query);
}
}
SetTaskQueryIfShouldLazyDeparse(task, query);
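As a sketch of why the VALUES swap above matters: in a multi-row INSERT on a hash-distributed table (hypothetical below), rows can hash to different shards, so each task carries its own rowValuesLists and its deparsed query contains only that subset of rows.
```SQL
-- rows 1, 2 and 3 may land on different shards; each task's query string
-- is deparsed with only the VALUES rows that belong to its shard
INSERT INTO events (id, payload) VALUES (1, 'a'), (2, 'b'), (3, 'c');
```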

View File

@ -49,6 +49,7 @@
#include "executor/executor.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
#include "nodes/pg_list.h"
#include "parser/parsetree.h"
#include "parser/parse_type.h"
#if PG_VERSION_NUM >= PG_VERSION_12
@ -98,6 +99,7 @@ static PlannedStmt * FinalizeNonRouterPlan(PlannedStmt *localPlan,
DistributedPlan *distributedPlan,
CustomScan *customScan);
static PlannedStmt * FinalizeRouterPlan(PlannedStmt *localPlan, CustomScan *customScan);
static AppendRelInfo * FindTargetAppendRelInfo(PlannerInfo *root, int relationRteIndex);
static List * makeTargetListFromCustomScanList(List *custom_scan_tlist);
static List * makeCustomScanTargetlistFromExistingTargetList(List *existingTargetlist);
static int32 BlessRecordExpressionList(List *exprs);
@ -124,6 +126,7 @@ static PlannedStmt * PlanFastPathDistributedStmt(DistributedPlanningContext *pla
static PlannedStmt * PlanDistributedStmt(DistributedPlanningContext *planContext,
int rteIdCounter);
static RTEListProperties * GetRTEListProperties(List *rangeTableList);
static List * TranslatedVars(PlannerInfo *root, int relationIndex);
/* Distributed planner hook */
@ -1814,6 +1817,8 @@ multi_relation_restriction_hook(PlannerInfo *root, RelOptInfo *relOptInfo,
/* see comments on GetVarFromAssignedParam() */
relationRestriction->outerPlanParamsList = OuterPlanParamsList(root);
relationRestriction->translatedVars = TranslatedVars(root,
relationRestriction->index);
RelationRestrictionContext *relationRestrictionContext =
plannerRestrictionContext->relationRestrictionContext;
@ -1837,6 +1842,61 @@ multi_relation_restriction_hook(PlannerInfo *root, RelOptInfo *relOptInfo,
}
/*
* TranslatedVars deep copies the translated vars for the given relation index
* if there is any append rel list.
*/
static List *
TranslatedVars(PlannerInfo *root, int relationIndex)
{
List *translatedVars = NIL;
if (root->append_rel_list != NIL)
{
AppendRelInfo *targetAppendRelInfo =
FindTargetAppendRelInfo(root, relationIndex);
if (targetAppendRelInfo != NULL)
{
/* postgres deletes translated_vars after pg13, hence we deep copy them here */
Node *targetNode = NULL;
foreach_ptr(targetNode, targetAppendRelInfo->translated_vars)
{
translatedVars =
lappend(translatedVars, copyObject(targetNode));
}
}
}
return translatedVars;
}
/*
* FindTargetAppendRelInfo finds the target append rel info for the given
* relation rte index.
*/
static AppendRelInfo *
FindTargetAppendRelInfo(PlannerInfo *root, int relationRteIndex)
{
AppendRelInfo *appendRelInfo = NULL;
/* iterate on the queries that are part of UNION ALL subselects */
foreach_ptr(appendRelInfo, root->append_rel_list)
{
/*
* We're only interested in the child rel that is equal to the
* relation we're investigating. Here we don't need to find the offset
* because postgres adds an offset to child_relid and parent_relid after
* calling multi_relation_restriction_hook.
*/
if (appendRelInfo->child_relid == relationRteIndex)
{
return appendRelInfo;
}
}
return NULL;
}
/*
* AdjustReadIntermediateResultCost adjusts the row count and total cost
* of a read_intermediate_result call based on the file size.
@ -2143,6 +2203,33 @@ CreateAndPushPlannerRestrictionContext(void)
}
/*
* TranslatedVarsForRteIdentity gets an rteIdentity and returns the
* translatedVars that belong to the range table relation. If no
* translatedVars are found, the function returns NIL.
*/
List *
TranslatedVarsForRteIdentity(int rteIdentity)
{
PlannerRestrictionContext *currentPlannerRestrictionContext =
CurrentPlannerRestrictionContext();
List *relationRestrictionList =
currentPlannerRestrictionContext->relationRestrictionContext->
relationRestrictionList;
RelationRestriction *relationRestriction = NULL;
foreach_ptr(relationRestriction, relationRestrictionList)
{
if (GetRTEIdentity(relationRestriction->rte) == rteIdentity)
{
return relationRestriction->translatedVars;
}
}
return NIL;
}
/*
* CurrentRestrictionContext returns the most recently added
* PlannerRestrictionContext from the plannerRestrictionContextList list.

View File

@ -824,7 +824,21 @@ static List *
QueryTargetList(MultiNode *multiNode)
{
List *projectNodeList = FindNodesOfType(multiNode, T_MultiProject);
Assert(list_length(projectNodeList) > 0);
if (list_length(projectNodeList) == 0)
{
/*
* The physical planner assumes that all worker queries would have
* target list entries based on the fact that at least the columns
* used in the JOINs have to be on the target list. However, there is
* an exception to that if there is a cartesian product join and
* there are no additional target list entries belonging to one side
* of the JOIN. Once we support cartesian product joins, we should
* remove this error.
*/
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot perform distributed planning on this query"),
errdetail("Cartesian products are currently unsupported")));
}
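A hypothetical query of the shape the comment describes; whether a given query reaches this exact path depends on the logical plan, so this is only a sketch:
```SQL
-- no join clause and no column of dist_b in the target list,
-- i.e. a cartesian product between two distributed tables
SELECT dist_a.value FROM dist_a, dist_b;
```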
MultiProject *topProjectNode = (MultiProject *) linitial(projectNodeList);
List *columnList = topProjectNode->columnList;

View File

@ -555,6 +555,14 @@ ModifyPartialQuerySupported(Query *queryTree, bool multiShardQuery,
{
ListCell *cteCell = NULL;
/* CTEs still not supported for INSERTs. */
if (queryTree->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support common table expressions with INSERT queries.",
NULL, NULL);
}
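For illustration, the kind of statement this new deferred error guards against when it reaches the router planner (table name hypothetical):
```SQL
-- a CTE combined with INSERT is rejected by the router planner
WITH new_rows AS (SELECT s AS id FROM generate_series(1, 10) s)
INSERT INTO dist_table (id)
SELECT id FROM new_rows;
```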
foreach(cteCell, queryTree->cteList)
{
CommonTableExpr *cte = (CommonTableExpr *) lfirst(cteCell);
@ -562,31 +570,22 @@ ModifyPartialQuerySupported(Query *queryTree, bool multiShardQuery,
if (cteQuery->commandType != CMD_SELECT)
{
/* Modifying CTEs still not supported for INSERTs & multi shard queries. */
if (queryTree->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support non-select common table expressions with non-select queries.",
NULL, NULL);
}
/* Modifying CTEs still not supported for multi shard queries. */
if (multiShardQuery)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support non-select common table expressions with multi shard queries.",
NULL, NULL);
}
/* Modifying CTEs exclude both INSERT CTEs & INSERT queries. */
else if (cteQuery->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support INSERT common table expressions.",
NULL, NULL);
}
}
/* Modifying CTEs exclude both INSERT CTEs & INSERT queries. */
if (cteQuery->commandType == CMD_INSERT)
{
return DeferredError(ERRCODE_FEATURE_NOT_SUPPORTED,
"Router planner doesn't support INSERT common table expressions.",
NULL, NULL);
}
if (cteQuery->hasForUpdate &&
FindNodeMatchingCheckFunctionInRangeTableList(cteQuery->rtable,
IsReferenceTableRTE))

View File

@ -61,6 +61,8 @@ typedef struct AttributeEquivalenceClass
{
uint32 equivalenceId;
List *equivalentAttributes;
Index unionQueryPartitionKeyIndex;
} AttributeEquivalenceClass;
/*
@ -83,7 +85,8 @@ typedef struct AttributeEquivalenceClassMember
static bool ContextContainsLocalRelation(RelationRestrictionContext *restrictionContext);
static Var * FindUnionAllVar(PlannerInfo *root, List *appendRelList, Oid relationOid,
static int RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo);
static Var * FindUnionAllVar(PlannerInfo *root, List *translatedVars, Oid relationOid,
Index relationRteIndex, Index *partitionKeyIndex);
static bool ContainsMultipleDistributedRelations(PlannerRestrictionContext *
plannerRestrictionContext);
@ -91,11 +94,11 @@ static List * GenerateAttributeEquivalencesForRelationRestrictions(
RelationRestrictionContext *restrictionContext);
static AttributeEquivalenceClass * AttributeEquivalenceClassForEquivalenceClass(
EquivalenceClass *plannerEqClass, RelationRestriction *relationRestriction);
static void AddToAttributeEquivalenceClass(AttributeEquivalenceClass **
static void AddToAttributeEquivalenceClass(AttributeEquivalenceClass *
attributeEquivalenceClass,
PlannerInfo *root, Var *varToBeAdded);
static void AddRteSubqueryToAttributeEquivalenceClass(AttributeEquivalenceClass *
*attributeEquivalenceClass,
attributeEquivalenceClass,
RangeTblEntry *
rangeTableEntry,
PlannerInfo *root,
@ -103,17 +106,17 @@ static void AddRteSubqueryToAttributeEquivalenceClass(AttributeEquivalenceClass
static Query * GetTargetSubquery(PlannerInfo *root, RangeTblEntry *rangeTableEntry,
Var *varToBeAdded);
static void AddUnionAllSetOperationsToAttributeEquivalenceClass(
AttributeEquivalenceClass **
AttributeEquivalenceClass *
attributeEquivalenceClass,
PlannerInfo *root,
Var *varToBeAdded);
static void AddUnionSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass **
static void AddUnionSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass *
attributeEquivalenceClass,
PlannerInfo *root,
SetOperationStmt *
setOperation,
Var *varToBeAdded);
static void AddRteRelationToAttributeEquivalenceClass(AttributeEquivalenceClass **
static void AddRteRelationToAttributeEquivalenceClass(AttributeEquivalenceClass *
attrEquivalenceClass,
RangeTblEntry *rangeTableEntry,
Var *varToBeAdded);
@ -141,7 +144,7 @@ static AttributeEquivalenceClass * GenerateEquivalenceClassForRelationRestrictio
RelationRestrictionContext
*
relationRestrictionContext);
static void ListConcatUniqueAttributeClassMemberLists(AttributeEquivalenceClass **
static void ListConcatUniqueAttributeClassMemberLists(AttributeEquivalenceClass *
firstClass,
AttributeEquivalenceClass *
secondClass);
@ -156,9 +159,13 @@ static JoinRestrictionContext * FilterJoinRestrictionContext(
static bool RangeTableArrayContainsAnyRTEIdentities(RangeTblEntry **rangeTableEntries, int
rangeTableArrayLength, Relids
queryRteIdentities);
static int RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo);
static Relids QueryRteIdentities(Query *queryTree);
#if PG_VERSION_NUM >= PG_VERSION_13
static int ParentCountPriorToAppendRel(List *appendRelList, AppendRelInfo *appendRelInfo);
#endif
/*
* AllDistributionKeysInQueryAreEqual returns true if either
* (i) there exists join in the query and all relations joined on their
@ -249,7 +256,7 @@ SafeToPushdownUnionSubquery(PlannerRestrictionContext *plannerRestrictionContext
plannerRestrictionContext->relationRestrictionContext;
JoinRestrictionContext *joinRestrictionContext =
plannerRestrictionContext->joinRestrictionContext;
Index unionQueryPartitionKeyIndex = 0;
AttributeEquivalenceClass *attributeEquivalence =
palloc0(sizeof(AttributeEquivalenceClass));
ListCell *relationRestrictionCell = NULL;
@ -279,7 +286,8 @@ SafeToPushdownUnionSubquery(PlannerRestrictionContext *plannerRestrictionContext
*/
if (appendRelList != NULL)
{
varToBeAdded = FindUnionAllVar(relationPlannerRoot, appendRelList,
varToBeAdded = FindUnionAllVar(relationPlannerRoot,
relationRestriction->translatedVars,
relationRestriction->relationId,
relationRestriction->index,
&partitionKeyIndex);
@ -323,17 +331,17 @@ SafeToPushdownUnionSubquery(PlannerRestrictionContext *plannerRestrictionContext
* we check whether all the relations have partition keys in the
* same position.
*/
if (unionQueryPartitionKeyIndex == InvalidAttrNumber)
if (attributeEquivalence->unionQueryPartitionKeyIndex == InvalidAttrNumber)
{
unionQueryPartitionKeyIndex = partitionKeyIndex;
attributeEquivalence->unionQueryPartitionKeyIndex = partitionKeyIndex;
}
else if (unionQueryPartitionKeyIndex != partitionKeyIndex)
else if (attributeEquivalence->unionQueryPartitionKeyIndex != partitionKeyIndex)
{
continue;
}
Assert(varToBeAdded != NULL);
AddToAttributeEquivalenceClass(&attributeEquivalence, relationPlannerRoot,
AddToAttributeEquivalenceClass(attributeEquivalence, relationPlannerRoot,
varToBeAdded);
}
@ -373,66 +381,74 @@ SafeToPushdownUnionSubquery(PlannerRestrictionContext *plannerRestrictionContext
}
/*
* RangeTableOffsetCompat returns the range table offset (in glob->finalrtable) for the appendRelInfo.
* For PG < 13 this is a no-op.
*/
static int
RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo)
{
#if PG_VERSION_NUM >= PG_VERSION_13
int parentCount = ParentCountPriorToAppendRel(root->append_rel_list, appendRelInfo);
int skipParentCount = parentCount - 1;
int i = 1;
for (; i < root->simple_rel_array_size; i++)
{
RangeTblEntry *rte = root->simple_rte_array[i];
if (rte->inh)
{
/*
* We skip the previous parents because we want to find the offset
* for the given append rel info.
*/
if (skipParentCount > 0)
{
skipParentCount--;
continue;
}
break;
}
}
int indexInRtable = (i - 1);
/*
* Postgres adds the global rte array size to parent_relid as an offset.
* Here we do the reverse operation: Commit on postgres side:
* 6ef77cf46e81f45716ec981cb08781d426181378
*/
int parentRelIndex = appendRelInfo->parent_relid - 1;
return parentRelIndex - indexInRtable;
#else
return 0;
#endif
}
/*
* FindUnionAllVar finds the variable used in union all for the side that has
* relationRteIndex as its index and the same varattno as the partition key of
* the given relation with relationOid.
*/
static Var *
FindUnionAllVar(PlannerInfo *root, List *appendRelList, Oid relationOid,
FindUnionAllVar(PlannerInfo *root, List *translatedVars, Oid relationOid,
Index relationRteIndex, Index *partitionKeyIndex)
{
ListCell *appendRelCell = NULL;
AppendRelInfo *targetAppendRelInfo = NULL;
AttrNumber childAttrNumber = 0;
*partitionKeyIndex = 0;
/* iterate on the queries that are part of UNION ALL subselects */
foreach(appendRelCell, appendRelList)
{
AppendRelInfo *appendRelInfo = (AppendRelInfo *) lfirst(appendRelCell);
int rtoffset = RangeTableOffsetCompat(root, appendRelInfo);
/*
* We're only interested in the child rel that is equal to the
* relation we're investigating.
*/
if (appendRelInfo->child_relid - rtoffset == relationRteIndex)
{
targetAppendRelInfo = appendRelInfo;
break;
}
}
if (!targetAppendRelInfo)
if (!IsCitusTableType(relationOid, STRICTLY_PARTITIONED_DISTRIBUTED_TABLE))
{
/* we only care about hash and range partitioned tables */
*partitionKeyIndex = 0;
return NULL;
}
Var *relationPartitionKey = DistPartitionKeyOrError(relationOid);
#if PG_VERSION_NUM >= PG_VERSION_13
for (; childAttrNumber < targetAppendRelInfo->num_child_cols; childAttrNumber++)
{
int curAttNo = targetAppendRelInfo->parent_colnos[childAttrNumber];
if (curAttNo == relationPartitionKey->varattno)
{
*partitionKeyIndex = (childAttrNumber + 1);
int rtoffset = RangeTableOffsetCompat(root, targetAppendRelInfo);
relationPartitionKey->varno = targetAppendRelInfo->child_relid - rtoffset;
return relationPartitionKey;
}
}
#else
AttrNumber childAttrNumber = 0;
*partitionKeyIndex = 0;
ListCell *translatedVarCell;
List *translaterVars = targetAppendRelInfo->translated_vars;
foreach(translatedVarCell, translaterVars)
foreach(translatedVarCell, translatedVars)
{
Node *targetNode = (Node *) lfirst(translatedVarCell);
childAttrNumber++;
if (!IsA(targetNode, Var))
@ -449,7 +465,6 @@ FindUnionAllVar(PlannerInfo *root, List *appendRelList, Oid relationOid,
return targetVar;
}
}
#endif
return NULL;
}
@ -580,7 +595,6 @@ GenerateAllAttributeEquivalences(PlannerRestrictionContext *plannerRestrictionCo
JoinRestrictionContext *joinRestrictionContext =
plannerRestrictionContext->joinRestrictionContext;
/* reset the equivalence id counter per call to prevent overflows */
attributeEquivalenceId = 1;
@ -788,14 +802,14 @@ AttributeEquivalenceClassForEquivalenceClass(EquivalenceClass *plannerEqClass,
equivalenceParam, &outerNodeRoot);
if (expressionVar)
{
AddToAttributeEquivalenceClass(&attributeEquivalence, outerNodeRoot,
AddToAttributeEquivalenceClass(attributeEquivalence, outerNodeRoot,
expressionVar);
}
}
else if (IsA(strippedEquivalenceExpr, Var))
{
expressionVar = (Var *) strippedEquivalenceExpr;
AddToAttributeEquivalenceClass(&attributeEquivalence, plannerInfo,
AddToAttributeEquivalenceClass(attributeEquivalence, plannerInfo,
expressionVar);
}
}
@ -978,7 +992,7 @@ GenerateCommonEquivalence(List *attributeEquivalenceList,
if (AttributeClassContainsAttributeClassMember(attributeEquialanceMember,
commonEquivalenceClass))
{
ListConcatUniqueAttributeClassMemberLists(&commonEquivalenceClass,
ListConcatUniqueAttributeClassMemberLists(commonEquivalenceClass,
currentEquivalenceClass);
addedEquivalenceIds = bms_add_member(addedEquivalenceIds,
@ -1058,7 +1072,7 @@ GenerateEquivalenceClassForRelationRestriction(
* firstClass.
*/
static void
ListConcatUniqueAttributeClassMemberLists(AttributeEquivalenceClass **firstClass,
ListConcatUniqueAttributeClassMemberLists(AttributeEquivalenceClass *firstClass,
AttributeEquivalenceClass *secondClass)
{
ListCell *equivalenceClassMemberCell = NULL;
@ -1069,13 +1083,13 @@ ListConcatUniqueAttributeClassMemberLists(AttributeEquivalenceClass **firstClass
AttributeEquivalenceClassMember *newEqMember =
(AttributeEquivalenceClassMember *) lfirst(equivalenceClassMemberCell);
if (AttributeClassContainsAttributeClassMember(newEqMember, *firstClass))
if (AttributeClassContainsAttributeClassMember(newEqMember, firstClass))
{
continue;
}
(*firstClass)->equivalentAttributes = lappend((*firstClass)->equivalentAttributes,
newEqMember);
firstClass->equivalentAttributes = lappend(firstClass->equivalentAttributes,
newEqMember);
}
}
@ -1150,10 +1164,10 @@ GenerateAttributeEquivalencesForJoinRestrictions(JoinRestrictionContext *
sizeof(AttributeEquivalenceClass));
attributeEquivalence->equivalenceId = attributeEquivalenceId++;
AddToAttributeEquivalenceClass(&attributeEquivalence,
AddToAttributeEquivalenceClass(attributeEquivalence,
joinRestriction->plannerInfo, leftVar);
AddToAttributeEquivalenceClass(&attributeEquivalence,
AddToAttributeEquivalenceClass(attributeEquivalence,
joinRestriction->plannerInfo, rightVar);
attributeEquivalenceList =
@ -1194,7 +1208,7 @@ GenerateAttributeEquivalencesForJoinRestrictions(JoinRestrictionContext *
* equivalence class
*/
static void
AddToAttributeEquivalenceClass(AttributeEquivalenceClass **attributeEquivalenceClass,
AddToAttributeEquivalenceClass(AttributeEquivalenceClass *attributeEquivalenceClass,
PlannerInfo *root, Var *varToBeAdded)
{
/* punt if it's a whole-row var rather than a plain column reference */
@ -1233,9 +1247,10 @@ AddToAttributeEquivalenceClass(AttributeEquivalenceClass **attributeEquivalenceC
*/
static void
AddRteSubqueryToAttributeEquivalenceClass(AttributeEquivalenceClass
**attributeEquivalenceClass,
*attributeEquivalenceClass,
RangeTblEntry *rangeTableEntry,
PlannerInfo *root, Var *varToBeAdded)
PlannerInfo *root,
Var *varToBeAdded)
{
RelOptInfo *baseRelOptInfo = find_base_rel(root, varToBeAdded->varno);
Query *targetSubquery = GetTargetSubquery(root, rangeTableEntry, varToBeAdded);
@ -1355,7 +1370,7 @@ GetTargetSubquery(PlannerInfo *root, RangeTblEntry *rangeTableEntry, Var *varToB
* var the given equivalence class.
*/
static void
AddUnionAllSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass **
AddUnionAllSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass *
attributeEquivalenceClass,
PlannerInfo *root,
Var *varToBeAdded)
@ -1377,41 +1392,101 @@ AddUnionAllSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass **
continue;
}
int rtoffset = RangeTableOffsetCompat(root, appendRelInfo);
int childRelId = appendRelInfo->child_relid - rtoffset;
/* set the varno accordingly for this specific child */
varToBeAdded->varno = appendRelInfo->child_relid - rtoffset;
if (root->simple_rel_array_size <= childRelId)
{
/* we prefer to return over an Assert or error to be defensive */
return;
}
AddToAttributeEquivalenceClass(attributeEquivalenceClass, root,
varToBeAdded);
}
}
/*
* RangeTableOffsetCompat returns the range table offset (in glob->finalrtable) for the appendRelInfo.
* For PG < 13 this is a no-op.
*/
static int
RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo)
{
#if PG_VERSION_NUM >= PG_VERSION_13
int i = 1;
for (; i < root->simple_rel_array_size; i++)
{
RangeTblEntry *rte = root->simple_rte_array[i];
RangeTblEntry *rte = root->simple_rte_array[childRelId];
if (rte->inh)
{
break;
/*
* This code-path may require improvements. If a leaf of a UNION ALL
* (e.g., an entry in appendRelList) itself is another UNION ALL
* (e.g., rte->inh = true), the logic here might get into an infinite
* recursion.
*
* The downside of "continue" here is that certain UNION ALL queries
* that are safe to pushdown may not be pushed down.
*/
continue;
}
else if (rte->rtekind == RTE_RELATION)
{
Index partitionKeyIndex = 0;
List *translatedVars = TranslatedVarsForRteIdentity(GetRTEIdentity(rte));
Var *varToBeAddedOnUnionAllSubquery =
FindUnionAllVar(root, translatedVars, rte->relid, childRelId,
&partitionKeyIndex);
if (partitionKeyIndex == 0)
{
/* no partition key on the target list */
continue;
}
if (attributeEquivalenceClass->unionQueryPartitionKeyIndex == 0)
{
/* the first partition key index we found */
attributeEquivalenceClass->unionQueryPartitionKeyIndex =
partitionKeyIndex;
}
else if (attributeEquivalenceClass->unionQueryPartitionKeyIndex !=
partitionKeyIndex)
{
/*
* Partition keys on the leaves of the UNION ALL query are on
* different ordinal positions. We cannot push down, so skip.
*/
continue;
}
if (varToBeAddedOnUnionAllSubquery != NULL)
{
AddToAttributeEquivalenceClass(attributeEquivalenceClass, root,
varToBeAddedOnUnionAllSubquery);
}
}
else
{
/* set the varno accordingly for this specific child */
varToBeAdded->varno = childRelId;
AddToAttributeEquivalenceClass(attributeEquivalenceClass, root,
varToBeAdded);
}
}
int indexInRtable = (i - 1);
return appendRelInfo->parent_relid - 1 - (indexInRtable);
#else
return 0;
#endif
}
#if PG_VERSION_NUM >= PG_VERSION_13
/*
* ParentCountPriorToAppendRel returns the number of parents that come before
* the given append rel info.
*/
static int
ParentCountPriorToAppendRel(List *appendRelList, AppendRelInfo *targetAppendRelInfo)
{
int targetParentIndex = targetAppendRelInfo->parent_relid;
Bitmapset *parent_ids = NULL;
AppendRelInfo *appendRelInfo = NULL;
foreach_ptr(appendRelInfo, appendRelList)
{
int curParentIndex = appendRelInfo->parent_relid;
if (curParentIndex <= targetParentIndex)
{
parent_ids = bms_add_member(parent_ids, curParentIndex);
}
}
return bms_num_members(parent_ids);
}
#endif
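To illustrate the partition-key-position check that unionQueryPartitionKeyIndex implements, a hedged example with hypothetical tables whose distribution column is tenant_id:
```SQL
-- safe to push down: the distribution column appears at the same ordinal
-- position in every UNION ALL branch
SELECT u.tenant_id, count(*)
FROM (SELECT tenant_id, value FROM events_2020
      UNION ALL
      SELECT tenant_id, value FROM events_2021) u
GROUP BY u.tenant_id;
```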
/*
* AddUnionSetOperationsToAttributeEquivalenceClass recursively iterates on all the
* setOperations and adds each corresponding target entry to the given equivalence
@ -1422,7 +1497,7 @@ RangeTableOffsetCompat(PlannerInfo *root, AppendRelInfo *appendRelInfo)
* messages.
*/
static void
AddUnionSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass **
AddUnionSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass *
attributeEquivalenceClass,
PlannerInfo *root,
SetOperationStmt *setOperation,
@ -1450,7 +1525,7 @@ AddUnionSetOperationsToAttributeEquivalenceClass(AttributeEquivalenceClass **
* the input rte to be an RTE_RELATION.
*/
static void
AddRteRelationToAttributeEquivalenceClass(AttributeEquivalenceClass **
AddRteRelationToAttributeEquivalenceClass(AttributeEquivalenceClass *
attrEquivalenceClass,
RangeTblEntry *rangeTableEntry,
Var *varToBeAdded)
@ -1487,8 +1562,8 @@ AddRteRelationToAttributeEquivalenceClass(AttributeEquivalenceClass **
attributeEqMember->rteIdentity = GetRTEIdentity(rangeTableEntry);
attributeEqMember->relationId = rangeTableEntry->relid;
(*attrEquivalenceClass)->equivalentAttributes =
lappend((*attrEquivalenceClass)->equivalentAttributes,
attrEquivalenceClass->equivalentAttributes =
lappend(attrEquivalenceClass->equivalentAttributes,
attributeEqMember);
}

View File

@ -556,30 +556,6 @@ RelayEventExtendNames(Node *parseTree, char *schemaName, uint64 shardId)
AppendShardIdToName(oldRelationName, shardId);
AppendShardIdToName(newRelationName, shardId);
/*
* PostgreSQL creates array types for each ordinary table, with
* the same name plus a prefix of '_'.
*
* ALTER TABLE ... RENAME TO ... also renames the underlying
* array type, and the DDL is run in parallel connections over
* all the placements and shards at once. Concurrent access
* here deadlocks.
*
* Let's provide an easier to understand error message here
* than the deadlock one.
*
* See also https://github.com/citusdata/citus/issues/1664
*/
int newRelationNameLength = strlen(*newRelationName);
if (newRelationNameLength >= (NAMEDATALEN - 1))
{
ereport(ERROR,
(errcode(ERRCODE_NAME_TOO_LONG),
errmsg(
"shard name %s exceeds %d characters",
*newRelationName, NAMEDATALEN - 1)));
}
}
else if (objectType == OBJECT_COLUMN)
{

View File

@ -701,6 +701,19 @@ RegisterCitusConfigVariables(void)
GUC_NO_SHOW_ALL,
NoticeIfSubqueryPushdownEnabled, NULL, NULL);
DefineCustomIntVariable(
"citus.remote_copy_flush_threshold",
gettext_noop("Sets the threshold for remote copy to be flushed."),
gettext_noop("When sending data over remote connections via the COPY protocol, "
"bytes are first buffered internally by libpq. If the number of "
"bytes buffered exceeds the threshold, Citus waits for all the "
"bytes to flush."),
&RemoteCopyFlushThreshold,
8 * 1024 * 1024, 0, INT_MAX,
PGC_USERSET,
GUC_UNIT_BYTE | GUC_NO_SHOW_ALL,
NULL, NULL, NULL);
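A hedged usage sketch of the new GUC; the value is arbitrary, and the definition above defaults to 8MB:
```SQL
-- raise the libpq buffering threshold at which Citus waits for COPY data to flush
SET citus.remote_copy_flush_threshold TO '16MB';
```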
DefineCustomIntVariable(
"citus.local_copy_flush_threshold",
gettext_noop("Sets the threshold for local copy to be flushed."),
@ -1238,6 +1251,16 @@ RegisterCitusConfigVariables(void)
GUC_STANDARD,
NULL, NULL, NULL);
DefineCustomIntVariable(
"citus.max_cached_connection_lifetime",
gettext_noop("Sets the maximum lifetime of cached connections to other nodes."),
NULL,
&MaxCachedConnectionLifetime,
10 * MS_PER_MINUTE, -1, INT_MAX,
PGC_USERSET,
GUC_UNIT_MS | GUC_STANDARD,
NULL, NULL, NULL);
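Similarly, a sketch for the connection-lifetime GUC; per the bounds above the default is 10 minutes and -1 disables the limit:
```SQL
-- cached connections older than 5 minutes are no longer reused
SET citus.max_cached_connection_lifetime TO '5min';
```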
DefineCustomIntVariable(
"citus.repartition_join_bucket_count_per_node",
gettext_noop("Sets the bucket size for repartition joins per node"),

View File

@ -0,0 +1,5 @@
-- citus--10.0-1--10.0-2
#include "../../columnar/sql/columnar--10.0-1--10.0-2.sql"
GRANT SELECT ON public.citus_tables TO public;

View File

@ -0,0 +1,18 @@
-- citus--10.0-2--10.0-3
#include "udfs/citus_update_table_statistics/10.0-3.sql"
CREATE OR REPLACE FUNCTION master_update_table_statistics(relation regclass)
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_update_table_statistics$$;
COMMENT ON FUNCTION pg_catalog.master_update_table_statistics(regclass)
IS 'updates shard statistics of the given table';
CREATE OR REPLACE FUNCTION pg_catalog.citus_get_active_worker_nodes(OUT node_name text, OUT node_port bigint)
RETURNS SETOF record
LANGUAGE C STRICT ROWS 100
AS 'MODULE_PATHNAME', $$citus_get_active_worker_nodes$$;
COMMENT ON FUNCTION pg_catalog.citus_get_active_worker_nodes()
IS 'fetch set of active worker nodes';

View File

@ -0,0 +1,4 @@
/* citus--10.0-2--10.0-1.sql */
#include "../../../columnar/sql/downgrades/columnar--10.0-2--10.0-1.sql"
REVOKE SELECT ON public.citus_tables FROM public;

View File

@ -0,0 +1,26 @@
-- citus--10.0-3--10.0-2
-- this is a downgrade path that will revert the changes made in citus--10.0-2--10.0-3.sql
DROP FUNCTION pg_catalog.citus_update_table_statistics(regclass);
#include "../udfs/citus_update_table_statistics/10.0-1.sql"
CREATE OR REPLACE FUNCTION master_update_table_statistics(relation regclass)
RETURNS VOID AS $$
DECLARE
colocated_tables regclass[];
BEGIN
SELECT get_colocated_table_array(relation) INTO colocated_tables;
PERFORM
master_update_shard_statistics(shardid)
FROM
pg_dist_shard
WHERE
logicalrelid = ANY (colocated_tables);
END;
$$ LANGUAGE 'plpgsql';
COMMENT ON FUNCTION master_update_table_statistics(regclass)
IS 'updates shard statistics of the given table and its colocated tables';
DROP FUNCTION pg_catalog.citus_get_active_worker_nodes(OUT text, OUT bigint);

View File

@ -0,0 +1,6 @@
CREATE OR REPLACE FUNCTION pg_catalog.citus_update_table_statistics(relation regclass)
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_update_table_statistics$$;
COMMENT ON FUNCTION pg_catalog.citus_update_table_statistics(regclass)
IS 'updates shard statistics of the given table';

View File

@ -1,17 +1,6 @@
CREATE FUNCTION pg_catalog.citus_update_table_statistics(relation regclass)
RETURNS VOID AS $$
DECLARE
colocated_tables regclass[];
BEGIN
SELECT get_colocated_table_array(relation) INTO colocated_tables;
PERFORM
master_update_shard_statistics(shardid)
FROM
pg_dist_shard
WHERE
logicalrelid = ANY (colocated_tables);
END;
$$ LANGUAGE 'plpgsql';
CREATE OR REPLACE FUNCTION pg_catalog.citus_update_table_statistics(relation regclass)
RETURNS VOID
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_update_table_statistics$$;
COMMENT ON FUNCTION pg_catalog.citus_update_table_statistics(regclass)
IS 'updates shard statistics of the given table and its colocated tables';
IS 'updates shard statistics of the given table';

View File

@ -5,12 +5,13 @@ FROM (
FROM pg_class c
JOIN pg_inherits i ON (c.oid = inhrelid)
JOIN pg_partitioned_table p ON (inhparent = partrelid)
JOIN pg_attribute a ON (partrelid = attrelid AND ARRAY[attnum] <@ string_to_array(partattrs::text, ' ')::int2[])
JOIN pg_attribute a ON (partrelid = attrelid)
JOIN pg_type t ON (atttypid = t.oid)
JOIN pg_namespace tn ON (t.typnamespace = tn.oid)
LEFT JOIN pg_am am ON (c.relam = am.oid),
pg_catalog.time_partition_range(c.oid)
WHERE c.relpartbound IS NOT NULL AND p.partstrat = 'r' AND p.partnatts = 1
AND a.attnum = ANY(partattrs::int2[])
) partitions
ORDER BY partrelid::text, lower_bound;

View File

@ -5,12 +5,13 @@ FROM (
FROM pg_class c
JOIN pg_inherits i ON (c.oid = inhrelid)
JOIN pg_partitioned_table p ON (inhparent = partrelid)
JOIN pg_attribute a ON (partrelid = attrelid AND ARRAY[attnum] <@ string_to_array(partattrs::text, ' ')::int2[])
JOIN pg_attribute a ON (partrelid = attrelid)
JOIN pg_type t ON (atttypid = t.oid)
JOIN pg_namespace tn ON (t.typnamespace = tn.oid)
LEFT JOIN pg_am am ON (c.relam = am.oid),
pg_catalog.time_partition_range(c.oid)
WHERE c.relpartbound IS NOT NULL AND p.partstrat = 'r' AND p.partnatts = 1
AND a.attnum = ANY(partattrs::int2[])
) partitions
ORDER BY partrelid::text, lower_bound;

View File

@ -18,9 +18,14 @@
#include "miscadmin.h"
#include "pgstat.h"
#include "distributed/transaction_management.h"
static Size MemoryContextTotalSpace(MemoryContext context);
PG_FUNCTION_INFO_V1(top_transaction_context_size);
PG_FUNCTION_INFO_V1(coordinated_transaction_should_use_2PC);
/*
* top_transaction_context_size returns current size of TopTransactionContext.
@ -54,3 +59,20 @@ MemoryContextTotalSpace(MemoryContext context)
return totalSpace;
}
/*
* coordinated_transaction_should_use_2PC returns true if the transaction is in a
* coordinated transaction and uses 2PC. If the transaction is not in a
* coordinated transaction, the function throws an error.
*/
Datum
coordinated_transaction_should_use_2PC(PG_FUNCTION_ARGS)
{
if (!InCoordinatedTransaction())
{
ereport(ERROR, (errmsg("The transaction is not a coordinated transaction")));
}
PG_RETURN_BOOL(GetCoordinatedTransactionShouldUse2PC());
}
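A usage sketch of this test UDF, assuming it is exposed as a SQL-callable function by the regression tests and that the session has already started a coordinated transaction by touching a hypothetical distributed table; calling it outside one errors out as coded above:
```SQL
BEGIN;
INSERT INTO dist_table VALUES (1);            -- starts a coordinated transaction
SELECT coordinated_transaction_should_use_2PC();
ROLLBACK;
```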

View File

@ -20,6 +20,7 @@
#include "distributed/connection_management.h"
#include "distributed/listutils.h"
#include "distributed/metadata_cache.h"
#include "distributed/placement_connection.h"
#include "distributed/remote_commands.h"
#include "distributed/remote_transaction.h"
#include "distributed/transaction_identifier.h"
@ -782,8 +783,16 @@ CoordinatedRemoteTransactionsPrepare(void)
continue;
}
StartRemoteTransactionPrepare(connection);
connectionList = lappend(connectionList, connection);
/*
* Check whether any DML or DDL was executed over the connection on any
* placement/table. If so, we start preparing the transaction; otherwise we
* skip the prepare since the connection didn't perform any writes (read-only).
*/
if (ConnectionModifiedPlacement(connection))
{
StartRemoteTransactionPrepare(connection);
connectionList = lappend(connectionList, connection);
}
}
bool raiseInterrupts = true;
@ -798,6 +807,10 @@ CoordinatedRemoteTransactionsPrepare(void)
if (transaction->transactionState != REMOTE_TRANS_PREPARING)
{
/*
* Verify that the connection didn't modify any placement
*/
Assert(!ConnectionModifiedPlacement(connection));
continue;
}

View File

@ -96,9 +96,16 @@ MemoryContext CommitContext = NULL;
/*
* Should this coordinated transaction use 2PC? Set by
* CoordinatedTransactionUse2PC(), e.g. if DDL was issued and
* MultiShardCommitProtocol was set to 2PC.
* MultiShardCommitProtocol was set to 2PC. But, even if this
* flag is set, the transaction manager is smart enough to only
* do 2PC on the remote connections that did a modification.
*
* As a variable name, ShouldCoordinatedTransactionUse2PC could be
* improved, but since CoordinatedTransactionShouldUse2PC() is already the
* public API function, we couldn't come up with a better name for the
* underlying variable at the moment.
*/
bool CoordinatedTransactionUses2PC = false;
bool ShouldCoordinatedTransactionUse2PC = false;
/* if disabled, distributed statements in a function may run as separate transactions */
bool FunctionOpensTransactionBlock = true;
@ -183,15 +190,29 @@ InCoordinatedTransaction(void)
/*
* CoordinatedTransactionUse2PC() signals that the current coordinated
* CoordinatedTransactionShouldUse2PC() signals that the current coordinated
* transaction should use 2PC to commit.
*
* Note that even if 2PC is enabled, it is only used for connections that make
* modification (DML or DDL).
*/
void
CoordinatedTransactionUse2PC(void)
CoordinatedTransactionShouldUse2PC(void)
{
Assert(InCoordinatedTransaction());
CoordinatedTransactionUses2PC = true;
ShouldCoordinatedTransactionUse2PC = true;
}
/*
* GetCoordinatedTransactionShouldUse2PC is a wrapper function to read the value
* of the ShouldCoordinatedTransactionUse2PC flag.
*/
bool
GetCoordinatedTransactionShouldUse2PC(void)
{
return ShouldCoordinatedTransactionUse2PC;
}
@ -297,28 +318,8 @@ CoordinatedTransactionCallback(XactEvent event, void *arg)
/* stop propagating notices from workers, we know the query is failed */
DisableWorkerMessagePropagation();
/*
* FIXME: Add warning for the COORD_TRANS_COMMITTED case. That
* can be reached if this backend fails after the
* XACT_EVENT_PRE_COMMIT state.
*/
RemoveIntermediateResultsDirectory();
/*
* Call other parts of citus that need to integrate into
* transaction management. Do so before doing other work, so the
* callbacks still can perform work if needed.
*/
{
/*
* On Windows it's not possible to delete a file before you've closed all
* handles to it (rmdir will return success but not take effect). Since
* we're in an ABORT handler it's very likely that not all handles have
* been closed; force them closed here before running
* RemoveIntermediateResultsDirectory.
*/
AtEOXact_Files(false);
RemoveIntermediateResultsDirectory();
}
ResetShardPlacementTransactionState();
/* handles both already prepared and open transactions */
@ -425,7 +426,7 @@ CoordinatedTransactionCallback(XactEvent event, void *arg)
*/
MarkFailedShardPlacements();
if (CoordinatedTransactionUses2PC)
if (ShouldCoordinatedTransactionUse2PC)
{
CoordinatedRemoteTransactionsPrepare();
CurrentCoordinatedTransactionState = COORD_TRANS_PREPARED;
@ -453,7 +454,7 @@ CoordinatedTransactionCallback(XactEvent event, void *arg)
* Check again whether shards/placement successfully
* committed. This handles failure at COMMIT/PREPARE time.
*/
PostCommitMarkFailedShardPlacements(CoordinatedTransactionUses2PC);
PostCommitMarkFailedShardPlacements(ShouldCoordinatedTransactionUse2PC);
break;
}
@ -485,7 +486,7 @@ ResetGlobalVariables()
FreeSavedExplainPlan();
dlist_init(&InProgressTransactions);
activeSetStmts = NULL;
CoordinatedTransactionUses2PC = false;
ShouldCoordinatedTransactionUse2PC = false;
TransactionModifiedNodeMetadata = false;
MetadataSyncOnCommit = false;
ResetWorkerErrorIndication();

View File

@ -96,7 +96,7 @@ SendCommandToWorkerAsUser(const char *nodeName, int32 nodePort, const char *node
uint32 connectionFlags = 0;
UseCoordinatedTransaction();
CoordinatedTransactionUse2PC();
CoordinatedTransactionShouldUse2PC();
MultiConnection *transactionConnection = GetNodeUserDatabaseConnection(
connectionFlags, nodeName,
@ -404,7 +404,7 @@ SendCommandToWorkersParamsInternal(TargetWorkerSet targetWorkerSet, const char *
List *workerNodeList = TargetWorkerSetNodeList(targetWorkerSet, ShareLock);
UseCoordinatedTransaction();
CoordinatedTransactionUse2PC();
CoordinatedTransactionShouldUse2PC();
/* open connections in parallel */
WorkerNode *workerNode = NULL;

View File

@ -100,9 +100,6 @@ static ForeignConstraintRelationshipNode * CreateOrFindNode(HTAB *adjacencyLists
relid);
static List * GetConnectedListHelper(ForeignConstraintRelationshipNode *node,
bool isReferencing);
static HTAB * CreateOidVisitedHashSet(void);
static bool OidVisited(HTAB *oidVisitedMap, Oid oid);
static void VisitOid(HTAB *oidVisitedMap, Oid oid);
static List * GetForeignConstraintRelationshipHelper(Oid relationId, bool isReferencing);
@ -442,7 +439,7 @@ GetConnectedListHelper(ForeignConstraintRelationshipNode *node, bool isReferenci
* As hash_create allocates memory in heap, callers are responsible to call
* hash_destroy when appropriate.
*/
static HTAB *
HTAB *
CreateOidVisitedHashSet(void)
{
HASHCTL info = { 0 };
@ -464,7 +461,7 @@ CreateOidVisitedHashSet(void)
/*
* OidVisited returns true if given oid is visited according to given oid hash-set.
*/
static bool
bool
OidVisited(HTAB *oidVisitedMap, Oid oid)
{
bool found = false;
@ -476,7 +473,7 @@ OidVisited(HTAB *oidVisitedMap, Oid oid)
/*
* VisitOid sets given oid as visited in given hash-set.
*/
static void
void
VisitOid(HTAB *oidVisitedMap, Oid oid)
{
bool found = false;

View File

@ -548,13 +548,14 @@ PartitionParentOid(Oid partitionOid)
/*
* LongestPartitionName is a utility function that returns the partition
* name which is the longest in terms of number of characters.
* PartitionWithLongestNameRelationId is a utility function that returns the
* oid of the partition table that has the longest name in terms of number of
* characters.
*/
char *
LongestPartitionName(Oid parentRelationId)
Oid
PartitionWithLongestNameRelationId(Oid parentRelationId)
{
char *longestName = NULL;
Oid longestNamePartitionId = InvalidOid;
int longestNameLength = 0;
List *partitionList = PartitionList(parentRelationId);
@ -565,12 +566,12 @@ LongestPartitionName(Oid parentRelationId)
int partitionNameLength = strnlen(partitionName, NAMEDATALEN);
if (partitionNameLength > longestNameLength)
{
longestName = partitionName;
longestNamePartitionId = partitionRelationId;
longestNameLength = partitionNameLength;
}
}
return longestName;
return longestNamePartitionId;
}

View File

@ -41,7 +41,7 @@ alter_role_if_exists(PG_FUNCTION_ARGS)
Node *parseTree = ParseTreeNode(utilityQuery);
ProcessUtilityParseTree(parseTree, utilityQuery, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(parseTree, utilityQuery, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
PG_RETURN_BOOL(true);
@ -98,7 +98,7 @@ worker_create_or_alter_role(PG_FUNCTION_ARGS)
ProcessUtilityParseTree(parseTree,
createRoleUtilityQuery,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL,
None_Receiver, NULL);
@ -126,7 +126,7 @@ worker_create_or_alter_role(PG_FUNCTION_ARGS)
ProcessUtilityParseTree(parseTree,
alterRoleUtilityQuery,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL,
None_Receiver, NULL);

View File

@ -12,6 +12,7 @@
#include "postgres.h"
#include "utils/lsyscache.h"
#include "distributed/metadata_utility.h"
#include "distributed/relay_utility.h"
#include "distributed/shard_utils.h"
@ -36,3 +37,21 @@ GetTableLocalShardOid(Oid citusTableOid, uint64 shardId)
return shardRelationOid;
}
/*
* GetLongestShardName is a utility function that returns the name of the shard of a
* table that has the longest name in terms of number of characters.
*
* Both the Oid and name of the table are required so we can create longest shard names
* after a RENAME.
*/
char *
GetLongestShardName(Oid citusTableOid, char *finalRelationName)
{
char *longestShardName = pstrdup(finalRelationName);
ShardInterval *shardInterval = LoadShardIntervalWithLongestShardName(citusTableOid);
AppendShardIdToName(&longestShardName, shardInterval->shardId);
return longestShardName;
}

View File

@ -111,12 +111,12 @@ worker_create_or_replace_object(PG_FUNCTION_ARGS)
RenameStmt *renameStmt = CreateRenameStatement(&address, newName);
const char *sqlRenameStmt = DeparseTreeNode((Node *) renameStmt);
ProcessUtilityParseTree((Node *) renameStmt, sqlRenameStmt,
PROCESS_UTILITY_TOPLEVEL,
PROCESS_UTILITY_QUERY,
NULL, None_Receiver, NULL);
}
/* apply create statement locally */
ProcessUtilityParseTree(parseTree, sqlStatement, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(parseTree, sqlStatement, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
/* type has been created */

View File

@ -396,7 +396,7 @@ worker_apply_shard_ddl_command(PG_FUNCTION_ARGS)
/* extend names in ddl command and apply extended command */
RelayEventExtendNames(ddlCommandNode, schemaName, shardId);
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
PG_RETURN_VOID();
@ -428,7 +428,7 @@ worker_apply_inter_shard_ddl_command(PG_FUNCTION_ARGS)
RelayEventExtendNamesForInterShardCommands(ddlCommandNode, leftShardId,
leftShardSchemaName, rightShardId,
rightShardSchemaName);
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(ddlCommandNode, ddlCommand, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
PG_RETURN_VOID();
@ -461,7 +461,7 @@ worker_apply_sequence_command(PG_FUNCTION_ARGS)
}
/* run the CREATE SEQUENCE command */
ProcessUtilityParseTree(commandNode, commandString, PROCESS_UTILITY_TOPLEVEL, NULL,
ProcessUtilityParseTree(commandNode, commandString, PROCESS_UTILITY_QUERY, NULL,
None_Receiver, NULL);
CommandCounterIncrement();
@ -669,7 +669,7 @@ worker_append_table_to_shard(PG_FUNCTION_ARGS)
SetUserIdAndSecContext(CitusExtensionOwner(), SECURITY_LOCAL_USERID_CHANGE);
ProcessUtilityParseTree((Node *) localCopyCommand, queryString->data,
PROCESS_UTILITY_TOPLEVEL, NULL, None_Receiver, NULL);
PROCESS_UTILITY_QUERY, NULL, None_Receiver, NULL);
SetUserIdAndSecContext(savedUserId, savedSecurityContext);
@ -782,7 +782,7 @@ AlterSequenceMinMax(Oid sequenceId, char *schemaName, char *sequenceName,
/* since the command is an AlterSeqStmt, a dummy command string works fine */
ProcessUtilityParseTree((Node *) alterSequenceStatement, dummyString,
PROCESS_UTILITY_TOPLEVEL, NULL, None_Receiver, NULL);
PROCESS_UTILITY_QUERY, NULL, None_Receiver, NULL);
}
}

View File

@ -24,6 +24,10 @@
/* controlled via GUC, should be accessed via EnableLocalReferenceForeignKeys() */
extern bool EnableLocalReferenceForeignKeys;
extern void SwitchToSequentialAndLocalExecutionIfRelationNameTooLong(Oid relationId,
char *
finalRelationName);
/*
* DistributeObjectOps specifies handlers for node/object type pairs.

View File

@ -200,6 +200,9 @@ extern int NodeConnectionTimeout;
/* maximum number of connections to cache per worker per session */
extern int MaxCachedConnectionsPerWorker;
/* maximum lifetime of connections in milliseconds */
extern int MaxCachedConnectionLifetime;
/* parameters used for outbound connections */
extern char *NodeConninfo;

View File

@ -67,6 +67,9 @@ typedef struct RelationRestriction
/* list of RootPlanParams for all outer nodes */
List *outerPlanParamsList;
/* list of translated vars; copied since Postgres deletes the original */
List *translatedVars;
} RelationRestriction;
typedef struct JoinRestrictionContext
@ -219,9 +222,9 @@ extern PlannedStmt * distributed_planner(Query *parse,
#define LOCAL_TABLE_SUBQUERY_CTE_HINT \
"Use CTE's or subqueries to select from local tables and use them in joins"
extern List * ExtractRangeTableEntryList(Query *query);
extern bool NeedsDistributedPlanning(Query *query);
extern List * TranslatedVarsForRteIdentity(int rteIdentity);
extern struct DistributedPlan * GetDistributedPlan(CustomScan *node);
extern void multi_relation_restriction_hook(PlannerInfo *root, RelOptInfo *relOptInfo,
Index restrictionIndex, RangeTblEntry *rte);

View File

@ -22,5 +22,8 @@ extern List * ReferencingRelationIdList(Oid relationId);
extern void SetForeignConstraintRelationshipGraphInvalid(void);
extern bool IsForeignConstraintRelationshipGraphValid(void);
extern void ClearForeignConstraintRelationshipGraphContext(void);
extern HTAB * CreateOidVisitedHashSet(void);
extern bool OidVisited(HTAB *oidVisitedMap, Oid oid);
extern void VisitOid(HTAB *oidVisitedMap, Oid oid);
#endif

View File

@ -27,5 +27,6 @@ extern bool IsObjectAddressOwnedByExtension(const ObjectAddress *target,
ObjectAddress *extensionAddress);
extern List * GetDistributedObjectAddressList(void);
extern void UpdateDistributedObjectColocationId(uint32 oldColocationId, uint32
newColocationId);
#endif /* CITUS_METADATA_DISTOBJECT_H */

View File

@ -36,6 +36,7 @@
#define CSTORE_TABLE_SIZE_FUNCTION "cstore_table_size(%s)"
#define SHARD_SIZES_COLUMN_COUNT 2
#define UPDATE_SHARD_STATISTICS_COLUMN_COUNT 4
/* In-memory representation of a typed tuple in pg_dist_shard. */
typedef struct ShardInterval
@ -206,7 +207,6 @@ extern StringInfo GenerateSizeQueryOnMultiplePlacements(List *shardIntervalList,
extern List * RemoveCoordinatorPlacementIfNotSingleNode(List *placementList);
extern ShardPlacement * ShardPlacementOnGroup(uint64 shardId, int groupId);
/* Function declarations to modify shard and shard placement data */
extern void InsertShardRow(Oid relationId, uint64 shardId, char storageType,
text *shardMinValue, text *shardMaxValue);
@ -264,5 +264,8 @@ extern ShardInterval * DeformedDistShardTupleToShardInterval(Datum *datumArray,
int32 intervalTypeMod);
extern void GetIntervalTypeInfo(char partitionMethod, Var *partitionColumn,
Oid *intervalTypeId, int32 *intervalTypeMod);
extern List * SendShardStatisticsQueriesInParallel(List *citusTableIds, bool
useDistributedTransaction, bool
useShardMinMaxQuery);
#endif /* METADATA_UTILITY_H */

View File

@ -19,7 +19,7 @@ extern bool PartitionTableNoLock(Oid relationId);
extern bool IsChildTable(Oid relationId);
extern bool IsParentTable(Oid relationId);
extern Oid PartitionParentOid(Oid partitionOid);
extern char * LongestPartitionName(Oid parentRelationId);
extern Oid PartitionWithLongestNameRelationId(Oid parentRelationId);
extern List * PartitionList(Oid parentRelationId);
extern char * GenerateDetachPartitionCommand(Oid partitionTableId);
extern char * GenerateAttachShardPartitionCommand(ShardInterval *shardInterval);

View File

@ -33,7 +33,6 @@ extern List * GenerateAllAttributeEquivalences(PlannerRestrictionContext *
plannerRestrictionContext);
extern uint32 UniqueRelationCount(RelationRestrictionContext *restrictionContext,
CitusTableType tableType);
extern List * DistributedRelationIdList(Query *query);
extern PlannerRestrictionContext * FilterPlannerRestrictionForQuery(
PlannerRestrictionContext *plannerRestrictionContext,

View File

@ -21,6 +21,9 @@
/* GUC, determining whether statements sent to remote nodes are logged */
extern bool LogRemoteCommands;
/* GUC that determines the number of bytes after which remote COPY is flushed */
extern int RemoteCopyFlushThreshold;
/* simple helpers */
extern bool IsResponseOK(PGresult *result);
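A hedged sketch of how a byte threshold such as RemoteCopyFlushThreshold is typically applied when streaming COPY data over libpq (assumes libpq-fe.h and lib/stringinfo.h); the buffer and connection parameters are illustrative, not taken from this changeset:

/* sketch: flush locally buffered COPY data once it grows past the GUC threshold */
static void
MaybeFlushCopyBuffer(PGconn *connection, StringInfo copyBuffer)
{
	if (copyBuffer->len < RemoteCopyFlushThreshold)
	{
		return;			/* keep buffering small rows to reduce round trips */
	}

	if (PQputCopyData(connection, copyBuffer->data, copyBuffer->len) <= 0)
	{
		/* hand the connection error to the caller's error path */
	}

	resetStringInfo(copyBuffer);
}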

View File

@ -14,5 +14,6 @@
#include "postgres.h"
extern Oid GetTableLocalShardOid(Oid citusTableOid, uint64 shardId);
extern char * GetLongestShardName(Oid citusTableOid, char *finalRelationName);
#endif /* SHARD_UTILS_H */

View File

@ -111,7 +111,8 @@ extern bool TransactionModifiedNodeMetadata;
*/
extern void UseCoordinatedTransaction(void);
extern bool InCoordinatedTransaction(void);
extern void CoordinatedTransactionUse2PC(void);
extern void CoordinatedTransactionShouldUse2PC(void);
extern bool GetCoordinatedTransactionShouldUse2PC(void);
extern bool IsMultiStatementTransaction(void);
extern void EnsureDistributedTransactionId(void);
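A hedged sketch of the call pattern these declarations suggest: raise the 2PC flag only when a task modifies placements, and let the commit path read the flag back later. Apart from the three functions declared above, all names here are illustrative:

/* sketch: a task execution path that flags 2PC only for modifications */
static void
MarkTwoPhaseCommitIfNeeded(bool taskModifiedPlacements)
{
	if (!taskModifiedPlacements)
	{
		return;			/* plain reads should not force 2PC */
	}

	/* ensure a coordinated transaction exists before setting the flag */
	UseCoordinatedTransaction();
	CoordinatedTransactionShouldUse2PC();
}

/* sketch: commit-time consumer of the flag */
static bool
ShouldPrepareWorkerTransactions(void)
{
	return GetCoordinatedTransactionShouldUse2PC();
}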

View File

@ -191,8 +191,8 @@ s/relation with OID [0-9]+ does not exist/relation with OID XXXX does not exist/
# ignore JIT related messages
/^DEBUG: probing availability of JIT.*/d
/^DEBUG: provider not available, disabling JIT for current session.*/d
/^DEBUG: time to inline:.*/d
/^DEBUG: successfully loaded JIT.*/d
# ignore timing statistics for VACUUM VERBOSE
/CPU: user: .*s, system: .*s, elapsed: .*s/d
@ -223,3 +223,11 @@ s/^(ERROR: child table is missing constraint "\w+)_([0-9])+"/\1_xxxxxx"/g
s/.*//g
}
}
# normalize long table shard name errors for alter_table_set_access_method and alter_distributed_table
s/^(ERROR: child table is missing constraint "\w+)_([0-9])+"/\1_xxxxxx"/g
s/^(DEBUG: the name of the shard \(abcde_01234567890123456789012345678901234567890_f7ff6612)_([0-9])+/\1_xxxxxx/g
# normalize long index name errors for multi_index_statements
s/^(ERROR: The index name \(test_index_creation1_p2020_09_26)_([0-9])+_(tenant_id_timeperiod_idx)/\1_xxxxxx_\3/g
s/^(DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26)_([0-9])+_(tenant_id_timeperiod_idx)/\1_xxxxxx_\3/g

View File

@ -53,9 +53,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering distribution column
SELECT alter_distributed_table('dist_table', distribution_column := 'b');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -82,9 +82,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count
SELECT alter_distributed_table('dist_table', shard_count := 6);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -111,9 +111,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering colocation, note that shard count will also change
SELECT alter_distributed_table('dist_table', colocate_with := 'alter_distributed_table.colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -139,13 +139,13 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count with cascading, note that the colocation will be kept
SELECT alter_distributed_table('dist_table', shard_count := 8, cascade_to_colocated := true);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
NOTICE: creating a new table for alter_distributed_table.colocation_table
NOTICE: Moving the data of alter_distributed_table.colocation_table
NOTICE: Dropping the old alter_distributed_table.colocation_table
NOTICE: Renaming the new table to alter_distributed_table.colocation_table
NOTICE: moving the data of alter_distributed_table.colocation_table
NOTICE: dropping the old alter_distributed_table.colocation_table
NOTICE: renaming the new table to alter_distributed_table.colocation_table
alter_distributed_table
---------------------------------------------------------------------
@ -171,9 +171,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count without cascading, note that the colocation will be broken
SELECT alter_distributed_table('dist_table', shard_count := 10, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -261,16 +261,16 @@ SELECT * FROM partitioned_table_6_10 ORDER BY 1, 2;
SELECT alter_distributed_table('partitioned_table', shard_count := 10, distribution_column := 'a');
NOTICE: converting the partitions of alter_distributed_table.partitioned_table
NOTICE: creating a new table for alter_distributed_table.partitioned_table_1_5
NOTICE: Moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: Dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: creating a new table for alter_distributed_table.partitioned_table_6_10
NOTICE: Moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: Dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: creating a new table for alter_distributed_table.partitioned_table
NOTICE: Dropping the old alter_distributed_table.partitioned_table
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table
NOTICE: dropping the old alter_distributed_table.partitioned_table
NOTICE: renaming the new table to alter_distributed_table.partitioned_table
alter_distributed_table
---------------------------------------------------------------------
@ -658,12 +658,12 @@ ALTER TABLE par_table ATTACH PARTITION par_table_1 FOR VALUES FROM (1) TO (5);
SELECT alter_distributed_table('par_table', distribution_column:='b', colocate_with:='col_table');
NOTICE: converting the partitions of alter_distributed_table.par_table
NOTICE: creating a new table for alter_distributed_table.par_table_1
NOTICE: Moving the data of alter_distributed_table.par_table_1
NOTICE: Dropping the old alter_distributed_table.par_table_1
NOTICE: Renaming the new table to alter_distributed_table.par_table_1
NOTICE: moving the data of alter_distributed_table.par_table_1
NOTICE: dropping the old alter_distributed_table.par_table_1
NOTICE: renaming the new table to alter_distributed_table.par_table_1
NOTICE: creating a new table for alter_distributed_table.par_table
NOTICE: Dropping the old alter_distributed_table.par_table
NOTICE: Renaming the new table to alter_distributed_table.par_table
NOTICE: dropping the old alter_distributed_table.par_table
NOTICE: renaming the new table to alter_distributed_table.par_table
alter_distributed_table
---------------------------------------------------------------------
@ -685,9 +685,9 @@ HINT: check citus_tables view to see current properties of the table
-- first colocate the tables, then try to re-colocate
SELECT alter_distributed_table('dist_table', colocate_with := 'colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -701,9 +701,9 @@ HINT: check citus_tables view to see current properties of the table
SELECT alter_distributed_table('dist_table', distribution_column:='b', shard_count:=4, cascade_to_colocated:=false);
NOTICE: table is already distributed by b
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -712,9 +712,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count:=4, colocate_with:='colocation_table_2');
NOTICE: shard count of the table is already 4
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -723,9 +723,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', colocate_with:='colocation_table_2', distribution_column:='a');
NOTICE: table is already colocated with colocation_table_2
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -750,9 +750,9 @@ HINT: cascade_to_colocated := false will break the current colocation, cascade_
-- test changing shard count of a non-colocated table without cascade_to_colocated, shouldn't error
SELECT alter_distributed_table('dist_table', colocate_with := 'none');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -760,9 +760,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count := 14);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -820,11 +820,11 @@ INSERT INTO mat_view_test VALUES (1,1), (2,2);
CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_distributed_table('mat_view_test', shard_count := 5, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.mat_view_test
NOTICE: Moving the data of alter_distributed_table.mat_view_test
NOTICE: Dropping the old alter_distributed_table.mat_view_test
NOTICE: moving the data of alter_distributed_table.mat_view_test
NOTICE: dropping the old alter_distributed_table.mat_view_test
NOTICE: drop cascades to materialized view mat_view
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.mat_view_test CASCADE"
NOTICE: Renaming the new table to alter_distributed_table.mat_view_test
NOTICE: renaming the new table to alter_distributed_table.mat_view_test
alter_distributed_table
---------------------------------------------------------------------
@ -837,5 +837,85 @@ SELECT * FROM mat_view ORDER BY a;
2 | 2
(2 rows)
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', distribution_column := 'y');
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
NOTICE: creating a new table for alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
NOTICE: moving the data of alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: cannot perform distributed INSERT INTO ... SELECT because the partition columns in the source table and subquery do not match
DETAIL: The target table's partition column should correspond to a partition column in the subquery.
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
DEBUG: performing repartitioned INSERT ... SELECT
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
NOTICE: dropping the old alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456 CASCADE"
NOTICE: renaming the new table to alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
CONTEXT: SQL statement "ALTER TABLE alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 RENAME TO abcde_012345678901234567890123456789012345678901234567890123456"
alter_distributed_table
---------------------------------------------------------------------
(1 row)
RESET client_min_messages;
-- test long partitioned table names
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL,
inserted_utc timestamp without time zone NOT NULL DEFAULT now()
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
-- verify alter_distributed_table works with long partition names
SELECT alter_distributed_table('partition_lengths', shard_count := 29, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: creating a new table for alter_distributed_table.partition_lengths
NOTICE: dropping the old alter_distributed_table.partition_lengths
NOTICE: renaming the new table to alter_distributed_table.partition_lengths
alter_distributed_table
---------------------------------------------------------------------
(1 row)
-- test long partition table names
ALTER TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths_p2020_09_28;
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- verify alter_distributed_table works with long partitioned table names
SELECT alter_distributed_table('partition_lengths_12345678901234567890123456789012345678901234567890', shard_count := 17, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: creating a new table for alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: dropping the old alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO WARNING;
DROP SCHEMA alter_distributed_table CASCADE;

View File

@ -53,9 +53,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering distribution column
SELECT alter_distributed_table('dist_table', distribution_column := 'b');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -82,9 +82,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count
SELECT alter_distributed_table('dist_table', shard_count := 6);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -111,9 +111,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering colocation, note that shard count will also change
SELECT alter_distributed_table('dist_table', colocate_with := 'alter_distributed_table.colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -139,13 +139,13 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count with cascading, note that the colocation will be kept
SELECT alter_distributed_table('dist_table', shard_count := 8, cascade_to_colocated := true);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
NOTICE: creating a new table for alter_distributed_table.colocation_table
NOTICE: Moving the data of alter_distributed_table.colocation_table
NOTICE: Dropping the old alter_distributed_table.colocation_table
NOTICE: Renaming the new table to alter_distributed_table.colocation_table
NOTICE: moving the data of alter_distributed_table.colocation_table
NOTICE: dropping the old alter_distributed_table.colocation_table
NOTICE: renaming the new table to alter_distributed_table.colocation_table
alter_distributed_table
---------------------------------------------------------------------
@ -171,9 +171,9 @@ SELECT STRING_AGG(table_name::text, ', ' ORDER BY 1) AS "Colocation Groups" FROM
-- test altering shard count without cascading, note that the colocation will be broken
SELECT alter_distributed_table('dist_table', shard_count := 10, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -261,16 +261,16 @@ SELECT * FROM partitioned_table_6_10 ORDER BY 1, 2;
SELECT alter_distributed_table('partitioned_table', shard_count := 10, distribution_column := 'a');
NOTICE: converting the partitions of alter_distributed_table.partitioned_table
NOTICE: creating a new table for alter_distributed_table.partitioned_table_1_5
NOTICE: Moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: Dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: moving the data of alter_distributed_table.partitioned_table_1_5
NOTICE: dropping the old alter_distributed_table.partitioned_table_1_5
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_1_5
NOTICE: creating a new table for alter_distributed_table.partitioned_table_6_10
NOTICE: Moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: Dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: moving the data of alter_distributed_table.partitioned_table_6_10
NOTICE: dropping the old alter_distributed_table.partitioned_table_6_10
NOTICE: renaming the new table to alter_distributed_table.partitioned_table_6_10
NOTICE: creating a new table for alter_distributed_table.partitioned_table
NOTICE: Dropping the old alter_distributed_table.partitioned_table
NOTICE: Renaming the new table to alter_distributed_table.partitioned_table
NOTICE: dropping the old alter_distributed_table.partitioned_table
NOTICE: renaming the new table to alter_distributed_table.partitioned_table
alter_distributed_table
---------------------------------------------------------------------
@ -637,12 +637,12 @@ ALTER TABLE par_table ATTACH PARTITION par_table_1 FOR VALUES FROM (1) TO (5);
SELECT alter_distributed_table('par_table', distribution_column:='b', colocate_with:='col_table');
NOTICE: converting the partitions of alter_distributed_table.par_table
NOTICE: creating a new table for alter_distributed_table.par_table_1
NOTICE: Moving the data of alter_distributed_table.par_table_1
NOTICE: Dropping the old alter_distributed_table.par_table_1
NOTICE: Renaming the new table to alter_distributed_table.par_table_1
NOTICE: moving the data of alter_distributed_table.par_table_1
NOTICE: dropping the old alter_distributed_table.par_table_1
NOTICE: renaming the new table to alter_distributed_table.par_table_1
NOTICE: creating a new table for alter_distributed_table.par_table
NOTICE: Dropping the old alter_distributed_table.par_table
NOTICE: Renaming the new table to alter_distributed_table.par_table
NOTICE: dropping the old alter_distributed_table.par_table
NOTICE: renaming the new table to alter_distributed_table.par_table
alter_distributed_table
---------------------------------------------------------------------
@ -664,9 +664,9 @@ HINT: check citus_tables view to see current properties of the table
-- first colocate the tables, then try to re-colocate
SELECT alter_distributed_table('dist_table', colocate_with := 'colocation_table');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -680,9 +680,9 @@ HINT: check citus_tables view to see current properties of the table
SELECT alter_distributed_table('dist_table', distribution_column:='b', shard_count:=4, cascade_to_colocated:=false);
NOTICE: table is already distributed by b
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -691,9 +691,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count:=4, colocate_with:='colocation_table_2');
NOTICE: shard count of the table is already 4
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -702,9 +702,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', colocate_with:='colocation_table_2', distribution_column:='a');
NOTICE: table is already colocated with colocation_table_2
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -729,9 +729,9 @@ HINT: cascade_to_colocated := false will break the current colocation, cascade_
-- test changing shard count of a non-colocated table without cascade_to_colocated, shouldn't error
SELECT alter_distributed_table('dist_table', colocate_with := 'none');
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -739,9 +739,9 @@ NOTICE: Renaming the new table to alter_distributed_table.dist_table
SELECT alter_distributed_table('dist_table', shard_count := 14);
NOTICE: creating a new table for alter_distributed_table.dist_table
NOTICE: Moving the data of alter_distributed_table.dist_table
NOTICE: Dropping the old alter_distributed_table.dist_table
NOTICE: Renaming the new table to alter_distributed_table.dist_table
NOTICE: moving the data of alter_distributed_table.dist_table
NOTICE: dropping the old alter_distributed_table.dist_table
NOTICE: renaming the new table to alter_distributed_table.dist_table
alter_distributed_table
---------------------------------------------------------------------
@ -799,11 +799,11 @@ INSERT INTO mat_view_test VALUES (1,1), (2,2);
CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_distributed_table('mat_view_test', shard_count := 5, cascade_to_colocated := false);
NOTICE: creating a new table for alter_distributed_table.mat_view_test
NOTICE: Moving the data of alter_distributed_table.mat_view_test
NOTICE: Dropping the old alter_distributed_table.mat_view_test
NOTICE: moving the data of alter_distributed_table.mat_view_test
NOTICE: dropping the old alter_distributed_table.mat_view_test
NOTICE: drop cascades to materialized view mat_view
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.mat_view_test CASCADE"
NOTICE: Renaming the new table to alter_distributed_table.mat_view_test
NOTICE: renaming the new table to alter_distributed_table.mat_view_test
alter_distributed_table
---------------------------------------------------------------------
@ -816,5 +816,85 @@ SELECT * FROM mat_view ORDER BY a;
2 | 2
(2 rows)
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', distribution_column := 'y');
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
NOTICE: creating a new table for alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
NOTICE: moving the data of alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: cannot perform distributed INSERT INTO ... SELECT because the partition columns in the source table and subquery do not match
DETAIL: The target table's partition column should correspond to a partition column in the subquery.
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
DEBUG: performing repartitioned INSERT ... SELECT
CONTEXT: SQL statement "INSERT INTO alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 (x,y) SELECT x,y FROM alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456"
NOTICE: dropping the old alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
CONTEXT: SQL statement "DROP TABLE alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456 CASCADE"
NOTICE: renaming the new table to alter_distributed_table.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
CONTEXT: SQL statement "ALTER TABLE alter_distributed_table.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 RENAME TO abcde_012345678901234567890123456789012345678901234567890123456"
alter_distributed_table
---------------------------------------------------------------------
(1 row)
RESET client_min_messages;
-- test long partitioned table names
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL,
inserted_utc timestamp without time zone NOT NULL DEFAULT now()
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
-- verify alter_distributed_table works with long partition names
SELECT alter_distributed_table('partition_lengths', shard_count := 29, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28_123456789012345678901234567890123
NOTICE: creating a new table for alter_distributed_table.partition_lengths
NOTICE: dropping the old alter_distributed_table.partition_lengths
NOTICE: renaming the new table to alter_distributed_table.partition_lengths
alter_distributed_table
---------------------------------------------------------------------
(1 row)
-- test long partition table names
ALTER TABLE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths_p2020_09_28;
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- verify alter_distributed_table works with long partitioned table names
SELECT alter_distributed_table('partition_lengths_12345678901234567890123456789012345678901234567890', shard_count := 17, cascade_to_colocated := false);
NOTICE: converting the partitions of alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: creating a new table for alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: moving the data of alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: dropping the old alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_p2020_09_28
NOTICE: creating a new table for alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: dropping the old alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
NOTICE: renaming the new table to alter_distributed_table.partition_lengths_123456789012345678901234567890123456789012345
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO WARNING;
DROP SCHEMA alter_distributed_table CASCADE;

View File

@ -3,9 +3,9 @@
CREATE TABLE alter_am_pg_version_table (a INT);
SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar');
NOTICE: creating a new table for public.alter_am_pg_version_table
NOTICE: Moving the data of public.alter_am_pg_version_table
NOTICE: Dropping the old public.alter_am_pg_version_table
NOTICE: Renaming the new table to public.alter_am_pg_version_table
NOTICE: moving the data of public.alter_am_pg_version_table
NOTICE: dropping the old public.alter_am_pg_version_table
NOTICE: renaming the new table to public.alter_am_pg_version_table
alter_table_set_access_method
---------------------------------------------------------------------
@ -51,9 +51,9 @@ SELECT table_name, access_method FROM public.citus_tables WHERE table_name::text
SELECT alter_table_set_access_method('dist_table', 'columnar');
NOTICE: creating a new table for alter_table_set_access_method.dist_table
NOTICE: Moving the data of alter_table_set_access_method.dist_table
NOTICE: Dropping the old alter_table_set_access_method.dist_table
NOTICE: Renaming the new table to alter_table_set_access_method.dist_table
NOTICE: moving the data of alter_table_set_access_method.dist_table
NOTICE: dropping the old alter_table_set_access_method.dist_table
NOTICE: renaming the new table to alter_table_set_access_method.dist_table
alter_table_set_access_method
---------------------------------------------------------------------
@ -131,9 +131,9 @@ ERROR: you cannot alter access method of a partitioned table
-- test altering the partition's access method
SELECT alter_table_set_access_method('partitioned_table_1_5', 'columnar');
NOTICE: creating a new table for alter_table_set_access_method.partitioned_table_1_5
NOTICE: Moving the data of alter_table_set_access_method.partitioned_table_1_5
NOTICE: Dropping the old alter_table_set_access_method.partitioned_table_1_5
NOTICE: Renaming the new table to alter_table_set_access_method.partitioned_table_1_5
NOTICE: moving the data of alter_table_set_access_method.partitioned_table_1_5
NOTICE: dropping the old alter_table_set_access_method.partitioned_table_1_5
NOTICE: renaming the new table to alter_table_set_access_method.partitioned_table_1_5
alter_table_set_access_method
---------------------------------------------------------------------
@ -228,14 +228,14 @@ SELECT event FROM time_partitioned ORDER BY 1;
CALL alter_old_partitions_set_access_method('time_partitioned', '2021-01-01', 'columnar');
NOTICE: converting time_partitioned_d00 with start time Sat Jan 01 00:00:00 2000 and end time Thu Dec 31 00:00:00 2009
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d00
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: converting time_partitioned_d10 with start time Fri Jan 01 00:00:00 2010 and end time Tue Dec 31 00:00:00 2019
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d10
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d10
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d10
SELECT partition, access_method FROM time_partitions WHERE parent_table = 'time_partitioned'::regclass ORDER BY partition::text;
partition | access_method
---------------------------------------------------------------------
@ -274,14 +274,14 @@ SELECT event FROM time_partitioned ORDER BY 1;
CALL alter_old_partitions_set_access_method('time_partitioned', '2021-01-01', 'heap');
NOTICE: converting time_partitioned_d00 with start time Sat Jan 01 00:00:00 2000 and end time Thu Dec 31 00:00:00 2009
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d00
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d00
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d00
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d00
NOTICE: converting time_partitioned_d10 with start time Fri Jan 01 00:00:00 2010 and end time Tue Dec 31 00:00:00 2019
NOTICE: creating a new table for alter_table_set_access_method.time_partitioned_d10
NOTICE: Moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: Dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: Renaming the new table to alter_table_set_access_method.time_partitioned_d10
NOTICE: moving the data of alter_table_set_access_method.time_partitioned_d10
NOTICE: dropping the old alter_table_set_access_method.time_partitioned_d10
NOTICE: renaming the new table to alter_table_set_access_method.time_partitioned_d10
SELECT partition, access_method FROM time_partitions WHERE parent_table = 'time_partitioned'::regclass ORDER BY partition::text;
partition | access_method
---------------------------------------------------------------------
@ -324,9 +324,9 @@ SELECT alter_table_set_access_method('index_table', 'columnar');
NOTICE: the index idx1 on table alter_table_set_access_method.index_table will be dropped, because columnar tables cannot have indexes
NOTICE: the index idx2 on table alter_table_set_access_method.index_table will be dropped, because columnar tables cannot have indexes
NOTICE: creating a new table for alter_table_set_access_method.index_table
NOTICE: Moving the data of alter_table_set_access_method.index_table
NOTICE: Dropping the old alter_table_set_access_method.index_table
NOTICE: Renaming the new table to alter_table_set_access_method.index_table
NOTICE: moving the data of alter_table_set_access_method.index_table
NOTICE: dropping the old alter_table_set_access_method.index_table
NOTICE: renaming the new table to alter_table_set_access_method.index_table
alter_table_set_access_method
---------------------------------------------------------------------
@ -395,9 +395,9 @@ NOTICE: creating a new table for alter_table_set_access_method.table_type_dist
WARNING: fake_scan_getnextslot
CONTEXT: SQL statement "SELECT EXISTS (SELECT 1 FROM alter_table_set_access_method.table_type_dist_1533505599)"
WARNING: fake_scan_getnextslot
NOTICE: Moving the data of alter_table_set_access_method.table_type_dist
NOTICE: Dropping the old alter_table_set_access_method.table_type_dist
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_dist
NOTICE: moving the data of alter_table_set_access_method.table_type_dist
NOTICE: dropping the old alter_table_set_access_method.table_type_dist
NOTICE: renaming the new table to alter_table_set_access_method.table_type_dist
alter_table_set_access_method
---------------------------------------------------------------------
@ -408,9 +408,9 @@ NOTICE: creating a new table for alter_table_set_access_method.table_type_ref
WARNING: fake_scan_getnextslot
CONTEXT: SQL statement "SELECT EXISTS (SELECT 1 FROM alter_table_set_access_method.table_type_ref_1037855087)"
WARNING: fake_scan_getnextslot
NOTICE: Moving the data of alter_table_set_access_method.table_type_ref
NOTICE: Dropping the old alter_table_set_access_method.table_type_ref
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_ref
NOTICE: moving the data of alter_table_set_access_method.table_type_ref
NOTICE: dropping the old alter_table_set_access_method.table_type_ref
NOTICE: renaming the new table to alter_table_set_access_method.table_type_ref
alter_table_set_access_method
---------------------------------------------------------------------
@ -418,9 +418,9 @@ NOTICE: Renaming the new table to alter_table_set_access_method.table_type_ref
SELECT alter_table_set_access_method('table_type_pg_local', 'fake_am');
NOTICE: creating a new table for alter_table_set_access_method.table_type_pg_local
NOTICE: Moving the data of alter_table_set_access_method.table_type_pg_local
NOTICE: Dropping the old alter_table_set_access_method.table_type_pg_local
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_pg_local
NOTICE: moving the data of alter_table_set_access_method.table_type_pg_local
NOTICE: dropping the old alter_table_set_access_method.table_type_pg_local
NOTICE: renaming the new table to alter_table_set_access_method.table_type_pg_local
alter_table_set_access_method
---------------------------------------------------------------------
@ -428,9 +428,9 @@ NOTICE: Renaming the new table to alter_table_set_access_method.table_type_pg_l
SELECT alter_table_set_access_method('table_type_citus_local', 'fake_am');
NOTICE: creating a new table for alter_table_set_access_method.table_type_citus_local
NOTICE: Moving the data of alter_table_set_access_method.table_type_citus_local
NOTICE: Dropping the old alter_table_set_access_method.table_type_citus_local
NOTICE: Renaming the new table to alter_table_set_access_method.table_type_citus_local
NOTICE: moving the data of alter_table_set_access_method.table_type_citus_local
NOTICE: dropping the old alter_table_set_access_method.table_type_citus_local
NOTICE: renaming the new table to alter_table_set_access_method.table_type_citus_local
alter_table_set_access_method
---------------------------------------------------------------------
@ -459,9 +459,9 @@ create table test_fk_p0 partition of test_fk_p for values from (0) to (10);
create table test_fk_p1 partition of test_fk_p for values from (10) to (20);
select alter_table_set_access_method('test_fk_p1', 'columnar');
NOTICE: creating a new table for alter_table_set_access_method.test_fk_p1
NOTICE: Moving the data of alter_table_set_access_method.test_fk_p1
NOTICE: Dropping the old alter_table_set_access_method.test_fk_p1
NOTICE: Renaming the new table to alter_table_set_access_method.test_fk_p1
NOTICE: moving the data of alter_table_set_access_method.test_fk_p1
NOTICE: dropping the old alter_table_set_access_method.test_fk_p1
NOTICE: renaming the new table to alter_table_set_access_method.test_fk_p1
ERROR: Foreign keys and AFTER ROW triggers are not supported for columnar tables
HINT: Consider an AFTER STATEMENT trigger instead.
CONTEXT: SQL statement "ALTER TABLE alter_table_set_access_method.test_fk_p ATTACH PARTITION alter_table_set_access_method.test_fk_p1 FOR VALUES FROM (10) TO (20);"
@ -475,11 +475,11 @@ INSERT INTO mat_view_test VALUES (1), (2);
CREATE MATERIALIZED VIEW mat_view AS SELECT * FROM mat_view_test;
SELECT alter_table_set_access_method('mat_view_test','columnar');
NOTICE: creating a new table for alter_table_set_access_method.mat_view_test
NOTICE: Moving the data of alter_table_set_access_method.mat_view_test
NOTICE: Dropping the old alter_table_set_access_method.mat_view_test
NOTICE: moving the data of alter_table_set_access_method.mat_view_test
NOTICE: dropping the old alter_table_set_access_method.mat_view_test
NOTICE: drop cascades to materialized view mat_view
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.mat_view_test CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.mat_view_test
NOTICE: renaming the new table to alter_table_set_access_method.mat_view_test
alter_table_set_access_method
---------------------------------------------------------------------
@ -519,13 +519,13 @@ create materialized view m_dist as select * from dist;
create view v_dist as select * from dist;
select alter_table_set_access_method('local','columnar');
NOTICE: creating a new table for alter_table_set_access_method.local
NOTICE: Moving the data of alter_table_set_access_method.local
NOTICE: Dropping the old alter_table_set_access_method.local
NOTICE: moving the data of alter_table_set_access_method.local
NOTICE: dropping the old alter_table_set_access_method.local
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_local
drop cascades to view v_local
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.local CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.local
NOTICE: renaming the new table to alter_table_set_access_method.local
alter_table_set_access_method
---------------------------------------------------------------------
@ -533,13 +533,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.local
select alter_table_set_access_method('ref','columnar');
NOTICE: creating a new table for alter_table_set_access_method.ref
NOTICE: Moving the data of alter_table_set_access_method.ref
NOTICE: Dropping the old alter_table_set_access_method.ref
NOTICE: moving the data of alter_table_set_access_method.ref
NOTICE: dropping the old alter_table_set_access_method.ref
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_ref
drop cascades to view v_ref
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.ref CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.ref
NOTICE: renaming the new table to alter_table_set_access_method.ref
alter_table_set_access_method
---------------------------------------------------------------------
@ -547,13 +547,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.ref
select alter_table_set_access_method('dist','columnar');
NOTICE: creating a new table for alter_table_set_access_method.dist
NOTICE: Moving the data of alter_table_set_access_method.dist
NOTICE: Dropping the old alter_table_set_access_method.dist
NOTICE: moving the data of alter_table_set_access_method.dist
NOTICE: dropping the old alter_table_set_access_method.dist
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_dist
drop cascades to view v_dist
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.dist CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.dist
NOTICE: renaming the new table to alter_table_set_access_method.dist
alter_table_set_access_method
---------------------------------------------------------------------
@ -561,13 +561,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.dist
SELECT alter_distributed_table('dist', shard_count:=1, cascade_to_colocated:=false);
NOTICE: creating a new table for alter_table_set_access_method.dist
NOTICE: Moving the data of alter_table_set_access_method.dist
NOTICE: Dropping the old alter_table_set_access_method.dist
NOTICE: moving the data of alter_table_set_access_method.dist
NOTICE: dropping the old alter_table_set_access_method.dist
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_dist
drop cascades to view v_dist
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.dist CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.dist
NOTICE: renaming the new table to alter_table_set_access_method.dist
alter_distributed_table
---------------------------------------------------------------------
@ -575,13 +575,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.dist
select alter_table_set_access_method('local','heap');
NOTICE: creating a new table for alter_table_set_access_method.local
NOTICE: Moving the data of alter_table_set_access_method.local
NOTICE: Dropping the old alter_table_set_access_method.local
NOTICE: moving the data of alter_table_set_access_method.local
NOTICE: dropping the old alter_table_set_access_method.local
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_local
drop cascades to view v_local
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.local CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.local
NOTICE: renaming the new table to alter_table_set_access_method.local
alter_table_set_access_method
---------------------------------------------------------------------
@ -589,13 +589,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.local
select alter_table_set_access_method('ref','heap');
NOTICE: creating a new table for alter_table_set_access_method.ref
NOTICE: Moving the data of alter_table_set_access_method.ref
NOTICE: Dropping the old alter_table_set_access_method.ref
NOTICE: moving the data of alter_table_set_access_method.ref
NOTICE: dropping the old alter_table_set_access_method.ref
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_ref
drop cascades to view v_ref
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.ref CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.ref
NOTICE: renaming the new table to alter_table_set_access_method.ref
alter_table_set_access_method
---------------------------------------------------------------------
@ -603,13 +603,13 @@ NOTICE: Renaming the new table to alter_table_set_access_method.ref
select alter_table_set_access_method('dist','heap');
NOTICE: creating a new table for alter_table_set_access_method.dist
NOTICE: Moving the data of alter_table_set_access_method.dist
NOTICE: Dropping the old alter_table_set_access_method.dist
NOTICE: moving the data of alter_table_set_access_method.dist
NOTICE: dropping the old alter_table_set_access_method.dist
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to materialized view m_dist
drop cascades to view v_dist
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.dist CASCADE"
NOTICE: Renaming the new table to alter_table_set_access_method.dist
NOTICE: renaming the new table to alter_table_set_access_method.dist
alter_table_set_access_method
---------------------------------------------------------------------
@ -681,6 +681,40 @@ CREATE TABLE identity_cols_test (a int, b int generated by default as identity (
SELECT alter_table_set_access_method('identity_cols_test', 'columnar');
ERROR: cannot complete command because relation alter_table_set_access_method.identity_cols_test has identity column
HINT: Drop the identity columns and re-try the command
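Following the HINT, a minimal sketch (not part of the diff) of how the command could be retried after dropping the identity property; ALTER COLUMN ... DROP IDENTITY is standard PostgreSQL syntax:
ALTER TABLE identity_cols_test ALTER COLUMN b DROP IDENTITY;
SELECT alter_table_set_access_method('identity_cols_test', 'columnar');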
-- test long table names
SET client_min_messages TO DEBUG1;
CREATE TABLE abcde_0123456789012345678901234567890123456789012345678901234567890123456789 (x int, y int);
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
SELECT create_distributed_table('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_table_set_access_method('abcde_0123456789012345678901234567890123456789012345678901234567890123456789', 'columnar');
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
NOTICE: creating a new table for alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: pathlist hook for columnar table am
CONTEXT: SQL statement "SELECT EXISTS (SELECT 1 FROM alter_table_set_access_method.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162)"
NOTICE: moving the data of alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
NOTICE: dropping the old alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
CONTEXT: SQL statement "DROP TABLE alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456 CASCADE"
NOTICE: renaming the new table to alter_table_set_access_method.abcde_012345678901234567890123456789012345678901234567890123456
DEBUG: the name of the shard (abcde_01234567890123456789012345678901234567890_f7ff6612_xxxxxx) for relation (abcde_012345678901234567890123456789012345678901234567890123456) is too long, switching to sequential and local execution mode to prevent self deadlocks
CONTEXT: SQL statement "ALTER TABLE alter_table_set_access_method.abcde_0123456789012345678901234567890123456_f7ff6612_4160710162 RENAME TO abcde_012345678901234567890123456789012345678901234567890123456"
alter_table_set_access_method
---------------------------------------------------------------------
(1 row)
SELECT * FROM abcde_0123456789012345678901234567890123456789012345678901234567890123456789;
NOTICE: identifier "abcde_0123456789012345678901234567890123456789012345678901234567890123456789" will be truncated to "abcde_012345678901234567890123456789012345678901234567890123456"
DEBUG: pathlist hook for columnar table am
x | y
---------------------------------------------------------------------
(0 rows)
RESET client_min_messages;
SET client_min_messages TO WARNING;
DROP SCHEMA alter_table_set_access_method CASCADE;
SELECT 1 FROM master_remove_node('localhost', :master_port);
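As a side note to the long-identifier test above, a minimal sketch (not part of the diff) of how the resulting access method could be inspected; pg_class and pg_am are standard catalogs, and the relname is the 63-byte truncated identifier:
SELECT c.relname, am.amname
FROM pg_class c JOIN pg_am am ON am.oid = c.relam
WHERE c.relname = 'abcde_012345678901234567890123456789012345678901234567890123456';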

View File

@ -351,11 +351,11 @@ ERROR: Table 'citus_local_table_1' is a local table. Replicating shard of a loc
BEGIN;
SELECT undistribute_table('citus_local_table_1');
NOTICE: creating a new table for citus_local_tables_test_schema.citus_local_table_1
NOTICE: Moving the data of citus_local_tables_test_schema.citus_local_table_1
NOTICE: moving the data of citus_local_tables_test_schema.citus_local_table_1
NOTICE: executing the command locally: SELECT a FROM citus_local_tables_test_schema.citus_local_table_1_1504027 citus_local_table_1
NOTICE: Dropping the old citus_local_tables_test_schema.citus_local_table_1
NOTICE: dropping the old citus_local_tables_test_schema.citus_local_table_1
NOTICE: executing the command locally: DROP TABLE IF EXISTS citus_local_tables_test_schema.citus_local_table_1_xxxxx CASCADE
NOTICE: Renaming the new table to citus_local_tables_test_schema.citus_local_table_1
NOTICE: renaming the new table to citus_local_tables_test_schema.citus_local_table_1
undistribute_table
---------------------------------------------------------------------

View File

@ -0,0 +1,190 @@
--
-- citus_update_table_statistics.sql
--
-- Test citus_update_table_statistics function on both
-- hash and append distributed tables
-- This function updates shardlength, shardminvalue and shardmaxvalue
--
SET citus.next_shard_id TO 981000;
SET citus.next_placement_id TO 982000;
SET citus.shard_count TO 8;
SET citus.shard_replication_factor TO 2;
-- test with a hash-distributed table
-- here we update only shardlength, not shardminvalue and shardmaxvalue
CREATE TABLE test_table_statistics_hash (id int);
SELECT create_distributed_table('test_table_statistics_hash', 'id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- populate table
INSERT INTO test_table_statistics_hash SELECT i FROM generate_series(0, 10000)i;
-- originally shardlength (size of the shard) is zero
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue AS shardminvalue,
ds.shardmaxvalue AS shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_hash') AND dsp.shardlength = 0
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_hash | 981000 | 982000 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981000 | 982001 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981001 | 982002 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981001 | 982003 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981002 | 982004 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981002 | 982005 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981003 | 982006 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981003 | 982007 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981004 | 982008 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981004 | 982009 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981005 | 982010 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981005 | 982011 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981006 | 982012 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981006 | 982013 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981007 | 982014 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
test_table_statistics_hash | 981007 | 982015 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
(16 rows)
-- setting this to on in order to verify that we use a distributed transaction id
-- to run the size queries from different connections
-- this is going to help detect deadlocks
SET citus.log_remote_commands TO ON;
-- setting this to sequential in order to have a deterministic order
-- in the output of citus.log_remote_commands
SET citus.multi_shard_modify_mode TO sequential;
-- update table statistics and then check that shardlength has changed
-- but shardminvalue and shardmaxvalue stay the same because this is
-- a hash distributed table
SELECT citus_update_table_statistics('test_table_statistics_hash');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981000') AS shard_size UNION ALL SELECT 981001 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981001') AS shard_size UNION ALL SELECT 981002 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981002') AS shard_size UNION ALL SELECT 981003 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981003') AS shard_size UNION ALL SELECT 981004 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981004') AS shard_size UNION ALL SELECT 981005 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981005') AS shard_size UNION ALL SELECT 981006 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981006') AS shard_size UNION ALL SELECT 981007 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981007') AS shard_size UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981000') AS shard_size UNION ALL SELECT 981001 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981001') AS shard_size UNION ALL SELECT 981002 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981002') AS shard_size UNION ALL SELECT 981003 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981003') AS shard_size UNION ALL SELECT 981004 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981004') AS shard_size UNION ALL SELECT 981005 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981005') AS shard_size UNION ALL SELECT 981006 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981006') AS shard_size UNION ALL SELECT 981007 AS shard_id, NULL::text AS shard_minvalue, NULL::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_hash_981007') AS shard_size UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
citus_update_table_statistics
---------------------------------------------------------------------
(1 row)
RESET citus.log_remote_commands;
RESET citus.multi_shard_modify_mode;
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_hash') AND dsp.shardlength > 0
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_hash | 981000 | 982000 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981000 | 982001 | test_table_statistics_hash_981000 | -2147483648 | -1610612737
test_table_statistics_hash | 981001 | 982002 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981001 | 982003 | test_table_statistics_hash_981001 | -1610612736 | -1073741825
test_table_statistics_hash | 981002 | 982004 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981002 | 982005 | test_table_statistics_hash_981002 | -1073741824 | -536870913
test_table_statistics_hash | 981003 | 982006 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981003 | 982007 | test_table_statistics_hash_981003 | -536870912 | -1
test_table_statistics_hash | 981004 | 982008 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981004 | 982009 | test_table_statistics_hash_981004 | 0 | 536870911
test_table_statistics_hash | 981005 | 982010 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981005 | 982011 | test_table_statistics_hash_981005 | 536870912 | 1073741823
test_table_statistics_hash | 981006 | 982012 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981006 | 982013 | test_table_statistics_hash_981006 | 1073741824 | 1610612735
test_table_statistics_hash | 981007 | 982014 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
test_table_statistics_hash | 981007 | 982015 | test_table_statistics_hash_981007 | 1610612736 | 2147483647
(16 rows)
-- check with an append-distributed table
-- here we update shardlength, shardminvalue and shardmaxvalue
CREATE TABLE test_table_statistics_append (id int);
SELECT create_distributed_table('test_table_statistics_append', 'id', 'append');
create_distributed_table
---------------------------------------------------------------------
(1 row)
COPY test_table_statistics_append FROM PROGRAM 'echo 0 && echo 1 && echo 2 && echo 3' WITH CSV;
COPY test_table_statistics_append FROM PROGRAM 'echo 4 && echo 5 && echo 6 && echo 7' WITH CSV;
-- originally shardminvalue and shardmaxvalue will be 0,3 and 4,7
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_append')
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_append | 981008 | 982016 | test_table_statistics_append_981008 | 0 | 3
test_table_statistics_append | 981008 | 982017 | test_table_statistics_append_981008 | 0 | 3
test_table_statistics_append | 981009 | 982018 | test_table_statistics_append_981009 | 4 | 7
test_table_statistics_append | 981009 | 982019 | test_table_statistics_append_981009 | 4 | 7
(4 rows)
-- delete some data to change the shardminvalues of the shards
DELETE FROM test_table_statistics_append WHERE id = 0 OR id = 4;
SET citus.log_remote_commands TO ON;
SET citus.multi_shard_modify_mode TO sequential;
-- update table statistics and then check that shardminvalue has changed
-- shardlength (shardsize) is still 8192 since there is very little data
SELECT citus_update_table_statistics('test_table_statistics_append');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981008') AS shard_size FROM test_table_statistics_append_981008 UNION ALL SELECT 981009 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981009') AS shard_size FROM test_table_statistics_append_981009 UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981008') AS shard_size FROM test_table_statistics_append_981008 UNION ALL SELECT 981009 AS shard_id, min(id)::text AS shard_minvalue, max(id)::text AS shard_maxvalue, pg_relation_size('public.test_table_statistics_append_981009') AS shard_size FROM test_table_statistics_append_981009 UNION ALL SELECT 0::bigint, NULL::text, NULL::text, 0::bigint;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
citus_update_table_statistics
---------------------------------------------------------------------
(1 row)
RESET citus.log_remote_commands;
RESET citus.multi_shard_modify_mode;
SELECT
ds.logicalrelid::regclass::text AS tablename,
ds.shardid AS shardid,
dsp.placementid AS placementid,
shard_name(ds.logicalrelid, ds.shardid) AS shardname,
ds.shardminvalue as shardminvalue,
ds.shardmaxvalue as shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid::regclass::text in ('test_table_statistics_append')
ORDER BY 2, 3;
tablename | shardid | placementid | shardname | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
test_table_statistics_append | 981008 | 982016 | test_table_statistics_append_981008 | 1 | 3
test_table_statistics_append | 981008 | 982017 | test_table_statistics_append_981008 | 1 | 3
test_table_statistics_append | 981009 | 982018 | test_table_statistics_append_981009 | 5 | 7
test_table_statistics_append | 981009 | 982019 | test_table_statistics_append_981009 | 5 | 7
(4 rows)
DROP TABLE test_table_statistics_hash, test_table_statistics_append;
ALTER SYSTEM RESET citus.shard_count;
ALTER SYSTEM RESET citus.shard_replication_factor;
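A minimal sketch (not taken from the diff) of how the refreshed statistics could be inspected after calling the UDF; it reuses pg_dist_shard and pg_dist_shard_placement exactly as the queries above do:
SELECT ds.shardid, dsp.shardlength, ds.shardminvalue, ds.shardmaxvalue
FROM pg_dist_shard ds JOIN pg_dist_shard_placement dsp USING (shardid)
WHERE ds.logicalrelid = 'test_table_statistics_append'::regclass
ORDER BY ds.shardid;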

View File

@ -250,9 +250,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option');
NOTICE: creating a new table for columnar_citus_integration.table_option
NOTICE: Moving the data of columnar_citus_integration.table_option
NOTICE: Dropping the old columnar_citus_integration.table_option
NOTICE: Renaming the new table to columnar_citus_integration.table_option
NOTICE: moving the data of columnar_citus_integration.table_option
NOTICE: dropping the old columnar_citus_integration.table_option
NOTICE: renaming the new table to columnar_citus_integration.table_option
undistribute_table
---------------------------------------------------------------------
@ -569,9 +569,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option');
NOTICE: creating a new table for columnar_citus_integration.table_option
NOTICE: Moving the data of columnar_citus_integration.table_option
NOTICE: Dropping the old columnar_citus_integration.table_option
NOTICE: Renaming the new table to columnar_citus_integration.table_option
NOTICE: moving the data of columnar_citus_integration.table_option
NOTICE: dropping the old columnar_citus_integration.table_option
NOTICE: renaming the new table to columnar_citus_integration.table_option
undistribute_table
---------------------------------------------------------------------
@ -808,9 +808,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option_reference');
NOTICE: creating a new table for columnar_citus_integration.table_option_reference
NOTICE: Moving the data of columnar_citus_integration.table_option_reference
NOTICE: Dropping the old columnar_citus_integration.table_option_reference
NOTICE: Renaming the new table to columnar_citus_integration.table_option_reference
NOTICE: moving the data of columnar_citus_integration.table_option_reference
NOTICE: dropping the old columnar_citus_integration.table_option_reference
NOTICE: renaming the new table to columnar_citus_integration.table_option_reference
undistribute_table
---------------------------------------------------------------------
@ -1041,9 +1041,9 @@ $cmd$);
-- verify undistribute works
SELECT undistribute_table('table_option_citus_local');
NOTICE: creating a new table for columnar_citus_integration.table_option_citus_local
NOTICE: Moving the data of columnar_citus_integration.table_option_citus_local
NOTICE: Dropping the old columnar_citus_integration.table_option_citus_local
NOTICE: Renaming the new table to columnar_citus_integration.table_option_citus_local
NOTICE: moving the data of columnar_citus_integration.table_option_citus_local
NOTICE: dropping the old columnar_citus_integration.table_option_citus_local
NOTICE: renaming the new table to columnar_citus_integration.table_option_citus_local
undistribute_table
---------------------------------------------------------------------

View File

@ -46,9 +46,9 @@ SELECT count(*), sum(i), min(i), max(i) FROM parent;
-- set older partitions as columnar
SELECT alter_table_set_access_method('p0','columnar');
NOTICE: creating a new table for public.p0
NOTICE: Moving the data of public.p0
NOTICE: Dropping the old public.p0
NOTICE: Renaming the new table to public.p0
NOTICE: moving the data of public.p0
NOTICE: dropping the old public.p0
NOTICE: renaming the new table to public.p0
alter_table_set_access_method
---------------------------------------------------------------------
@ -56,9 +56,9 @@ NOTICE: Renaming the new table to public.p0
SELECT alter_table_set_access_method('p1','columnar');
NOTICE: creating a new table for public.p1
NOTICE: Moving the data of public.p1
NOTICE: Dropping the old public.p1
NOTICE: Renaming the new table to public.p1
NOTICE: moving the data of public.p1
NOTICE: dropping the old public.p1
NOTICE: renaming the new table to public.p1
alter_table_set_access_method
---------------------------------------------------------------------
@ -66,9 +66,9 @@ NOTICE: Renaming the new table to public.p1
SELECT alter_table_set_access_method('p3','columnar');
NOTICE: creating a new table for public.p3
NOTICE: Moving the data of public.p3
NOTICE: Dropping the old public.p3
NOTICE: Renaming the new table to public.p3
NOTICE: moving the data of public.p3
NOTICE: dropping the old public.p3
NOTICE: renaming the new table to public.p3
alter_table_set_access_method
---------------------------------------------------------------------

View File

@ -46,9 +46,9 @@ SELECT count(*), sum(i), min(i), max(i) FROM parent;
-- set older partitions as columnar
SELECT alter_table_set_access_method('p0','columnar');
NOTICE: creating a new table for public.p0
NOTICE: Moving the data of public.p0
NOTICE: Dropping the old public.p0
NOTICE: Renaming the new table to public.p0
NOTICE: moving the data of public.p0
NOTICE: dropping the old public.p0
NOTICE: renaming the new table to public.p0
alter_table_set_access_method
---------------------------------------------------------------------
@ -56,9 +56,9 @@ NOTICE: Renaming the new table to public.p0
SELECT alter_table_set_access_method('p1','columnar');
NOTICE: creating a new table for public.p1
NOTICE: Moving the data of public.p1
NOTICE: Dropping the old public.p1
NOTICE: Renaming the new table to public.p1
NOTICE: moving the data of public.p1
NOTICE: dropping the old public.p1
NOTICE: renaming the new table to public.p1
alter_table_set_access_method
---------------------------------------------------------------------
@ -66,9 +66,9 @@ NOTICE: Renaming the new table to public.p1
SELECT alter_table_set_access_method('p3','columnar');
NOTICE: creating a new table for public.p3
NOTICE: Moving the data of public.p3
NOTICE: Dropping the old public.p3
NOTICE: Renaming the new table to public.p3
NOTICE: moving the data of public.p3
NOTICE: dropping the old public.p3
NOTICE: renaming the new table to public.p3
alter_table_set_access_method
---------------------------------------------------------------------

View File

@ -718,9 +718,9 @@ SELECT conrelid::regclass::text AS "Referencing Table", pg_get_constraintdef(oid
SELECT alter_distributed_table('adt_table', distribution_column:='b', colocate_with:='none');
NOTICE: creating a new table for coordinator_shouldhaveshards.adt_table
NOTICE: Moving the data of coordinator_shouldhaveshards.adt_table
NOTICE: Dropping the old coordinator_shouldhaveshards.adt_table
NOTICE: Renaming the new table to coordinator_shouldhaveshards.adt_table
NOTICE: moving the data of coordinator_shouldhaveshards.adt_table
NOTICE: dropping the old coordinator_shouldhaveshards.adt_table
NOTICE: renaming the new table to coordinator_shouldhaveshards.adt_table
alter_distributed_table
---------------------------------------------------------------------
@ -895,6 +895,215 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM (SELECT in
0
(1 row)
-- a helper function which returns true if the coordinated
-- transaction uses 2PC
CREATE OR REPLACE FUNCTION coordinated_transaction_should_use_2PC()
RETURNS BOOL LANGUAGE C STRICT VOLATILE AS 'citus',
$$coordinated_transaction_should_use_2PC$$;
-- a local SELECT followed by remote SELECTs
-- does not trigger 2PC
BEGIN;
SELECT y FROM test WHERE x = 1;
NOTICE: executing the command locally: SELECT y FROM coordinator_shouldhaveshards.test_1503000 test WHERE (x OPERATOR(pg_catalog.=) 1)
y
---------------------------------------------------------------------
(0 rows)
WITH cte_1 AS (SELECT y FROM test WHERE x = 1 LIMIT 5) SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
0
(1 row)
WITH cte_1 as (SELECT * FROM test LIMIT 5) SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
0
(1 row)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
f
(1 row)
COMMIT;
-- remote SELECTs followed by local SELECTs
-- do not trigger 2PC
BEGIN;
SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
0
(1 row)
WITH cte_1 as (SELECT * FROM test LIMIT 5) SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
0
(1 row)
SELECT y FROM test WHERE x = 1;
NOTICE: executing the command locally: SELECT y FROM coordinator_shouldhaveshards.test_1503000 test WHERE (x OPERATOR(pg_catalog.=) 1)
y
---------------------------------------------------------------------
(0 rows)
WITH cte_1 AS (SELECT y FROM test WHERE x = 1 LIMIT 5) SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
0
(1 row)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
f
(1 row)
COMMIT;
-- a local SELECT followed by a remote Modify
-- triggers 2PC
BEGIN;
SELECT y FROM test WHERE x = 1;
NOTICE: executing the command locally: SELECT y FROM coordinator_shouldhaveshards.test_1503000 test WHERE (x OPERATOR(pg_catalog.=) 1)
y
---------------------------------------------------------------------
(0 rows)
UPDATE test SET y = y +1;
NOTICE: executing the command locally: UPDATE coordinator_shouldhaveshards.test_1503000 test SET y = (y OPERATOR(pg_catalog.+) 1)
NOTICE: executing the command locally: UPDATE coordinator_shouldhaveshards.test_1503003 test SET y = (y OPERATOR(pg_catalog.+) 1)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
COMMIT;
-- a local modify followed by a remote SELECT
-- triggers 2PC
BEGIN;
INSERT INTO test VALUES (1,1);
NOTICE: executing the command locally: INSERT INTO coordinator_shouldhaveshards.test_1503000 (x, y) VALUES (1, 1)
SELECT count(*) FROM test;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503003 test WHERE true
count
---------------------------------------------------------------------
1
(1 row)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
COMMIT;
-- a local modify followed by a remote MODIFY
-- triggers 2PC
BEGIN;
INSERT INTO test VALUES (1,1);
NOTICE: executing the command locally: INSERT INTO coordinator_shouldhaveshards.test_1503000 (x, y) VALUES (1, 1)
UPDATE test SET y = y +1;
NOTICE: executing the command locally: UPDATE coordinator_shouldhaveshards.test_1503000 test SET y = (y OPERATOR(pg_catalog.+) 1)
NOTICE: executing the command locally: UPDATE coordinator_shouldhaveshards.test_1503003 test SET y = (y OPERATOR(pg_catalog.+) 1)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
COMMIT;
-- a local modify followed by a remote single shard MODIFY
-- triggers 2PC
BEGIN;
INSERT INTO test VALUES (1,1);
NOTICE: executing the command locally: INSERT INTO coordinator_shouldhaveshards.test_1503000 (x, y) VALUES (1, 1)
INSERT INTO test VALUES (3,3);
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
COMMIT;
-- a remote single shard modify followed by a local single
-- shard MODIFY triggers 2PC
BEGIN;
INSERT INTO test VALUES (3,3);
INSERT INTO test VALUES (1,1);
NOTICE: executing the command locally: INSERT INTO coordinator_shouldhaveshards.test_1503000 (x, y) VALUES (1, 1)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
COMMIT;
-- a remote single shard select followed by a local single
-- shard MODIFY triggers 2PC. But the transaction manager
-- is smart enough to skip sending 2PC, as the remote
-- command is read-only
BEGIN;
SELECT count(*) FROM test WHERE x = 3;
count
---------------------------------------------------------------------
2
(1 row)
INSERT INTO test VALUES (1,1);
NOTICE: executing the command locally: INSERT INTO coordinator_shouldhaveshards.test_1503000 (x, y) VALUES (1, 1)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
SET LOCAL citus.log_remote_commands TO ON;
COMMIT;
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
-- a local single shard select followed by a remote single
-- shard modify does not trigger 2PC
BEGIN;
SELECT count(*) FROM test WHERE x = 1;
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_shouldhaveshards.test_1503000 test WHERE (x OPERATOR(pg_catalog.=) 1)
count
---------------------------------------------------------------------
5
(1 row)
INSERT INTO test VALUES (3,3);
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
f
(1 row)
SET LOCAL citus.log_remote_commands TO ON;
COMMIT;
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
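Summarizing the pattern exercised above, a minimal sketch (assuming the test table and the helper function from this diff): read-only statements leave the flag false, and any local modification flips it to true for the rest of the transaction.
BEGIN;
SELECT count(*) FROM test;                        -- reads only
SELECT coordinated_transaction_should_use_2PC();  -- f
INSERT INTO test VALUES (1,1);                    -- local write on a coordinator shard
SELECT coordinated_transaction_should_use_2PC();  -- t
ROLLBACK;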
RESET client_min_messages;
\set VERBOSITY terse
DROP TABLE ref_table;
@ -906,7 +1115,7 @@ DROP TABLE ref;
NOTICE: executing the command locally: DROP TABLE IF EXISTS coordinator_shouldhaveshards.ref_xxxxx CASCADE
DROP TABLE test_append_table;
DROP SCHEMA coordinator_shouldhaveshards CASCADE;
NOTICE: drop cascades to 19 other objects
NOTICE: drop cascades to 20 other objects
SELECT 1 FROM master_set_node_property('localhost', :master_port, 'shouldhaveshards', false);
?column?
---------------------------------------------------------------------

View File

@ -251,3 +251,48 @@ ORDER BY 1,2,3,4 LIMIT 5;
1 | Wed Nov 22 22:51:43.132261 2017 | 4 | 0 | 3 |
(5 rows)
-- we don't support cross JOINs between distributed tables,
-- with or without target list entries
CREATE TABLE dist1(c0 int);
CREATE TABLE dist2(c0 int);
CREATE TABLE dist3(c0 int , c1 int);
CREATE TABLE dist4(c0 int , c1 int);
SELECT create_distributed_table('dist1', 'c0');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('dist2', 'c0');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('dist3', 'c1');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('dist4', 'c1');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT dist2.c0 FROM dist1, dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT 1 FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT dist2.c0 FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
SELECT dist2.* FROM dist3, dist4, dist2 WHERE (dist3.c0) IN (dist4.c0);
ERROR: cannot perform distributed planning on this query
DETAIL: Cartesian products are currently unsupported
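For contrast, a minimal sketch (not part of the diff) of a join shape that distributed planning does accept, assuming dist3 and dist4 remain co-located on their common distribution column c1:
SELECT count(*) FROM dist3 JOIN dist4 ON dist3.c1 = dist4.c1;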

View File

@ -73,9 +73,9 @@ SELECT create_distributed_table('test_collate_pushed_down_aggregate', 'a');
SELECT alter_distributed_table('test_collate_pushed_down_aggregate', shard_count := 2, cascade_to_colocated:=false);
NOTICE: creating a new table for collation_tests.test_collate_pushed_down_aggregate
NOTICE: Moving the data of collation_tests.test_collate_pushed_down_aggregate
NOTICE: Dropping the old collation_tests.test_collate_pushed_down_aggregate
NOTICE: Renaming the new table to collation_tests.test_collate_pushed_down_aggregate
NOTICE: moving the data of collation_tests.test_collate_pushed_down_aggregate
NOTICE: dropping the old collation_tests.test_collate_pushed_down_aggregate
NOTICE: renaming the new table to collation_tests.test_collate_pushed_down_aggregate
alter_distributed_table
---------------------------------------------------------------------

View File

@ -49,6 +49,32 @@ BEGIN;
drop schema public cascade;
ROLLBACK;
CREATE EXTENSION citus;
CREATE SCHEMA test_schema;
SET search_path TO test_schema;
CREATE TABLE ref(x int, y int);
SELECT create_reference_table('ref');
create_reference_table
---------------------------------------------------------------------
(1 row)
CREATE INDEX CONCURRENTLY ref_concurrent_idx_x ON ref(x);
CREATE INDEX CONCURRENTLY ref_concurrent_idx_y ON ref(x);
SELECT substring(current_Setting('server_version'), '\d+')::int > 11 AS server_version_above_eleven
\gset
\if :server_version_above_eleven
REINDEX INDEX CONCURRENTLY ref_concurrent_idx_x;
REINDEX INDEX CONCURRENTLY ref_concurrent_idx_y;
REINDEX TABLE CONCURRENTLY ref;
REINDEX SCHEMA CONCURRENTLY test_schema;
\endif
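The \gset / \if guard above is plain psql; a minimal sketch of the same pattern for any version-dependent statement (the guarded SELECT is just a placeholder):
SELECT substring(current_setting('server_version'), '\d+')::int >= 12 AS server_version_ge_12
\gset
\if :server_version_ge_12
SELECT 'running on PostgreSQL 12 or newer';
\endif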
SET search_path TO public;
\set VERBOSITY TERSE
DROP SCHEMA test_schema CASCADE;
NOTICE: drop cascades to 2 other objects
DROP EXTENSION citus CASCADE;
\set VERBOSITY DEFAULT
CREATE EXTENSION citus;
-- this function is dropped in Citus 10, added here for tests
CREATE OR REPLACE FUNCTION pg_catalog.master_create_worker_shards(table_name text, shard_count integer,
replication_factor integer DEFAULT 2)

View File

@ -515,12 +515,45 @@ SELECT * FROM print_extension_changes();
| view time_partitions
(67 rows)
-- Test downgrade to 10.0-1 from 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
ALTER EXTENSION citus UPDATE TO '10.0-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Test downgrade to 10.0-2 from 10.0-3
ALTER EXTENSION citus UPDATE TO '10.0-3';
ALTER EXTENSION citus UPDATE TO '10.0-2';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 10.0-3
ALTER EXTENSION citus UPDATE TO '10.0-3';
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
| function citus_get_active_worker_nodes()
(1 row)
DROP TABLE prev_objects, extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
10.0devel
10.0.3
(1 row)
-- ensure no unexpected objects were created outside pg_catalog

View File

@ -511,12 +511,45 @@ SELECT * FROM print_extension_changes();
| view time_partitions
(63 rows)
-- Test downgrade to 10.0-1 from 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
ALTER EXTENSION citus UPDATE TO '10.0-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 10.0-2
ALTER EXTENSION citus UPDATE TO '10.0-2';
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Test downgrade to 10.0-2 from 10.0-3
ALTER EXTENSION citus UPDATE TO '10.0-3';
ALTER EXTENSION citus UPDATE TO '10.0-2';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 10.0-3
ALTER EXTENSION citus UPDATE TO '10.0-3';
SELECT * FROM print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
| function citus_get_active_worker_nodes()
(1 row)
DROP TABLE prev_objects, extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
10.0devel
10.0.3
(1 row)
-- ensure no unexpected objects were created outside pg_catalog

View File

@ -9,7 +9,7 @@ CREATE TABLE simple_table (
id bigint
);
SELECT master_get_table_ddl_events('simple_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.simple_table (first_name text, last_name text, id bigint)
ALTER TABLE public.simple_table OWNER TO postgres
@ -21,7 +21,7 @@ CREATE TABLE not_null_table (
id bigint not null
);
SELECT master_get_table_ddl_events('not_null_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.not_null_table (city text, id bigint NOT NULL)
ALTER TABLE public.not_null_table OWNER TO postgres
@ -34,7 +34,7 @@ CREATE TABLE column_constraint_table (
age int CONSTRAINT non_negative_age CHECK (age >= 0)
);
SELECT master_get_table_ddl_events('column_constraint_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.column_constraint_table (first_name text, last_name text, age integer, CONSTRAINT non_negative_age CHECK ((age >= 0)))
ALTER TABLE public.column_constraint_table OWNER TO postgres
@ -48,7 +48,7 @@ CREATE TABLE table_constraint_table (
CONSTRAINT bids_ordered CHECK (min_bid > max_bid)
);
SELECT master_get_table_ddl_events('table_constraint_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.table_constraint_table (bid_item_id bigint, min_bid numeric NOT NULL, max_bid numeric NOT NULL, CONSTRAINT bids_ordered CHECK ((min_bid > max_bid)))
ALTER TABLE public.table_constraint_table OWNER TO postgres
@ -67,7 +67,7 @@ SELECT create_distributed_table('check_constraint_table_1', 'id');
(1 row)
SELECT master_get_table_ddl_events('check_constraint_table_1');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.check_constraint_table_1 (id integer, b boolean, CONSTRAINT check_constraint_table_1_b_check CHECK (b))
ALTER TABLE public.check_constraint_table_1 OWNER TO postgres
@ -84,7 +84,7 @@ SELECT create_distributed_table('check_constraint_table_2', 'id');
(1 row)
SELECT master_get_table_ddl_events('check_constraint_table_2');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.check_constraint_table_2 (id integer, CONSTRAINT check_constraint_table_2_check CHECK (true))
ALTER TABLE public.check_constraint_table_2 OWNER TO postgres
@ -96,7 +96,7 @@ CREATE TABLE default_value_table (
price decimal default 0.00
);
SELECT master_get_table_ddl_events('default_value_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.default_value_table (name text, price numeric DEFAULT 0.00)
ALTER TABLE public.default_value_table OWNER TO postgres
@ -109,7 +109,7 @@ CREATE TABLE pkey_table (
id bigint PRIMARY KEY
);
SELECT master_get_table_ddl_events('pkey_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.pkey_table (first_name text, last_name text, id bigint NOT NULL)
ALTER TABLE public.pkey_table OWNER TO postgres
@ -122,7 +122,7 @@ CREATE TABLE unique_table (
username text UNIQUE not null
);
SELECT master_get_table_ddl_events('unique_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.unique_table (user_id bigint NOT NULL, username text NOT NULL)
ALTER TABLE public.unique_table OWNER TO postgres
@ -137,7 +137,7 @@ CREATE TABLE clustered_table (
CREATE INDEX clustered_time_idx ON clustered_table (received_at);
CLUSTER clustered_table USING clustered_time_idx;
SELECT master_get_table_ddl_events('clustered_table');
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE TABLE public.clustered_table (data json NOT NULL, received_at timestamp without time zone NOT NULL)
ALTER TABLE public.clustered_table OWNER TO postgres
@ -178,30 +178,33 @@ NOTICE: foreign-data wrapper "fake_fdw" does not have an extension defined
(1 row)
ALTER FOREIGN TABLE foreign_table rename to renamed_foreign_table;
ALTER FOREIGN TABLE renamed_foreign_table rename full_name to rename_name;
ALTER FOREIGN TABLE renamed_foreign_table alter rename_name type char(8);
ALTER FOREIGN TABLE foreign_table rename to renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
ALTER FOREIGN TABLE renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890 rename full_name to rename_name;
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
ALTER FOREIGN TABLE renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890 alter rename_name type char(8);
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
\c - - :public_worker_1_host :worker_1_port
select table_name, column_name, data_type
from information_schema.columns
where table_schema='public' and table_name like 'renamed_foreign_table_%' and column_name <> 'id'
order by table_name;
table_name | column_name | data_type
table_name | column_name | data_type
---------------------------------------------------------------------
renamed_foreign_table_610008 | rename_name | character
renamed_foreign_table_610009 | rename_name | character
renamed_foreign_table_610010 | rename_name | character
renamed_foreign_table_610011 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610008 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610009 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610010 | rename_name | character
renamed_foreign_table_with_long_name_1234567890_6a8dd6f8_610011 | rename_name | character
(4 rows)
\c - - :master_host :master_port
SELECT master_get_table_ddl_events('renamed_foreign_table');
SELECT master_get_table_ddl_events('renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890');
NOTICE: foreign-data wrapper "fake_fdw" does not have an extension defined
master_get_table_ddl_events
master_get_table_ddl_events
---------------------------------------------------------------------
CREATE SERVER IF NOT EXISTS fake_fdw_server FOREIGN DATA WRAPPER fake_fdw
CREATE FOREIGN TABLE public.renamed_foreign_table (id bigint NOT NULL, rename_name character(8) DEFAULT ''::text NOT NULL) SERVER fake_fdw_server OPTIONS (encoding 'utf-8', compression 'true')
ALTER TABLE public.renamed_foreign_table OWNER TO postgres
CREATE FOREIGN TABLE public.renamed_foreign_table_with_long_name_12345678901234567890123456 (id bigint NOT NULL, rename_name character(8) DEFAULT ''::text NOT NULL) SERVER fake_fdw_server OPTIONS (encoding 'utf-8', compression 'true')
ALTER TABLE public.renamed_foreign_table_with_long_name_12345678901234567890123456 OWNER TO postgres
(3 rows)
-- propagating views is not supported
@ -210,7 +213,8 @@ SELECT master_get_table_ddl_events('local_view');
ERROR: local_view is not a regular, foreign or partitioned table
-- clean up
DROP VIEW IF EXISTS local_view;
DROP FOREIGN TABLE IF EXISTS renamed_foreign_table;
DROP FOREIGN TABLE IF EXISTS renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "renamed_foreign_table_with_long_name_12345678901234567890123456789012345678901234567890" will be truncated to "renamed_foreign_table_with_long_name_12345678901234567890123456"
\c - - :public_worker_1_host :worker_1_port
select table_name, column_name, data_type
from information_schema.columns

View File

@ -357,13 +357,13 @@ SET client_min_messages TO DEBUG1;
CREATE INDEX ix_test_index_creation2
ON test_index_creation1 USING btree
(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
-- same test with schema qualified
SET search_path TO public;
CREATE INDEX ix_test_index_creation3
ON multi_index_statements.test_index_creation1 USING btree
(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
SET search_path TO multi_index_statements;
-- we cannot switch to sequential execution
-- after a parallel query
@ -377,7 +377,7 @@ BEGIN;
CREATE INDEX ix_test_index_creation4
ON test_index_creation1 USING btree
(tenant_id, timeperiod);
ERROR: The index name (test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx) on a shard is too long and could lead to deadlocks when executed in a transaction block after a parallel query
ERROR: The index name (test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx) on a shard is too long and could lead to deadlocks when executed in a transaction block after a parallel query
HINT: Try re-running the transaction with "SET LOCAL citus.multi_shard_modify_mode TO 'sequential';"
ROLLBACK;
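Following the HINT above (the approach the sequential-block test below also exercises), a minimal sketch of the workaround:
BEGIN;
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
CREATE INDEX ix_test_index_creation4
ON test_index_creation1 USING btree (tenant_id, timeperiod);
ROLLBACK;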
-- try inside a sequential block
@ -392,7 +392,7 @@ BEGIN;
CREATE INDEX ix_test_index_creation4
ON test_index_creation1 USING btree
(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
ROLLBACK;
-- should be able to create indexes with INCLUDE/WHERE
CREATE INDEX ix_test_index_creation5 ON test_index_creation1
@ -401,7 +401,7 @@ CREATE INDEX ix_test_index_creation5 ON test_index_creation1
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_2_tenant_id_timeperiod_field1_idx
CREATE UNIQUE INDEX ix_test_index_creation6 ON test_index_creation1
USING btree(tenant_id, timeperiod);
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_10311_tenant_id_timeperiod_idx
DEBUG: the index name on the shards of the partition is too long, switching to sequential and local execution mode to prevent self deadlocks: test_index_creation1_p2020_09_26_xxxxxx_tenant_id_timeperiod_idx
-- should be able to create short named indexes in parallel
-- as the table/index name is short
CREATE INDEX f1

@ -3059,6 +3059,13 @@ DO UPDATE
SET user_id = 42
RETURNING user_id, value_1_agg;
ERROR: modifying the partition value of rows is not allowed
-- test a small citus.remote_copy_flush_threshold
BEGIN;
SET LOCAL citus.remote_copy_flush_threshold TO 1;
INSERT INTO raw_events_first
SELECT * FROM raw_events_first OFFSET 0
ON CONFLICT DO NOTHING;
ABORT;
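The block above exercises citus.remote_copy_flush_threshold with an artificially small value, presumably so that remote COPY data is flushed after every tiny buffer fill. A minimal sketch of inspecting the setting and scoping a change to one transaction (the interpretation of the value as a buffer-size threshold is an assumption here):

```sql
-- show the current threshold
SHOW citus.remote_copy_flush_threshold;

BEGIN;
-- lower it for this transaction only; 1 is an extreme, test-style value
SET LOCAL citus.remote_copy_flush_threshold TO 1;
INSERT INTO raw_events_first
SELECT * FROM raw_events_first OFFSET 0
ON CONFLICT DO NOTHING;
COMMIT;
```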
-- wrap in a transaction to improve performance
BEGIN;
DROP TABLE coerce_events;

@ -256,6 +256,44 @@ SELECT lock_relation_if_exists('test', 'ACCESS SHARE');
SELECT lock_relation_if_exists('test', 'EXCLUSIVE');
ERROR: permission denied for table test
ABORT;
-- test creating columnar tables and accessing columnar metadata tables as an unprivileged user
-- none of the 5 commands below should throw a permission error
-- read columnar metadata table
SELECT * FROM columnar.stripe;
storage_id | stripe_num | file_offset | data_length | column_count | chunk_row_count | row_count | chunk_group_count
---------------------------------------------------------------------
(0 rows)
-- alter a columnar setting
SET columnar.chunk_group_row_limit = 1050;
DO $proc$
BEGIN
IF substring(current_Setting('server_version'), '\d+')::int >= 12 THEN
EXECUTE $$
-- create columnar table
CREATE TABLE columnar_table (a int) USING columnar;
-- alter a columnar table that is created by that unprivileged user
SELECT alter_columnar_table_set('columnar_table', chunk_group_row_limit => 100);
-- and drop it
DROP TABLE columnar_table;
$$;
END IF;
END$proc$;
-- cannot modify a columnar metadata table as an unprivileged user
INSERT INTO columnar.stripe VALUES(99);
ERROR: permission denied for table stripe
-- Cannot drop a columnar metadata table as an unprivileged user.
-- A privileged user cannot drop it either, but gets a different error message
-- (since the citus extension has a dependency on it).
DROP TABLE columnar.chunk;
ERROR: must be owner of table chunk
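Taken together, the allowed part of this flow for a non-superuser looks roughly like the sketch below; it simply strings together statements already shown above (PostgreSQL 12+ only, since that is where the columnar access method is exercised here):

```sql
-- as an unprivileged user:
SELECT * FROM columnar.stripe;                       -- reading columnar metadata is allowed
SET columnar.chunk_group_row_limit = 1050;           -- so is changing a columnar GUC
CREATE TABLE columnar_table (a int) USING columnar;  -- creating an own columnar table
SELECT alter_columnar_table_set('columnar_table',
                                chunk_group_row_limit => 100);  -- and tuning it
DROP TABLE columnar_table;
-- writing to or dropping columnar.* metadata tables, however, is rejected
```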
-- test whether a read-only user can read from citus_tables view
SELECT distribution_column FROM citus_tables WHERE table_name = 'test'::regclass;
distribution_column
---------------------------------------------------------------------
id
(1 row)
-- check no permission
SET ROLE no_access;
EXECUTE prepare_insert(1);

@ -1,6 +1,8 @@
CREATE SCHEMA mx_alter_distributed_table;
SET search_path TO mx_alter_distributed_table;
SET citus.shard_replication_factor TO 1;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1410000;
SET citus.replication_model TO 'streaming';
-- test alter_distributed_table UDF
CREATE TABLE adt_table (a INT, b INT);
CREATE TABLE adt_col (a INT UNIQUE, b INT);
@ -80,9 +82,9 @@ SELECT conrelid::regclass::text AS "Referencing Table", pg_get_constraintdef(oid
SELECT alter_distributed_table('adt_table', distribution_column:='b', colocate_with:='none');
NOTICE: creating a new table for mx_alter_distributed_table.adt_table
NOTICE: Moving the data of mx_alter_distributed_table.adt_table
NOTICE: Dropping the old mx_alter_distributed_table.adt_table
NOTICE: Renaming the new table to mx_alter_distributed_table.adt_table
NOTICE: moving the data of mx_alter_distributed_table.adt_table
NOTICE: dropping the old mx_alter_distributed_table.adt_table
NOTICE: renaming the new table to mx_alter_distributed_table.adt_table
alter_distributed_table
---------------------------------------------------------------------
@ -138,9 +140,9 @@ BEGIN;
INSERT INTO adt_table SELECT x, x+1 FROM generate_series(1, 1000) x;
SELECT alter_distributed_table('adt_table', distribution_column:='a');
NOTICE: creating a new table for mx_alter_distributed_table.adt_table
NOTICE: Moving the data of mx_alter_distributed_table.adt_table
NOTICE: Dropping the old mx_alter_distributed_table.adt_table
NOTICE: Renaming the new table to mx_alter_distributed_table.adt_table
NOTICE: moving the data of mx_alter_distributed_table.adt_table
NOTICE: dropping the old mx_alter_distributed_table.adt_table
NOTICE: renaming the new table to mx_alter_distributed_table.adt_table
alter_distributed_table
---------------------------------------------------------------------
@ -159,5 +161,317 @@ SELECT table_name, citus_table_type, distribution_column, shard_count FROM publi
adt_table | distributed | a | 6
(1 row)
-- test procedure colocation is preserved with alter_distributed_table
CREATE TABLE test_proc_colocation_0 (a float8);
SELECT create_distributed_table('test_proc_colocation_0', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE OR REPLACE procedure proc_0(dist_key float8)
LANGUAGE plpgsql
AS $$
DECLARE
res INT := 0;
BEGIN
INSERT INTO test_proc_colocation_0 VALUES (dist_key);
SELECT count(*) INTO res FROM test_proc_colocation_0;
RAISE NOTICE 'Res: %', res;
COMMIT;
END;$$;
SELECT create_distributed_function('proc_0(float8)', 'dist_key', 'test_proc_colocation_0' );
create_distributed_function
---------------------------------------------------------------------
(1 row)
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410002
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410002
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 1
DETAIL: from localhost:xxxxx
RESET client_min_messages;
-- shardCount is not null && list_length(colocatedTableList) = 1
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 2
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410003
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
(1 row)
-- colocatewith is not null && list_length(colocatedTableList) = 1
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 4);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_proc_colocation_1 (a float8);
SELECT create_distributed_table('test_proc_colocation_1', 'a', colocate_with := 'none');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('test_proc_colocation_0', colocate_with := 'test_proc_colocation_1');
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 3
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410004
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410004
(1 row)
-- shardCount is not null && cascade_to_colocated is true
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8, cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 4
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410003
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
(1 row)
-- colocatewith is not null && cascade_to_colocated is true
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 4, cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_proc_colocation_2 (a float8);
SELECT create_distributed_table('test_proc_colocation_2', 'a', colocate_with := 'none');
create_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT alter_distributed_table('test_proc_colocation_0', colocate_with := 'test_proc_colocation_2', cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 5
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410005
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0');
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410005
(1 row)
-- try a case with more than one procedure
CREATE OR REPLACE procedure proc_1(dist_key float8)
LANGUAGE plpgsql
AS $$
DECLARE
res INT := 0;
BEGIN
INSERT INTO test_proc_colocation_0 VALUES (dist_key);
SELECT count(*) INTO res FROM test_proc_colocation_0;
RAISE NOTICE 'Res: %', res;
COMMIT;
END;$$;
SELECT create_distributed_function('proc_1(float8)', 'dist_key', 'test_proc_colocation_0' );
create_distributed_function
---------------------------------------------------------------------
(1 row)
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410005
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410005
proc_1 | 1410005
(2 rows)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 6
DETAIL: from localhost:xxxxx
CALL proc_1(2.0);
DEBUG: pushing down the procedure
NOTICE: Res: 7
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 8, cascade_to_colocated := true);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_2
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_2
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_2
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_2
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_1
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_1
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_1
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_1
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SET client_min_messages TO DEBUG1;
CALL proc_0(1.0);
DEBUG: pushing down the procedure
NOTICE: Res: 8
DETAIL: from localhost:xxxxx
CALL proc_1(2.0);
DEBUG: pushing down the procedure
NOTICE: Res: 9
DETAIL: from localhost:xxxxx
RESET client_min_messages;
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410003
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
proc_1 | 1410003
(2 rows)
-- case which shouldn't preserve colocation for now
-- shardCount is not null && cascade_to_colocated is false
SELECT alter_distributed_table('test_proc_colocation_0', shard_count:= 18, cascade_to_colocated := false);
NOTICE: creating a new table for mx_alter_distributed_table.test_proc_colocation_0
NOTICE: moving the data of mx_alter_distributed_table.test_proc_colocation_0
NOTICE: dropping the old mx_alter_distributed_table.test_proc_colocation_0
NOTICE: renaming the new table to mx_alter_distributed_table.test_proc_colocation_0
alter_distributed_table
---------------------------------------------------------------------
(1 row)
SELECT logicalrelid, colocationid FROM pg_dist_partition WHERE logicalrelid::regclass::text IN ('test_proc_colocation_0');
logicalrelid | colocationid
---------------------------------------------------------------------
test_proc_colocation_0 | 1410006
(1 row)
SELECT proname, colocationid FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid WHERE proname IN ('proc_0', 'proc_1') ORDER BY proname;
proname | colocationid
---------------------------------------------------------------------
proc_0 | 1410003
proc_1 | 1410003
(2 rows)
SET client_min_messages TO WARNING;
DROP SCHEMA mx_alter_distributed_table CASCADE;
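Condensing the long sequence above: the point being verified is that a procedure distributed with a colocated table keeps following that table's colocation id through alter_distributed_table. A minimal sketch, mirroring the names used above (the procedure body is simplified):

```sql
CREATE TABLE test_proc_colocation_0 (a float8);
SELECT create_distributed_table('test_proc_colocation_0', 'a');

CREATE OR REPLACE PROCEDURE proc_0(dist_key float8)
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO test_proc_colocation_0 VALUES (dist_key);
END;$$;

-- colocate the procedure with the table
SELECT create_distributed_function('proc_0(float8)', 'dist_key', 'test_proc_colocation_0');

-- changing the shard count re-creates the table under a new colocation id ...
SELECT alter_distributed_table('test_proc_colocation_0', shard_count := 8);

-- ... and both queries below should report the same (new) colocation id
SELECT logicalrelid, colocationid
FROM pg_dist_partition
WHERE logicalrelid = 'test_proc_colocation_0'::regclass;

SELECT proname, colocationid
FROM pg_proc JOIN citus.pg_dist_object ON pg_proc.oid = citus.pg_dist_object.objid
WHERE proname = 'proc_0';
```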

@ -288,6 +288,31 @@ DEBUG: query has a single distribution column value: 1
---------------------------------------------------------------------
(0 rows)
WITH update_article AS (
UPDATE articles_hash_mx SET word_count = 11 WHERE id = 1 AND word_count = 10 RETURNING *
)
SELECT coalesce(1,random());
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for CTE update_article: UPDATE public.articles_hash_mx SET word_count = 11 WHERE ((id OPERATOR(pg_catalog.=) 1) AND (word_count OPERATOR(pg_catalog.=) 10)) RETURNING id, author_id, title, word_count
DEBUG: Creating router plan
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT COALESCE((1)::double precision, random()) AS "coalesce"
DEBUG: Creating router plan
coalesce
---------------------------------------------------------------------
1
(1 row)
WITH update_article AS (
UPDATE articles_hash_mx SET word_count = 10 WHERE author_id = 1 AND id = 1 AND word_count = 11 RETURNING *
)
SELECT coalesce(1,random());
DEBUG: Creating router plan
DEBUG: query has a single distribution column value: 1
coalesce
---------------------------------------------------------------------
1
(1 row)
-- recursive CTEs are supported when filtered on partition column
INSERT INTO company_employees_mx values(1, 1, 0);
DEBUG: Creating router plan

@ -125,13 +125,112 @@ SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.name_len
(1 row)
\c - - :master_host :master_port
-- Placeholders for RENAME operations
\set VERBOSITY TERSE
-- Rename the table to a too-long name
SET client_min_messages TO DEBUG1;
SET citus.force_max_query_parallelization TO ON;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
ERROR: shard name name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx exceeds 63 characters
DEBUG: the name of the shard (name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx) for relation (name_len_12345678901234567890123456789012345678901234567890) is too long, switching to sequential and local execution mode to prevent self deadlocks
SELECT * FROM name_len_12345678901234567890123456789012345678901234567890;
col1 | col2 | float_col_12345678901234567890123456789012345678901234567890 | date_col_12345678901234567890123456789012345678901234567890 | int_col_12345678901234567890123456789012345678901234567890
---------------------------------------------------------------------
(0 rows)
ALTER TABLE name_len_12345678901234567890123456789012345678901234567890 RENAME TO name_lengths;
SELECT * FROM name_lengths;
col1 | col2 | float_col_12345678901234567890123456789012345678901234567890 | date_col_12345678901234567890123456789012345678901234567890 | int_col_12345678901234567890123456789012345678901234567890
---------------------------------------------------------------------
(0 rows)
-- Test renames on zero shard distributed tables
CREATE TABLE append_zero_shard_table (a int);
SELECT create_distributed_table('append_zero_shard_table', 'a', 'append');
create_distributed_table
---------------------------------------------------------------------
(1 row)
ALTER TABLE append_zero_shard_table rename TO append_zero_shard_table_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "append_zero_shard_table_12345678901234567890123456789012345678901234567890" will be truncated to "append_zero_shard_table_123456789012345678901234567890123456789"
-- Verify that we do not support long renames after parallel queries are executed in a transaction block
BEGIN;
ALTER TABLE name_lengths rename col1 to new_column_name;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
ERROR: Shard name (name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx) for table (name_len_12345678901234567890123456789012345678901234567890) is too long and could lead to deadlocks when executed in a transaction block after a parallel query
HINT: Try re-running the transaction with "SET LOCAL citus.multi_shard_modify_mode TO 'sequential';"
ROLLBACK;
-- The same operation will work when sequential mode is set
BEGIN;
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
ALTER TABLE name_lengths rename col1 to new_column_name;
ALTER TABLE name_lengths RENAME TO name_len_12345678901234567890123456789012345678901234567890;
DEBUG: the name of the shard (name_len_12345678901234567890123456789012345678_fcd8ab6f_xxxxx) for relation (name_len_12345678901234567890123456789012345678901234567890) is too long, switching to sequential and local execution mode to prevent self deadlocks
ROLLBACK;
RESET client_min_messages;
-- test long partitioned table renames
SET citus.shard_replication_factor TO 1;
CREATE TABLE partition_lengths
(
tenant_id integer NOT NULL,
timeperiod timestamp without time zone NOT NULL
) PARTITION BY RANGE (timeperiod);
SELECT create_distributed_table('partition_lengths', 'tenant_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE partition_lengths_p2020_09_28 PARTITION OF partition_lengths FOR VALUES FROM ('2020-09-28 00:00:00') TO ('2020-09-29 00:00:00');
-- verify that we can rename partitioned tables and partitions to too-long names
ALTER TABLE partition_lengths RENAME TO partition_lengths_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- verify that we can rename partitioned tables and partitions with too-long names
ALTER TABLE partition_lengths_12345678901234567890123456789012345678901234567890 RENAME TO partition_lengths;
NOTICE: identifier "partition_lengths_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_123456789012345678901234567890123456789012345"
-- Placeholders for unsupported operations
\set VERBOSITY TERSE
-- renaming distributed table partitions
ALTER TABLE partition_lengths_p2020_09_28 RENAME TO partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
-- creating or attaching new partitions with long names creates deadlocks
CREATE TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890 (LIKE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890);
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths
ATTACH PARTITION partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890
FOR VALUES FROM ('2020-09-29 00:00:00') TO ('2020-09-30 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
ERROR: canceling the transaction since it was involved in a distributed deadlock
CREATE TABLE partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890
PARTITION OF partition_lengths
FOR VALUES FROM ('2020-09-30 00:00:00') TO ('2020-10-01 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_30_123456789012345678901234567890123"
ERROR: canceling the transaction since it was involved in a distributed deadlock
DROP TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890;
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
-- creating or attaching new partitions with long names works when using sequential shard modify mode
BEGIN;
SET LOCAL citus.multi_shard_modify_mode = sequential;
CREATE TABLE partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890 (LIKE partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890);
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
NOTICE: identifier "partition_lengths_p2020_09_28_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_28_123456789012345678901234567890123"
ALTER TABLE partition_lengths
ATTACH PARTITION partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890
FOR VALUES FROM ('2020-09-29 00:00:00') TO ('2020-09-30 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_29_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_29_123456789012345678901234567890123"
CREATE TABLE partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890
PARTITION OF partition_lengths
FOR VALUES FROM ('2020-09-30 00:00:00') TO ('2020-10-01 00:00:00');
NOTICE: identifier "partition_lengths_p2020_09_30_12345678901234567890123456789012345678901234567890" will be truncated to "partition_lengths_p2020_09_30_123456789012345678901234567890123"
ROLLBACK;
-- renaming distributed table constraints is not supported
ALTER TABLE name_lengths RENAME CONSTRAINT unique_12345678901234567890123456789012345678901234567890 TO unique2_12345678901234567890123456789012345678901234567890;
ERROR: renaming constraints belonging to distributed tables is currently unsupported
DROP TABLE partition_lengths CASCADE;
\set VERBOSITY DEFAULT
-- Verify that we can create indexes with very long names on zero shard tables.
CREATE INDEX append_zero_shard_table_idx_12345678901234567890123456789012345678901234567890 ON append_zero_shard_table_12345678901234567890123456789012345678901234567890(a);
NOTICE: identifier "append_zero_shard_table_idx_12345678901234567890123456789012345678901234567890" will be truncated to "append_zero_shard_table_idx_12345678901234567890123456789012345"
NOTICE: identifier "append_zero_shard_table_12345678901234567890123456789012345678901234567890" will be truncated to "append_zero_shard_table_123456789012345678901234567890123456789"
-- Verify that CREATE INDEX on an already distributed table creates proper shard names.
CREATE INDEX tmp_idx_12345678901234567890123456789012345678901234567890 ON name_lengths(col2);
\c - - :public_worker_1_host :worker_1_port
@ -148,15 +247,19 @@ SELECT "relname", "Column", "Type", "Definition" FROM index_attrs WHERE
-- by the parser/rewriter before further processing, just as in Postgres.
CREATE INDEX tmp_idx_123456789012345678901234567890123456789012345678901234567890 ON name_lengths(col2);
NOTICE: identifier "tmp_idx_123456789012345678901234567890123456789012345678901234567890" will be truncated to "tmp_idx_1234567890123456789012345678901234567890123456789012345"
-- Verify we can rename indexes with long names
ALTER INDEX tmp_idx_123456789012345678901234567890123456789012345678901234567890 RENAME TO tmp_idx_newname_123456789012345678901234567890123456789012345678901234567890;
NOTICE: identifier "tmp_idx_123456789012345678901234567890123456789012345678901234567890" will be truncated to "tmp_idx_1234567890123456789012345678901234567890123456789012345"
NOTICE: identifier "tmp_idx_newname_123456789012345678901234567890123456789012345678901234567890" will be truncated to "tmp_idx_newname_12345678901234567890123456789012345678901234567"
\c - - :public_worker_1_host :worker_1_port
SELECT "relname", "Column", "Type", "Definition" FROM index_attrs WHERE
relname LIKE 'tmp_idx_%' ORDER BY 1 DESC, 2 DESC, 3 DESC, 4 DESC;
relname | Column | Type | Definition
---------------------------------------------------------------------
tmp_idx_newname_1234567890123456789012345678901_c54e849b_225003 | col2 | integer | col2
tmp_idx_newname_1234567890123456789012345678901_c54e849b_225002 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_5e470afa_225003 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_5e470afa_225002 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_599636aa_225003 | col2 | integer | col2
tmp_idx_123456789012345678901234567890123456789_599636aa_225002 | col2 | integer | col2
(4 rows)
\c - - :master_host :master_port
@ -208,14 +311,14 @@ SELECT master_create_worker_shards('sneaky_name_lengths', '2', '2');
(1 row)
\c - - :public_worker_1_host :worker_1_port
\di public.sneaky*225006
\di public.sneaky*225030
List of relations
Schema | Name | Type | Owner | Table
---------------------------------------------------------------------
public | sneaky_name_lengths_int_col_1234567890123456789_6402d2cd_225006 | index | postgres | sneaky_name_lengths_225006
public | sneaky_name_lengths_int_col_1234567890123456789_6402d2cd_225030 | index | postgres | sneaky_name_lengths_225030
(1 row)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.sneaky_name_lengths_225006'::regclass ORDER BY 1 DESC, 2 DESC;
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.sneaky_name_lengths_225030'::regclass ORDER BY 1 DESC, 2 DESC;
Constraint | Definition
---------------------------------------------------------------------
checky_12345678901234567890123456789012345678901234567890 | CHECK (int_col_123456789012345678901234567890123456789012345678901234 > 100)
@ -239,11 +342,11 @@ SELECT create_distributed_table('sneaky_name_lengths', 'col1', 'hash');
(1 row)
\c - - :public_worker_1_host :worker_1_port
\di unique*225008
\di unique*225032
List of relations
Schema | Name | Type | Owner | Table
---------------------------------------------------------------------
public | unique_1234567890123456789012345678901234567890_a5986f27_225008 | index | postgres | sneaky_name_lengths_225008
public | unique_1234567890123456789012345678901234567890_a5986f27_225032 | index | postgres | sneaky_name_lengths_225032
(1 row)
\c - - :master_host :master_port
@ -336,3 +439,4 @@ DROP TABLE multi_name_lengths.too_long_12345678901234567890123456789012345678901
-- Clean up.
DROP TABLE name_lengths CASCADE;
DROP TABLE U&"elephant_!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D!0441!043B!043E!043D" UESCAPE '!' CASCADE;
RESET citus.force_max_query_parallelization;

@ -507,6 +507,45 @@ DEBUG: Creating router plan
1 | 1 | arsenous | 10
(1 row)
WITH update_article AS (
UPDATE articles_hash SET word_count = 11 WHERE id = 1 AND word_count = 10 RETURNING *
)
SELECT coalesce(1,random());
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for CTE update_article: UPDATE public.articles_hash SET word_count = 11 WHERE ((id OPERATOR(pg_catalog.=) 1) AND (word_count OPERATOR(pg_catalog.=) 10)) RETURNING id, author_id, title, word_count
DEBUG: Creating router plan
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT COALESCE((1)::double precision, random()) AS "coalesce"
DEBUG: Creating router plan
coalesce
---------------------------------------------------------------------
1
(1 row)
WITH update_article AS (
UPDATE articles_hash SET word_count = 10 WHERE author_id = 1 AND id = 1 AND word_count = 11 RETURNING *
)
SELECT coalesce(1,random());
DEBUG: Creating router plan
DEBUG: query has a single distribution column value: 1
coalesce
---------------------------------------------------------------------
1
(1 row)
WITH update_article AS (
UPDATE authors_reference SET name = '' WHERE id = 0 RETURNING *
)
SELECT coalesce(1,random());
DEBUG: cannot router plan modification of a non-distributed table
DEBUG: generating subplan XXX_1 for CTE update_article: UPDATE public.authors_reference SET name = ''::character varying WHERE (id OPERATOR(pg_catalog.=) 0) RETURNING name, id
DEBUG: Creating router plan
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT COALESCE((1)::double precision, random()) AS "coalesce"
DEBUG: Creating router plan
coalesce
---------------------------------------------------------------------
1
(1 row)
WITH delete_article AS (
DELETE FROM articles_hash WHERE id = 1 AND word_count = 10 RETURNING *
)

@ -41,7 +41,7 @@ SELECT create_distributed_table('articles_hash', 'author_id');
(1 row)
CREATE TABLE authors_reference ( name varchar(20), id bigint );
CREATE TABLE authors_reference (id int, name text);
SELECT create_reference_table('authors_reference');
create_reference_table
---------------------------------------------------------------------
@ -2111,6 +2111,26 @@ DEBUG: query has a single distribution column value: 4
0
(1 row)
-- test INSERT using values from generate_series() and repeat() functions
INSERT INTO authors_reference (id, name) VALUES (generate_series(1, 10), repeat('Migjeni', 3));
DEBUG: Creating router plan
SELECT * FROM authors_reference ORDER BY 1, 2;
DEBUG: Distributed planning for a fast-path router query
DEBUG: Creating router plan
id | name
---------------------------------------------------------------------
1 | MigjeniMigjeniMigjeni
2 | MigjeniMigjeniMigjeni
3 | MigjeniMigjeniMigjeni
4 | MigjeniMigjeniMigjeni
5 | MigjeniMigjeniMigjeni
6 | MigjeniMigjeniMigjeni
7 | MigjeniMigjeniMigjeni
8 | MigjeniMigjeniMigjeni
9 | MigjeniMigjeniMigjeni
10 | MigjeniMigjeniMigjeni
(10 rows)
SET client_min_messages to 'NOTICE';
DROP FUNCTION author_articles_max_id();
DROP FUNCTION author_articles_id_word_count();

@ -43,6 +43,20 @@ BEGIN
END LOOP;
RETURN false;
END; $$ language plpgsql;
-- helper function that returns true if the output of the given EXPLAIN command contains "Distributed Subplan" (case-insensitive)
CREATE OR REPLACE FUNCTION explain_has_distributed_subplan(explain_commmand text)
RETURNS BOOLEAN AS $$
DECLARE
query_plan text;
BEGIN
FOR query_plan IN EXECUTE explain_commmand LOOP
IF query_plan ILIKE '%Distributed Subplan %_%'
THEN
RETURN true;
END IF;
END LOOP;
RETURN false;
END; $$ language plpgsql;
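A hypothetical call, only to illustrate how this helper is intended to be used; the query inside the EXPLAIN is made up and assumes a distributed table named test:

```sql
-- returns true when the plan contains a "Distributed Subplan" node,
-- i.e. when part of the query was planned recursively
SELECT explain_has_distributed_subplan($$
    EXPLAIN (COSTS OFF)
    SELECT count(*)
    FROM ((SELECT x, y FROM test) UNION ALL (SELECT y, x FROM test)) u
$$);
```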
-- helper function to quickly run SQL on the whole cluster
CREATE OR REPLACE FUNCTION run_command_on_coordinator_and_workers(p_sql text)
RETURNS void LANGUAGE plpgsql AS $$

@ -331,6 +331,111 @@ SELECT count(*) FROM pg_dist_transaction;
2
(1 row)
-- check that read-only participants skip prepare
SET citus.shard_count TO 4;
CREATE TABLE test_2pcskip (a int);
SELECT create_distributed_table('test_2pcskip', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO test_2pcskip SELECT i FROM generate_series(0, 5)i;
SELECT recover_prepared_transactions();
recover_prepared_transactions
---------------------------------------------------------------------
0
(1 row)
-- for the following test, ensure that 6 and 7 go to different shards on different workers
SELECT count(DISTINCT nodeport) FROM pg_dist_shard_placement WHERE shardid IN (get_shard_id_for_distribution_column('test_2pcskip', 6),get_shard_id_for_distribution_column('test_2pcskip', 7));
count
---------------------------------------------------------------------
2
(1 row)
-- only two of the connections will perform a write (INSERT)
SET citus.force_max_query_parallelization TO ON;
BEGIN;
-- these inserts use two connections
INSERT INTO test_2pcskip VALUES (6);
INSERT INTO test_2pcskip VALUES (7);
-- we know this will use more than two connections
SELECT count(*) FROM test_2pcskip;
count
---------------------------------------------------------------------
8
(1 row)
COMMIT;
SELECT count(*) FROM pg_dist_transaction;
count
---------------------------------------------------------------------
2
(1 row)
SELECT recover_prepared_transactions();
recover_prepared_transactions
---------------------------------------------------------------------
0
(1 row)
-- only two of the connections will perform a write (INSERT)
BEGIN;
-- this insert uses two connections
INSERT INTO test_2pcskip SELECT i FROM generate_series(6, 7)i;
-- we know this will use more than two connections
SELECT COUNT(*) FROM test_2pcskip;
count
---------------------------------------------------------------------
10
(1 row)
COMMIT;
SELECT count(*) FROM pg_dist_transaction;
count
---------------------------------------------------------------------
2
(1 row)
-- check that reads from a reference table don't trigger 2PC
-- despite repmodel being 2PC
CREATE TABLE test_reference (b int);
SELECT create_reference_table('test_reference');
create_reference_table
---------------------------------------------------------------------
(1 row)
INSERT INTO test_reference VALUES(1);
INSERT INTO test_reference VALUES(2);
SELECT recover_prepared_transactions();
recover_prepared_transactions
---------------------------------------------------------------------
0
(1 row)
BEGIN;
SELECT * FROM test_reference ORDER BY 1;
b
---------------------------------------------------------------------
1
2
(2 rows)
COMMIT;
SELECT count(*) FROM pg_dist_transaction;
count
---------------------------------------------------------------------
0
(1 row)
SELECT recover_prepared_transactions();
recover_prepared_transactions
---------------------------------------------------------------------
0
(1 row)
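A compact restatement of the invariant these outputs check: in a transaction that both writes to and reads from a distributed table, only the connections that performed a write take part in 2PC, so pg_dist_transaction ends up with one entry per writing connection. The sketch below reuses test_2pcskip and the values 6 and 7, which the earlier query confirmed live on different workers:

```sql
SET citus.force_max_query_parallelization TO ON;

BEGIN;
INSERT INTO test_2pcskip VALUES (6);   -- write over one connection
INSERT INTO test_2pcskip VALUES (7);   -- write over a second connection
SELECT count(*) FROM test_2pcskip;     -- read-only work may use more connections
COMMIT;

-- only the two writing connections should have gone through PREPARE
SELECT count(*) FROM pg_dist_transaction;
```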
-- Test whether auto-recovery runs
ALTER SYSTEM SET citus.recover_2pc_interval TO 10;
SELECT pg_reload_conf();
@ -362,6 +467,8 @@ SELECT pg_reload_conf();
DROP TABLE test_recovery_ref;
DROP TABLE test_recovery;
DROP TABLE test_recovery_single;
DROP TABLE test_2pcskip;
DROP TABLE test_reference;
SELECT 1 FROM master_remove_node('localhost', :master_port);
?column?
---------------------------------------------------------------------

@ -3,6 +3,7 @@
CREATE SCHEMA null_parameters;
SET search_path TO null_parameters;
SET citus.next_shard_id TO 1680000;
SET citus.shard_count to 32;
CREATE TABLE text_dist_column (key text, value text);
SELECT create_distributed_table('text_dist_column', 'key');
create_distributed_table

@ -472,9 +472,9 @@ SELECT * FROM generated_stored_dist ORDER BY 1,2,3;
INSERT INTO generated_stored_dist VALUES (1, 'text_1'), (2, 'text_2');
SELECT alter_distributed_table('generated_stored_dist', shard_count := 5, cascade_to_colocated := false);
NOTICE: creating a new table for test_pg12.generated_stored_dist
NOTICE: Moving the data of test_pg12.generated_stored_dist
NOTICE: Dropping the old test_pg12.generated_stored_dist
NOTICE: Renaming the new table to test_pg12.generated_stored_dist
NOTICE: moving the data of test_pg12.generated_stored_dist
NOTICE: dropping the old test_pg12.generated_stored_dist
NOTICE: renaming the new table to test_pg12.generated_stored_dist
alter_distributed_table
---------------------------------------------------------------------
@ -533,9 +533,9 @@ create table generated_stored_columnar_p0 partition of generated_stored_columnar
create table generated_stored_columnar_p1 partition of generated_stored_columnar for values from (10) to (20);
SELECT alter_table_set_access_method('generated_stored_columnar_p0', 'columnar');
NOTICE: creating a new table for test_pg12.generated_stored_columnar_p0
NOTICE: Moving the data of test_pg12.generated_stored_columnar_p0
NOTICE: Dropping the old test_pg12.generated_stored_columnar_p0
NOTICE: Renaming the new table to test_pg12.generated_stored_columnar_p0
NOTICE: moving the data of test_pg12.generated_stored_columnar_p0
NOTICE: dropping the old test_pg12.generated_stored_columnar_p0
NOTICE: renaming the new table to test_pg12.generated_stored_columnar_p0
alter_table_set_access_method
---------------------------------------------------------------------
@ -568,9 +568,9 @@ SELECT * FROM generated_stored_ref ORDER BY 1,2,3,4,5;
BEGIN;
SELECT undistribute_table('generated_stored_ref');
NOTICE: creating a new table for test_pg12.generated_stored_ref
NOTICE: Moving the data of test_pg12.generated_stored_ref
NOTICE: Dropping the old test_pg12.generated_stored_ref
NOTICE: Renaming the new table to test_pg12.generated_stored_ref
NOTICE: moving the data of test_pg12.generated_stored_ref
NOTICE: dropping the old test_pg12.generated_stored_ref
NOTICE: renaming the new table to test_pg12.generated_stored_ref
undistribute_table
---------------------------------------------------------------------
@ -600,9 +600,9 @@ BEGIN;
-- show that undistribute_table works fine
SELECT undistribute_table('generated_stored_ref');
NOTICE: creating a new table for test_pg12.generated_stored_ref
NOTICE: Moving the data of test_pg12.generated_stored_ref
NOTICE: Dropping the old test_pg12.generated_stored_ref
NOTICE: Renaming the new table to test_pg12.generated_stored_ref
NOTICE: moving the data of test_pg12.generated_stored_ref
NOTICE: dropping the old test_pg12.generated_stored_ref
NOTICE: renaming the new table to test_pg12.generated_stored_ref
undistribute_table
---------------------------------------------------------------------
@ -630,9 +630,9 @@ BEGIN;
-- show that undistribute_table works fine
SELECT undistribute_table('generated_stored_ref');
NOTICE: creating a new table for test_pg12.generated_stored_ref
NOTICE: Moving the data of test_pg12.generated_stored_ref
NOTICE: Dropping the old test_pg12.generated_stored_ref
NOTICE: Renaming the new table to test_pg12.generated_stored_ref
NOTICE: moving the data of test_pg12.generated_stored_ref
NOTICE: dropping the old test_pg12.generated_stored_ref
NOTICE: renaming the new table to test_pg12.generated_stored_ref
undistribute_table
---------------------------------------------------------------------
@ -650,8 +650,22 @@ SELECT citus_remove_node('localhost', :master_port);
(1 row)
CREATE TABLE superuser_columnar_table (a int) USING columnar;
CREATE USER read_access;
NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SET ROLE read_access;
-- user shouldn't be able to execute alter_columnar_table_set
-- or alter_columnar_table_reset for a columnar table that it
-- doesn't own
SELECT alter_columnar_table_set('test_pg12.superuser_columnar_table', chunk_group_row_limit => 100);
ERROR: permission denied for schema test_pg12
SELECT alter_columnar_table_reset('test_pg12.superuser_columnar_table');
ERROR: permission denied for schema test_pg12
RESET ROLE;
DROP USER read_access;
\set VERBOSITY terse
drop schema test_pg12 cascade;
NOTICE: drop cascades to 15 other objects
NOTICE: drop cascades to 16 other objects
\set VERBOSITY default
SET citus.shard_replication_factor to 2;

@ -155,9 +155,9 @@ TRUNCATE with_ties_table_2;
-- test INSERT SELECTs into distributed table with a different distribution column
SELECT undistribute_table('with_ties_table_2');
NOTICE: creating a new table for public.with_ties_table_2
NOTICE: Moving the data of public.with_ties_table_2
NOTICE: Dropping the old public.with_ties_table_2
NOTICE: Renaming the new table to public.with_ties_table_2
NOTICE: moving the data of public.with_ties_table_2
NOTICE: dropping the old public.with_ties_table_2
NOTICE: renaming the new table to public.with_ties_table_2
undistribute_table
---------------------------------------------------------------------

@ -317,6 +317,14 @@ DEBUG: Router planner cannot handle multi-shard select queries
SELECT * FROM ((SELECT x, y FROM test) UNION ALL (SELECT y, x FROM test)) u ORDER BY 1,2;
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for subquery SELECT x, y FROM recursive_union.test
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_2 for subquery SELECT y, x FROM recursive_union.test
DEBUG: Creating router plan
DEBUG: generating subplan XXX_3 for subquery SELECT intermediate_result.x, intermediate_result.y FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(x integer, y integer) UNION ALL SELECT intermediate_result.y, intermediate_result.x FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(y integer, x integer)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT x, y FROM (SELECT intermediate_result.x, intermediate_result.y FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(x integer, y integer)) u ORDER BY x, y
DEBUG: Creating router plan
x | y
---------------------------------------------------------------------
1 | 1
@ -829,6 +837,57 @@ DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
ERROR: cannot compute aggregate (distinct)
DETAIL: table partitioning is unsuitable for aggregate (distinct)
/* these are not safe to push down as the partition key index is different */
SELECT COUNT(*) FROM ((SELECT x,y FROM test) UNION ALL (SELECT y,x FROM test)) u;
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for subquery SELECT x, y FROM recursive_union.test
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_2 for subquery SELECT y, x FROM recursive_union.test
DEBUG: Creating router plan
DEBUG: generating subplan XXX_3 for subquery SELECT intermediate_result.x, intermediate_result.y FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(x integer, y integer) UNION ALL SELECT intermediate_result.y, intermediate_result.x FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(y integer, x integer)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.x, intermediate_result.y FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(x integer, y integer)) u
DEBUG: Creating router plan
count
---------------------------------------------------------------------
4
(1 row)
/* this is safe to push down since the partition key index is the same */
SELECT COUNT(*) FROM (SELECT x,y FROM test UNION ALL SELECT x,y FROM test) foo;
DEBUG: Router planner cannot handle multi-shard select queries
count
---------------------------------------------------------------------
4
(1 row)
SELECT COUNT(*) FROM
((SELECT x,y FROM test UNION ALL SELECT x,y FROM test)
UNION ALL
(SELECT x,y FROM test UNION ALL SELECT x,y FROM test)) foo;
DEBUG: Router planner cannot handle multi-shard select queries
count
---------------------------------------------------------------------
8
(1 row)
SELECT COUNT(*)
FROM
(SELECT user_id AS user_id
FROM
(SELECT x AS user_id
FROM test
UNION ALL SELECT x AS user_id
FROM test) AS bar
UNION ALL SELECT x AS user_id
FROM test) AS fool LIMIT 1;
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: push down of limit count: 1
count
---------------------------------------------------------------------
6
(1 row)
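In short, the distinction drawn by the comments above: a UNION ALL can be pushed down only when the distribution column appears in the same position on every branch; otherwise each branch is planned recursively as an intermediate result. Restated with the test table from this file (x is presumably the distribution column):

```sql
-- pushed down: x is in the same position on both sides
SELECT count(*) FROM (SELECT x, y FROM test UNION ALL SELECT x, y FROM test) u;

-- not pushed down: the second branch puts y first, so the partition key index
-- differs and both branches become read_intermediate_result() subplans
SELECT count(*) FROM (SELECT x, y FROM test UNION ALL SELECT y, x FROM test) u;
```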
-- one of the leaves is a repartition join
SET citus.enable_repartition_joins TO ON;
-- repartition is recursively planned before the set operation

@ -807,6 +807,7 @@ SELECT pg_sleep(0.1);
(1 row)
-- cache connections to the nodes
SET citus.force_max_query_parallelization TO ON;
SELECT count(*) FROM test;
count
---------------------------------------------------------------------
@ -823,6 +824,28 @@ BEGIN;
(0 rows)
COMMIT;
-- should close all connections
SET citus.max_cached_connection_lifetime TO '0s';
SELECT count(*) FROM test;
count
---------------------------------------------------------------------
155
(1 row)
-- show that no connections are cached
SELECT
connection_count_to_node
FROM
citus_remote_connection_stats()
WHERE
port IN (SELECT node_port FROM master_get_active_worker_nodes()) AND
database_name = 'regression'
ORDER BY
hostname, port;
connection_count_to_node
---------------------------------------------------------------------
(0 rows)
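Reading the two pieces together: citus.max_cached_connection_lifetime bounds how long idle cached worker connections are kept ('0s' above closes them right away), and citus_remote_connection_stats() reports what is currently cached. A small sketch with an illustrative, non-zero lifetime:

```sql
-- keep idle worker connections cached for at most 10 minutes (illustrative value)
SET citus.max_cached_connection_lifetime TO '10min';

SELECT count(*) FROM test;   -- opens, and afterwards caches, worker connections

-- inspect the cached connections per node
SELECT hostname, port, connection_count_to_node
FROM citus_remote_connection_stats()
WHERE database_name = 'regression'
ORDER BY hostname, port;
```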
-- in case other tests rely on these settings, reset them
ALTER SYSTEM RESET citus.distributed_deadlock_detection_factor;
ALTER SYSTEM RESET citus.recover_2pc_interval;

@ -289,9 +289,9 @@ COMMIT;
-- to test citus local tables
select undistribute_table('upsert_test');
NOTICE: creating a new table for single_node.upsert_test
NOTICE: Moving the data of single_node.upsert_test
NOTICE: Dropping the old single_node.upsert_test
NOTICE: Renaming the new table to single_node.upsert_test
NOTICE: moving the data of single_node.upsert_test
NOTICE: dropping the old single_node.upsert_test
NOTICE: renaming the new table to single_node.upsert_test
undistribute_table
---------------------------------------------------------------------
@ -694,6 +694,76 @@ SELECT * FROM collections_list, collections_list_0 WHERE collections_list.key=co
100 | 0 | 10000 | 100 | 0 | 10000
(1 row)
-- test hash distribution using INSERT with generate_series() function
CREATE OR REPLACE FUNCTION part_hashint4_noop(value int4, seed int8)
RETURNS int8 AS $$
SELECT value + seed;
$$ LANGUAGE SQL IMMUTABLE;
CREATE OPERATOR CLASS part_test_int4_ops
FOR TYPE int4
USING HASH AS
operator 1 =,
function 2 part_hashint4_noop(int4, int8);
CREATE TABLE hash_parted (
a int,
b int
) PARTITION BY HASH (a part_test_int4_ops);
CREATE TABLE hpart0 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 0);
CREATE TABLE hpart1 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 1);
CREATE TABLE hpart2 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 2);
CREATE TABLE hpart3 PARTITION OF hash_parted FOR VALUES WITH (modulus 4, remainder 3);
SELECT create_distributed_table('hash_parted ', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO hash_parted VALUES (1, generate_series(1, 10));
SELECT * FROM hash_parted ORDER BY 1, 2;
a | b
---------------------------------------------------------------------
1 | 1
1 | 2
1 | 3
1 | 4
1 | 5
1 | 6
1 | 7
1 | 8
1 | 9
1 | 10
(10 rows)
ALTER TABLE hash_parted DETACH PARTITION hpart0;
ALTER TABLE hash_parted DETACH PARTITION hpart1;
ALTER TABLE hash_parted DETACH PARTITION hpart2;
ALTER TABLE hash_parted DETACH PARTITION hpart3;
-- test a range-partitioned table without creating partitions and inserting with generate_series()
-- should error out even in plain PG since no partition of relation "parent_tab" is found for the row
-- in Citus it errors out because it fails to evaluate the partition key in the insert
CREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);
SELECT create_distributed_table('parent_tab', 'id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO parent_tab VALUES (generate_series(0, 3));
ERROR: failed to evaluate partition key in insert
HINT: try using constant values for partition column
-- now it should work
CREATE TABLE parent_tab_1_2 PARTITION OF parent_tab FOR VALUES FROM (1) to (2);
ALTER TABLE parent_tab ADD COLUMN b int;
INSERT INTO parent_tab VALUES (1, generate_series(0, 3));
SELECT * FROM parent_tab ORDER BY 1, 2;
id | b
---------------------------------------------------------------------
1 | 0
1 | 1
1 | 2
1 | 3
(4 rows)
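The two inserts above differ only in where the set-returning function sits: per the HINT, the distribution (and partition) column needs a constant value so the row can be routed, while other columns may still be filled from generate_series(). Restated:

```sql
-- fails: the partition/distribution column itself is generate_series(...)
INSERT INTO parent_tab VALUES (generate_series(0, 3));

-- works: the distribution column is the constant 1, and generate_series()
-- only supplies values for the non-distribution column b
INSERT INTO parent_tab VALUES (1, generate_series(0, 3));
```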
-- make sure that parallel accesses work fine
SET citus.force_max_query_parallelization TO ON;
SELECT * FROM test_2 ORDER BY 1 DESC;
@ -1125,9 +1195,9 @@ RESET citus.task_executor_type;
ALTER TABLE test DROP CONSTRAINT foreign_key;
SELECT undistribute_table('test_2');
NOTICE: creating a new table for single_node.test_2
NOTICE: Moving the data of single_node.test_2
NOTICE: Dropping the old single_node.test_2
NOTICE: Renaming the new table to single_node.test_2
NOTICE: moving the data of single_node.test_2
NOTICE: dropping the old single_node.test_2
NOTICE: renaming the new table to single_node.test_2
undistribute_table
---------------------------------------------------------------------
@ -1176,28 +1246,28 @@ ALTER TABLE partitioned_table_1 ADD CONSTRAINT fkey_5 FOREIGN KEY (col_1) REFERE
SELECT undistribute_table('partitioned_table_1', cascade_via_foreign_keys=>true);
NOTICE: converting the partitions of single_node.partitioned_table_1
NOTICE: creating a new table for single_node.partitioned_table_1_100_200
NOTICE: Moving the data of single_node.partitioned_table_1_100_200
NOTICE: Dropping the old single_node.partitioned_table_1_100_200
NOTICE: Renaming the new table to single_node.partitioned_table_1_100_200
NOTICE: moving the data of single_node.partitioned_table_1_100_200
NOTICE: dropping the old single_node.partitioned_table_1_100_200
NOTICE: renaming the new table to single_node.partitioned_table_1_100_200
NOTICE: creating a new table for single_node.partitioned_table_1_200_300
NOTICE: Moving the data of single_node.partitioned_table_1_200_300
NOTICE: Dropping the old single_node.partitioned_table_1_200_300
NOTICE: Renaming the new table to single_node.partitioned_table_1_200_300
NOTICE: moving the data of single_node.partitioned_table_1_200_300
NOTICE: dropping the old single_node.partitioned_table_1_200_300
NOTICE: renaming the new table to single_node.partitioned_table_1_200_300
NOTICE: creating a new table for single_node.partitioned_table_1
NOTICE: Dropping the old single_node.partitioned_table_1
NOTICE: Renaming the new table to single_node.partitioned_table_1
NOTICE: dropping the old single_node.partitioned_table_1
NOTICE: renaming the new table to single_node.partitioned_table_1
NOTICE: creating a new table for single_node.reference_table_1
NOTICE: Moving the data of single_node.reference_table_1
NOTICE: Dropping the old single_node.reference_table_1
NOTICE: Renaming the new table to single_node.reference_table_1
NOTICE: moving the data of single_node.reference_table_1
NOTICE: dropping the old single_node.reference_table_1
NOTICE: renaming the new table to single_node.reference_table_1
NOTICE: creating a new table for single_node.distributed_table_1
NOTICE: Moving the data of single_node.distributed_table_1
NOTICE: Dropping the old single_node.distributed_table_1
NOTICE: Renaming the new table to single_node.distributed_table_1
NOTICE: moving the data of single_node.distributed_table_1
NOTICE: dropping the old single_node.distributed_table_1
NOTICE: renaming the new table to single_node.distributed_table_1
NOTICE: creating a new table for single_node.citus_local_table_1
NOTICE: Moving the data of single_node.citus_local_table_1
NOTICE: Dropping the old single_node.citus_local_table_1
NOTICE: Renaming the new table to single_node.citus_local_table_1
NOTICE: moving the data of single_node.citus_local_table_1
NOTICE: dropping the old single_node.citus_local_table_1
NOTICE: renaming the new table to single_node.citus_local_table_1
undistribute_table
---------------------------------------------------------------------
@@ -1785,6 +1855,142 @@ NOTICE: executing the command locally: SELECT bool_and((z IS NULL)) AS bool_and
(1 row)
RESET citus.local_copy_flush_threshold;
RESET citus.local_copy_flush_threshold;
CREATE OR REPLACE FUNCTION coordinated_transaction_should_use_2PC()
RETURNS BOOL LANGUAGE C STRICT VOLATILE AS 'citus',
$$coordinated_transaction_should_use_2PC$$;
-- a multi-shard/single-shard select that is failed over to local
-- execution doesn't start a 2PC
BEGIN;
SELECT count(*) FROM another_schema_table;
NOTICE: executing the command locally: SELECT count(*) AS count FROM single_node.another_schema_table_90630511 another_schema_table WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM single_node.another_schema_table_90630512 another_schema_table WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM single_node.another_schema_table_90630513 another_schema_table WHERE true
NOTICE: executing the command locally: SELECT count(*) AS count FROM single_node.another_schema_table_90630514 another_schema_table WHERE true
count
---------------------------------------------------------------------
10001
(1 row)
SELECT count(*) FROM another_schema_table WHERE a = 1;
NOTICE: executing the command locally: SELECT count(*) AS count FROM single_node.another_schema_table_90630511 another_schema_table WHERE (a OPERATOR(pg_catalog.=) 1)
count
---------------------------------------------------------------------
1
(1 row)
WITH cte_1 as (SELECT * FROM another_schema_table LIMIT 10)
SELECT count(*) FROM cte_1;
NOTICE: executing the command locally: SELECT a, b FROM single_node.another_schema_table_90630511 another_schema_table WHERE true LIMIT '10'::bigint
NOTICE: executing the command locally: SELECT a, b FROM single_node.another_schema_table_90630512 another_schema_table WHERE true LIMIT '10'::bigint
NOTICE: executing the command locally: SELECT a, b FROM single_node.another_schema_table_90630513 another_schema_table WHERE true LIMIT '10'::bigint
NOTICE: executing the command locally: SELECT a, b FROM single_node.another_schema_table_90630514 another_schema_table WHERE true LIMIT '10'::bigint
NOTICE: executing the command locally: SELECT count(*) AS count FROM (SELECT intermediate_result.a, intermediate_result.b FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(a integer, b integer)) cte_1
count
---------------------------------------------------------------------
10
(1 row)
WITH cte_1 as (SELECT * FROM another_schema_table WHERE a = 1 LIMIT 10)
SELECT count(*) FROM cte_1;
NOTICE: executing the command locally: SELECT count(*) AS count FROM (SELECT another_schema_table.a, another_schema_table.b FROM single_node.another_schema_table_90630511 another_schema_table WHERE (another_schema_table.a OPERATOR(pg_catalog.=) 1) LIMIT 10) cte_1
count
---------------------------------------------------------------------
1
(1 row)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
f
(1 row)
ROLLBACK;
-- same without a transaction block
WITH cte_1 AS (SELECT count(*) as cnt FROM another_schema_table LIMIT 1000),
cte_2 AS (SELECT coordinated_transaction_should_use_2PC() as enabled_2pc)
SELECT cnt, enabled_2pc FROM cte_1, cte_2;
NOTICE: executing the command locally: SELECT count(*) AS cnt FROM single_node.another_schema_table_90630511 another_schema_table WHERE true LIMIT '1000'::bigint
NOTICE: executing the command locally: SELECT count(*) AS cnt FROM single_node.another_schema_table_90630512 another_schema_table WHERE true LIMIT '1000'::bigint
NOTICE: executing the command locally: SELECT count(*) AS cnt FROM single_node.another_schema_table_90630513 another_schema_table WHERE true LIMIT '1000'::bigint
NOTICE: executing the command locally: SELECT count(*) AS cnt FROM single_node.another_schema_table_90630514 another_schema_table WHERE true LIMIT '1000'::bigint
NOTICE: executing the command locally: SELECT cte_1.cnt, cte_2.enabled_2pc FROM (SELECT intermediate_result.cnt FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(cnt bigint)) cte_1, (SELECT intermediate_result.enabled_2pc FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(enabled_2pc boolean)) cte_2
cnt | enabled_2pc
---------------------------------------------------------------------
10001 | f
(1 row)
-- a multi-shard modification that is failed over to local
-- execution starts a 2PC
BEGIN;
UPDATE another_schema_table SET b = b + 1;
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630511 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1)
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630512 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1)
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630513 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1)
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630514 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
ROLLBACK;
-- a multi-shard modification that is failed over to local
-- execution starts a 2PC
BEGIN;
WITH cte_1 AS (UPDATE another_schema_table SET b = b + 1 RETURNING *)
SELECT count(*) FROM cte_1;
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630511 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630512 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630513 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630514 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: SELECT count(*) AS count FROM (SELECT intermediate_result.a, intermediate_result.b FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(a integer, b integer)) cte_1
count
---------------------------------------------------------------------
10001
(1 row)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
ROLLBACK;
-- same without transaction block
WITH cte_1 AS (UPDATE another_schema_table SET b = b + 1 RETURNING *)
SELECT coordinated_transaction_should_use_2PC();
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630511 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630512 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630513 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630514 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) RETURNING a, b
NOTICE: executing the command locally: SELECT single_node.coordinated_transaction_should_use_2pc() AS coordinated_transaction_should_use_2pc
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
-- a single-shard modification that is failed over to local
-- starts 2PC execution
BEGIN;
UPDATE another_schema_table SET b = b + 1 WHERE a = 1;
NOTICE: executing the command locally: UPDATE single_node.another_schema_table_90630511 another_schema_table SET b = (b OPERATOR(pg_catalog.+) 1) WHERE (a OPERATOR(pg_catalog.=) 1)
SELECT coordinated_transaction_should_use_2PC();
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
ROLLBACK;
-- same without transaction block
WITH cte_1 AS (UPDATE another_schema_table SET b = b + 1 WHERE a = 1 RETURNING *)
SELECT coordinated_transaction_should_use_2PC() FROM cte_1;
NOTICE: executing the command locally: WITH cte_1 AS (UPDATE single_node.another_schema_table_90630511 another_schema_table SET b = (another_schema_table.b OPERATOR(pg_catalog.+) 1) WHERE (another_schema_table.a OPERATOR(pg_catalog.=) 1) RETURNING another_schema_table.a, another_schema_table.b) SELECT single_node.coordinated_transaction_should_use_2pc() AS coordinated_transaction_should_use_2pc FROM cte_1
coordinated_transaction_should_use_2pc
---------------------------------------------------------------------
t
(1 row)
-- if the local execution is disabled, we cannot failover to
-- local execution and the queries would fail
SET citus.enable_local_execution TO false;
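The hunk ends here; as a hedged illustration of the comment above (the actual failing statements and their error text are in the part of the file not shown in this diff), queries that would have fallen back to local execution are expected to fail instead once the setting is off.

```sql
-- Illustrative only; exact error output is not reproduced here.
SELECT count(*) FROM another_schema_table;  -- expected to error with local execution disabled
```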


@@ -35,15 +35,15 @@ SELECT * FROM dist_table ORDER BY 1, 2, 3;
-- the name->OID conversion happens at parse time.
SELECT undistribute_table('dist_table'), create_distributed_table('dist_table', 'a');
NOTICE: creating a new table for undistribute_table.dist_table
NOTICE: Moving the data of undistribute_table.dist_table
NOTICE: Dropping the old undistribute_table.dist_table
NOTICE: Renaming the new table to undistribute_table.dist_table
NOTICE: moving the data of undistribute_table.dist_table
NOTICE: dropping the old undistribute_table.dist_table
NOTICE: renaming the new table to undistribute_table.dist_table
ERROR: relation with OID XXXX does not exist
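A hedged sketch (not in this hunk): splitting the two calls into separate statements sidesteps the parse-time OID binding noted above, since the second statement is only parsed after undistribute_table has swapped the relation.

```sql
-- Illustrative alternative to the combined call that errored above.
SELECT undistribute_table('dist_table');
SELECT create_distributed_table('dist_table', 'a');
```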
SELECT undistribute_table('dist_table');
NOTICE: creating a new table for undistribute_table.dist_table
NOTICE: Moving the data of undistribute_table.dist_table
NOTICE: Dropping the old undistribute_table.dist_table
NOTICE: Renaming the new table to undistribute_table.dist_table
NOTICE: moving the data of undistribute_table.dist_table
NOTICE: dropping the old undistribute_table.dist_table
NOTICE: renaming the new table to undistribute_table.dist_table
undistribute_table
---------------------------------------------------------------------
@@ -88,9 +88,9 @@ SELECT * FROM pg_indexes WHERE tablename = 'dist_table';
SELECT undistribute_table('dist_table');
NOTICE: creating a new table for undistribute_table.dist_table
NOTICE: Moving the data of undistribute_table.dist_table
NOTICE: Dropping the old undistribute_table.dist_table
NOTICE: Renaming the new table to undistribute_table.dist_table
NOTICE: moving the data of undistribute_table.dist_table
NOTICE: dropping the old undistribute_table.dist_table
NOTICE: renaming the new table to undistribute_table.dist_table
undistribute_table
---------------------------------------------------------------------
@@ -204,16 +204,16 @@ HINT: the parent table is "partitioned_table"
SELECT undistribute_table('partitioned_table');
NOTICE: converting the partitions of undistribute_table.partitioned_table
NOTICE: creating a new table for undistribute_table.partitioned_table_1_5
NOTICE: Moving the data of undistribute_table.partitioned_table_1_5
NOTICE: Dropping the old undistribute_table.partitioned_table_1_5
NOTICE: Renaming the new table to undistribute_table.partitioned_table_1_5
NOTICE: moving the data of undistribute_table.partitioned_table_1_5
NOTICE: dropping the old undistribute_table.partitioned_table_1_5
NOTICE: renaming the new table to undistribute_table.partitioned_table_1_5
NOTICE: creating a new table for undistribute_table.partitioned_table_6_10
NOTICE: Moving the data of undistribute_table.partitioned_table_6_10
NOTICE: Dropping the old undistribute_table.partitioned_table_6_10
NOTICE: Renaming the new table to undistribute_table.partitioned_table_6_10
NOTICE: moving the data of undistribute_table.partitioned_table_6_10
NOTICE: dropping the old undistribute_table.partitioned_table_6_10
NOTICE: renaming the new table to undistribute_table.partitioned_table_6_10
NOTICE: creating a new table for undistribute_table.partitioned_table
NOTICE: Dropping the old undistribute_table.partitioned_table
NOTICE: Renaming the new table to undistribute_table.partitioned_table
NOTICE: dropping the old undistribute_table.partitioned_table
NOTICE: renaming the new table to undistribute_table.partitioned_table
undistribute_table
---------------------------------------------------------------------
@@ -283,9 +283,9 @@ SELECT * FROM seq_table ORDER BY a;
SELECT undistribute_table('seq_table');
NOTICE: creating a new table for undistribute_table.seq_table
NOTICE: Moving the data of undistribute_table.seq_table
NOTICE: Dropping the old undistribute_table.seq_table
NOTICE: Renaming the new table to undistribute_table.seq_table
NOTICE: moving the data of undistribute_table.seq_table
NOTICE: dropping the old undistribute_table.seq_table
NOTICE: renaming the new table to undistribute_table.seq_table
undistribute_table
---------------------------------------------------------------------
@@ -348,14 +348,14 @@ SELECT * FROM another_schema.undis_view3 ORDER BY 1, 2;
SELECT undistribute_table('view_table');
NOTICE: creating a new table for undistribute_table.view_table
NOTICE: Moving the data of undistribute_table.view_table
NOTICE: Dropping the old undistribute_table.view_table
NOTICE: moving the data of undistribute_table.view_table
NOTICE: dropping the old undistribute_table.view_table
NOTICE: drop cascades to 3 other objects
DETAIL: drop cascades to view undis_view1
drop cascades to view undis_view2
drop cascades to view another_schema.undis_view3
CONTEXT: SQL statement "DROP TABLE undistribute_table.view_table CASCADE"
NOTICE: Renaming the new table to undistribute_table.view_table
NOTICE: renaming the new table to undistribute_table.view_table
undistribute_table
---------------------------------------------------------------------

File diff suppressed because one or more lines are too long


@@ -55,6 +55,7 @@ ORDER BY 1;
function citus_executor_name(integer)
function citus_extradata_container(internal)
function citus_finish_pg_upgrade()
function citus_get_active_worker_nodes()
function citus_internal.columnar_ensure_objects_exist()
function citus_internal.find_groupid_for_node(text,integer)
function citus_internal.pg_dist_node_trigger_func()
@@ -242,5 +243,5 @@ ORDER BY 1;
view citus_worker_stat_activity
view pg_dist_shard_placement
view time_partitions
(226 rows)
(227 rows)


@@ -52,6 +52,7 @@ ORDER BY 1;
function citus_executor_name(integer)
function citus_extradata_container(internal)
function citus_finish_pg_upgrade()
function citus_get_active_worker_nodes()
function citus_internal.columnar_ensure_objects_exist()
function citus_internal.find_groupid_for_node(text,integer)
function citus_internal.pg_dist_node_trigger_func()
@@ -238,5 +239,5 @@ ORDER BY 1;
view citus_worker_stat_activity
view pg_dist_shard_placement
view time_partitions
(222 rows)
(223 rows)


@@ -822,6 +822,69 @@ EXECUTE olu(1,ARRAY[1,2],ARRAY[1,2]);
{1} | {1} | {NULL}
(1 row)
-- test insert query with insert CTE
WITH insert_cte AS
(INSERT INTO with_modifying.modify_table VALUES (23, 7))
INSERT INTO with_modifying.anchor_table VALUES (1998);
SELECT * FROM with_modifying.modify_table WHERE id = 23 AND val = 7;
id | val
---------------------------------------------------------------------
23 | 7
(1 row)
SELECT * FROM with_modifying.anchor_table WHERE id = 1998;
id
---------------------------------------------------------------------
1998
(1 row)
-- test insert query with multiple CTEs
WITH select_cte AS (SELECT * FROM with_modifying.anchor_table),
modifying_cte AS (INSERT INTO with_modifying.anchor_table SELECT * FROM select_cte)
INSERT INTO with_modifying.anchor_table VALUES (1995);
SELECT * FROM with_modifying.anchor_table ORDER BY 1;
id
---------------------------------------------------------------------
1
1
2
2
1995
1998
1998
(7 rows)
-- test with returning
WITH returning_cte AS (INSERT INTO with_modifying.anchor_table values (1997) RETURNING *)
INSERT INTO with_modifying.anchor_table VALUES (1996);
SELECT * FROM with_modifying.anchor_table WHERE id IN (1996, 1997) ORDER BY 1;
id
---------------------------------------------------------------------
1996
1997
(2 rows)
-- test insert query with select CTE
WITH select_cte AS
(SELECT * FROM with_modifying.modify_table)
INSERT INTO with_modifying.anchor_table VALUES (1990);
SELECT * FROM with_modifying.anchor_table WHERE id = 1990;
id
---------------------------------------------------------------------
1990
(1 row)
-- even if we do multi-row insert, it is not fast path router due to cte
WITH select_cte AS (SELECT 1 AS col)
INSERT INTO with_modifying.anchor_table VALUES (1991), (1992);
SELECT * FROM with_modifying.anchor_table WHERE id IN (1991, 1992) ORDER BY 1;
id
---------------------------------------------------------------------
1991
1992
(2 rows)
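For contrast, a hedged sketch (not part of the test): per the comment above it is the CTE that rules out fast-path router planning, so the same multi-row insert issued without a CTE does not hit that particular restriction.

```sql
-- Illustrative only; values chosen to avoid the ids used by the test.
INSERT INTO with_modifying.anchor_table VALUES (1993), (1994);
```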
DELETE FROM with_modifying.anchor_table WHERE id IN (1990, 1991, 1992, 1995, 1996, 1997, 1998);
-- Test with replication factor 2
SET citus.shard_replication_factor to 2;
DROP TABLE modify_table;


@@ -72,7 +72,7 @@ test: multi_create_fdw
# ----------
# Tests for recursive subquery planning
# ----------
# NOTE: The next 6 were in parallel originally, but we got "too many
# NOTE: The next 7 were in parallel originally, but we got "too many
# connection" errors on CI. Requires investigation before doing them in
# parallel again.
test: subquery_basics
@@ -80,6 +80,7 @@ test: subquery_local_tables
test: subquery_executors
test: subquery_and_cte
test: set_operations
test: union_pushdown
test: set_operation_and_local_tables
test: subqueries_deep subquery_view subquery_partitioning subqueries_not_supported
@@ -96,6 +97,11 @@ test: tableam
test: propagate_statistics
test: pg13_propagate_statistics
# ----------
# Test for updating table statistics
# ----------
test: citus_update_table_statistics
# ----------
# Miscellaneous tests to check our query planning behavior
# ----------

Some files were not shown because too many files have changed in this diff.