Compare commits

...

49 Commits
main ... v9.4.4

Author SHA1 Message Date
Gürkan İndibay d0000a15bd
Bump version to 9.4.4 (#4468)
Update configuration files version info
2021-01-04 13:56:34 +03:00
gurkanindibay a5adc49077 Add changelog entry for version 9.4.4
Conflicts:
	CHANGELOG.md

 Committing a cherry-pick with sha 19977792d8
2020-12-30 13:12:58 +00:00
Onur Tirtir 7233c27533 Not consider single shard hash dist. tables as replicated (#4413)
(cherry picked from commit 0eb5701658)
2020-12-22 16:59:32 +03:00
Onur Tirtir 51f422f3c6
Add some more tests with views to test recursive planning on views (#4404) 2020-12-18 17:42:35 +03:00
Marco Slot 8fae9aae96
Reliably detect local tables in router queries in 9.4 (#4418)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-12-17 13:38:16 +01:00
Onur Tirtir 59774b1dd4 Handle invalid connection hash entries (#4362)
If MemoryContextAlloc errors out (e.g. during an OOM), ConnectionHashEntry->connections
stays NULL.

With this commit, we add an isValid flag to ConnectionHashEntry that is set to true
only after we allocate and initialize the ConnectionHashEntry->connections list properly,
and we check it before accessing ConnectionHashEntry->connections.
(A minimal sketch of this pattern follows this entry.)
(cherry picked from commit 7f3d1182ed)

 Conflicts:
	src/backend/distributed/connection/connection_management.c
2020-12-01 11:09:33 +03:00
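A minimal sketch of the isValid pattern described in the commit above. The field and function names (ConnectionHashEntry, connections, MemoryContextAlloc, dlist_init) follow the connection_management.c diff further down this page, but the surrounding structure is simplified for illustration and is not the exact Citus code.

/* sketch: mark the hash entry valid only after its connections list exists */
#include "postgres.h"
#include "lib/ilist.h"
#include "utils/memutils.h"

typedef struct ConnectionHashEntry
{
	/* hash key fields omitted for brevity */
	dlist_head *connections;
	bool isValid;   /* true only once 'connections' is fully initialized */
} ConnectionHashEntry;

static void
InitConnectionHashEntry(ConnectionHashEntry *entry, MemoryContext connectionContext)
{
	/*
	 * Mark invalid first: if the allocation below throws (e.g. under OOM),
	 * later hash scans simply skip this half-built entry instead of
	 * dereferencing a NULL connections list.
	 */
	entry->isValid = false;

	entry->connections = MemoryContextAlloc(connectionContext, sizeof(dlist_head));
	dlist_init(entry->connections);

	/* only now is the entry safe to use */
	entry->isValid = true;
}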
Onur Tirtir 221aa1a381 Update CHANGELOG for 9.4.3
(cherry picked from commit 76a429f19b)

 Conflicts:
	CHANGELOG.md
2020-11-24 13:21:08 +03:00
Onur Tirtir 7e99324bd9 Bump version to 9.4.3 2020-11-24 12:19:15 +03:00
Önder Kalacı 447c7ecdd4 Enable parallel query on EXPLAIN ANALYZE (#4325)
It seems that we forgot to pass the relevant
flag to enable Postgres' parallel query
capabilities on the shards when the user runs
EXPLAIN ANALYZE on a distributed table.

(cherry picked from commit b0ddbbd33a)

 Conflicts:
	src/backend/distributed/planner/multi_explain.c
2020-11-24 12:19:15 +03:00
Onder Kalaci a943696c44 Do not execute subplans multiple times with cursors
Before this commit, we let AdaptiveExecutorPreExecutorRun()
run its pre-scan work again on every FETCH on a cursor.
That does not affect the correctness of the query results,
but it adds significant overhead.
(A condensed sketch of the guard follows this entry.)

(cherry picked from commit c433c66f2b)
2020-11-23 13:43:24 +03:00
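For readers skimming the list, this is the guard the commit above adds, condensed from the adaptive_executor.c diff shown later on this page; the CitusScanState and DistributedPlan types and the helper calls are Citus' own.

/* condensed from the diff below: run pre-scan work only once per execution */
void
AdaptiveExecutorPreExecutorRun(CitusScanState *scanState)
{
	if (scanState->finishedPreScan)
	{
		/*
		 * Cursors (and RETURN QUERY in PL/pgSQL) reach this hook on every
		 * FETCH, but subplans must only be executed once per query.
		 */
		return;
	}

	DistributedPlan *distributedPlan = scanState->distributedPlan;

	LockPartitionsForDistributedPlan(distributedPlan);
	ExecuteSubPlans(distributedPlan);

	scanState->finishedPreScan = true;
}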
Onur Tirtir ca2bbd89b6 Update CHANGELOG for 9.4.2
(cherry picked from commit c7755103f1)
2020-10-21 15:29:24 +03:00
Onur Tirtir 3d2e1a7464 Bump version to 9.4.2 2020-10-21 15:28:12 +03:00
Marco Slot b7b7b66beb Support reference table view in reference table modification 2020-10-16 13:16:16 +02:00
Marco Slot f6e5006dfd Fix a bug that could lead to multiple maintenance daemons 2020-10-16 13:15:51 +02:00
Marco Slot 03e4bec352 Add maintenance daemon error tests 2020-10-16 13:15:39 +02:00
Onur Tirtir 6aac62e847 Update CHANGELOG for 9.4.1
(cherry picked from commit bc29238546)
2020-09-30 10:53:05 +03:00
Onur Tirtir 72c54b5cdd Bump version to 9.4.1 2020-09-30 10:52:31 +03:00
Marco Slot 637d93e8ff Fix EXPLAIN ANALYZE truncation
(cherry picked from commit c9d46c618b)

Conflicts:
	src/test/regress/expected/multi_explain.out
	src/test/regress/sql/multi_explain.sql
2020-09-28 15:49:58 +03:00
SaitTalhaNisanci 7a00c5b83c Not take ShareUpdateExlusiveLock on pg_dist_transaction (#4184)
* Not take ShareUpdateExlusiveLock on pg_dist_transaction

We were taking ShareUpdateExclusiveLock on pg_dist_transaction during
recovery to prevent multiple recoveries from happening concurrently. VACUUM
(not FULL) also takes ShareUpdateExclusiveLock, and the two can conflict. It
seems that VACUUM will skip the table if there is a conflicting lock
already taken, unless it is vacuuming to prevent ID wraparound, in
which case there can be a deadlock. The deadlock presumably happens if:

- VACUUM takes a lock on pg_dist_transaction and is run to prevent ID
wraparound
- The transaction in the maintenance daemon tries to take a lock but
cannot, as that conflicts with the lock acquired by VACUUM
- The transaction in the maintenance daemon has a very old xid, hence
VACUUM cannot proceed.

If we take a row exclusive lock in transaction recovery instead, it doesn't
conflict with VACUUM, so VACUUM can proceed and the deadlock is resolved.
To prevent concurrent transaction recoveries, an advisory lock is taken
in ShareUpdateExclusiveLock mode as before.

* Use CITUS_OPERATIONS tag

(cherry picked from commit e7cd1ed0ee)

 Conflicts:
	src/backend/distributed/transaction/transaction_recovery.c
2020-09-28 11:38:04 +03:00
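A hedged sketch of the locking scheme the commit above describes, written against stock PostgreSQL lock APIs. The actual change lives in transaction_recovery.c and uses Citus' own CITUS_OPERATIONS advisory lock tag; the tag fields and the CITUS_TRANSACTION_RECOVERY key below are illustrative assumptions, not the real Citus definitions.

#include "postgres.h"
#include "miscadmin.h"
#include "storage/lmgr.h"
#include "storage/lock.h"

#define CITUS_TRANSACTION_RECOVERY 1	/* hypothetical advisory lock key */

static void
LockForTransactionRecovery(Oid pgDistTransactionId)
{
	LOCKTAG tag;

	/*
	 * RowExclusiveLock does not conflict with VACUUM's
	 * ShareUpdateExclusiveLock, so an anti-wraparound vacuum of
	 * pg_dist_transaction can always proceed.
	 */
	LockRelationOid(pgDistTransactionId, RowExclusiveLock);

	/*
	 * Concurrent recoveries are still serialized, but via an advisory
	 * lock instead of a lock on the catalog table itself.
	 */
	SET_LOCKTAG_ADVISORY(tag, MyDatabaseId, CITUS_TRANSACTION_RECOVERY, 0, 0);
	(void) LockAcquire(&tag, ShareUpdateExclusiveLock, false, false);
}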
Onur Tirtir 51b7b01a09 Update CHANGELOG for 9.4.0
(cherry picked from commit c7f97a9e01)

 Conflicts:
	CHANGELOG.md
2020-07-28 14:58:05 +03:00
Halil Ozan Akgul 993a402c73 Fixes create index concurrently bug
(cherry picked from commit 38b72ddd66)
2020-07-27 10:32:08 +03:00
SaitTalhaNisanci 39e63f5a08 Fix int32 overflow and use PG macros for INT32_XX (#4061)
* Use CalculateUniformHashRangeIndex in HashPartitionId

The INT32_MIN definition can change across platforms, hence it is
possible to get an overflow; we saw crashes because of this on Debian
distros. We already solved a similar problem by introducing the
CalculateUniformHashRangeIndex method, so we use the same method here,
which also removes some duplication and keeps the decision in a single
place. (A sketch of the overflow-safe computation follows this entry.)

* Use PG_INT32_XX instead of INT32_XX to be safer

(cherry picked from commit ef841115de)
2020-07-27 10:32:08 +03:00
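A sketch of the overflow-safe hash-range computation the commit above refers to. CalculateUniformHashRangeIndex is a real Citus helper, but its exact body is not shown on this page, so the arithmetic below is an assumed illustration of the pattern (widen before subtracting PG_INT32_MIN) rather than the verbatim implementation; PG_INT32_MIN and PG_UINT32_MAX come from PostgreSQL's c.h.

#include "postgres.h"	/* c.h provides PG_INT32_MIN, PG_UINT32_MAX */

static int
UniformHashRangeIndex(int32 hashedValue, int shardCount)
{
	/* 2^32 / shardCount, computed in 64 bits so it never overflows int32 */
	uint64 hashTokenIncrement = ((uint64) PG_UINT32_MAX + 1) / shardCount;

	/* shift into [0, 2^32) first; subtracting INT32_MIN in int32 would overflow */
	uint64 shiftedValue = (uint64) ((int64) hashedValue - PG_INT32_MIN);

	int shardIndex = (int) (shiftedValue / hashTokenIncrement);

	/* integer rounding can push the very last values one slot too far */
	if (shardIndex >= shardCount)
	{
		shardIndex = shardCount - 1;
	}

	return shardIndex;
}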
Halil Ozan AkgĂĽl 2271e9ded1 Fixes the non existing table bug (#4058)
(cherry picked from commit e9f89ed651)
2020-07-27 10:32:08 +03:00
Sait Talha Nisanci 4c90dbbd88 improve error message in secondaries
(cherry picked from commit 6f4686c741467b5c8bd6ca15c0788d8db856392a)
2020-07-21 13:55:12 +03:00
Sait Talha Nisanci 388893ce5e add multi follower repartition tests
(cherry picked from commit 6e5598fd58a1c0c6a597ca06539ac5e286cb6914)
2020-07-21 13:55:08 +03:00
Sait Talha Nisanci 4b98f6c5c2 address feedback
(cherry picked from commit 24043a3602abc7b525f2724a35168e4c45442165)
2020-07-21 13:55:04 +03:00
Sait Talha Nisanci 97dda868a0 use ActiveReadableNodeList in JobExecutorType and task tracker
The reason we should use ActiveReadableNodeList instead of ActiveReadableNonCoordinatorNodeList is that if the coordinator is added to the cluster as a worker, it should be counted as well. Otherwise, if the coordinator is the only node in the cluster, the count will be 0 and we get a warning.

In MultiTaskTrackerExecute, we should connect to coordinator if it is
added to the cluster because it will also be assigned tasks.

(cherry picked from commit ae6180ace2931223c58b87444a9e812f5e9f06e8)
2020-07-21 13:55:00 +03:00
Sait Talha Nisanci 27ef768f36 use ActivePrimaryNodeList to include coordinator
ActiveReadableWorkerNodeList doesn't include the coordinator; however, if
the coordinator is added as a worker, we should also include it while
planning. The current methods are easy to misuse, and this calls for a
refactoring that makes the distinction between methods that include the
coordinator and those that don't explicit, since they can introduce
subtle or major bugs quite easily.

(cherry picked from commit 86b974e4ceddaf5e2c44799148a8cf485c7d90bf)
2020-07-21 13:54:56 +03:00
Sait Talha Nisanci c238e6c8b0 send schema creation/cleanup to coordinator in repartitions
We were using the ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup commands. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even when it is added as a worker to the cluster.

For some context: in repartitions, for each jobId we create
(or at least were supposed to create) a schema on each worker node in the cluster. Then we partition each shard table into some intermediate files; this is called the PARTITION step. After this step, each node has some intermediate files holding tuples. Then we fetch the partition files to the worker nodes that need them; this is called the FETCH step. Then, from the files, we create intermediate tables in the temporarily created schemas; this is called the MERGE step. After evaluating the result, we remove the temporary schemas (one for each job ID on each node) and the files.

If node 1 has file1, and node 2 has file2 after PARTITION step, it is
enough to either move file1 from node1 to node2 or vice versa. So we
prune one of them.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, a behavior
that was originally added for testing.

So when we were not sending schema creation commands for each job ID to
the coordinator (because we were using the ALL_WORKERS flag, which doesn't
include the coordinator), we would basically not have any schemas for
repartitions on the coordinator. The PARTITION step would still be executed on
the coordinator (because the tasks are generated in the planner) and it
wouldn't raise any error, because it doesn't touch the temporary
schemas (that we didn't create). But later, two things
could happen:

- If, by chance, the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because it has all the information.
- If the fetch tasks are not pruned on the coordinator, then in the MERGE
step the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under the public schema (if it is a superuser). But then, if we had the same
task ID with a different jobID, it would fail saying that the table already
exists, which is the error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, hence we would leave the partitioned files from the
PARTITION step behind and ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is renamed to ALL_NON_COORD_WORKERS, and a new flag that
returns all the active nodes is added as ALL_DATA_NODES. For the repartition
case, we don't need the nodes that only hold reference tables, but this
version keeps the code simpler and there shouldn't be any significant
performance impact. (A short sketch of the renamed node-set flags follows
this entry.)

(cherry picked from commit 6532506f4b92b1316eea0812b2bcedb818d3b25c)
2020-07-21 13:54:51 +03:00
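A short sketch of the renamed node-set flags. Only the two values relevant to this page are shown, with the names they actually carry in the diffs further down (NON_COORDINATOR_NODES and ALL_SHARD_NODES), which differ slightly from the preliminary ALL_NON_COORD_WORKERS / ALL_DATA_NODES names used in the commit message; the real enum has additional members that are omitted here.

#include "postgres.h"

/* sketch of the TargetWorkerSet values as used in the diffs below */
typedef enum TargetWorkerSet
{
	NON_COORDINATOR_NODES,	/* every primary node except the coordinator */
	ALL_SHARD_NODES			/* every primary node that can hold shards, coordinator included */
} TargetWorkerSet;

/*
 * Repartition schema creation and cleanup (see the repartition diff below)
 * now target the coordinator-inclusive value, e.g.:
 *
 *   SendCommandToWorkersInParallel(ALL_SHARD_NODES, createSchemasCommand,
 *                                  CitusExtensionOwnerName());
 */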
Sait Talha Nisanci a04e7b233e rename node/worker utilities
The names were not explicit about what they do, and we had many
misuses in the codebase, so they are renamed to be more explicit.

(cherry picked from commit 09962a7e2ff340705b6b193bbfececa2d48e0855)
2020-07-21 13:54:45 +03:00
Sait Talha Nisanci 0bd4002e5f rename TargetWorkerSet enums
Rename the TargetWorkerSet enums to make them more explicit about what they
mean. Ideally it would be good to treat everything as a node, without the
'worker' concept, because that distinction makes things complicated. Another
improvement could be to rename TargetWorkerSet to TargetNodeSet, but that
would mean renaming many occurrences of Worker, which is probably too big
for this PR.

(cherry picked from commit de4b9569359e4f10d4ebf3fbcf7159ee6e2328db)
2020-07-21 13:54:40 +03:00
Nils Dijk 23f24a9668 fix flappy tests due to undeterministic order of test output (#4029)
As reported on #4011 (https://github.com/citusdata/citus/pull/4011/files#r453804702), some of the tests were flapping due to a non-deterministic order of test output.

This PR makes the test output ordered for all tests returning non-zero rows.

Needs to be backported to 9.2, 9.3, 9.4
(cherry picked from commit 23d44eba9f)
2020-07-21 11:01:49 +03:00
Nils Dijk d77e386e92 force aliases in deparsing for queries with anonymous column references (#4011)
DESCRIPTION: Force aliases in deparsing for queries with anonymous column references

Fixes: #3985

The root cause has to do with discrepancies in the query tree we create. I think in the future we should spend some time categorising all the changes we made to ruleutils and see if we can change the `query` data structure we pass to the deparser into an actually valid postgres query for the deparser to render.

For now the fix is to keep track, besides changing the names of the entries in the target list, of whether we have a reference to an anonymous column. If there are anonymous columns, we set the `printaliases` flag to true, which forces the deparser to add the aliases.
(cherry picked from commit 449d1f0e91)
2020-07-21 11:01:49 +03:00
Marco Slot b7b960955c Rename master evaluation to coordinator evaluation
(cherry picked from commit b4fec63bc0)
2020-07-21 11:01:49 +03:00
Hadi Moshayedi 49f130fcd3 Fix Subtransaction memory leak
(cherry picked from commit 3651fc64ee)
2020-07-21 11:01:49 +03:00
Jelte Fennema b8c0e5ef1e Make static analysis happier (#4008)
Some small non-functional changes to make static analysis happy.
(cherry picked from commit 4c68ed4c33)
2020-07-21 11:01:49 +03:00
Jelte Fennema 55eed7f2ec Handle some NULL issues that static analysis found (#4001)
Static analysis found some issues where we used the result from
ExtractResultRelationRTE without checking that it wasn't NULL. It seems
like in all these cases it can never actually be NULL, since we have checked
before that it isn't a SELECT query. So, this PR is mostly to make static
analysis happy (and to protect a bit against future changes of the code).
(A minimal guard sketch follows this entry.)
(cherry picked from commit 759e628dd5)
2020-07-21 11:01:48 +03:00
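A minimal guard sketch for the issue described in the commit above. ExtractResultRelationRTE is Citus' own helper; its signature (taking a Query * and returning a RangeTblEntry *) is assumed here from context, and the wrapper name and error message are illustrative.

#include "postgres.h"
#include "nodes/parsenodes.h"

/* assumed declaration of the Citus helper discussed above */
extern RangeTblEntry * ExtractResultRelationRTE(Query *query);

static RangeTblEntry *
ResultRelationRTEOrError(Query *query)
{
	RangeTblEntry *resultRte = ExtractResultRelationRTE(query);

	if (resultRte == NULL)
	{
		/*
		 * Should be unreachable for non-SELECT queries, but the explicit
		 * check keeps static analysis (and future refactors) honest.
		 */
		ereport(ERROR, (errmsg("query does not have a result relation")));
	}

	return resultRte;
}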
Jelte Fennema 49d23229c4 Fix write queries with const expressions and COLLATE in various places (#3973)
(cherry picked from commit 16242d5264)
2020-07-21 11:01:48 +03:00
Jelte Fennema 48fab6f264 Replace words that have bad associations (#3992)
We had a few words in our codebase that static analysis flagged as having bad
associations.
(cherry picked from commit f6e2f1b1cb)
2020-07-21 11:01:48 +03:00
Jelte Fennema 9a4fddc9c5 Fix crash with single node dummy placement (#3993)
Static analysis found an issue where we could dereference `NULL`, because
`CreateDummyPlacement` could return `NULL` when there were no workers. This
PR changes it so that it never returns `NULL`, which was intended by
@marcocitus when doing this change: https://github.com/citusdata/citus/pull/3887/files#r438136433

While adding tests for citus on a single node I also added some more basic
tests and it turns out we error out on repartition joins. This has been
present since `shouldhaveshards` was introduced and is not trivial to fix.
So I created a separate issue for this: https://github.com/citusdata/citus/issues/3996
(cherry picked from commit ab01571c9e)
2020-07-21 11:01:48 +03:00
Philip Dubé 1d54b8f301 ruleutils: use get_rtable_name for deparsing resultRelation
(cherry picked from commit 444472ffc6)
2020-07-21 11:01:48 +03:00
Hadi Moshayedi 5e648e1a78 Fix task->fetchedExplainAnalyzePlan memory issue.
(cherry picked from commit 23fa421639)
2020-07-21 11:01:48 +03:00
Sait Talha Nisanci fc711af85b Fix explain subplan duration
(cherry picked from commit 4d217819ff)
2020-07-21 11:01:48 +03:00
Hanefi Önaldı 21ca434bef
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values, which is indicated by the GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
default for the affected users and databases is stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
(A condensed sketch of this splitting follows this entry.)
(cherry picked from commit e534dbae4a)
2020-07-21 04:12:39 +03:00
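Condensed from the role.c diff shown later on this page: when the matched GUC has GUC_LIST_INPUT set, the configuration value is split on commas and each item becomes its own string constant in the deparsed SET arguments. makeStringConst is the deparse helper used in that diff; the wrapper function and its name below are illustrative.

#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/guc.h"
#include "utils/guc_tables.h"
#include "utils/varlena.h"

/* makeStringConst() is the deparse helper used in the role.c diff below */

static List *
SetStatementStringArguments(struct config_generic *matchingConfig, char *configurationValue)
{
	List *configurationList = NIL;
	List *args = NIL;
	ListCell *cell = NULL;

	if (matchingConfig->flags & GUC_LIST_INPUT)
	{
		/* SplitIdentifierString scribbles on its input, so split a copy */
		char *configurationValueCopy = pstrdup(configurationValue);
		SplitIdentifierString(configurationValueCopy, ',', &configurationList);
	}
	else
	{
		configurationList = list_make1(configurationValue);
	}

	foreach(cell, configurationList)
	{
		char *configuration = (char *) lfirst(cell);

		/* one string constant node per list item */
		args = lappend(args, makeStringConst(configuration, -1));
	}

	return args;
}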
Onur Tirtir 61ab7006d0
Don't run check-merge-to-enterprise for release branches
(cherry picked from commit 1c6439d1af)
2020-07-17 12:54:23 +03:00
Hanefi Önaldı 3de2d2868d
Introduce new make targets for downgrade scripts
Here are the updated make targets:
- install: installs everything except the downgrade scripts.
- install-downgrades: builds and installs only the downgrade migration scripts.
- install-all: installs everything, including the downgrade migration scripts.

Conflicts:
  src/backend/distributed/Makefile
  src/backend/distributed/sql/downgrades/citus--9.5-1--9.4-1.sql
    - file does not exist on release branch yet, only on master

(cherry picked from commit 315b323d47)
2020-07-17 12:44:16 +03:00
Marco Slot 77b4534c72 Prevent integer overflow in FindShardIntervalIndex 2020-07-16 14:58:53 +02:00
Onder Kalaci 4b493f088b Fix default value of EnableBinaryProtocol
(cherry picked from commit aa8a2866f3)
2020-07-02 14:29:11 +02:00
Onur Tirtir 06c878b348 Bump Citus version to 9.4.0 2020-07-01 11:01:59 +03:00
134 changed files with 3825 additions and 730 deletions

View File

@ -287,7 +287,12 @@ workflows:
version: 2
build_and_test:
jobs:
- check-merge-to-enterprise
- check-merge-to-enterprise:
filters:
branches:
ignore:
- /release-[0-9]+\.[0-9]+.*/ # match with releaseX.Y.*
- build
- check-style
- check-sql-snapshots

View File

@ -1,3 +1,96 @@
### citus v9.4.4 (December 28, 2020) ###
* Fixes a bug that could cause router queries with local tables to be pushed
down
* Fixes a segfault in connection management due to invalid connection hash
entries
* Fixes possible issues that might occur with single shard distributed tables
### citus v9.4.3 (November 24, 2020) ###
* Enables PostgreSQL's parallel queries on EXPLAIN ANALYZE
* Fixes a bug that triggers subplan executions unnecessarily with cursors
### citus v9.4.2 (October 21, 2020) ###
* Fixes a bug that could lead to multiple maintenance daemons
* Fixes an issue preventing views in reference table modifications
### citus v9.4.1 (September 30, 2020) ###
* Fixes EXPLAIN ANALYZE output truncation
* Fixes a deadlock during transaction recovery
### citus v9.4.0 (July 28, 2020) ###
* Improves COPY by honoring max_adaptive_executor_pool_size config
* Adds support for insert into local table select from distributed table
* Adds support to partially push down tdigest aggregates
* Adds support for receiving binary encoded results from workers using
citus.enable_binary_protocol
* Enables joins between local tables and CTEs
* Adds showing query text in EXPLAIN output when explain verbose is true
* Adds support for showing CTE statistics in EXPLAIN ANALYZE
* Adds support for showing amount of data received in EXPLAIN ANALYZE
* Introduces downgrade paths in migration scripts
* Avoids returning incorrect results when changing roles in a transaction
* Fixes `ALTER TABLE IF EXISTS SET SCHEMA` with non-existing table bug
* Fixes `CREATE INDEX CONCURRENTLY` with no index name on a postgres table bug
* Fixes a bug that could cause crashes with certain compile flags
* Fixes a bug with lists of configuration values in ALTER ROLE SET statements
* Fixes a bug that occurs when coordinator is added as a worker node
* Fixes a crash because of overflow in partition id with certain compile flags
* Fixes a crash that may happen if no worker nodes are added
* Fixes a crash that occurs when inserting implicitly coerced constants
* Fixes a crash when aggregating empty tables
* Fixes a memory leak in subtransaction memory handling
* Fixes crash when using rollback to savepoint after cancellation of DML
* Fixes deparsing for queries with anonymous column references
* Fixes distribution of composite types failing to include typemods
* Fixes explain analyze on adaptive executor repartitions
* Fixes possible error throwing in abort handle
* Fixes segfault when evaluating func calls with default params on coordinator
* Fixes several EXPLAIN ANALYZE issues
* Fixes write queries with const expressions and COLLATE in various places
* Fixes wrong cancellation message about distributed deadlocks
* Reports correct INSERT/SELECT method in EXPLAIN
* Disallows triggers on citus tables
### citus v9.3.2 (Jun 22, 2020) ###
* Fixes a version bump issue in 9.3.1

View File

@ -24,6 +24,7 @@ install-headers: extension
$(INSTALL_DATA) $(citus_top_builddir)/src/include/citus_version.h '$(DESTDIR)$(includedir_server)/'
# the rest in the source tree
$(INSTALL_DATA) $(citus_abs_srcdir)/src/include/distributed/*.h '$(DESTDIR)$(includedir_server)/distributed/'
clean-extension:
$(MAKE) -C src/backend/distributed/ clean
clean-full:
@ -31,6 +32,11 @@ clean-full:
.PHONY: extension install-extension clean-extension clean-full
# Add to generic targets
install: install-extension install-headers
install-downgrades:
$(MAKE) -C src/backend/distributed/ install-downgrades
install-all: install-headers
$(MAKE) -C src/backend/distributed/ install-all
clean: clean-extension
# apply or check style
@ -44,4 +50,4 @@ check-style:
check: all install
$(MAKE) -C src/test/regress check-full
.PHONY: all check install clean
.PHONY: all check clean install install-downgrades install-all

configure (vendored)
View File

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 9.4devel.
# Generated by GNU Autoconf 2.69 for Citus 9.4.4.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='9.4devel'
PACKAGE_STRING='Citus 9.4devel'
PACKAGE_VERSION='9.4.4'
PACKAGE_STRING='Citus 9.4.4'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -664,6 +664,7 @@ infodir
docdir
oldincludedir
includedir
runstatedir
localstatedir
sharedstatedir
sysconfdir
@ -740,6 +741,7 @@ datadir='${datarootdir}'
sysconfdir='${prefix}/etc'
sharedstatedir='${prefix}/com'
localstatedir='${prefix}/var'
runstatedir='${localstatedir}/run'
includedir='${prefix}/include'
oldincludedir='/usr/include'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@ -992,6 +994,15 @@ do
| -silent | --silent | --silen | --sile | --sil)
silent=yes ;;
-runstatedir | --runstatedir | --runstatedi | --runstated \
| --runstate | --runstat | --runsta | --runst | --runs \
| --run | --ru | --r)
ac_prev=runstatedir ;;
-runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
| --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
| --run=* | --ru=* | --r=*)
runstatedir=$ac_optarg ;;
-sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
ac_prev=sbindir ;;
-sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@ -1129,7 +1140,7 @@ fi
for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \
datadir sysconfdir sharedstatedir localstatedir includedir \
oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
libdir localedir mandir
libdir localedir mandir runstatedir
do
eval ac_val=\$$ac_var
# Remove trailing slashes.
@ -1242,7 +1253,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 9.4devel to adapt to many kinds of systems.
\`configure' configures Citus 9.4.4 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1282,6 +1293,7 @@ Fine tuning of the installation directories:
--sysconfdir=DIR read-only single-machine data [PREFIX/etc]
--sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com]
--localstatedir=DIR modifiable single-machine data [PREFIX/var]
--runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run]
--libdir=DIR object code libraries [EPREFIX/lib]
--includedir=DIR C header files [PREFIX/include]
--oldincludedir=DIR C header files for non-gcc [/usr/include]
@ -1303,7 +1315,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 9.4devel:";;
short | recursive ) echo "Configuration of Citus 9.4.4:";;
esac
cat <<\_ACEOF
@ -1403,7 +1415,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 9.4devel
Citus configure 9.4.4
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1886,7 +1898,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 9.4devel, which was
It was created by Citus $as_me 9.4.4, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -5055,7 +5067,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 9.4devel, which was
This file was extended by Citus $as_me 9.4.4, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5117,7 +5129,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 9.4devel
Citus config.status 9.4.4
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [9.4devel])
AC_INIT([Citus], [9.4.4])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands

View File

@ -11,7 +11,9 @@ MODULE_big = citus
EXTENSION = citus
template_sql_files = $(patsubst $(citus_abs_srcdir)/%,%,$(wildcard $(citus_abs_srcdir)/sql/*.sql))
template_downgrade_sql_files = $(patsubst $(citus_abs_srcdir)/sql/downgrades/%,%,$(wildcard $(citus_abs_srcdir)/sql/downgrades/*.sql))
generated_sql_files = $(patsubst %,$(citus_abs_srcdir)/build/%,$(template_sql_files))
generated_downgrade_sql_files += $(patsubst %,$(citus_abs_srcdir)/build/sql/%,$(template_downgrade_sql_files))
# All citus--*.sql files that are used to upgrade between versions
DATA_built = $(generated_sql_files)
@ -54,6 +56,20 @@ SQL_BUILDDIR=build/sql
$(generated_sql_files): $(citus_abs_srcdir)/build/%: %
@mkdir -p $(citus_abs_srcdir)/$(SQL_DEPDIR) $(citus_abs_srcdir)/$(SQL_BUILDDIR)
@# -MF is used to store dependency files(.Po) in another directory for separation
@# -MT is used to change the target of the rule emitted by dependency generation.
@# -P is used to inhibit generation of linemarkers in the output from the preprocessor.
@# -undef is used to not predefine any system-specific or GCC-specific macros.
@# `man cpp` for further information
cd $(citus_abs_srcdir) && cpp -undef -w -P -MMD -MP -MF$(SQL_DEPDIR)/$(*F).Po -MT$@ $< > $@
$(generated_downgrade_sql_files): $(citus_abs_srcdir)/build/sql/%: sql/downgrades/%
@mkdir -p $(citus_abs_srcdir)/$(SQL_DEPDIR) $(citus_abs_srcdir)/$(SQL_BUILDDIR)
@# -MF is used to store dependency files(.Po) in another directory for separation
@# -MT is used to change the target of the rule emitted by dependency generation.
@# -P is used to inhibit generation of linemarkers in the output from the preprocessor.
@# -undef is used to not predefine any system-specific or GCC-specific macros.
@# `man cpp` for further information
cd $(citus_abs_srcdir) && cpp -undef -w -P -MMD -MP -MF$(SQL_DEPDIR)/$(*F).Po -MT$@ $< > $@
SQL_Po_files := $(wildcard $(SQL_DEPDIR)/*.Po)
@ -61,7 +77,7 @@ ifneq (,$(SQL_Po_files))
include $(SQL_Po_files)
endif
.PHONY: check-sql-snapshots clean-full
.PHONY: check-sql-snapshots clean-full install install-downgrades install-all
check-sql-snapshots:
bash -c '\
@ -76,6 +92,13 @@ cleanup-before-install:
install: cleanup-before-install
# install and install-downgrades should be run sequentially
install-all: install
make install-downgrades
install-downgrades: $(generated_downgrade_sql_files)
$(INSTALL_DATA) $(generated_downgrade_sql_files) '$(DESTDIR)$(datadir)/$(datamoduledir)/'
clean-full:
make clean
rm -rf $(safestringlib_builddir)

View File

@ -278,7 +278,7 @@ PreprocessDropCollationStmt(Node *node, const char *queryString)
(void *) dropStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -311,7 +311,7 @@ PreprocessAlterCollationOwnerStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -346,7 +346,7 @@ PreprocessRenameCollationStmt(Node *node, const char *queryString)
(void *) renameStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -379,7 +379,7 @@ PreprocessAlterCollationSchemaStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -604,6 +604,6 @@ PostprocessDefineCollationStmt(Node *node, const char *queryString)
MarkObjectDistributed(&collationAddress);
return NodeDDLTaskList(ALL_WORKERS, CreateCollationDDLsIdempotent(
return NodeDDLTaskList(NON_COORDINATOR_NODES, CreateCollationDDLsIdempotent(
collationAddress.objectId));
}

View File

@ -85,7 +85,7 @@ EnsureDependenciesExistOnAllNodes(const ObjectAddress *target)
* either get it now, or get it in master_add_node after this transaction finishes and
* the pg_dist_object record becomes visible.
*/
List *workerNodeList = ActivePrimaryWorkerNodeList(RowShareLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(RowShareLock);
/*
* right after we acquired the lock we mark our objects as distributed, these changes

View File

@ -190,7 +190,7 @@ PostprocessCreateExtensionStmt(Node *node, const char *queryString)
MarkObjectDistributed(&extensionAddress);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -306,7 +306,7 @@ PreprocessDropExtensionStmt(Node *node, const char *queryString)
(void *) deparsedStmt,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -421,7 +421,7 @@ PreprocessAlterExtensionSchemaStmt(Node *node, const char *queryString)
(void *) alterExtensionStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -489,7 +489,7 @@ PreprocessAlterExtensionUpdateStmt(Node *node, const char *queryString)
(void *) alterExtensionStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}

View File

@ -169,7 +169,7 @@ create_distributed_function(PG_FUNCTION_ARGS)
const char *alterFunctionOwnerSQL = GetFunctionAlterOwnerCommand(funcOid);
initStringInfo(&ddlCommand);
appendStringInfo(&ddlCommand, "%s;%s", createFunctionSQL, alterFunctionOwnerSQL);
SendCommandToWorkersAsUser(ALL_WORKERS, CurrentUserName(), ddlCommand.data);
SendCommandToWorkersAsUser(NON_COORDINATOR_NODES, CurrentUserName(), ddlCommand.data);
MarkObjectDistributed(&functionAddress);
@ -1022,7 +1022,7 @@ EnsureSequentialModeForFunctionDDL(void)
static void
TriggerSyncMetadataToPrimaryNodes(void)
{
List *workerList = ActivePrimaryWorkerNodeList(ShareLock);
List *workerList = ActivePrimaryNonCoordinatorNodeList(ShareLock);
bool triggerMetadataSync = false;
WorkerNode *workerNode = NULL;
@ -1192,7 +1192,7 @@ PostprocessCreateFunctionStmt(Node *node, const char *queryString)
GetFunctionAlterOwnerCommand(address.objectId),
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1279,7 +1279,7 @@ PreprocessAlterFunctionStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1312,7 +1312,7 @@ PreprocessRenameFunctionStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1343,7 +1343,7 @@ PreprocessAlterFunctionSchemaStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1375,7 +1375,7 @@ PreprocessAlterFunctionOwnerStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -1477,7 +1477,7 @@ PreprocessDropFunctionStmt(Node *node, const char *queryString)
(void *) dropStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}

View File

@ -411,6 +411,16 @@ PostprocessIndexStmt(Node *node, const char *queryString)
return NIL;
}
/*
* We make sure schema name is not null in the PreprocessIndexStmt
*/
Oid schemaId = get_namespace_oid(indexStmt->relation->schemaname, true);
Oid relationId = get_relname_relid(indexStmt->relation->relname, schemaId);
if (!IsCitusTable(relationId))
{
return NIL;
}
/* commit the current transaction and start anew */
CommitTransactionCommand();
StartTransactionCommand();
@ -418,7 +428,7 @@ PostprocessIndexStmt(Node *node, const char *queryString)
/* get the affected relation and index */
Relation relation = heap_openrv(indexStmt->relation, ShareUpdateExclusiveLock);
Oid indexRelationId = get_relname_relid(indexStmt->idxname,
RelationGetNamespace(relation));
schemaId);
Relation indexRelation = index_open(indexRelationId, RowExclusiveLock);
/* close relations but retain locks */

View File

@ -3663,7 +3663,7 @@ GetLeastUtilisedCopyConnection(List *connectionStateList, char *nodeName,
int nodePort)
{
MultiConnection *connection = NULL;
int minPlacementCount = INT32_MAX;
int minPlacementCount = PG_INT32_MAX;
ListCell *connectionStateCell = NULL;
foreach(connectionStateCell, connectionStateList)

View File

@ -38,11 +38,13 @@
#include "nodes/makefuncs.h"
#include "nodes/parsenodes.h"
#include "nodes/pg_list.h"
#include "parser/scansup.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/guc_tables.h"
#include "utils/guc.h"
#include "utils/rel.h"
#include "utils/varlena.h"
#include "utils/syscache.h"
static const char * ExtractEncryptedPassword(Oid roleOid);
@ -169,7 +171,7 @@ PostprocessAlterRoleStmt(Node *node, const char *queryString)
}
List *commands = list_make1((void *) CreateAlterRoleIfExistsCommand(stmt));
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -210,7 +212,7 @@ PreprocessAlterRoleSetStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commandList);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commandList);
}
@ -410,7 +412,7 @@ MakeVariableSetStmt(const char *config)
VariableSetStmt *variableSetStmt = makeNode(VariableSetStmt);
variableSetStmt->kind = VAR_SET_VALUE;
variableSetStmt->name = name;
variableSetStmt->args = list_make1(MakeSetStatementArgument(name, value));
variableSetStmt->args = MakeSetStatementArguments(name, value);
return variableSetStmt;
}
@ -624,15 +626,15 @@ GetRoleNameFromDbRoleSetting(HeapTuple tuple, TupleDesc DbRoleSettingDescription
/*
* MakeSetStatementArgs parses a configuraton value and creates an A_Const
* with an appropriate type.
* MakeSetStatementArgs parses a configuraton value and creates an List of A_Const
* Nodes with appropriate types.
*
* The allowed A_Const types are Integer, Float, and String.
*/
Node *
MakeSetStatementArgument(char *configurationName, char *configurationValue)
List *
MakeSetStatementArguments(char *configurationName, char *configurationValue)
{
Node *arg = NULL;
List *args = NIL;
char **key = &configurationName;
/* Perform a lookup on GUC variables to find the config type and units.
@ -668,13 +670,15 @@ MakeSetStatementArgument(char *configurationName, char *configurationValue)
int intValue;
parse_int(configurationValue, &intValue,
(*matchingConfig)->flags, NULL);
arg = makeIntConst(intValue, -1);
Node *arg = makeIntConst(intValue, -1);
args = lappend(args, arg);
break;
}
case PGC_REAL:
{
arg = makeFloatConst(configurationValue, -1);
Node *arg = makeFloatConst(configurationValue, -1);
args = lappend(args, arg);
break;
}
@ -682,7 +686,25 @@ MakeSetStatementArgument(char *configurationName, char *configurationValue)
case PGC_STRING:
case PGC_ENUM:
{
arg = makeStringConst(configurationValue, -1);
List *configurationList = NIL;
if ((*matchingConfig)->flags & GUC_LIST_INPUT)
{
char *configurationValueCopy = pstrdup(configurationValue);
SplitIdentifierString(configurationValueCopy, ',',
&configurationList);
}
else
{
configurationList = list_make1(configurationValue);
}
char *configuration = NULL;
foreach_ptr(configuration, configurationList)
{
Node *arg = makeStringConst(configuration, -1);
args = lappend(args, arg);
}
break;
}
@ -696,9 +718,10 @@ MakeSetStatementArgument(char *configurationName, char *configurationValue)
}
else
{
arg = makeStringConst(configurationValue, -1);
Node *arg = makeStringConst(configurationValue, -1);
args = lappend(args, arg);
}
return (Node *) arg;
return args;
}

View File

@ -148,7 +148,7 @@ PreprocessGrantOnSchemaStmt(Node *node, const char *queryString)
stmt->objects = originalObjects;
return NodeDDLTaskList(ALL_WORKERS, list_make1(sql));
return NodeDDLTaskList(NON_COORDINATOR_NODES, list_make1(sql));
}

View File

@ -269,7 +269,10 @@ PostprocessAlterTableSchemaStmt(Node *node, const char *queryString)
AlterObjectSchemaStmt *stmt = castNode(AlterObjectSchemaStmt, node);
Assert(stmt->objectType == OBJECT_TABLE);
ObjectAddress tableAddress = GetObjectAddressFromParseTree((Node *) stmt, false);
/*
* We will let Postgres deal with missing_ok
*/
ObjectAddress tableAddress = GetObjectAddressFromParseTree((Node *) stmt, true);
if (!ShouldPropagate() || !IsCitusTable(tableAddress.objectId))
{
@ -1481,7 +1484,7 @@ AlterTableSchemaStmtObjectAddress(Node *node, bool missing_ok)
if (stmt->relation->schemaname)
{
const char *schemaName = stmt->relation->schemaname;
Oid schemaOid = get_namespace_oid(schemaName, false);
Oid schemaOid = get_namespace_oid(schemaName, missing_ok);
tableOid = get_relname_relid(tableName, schemaOid);
}
else

View File

@ -162,7 +162,7 @@ PreprocessCompositeTypeStmt(Node *node, const char *queryString)
(void *) compositeTypeStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -228,7 +228,7 @@ PreprocessAlterTypeStmt(Node *node, const char *queryString)
(void *) alterTypeStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -271,7 +271,7 @@ PreprocessCreateEnumStmt(Node *node, const char *queryString)
(void *) createEnumStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -362,7 +362,7 @@ PreprocessAlterEnumStmt(Node *node, const char *queryString)
(void *) alterEnumStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -500,7 +500,7 @@ PreprocessDropTypeStmt(Node *node, const char *queryString)
dropStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -534,7 +534,7 @@ PreprocessRenameTypeStmt(Node *node, const char *queryString)
(void *) renameStmtSql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -567,7 +567,7 @@ PreprocessRenameTypeAttributeStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -600,7 +600,7 @@ PreprocessAlterTypeSchemaStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}
@ -657,7 +657,7 @@ PreprocessAlterTypeOwnerStmt(Node *node, const char *queryString)
(void *) sql,
ENABLE_DDL_PROPAGATION);
return NodeDDLTaskList(ALL_WORKERS, commands);
return NodeDDLTaskList(NON_COORDINATOR_NODES, commands);
}

View File

@ -915,7 +915,7 @@ NodeDDLTaskList(TargetWorkerSet targets, List *commands)
{
/*
* if there are no nodes we don't have to plan any ddl tasks. Planning them would
* cause a hang in the executor.
* cause the executor to stop responding.
*/
return NIL;
}

View File

@ -127,7 +127,8 @@ PostprocessVariableSetStmt(VariableSetStmt *setStmt, const char *setStmtString)
/* haven't seen any SET stmts so far in this (sub-)xact: initialize StringInfo */
if (activeSetStmts == NULL)
{
MemoryContext old_context = MemoryContextSwitchTo(CurTransactionContext);
/* see comments in PushSubXact on why we allocate this in TopTransactionContext */
MemoryContext old_context = MemoryContextSwitchTo(TopTransactionContext);
activeSetStmts = makeStringInfo();
MemoryContextSwitchTo(old_context);
}

View File

@ -160,6 +160,12 @@ AfterXactConnectionHandling(bool isCommit)
hash_seq_init(&status, ConnectionHash);
while ((entry = (ConnectionHashEntry *) hash_seq_search(&status)) != 0)
{
if (!entry->isValid)
{
/* skip invalid connection hash entries */
continue;
}
AfterXactHostConnectionHandling(entry, isCommit);
/*
@ -323,11 +329,24 @@ StartNodeUserDatabaseConnection(uint32 flags, const char *hostname, int32 port,
*/
ConnectionHashEntry *entry = hash_search(ConnectionHash, &key, HASH_ENTER, &found);
if (!found)
if (!found || !entry->isValid)
{
/*
* We are just building hash entry or previously it was left in an
* invalid state as we couldn't allocate memory for it.
* So initialize entry->connections list here.
*/
entry->isValid = false;
entry->connections = MemoryContextAlloc(ConnectionContext,
sizeof(dlist_head));
dlist_init(entry->connections);
/*
* If MemoryContextAlloc errors out -e.g. during an OOM-, entry->connections
* stays as NULL. So entry->isValid should be set to true right after we
* initialize entry->connections properly.
*/
entry->isValid = true;
}
/* if desired, check whether there's a usable connection */
@ -474,6 +493,12 @@ CloseAllConnectionsAfterTransaction(void)
hash_seq_init(&status, ConnectionHash);
while ((entry = (ConnectionHashEntry *) hash_seq_search(&status)) != 0)
{
if (!entry->isValid)
{
/* skip invalid connection hash entries */
continue;
}
dlist_iter iter;
dlist_head *connections = entry->connections;
@ -502,6 +527,12 @@ CloseNodeConnectionsAfterTransaction(char *nodeName, int nodePort)
hash_seq_init(&status, ConnectionHash);
while ((entry = (ConnectionHashEntry *) hash_seq_search(&status)) != 0)
{
if (!entry->isValid)
{
/* skip invalid connection hash entries */
continue;
}
dlist_iter iter;
if (strcmp(entry->key.hostname, nodeName) != 0 || entry->key.port != nodePort)
@ -577,6 +608,12 @@ ShutdownAllConnections(void)
hash_seq_init(&status, ConnectionHash);
while ((entry = (ConnectionHashEntry *) hash_seq_search(&status)) != NULL)
{
if (!entry->isValid)
{
/* skip invalid connection hash entries */
continue;
}
dlist_iter iter;
dlist_foreach(iter, entry->connections)
@ -1187,6 +1224,12 @@ FreeConnParamsHashEntryFields(ConnParamsHashEntry *entry)
static void
AfterXactHostConnectionHandling(ConnectionHashEntry *entry, bool isCommit)
{
if (!entry || !entry->isValid)
{
/* callers only pass valid hash entries but let's be on the safe side */
ereport(ERROR, (errmsg("connection hash entry is NULL or invalid")));
}
dlist_mutable_iter iter;
int cachedConnectionCount = 0;

View File

@ -532,8 +532,9 @@ pg_get_tablecolumnoptionsdef_string(Oid tableRelationId)
ereport(ERROR, (errmsg("bad number of tuple descriptor attributes")));
}
AttrNumber natts = tupleDescriptor->natts;
for (AttrNumber attributeIndex = 0;
attributeIndex < (AttrNumber) tupleDescriptor->natts;
attributeIndex < natts;
attributeIndex++)
{
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, attributeIndex);

View File

@ -56,5 +56,5 @@ QualifyVarSetCurrent(VariableSetStmt *setStmt)
char *configValue = GetConfigOptionByName(configurationName, NULL, false);
setStmt->kind = VAR_SET_VALUE;
setStmt->args = list_make1(MakeSetStatementArgument(configurationName, configValue));
setStmt->args = list_make1(MakeSetStatementArguments(configurationName, configValue));
}

View File

@ -966,6 +966,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
int ncolumns;
char **real_colnames;
bool changed_any;
bool has_anonymous;
int noldcolumns;
int i;
int j;
@ -1053,6 +1054,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
*/
noldcolumns = list_length(rte->eref->colnames);
changed_any = false;
has_anonymous = false;
j = 0;
for (i = 0; i < ncolumns; i++)
{
@ -1090,6 +1092,13 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
/* Remember if any assigned aliases differ from "real" name */
if (!changed_any && strcmp(colname, real_colname) != 0)
changed_any = true;
/*
* Remember if there is a reference to an anonymous column as named by
* char * FigureColname(Node *node)
*/
if (!has_anonymous && strcmp(real_colname, "?column?") == 0)
has_anonymous = true;
}
/*
@ -1119,7 +1128,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
else if (rte->alias && rte->alias->colnames != NIL)
colinfo->printaliases = true;
else
colinfo->printaliases = changed_any;
colinfo->printaliases = changed_any || has_anonymous;
}
/*
@ -3036,7 +3045,7 @@ get_insert_query_def(Query *query, deparse_context *context)
/* INSERT requires AS keyword for target alias */
if (rte->alias != NULL)
appendStringInfo(buf, "AS %s ",
quote_identifier(rte->alias->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
/*
* Add the insert-column-names list. Any indirection decoration needed on
@ -3235,7 +3244,7 @@ get_update_query_def(Query *query, deparse_context *context)
if(rte->eref != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->eref->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
else
{
@ -3247,7 +3256,7 @@ get_update_query_def(Query *query, deparse_context *context)
if (rte->alias != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->alias->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
appendStringInfoString(buf, " SET ");
@ -3467,7 +3476,7 @@ get_delete_query_def(Query *query, deparse_context *context)
if(rte->eref != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->eref->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
else
{
@ -3479,7 +3488,7 @@ get_delete_query_def(Query *query, deparse_context *context)
if (rte->alias != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->alias->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
/* Add the USING clause if given */

View File

@ -966,6 +966,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
int ncolumns;
char **real_colnames;
bool changed_any;
bool has_anonymous;
int noldcolumns;
int i;
int j;
@ -1053,6 +1054,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
*/
noldcolumns = list_length(rte->eref->colnames);
changed_any = false;
has_anonymous = false;
j = 0;
for (i = 0; i < ncolumns; i++)
{
@ -1090,6 +1092,13 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
/* Remember if any assigned aliases differ from "real" name */
if (!changed_any && strcmp(colname, real_colname) != 0)
changed_any = true;
/*
* Remember if there is a reference to an anonymous column as named by
* char * FigureColname(Node *node)
*/
if (!has_anonymous && strcmp(real_colname, "?column?") == 0)
has_anonymous = true;
}
/*
@ -1119,7 +1128,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
else if (rte->alias && rte->alias->colnames != NIL)
colinfo->printaliases = true;
else
colinfo->printaliases = changed_any;
colinfo->printaliases = changed_any || has_anonymous;
}
/*
@ -3048,7 +3057,7 @@ get_insert_query_def(Query *query, deparse_context *context)
/* INSERT requires AS keyword for target alias */
if (rte->alias != NULL)
appendStringInfo(buf, "AS %s ",
quote_identifier(rte->alias->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
/*
* Add the insert-column-names list. Any indirection decoration needed on
@ -3247,7 +3256,7 @@ get_update_query_def(Query *query, deparse_context *context)
if(rte->eref != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->eref->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
else
{
@ -3259,7 +3268,7 @@ get_update_query_def(Query *query, deparse_context *context)
if (rte->alias != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->alias->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
appendStringInfoString(buf, " SET ");
@ -3479,7 +3488,7 @@ get_delete_query_def(Query *query, deparse_context *context)
if(rte->eref != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->eref->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
else
{
@ -3491,7 +3500,7 @@ get_delete_query_def(Query *query, deparse_context *context)
if (rte->alias != NULL)
appendStringInfo(buf, " %s",
quote_identifier(rte->alias->aliasname));
quote_identifier(get_rtable_name(query->resultRelation, context)));
}
/* Add the USING clause if given */

View File

@ -442,7 +442,7 @@ struct TaskPlacementExecution;
/* GUC, determining whether Citus opens 1 connection per task */
bool ForceMaxQueryParallelization = false;
int MaxAdaptiveExecutorPoolSize = 16;
bool EnableBinaryProtocol = true;
bool EnableBinaryProtocol = false;
/* GUC, number of ms to wait between opening connections to the same worker */
int ExecutorSlowStartInterval = 10;
@ -656,6 +656,16 @@ static void SetAttributeInputMetadata(DistributedExecution *execution,
void
AdaptiveExecutorPreExecutorRun(CitusScanState *scanState)
{
if (scanState->finishedPreScan)
{
/*
* Cursors (and hence RETURN QUERY syntax in pl/pgsql functions)
* may trigger AdaptiveExecutorPreExecutorRun() on every fetch
* operation. Though, we should only execute PreScan once.
*/
return;
}
DistributedPlan *distributedPlan = scanState->distributedPlan;
/*
@ -666,6 +676,8 @@ AdaptiveExecutorPreExecutorRun(CitusScanState *scanState)
LockPartitionsForDistributedPlan(distributedPlan);
ExecuteSubPlans(distributedPlan);
scanState->finishedPreScan = true;
}
@ -694,13 +706,7 @@ AdaptiveExecutor(CitusScanState *scanState)
Assert(!scanState->finishedRemoteScan);
/* Reset Task fields that are only valid for a single execution */
Task *task = NULL;
foreach_ptr(task, taskList)
{
task->totalReceivedTupleData = 0;
task->fetchedExplainAnalyzePlacementIndex = 0;
task->fetchedExplainAnalyzePlan = NULL;
}
ResetExplainAnalyzeData(taskList);
scanState->tuplestorestate =
tuplestore_begin_heap(randomAccess, interTransactions, work_mem);

View File

@ -297,7 +297,7 @@ CitusBeginReadOnlyScan(CustomScanState *node, EState *estate, int eflags)
*
* TODO: evaluate stable functions
*/
ExecuteMasterEvaluableExpressions(jobQuery, planState);
ExecuteCoordinatorEvaluableExpressions(jobQuery, planState);
/* job query no longer has parameters, so we should not send any */
workerJob->parametersInJobQueryResolved = true;
@ -347,7 +347,7 @@ CitusBeginModifyScan(CustomScanState *node, EState *estate, int eflags)
if (ModifyJobNeedsEvaluation(workerJob))
{
ExecuteMasterEvaluableExpressions(jobQuery, planState);
ExecuteCoordinatorEvaluableExpressions(jobQuery, planState);
/* job query no longer has parameters, so we should not send any */
workerJob->parametersInJobQueryResolved = true;
@ -375,7 +375,7 @@ CitusBeginModifyScan(CustomScanState *node, EState *estate, int eflags)
RegenerateTaskForFasthPathQuery(workerJob);
}
}
else if (workerJob->requiresMasterEvaluation)
else if (workerJob->requiresCoordinatorEvaluation)
{
/*
* When there is no deferred pruning, but we did evaluate functions, then
@ -428,7 +428,7 @@ CitusBeginModifyScan(CustomScanState *node, EState *estate, int eflags)
static bool
ModifyJobNeedsEvaluation(Job *workerJob)
{
if (workerJob->requiresMasterEvaluation)
if (workerJob->requiresCoordinatorEvaluation)
{
/* query contains functions that need to be evaluated on the coordinator */
return true;
@ -575,6 +575,9 @@ AdaptiveExecutorCreateScan(CustomScan *scan)
scanState->customScanState.methods = &AdaptiveExecutorCustomExecMethods;
scanState->PreExecScan = &CitusPreExecScan;
scanState->finishedPreScan = false;
scanState->finishedRemoteScan = false;
return (Node *) scanState;
}
@ -613,6 +616,9 @@ NonPushableInsertSelectCreateScan(CustomScan *scan)
scanState->customScanState.methods =
&NonPushableInsertSelectCustomExecMethods;
scanState->finishedPreScan = false;
scanState->finishedRemoteScan = false;
return (Node *) scanState;
}

View File

@ -136,7 +136,7 @@ broadcast_intermediate_result(PG_FUNCTION_ARGS)
*/
UseCoordinatedTransaction();
List *nodeList = ActivePrimaryWorkerNodeList(NoLock);
List *nodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
EState *estate = CreateExecutorState();
RemoteFileDestReceiver *resultDest =
(RemoteFileDestReceiver *) CreateRemoteFileDestReceiver(resultIdString,

View File

@ -118,8 +118,7 @@ JobExecutorType(DistributedPlan *distributedPlan)
}
else
{
List *workerNodeList = ActiveReadableWorkerNodeList();
int workerNodeCount = list_length(workerNodeList);
int workerNodeCount = list_length(ActiveReadableNodeList());
int taskCount = list_length(job->taskList);
double tasksPerNode = taskCount / ((double) workerNodeCount);

View File

@ -209,7 +209,7 @@ MultiTaskTrackerExecute(Job *job)
* assigning and checking the status of tasks. The second (temporary) hash
* helps us in fetching results data from worker nodes to the master node.
*/
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNodeList(ShareLock);
uint32 taskTrackerCount = (uint32) list_length(workerNodeList);
/* connect as the current user for running queries */

View File

@ -104,7 +104,7 @@ CreateTemporarySchemasForMergeTasks(Job *topLeveLJob)
{
List *jobIds = ExtractJobsInJobTree(topLeveLJob);
char *createSchemasCommand = GenerateCreateSchemasCommand(jobIds, CurrentUserName());
SendCommandToWorkersInParallel(ALL_WORKERS, createSchemasCommand,
SendCommandToWorkersInParallel(ALL_SHARD_NODES, createSchemasCommand,
CitusExtensionOwnerName());
return jobIds;
}
@ -191,7 +191,8 @@ GenerateJobCommands(List *jobIds, char *templateCommand)
void
DoRepartitionCleanup(List *jobIds)
{
SendCommandToWorkersOptionalInParallel(ALL_WORKERS, GenerateDeleteJobsCommand(jobIds),
SendCommandToWorkersOptionalInParallel(ALL_SHARD_NODES, GenerateDeleteJobsCommand(
jobIds),
CitusExtensionOwnerName());
}

View File

@ -22,6 +22,8 @@
#include "executor/executor.h"
#include "utils/datetime.h"
#define SECOND_TO_MILLI_SECOND 1000
#define MICRO_TO_MILLI_SECOND 0.001
int MaxIntermediateResult = 1048576; /* maximum size in KB the intermediate result can grow to */
/* when this is true, we enforce intermediate result size limit in all executors */
@ -86,7 +88,9 @@ ExecuteSubPlans(DistributedPlan *distributedPlan)
int durationMicrosecs = 0;
TimestampDifference(startTimestamp, GetCurrentTimestamp(), &durationSeconds,
&durationMicrosecs);
subPlan->durationMillisecs = durationSeconds * 1000 * +durationMicrosecs * 10e-3;
subPlan->durationMillisecs = durationSeconds * SECOND_TO_MILLI_SECOND;
subPlan->durationMillisecs += durationMicrosecs * MICRO_TO_MILLI_SECOND;
subPlan->bytesSentPerWorker = RemoteFileDestReceiverBytesSent(copyDest);
subPlan->remoteWorkerCount = list_length(remoteWorkerNodeList);

View File

@ -282,7 +282,7 @@ EnsureModificationsCanRun(void)
if (RecoveryInProgress() && !WritableStandbyCoordinator)
{
ereport(ERROR, (errmsg("writing to worker nodes is not currently allowed"),
errdetail("the database is in recovery mode")));
errdetail("the database is read-only")));
}
if (ReadFromSecondaries == USE_SECONDARY_NODES_ALWAYS)
@ -1422,12 +1422,12 @@ HasUniformHashDistribution(ShardInterval **shardIntervalArray,
for (int shardIndex = 0; shardIndex < shardIntervalArrayLength; shardIndex++)
{
ShardInterval *shardInterval = shardIntervalArray[shardIndex];
int32 shardMinHashToken = INT32_MIN + (shardIndex * hashTokenIncrement);
int32 shardMinHashToken = PG_INT32_MIN + (shardIndex * hashTokenIncrement);
int32 shardMaxHashToken = shardMinHashToken + (hashTokenIncrement - 1);
if (shardIndex == (shardIntervalArrayLength - 1))
{
shardMaxHashToken = INT32_MAX;
shardMaxHashToken = PG_INT32_MAX;
}
if (DatumGetInt32(shardInterval->minValue) != shardMinHashToken ||

View File

@ -1254,7 +1254,7 @@ SchemaOwnerName(Oid objectId)
static bool
HasMetadataWorkers(void)
{
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)
@ -1373,7 +1373,7 @@ SyncMetadataToNodes(void)
return METADATA_SYNC_FAILED_LOCK;
}
List *workerList = ActivePrimaryWorkerNodeList(NoLock);
List *workerList = ActivePrimaryNonCoordinatorNodeList(NoLock);
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerList)
{

View File

@ -117,7 +117,7 @@ OpenConnectionsToAllWorkerNodes(LOCKMODE lockMode)
List *connectionList = NIL;
int connectionFlags = FORCE_NEW_CONNECTION;
List *workerNodeList = ActivePrimaryWorkerNodeList(lockMode);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(lockMode);
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)

View File

@ -200,14 +200,14 @@ CreateShardsWithRoundRobinPolicy(Oid distributedTableId, int32 shardCount,
uint32 roundRobinNodeIndex = shardIndex % workerNodeCount;
/* initialize the hash token space for this shard */
int32 shardMinHashToken = INT32_MIN + (shardIndex * hashTokenIncrement);
int32 shardMinHashToken = PG_INT32_MIN + (shardIndex * hashTokenIncrement);
int32 shardMaxHashToken = shardMinHashToken + (hashTokenIncrement - 1);
uint64 shardId = GetNextShardId();
/* if we are at the last shard, make sure the max token value is INT_MAX */
if (shardIndex == (shardCount - 1))
{
shardMaxHashToken = INT32_MAX;
shardMaxHashToken = PG_INT32_MAX;
}
/* insert the shard metadata row along with its min/max values */
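To see how these min/max tokens carve up the 32-bit hash space, here is a hedged standalone sketch (not Citus code) that prints the ranges for four shards. It assumes HASH_TOKEN_COUNT is 2^32, as in Citus, and does the arithmetic in 64-bit only to keep the standalone example free of overflow concerns:

/* illustrative sketch of the uniform hash-range split */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	int shardCount = 4;
	int64_t hashTokenIncrement = (int64_t) (UINT64_C(4294967296) / shardCount);

	for (int shardIndex = 0; shardIndex < shardCount; shardIndex++)
	{
		int64_t shardMinHashToken = (int64_t) INT32_MIN + shardIndex * hashTokenIncrement;
		int64_t shardMaxHashToken = shardMinHashToken + hashTokenIncrement - 1;

		if (shardIndex == shardCount - 1)
		{
			/* the last shard absorbs any remainder, up to INT32_MAX */
			shardMaxHashToken = INT32_MAX;
		}

		printf("shard %d: [%lld, %lld]\n", shardIndex,
		       (long long) shardMinHashToken, (long long) shardMaxHashToken);
	}
	return 0;
}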

View File

@ -457,7 +457,7 @@ master_get_active_worker_nodes(PG_FUNCTION_ARGS)
MemoryContext oldContext = MemoryContextSwitchTo(
functionContext->multi_call_memory_ctx);
List *workerNodeList = ActiveReadableWorkerNodeList();
List *workerNodeList = ActiveReadableNonCoordinatorNodeList();
workerNodeCount = (uint32) list_length(workerNodeList);
functionContext->user_fctx = workerNodeList;

View File

@ -293,12 +293,13 @@ WorkerGetNodeWithName(const char *hostname)
/*
* ActivePrimaryWorkerNodeCount returns the number of groups with a primary in the cluster.
* ActivePrimaryNonCoordinatorNodeCount returns the number of groups with a primary in the cluster.
* This method excludes coordinator even if it is added as a worker to cluster.
*/
uint32
ActivePrimaryWorkerNodeCount(void)
ActivePrimaryNonCoordinatorNodeCount(void)
{
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
uint32 liveWorkerCount = list_length(workerNodeList);
return liveWorkerCount;
@ -306,12 +307,13 @@ ActivePrimaryWorkerNodeCount(void)
/*
* ActiveReadableWorkerNodeCount returns the number of groups with a node we can read from.
* ActiveReadableNonCoordinatorNodeCount returns the number of groups with a node we can read from.
* This method excludes coordinator even if it is added as a worker.
*/
uint32
ActiveReadableWorkerNodeCount(void)
ActiveReadableNonCoordinatorNodeCount(void)
{
List *workerNodeList = ActiveReadableWorkerNodeList();
List *workerNodeList = ActiveReadableNonCoordinatorNodeList();
uint32 liveWorkerCount = list_length(workerNodeList);
return liveWorkerCount;
@ -366,13 +368,14 @@ FilterActiveNodeListFunc(LOCKMODE lockMode, bool (*checkFunction)(WorkerNode *))
/*
* ActivePrimaryWorkerNodeList returns a list of all active primary worker nodes
* ActivePrimaryNonCoordinatorNodeList returns a list of all active primary worker nodes
* in workerNodeHash. lockMode specifies which lock to use on pg_dist_node,
* this is necessary when the caller wouldn't want nodes to be added concurrently
* with their use of this list.
* This method excludes coordinator even if it is added as a worker to cluster.
*/
List *
ActivePrimaryWorkerNodeList(LOCKMODE lockMode)
ActivePrimaryNonCoordinatorNodeList(LOCKMODE lockMode)
{
EnsureModificationsCanRun();
return FilterActiveNodeListFunc(lockMode, NodeIsPrimaryWorker);
@ -443,11 +446,11 @@ NodeCanHaveDistTablePlacements(WorkerNode *node)
/*
* ActiveReadableWorkerNodeList returns a list of all nodes in workerNodeHash
* that are readable workers.
* ActiveReadableNonCoordinatorNodeList returns a list of all nodes in workerNodeHash
* that are readable nodes. This method excludes the coordinator.
*/
List *
ActiveReadableWorkerNodeList(void)
ActiveReadableNonCoordinatorNodeList(void)
{
return FilterActiveNodeListFunc(NoLock, NodeIsReadableWorker);
}
@ -456,6 +459,7 @@ ActiveReadableWorkerNodeList(void)
/*
* ActiveReadableNodeList returns a list of all nodes in workerNodeHash
* that are readable workers.
* This method includes coordinator if it is added as a worker to the cluster.
*/
List *
ActiveReadableNodeList(void)
@ -602,7 +606,7 @@ WorkerNodeCompare(const void *lhsKey, const void *rhsKey, Size keySize)
WorkerNode *
GetFirstPrimaryWorkerNode(void)
{
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
WorkerNode *firstWorkerNode = NULL;
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)

View File

@ -50,7 +50,7 @@ static CustomPathMethods CitusCustomScanPathMethods = {
};
/*
* MasterNodeSelectPlan takes in a distributed plan and a custom scan node which
* PlanCombineQuery takes in a distributed plan and a custom scan node which
* wraps remote part of the plan. This function finds the combine query structure
* in the multi plan, and builds the final select plan to execute on the tuples
* returned by remote scan on the coordinator node. Note that this select
@ -58,7 +58,7 @@ static CustomPathMethods CitusCustomScanPathMethods = {
* filled into the tuple store inside provided custom scan.
*/
PlannedStmt *
MasterNodeSelectPlan(DistributedPlan *distributedPlan, CustomScan *remoteScan)
PlanCombineQuery(DistributedPlan *distributedPlan, CustomScan *remoteScan)
{
Query *combineQuery = distributedPlan->combineQuery;

View File

@ -74,7 +74,7 @@ RebuildQueryStrings(Job *workerJob)
query = copyObject(originalQuery);
RangeTblEntry *copiedInsertRte = ExtractResultRelationRTE(query);
RangeTblEntry *copiedInsertRte = ExtractResultRelationRTEOrError(query);
RangeTblEntry *copiedSubqueryRte = ExtractSelectRangeTableEntry(query);
Query *copiedSubquery = copiedSubqueryRte->subquery;

View File

@ -1368,7 +1368,7 @@ static PlannedStmt *
FinalizeNonRouterPlan(PlannedStmt *localPlan, DistributedPlan *distributedPlan,
CustomScan *customScan)
{
PlannedStmt *finalPlan = MasterNodeSelectPlan(distributedPlan, customScan);
PlannedStmt *finalPlan = PlanCombineQuery(distributedPlan, customScan);
finalPlan->queryId = localPlan->queryId;
finalPlan->utilityStmt = localPlan->utilityStmt;

View File

@ -275,7 +275,7 @@ CreateDistributedInsertSelectPlan(Query *originalQuery,
uint32 taskIdIndex = 1; /* 0 is reserved for invalid taskId */
uint64 jobId = INVALID_JOB_ID;
DistributedPlan *distributedPlan = CitusMakeNode(DistributedPlan);
RangeTblEntry *insertRte = ExtractResultRelationRTE(originalQuery);
RangeTblEntry *insertRte = ExtractResultRelationRTEOrError(originalQuery);
RangeTblEntry *subqueryRte = ExtractSelectRangeTableEntry(originalQuery);
Oid targetRelationId = insertRte->relid;
CitusTableCacheEntry *targetCacheEntry = GetCitusTableCacheEntry(targetRelationId);
@ -348,7 +348,8 @@ CreateDistributedInsertSelectPlan(Query *originalQuery,
workerJob->dependentJobList = NIL;
workerJob->jobId = jobId;
workerJob->jobQuery = originalQuery;
workerJob->requiresMasterEvaluation = RequiresMasterEvaluation(originalQuery);
workerJob->requiresCoordinatorEvaluation =
RequiresCoordinatorEvaluation(originalQuery);
/* and finally the multi plan */
distributedPlan->workerJob = workerJob;
@ -648,7 +649,7 @@ RouterModifyTaskForShardInterval(Query *originalQuery,
DeferredErrorMessage **routerPlannerError)
{
Query *copiedQuery = copyObject(originalQuery);
RangeTblEntry *copiedInsertRte = ExtractResultRelationRTE(copiedQuery);
RangeTblEntry *copiedInsertRte = ExtractResultRelationRTEOrError(copiedQuery);
RangeTblEntry *copiedSubqueryRte = ExtractSelectRangeTableEntry(copiedQuery);
Query *copiedSubquery = (Query *) copiedSubqueryRte->subquery;
@ -1343,7 +1344,7 @@ CreateNonPushableInsertSelectPlan(uint64 planId, Query *parse, ParamListInfo bou
Query *insertSelectQuery = copyObject(parse);
RangeTblEntry *selectRte = ExtractSelectRangeTableEntry(insertSelectQuery);
RangeTblEntry *insertRte = ExtractResultRelationRTE(insertSelectQuery);
RangeTblEntry *insertRte = ExtractResultRelationRTEOrError(insertSelectQuery);
Oid targetRelationId = insertRte->relid;
DistributedPlan *distributedPlan = CitusMakeNode(DistributedPlan);

View File

@ -151,7 +151,7 @@ RecordSubplanExecutionsOnNodes(HTAB *intermediateResultsHash,
List *usedSubPlanNodeList = distributedPlan->usedSubPlanNodeList;
List *subPlanList = distributedPlan->subPlanList;
ListCell *subPlanCell = NULL;
int workerNodeCount = ActiveReadableWorkerNodeCount();
int workerNodeCount = ActiveReadableNonCoordinatorNodeCount();
foreach(subPlanCell, usedSubPlanNodeList)
{
@ -269,7 +269,7 @@ AppendAllAccessedWorkerNodes(IntermediateResultsHashEntry *entry,
static void
AppendAllWorkerNodes(IntermediateResultsHashEntry *entry)
{
List *workerNodeList = ActiveReadableWorkerNodeList();
List *workerNodeList = ActiveReadableNonCoordinatorNodeList();
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)

View File

@ -190,7 +190,7 @@ IsLocalPlanCachingSupported(Job *currentJob, DistributedPlan *originalDistribute
* We do not cache plans with volatile functions in the query.
*
* The reason we care about volatile functions is primarily that we
* already executed them in ExecuteMasterEvaluableExpressions
* already executed them in ExecuteCoordinatorEvaluableExpressions
* and since we're falling back to the original query tree here we would
* execute them again if we execute the plan.
*/

View File

@ -147,7 +147,7 @@ static void ExplainAnalyzeDestPutTuple(TupleDestination *self, Task *task,
static TupleDesc ExplainAnalyzeDestTupleDescForQuery(TupleDestination *self, int
queryNumber);
static char * WrapQueryForExplainAnalyze(const char *queryString, TupleDesc tupleDesc);
static List * SplitString(const char *str, char delimiter);
static List * SplitString(const char *str, char delimiter, int maxLength);
/* Static Explain functions copied from explain.c */
static void ExplainOneQuery(Query *query, int cursorOptions,
@ -576,8 +576,11 @@ GetSavedRemoteExplain(Task *task, ExplainState *es)
*/
if (es->format == EXPLAIN_FORMAT_TEXT)
{
/*
* We limit the size of EXPLAIN plans to RSIZE_MAX_MEM (256MB).
*/
remotePlan->explainOutputList = SplitString(task->fetchedExplainAnalyzePlan,
'\n');
'\n', RSIZE_MAX_MEM);
}
else
{
@ -957,7 +960,7 @@ worker_save_query_explain_analyze(PG_FUNCTION_ARGS)
INSTR_TIME_SET_CURRENT(planStart);
PlannedStmt *plan = pg_plan_query(query, 0, NULL);
PlannedStmt *plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, NULL);
INSTR_TIME_SET_CURRENT(planDuration);
INSTR_TIME_SUBTRACT(planDuration, planStart);
@ -1194,8 +1197,26 @@ ExplainAnalyzeDestPutTuple(TupleDestination *self, Task *task,
char *fetchedExplainAnalyzePlan = TextDatumGetCString(explainAnalyze);
/*
* Allocate fetchedExplainAnalyzePlan in the same context as the Task, since we are
* currently in execution context and a Task can span multiple executions.
*
* Although we won't reuse the same value in a future execution, we do have
* calls to CheckNodeCopyAndSerialization() which assert that the copy functions
* of the task work as expected, and those will try to copy this value in a
* future execution.
*
* Why don't we just allocate this field in the executor context and reset it
* before the next execution? Because when an error is raised we can skip
* pretty much all of the places where we could insert the reset.
*
* TODO: Take all EXPLAIN ANALYZE related fields out of Task and store them in a
* Task to ExplainAnalyzePrivate mapping in multi_explain.c, so we don't need to
* do these hacky memory context management tricks.
*/
MemoryContext taskContext = GetMemoryChunkContext(tupleDestination->originalTask);
tupleDestination->originalTask->fetchedExplainAnalyzePlan =
pstrdup(fetchedExplainAnalyzePlan);
MemoryContextStrdup(taskContext, fetchedExplainAnalyzePlan);
tupleDestination->originalTask->fetchedExplainAnalyzePlacementIndex =
placementIndex;
}
@ -1207,6 +1228,27 @@ ExplainAnalyzeDestPutTuple(TupleDestination *self, Task *task,
}
/*
* ResetExplainAnalyzeData resets the fields in Task that are used by multi_explain.c
*/
void
ResetExplainAnalyzeData(List *taskList)
{
Task *task = NULL;
foreach_ptr(task, taskList)
{
if (task->fetchedExplainAnalyzePlan != NULL)
{
pfree(task->fetchedExplainAnalyzePlan);
}
task->totalReceivedTupleData = 0;
task->fetchedExplainAnalyzePlacementIndex = 0;
task->fetchedExplainAnalyzePlan = NULL;
}
}
/*
* ExplainAnalyzeDestTupleDescForQuery implements TupleDestination->tupleDescForQuery
* for ExplainAnalyzeDestination.
@ -1353,9 +1395,9 @@ WrapQueryForExplainAnalyze(const char *queryString, TupleDesc tupleDesc)
* it isn't safe if by any chance str is not null-terminated.
*/
static List *
SplitString(const char *str, char delimiter)
SplitString(const char *str, char delimiter, int maxLength)
{
size_t len = strnlen_s(str, RSIZE_MAX_STR);
size_t len = strnlen(str, maxLength);
if (len == 0)
{
return NIL;
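The point of passing an explicit bound is that strnlen never reads past maxLength bytes even if the plan text is somehow not null-terminated. A hedged standalone sketch of the same idea, assuming POSIX strnlen (illustrative names, not the Citus SplitString implementation):

/* print a '\n'-delimited string line by line without reading past maxLength */
#include <stdio.h>
#include <string.h>

static void
PrintLines(const char *str, size_t maxLength)
{
	size_t len = strnlen(str, maxLength);

	size_t start = 0;
	for (size_t i = 0; i <= len; i++)
	{
		if (i == len || str[i] == '\n')
		{
			printf("%.*s\n", (int) (i - start), str + start);
			start = i + 1;
		}
	}
}

int
main(void)
{
	PrintLines("Seq Scan on t\n  Filter: (a = 1)", 1024);
	return 0;
}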

View File

@ -165,7 +165,7 @@ static Task * QueryPushdownTaskCreate(Query *originalQuery, int shardIndex,
RelationRestrictionContext *restrictionContext,
uint32 taskId,
TaskType taskType,
bool modifyRequiresMasterEvaluation);
bool modifyRequiresCoordinatorEvaluation);
static bool ShardIntervalsEqual(FmgrInfo *comparisonFunction,
Oid collation,
ShardInterval *firstInterval,
@ -2015,7 +2015,7 @@ BuildJob(Query *jobQuery, List *dependentJobList)
job->jobId = UniqueJobId();
job->jobQuery = jobQuery;
job->dependentJobList = dependentJobList;
job->requiresMasterEvaluation = false;
job->requiresCoordinatorEvaluation = false;
return job;
}
@ -2107,7 +2107,7 @@ BuildMapMergeJob(Query *jobQuery, List *dependentJobList, Var *partitionKey,
static uint32
HashPartitionCount(void)
{
uint32 groupCount = ActiveReadableWorkerNodeCount();
uint32 groupCount = list_length(ActiveReadableNodeList());
double maxReduceTasksPerNode = MaxRunningTasksPerNode / 2.0;
uint32 partitionCount = (uint32) rint(groupCount * maxReduceTasksPerNode);
@ -2289,7 +2289,7 @@ List *
QueryPushdownSqlTaskList(Query *query, uint64 jobId,
RelationRestrictionContext *relationRestrictionContext,
List *prunedRelationShardList, TaskType taskType, bool
modifyRequiresMasterEvaluation)
modifyRequiresCoordinatorEvaluation)
{
List *sqlTaskList = NIL;
ListCell *restrictionCell = NULL;
@ -2393,7 +2393,7 @@ QueryPushdownSqlTaskList(Query *query, uint64 jobId,
relationRestrictionContext,
taskIdIndex,
taskType,
modifyRequiresMasterEvaluation);
modifyRequiresCoordinatorEvaluation);
subqueryTask->jobId = jobId;
sqlTaskList = lappend(sqlTaskList, subqueryTask);
@ -2570,7 +2570,7 @@ ErrorIfUnsupportedShardDistribution(Query *query)
static Task *
QueryPushdownTaskCreate(Query *originalQuery, int shardIndex,
RelationRestrictionContext *restrictionContext, uint32 taskId,
TaskType taskType, bool modifyRequiresMasterEvaluation)
TaskType taskType, bool modifyRequiresCoordinatorEvaluation)
{
Query *taskQuery = copyObject(originalQuery);
@ -2672,7 +2672,7 @@ QueryPushdownTaskCreate(Query *originalQuery, int shardIndex,
Task *subqueryTask = CreateBasicTask(jobId, taskId, taskType, NULL);
if ((taskType == MODIFY_TASK && !modifyRequiresMasterEvaluation) ||
if ((taskType == MODIFY_TASK && !modifyRequiresCoordinatorEvaluation) ||
taskType == READ_TASK)
{
pg_get_query_def(taskQuery, queryString);
@ -4599,7 +4599,7 @@ GenerateSyntheticShardIntervalArray(int partitionCount)
ShardInterval *shardInterval = CitusMakeNode(ShardInterval);
/* calculate the split of the hash space */
int32 shardMinHashToken = INT32_MIN + (shardIndex * hashTokenIncrement);
int32 shardMinHashToken = PG_INT32_MIN + (shardIndex * hashTokenIncrement);
int32 shardMaxHashToken = shardMinHashToken + (hashTokenIncrement - 1);
shardInterval->relationId = InvalidOid;
@ -5717,7 +5717,7 @@ AssignDualHashTaskList(List *taskList)
* if subsequent jobs have a small number of tasks, we won't allocate the
* tasks to the same worker repeatedly.
*/
List *workerNodeList = ActiveReadableWorkerNodeList();
List *workerNodeList = ActiveReadableNodeList();
uint32 workerNodeCount = (uint32) list_length(workerNodeList);
uint32 beginningNodeIndex = jobId % workerNodeCount;

View File

@ -45,6 +45,7 @@
#include "distributed/citus_ruleutils.h"
#include "distributed/query_pushdown_planning.h"
#include "distributed/query_utils.h"
#include "distributed/recursive_planning.h"
#include "distributed/reference_table_utils.h"
#include "distributed/relation_restriction_equivalence.h"
#include "distributed/relay_utility.h"
@ -156,6 +157,7 @@ static DeferredErrorMessage * MultiRouterPlannableQuery(Query *query);
static DeferredErrorMessage * ErrorIfQueryHasUnroutableModifyingCTE(Query *queryTree);
static bool SelectsFromDistributedTable(List *rangeTableList, Query *query);
static ShardPlacement * CreateDummyPlacement(bool hasLocalRelation);
static ShardPlacement * CreateLocalDummyPlacement();
static List * get_all_actual_clauses(List *restrictinfo_list);
static int CompareInsertValuesByShardId(const void *leftElement,
const void *rightElement);
@ -507,7 +509,9 @@ ResultRelationOidForQuery(Query *query)
/*
* ExtractResultRelationRTE returns the table's resultRelation range table entry.
* ExtractResultRelationRTE returns the table's resultRelation range table
* entry. This returns NULL when there's no resultRelation, such as in a SELECT
* query.
*/
RangeTblEntry *
ExtractResultRelationRTE(Query *query)
@ -521,6 +525,28 @@ ExtractResultRelationRTE(Query *query)
}
/*
* ExtractResultRelationRTEOrError returns the table's resultRelation range table
* entry and errors out if there's no result relation at all, e.g. in a
* SELECT query.
*
* This is a separate function (instead of using missingOk), so static analysis
* reasons about NULL returns correctly.
*/
RangeTblEntry *
ExtractResultRelationRTEOrError(Query *query)
{
RangeTblEntry *relation = ExtractResultRelationRTE(query);
if (relation == NULL)
{
ereport(ERROR, (errmsg("no result relation could be found for the query"),
errhint("is this a SELECT query?")));
}
return relation;
}
/*
* IsTidColumn gets a node and returns true if the node is a Var type of TID.
*/
@ -1302,9 +1328,9 @@ MasterIrreducibleExpressionWalker(Node *expression, WalkerState *state)
/*
* In order for statement replication to give us consistent results it's important
* that we either disallow or evaluate on the master anything which has a volatility
* category above IMMUTABLE. Newer versions of postgres might add node types which
* should be checked in this function.
* that we either disallow or evaluate on the coordinator anything which has a
* volatility category above IMMUTABLE. Newer versions of postgres might add node
* types which should be checked in this function.
*
* Look through contain_mutable_functions_walker or future PG's equivalent for new
* node types before bumping this version number to fix compilation; e.g. for any
@ -1451,7 +1477,7 @@ RouterInsertJob(Query *originalQuery)
}
Job *job = CreateJob(originalQuery);
job->requiresMasterEvaluation = RequiresMasterEvaluation(originalQuery);
job->requiresCoordinatorEvaluation = RequiresCoordinatorEvaluation(originalQuery);
job->deferredPruning = true;
job->partitionKeyValue = ExtractInsertPartitionKeyValue(originalQuery);
@ -1471,7 +1497,7 @@ CreateJob(Query *query)
job->taskList = NIL;
job->dependentJobList = NIL;
job->subqueryPushdown = false;
job->requiresMasterEvaluation = false;
job->requiresCoordinatorEvaluation = false;
job->deferredPruning = false;
return job;
@ -1625,8 +1651,8 @@ RouterJob(Query *originalQuery, PlannerRestrictionContext *plannerRestrictionCon
/* router planner should create task even if it doesn't hit a shard at all */
bool replacePrunedQueryWithDummy = true;
/* check if this query requires master evaluation */
bool requiresMasterEvaluation = RequiresMasterEvaluation(originalQuery);
/* check if this query requires coordinator evaluation */
bool requiresCoordinatorEvaluation = RequiresCoordinatorEvaluation(originalQuery);
FastPathRestrictionContext *fastPathRestrictionContext =
plannerRestrictionContext->fastPathRestrictionContext;
@ -1688,7 +1714,7 @@ RouterJob(Query *originalQuery, PlannerRestrictionContext *plannerRestrictionCon
relationRestrictionContext,
prunedShardIntervalListList,
MODIFY_TASK,
requiresMasterEvaluation);
requiresCoordinatorEvaluation);
}
else
{
@ -1696,7 +1722,7 @@ RouterJob(Query *originalQuery, PlannerRestrictionContext *plannerRestrictionCon
placementList, shardId);
}
job->requiresMasterEvaluation = requiresMasterEvaluation;
job->requiresCoordinatorEvaluation = requiresCoordinatorEvaluation;
return job;
}
@ -1979,6 +2005,16 @@ SelectsFromDistributedTable(List *rangeTableList, Query *query)
continue;
}
if (rangeTableEntry->relkind == RELKIND_VIEW ||
rangeTableEntry->relkind == RELKIND_MATVIEW)
{
/*
* Skip over views, which would error out in GetCitusTableCacheEntry.
* Distributed tables within (regular) views are already in rangeTableList.
*/
continue;
}
CitusTableCacheEntry *cacheEntry = GetCitusTableCacheEntry(
rangeTableEntry->relid);
if (cacheEntry->partitionMethod != DISTRIBUTE_BY_NONE &&
@ -2022,8 +2058,6 @@ PlanRouterQuery(Query *originalQuery,
bool replacePrunedQueryWithDummy, bool *multiShardModifyQuery,
Const **partitionValueConst)
{
RelationRestrictionContext *relationRestrictionContext =
plannerRestrictionContext->relationRestrictionContext;
bool isMultiShardQuery = false;
DeferredErrorMessage *planningError = NULL;
bool shardsPresent = false;
@ -2136,7 +2170,12 @@ PlanRouterQuery(Query *originalQuery,
/* we need anchor shard id for select queries with router planner */
uint64 shardId = GetAnchorShardId(*prunedShardIntervalListList);
bool hasLocalRelation = relationRestrictionContext->hasLocalRelation;
/*
* We keep track of hasLocalRelation in plannerRestrictionContext->
* relationRestrictionContext, but in rare cases tables are excluded from
* there (e.g. a catalog table on the inside of an inner join). So we recheck.
*/
bool hasLocalRelation = FindNodeCheck((Node *) originalQuery, IsLocalTableRTE);
List *taskPlacementList =
CreateTaskPlacementListForShardIntervals(*prunedShardIntervalListList,
@ -2152,10 +2191,11 @@ PlanRouterQuery(Query *originalQuery,
}
/*
* If this is an UPDATE or DELETE query which requires master evaluation,
* If this is an UPDATE or DELETE query which requires coordinator evaluation,
* don't try update shard names, and postpone that to execution phase.
*/
if (!(UpdateOrDeleteQuery(originalQuery) && RequiresMasterEvaluation(originalQuery)))
bool isUpdateOrDelete = UpdateOrDeleteQuery(originalQuery);
if (!(isUpdateOrDelete && RequiresCoordinatorEvaluation(originalQuery)))
{
UpdateRelationToShardNames((Node *) originalQuery, *relationShardList);
}
@ -2232,6 +2272,25 @@ CreateTaskPlacementListForShardIntervals(List *shardIntervalListList, bool shard
}
/*
* CreateLocalDummyPlacement creates a dummy placement for the local node that
* can be used for queries that don't involve any shards. The typical examples
* are:
* (a) queries that consist of only intermediate results
* (b) queries that hit zero shards (... WHERE false;)
*/
static ShardPlacement *
CreateLocalDummyPlacement()
{
ShardPlacement *dummyPlacement = CitusMakeNode(ShardPlacement);
dummyPlacement->nodeId = LOCAL_NODE_ID;
dummyPlacement->nodeName = LOCAL_HOST_NAME;
dummyPlacement->nodePort = PostPortNumber;
dummyPlacement->groupId = GetLocalGroupId();
return dummyPlacement;
}
/*
* CreateDummyPlacement creates a dummy placement that can be used for queries
* that don't involve any shards. The typical examples are:
@ -2248,31 +2307,32 @@ static ShardPlacement *
CreateDummyPlacement(bool hasLocalRelation)
{
static uint32 zeroShardQueryRoundRobin = 0;
if (TaskAssignmentPolicy != TASK_ASSIGNMENT_ROUND_ROBIN || hasLocalRelation)
{
return CreateLocalDummyPlacement();
}
List *workerNodeList = ActiveReadableNonCoordinatorNodeList();
if (workerNodeList == NIL)
{
/*
* We want to round-robin over the workers, but there are no workers.
* To make sure the query can still succeed we fall back to returning
* a local dummy placement.
*/
return CreateLocalDummyPlacement();
}
int workerNodeCount = list_length(workerNodeList);
int workerNodeIndex = zeroShardQueryRoundRobin % workerNodeCount;
WorkerNode *workerNode = (WorkerNode *) list_nth(workerNodeList,
workerNodeIndex);
ShardPlacement *dummyPlacement = CitusMakeNode(ShardPlacement);
SetPlacementNodeMetadata(dummyPlacement, workerNode);
if (TaskAssignmentPolicy == TASK_ASSIGNMENT_ROUND_ROBIN && !hasLocalRelation)
{
List *workerNodeList = ActiveReadableWorkerNodeList();
if (workerNodeList == NIL)
{
return NULL;
}
int workerNodeCount = list_length(workerNodeList);
int workerNodeIndex = zeroShardQueryRoundRobin % workerNodeCount;
WorkerNode *workerNode = (WorkerNode *) list_nth(workerNodeList,
workerNodeIndex);
SetPlacementNodeMetadata(dummyPlacement, workerNode);
zeroShardQueryRoundRobin++;
}
else
{
dummyPlacement->nodeId = LOCAL_NODE_ID;
dummyPlacement->nodeName = LOCAL_HOST_NAME;
dummyPlacement->nodePort = PostPortNumber;
dummyPlacement->groupId = GetLocalGroupId();
}
zeroShardQueryRoundRobin++;
return dummyPlacement;
}
@ -2654,7 +2714,6 @@ BuildRoutesForInsert(Query *query, DeferredErrorMessage **planningError)
Oid distributedTableId = ExtractFirstCitusTableId(query);
CitusTableCacheEntry *cacheEntry = GetCitusTableCacheEntry(distributedTableId);
char partitionMethod = cacheEntry->partitionMethod;
uint32 rangeTableId = 1;
List *modifyRouteList = NIL;
ListCell *insertValuesCell = NULL;
@ -2692,7 +2751,7 @@ BuildRoutesForInsert(Query *query, DeferredErrorMessage **planningError)
return modifyRouteList;
}
Var *partitionColumn = PartitionColumn(distributedTableId, rangeTableId);
Var *partitionColumn = cacheEntry->partitionColumn;
/* get full list of insert values and iterate over them to prune */
List *insertValuesList = ExtractInsertValuesList(query, partitionColumn);
@ -2701,8 +2760,38 @@ BuildRoutesForInsert(Query *query, DeferredErrorMessage **planningError)
{
InsertValues *insertValues = (InsertValues *) lfirst(insertValuesCell);
List *prunedShardIntervalList = NIL;
Expr *partitionValueExpr = (Expr *) strip_implicit_coercions(
(Node *) insertValues->partitionValueExpr);
Node *partitionValueExpr = (Node *) insertValues->partitionValueExpr;
/*
* We only support constant partition values at this point. Sometimes
* they are wrapped in an implicit coercion though. Most notably
* FuncExpr coercions for casts created with CREATE CAST ... WITH
* FUNCTION .. AS IMPLICIT. To support these, we first strip them here.
* Then we do the coercion manually below using
* TransformPartitionRestrictionValue, if the types are not the same.
*
* NOTE: eval_const_expressions below would do some of these removals
* too, but it's unclear if it would do all of them. It is possible
* that there are no cases where this strip_implicit_coercions call is
* really necessary at all, but currently that's hard to rule out.
* So to be on the safe side we call strip_implicit_coercions too, to
* be sure we support as much as possible.
*/
partitionValueExpr = strip_implicit_coercions(partitionValueExpr);
/*
* By evaluating constant expressions an expression such as 2 + 4
* will become const 6. That way we can use them as a partition column
* value. Normally the planner evaluates constant expressions, but we
* may be working on the original query tree here. So we do it here
* explicitly before checking that the partition value is a const.
*
* NOTE: We do not use expression_planner here, since all it does
* apart from calling eval_const_expressions is call fix_opfuncids.
* This is not needed here, since it's a no-op for T_Const nodes and we
* error out below in all other cases.
*/
partitionValueExpr = eval_const_expressions(NULL, partitionValueExpr);
if (!IsA(partitionValueExpr, Const))
{
@ -2719,21 +2808,20 @@ BuildRoutesForInsert(Query *query, DeferredErrorMessage **planningError)
"column")));
}
/* actually do the coercions that we skipped before, if fails throw an
* error */
if (partitionValueConst->consttype != partitionColumn->vartype)
{
bool missingOk = false;
partitionValueConst =
TransformPartitionRestrictionValue(partitionColumn,
partitionValueConst,
missingOk);
}
if (partitionMethod == DISTRIBUTE_BY_HASH || partitionMethod ==
DISTRIBUTE_BY_RANGE)
{
Var *distributionKey = cacheEntry->partitionColumn;
/* handle coercions; if this fails, throw an error */
if (partitionValueConst->consttype != distributionKey->vartype)
{
bool missingOk = false;
partitionValueConst =
TransformPartitionRestrictionValue(distributionKey,
partitionValueConst,
missingOk);
}
Datum partitionValue = partitionValueConst->constvalue;
ShardInterval *shardInterval = FindShardInterval(partitionValue, cacheEntry);

View File

@ -168,7 +168,6 @@ static bool ShouldRecursivelyPlanSetOperation(Query *query,
RecursivePlanningContext *context);
static void RecursivelyPlanSetOperations(Query *query, Node *node,
RecursivePlanningContext *context);
static bool IsLocalTableRTE(Node *node);
static void RecursivelyPlanSubquery(Query *subquery,
RecursivePlanningContext *planningContext);
static DistributedSubPlan * CreateDistributedSubPlan(uint32 subPlanId,
@ -1060,7 +1059,7 @@ RecursivelyPlanSetOperations(Query *query, Node *node,
* is a range table relation entry that points to a local
* relation (i.e., not a distributed relation).
*/
static bool
bool
IsLocalTableRTE(Node *node)
{
if (node == NULL)
@ -1440,7 +1439,8 @@ TransformFunctionRTE(RangeTblEntry *rangeTblEntry)
{
ereport(ERROR, (errmsg("bad number of tuple descriptor attributes")));
}
for (targetColumnIndex = 0; targetColumnIndex < (AttrNumber) tupleDesc->natts;
AttrNumber natts = tupleDesc->natts;
for (targetColumnIndex = 0; targetColumnIndex < natts;
targetColumnIndex++)
{
FormData_pg_attribute *attribute = TupleDescAttr(tupleDesc,

View File

@ -217,8 +217,8 @@ create_monolithic_shard_row(PG_FUNCTION_ARGS)
StringInfo maxInfo = makeStringInfo();
uint64 newShardId = GetNextShardId();
appendStringInfo(minInfo, "%d", INT32_MIN);
appendStringInfo(maxInfo, "%d", INT32_MAX);
appendStringInfo(minInfo, "%d", PG_INT32_MIN);
appendStringInfo(maxInfo, "%d", PG_INT32_MAX);
text *minInfoText = cstring_to_text(minInfo->data);
text *maxInfoText = cstring_to_text(maxInfo->data);

View File

@ -75,7 +75,7 @@ wait_until_metadata_sync(PG_FUNCTION_ARGS)
{
uint32 timeout = PG_GETARG_UINT32(0);
List *workerList = ActivePrimaryWorkerNodeList(NoLock);
List *workerList = ActivePrimaryNonCoordinatorNodeList(NoLock);
bool waitNotifications = false;
WorkerNode *workerNode = NULL;

View File

@ -0,0 +1,56 @@
/*-------------------------------------------------------------------------
*
* xact_stats.c
*
* This file contains functions to provide helper UDFs for testing transaction
* statistics.
*
* Copyright (c) Citus Data, Inc.
*
*-------------------------------------------------------------------------
*/
#include <sys/stat.h>
#include <unistd.h>
#include "postgres.h"
#include "funcapi.h"
#include "libpq-fe.h"
#include "miscadmin.h"
#include "pgstat.h"
static Size MemoryContextTotalSpace(MemoryContext context);
PG_FUNCTION_INFO_V1(top_transaction_context_size);
/*
* top_transaction_context_size returns current size of TopTransactionContext.
*/
Datum
top_transaction_context_size(PG_FUNCTION_ARGS)
{
Size totalSpace = MemoryContextTotalSpace(TopTransactionContext);
PG_RETURN_INT64(totalSpace);
}
/*
* MemoryContextTotalSpace returns total space allocated in context and its children.
*/
static Size
MemoryContextTotalSpace(MemoryContext context)
{
Size totalSpace = 0;
MemoryContextCounters totals = { 0 };
TopTransactionContext->methods->stats(TopTransactionContext, NULL, NULL, &totals);
totalSpace += totals.totalspace;
for (MemoryContext child = context->firstchild;
child != NULL;
child = child->nextchild)
{
totalSpace += MemoryContextTotalSpace(child);
}
return totalSpace;
}
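For context, a hedged usage sketch of how backend code could consume this helper. This is a fragment, not part of the commit; it assumes it runs inside the extension next to the function above, with postgres.h and utils/memutils.h already included:

/* log the space held by the whole TopTransactionContext tree */
Size transactionSpace = MemoryContextTotalSpace(TopTransactionContext);
ereport(DEBUG1, (errmsg("TopTransactionContext tree holds %zu bytes",
						transactionSpace)));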

View File

@ -217,7 +217,7 @@ Datum
get_global_active_transactions(PG_FUNCTION_ARGS)
{
TupleDesc tupleDescriptor = NULL;
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
List *connectionList = NIL;
StringInfo queryToSend = makeStringInfo();

View File

@ -311,7 +311,7 @@ citus_worker_stat_activity(PG_FUNCTION_ARGS)
static List *
CitusStatActivity(const char *statQuery)
{
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
List *connectionList = NIL;
/*
@ -437,7 +437,7 @@ GetLocalNodeCitusDistStat(const char *statQuery)
int32 localGroupId = GetLocalGroupId();
/* get the current worker's node stats */
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)
{

View File

@ -593,7 +593,17 @@ AdjustMaxPreparedTransactions(void)
static void
PushSubXact(SubTransactionId subId)
{
MemoryContext old_context = MemoryContextSwitchTo(CurTransactionContext);
/*
* We need to allocate these in TopTransactionContext instead of current
* subxact's memory context. This is because AtSubCommit_Memory won't
* delete the subxact's memory context unless it is empty, and this
* can cause memory leaks. For emptiness it just checks whether the memory
* has been reset, and we cannot reset the subxact context since it may hold
* other data that is needed by upper commits.
*
* See https://github.com/citusdata/citus/issues/3999
*/
MemoryContext old_context = MemoryContextSwitchTo(TopTransactionContext);
/* save provided subId as well as propagated SET LOCAL stmts */
SubXactContext *state = palloc(sizeof(SubXactContext));
@ -612,19 +622,34 @@ PushSubXact(SubTransactionId subId)
static void
PopSubXact(SubTransactionId subId)
{
MemoryContext old_context = MemoryContextSwitchTo(CurTransactionContext);
SubXactContext *state = linitial(activeSubXactContexts);
/*
* the previous activeSetStmts is already invalid because it's in the now-
* aborted subxact (what we're popping), so no need to free before assign-
* ing with the setLocalCmds of the popped context
*/
Assert(state->subId == subId);
activeSetStmts = state->setLocalCmds;
activeSubXactContexts = list_delete_first(activeSubXactContexts);
MemoryContextSwitchTo(old_context);
/*
* Free activeSetStmts to avoid memory leaks when we create subxacts
* for each row, e.g. in exception handling of UDFs.
*/
if (activeSetStmts != NULL)
{
pfree(activeSetStmts->data);
pfree(activeSetStmts);
}
/*
* SET LOCAL commands are local to subxact blocks. When a subxact commits
* or rolls back, we should roll back our set of SET LOCAL commands to the
* ones we had in the upper commit.
*/
activeSetStmts = state->setLocalCmds;
/*
* Free state to avoid memory leaks when we create subxacts for each row,
* e.g. in exception handling of UDFs.
*/
pfree(state);
activeSubXactContexts = list_delete_first(activeSubXactContexts);
}

View File

@ -36,6 +36,7 @@
#include "distributed/metadata_cache.h"
#include "distributed/pg_dist_transaction.h"
#include "distributed/remote_commands.h"
#include "distributed/resource_lock.h"
#include "distributed/transaction_recovery.h"
#include "distributed/worker_manager.h"
#include "distributed/version_compat.h"
@ -118,6 +119,9 @@ RecoverTwoPhaseCommits(void)
{
int recoveredTransactionCount = 0;
/* take advisory lock first to avoid running concurrently */
LockTransactionRecovery(ShareUpdateExclusiveLock);
List *workerList = ActivePrimaryNodeList(NoLock);
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerList)
@ -172,7 +176,7 @@ RecoverWorkerTransactions(WorkerNode *workerNode)
/* take table lock first to avoid running concurrently */
Relation pgDistTransaction = heap_open(DistTransactionRelationId(),
ShareUpdateExclusiveLock);
RowExclusiveLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistTransaction);
/*

View File

@ -156,7 +156,7 @@ static void
SendCommandListToAllWorkersInternal(List *commandList, bool failOnError, const
char *superuser)
{
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)
@ -198,19 +198,21 @@ SendOptionalCommandListToAllWorkers(List *commandList, const char *superuser)
List *
TargetWorkerSetNodeList(TargetWorkerSet targetWorkerSet, LOCKMODE lockMode)
{
List *workerNodeList = ActivePrimaryWorkerNodeList(lockMode);
List *workerNodeList = NIL;
if (targetWorkerSet == ALL_SHARD_NODES)
{
workerNodeList = ActivePrimaryNodeList(lockMode);
}
else
{
workerNodeList = ActivePrimaryNonCoordinatorNodeList(lockMode);
}
List *result = NIL;
int32 localGroupId = GetLocalGroupId();
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, workerNodeList)
{
if (targetWorkerSet == WORKERS_WITH_METADATA && !workerNode->hasMetadata)
{
continue;
}
if (targetWorkerSet == OTHER_WORKERS && workerNode->groupId == localGroupId)
if (targetWorkerSet == NON_COORDINATOR_METADATA_NODES && !workerNode->hasMetadata)
{
continue;
}
@ -232,7 +234,7 @@ TargetWorkerSetNodeList(TargetWorkerSet targetWorkerSet, LOCKMODE lockMode)
void
SendBareCommandListToMetadataWorkers(List *commandList)
{
TargetWorkerSet targetWorkerSet = WORKERS_WITH_METADATA;
TargetWorkerSet targetWorkerSet = NON_COORDINATOR_METADATA_NODES;
List *workerNodeList = TargetWorkerSetNodeList(targetWorkerSet, ShareLock);
char *nodeUser = CitusExtensionOwnerName();
@ -271,7 +273,7 @@ SendBareCommandListToMetadataWorkers(List *commandList)
int
SendBareOptionalCommandListToAllWorkersAsUser(List *commandList, const char *user)
{
TargetWorkerSet targetWorkerSet = ALL_WORKERS;
TargetWorkerSet targetWorkerSet = NON_COORDINATOR_NODES;
List *workerNodeList = TargetWorkerSetNodeList(targetWorkerSet, ShareLock);
int maxError = RESPONSE_OKAY;
@ -318,11 +320,12 @@ SendCommandToMetadataWorkersParams(const char *command,
const Oid *parameterTypes,
const char *const *parameterValues)
{
List *workerNodeList = TargetWorkerSetNodeList(WORKERS_WITH_METADATA, ShareLock);
List *workerNodeList = TargetWorkerSetNodeList(NON_COORDINATOR_METADATA_NODES,
ShareLock);
ErrorIfAnyMetadataNodeOutOfSync(workerNodeList);
SendCommandToWorkersParamsInternal(WORKERS_WITH_METADATA, command, user,
SendCommandToWorkersParamsInternal(NON_COORDINATOR_METADATA_NODES, command, user,
parameterCount, parameterTypes,
parameterValues);
}

View File

@ -34,22 +34,23 @@
static bool IsVariableExpression(Node *node);
static Expr * citus_evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
Oid result_collation,
MasterEvaluationContext *masterEvaluationContext);
CoordinatorEvaluationContext *
coordinatorEvaluationContext);
static bool CitusIsVolatileFunctionIdChecker(Oid func_id, void *context);
static bool CitusIsMutableFunctionIdChecker(Oid func_id, void *context);
static bool ShouldEvaluateExpression(Expr *expression);
static bool ShouldEvaluateFunctionWithMasterContext(MasterEvaluationContext *
evaluationContext);
static bool ShouldEvaluateFunctions(CoordinatorEvaluationContext *evaluationContext);
static void FixFunctionArguments(Node *expr);
static bool FixFunctionArgumentsWalker(Node *expr, void *context);
/*
* RequiresMasterEvaluation returns whether the executor needs to reparse and
* RequiresCoordinatorEvaluation returns whether the executor needs to reparse and
* try to execute this query, which is the case if the query contains
* any stable or volatile function.
*/
bool
RequiresMasterEvaluation(Query *query)
RequiresCoordinatorEvaluation(Query *query)
{
if (query->commandType == CMD_SELECT && !query->hasModifyingCTE)
{
@ -61,25 +62,25 @@ RequiresMasterEvaluation(Query *query)
/*
* ExecuteMasterEvaluableExpressions evaluates expressions and parameters
* ExecuteCoordinatorEvaluableExpressions evaluates expressions and parameters
* that can be resolved to a constant.
*/
void
ExecuteMasterEvaluableExpressions(Query *query, PlanState *planState)
ExecuteCoordinatorEvaluableExpressions(Query *query, PlanState *planState)
{
MasterEvaluationContext masterEvaluationContext;
CoordinatorEvaluationContext coordinatorEvaluationContext;
masterEvaluationContext.planState = planState;
coordinatorEvaluationContext.planState = planState;
if (query->commandType == CMD_SELECT)
{
masterEvaluationContext.evaluationMode = EVALUATE_PARAMS;
coordinatorEvaluationContext.evaluationMode = EVALUATE_PARAMS;
}
else
{
masterEvaluationContext.evaluationMode = EVALUATE_FUNCTIONS_PARAMS;
coordinatorEvaluationContext.evaluationMode = EVALUATE_FUNCTIONS_PARAMS;
}
PartiallyEvaluateExpression((Node *) query, &masterEvaluationContext);
PartiallyEvaluateExpression((Node *) query, &coordinatorEvaluationContext);
}
@ -91,7 +92,7 @@ ExecuteMasterEvaluableExpressions(Query *query, PlanState *planState)
*/
Node *
PartiallyEvaluateExpression(Node *expression,
MasterEvaluationContext *masterEvaluationContext)
CoordinatorEvaluationContext *coordinatorEvaluationContext)
{
if (expression == NULL || IsA(expression, Const))
{
@ -112,11 +113,45 @@ PartiallyEvaluateExpression(Node *expression,
exprType(expression),
exprTypmod(expression),
exprCollation(expression),
masterEvaluationContext);
coordinatorEvaluationContext);
}
else if (ShouldEvaluateExpression((Expr *) expression) &&
ShouldEvaluateFunctionWithMasterContext(masterEvaluationContext))
ShouldEvaluateFunctions(coordinatorEvaluationContext))
{
/*
* The planner normally evaluates constant expressions, but we may be
* working on the original query tree. We could rely on
* citus_evaluate_expr to evaluate constant expressions, but there are
* certain node types that citus_evaluate_expr does not expect because
* the planner normally replaces them (in particular, CollateExpr).
* Hence, we first evaluate constant expressions using
* eval_const_expressions before continuing.
*
* NOTE: We do not use expression_planner here, since all it does
* apart from calling eval_const_expressions is call fix_opfuncids.
* We do not need this, since that is already called in
* citus_evaluate_expr. So we won't needlessly traverse the expression
* tree by calling it another time.
*/
expression = eval_const_expressions(NULL, expression);
/*
* It's possible that after evaluating const expressions we
* actually don't need to evaluate this expression anymore e.g:
*
* 1 = 0 AND now() > timestamp '10-10-2000 00:00'
*
* This statement would simply resolve to false, because 1 = 0 is
* false. That's why we now check again if we should evaluate the
* expression and only continue if we still do.
*/
if (!ShouldEvaluateExpression((Expr *) expression))
{
return (Node *) expression_tree_mutator(expression,
PartiallyEvaluateExpression,
coordinatorEvaluationContext);
}
if (FindNodeCheck(expression, IsVariableExpression))
{
/*
@ -132,19 +167,19 @@ PartiallyEvaluateExpression(Node *expression,
*/
return (Node *) expression_tree_mutator(expression,
PartiallyEvaluateExpression,
masterEvaluationContext);
coordinatorEvaluationContext);
}
return (Node *) citus_evaluate_expr((Expr *) expression,
exprType(expression),
exprTypmod(expression),
exprCollation(expression),
masterEvaluationContext);
coordinatorEvaluationContext);
}
else if (nodeTag == T_Query)
{
Query *query = (Query *) expression;
MasterEvaluationContext subContext = *masterEvaluationContext;
CoordinatorEvaluationContext subContext = *coordinatorEvaluationContext;
if (query->commandType != CMD_SELECT)
{
/*
@ -165,7 +200,7 @@ PartiallyEvaluateExpression(Node *expression,
{
return (Node *) expression_tree_mutator(expression,
PartiallyEvaluateExpression,
masterEvaluationContext);
coordinatorEvaluationContext);
}
return expression;
@ -173,12 +208,12 @@ PartiallyEvaluateExpression(Node *expression,
/*
* ShouldEvaluateFunctionWithMasterContext is a helper function which is used to
* ShouldEvaluateFunctions is a helper function which is used to
* decide whether the function/expression should be evaluated with the input
* masterEvaluationContext.
* coordinatorEvaluationContext.
*/
static bool
ShouldEvaluateFunctionWithMasterContext(MasterEvaluationContext *evaluationContext)
ShouldEvaluateFunctions(CoordinatorEvaluationContext *evaluationContext)
{
if (evaluationContext == NULL)
{
@ -269,7 +304,7 @@ IsVariableExpression(Node *node)
static Expr *
citus_evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
Oid result_collation,
MasterEvaluationContext *masterEvaluationContext)
CoordinatorEvaluationContext *coordinatorEvaluationContext)
{
PlanState *planState = NULL;
EState *estate;
@ -280,19 +315,19 @@ citus_evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
int16 resultTypLen;
bool resultTypByVal;
if (masterEvaluationContext)
if (coordinatorEvaluationContext)
{
planState = masterEvaluationContext->planState;
planState = coordinatorEvaluationContext->planState;
if (IsA(expr, Param))
{
if (masterEvaluationContext->evaluationMode == EVALUATE_NONE)
if (coordinatorEvaluationContext->evaluationMode == EVALUATE_NONE)
{
/* bail out, the caller doesn't want params to be evaluated */
return expr;
}
}
else if (masterEvaluationContext->evaluationMode != EVALUATE_FUNCTIONS_PARAMS)
else if (coordinatorEvaluationContext->evaluationMode != EVALUATE_FUNCTIONS_PARAMS)
{
/* should only get here for node types we should evaluate */
Assert(ShouldEvaluateExpression(expr));

View File

@ -96,7 +96,7 @@ copyJobInfo(Job *newnode, Job *from)
COPY_NODE_FIELD(taskList);
COPY_NODE_FIELD(dependentJobList);
COPY_SCALAR_FIELD(subqueryPushdown);
COPY_SCALAR_FIELD(requiresMasterEvaluation);
COPY_SCALAR_FIELD(requiresCoordinatorEvaluation);
COPY_SCALAR_FIELD(deferredPruning);
COPY_NODE_FIELD(partitionKeyValue);
COPY_NODE_FIELD(localPlannedStatements);

View File

@ -340,7 +340,7 @@ OutJobFields(StringInfo str, const Job *node)
WRITE_NODE_FIELD(taskList);
WRITE_NODE_FIELD(dependentJobList);
WRITE_BOOL_FIELD(subqueryPushdown);
WRITE_BOOL_FIELD(requiresMasterEvaluation);
WRITE_BOOL_FIELD(requiresCoordinatorEvaluation);
WRITE_BOOL_FIELD(deferredPruning);
WRITE_NODE_FIELD(partitionKeyValue);
WRITE_NODE_FIELD(localPlannedStatements);

View File

@ -367,8 +367,8 @@ CreateCertificate(EVP_PKEY *privateKey)
* Postgres does not check the validity on the certificates, but we can't omit the
* dates either to create a certificate that can be parsed. We settled on a validity
* of 0 seconds. When postgres would fix the validity check in a future version it
* would fail right after an upgrade instead of setting a time bomb till certificate
* expiration date.
* would fail right after an upgrade, instead of working until the certificate
* expiration date and then suddenly erroring out.
*/
X509_gmtime_adj(X509_get_notBefore(certificate), 0);
X509_gmtime_adj(X509_get_notAfter(certificate), 0);

View File

@ -104,6 +104,9 @@ static HTAB *MaintenanceDaemonDBHash;
static volatile sig_atomic_t got_SIGHUP = false;
static volatile sig_atomic_t got_SIGTERM = false;
/* set to true when becoming a maintenance daemon */
static bool IsMaintenanceDaemon = false;
static void MaintenanceDaemonSigTermHandler(SIGNAL_ARGS);
static void MaintenanceDaemonSigHupHandler(SIGNAL_ARGS);
static size_t MaintenanceDaemonShmemSize(void);
@ -160,15 +163,31 @@ InitializeMaintenanceDaemonBackend(void)
return;
}
/* maintenance daemon can ignore itself */
if (dbData->workerPid == MyProcPid)
if (!found)
{
/* ensure the values in MaintenanceDaemonDBData are zero */
memset(((char *) dbData) + sizeof(Oid), 0,
sizeof(MaintenanceDaemonDBData) - sizeof(Oid));
}
if (IsMaintenanceDaemon)
{
/*
* InitializeMaintenanceDaemonBackend is called by the maintenance daemon
* itself. In that case, we clearly don't need to start another maintenance
* daemon.
*/
Assert(found);
Assert(dbData->workerPid == MyProcPid);
LWLockRelease(&MaintenanceDaemonControl->lock);
return;
}
if (!found || !dbData->daemonStarted)
{
Assert(dbData->workerPid == 0);
BackgroundWorker worker;
BackgroundWorkerHandle *handle = NULL;
@ -287,13 +306,33 @@ CitusMaintenanceDaemonMain(Datum main_arg)
proc_exit(0);
}
if (myDbData->workerPid != 0)
{
/*
* Another maintenance daemon is running. This usually happens because
* postgres restarts the daemon after a non-zero exit, and
* InitializeMaintenanceDaemonBackend started one before postgres did.
* In that case, the first one stays and the last one exits.
*/
proc_exit(0);
}
before_shmem_exit(MaintenanceDaemonShmemExit, main_arg);
Assert(myDbData->workerPid == 0);
/* from this point, DROP DATABASE will attempt to kill the worker */
/*
* Signal that I am the maintenance daemon now.
*
* From this point, DROP DATABASE/EXTENSION will send a SIGTERM to me.
*/
myDbData->workerPid = MyProcPid;
/*
* Signal that we are running. This is mainly needed in case of a restart after
* an error, otherwise the daemonStarted flag is already true.
*/
myDbData->daemonStarted = true;
/* wire up signals */
pqsignal(SIGTERM, MaintenanceDaemonSigTermHandler);
pqsignal(SIGHUP, MaintenanceDaemonSigHupHandler);
@ -301,6 +340,8 @@ CitusMaintenanceDaemonMain(Datum main_arg)
myDbData->latch = MyLatch;
IsMaintenanceDaemon = true;
LWLockRelease(&MaintenanceDaemonControl->lock);
/*
@ -334,8 +375,6 @@ CitusMaintenanceDaemonMain(Datum main_arg)
CHECK_FOR_INTERRUPTS();
Assert(myDbData->workerPid == MyProcPid);
CitusTableCacheFlushInvalidatedEntries();
/*
@ -562,15 +601,6 @@ CitusMaintenanceDaemonMain(Datum main_arg)
/* check for changed configuration */
if (myDbData->userOid != GetSessionUserId())
{
/*
* Reset myDbData->daemonStarted so InitializeMaintenanceDaemonBackend()
* notices this is a restart.
*/
LWLockAcquire(&MaintenanceDaemonControl->lock, LW_EXCLUSIVE);
myDbData->daemonStarted = false;
myDbData->workerPid = 0;
LWLockRelease(&MaintenanceDaemonControl->lock);
/* return code of 1 requests worker restart */
proc_exit(1);
}
@ -682,8 +712,15 @@ MaintenanceDaemonShmemExit(int code, Datum arg)
MaintenanceDaemonDBData *myDbData = (MaintenanceDaemonDBData *)
hash_search(MaintenanceDaemonDBHash, &databaseOid,
HASH_FIND, NULL);
if (myDbData && myDbData->workerPid == MyProcPid)
/* myDbData is NULL after StopMaintenanceDaemon */
if (myDbData != NULL)
{
/*
* Confirm that I am still the registered maintenance daemon before exiting.
*/
Assert(myDbData->workerPid == MyProcPid);
myDbData->daemonStarted = false;
myDbData->workerPid = 0;
}

View File

@ -230,7 +230,7 @@ LockShardListResourcesOnFirstWorker(LOCKMODE lockmode, List *shardIntervalList)
static bool
IsFirstWorkerNode()
{
List *workerNodeList = ActivePrimaryWorkerNodeList(NoLock);
List *workerNodeList = ActivePrimaryNonCoordinatorNodeList(NoLock);
workerNodeList = SortList(workerNodeList, CompareWorkerNodes);
@ -543,6 +543,20 @@ UnlockShardResource(uint64 shardId, LOCKMODE lockmode)
}
/* LockTransactionRecovery acquires a lock for transaction recovery */
void
LockTransactionRecovery(LOCKMODE lockmode)
{
LOCKTAG tag;
const bool sessionLock = false;
const bool dontWait = false;
SET_LOCKTAG_CITUS_OPERATION(tag, CITUS_TRANSACTION_RECOVERY);
(void) LockAcquire(&tag, lockmode, sessionLock, dontWait);
}
/*
* LockJobResource acquires a lock for creating resources associated with the
* given jobId. This resource is typically a job schema (namespace), and less

View File

@ -307,7 +307,7 @@ FindShardInterval(Datum partitionColumnValue, CitusTableCacheEntry *cacheEntry)
* INVALID_SHARD_INDEX is returned). This should only happen if something is
* terribly wrong, either metadata tables are corrupted or we have a bug
* somewhere. For example, a hash function which returns a value not in the range
* of [INT32_MIN, INT32_MAX] can fire this.
* of [PG_INT32_MIN, PG_INT32_MAX] can fire this.
*/
int
FindShardIntervalIndex(Datum searchedValue, CitusTableCacheEntry *cacheEntry)
@ -348,20 +348,8 @@ FindShardIntervalIndex(Datum searchedValue, CitusTableCacheEntry *cacheEntry)
else
{
int hashedValue = DatumGetInt32(searchedValue);
uint64 hashTokenIncrement = HASH_TOKEN_COUNT / shardCount;
shardIndex = (uint32) (hashedValue - INT32_MIN) / hashTokenIncrement;
Assert(shardIndex <= shardCount);
/*
* If the shard count is not power of 2, the range of the last
* shard becomes larger than others. For that extra piece of range,
* we still need to use the last shard.
*/
if (shardIndex == shardCount)
{
shardIndex = shardCount - 1;
}
shardIndex = CalculateUniformHashRangeIndex(hashedValue, shardCount);
}
}
else if (partitionMethod == DISTRIBUTE_BY_NONE)
@ -442,6 +430,48 @@ SearchCachedShardInterval(Datum partitionColumnValue, ShardInterval **shardInter
}
/*
* CalculateUniformHashRangeIndex returns the index of the hash range in
* which hashedValue falls, assuming shardCount uniform hash ranges.
*
* We use 64-bit integers to avoid overflow issues during arithmetic.
*
* NOTE: This function is ONLY for hash-distributed tables with uniform
* hash ranges.
*/
int
CalculateUniformHashRangeIndex(int hashedValue, int shardCount)
{
int64 hashedValue64 = (int64) hashedValue;
/* normalize to the 0-UINT32_MAX range */
int64 normalizedHashValue = hashedValue64 - PG_INT32_MIN;
/* size of each hash range */
int64 hashRangeSize = HASH_TOKEN_COUNT / shardCount;
/* index of hash range into which the hash value falls */
int shardIndex = (int) (normalizedHashValue / hashRangeSize);
if (shardIndex < 0 || shardIndex > shardCount)
{
ereport(ERROR, (errmsg("bug: shard index %d out of bounds", shardIndex)));
}
/*
* If the shard count is not power of 2, the range of the last
* shard becomes larger than others. For that extra piece of range,
* we still need to use the last shard.
*/
if (shardIndex == shardCount)
{
shardIndex = shardCount - 1;
}
return shardIndex;
}
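A hedged worked example of the arithmetic above (standalone and illustrative, not from the diff): with 4 shards the range size is 2^32 / 4 = 1073741824, so a hashed value of 0 normalizes to 2147483648 and falls into range index 2.

/* standalone check of the index computation */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	int shardCount = 4;
	int hashedValue = 0;

	int64_t normalizedHashValue = (int64_t) hashedValue - INT32_MIN;	/* 2147483648 */
	int64_t hashRangeSize = (int64_t) (UINT64_C(4294967296) / shardCount);	/* 1073741824 */
	int shardIndex = (int) (normalizedHashValue / hashRangeSize);

	printf("hashed value %d -> shard index %d\n", hashedValue, shardIndex);	/* prints 2 */
	return 0;
}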
/*
* SingleReplicatedTable checks whether all shards of a distributed table do not have
* more than one replica. If even one shard has more than one replica, this function
@ -454,7 +484,7 @@ SingleReplicatedTable(Oid relationId)
List *shardPlacementList = NIL;
/* we could have append/range distributed tables without shards */
if (list_length(shardList) <= 1)
if (list_length(shardList) == 0)
{
return false;
}

View File

@ -96,7 +96,7 @@ CollectBasicUsageStatistics(void)
distTableOids = DistTableOidList();
roundedDistTableCount = NextPow2(list_length(distTableOids));
roundedClusterSize = NextPow2(DistributedTablesSize(distTableOids));
workerNodeCount = ActivePrimaryWorkerNodeCount();
workerNodeCount = ActivePrimaryNonCoordinatorNodeCount();
metadataJsonbDatum = DistNodeMetadata();
metadataJsonbStr = DatumGetCString(DirectFunctionCall1(jsonb_out,
metadataJsonbDatum));

View File

@ -250,7 +250,7 @@ worker_hash_partition_table(PG_FUNCTION_ARGS)
static ShardInterval **
SyntheticShardIntervalArrayForShardMinValues(Datum *shardMinValues, int shardCount)
{
Datum nextShardMaxValue = Int32GetDatum(INT32_MAX);
Datum nextShardMaxValue = Int32GetDatum(PG_INT32_MAX);
ShardInterval **syntheticShardIntervalArray =
palloc(sizeof(ShardInterval *) * shardCount);
@ -780,7 +780,12 @@ CitusRemoveDirectory(const char *filename)
/* we now have an empty directory or a regular file, remove it */
if (S_ISDIR(fileStat.st_mode))
{
removed = rmdir(filename);
/*
* We ignore the TOCTOU race condition static analysis warning
* here, since we don't actually read the files or directories. We
* simply want to remove them.
*/
removed = rmdir(filename); /* lgtm[cpp/toctou-race-condition] */
if (errno == ENOTEMPTY || errno == EEXIST)
{
@ -789,7 +794,12 @@ CitusRemoveDirectory(const char *filename)
}
else
{
removed = unlink(filename);
/*
* We ignore the TOCTOU race condition static analysis warning
* here, since we don't actually read the files or directories. We
* simply want to remove them.
*/
removed = unlink(filename); /* lgtm[cpp/toctou-race-condition] */
}
if (removed != 0 && errno != ENOENT)
@ -1240,7 +1250,6 @@ HashPartitionId(Datum partitionValue, Oid partitionCollation, const void *contex
FmgrInfo *comparisonFunction = hashPartitionContext->comparisonFunction;
Datum hashDatum = FunctionCall1Coll(hashFunction, DEFAULT_COLLATION_OID,
partitionValue);
int32 hashResult = 0;
uint32 hashPartitionId = 0;
if (hashDatum == 0)
@ -1250,10 +1259,8 @@ HashPartitionId(Datum partitionValue, Oid partitionCollation, const void *contex
if (hashPartitionContext->hasUniformHashDistribution)
{
uint64 hashTokenIncrement = HASH_TOKEN_COUNT / partitionCount;
hashResult = DatumGetInt32(hashDatum);
hashPartitionId = (uint32) (hashResult - INT32_MIN) / hashTokenIncrement;
int hashValue = DatumGetInt32(hashDatum);
hashPartitionId = CalculateUniformHashRangeIndex(hashValue, partitionCount);
}
else
{

View File

@ -17,10 +17,10 @@
/*
* MasterEvaluationMode is used to signal what expressions in the query
* CoordinatorEvaluationMode is used to signal what expressions in the query
* should be evaluated on the coordinator.
*/
typedef enum MasterEvaluationMode
typedef enum CoordinatorEvaluationMode
{
/* evaluate nothing */
EVALUATE_NONE = 0,
@ -30,23 +30,24 @@ typedef enum MasterEvaluationMode
/* evaluate both the functions/expressions and the external parameters */
EVALUATE_FUNCTIONS_PARAMS
} MasterEvaluationMode;
} CoordinatorEvaluationMode;
/*
* This struct is used to pass information to master
* evaluation logic.
*/
typedef struct MasterEvaluationContext
typedef struct CoordinatorEvaluationContext
{
PlanState *planState;
MasterEvaluationMode evaluationMode;
} MasterEvaluationContext;
CoordinatorEvaluationMode evaluationMode;
} CoordinatorEvaluationContext;
extern bool RequiresMasterEvaluation(Query *query);
extern void ExecuteMasterEvaluableExpressions(Query *query, PlanState *planState);
extern bool RequiresCoordinatorEvaluation(Query *query);
extern void ExecuteCoordinatorEvaluableExpressions(Query *query, PlanState *planState);
extern Node * PartiallyEvaluateExpression(Node *expression,
MasterEvaluationContext *masterEvaluationContext);
CoordinatorEvaluationContext *
coordinatorEvaluationContext);
extern bool CitusIsVolatileFunction(Node *node);
extern bool CitusIsMutableFunction(Node *node);
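For orientation, a hedged usage sketch built only from the renamed declarations above; the real call sites are outside this diff.

if (RequiresCoordinatorEvaluation(query))
{
	/* evaluate the expressions that must run on the coordinator before the
	 * query is shipped to the workers */
	ExecuteCoordinatorEvaluableExpressions(query, planState);
}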

View File

@ -20,6 +20,7 @@ typedef struct CitusScanState
CustomScanState customScanState; /* underlying custom scan node */
/* function that gets called before postgres starts its execution */
bool finishedPreScan; /* flag to check if the pre scan is finished */
void (*PreExecScan)(struct CitusScanState *scanState);
DistributedPlan *distributedPlan; /* distributed execution plan */

View File

@ -1,6 +1,6 @@
/*-------------------------------------------------------------------------
*
* merge_planner.h
* combine_query_planner.h
* Function declarations for building planned statements; these statements
* are then executed on the coordinator node.
*
@ -9,8 +9,8 @@
*-------------------------------------------------------------------------
*/
#ifndef MERGE_PLANNER_H
#define MERGE_PLANNER_H
#ifndef COMBINE_QUERY_PLANNER_H
#define COMBINE_QUERY_PLANNER_H
#include "lib/stringinfo.h"
#include "nodes/parsenodes.h"
@ -29,10 +29,10 @@ struct CustomScan;
extern Path * CreateCitusCustomScanPath(PlannerInfo *root, RelOptInfo *relOptInfo,
Index restrictionIndex, RangeTblEntry *rte,
CustomScan *remoteScan);
extern PlannedStmt * MasterNodeSelectPlan(struct DistributedPlan *distributedPlan,
struct CustomScan *dataScan);
extern PlannedStmt * PlanCombineQuery(struct DistributedPlan *distributedPlan,
struct CustomScan *dataScan);
extern Unique * make_unique_from_sortclauses(Plan *lefttree, List *distinctList);
extern bool ReplaceCitusExtraDataContainer;
extern CustomScan *ReplaceCitusExtraDataContainerWithCustomScan;
#endif /* MERGE_PLANNER_H */
#endif /* COMBINE_QUERY_PLANNER_H */

View File

@ -96,7 +96,7 @@ typedef enum MultiConnectionStructInitializationState
} MultiConnectionStructInitializationState;
/* declaring this directly above makes uncrustify go crazy */
/* declaring this directly above causes uncrustify to format it badly */
typedef enum MultiConnectionMode MultiConnectionMode;
typedef struct MultiConnection
@ -173,6 +173,9 @@ typedef struct ConnectionHashEntry
{
ConnectionHashKey key;
dlist_head *connections;
/* whether the connections list is valid, i.e. fully allocated and initialized */
bool isValid;
} ConnectionHashEntry;
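The isValid flag records whether the connections list behind the entry was fully set up, so callers can detect and repair an entry whose list never got initialized. A hedged sketch of that defensive pattern follows; FindOrCreateConnectionEntry() is a hypothetical stand-in for the hash lookup and TopMemoryContext is used purely for illustration, none of this body is taken from the diff.

ConnectionHashEntry *entry = FindOrCreateConnectionEntry(&key);
if (!entry->isValid)
{
	/* the entry exists, but its connections list was never (fully) initialized */
	entry->connections = MemoryContextAlloc(TopMemoryContext, sizeof(dlist_head));
	dlist_init(entry->connections);
	entry->isValid = true;
}

/* only now is it safe to walk entry->connections */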
/* hash entry for cached connection parameters */

View File

@ -99,7 +99,8 @@ extern void QualifyAlterFunctionDependsStmt(Node *stmt);
extern char * DeparseAlterRoleStmt(Node *stmt);
extern char * DeparseAlterRoleSetStmt(Node *stmt);
extern Node * MakeSetStatementArgument(char *configurationName, char *configurationValue);
extern List * MakeSetStatementArguments(char *configurationName,
char *configurationValue);
extern void QualifyAlterRoleSetStmt(Node *stmt);
/* forward declarations for deparse_extension_stmts.c */

View File

@ -26,5 +26,6 @@ extern List * ExplainAnalyzeTaskList(List *originalTaskList,
TupleDestination *defaultTupleDest, TupleDesc
tupleDesc, ParamListInfo params);
extern bool RequestedForExplainAnalyze(CitusScanState *node);
extern void ResetExplainAnalyzeData(List *taskList);
#endif /* MULTI_EXPLAIN_H */

View File

@ -151,7 +151,7 @@ typedef struct Job
List *taskList;
List *dependentJobList;
bool subqueryPushdown;
bool requiresMasterEvaluation; /* only applies to modify jobs */
bool requiresCoordinatorEvaluation; /* only applies to modify jobs */
bool deferredPruning;
Const *partitionKeyValue;
@ -599,7 +599,7 @@ extern List * QueryPushdownSqlTaskList(Query *query, uint64 jobId,
RelationRestrictionContext *
relationRestrictionContext,
List *prunedRelationShardList, TaskType taskType,
bool modifyRequiresMasterEvaluation);
bool modifyRequiresCoordinatorEvaluation);
/* function declarations for managing jobs */
extern uint64 UniqueJobId(void);

View File

@ -71,6 +71,7 @@ extern Oid ExtractFirstCitusTableId(Query *query);
extern RangeTblEntry * ExtractSelectRangeTableEntry(Query *query);
extern Oid ModifyQueryResultRelationId(Query *query);
extern RangeTblEntry * ExtractResultRelationRTE(Query *query);
extern RangeTblEntry * ExtractResultRelationRTEOrError(Query *query);
extern RangeTblEntry * ExtractDistributedInsertValuesRTE(Query *query);
extern bool IsMultiRowInsert(Query *query);
extern void AddShardIntervalRestrictionToSelect(Query *subqery,

View File

@ -33,5 +33,6 @@ extern Query * BuildReadIntermediateResultsArrayQuery(List *targetEntryList,
List *resultIdList,
bool useBinaryCopyFormat);
extern bool GeneratingSubplans(void);
extern bool IsLocalTableRTE(Node *node);
#endif /* RECURSIVE_PLANNING_H */

View File

@ -38,8 +38,14 @@ typedef enum AdvisoryLocktagClass
ADV_LOCKTAG_CLASS_CITUS_JOB = 6,
ADV_LOCKTAG_CLASS_CITUS_REBALANCE_COLOCATION = 7,
ADV_LOCKTAG_CLASS_CITUS_COLOCATED_SHARDS_METADATA = 8,
ADV_LOCKTAG_CLASS_CITUS_OPERATIONS = 9
} AdvisoryLocktagClass;
/* CitusOperations has constants for citus operations */
typedef enum CitusOperations
{
CITUS_TRANSACTION_RECOVERY = 0
} CitusOperations;
/* reuse advisory lock, but with different, unused field 4 (4)*/
#define SET_LOCKTAG_SHARD_METADATA_RESOURCE(tag, db, shardid) \
@ -83,6 +89,14 @@ typedef enum AdvisoryLocktagClass
(uint32) (colocationOrTableId), \
ADV_LOCKTAG_CLASS_CITUS_REBALANCE_COLOCATION)
/* advisory lock for citus operations; the database is hardcoded to MyDatabaseId
* to ensure the locks are local to each database */
#define SET_LOCKTAG_CITUS_OPERATION(tag, operationId) \
SET_LOCKTAG_ADVISORY(tag, \
MyDatabaseId, \
(uint32) 0, \
(uint32) operationId, \
ADV_LOCKTAG_CLASS_CITUS_OPERATIONS)
/* Lock shard/relation metadata for safe modifications */
extern void LockShardDistributionMetadata(int64 shardId, LOCKMODE lockMode);
@ -110,6 +124,9 @@ extern void UnlockColocationId(int colocationId, LOCKMODE lockMode);
extern void LockShardListMetadata(List *shardIntervalList, LOCKMODE lockMode);
extern void LockShardsInPlacementListMetadata(List *shardPlacementList,
LOCKMODE lockMode);
extern void LockTransactionRecovery(LOCKMODE lockMode);
extern void SerializeNonCommutativeWrites(List *shardIntervalList, LOCKMODE lockMode);
extern void LockRelationShardResources(List *relationShardList, LOCKMODE lockMode);
extern List * GetSortedReferenceShardIntervals(List *relationList);
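Only the declaration of LockTransactionRecovery() appears in this diff. As a hedged sketch, built from the SET_LOCKTAG_CITUS_OPERATION macro and CITUS_TRANSACTION_RECOVERY constant above plus PostgreSQL's standard LockAcquire() API, the lock might be taken like this (an assumption, not the actual implementation):

void
LockTransactionRecovery(LOCKMODE lockMode)
{
	LOCKTAG tag;
	SET_LOCKTAG_CITUS_OPERATION(tag, CITUS_TRANSACTION_RECOVERY);

	/* serializes concurrent transaction recovery within a single database */
	(void) LockAcquire(&tag, lockMode, false, false);
}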

View File

@ -47,6 +47,7 @@ extern int CompareShardPlacementsByShardId(const void *leftElement,
extern int CompareRelationShards(const void *leftElement,
const void *rightElement);
extern int ShardIndex(ShardInterval *shardInterval);
extern int CalculateUniformHashRangeIndex(int hashedValue, int shardCount);
extern ShardInterval * FindShardInterval(Datum partitionColumnValue,
CitusTableCacheEntry *cacheEntry);
extern int FindShardIntervalIndex(Datum searchedValue, CitusTableCacheEntry *cacheEntry);

View File

@ -70,14 +70,14 @@ extern WorkerNode * WorkerGetRoundRobinCandidateNode(List *workerNodeList,
uint64 shardId,
uint32 placementIndex);
extern WorkerNode * WorkerGetLocalFirstCandidateNode(List *currentNodeList);
extern uint32 ActivePrimaryWorkerNodeCount(void);
extern List * ActivePrimaryWorkerNodeList(LOCKMODE lockMode);
extern uint32 ActivePrimaryNonCoordinatorNodeCount(void);
extern List * ActivePrimaryNonCoordinatorNodeList(LOCKMODE lockMode);
extern List * ActivePrimaryNodeList(LOCKMODE lockMode);
extern List * ReferenceTablePlacementNodeList(LOCKMODE lockMode);
extern List * DistributedTablePlacementNodeList(LOCKMODE lockMode);
extern bool NodeCanHaveDistTablePlacements(WorkerNode *node);
extern uint32 ActiveReadableWorkerNodeCount(void);
extern List * ActiveReadableWorkerNodeList(void);
extern uint32 ActiveReadableNonCoordinatorNodeCount(void);
extern List * ActiveReadableNonCoordinatorNodeList(void);
extern List * ActiveReadableNodeList(void);
extern WorkerNode * FindWorkerNode(const char *nodeName, int32 nodePort);
extern WorkerNode * ForceFindWorkerNode(const char *nodeName, int32 nodePort);

View File

@ -22,9 +22,9 @@
*/
typedef enum TargetWorkerSet
{
WORKERS_WITH_METADATA,
OTHER_WORKERS,
ALL_WORKERS
NON_COORDINATOR_METADATA_NODES,
NON_COORDINATOR_NODES,
ALL_SHARD_NODES
} TargetWorkerSet;

View File

@ -1,4 +1,5 @@
CREATE SCHEMA alter_role;
CREATE SCHEMA ",CitUs,.TeeN!?";
-- test if the password of the extension owner can be upgraded
ALTER ROLE CURRENT_USER PASSWORD 'password123' VALID UNTIL 'infinity';
SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = current_user$$);
@ -111,6 +112,12 @@ SELECT run_command_on_workers('SHOW enable_hashagg');
(localhost,57638,t,off)
(1 row)
-- provide a list of values in a supported configuration
ALTER ROLE CURRENT_USER SET search_path TO ",CitUs,.TeeN!?", alter_role, public;
-- test user defined GUCs that appear to be a list, but are instead a single string
ALTER ROLE ALL SET public.myguc TO "Hello, World";
-- test for configuration values that should not be downcased even when unquoted
ALTER ROLE CURRENT_USER SET lc_messages TO 'C';
-- add worker and check all settings are copied
SELECT 1 FROM master_add_node('localhost', :worker_1_port);
?column?
@ -139,6 +146,27 @@ SELECT run_command_on_workers('SHOW enable_hashagg');
(localhost,57638,t,off)
(2 rows)
SELECT run_command_on_workers('SHOW search_path');
run_command_on_workers
---------------------------------------------------------------------
(localhost,57637,t,""",CitUs,.TeeN!?"", alter_role, public")
(localhost,57638,t,""",CitUs,.TeeN!?"", alter_role, public")
(2 rows)
SELECT run_command_on_workers('SHOW lc_messages');
run_command_on_workers
---------------------------------------------------------------------
(localhost,57637,t,C)
(localhost,57638,t,C)
(2 rows)
SELECT run_command_on_workers('SHOW public.myguc');
run_command_on_workers
---------------------------------------------------------------------
(localhost,57637,t,"Hello, World")
(localhost,57638,t,"Hello, World")
(2 rows)
-- reset to default values
ALTER ROLE CURRENT_USER RESET enable_hashagg;
SELECT run_command_on_workers('SHOW enable_hashagg');
@ -226,4 +254,4 @@ SELECT run_command_on_workers('SHOW enable_hashjoin');
(localhost,57638,t,on)
(2 rows)
DROP SCHEMA alter_role CASCADE;
DROP SCHEMA alter_role, ",CitUs,.TeeN!?" CASCADE;

View File

@ -0,0 +1,133 @@
SET citus.next_shard_id TO 20080000;
CREATE SCHEMA anonymous_columns;
SET search_path TO anonymous_columns;
CREATE TABLE t0 (a int PRIMARY KEY, b int, "?column?" text);
SELECT create_distributed_table('t0', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO t0 VALUES (1, 2, 'hello'), (2, 4, 'world');
SELECT "?column?" FROM t0 ORDER BY 1;
?column?
---------------------------------------------------------------------
hello
world
(2 rows)
WITH a AS (SELECT * FROM t0) SELECT "?column?" FROM a ORDER BY 1;
?column?
---------------------------------------------------------------------
hello
world
(2 rows)
WITH a AS (SELECT '' FROM t0) SELECT * FROM a;
?column?
---------------------------------------------------------------------
(2 rows)
-- test CTEs that could be rewritten as subqueries
WITH a AS (SELECT '' FROM t0 GROUP BY a) SELECT * FROM a;
?column?
---------------------------------------------------------------------
(2 rows)
WITH a AS (SELECT '' FROM t0 GROUP BY b) SELECT * FROM a;
?column?
---------------------------------------------------------------------
(2 rows)
WITH a AS (SELECT '','' FROM t0 GROUP BY a) SELECT * FROM a;
?column? | ?column?
---------------------------------------------------------------------
|
|
(2 rows)
WITH a AS (SELECT '','' FROM t0 GROUP BY b) SELECT * FROM a;
?column? | ?column?
---------------------------------------------------------------------
|
|
(2 rows)
WITH a AS (SELECT 1, * FROM t0 WHERE a = 1) SELECT * FROM a;
?column? | a | b | ?column?
---------------------------------------------------------------------
1 | 1 | 2 | hello
(1 row)
-- test CTEs that are referenced multiple times and hence need to stay CTEs
WITH a AS (SELECT '' FROM t0 WHERE a = 1) SELECT * FROM a, a b;
?column? | ?column?
---------------------------------------------------------------------
|
(1 row)
WITH a AS (SELECT '','' FROM t0 WHERE a = 42) SELECT * FROM a, a b;
?column? | ?column? | ?column? | ?column?
---------------------------------------------------------------------
(0 rows)
-- test with explicit subqueries
SELECT * FROM (SELECT a, '' FROM t0 GROUP BY a) as foo ORDER BY 1;
a | ?column?
---------------------------------------------------------------------
1 |
2 |
(2 rows)
SELECT * FROM (SELECT a, '', '' FROM t0 GROUP BY a ) as foo ORDER BY 1;
a | ?column? | ?column?
---------------------------------------------------------------------
1 | |
2 | |
(2 rows)
SELECT * FROM (SELECT b, '' FROM t0 GROUP BY b ) as foo ORDER BY 1;
b | ?column?
---------------------------------------------------------------------
2 |
4 |
(2 rows)
SELECT * FROM (SELECT b, '', '' FROM t0 GROUP BY b ) as foo ORDER BY 1;
b | ?column? | ?column?
---------------------------------------------------------------------
2 | |
4 | |
(2 rows)
-- some tests that follow very similar codepaths
SELECT a + 1 FROM t0 ORDER BY 1;
?column?
---------------------------------------------------------------------
2
3
(2 rows)
SELECT a + 1, a - 1 FROM t0 ORDER BY 1;
?column? | ?column?
---------------------------------------------------------------------
2 | 0
3 | 1
(2 rows)
WITH cte1 AS (SELECT row_to_json(row(a))->'f1' FROM t0) SELECT * FROM cte1 ORDER BY 1::text;
?column?
---------------------------------------------------------------------
1
2
(2 rows)
-- clean up after test
SET client_min_messages TO WARNING;
DROP SCHEMA anonymous_columns CASCADE;

View File

@ -1,7 +1,7 @@
-- This test relies on metadata being synced
-- that's why it should be executed on the MX schedule
CREATE SCHEMA master_evaluation;
SET search_path TO master_evaluation;
CREATE SCHEMA coordinator_evaluation;
SET search_path TO coordinator_evaluation;
-- create a volatile function that returns the local node id
CREATE OR REPLACE FUNCTION get_local_node_id_volatile()
RETURNS INT AS $$
@ -29,8 +29,8 @@ SELECT create_distributed_function('get_local_node_id_volatile_sum_with_param(in
(1 row)
CREATE TABLE master_evaluation_table (key int, value int);
SELECT create_distributed_table('master_evaluation_table', 'key');
CREATE TABLE coordinator_evaluation_table (key int, value int);
SELECT create_distributed_table('coordinator_evaluation_table', 'key');
create_distributed_table
---------------------------------------------------------------------
@ -44,16 +44,16 @@ SELECT get_local_node_id_volatile();
(1 row)
-- load data
INSERT INTO master_evaluation_table SELECT i, i FROM generate_series(0,100)i;
INSERT INTO coordinator_evaluation_table SELECT i, i FROM generate_series(0,100)i;
-- we expect that the function is evaluated on the worker node, so we should get a row
SELECT get_local_node_id_volatile() > 0 FROM master_evaluation_table WHERE key = 1;
SELECT get_local_node_id_volatile() > 0 FROM coordinator_evaluation_table WHERE key = 1;
?column?
---------------------------------------------------------------------
t
(1 row)
-- make sure that it is also true for fast-path router queries with parameters
PREPARE fast_path_router_with_param(int) AS SELECT get_local_node_id_volatile() > 0 FROM master_evaluation_table WHERE key = $1;
PREPARE fast_path_router_with_param(int) AS SELECT get_local_node_id_volatile() > 0 FROM coordinator_evaluation_table WHERE key = $1;
execute fast_path_router_with_param(1);
?column?
---------------------------------------------------------------------
@ -103,13 +103,13 @@ execute fast_path_router_with_param(8);
(1 row)
-- same query as fast_path_router_with_param, but with consts
SELECT get_local_node_id_volatile() > 0 FROM master_evaluation_table WHERE key = 1;
SELECT get_local_node_id_volatile() > 0 FROM coordinator_evaluation_table WHERE key = 1;
?column?
---------------------------------------------------------------------
t
(1 row)
PREPARE router_with_param(int) AS SELECT get_local_node_id_volatile() > 0 FROM master_evaluation_table m1 JOIN master_evaluation_table m2 USING(key) WHERE key = $1;
PREPARE router_with_param(int) AS SELECT get_local_node_id_volatile() > 0 FROM coordinator_evaluation_table m1 JOIN coordinator_evaluation_table m2 USING(key) WHERE key = $1;
execute router_with_param(1);
?column?
---------------------------------------------------------------------
@ -159,21 +159,21 @@ execute router_with_param(8);
(1 row)
-- same query as router_with_param, but with consts
SELECT get_local_node_id_volatile() > 0 FROM master_evaluation_table m1 JOIN master_evaluation_table m2 USING(key) WHERE key = 1;
SELECT get_local_node_id_volatile() > 0 FROM coordinator_evaluation_table m1 JOIN coordinator_evaluation_table m2 USING(key) WHERE key = 1;
?column?
---------------------------------------------------------------------
t
(1 row)
-- for multi-shard queries, we still expect the evaluation to happen on the workers
SELECT count(*), max(get_local_node_id_volatile()) != 0, min(get_local_node_id_volatile()) != 0 FROM master_evaluation_table;
SELECT count(*), max(get_local_node_id_volatile()) != 0, min(get_local_node_id_volatile()) != 0 FROM coordinator_evaluation_table;
count | ?column? | ?column?
---------------------------------------------------------------------
101 | t | t
(1 row)
-- when executed locally, we expect to get the result from the coordinator
SELECT (SELECT count(*) FROM master_evaluation_table), get_local_node_id_volatile() = 0;
SELECT (SELECT count(*) FROM coordinator_evaluation_table), get_local_node_id_volatile() = 0;
count | ?column?
---------------------------------------------------------------------
101 | t
@ -181,7 +181,7 @@ SELECT (SELECT count(*) FROM master_evaluation_table), get_local_node_id_volatil
-- make sure that we get the results from the workers when the query is sent to workers
SET citus.task_assignment_policy TO "round-robin";
SELECT (SELECT count(*) FROM master_evaluation_table), get_local_node_id_volatile() = 0;
SELECT (SELECT count(*) FROM coordinator_evaluation_table), get_local_node_id_volatile() = 0;
count | ?column?
---------------------------------------------------------------------
101 | f
@ -189,13 +189,13 @@ SELECT (SELECT count(*) FROM master_evaluation_table), get_local_node_id_volatil
RESET citus.task_assignment_policy;
-- for multi-shard SELECTs, we don't try to evaluate on the coordinator
SELECT min(get_local_node_id_volatile()) > 0 FROM master_evaluation_table;
SELECT min(get_local_node_id_volatile()) > 0 FROM coordinator_evaluation_table;
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*) > 0 FROM master_evaluation_table WHERE value >= get_local_node_id_volatile();
SELECT count(*) > 0 FROM coordinator_evaluation_table WHERE value >= get_local_node_id_volatile();
?column?
---------------------------------------------------------------------
t
@ -204,7 +204,7 @@ SELECT count(*) > 0 FROM master_evaluation_table WHERE value >= get_local_node_i
-- let's have some tests around expressions
-- for modifications, we expect the evaluation to happen on the coordinator
-- thus the results should be 0
PREPARE insert_with_param_expression(int) AS INSERT INTO master_evaluation_table (key, value) VALUES ($1 + get_local_node_id_volatile(), $1 + get_local_node_id_volatile()) RETURNING key, value;
PREPARE insert_with_param_expression(int) AS INSERT INTO coordinator_evaluation_table (key, value) VALUES ($1 + get_local_node_id_volatile(), $1 + get_local_node_id_volatile()) RETURNING key, value;
EXECUTE insert_with_param_expression(0);
key | value
---------------------------------------------------------------------
@ -249,7 +249,7 @@ EXECUTE insert_with_param_expression(0);
-- for modifications, we expect the evaluation to happen on the coordinator
-- thus the results should be 0
PREPARE insert_with_param(int) AS INSERT INTO master_evaluation_table (key, value) VALUES ($1, $1) RETURNING key, value;
PREPARE insert_with_param(int) AS INSERT INTO coordinator_evaluation_table (key, value) VALUES ($1, $1) RETURNING key, value;
EXECUTE insert_with_param(0 + get_local_node_id_volatile());
key | value
---------------------------------------------------------------------
@ -292,7 +292,7 @@ EXECUTE insert_with_param(0 + get_local_node_id_volatile());
0 | 0
(1 row)
PREPARE router_select_with_param_expression(int) AS SELECT value > 0 FROM master_evaluation_table WHERE key = $1 + get_local_node_id_volatile();
PREPARE router_select_with_param_expression(int) AS SELECT value > 0 FROM coordinator_evaluation_table WHERE key = $1 + get_local_node_id_volatile();
-- for selects, we expect the evaluation to happen on the workers
-- this means that the query should be hitting multiple workers
SET client_min_messages TO DEBUG2;
@ -353,7 +353,7 @@ DEBUG: Router planner cannot handle multi-shard select queries
t
(1 row)
PREPARE router_select_with_param(int) AS SELECT DISTINCT value FROM master_evaluation_table WHERE key = $1;
PREPARE router_select_with_param(int) AS SELECT DISTINCT value FROM coordinator_evaluation_table WHERE key = $1;
-- this time the parameter itself is a function, so it should be evaluated
-- on the coordinator
EXECUTE router_select_with_param(0 + get_local_node_id_volatile());
@ -460,7 +460,7 @@ EXECUTE router_select_with_param(get_local_node_id_volatile());
(1 row)
-- this time use the parameter inside the function
PREPARE router_select_with_parameter_in_function(int) AS SELECT bool_and(get_local_node_id_volatile_sum_with_param($1) > 1) FROM master_evaluation_table WHERE key = get_local_node_id_volatile_sum_with_param($1);
PREPARE router_select_with_parameter_in_function(int) AS SELECT bool_and(get_local_node_id_volatile_sum_with_param($1) > 1) FROM coordinator_evaluation_table WHERE key = get_local_node_id_volatile_sum_with_param($1);
EXECUTE router_select_with_parameter_in_function(0);
DEBUG: Router planner cannot handle multi-shard select queries
bool_and
@ -514,8 +514,8 @@ DEBUG: Router planner cannot handle multi-shard select queries
RESET client_min_messages;
RESET citus.log_remote_commands;
-- numeric has different casting effects, so some tests on that
CREATE TABLE master_evaluation_table_2 (key numeric, value numeric);
SELECT create_distributed_table('master_evaluation_table_2', 'key');
CREATE TABLE coordinator_evaluation_table_2 (key numeric, value numeric);
SELECT create_distributed_table('coordinator_evaluation_table_2', 'key');
create_distributed_table
---------------------------------------------------------------------
@ -529,13 +529,13 @@ BEGIN
RETURN trunc(random() * (end_int-start_int) + start_int);
END;
$$ LANGUAGE 'plpgsql' STRICT;
CREATE OR REPLACE PROCEDURE master_evaluation.test_procedure(int)
CREATE OR REPLACE PROCEDURE coordinator_evaluation.test_procedure(int)
LANGUAGE plpgsql
AS $procedure$
DECLARE filterKey INTEGER;
BEGIN
filterKey := round(master_evaluation.TEST_RANDOM(1,1)) + $1;
PERFORM DISTINCT value FROM master_evaluation_table_2 WHERE key = filterKey;
filterKey := round(coordinator_evaluation.TEST_RANDOM(1,1)) + $1;
PERFORM DISTINCT value FROM coordinator_evaluation_table_2 WHERE key = filterKey;
END;
$procedure$;
-- we couldn't find a meaningful query to write for this
@ -567,13 +567,13 @@ DEBUG: Deferred pruning for a fast-path router query
DEBUG: Creating router plan
DEBUG: Plan is router executable
CALL test_procedure(100);
CREATE OR REPLACE PROCEDURE master_evaluation.test_procedure_2(int)
CREATE OR REPLACE PROCEDURE coordinator_evaluation.test_procedure_2(int)
LANGUAGE plpgsql
AS $procedure$
DECLARE filterKey INTEGER;
BEGIN
filterKey := round(master_evaluation.TEST_RANDOM(1,1)) + $1;
INSERT INTO master_evaluation_table_2 VALUES (filterKey, filterKey);
filterKey := round(coordinator_evaluation.TEST_RANDOM(1,1)) + $1;
INSERT INTO coordinator_evaluation_table_2 VALUES (filterKey, filterKey);
END;
$procedure$;
RESET citus.log_remote_commands ;
@ -586,11 +586,11 @@ CALL test_procedure_2(100);
CALL test_procedure_2(100);
CALL test_procedure_2(100);
CALL test_procedure_2(100);
SELECT count(*) FROM master_evaluation_table_2 WHERE key = 101;
SELECT count(*) FROM coordinator_evaluation_table_2 WHERE key = 101;
count
---------------------------------------------------------------------
7
(1 row)
SET client_min_messages TO ERROR;
DROP SCHEMA master_evaluation CASCADE;
DROP SCHEMA coordinator_evaluation CASCADE;

View File

@ -7,10 +7,10 @@
-- (b) Local Execution vs Remote Execution
-- (c) Parameters on distribution key vs Parameters on non-dist key
-- vs Non-parametrized queries
-- (d) Master Function Evaluation Required vs
-- Master Function Evaluation Not Required
CREATE SCHEMA master_evaluation_combinations;
SET search_path TO master_evaluation_combinations;
-- (d) Coordinator Function Evaluation Required vs
-- Coordinator Function Evaluation Not Required
CREATE SCHEMA coordinator_evaluation_combinations;
SET search_path TO coordinator_evaluation_combinations;
SET citus.next_shard_id TO 1170000;
-- create a volatile function that returns the local node id
CREATE OR REPLACE FUNCTION get_local_node_id_volatile()
@ -824,10 +824,10 @@ EXECUTE router_with_only_function;
\c - - - :worker_2_port
SET citus.log_local_commands TO ON;
SET search_path TO master_evaluation_combinations;
SET search_path TO coordinator_evaluation_combinations;
-- show that the data with user_id = 3 is local
SELECT count(*) FROM user_info_data WHERE user_id = 3;
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
@ -836,63 +836,63 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM master_eva
-- make sure that it is also true for fast-path router queries with parameters
PREPARE fast_path_router_with_param(int) AS SELECT count(*) FROM user_info_data WHERE user_id = $1;
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
execute fast_path_router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
(1 row)
SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = 3;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
@ -901,49 +901,49 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
-- make sure that it is also true for fast-path router queries with parameters
PREPARE fast_path_router_with_param_and_func(int) AS SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = $1;
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
execute fast_path_router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
@ -958,56 +958,56 @@ execute fast_path_router_with_param_and_func(8);
PREPARE fast_path_router_with_param_and_func_on_non_dist_key(int) AS
SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = 3 AND user_index = $1;
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = 3 AND u_data = ('name3', 23)::user_data;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
@ -1015,77 +1015,77 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
PREPARE fast_path_router_with_param_on_non_dist_key_and_func(user_data) AS SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = 3 AND u_data = $1;
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*) FROM user_info_data WHERE user_id = 3 AND u_data = ('name3', 23)::user_data;
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
@ -1093,63 +1093,63 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM master_eva
PREPARE fast_path_router_with_param_on_non_dist_key(user_data) AS SELECT count(*) FROM user_info_data WHERE user_id = 3 AND u_data = $1;
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
@@ -1157,63 +1157,63 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM master_eva
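Throughout this section the two NOTICE lines under each statement differ only in the schema name (master_evaluation_combinations versus coordinator_evaluation_combinations); the locally executed shard query is otherwise identical, so within this section the diff is a pure rename of the test schema. The hunk below prepares a statement with no parameters whose target list calls a volatile function; note in its NOTICE lines that the function call is kept, schema-qualified, inside the deparsed shard query rather than being replaced by a pre-computed value. As a minimal sketch (not taken from the test file, and assuming the shard for user_id = 3 has a placement on the node running the query), the setting that normally surfaces these NOTICE lines is:
    SET citus.log_local_commands TO on;  -- log shard queries that run on the node's own placements
    EXECUTE fast_path_router_with_param_on_non_dist_key(('name3', 23)::user_data);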
PREPARE fast_path_router_with_only_function AS SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = 3;
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE fast_path_router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE (user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE user_id = 3;
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
count
---------------------------------------------------------------------
1
@@ -1222,63 +1222,63 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_ev
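This hunk moves on to router queries over a self-join of user_info_data on the distribution column, with the filter value supplied as a parameter; the NOTICE lines keep the filter as $1 in every execution. Each prepared statement in this file is executed more than five times, which matters because a stock PostgreSQL server only considers switching from per-execution custom plans to a cached generic plan from the sixth execution onwards; the repetitions therefore cover both planning paths. A sketch for pinning either path while experimenting, using the statement from the hunk below (plan_cache_mode is plain PostgreSQL 12+, not a Citus setting):
    PREPARE router_with_param(int) AS
        SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE user_id = $1;
    SET plan_cache_mode TO force_custom_plan;    -- re-plan with the bound value on every EXECUTE
    EXECUTE router_with_param(3);
    SET plan_cache_mode TO force_generic_plan;   -- reuse one parameterized plan instead
    EXECUTE router_with_param(3);
    RESET plan_cache_mode;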
-- make sure that it is also true for fast-path router queries with parameters
PREPARE router_with_param(int) AS SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE user_id = $1;
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
execute router_with_param(3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) $1)
count
---------------------------------------------------------------------
1
(1 row)
SELECT get_local_node_id_volatile() > 0 FROM user_info_data m1 JOIN user_info_data m2 USING(user_id) WHERE m1.user_id = 3;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
@@ -1286,56 +1286,56 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
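The next prepared statement combines the two previous cases: the parameterized self-join plus the volatile get_local_node_id_volatile() call in the target list, and both the $1 placeholder and the schema-qualified function call survive into the locally executed query. To inspect the deparsed shard query without executing it, EXPLAIN with VERBOSE shows the task and its query text up front (a sketch; the exact output shape depends on the Citus version in use):
    EXPLAIN (VERBOSE, COSTS OFF)
        SELECT get_local_node_id_volatile() > 0
        FROM user_info_data m1 JOIN user_info_data m2 USING (user_id)
        WHERE m1.user_id = 3;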
PREPARE router_with_param_and_func(int) AS SELECT get_local_node_id_volatile() > 0 FROM user_info_data m1 JOIN user_info_data m2 USING(user_id) WHERE m1.user_id = $1;
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
(1 row)
execute router_with_param_and_func(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) $1)
?column?
---------------------------------------------------------------------
t
@@ -1344,49 +1344,49 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
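In the hunk below the parameter moves to a non-distribution column: user_id is pinned to the constant 3 (twice, exercising redundant quals) and only user_index = $1 is parameterized, so routing never needs the parameter's value and the placeholder is passed through to the shard query untouched. For contrast, a sketch of the opposite case, where the parameter is the distribution key and its bound value is what determines the target shard (the statement name here is made up for illustration):
    PREPARE fast_path_on_dist_key(int) AS        -- hypothetical name, not part of the test
        SELECT count(*) FROM user_info_data WHERE user_id = $1;
    EXECUTE fast_path_on_dist_key(3);            -- the bound value decides which shard is queried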
PREPARE router_with_param_and_func_on_non_dist_key(int) AS
SELECT get_local_node_id_volatile() > 0 FROM user_info_data WHERE user_id = 3 AND user_id = 3 AND user_index = $1;
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_and_func_on_non_dist_key(3);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM master_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM coordinator_evaluation_combinations.user_info_data_1170001 user_info_data WHERE ((user_id OPERATOR(pg_catalog.=) 3) AND (user_id OPERATOR(pg_catalog.=) 3) AND (user_index OPERATOR(pg_catalog.=) $1))
?column?
---------------------------------------------------------------------
t
@@ -1394,21 +1394,21 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
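The statements that follow re-run the join with constants instead of parameters. In their NOTICE lines the composite constant is deparsed as ROW('name3'::text, 23) cast to the shard schema's user_data type, where the parameterized variants above kept $1 with the same cast. Both spellings of the composite value are interchangeable in PostgreSQL (a sketch; the field names of user_data are not visible in this output):
    SELECT ('name3', 23)::user_data;     -- row-constructor shorthand
    SELECT ROW('name3', 23)::user_data;  -- explicit ROW(...), as it appears in the deparsed query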
-- same query as router_with_param, but with consts
SELECT get_local_node_id_volatile() > 0 FROM user_info_data m1 JOIN user_info_data m2 USING(user_id) WHERE m1.user_id = 3;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 m1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 m2(user_id, u_data, user_index) USING (user_id)) WHERE (m1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE u1.user_id = 3 AND u1.u_data = ('name3', 23)::user_data;
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE u1.user_id = 3 AND u1.u_data = ('name3', 23)::user_data;
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
@@ -1416,70 +1416,70 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_ev
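Now the composite value itself becomes the parameter: the join filters on u1.u_data = $1 with the distribution column fixed to 3, and the deparsed shard query casts the parameter to the shard schema's copy of the type ($1::...user_data), which is the behaviour these NOTICE lines pin down. A small sketch for confirming the coercion outside the test (the statement name is illustrative only):
    PREPARE p_typeof(user_data) AS SELECT pg_typeof($1);  -- hypothetical helper statement
    EXECUTE p_typeof(('name3', 23)::user_data);           -- reports user_data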
PREPARE router_with_param_on_non_dist_key(user_data) AS SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE u1.user_id = 3 AND u1.u_data = $1;
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_param_on_non_dist_key(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
SELECT get_local_node_id_volatile() > 0 FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE u1.user_id = 3 AND u1.u_data = ('name3', 23)::user_data;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
@@ -1487,70 +1487,70 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
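The same composite-parameter join follows, now with the volatile function in the target list; the $1::...user_data cast and the schema-qualified function call appear together in the locally executed query. Prepared statements such as these are session-local objects, and plain PostgreSQL bookkeeping for them looks like this (the generic_plans and custom_plans counters were only added in PostgreSQL 14, so they are not available on the server versions Citus 9.4 supports):
    SELECT name, parameter_types FROM pg_prepared_statements;  -- statements prepared in this session
    DEALLOCATE ALL;                                            -- drop them all, e.g. before re-running a block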
PREPARE router_with_param_on_non_dist_key_and_func(user_data) AS SELECT get_local_node_id_volatile() > 0 FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE u1.user_id = 3 AND u1.u_data = $1;
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_param_on_non_dist_key_and_func(('name3', 23)::user_data);
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE user_id = 3 AND u1.u_data = ('name3', 23)::user_data;
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) 3) AND (u1.u_data OPERATOR(pg_catalog.=) ROW('name3'::text, 23)::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
@@ -1558,70 +1558,70 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_ev
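The final hunk parameterizes both columns: $2 for the distribution key and $1 for the composite non-distribution value. Both placeholders are preserved in the shard query, the composite one with an explicit cast and the distribution-key comparison deparsed as u1.user_id OPERATOR(pg_catalog.=) $2. Prepared-statement parameters are purely positional, independent of the order in which the columns appear in the WHERE clause; the statement from this hunk, repeated here only to make the binding explicit:
    PREPARE router_with_two_params(user_data, int) AS
        SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id)
        WHERE user_id = $2 AND u1.u_data = $1;
    EXECUTE router_with_two_params(('name3', 23)::user_data, 3);  -- $1 = the composite value, $2 = 3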
PREPARE router_with_two_params(user_data, int) AS SELECT count(*) FROM user_info_data u1 JOIN user_info_data u2 USING (user_id) WHERE user_id = $2 AND u1.u_data = $1;
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
EXECUTE router_with_two_params(('name3', 23)::user_data, 3);
NOTICE: executing the command locally: SELECT count(*) AS count FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::master_evaluation_combinations.user_data))
NOTICE: executing the command locally: SELECT count(*) AS count FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE ((u1.user_id OPERATOR(pg_catalog.=) $2) AND (u1.u_data OPERATOR(pg_catalog.=) $1::coordinator_evaluation_combinations.user_data))
count
---------------------------------------------------------------------
1
(1 row)
SELECT get_local_node_id_volatile() > 0 FROM user_info_data u1 JOIN user_info_data u2 USING(user_id) WHERE user_id = 3;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
@ -1629,56 +1629,56 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
PREPARE router_with_only_function AS SELECT get_local_node_id_volatile() > 0 FROM user_info_data u1 JOIN user_info_data u2 USING(user_id) WHERE user_id = 3;
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
(1 row)
EXECUTE router_with_only_function;
NOTICE: executing the command locally: SELECT (master_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (master_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN master_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
NOTICE: executing the command locally: SELECT (coordinator_evaluation_combinations.get_local_node_id_volatile() OPERATOR(pg_catalog.>) 0) FROM (coordinator_evaluation_combinations.user_info_data_1170001 u1(user_id, u_data, user_index) JOIN coordinator_evaluation_combinations.user_info_data_1170001 u2(user_id, u_data, user_index) USING (user_id)) WHERE (u1.user_id OPERATOR(pg_catalog.=) 3)
?column?
---------------------------------------------------------------------
t
@ -1687,4 +1687,4 @@ NOTICE: executing the command locally: SELECT (master_evaluation_combinations.g
-- suppress notices
\c - - - :master_port
SET client_min_messages TO ERROR;
DROP SCHEMA master_evaluation_combinations CASCADE;
DROP SCHEMA coordinator_evaluation_combinations CASCADE;

@ -198,6 +198,23 @@ NOTICE: executing the command locally: SELECT count(*) AS count FROM coordinato
(1 row)
ROLLBACK;
-- repartition queries should work fine
SET citus.enable_repartition_joins TO ON;
SELECT count(*) FROM test t1, test t2 WHERE t1.x = t2.y;
count
---------------------------------------------------------------------
100
(1 row)
BEGIN;
SET citus.enable_repartition_joins TO ON;
SELECT count(*) FROM test t1, test t2 WHERE t1.x = t2.y;
count
---------------------------------------------------------------------
100
(1 row)
END;
BEGIN;
SET citus.enable_repartition_joins TO ON;
-- trigger local execution
@ -439,6 +456,7 @@ BEGIN;
-- copying task
INSERT INTO dist_table SELECT a + 1 FROM dist_table;
ROLLBACK;
SET citus.shard_replication_factor TO 1;
BEGIN;
SET citus.shard_replication_factor TO 2;
CREATE TABLE dist_table1(a int);
@ -459,11 +477,12 @@ RESET citus.enable_cte_inlining;
DELETE FROM test;
DROP TABLE test;
DROP TABLE dist_table;
DROP TABLE ref;
NOTICE: executing the command locally: DROP TABLE IF EXISTS coordinator_shouldhaveshards.ref_xxxxx CASCADE
CONTEXT: SQL statement "SELECT master_drop_all_shards(v_obj.objid, v_obj.schema_name, v_obj.object_name)"
PL/pgSQL function citus_drop_trigger() line 19 at PERFORM
DROP SCHEMA coordinator_shouldhaveshards CASCADE;
NOTICE: drop cascades to 3 other objects
DETAIL: drop cascades to table ref
drop cascades to table ref_1503016
drop cascades to table local
NOTICE: drop cascades to table local
SELECT 1 FROM master_set_node_property('localhost', :master_port, 'shouldhaveshards', false);
?column?
---------------------------------------------------------------------

@ -165,7 +165,7 @@ DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Creating router plan
DEBUG: Plan is router executable
DEBUG: generating subplan XXX_3 for subquery SELECT key, value, other_value, (SELECT 1) FROM (SELECT cte_1.key, cte_1.value, cte_1.other_value FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) cte_1) foo
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) top_cte, (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value, intermediate_result."?column?" FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb, "?column?" integer)) bar
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) top_cte, (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value, intermediate_result."?column?" FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb, "?column?" integer)) bar(key, value, other_value, "?column?")
DEBUG: Creating router plan
DEBUG: Plan is router executable
count
@ -249,7 +249,7 @@ DEBUG: CTE cte_2 is going to be inlined via distributed planning
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for subquery SELECT key, value, other_value FROM cte_inline.test_table
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT cte_1.key, cte_1.value, cte_1.other_value, (SELECT 1) FROM (SELECT test_table.key, test_table.value, test_table.other_value FROM cte_inline.test_table) cte_1) foo JOIN (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) cte_2 ON (true))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT cte_1.key, cte_1.value, cte_1.other_value, (SELECT 1) FROM (SELECT test_table.key, test_table.value, test_table.other_value FROM cte_inline.test_table) cte_1) foo(key, value, other_value, "?column?") JOIN (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) cte_2 ON (true))
DEBUG: Router planner cannot handle multi-shard select queries
count
---------------------------------------------------------------------

@ -153,7 +153,7 @@ DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Creating router plan
DEBUG: Plan is router executable
DEBUG: generating subplan XXX_3 for subquery SELECT key, value, other_value, (SELECT 1) FROM (SELECT cte_1.key, cte_1.value, cte_1.other_value FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) cte_1) foo
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) top_cte, (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value, intermediate_result."?column?" FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb, "?column?" integer)) bar
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) top_cte, (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value, intermediate_result."?column?" FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb, "?column?" integer)) bar(key, value, other_value, "?column?")
DEBUG: Creating router plan
DEBUG: Plan is router executable
count
@ -237,7 +237,7 @@ DEBUG: CTE cte_2 is going to be inlined via distributed planning
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for subquery SELECT key, value, other_value FROM cte_inline.test_table
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT cte_1.key, cte_1.value, cte_1.other_value, (SELECT 1) FROM (SELECT test_table.key, test_table.value, test_table.other_value FROM cte_inline.test_table) cte_1) foo JOIN (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) cte_2 ON (true))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT cte_1.key, cte_1.value, cte_1.other_value, (SELECT 1) FROM (SELECT test_table.key, test_table.value, test_table.other_value FROM cte_inline.test_table) cte_1) foo(key, value, other_value, "?column?") JOIN (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.other_value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, other_value jsonb)) cte_2 ON (true))
DEBUG: Router planner cannot handle multi-shard select queries
count
---------------------------------------------------------------------

@ -0,0 +1,218 @@
CREATE SCHEMA cursors;
SET search_path TO cursors;
CREATE TABLE distributed_table (key int, value text);
SELECT create_distributed_table('distributed_table', 'key');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- load some data, but not a very small amount, because RETURN QUERY in plpgsql
-- hard-codes the cursor fetch to 50 rows on PG 12; that limit might increase
-- in a future release, so be mindful
INSERT INTO distributed_table SELECT i % 10, i::text FROM generate_series(0, 1000) i;
CREATE OR REPLACE FUNCTION simple_cursor_on_dist_table(cursor_name refcursor) RETURNS refcursor AS '
BEGIN
OPEN $1 FOR SELECT DISTINCT key FROM distributed_table ORDER BY 1;
RETURN $1;
END;
' LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION cursor_with_intermediate_result_on_dist_table(cursor_name refcursor) RETURNS refcursor AS '
BEGIN
OPEN $1 FOR
WITH cte_1 AS (SELECT * FROM distributed_table OFFSET 0)
SELECT DISTINCT key FROM distributed_table WHERE value in (SELECT value FROM cte_1) ORDER BY 1;
RETURN $1;
END;
' LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION cursor_with_intermediate_result_on_dist_table_with_param(cursor_name refcursor, filter text) RETURNS refcursor AS '
BEGIN
OPEN $1 FOR
WITH cte_1 AS (SELECT * FROM distributed_table WHERE value < $2 OFFSET 0)
SELECT DISTINCT key FROM distributed_table WHERE value in (SELECT value FROM cte_1) ORDER BY 1;
RETURN $1;
END;
' LANGUAGE plpgsql;
-- pretty basic query with cursors
-- Citus should plan/execute once and pull
-- the results to the coordinator, then serve them
-- from the coordinator
BEGIN;
SELECT simple_cursor_on_dist_table('cursor_1');
simple_cursor_on_dist_table
---------------------------------------------------------------------
cursor_1
(1 row)
SET LOCAL citus.log_intermediate_results TO ON;
SET LOCAL client_min_messages TO DEBUG1;
FETCH 5 IN cursor_1;
key
---------------------------------------------------------------------
0
1
2
3
4
(5 rows)
FETCH 50 IN cursor_1;
key
---------------------------------------------------------------------
5
6
7
8
9
(5 rows)
FETCH ALL IN cursor_1;
key
---------------------------------------------------------------------
(0 rows)
COMMIT;
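For comparison, a minimal sketch (an editorial aside, not part of the test file) of the same idea without the plpgsql wrapper, assuming the cursors.distributed_table created above; a plain DECLARE likewise plans the distributed query once, when the cursor is opened:
BEGIN;
DECLARE cursor_2 CURSOR FOR SELECT DISTINCT key FROM distributed_table ORDER BY 1;
FETCH 5 IN cursor_2;    -- the distributed plan was built once, at DECLARE time
FETCH ALL IN cursor_2;  -- later fetches reuse the same portal
COMMIT;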
BEGIN;
SELECT cursor_with_intermediate_result_on_dist_table('cursor_1');
cursor_with_intermediate_result_on_dist_table
---------------------------------------------------------------------
cursor_1
(1 row)
-- multiple FETCH commands should not trigger re-running the subplans
SET LOCAL citus.log_intermediate_results TO ON;
SET LOCAL client_min_messages TO DEBUG1;
FETCH 5 IN cursor_1;
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
key
---------------------------------------------------------------------
0
1
2
3
4
(5 rows)
FETCH 1 IN cursor_1;
key
---------------------------------------------------------------------
5
(1 row)
FETCH ALL IN cursor_1;
key
---------------------------------------------------------------------
6
7
8
9
(4 rows)
FETCH 5 IN cursor_1;
key
---------------------------------------------------------------------
(0 rows)
COMMIT;
BEGIN;
SELECT cursor_with_intermediate_result_on_dist_table_with_param('cursor_1', '600');
cursor_with_intermediate_result_on_dist_table_with_param
---------------------------------------------------------------------
cursor_1
(1 row)
-- multiple FETCH commands should not trigger re-running the subplans
-- also test with parameters
SET LOCAL citus.log_intermediate_results TO ON;
SET LOCAL client_min_messages TO DEBUG1;
FETCH 1 IN cursor_1;
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
key
---------------------------------------------------------------------
0
(1 row)
FETCH 1 IN cursor_1;
key
---------------------------------------------------------------------
1
(1 row)
FETCH 1 IN cursor_1;
key
---------------------------------------------------------------------
2
(1 row)
FETCH 1 IN cursor_1;
key
---------------------------------------------------------------------
3
(1 row)
FETCH 1 IN cursor_1;
key
---------------------------------------------------------------------
4
(1 row)
FETCH 1 IN cursor_1;
key
---------------------------------------------------------------------
5
(1 row)
FETCH ALL IN cursor_1;
key
---------------------------------------------------------------------
6
7
8
9
(4 rows)
COMMIT;
CREATE OR REPLACE FUNCTION value_counter() RETURNS TABLE(counter text) LANGUAGE PLPGSQL AS $function$
BEGIN
return query
WITH cte AS
(SELECT dt.value
FROM distributed_table dt
WHERE dt.value in
(SELECT value
FROM distributed_table p
GROUP BY p.value
HAVING count(*) > 0))
SELECT * FROM cte;
END;
$function$ ;
SET citus.log_intermediate_results TO ON;
SET client_min_messages TO DEBUG1;
\set VERBOSITY terse
SELECT count(*) from (SELECT value_counter()) as foo;
DEBUG: CTE cte is going to be inlined via distributed planning
DEBUG: generating subplan XXX_1 for subquery SELECT value FROM cursors.distributed_table p GROUP BY value HAVING (count(*) OPERATOR(pg_catalog.>) 0)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT value FROM (SELECT dt.value FROM cursors.distributed_table dt WHERE (dt.value OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(value text)))) cte
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
---------------------------------------------------------------------
1001
(1 row)
BEGIN;
SELECT count(*) from (SELECT value_counter()) as foo;
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
---------------------------------------------------------------------
1001
(1 row)
COMMIT;
-- suppress NOTICEs
SET client_min_messages TO ERROR;
DROP SCHEMA cursors CASCADE;

@ -299,7 +299,7 @@ INSERT INTO
VALUES ('3', (WITH vals AS (SELECT 3) select * from vals));
DEBUG: CTE vals is going to be inlined via distributed planning
DEBUG: generating subplan XXX_1 for CTE vals: SELECT 3
DEBUG: Plan XXX query after replacing subqueries and CTEs: INSERT INTO recursive_dml_queries.second_distributed_table (tenant_id, dept) VALUES ('3'::text, (SELECT vals."?column?" FROM (SELECT intermediate_result."?column?" FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result("?column?" integer)) vals))
DEBUG: Plan XXX query after replacing subqueries and CTEs: INSERT INTO recursive_dml_queries.second_distributed_table (tenant_id, dept) VALUES ('3'::text, (SELECT vals."?column?" FROM (SELECT intermediate_result."?column?" FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result("?column?" integer)) vals("?column?")))
ERROR: subqueries are not supported within INSERT queries
HINT: Try rewriting your queries with 'INSERT INTO ... SELECT' syntax.
INSERT INTO

@ -0,0 +1,370 @@
\c - - - :master_port
CREATE SCHEMA single_node;
SET search_path TO single_node;
SET citus.shard_count TO 4;
SET citus.shard_replication_factor TO 1;
SET citus.next_shard_id TO 93630500;
SELECT 1 FROM master_add_node('localhost', :master_port, groupid => 0);
?column?
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_set_node_property('localhost', :master_port, 'shouldhaveshards', true);
?column?
---------------------------------------------------------------------
1
(1 row)
CREATE TABLE test(x int, y int);
SELECT create_distributed_table('test','x');
create_distributed_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE ref(a int, b int);
SELECT create_reference_table('ref');
create_reference_table
---------------------------------------------------------------------
(1 row)
CREATE TABLE local(c int, d int);
INSERT INTO test VALUES (1, 2), (3, 4), (5, 6), (2, 7), (4, 5);
INSERT INTO ref VALUES (1, 2), (5, 6), (7, 8);
INSERT INTO local VALUES (1, 2), (3, 4), (7, 8);
-- Check that repartition joins are supported
SET citus.enable_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
x | y | x | y
---------------------------------------------------------------------
2 | 7 | 1 | 2
4 | 5 | 3 | 4
5 | 6 | 4 | 5
(3 rows)
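-- also try the single-hash repartition join path, where only one side of the join is repartitioned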
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
x | y | x | y
---------------------------------------------------------------------
2 | 7 | 1 | 2
4 | 5 | 3 | 4
5 | 6 | 4 | 5
(3 rows)
RESET citus.enable_single_hash_repartition_joins;
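-- repeat the repartition joins under each task assignment policy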
SET citus.task_assignment_policy TO 'round-robin';
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
x | y | x | y
---------------------------------------------------------------------
2 | 7 | 1 | 2
4 | 5 | 3 | 4
5 | 6 | 4 | 5
(3 rows)
SET citus.task_assignment_policy TO 'greedy';
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
x | y | x | y
---------------------------------------------------------------------
2 | 7 | 1 | 2
4 | 5 | 3 | 4
5 | 6 | 4 | 5
(3 rows)
SET citus.task_assignment_policy TO 'first-replica';
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
x | y | x | y
---------------------------------------------------------------------
2 | 7 | 1 | 2
4 | 5 | 3 | 4
5 | 6 | 4 | 5
(3 rows)
RESET citus.enable_repartition_joins;
-- connect to the follower and check that a simple select query works; the follower
-- is still in the default cluster and will send queries to the primary nodes
\c - - - :follower_master_port
SET search_path TO single_node;
SELECT * FROM test WHERE x = 1;
x | y
---------------------------------------------------------------------
1 | 2
(1 row)
SELECT count(*) FROM test;
count
---------------------------------------------------------------------
5
(1 row)
SELECT * FROM test ORDER BY x;
x | y
---------------------------------------------------------------------
1 | 2
2 | 7
3 | 4
4 | 5
5 | 6
(5 rows)
SELECT count(*) FROM ref;
count
---------------------------------------------------------------------
3
(1 row)
SELECT * FROM ref ORDER BY a;
a | b
---------------------------------------------------------------------
1 | 2
5 | 6
7 | 8
(3 rows)
SELECT * FROM test, ref WHERE x = a ORDER BY x;
x | y | a | b
---------------------------------------------------------------------
1 | 2 | 1 | 2
5 | 6 | 5 | 6
(2 rows)
SELECT count(*) FROM local;
count
---------------------------------------------------------------------
3
(1 row)
SELECT * FROM local ORDER BY c;
c | d
---------------------------------------------------------------------
1 | 2
3 | 4
7 | 8
(3 rows)
SELECT * FROM ref, local WHERE a = c ORDER BY a;
a | b | c | d
---------------------------------------------------------------------
1 | 2 | 1 | 2
7 | 8 | 7 | 8
(2 rows)
SET citus.enable_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
SET citus.task_assignment_policy TO 'round-robin';
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
SET citus.task_assignment_policy TO 'greedy';
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
SET citus.task_assignment_policy TO 'first-replica';
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
RESET citus.enable_repartition_joins;
RESET citus.enable_single_hash_repartition_joins;
-- Confirm that dummy placements work
SELECT count(*) FROM test WHERE false;
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test WHERE false GROUP BY GROUPING SETS (x,y);
count
---------------------------------------------------------------------
(0 rows)
-- Confirm that they work with round-robin task assignment policy
SET citus.task_assignment_policy TO 'round-robin';
SELECT count(*) FROM test WHERE false;
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test WHERE false GROUP BY GROUPING SETS (x,y);
count
---------------------------------------------------------------------
(0 rows)
SET citus.task_assignment_policy TO 'greedy';
SELECT count(*) FROM test WHERE false;
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test WHERE false GROUP BY GROUPING SETS (x,y);
count
---------------------------------------------------------------------
(0 rows)
SET citus.task_assignment_policy TO 'first-replica';
SELECT count(*) FROM test WHERE false;
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test WHERE false GROUP BY GROUPING SETS (x,y);
count
---------------------------------------------------------------------
(0 rows)
RESET citus.task_assignment_policy;
-- now, connect to the follower but tell it to use secondary nodes. There are no
-- secondary nodes so this should fail.
-- (this is :follower_master_port but substitution doesn't work here)
\c "port=9070 dbname=regression options='-c\ citus.use_secondary_nodes=always'"
SET search_path TO single_node;
SELECT * FROM test WHERE x = 1;
ERROR: node group 0 does not have a secondary node
-- add the follower as a secondary node and try again; the SELECT statement
-- should work this time
\c - - - :master_port
SET search_path TO single_node;
SELECT 1 FROM master_add_node('localhost', :follower_master_port, groupid => 0, noderole => 'secondary');
?column?
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_set_node_property('localhost', :master_port, 'shouldhaveshards', true);
?column?
---------------------------------------------------------------------
1
(1 row)
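As an editorial aside (not part of the diff), one way to double-check what got registered is Citus' node metadata; a minimal sketch, assuming the standard pg_dist_node catalog:
SELECT nodename, nodeport, groupid, noderole FROM pg_dist_node ORDER BY nodeport;
-- the coordinator should appear as 'primary' and the follower as 'secondary', both in group 0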
\c "port=9070 dbname=regression options='-c\ citus.use_secondary_nodes=always'"
SET search_path TO single_node;
SELECT * FROM test WHERE x = 1;
x | y
---------------------------------------------------------------------
1 | 2
(1 row)
SELECT count(*) FROM test;
count
---------------------------------------------------------------------
5
(1 row)
SELECT * FROM test ORDER BY x;
x | y
---------------------------------------------------------------------
1 | 2
2 | 7
3 | 4
4 | 5
5 | 6
(5 rows)
SELECT count(*) FROM ref;
count
---------------------------------------------------------------------
3
(1 row)
SELECT * FROM ref ORDER BY a;
a | b
---------------------------------------------------------------------
1 | 2
5 | 6
7 | 8
(3 rows)
SELECT * FROM test, ref WHERE x = a ORDER BY x;
x | y | a | b
---------------------------------------------------------------------
1 | 2 | 1 | 2
5 | 6 | 5 | 6
(2 rows)
SELECT count(*) FROM local;
count
---------------------------------------------------------------------
3
(1 row)
SELECT * FROM local ORDER BY c;
c | d
---------------------------------------------------------------------
1 | 2
3 | 4
7 | 8
(3 rows)
SELECT * FROM ref, local WHERE a = c ORDER BY a;
a | b | c | d
---------------------------------------------------------------------
1 | 2 | 1 | 2
7 | 8 | 7 | 8
(2 rows)
SET citus.enable_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT * FROM test t1, test t2 WHERE t1.x = t2.y ORDER BY t1.x;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
RESET citus.enable_repartition_joins;
RESET citus.enable_single_hash_repartition_joins;
-- Confirm that dummy placements work
SELECT count(*) FROM test WHERE false;
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test WHERE false GROUP BY GROUPING SETS (x,y);
count
---------------------------------------------------------------------
(0 rows)
-- Confirm that they work with round-robin task assignment policy
SET citus.task_assignment_policy TO 'round-robin';
SELECT count(*) FROM test WHERE false;
count
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test WHERE false GROUP BY GROUPING SETS (x,y);
count
---------------------------------------------------------------------
(0 rows)
RESET citus.task_assignment_policy;
-- Cleanup
\c - - - :master_port
SET search_path TO single_node;
SET client_min_messages TO WARNING;
DROP SCHEMA single_node CASCADE;
-- Remove the coordinator again
SELECT 1 FROM master_remove_node('localhost', :master_port);
?column?
---------------------------------------------------------------------
1
(1 row)
-- Remove the secondary coordinator again
SELECT 1 FROM master_remove_node('localhost', :follower_master_port);
?column?
---------------------------------------------------------------------
1
(1 row)

@ -142,6 +142,85 @@ SELECT * FROM composite_type_partitioned_table WHERE col = '(7, 8)'::test_compo
6 | (7,8)
(1 row)
CREATE TYPE other_composite_type AS (
i integer,
i2 integer
);
-- Check that casts are correctly done on partition columns
SELECT run_command_on_coordinator_and_workers($cf$
CREATE CAST (other_composite_type AS test_composite_type) WITH INOUT AS IMPLICIT;
$cf$);
run_command_on_coordinator_and_workers
---------------------------------------------------------------------
(1 row)
INSERT INTO composite_type_partitioned_table VALUES (123, '(123, 456)'::other_composite_type);
SELECT * FROM composite_type_partitioned_table WHERE id = 123;
id | col
---------------------------------------------------------------------
123 | (123,456)
(1 row)
EXPLAIN (ANALYZE TRUE, COSTS FALSE, VERBOSE TRUE, TIMING FALSE, SUMMARY FALSE)
INSERT INTO composite_type_partitioned_table VALUES (123, '(123, 456)'::other_composite_type);
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (Citus Adaptive) (actual rows=0 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Query: INSERT INTO public.composite_type_partitioned_table_530003 (id, col) VALUES (123, '(123,456)'::public.test_composite_type)
Node: host=localhost port=xxxxx dbname=regression
-> Insert on public.composite_type_partitioned_table_530003 (actual rows=0 loops=1)
-> Result (actual rows=1 loops=1)
Output: 123, '(123,456)'::test_composite_type
(9 rows)
SELECT run_command_on_coordinator_and_workers($cf$
DROP CAST (other_composite_type as test_composite_type);
$cf$);
run_command_on_coordinator_and_workers
---------------------------------------------------------------------
(1 row)
SELECT run_command_on_coordinator_and_workers($cf$
CREATE FUNCTION to_test_composite_type(arg other_composite_type) RETURNS test_composite_type
AS 'select arg::text::test_composite_type;'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
$cf$);
run_command_on_coordinator_and_workers
---------------------------------------------------------------------
(1 row)
SELECT run_command_on_coordinator_and_workers($cf$
CREATE CAST (other_composite_type AS test_composite_type) WITH FUNCTION to_test_composite_type(other_composite_type) AS IMPLICIT;
$cf$);
run_command_on_coordinator_and_workers
---------------------------------------------------------------------
(1 row)
INSERT INTO composite_type_partitioned_table VALUES (456, '(456, 678)'::other_composite_type);
EXPLAIN (ANALYZE TRUE, COSTS FALSE, VERBOSE TRUE, TIMING FALSE, SUMMARY FALSE)
INSERT INTO composite_type_partitioned_table VALUES (123, '(456, 678)'::other_composite_type);
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (Citus Adaptive) (actual rows=0 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Query: INSERT INTO public.composite_type_partitioned_table_530000 (id, col) VALUES (123, '(456,678)'::public.other_composite_type)
Node: host=localhost port=xxxxx dbname=regression
-> Insert on public.composite_type_partitioned_table_530000 (actual rows=0 loops=1)
-> Result (actual rows=1 loops=1)
Output: 123, '(456,678)'::test_composite_type
(9 rows)
-- create and distribute a table on enum type column
CREATE TYPE bug_status AS ENUM ('new', 'open', 'closed');
CREATE TABLE bugs (

@ -2602,3 +2602,17 @@ Custom Scan (Citus Adaptive) (actual rows=1 loops=1)
Filter: (a = 1)
Rows Removed by Filter: 3
DROP TABLE dist_table_rep1, dist_table_rep2;
CREATE TABLE users_table_2 (user_id int primary key, time timestamp, value_1 int, value_2 int, value_3 float, value_4 bigint);
SELECT create_reference_table('users_table_2');
-- simple test to confirm we can fetch long (>4KB) plans
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF, SUMMARY OFF) SELECT * FROM users_table_2 WHERE value_1::text = '000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000X';
Custom Scan (Citus Adaptive) (actual rows=0 loops=1)
Task Count: 1
Tuple data received from nodes: 0 bytes
Tasks Shown: All
-> Task
Tuple data received from node: 0 bytes
Node: host=localhost port=xxxxx dbname=regression
-> Seq Scan on users_table_2_570026 users_table_2 (actual rows=0 loops=1)
Filter: ((value_1)::text = '00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000X'::text)
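As a hedged aside (not part of the diff), the same >4KB code path can be exercised without pasting the literal by hand, e.g. by building it with repeat() and psql's \gset; a minimal sketch assuming the users_table_2 reference table above:
SELECT repeat('0', 5000) || 'X' AS long_value \gset
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF, SUMMARY OFF)
  SELECT * FROM users_table_2 WHERE value_1::text = :'long_value';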

@ -411,7 +411,7 @@ DROP TABLE prev_objects, extension_diff;
SHOW citus.version;
citus.version
---------------------------------------------------------------------
9.4devel
9.4.4
(1 row)
-- ensure no objects were created outside pg_catalog
@ -676,3 +676,69 @@ CONTEXT: PL/pgSQL function inline_code_block line 6 at RAISE
DROP DATABASE another;
\c - - - :worker_1_port
DROP DATABASE another;
\c - - - :master_port
-- only the regression database should have a maintenance daemon
SELECT count(*) FROM pg_stat_activity WHERE application_name = 'Citus Maintenance Daemon';
count
---------------------------------------------------------------------
1
(1 row)
-- recreate the extension immediately after the maintenance daemon errors out
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE application_name = 'Citus Maintenance Daemon';
pg_cancel_backend
---------------------------------------------------------------------
t
(1 row)
DROP EXTENSION citus;
CREATE EXTENSION citus;
-- wait for maintenance daemon restart
SELECT datname, current_database(),
usename, (SELECT extowner::regrole::text FROM pg_extension WHERE extname = 'citus')
FROM test.maintenance_worker();
datname | current_database | usename | extowner
---------------------------------------------------------------------
regression | regression | postgres | postgres
(1 row)
-- confirm that there is only one maintenance daemon
SELECT count(*) FROM pg_stat_activity WHERE application_name = 'Citus Maintenance Daemon';
count
---------------------------------------------------------------------
1
(1 row)
-- kill the maintenance daemon
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE application_name = 'Citus Maintenance Daemon';
pg_cancel_backend
---------------------------------------------------------------------
t
(1 row)
-- reconnect
\c - - - :master_port
-- run something that goes through the planner hook and therefore kicks off the maintenance daemon
SELECT 1;
?column?
---------------------------------------------------------------------
1
(1 row)
-- wait for maintenance daemon restart
SELECT datname, current_database(),
usename, (SELECT extowner::regrole::text FROM pg_extension WHERE extname = 'citus')
FROM test.maintenance_worker();
datname | current_database | usename | extowner
---------------------------------------------------------------------
regression | regression | postgres | postgres
(1 row)
-- confirm that there is only one maintenance daemon
SELECT count(*) FROM pg_stat_activity WHERE application_name = 'Citus Maintenance Daemon';
count
---------------------------------------------------------------------
1
(1 row)
DROP TABLE version_mismatch_table;

@ -11,7 +11,7 @@ CREATE TABLE local (a int, b int);
-- inserts normally do not work on a standby coordinator
INSERT INTO the_table (a, b, z) VALUES (1, 2, 2);
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is in recovery mode
DETAIL: the database is read-only
-- we can allow DML on a writable standby coordinator
SET citus.writable_standby_coordinator TO on;
INSERT INTO the_table (a, b, z) VALUES (1, 2, 2);

@ -33,6 +33,21 @@ SELECT create_distributed_table('stock','s_w_id');
(1 row)
INSERT INTO stock SELECT c, c, c FROM generate_series(1, 5) as c;
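-- repartition joins should work while still connected to the primary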
SET citus.enable_repartition_joins TO ON;
SELECT count(*) FROM the_table t1 JOIN the_table t2 USING(b);
count
---------------------------------------------------------------------
2
(1 row)
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT count(*) FROM the_table t1 , the_table t2 WHERE t1.a = t2.b;
count
---------------------------------------------------------------------
2
(1 row)
RESET citus.enable_repartition_joins;
-- connect to the follower and check that a simple select query works; the follower
-- is still in the default cluster and will send queries to the primary nodes
\c - - - :follower_master_port
@ -100,6 +115,14 @@ order by s_i_id;
5 | 5
(3 rows)
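-- on the read-only follower the same repartition joins error out: they would need to write intermediate data to the worker nodes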
SET citus.enable_repartition_joins TO ON;
SELECT count(*) FROM the_table t1 JOIN the_table t2 USING(b);
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
SET citus.enable_single_hash_repartition_joins TO ON;
SELECT count(*) FROM the_table t1 , the_table t2 WHERE t1.a = t2.b;
ERROR: writing to worker nodes is not currently allowed
DETAIL: the database is read-only
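Repartition joins shuffle rows by writing intermediate results to the worker nodes, which is why the two joins above succeed on the primary but fail on the read-only follower. A minimal sketch (assumption, not part of the test file) of restoring the defaults so that only read-only-safe plans are attempted:
RESET citus.enable_repartition_joins;
RESET citus.enable_single_hash_repartition_joins;
SELECT count(*) FROM the_table;  -- plain distributed reads still work on the follower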
SELECT
node_name, node_port
FROM


@ -86,15 +86,16 @@ CREATE INDEX IF NOT EXISTS lineitem_orderkey_index on index_test_hash(a);
NOTICE: relation "lineitem_orderkey_index" already exists, skipping
-- Verify that we can create indexes concurrently
CREATE INDEX CONCURRENTLY lineitem_concurrently_index ON lineitem (l_orderkey);
-- Verify that no-name local CREATE INDEX CONCURRENTLY works
CREATE TABLE local_table (id integer, name text);
CREATE INDEX CONCURRENTLY ON local_table(id);
-- Verify that we warn out on CLUSTER command for distributed tables and no parameter
CLUSTER index_test_hash USING index_test_hash_index_a;
WARNING: not propagating CLUSTER command to worker nodes
CLUSTER;
WARNING: not propagating CLUSTER command to worker nodes
-- Verify that no-name local CREATE INDEX CONCURRENTLY works
CREATE TABLE local_table (id integer, name text);
CREATE INDEX CONCURRENTLY local_table_index ON local_table(id);
-- Verify that we don't warn out on CLUSTER command for local tables
CREATE INDEX CONCURRENTLY local_table_index ON local_table(id);
CLUSTER local_table USING local_table_index;
DROP TABLE local_table;
-- Verify that all indexes got created on the master node and one of the workers


@ -716,7 +716,7 @@ DEBUG: CTE sub_cte is going to be inlined via distributed planning
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: generating subplan XXX_1 for CTE sub_cte: SELECT 1
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT user_id, (SELECT sub_cte."?column?" FROM (SELECT intermediate_result."?column?" FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result("?column?" integer)) sub_cte) AS value_1_agg FROM public.raw_events_first
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT user_id, (SELECT sub_cte."?column?" FROM (SELECT intermediate_result."?column?" FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result("?column?" integer)) sub_cte("?column?")) AS value_1_agg FROM public.raw_events_first
DEBUG: Router planner cannot handle multi-shard select queries
ERROR: could not run distributed query with subquery outside the FROM, WHERE and HAVING clauses
HINT: Consider using an equality filter on the distributed table's partition column.
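The updated plan text above attaches an explicit column alias list to the inlined CTE (sub_cte("?column?")), so the auto-generated column name resolves unambiguously in the outer query. A minimal standalone illustration (not from the test files) of such a subquery alias list:
SELECT id, val FROM (SELECT 1, random()) sub(id, val);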


@ -1641,6 +1641,76 @@ DETAIL: distribution column value: 1
---------------------------------------------------------------------
(0 rows)
-- if these queries get routed, they would fail since number1() does not exist
-- on workers. This tests an exceptional case in which some local tables bypass
-- checks.
CREATE OR REPLACE FUNCTION number1(OUT datid int)
RETURNS SETOF int
AS $$
DECLARE
BEGIN
RETURN QUERY SELECT 1;
END;
$$ LANGUAGE plpgsql;
SELECT 1 FROM authors_reference r JOIN (
SELECT s.datid FROM number1() s LEFT JOIN pg_database d ON s.datid = d.oid
) num_db ON (r.id = num_db.datid) LIMIT 1;
DEBUG: found no worker with all shard placements
DEBUG: generating subplan XXX_1 for subquery SELECT datid FROM public.number1() s(datid)
DEBUG: Creating router plan
DEBUG: Plan is router executable
DEBUG: generating subplan XXX_2 for subquery SELECT s.datid FROM ((SELECT intermediate_result.datid FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(datid integer)) s LEFT JOIN pg_database d ON (((s.datid)::oid OPERATOR(pg_catalog.=) d.oid)))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT 1 FROM (public.authors_reference r JOIN (SELECT intermediate_result.datid FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(datid integer)) num_db ON ((r.id OPERATOR(pg_catalog.=) num_db.datid))) LIMIT 1
DEBUG: Creating router plan
DEBUG: Plan is router executable
?column?
---------------------------------------------------------------------
(0 rows)
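The DEBUG output above shows recursive planning pulling the number1() subquery back to the coordinator because the function only exists there. As a hedged aside (not part of the test), if such a function actually had to run on the workers, it could be propagated explicitly, for example:
SELECT create_distributed_function('number1()');  -- propagates the function to the worker nodes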
-- same scenario with a view
CREATE VIEW num_db AS
SELECT s.datid FROM number1() s LEFT JOIN pg_database d ON s.datid = d.oid;
SELECT 1 FROM authors_reference r JOIN num_db ON (r.id = num_db.datid) LIMIT 1;
DEBUG: found no worker with all shard placements
DEBUG: generating subplan XXX_1 for subquery SELECT datid FROM public.number1() s(datid)
DEBUG: Creating router plan
DEBUG: Plan is router executable
DEBUG: generating subplan XXX_2 for subquery SELECT s.datid FROM ((SELECT intermediate_result.datid FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(datid integer)) s LEFT JOIN pg_database d ON (((s.datid)::oid OPERATOR(pg_catalog.=) d.oid)))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT 1 FROM (public.authors_reference r JOIN (SELECT intermediate_result.datid FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(datid integer)) num_db ON ((r.id OPERATOR(pg_catalog.=) num_db.datid))) LIMIT 1
DEBUG: Creating router plan
DEBUG: Plan is router executable
?column?
---------------------------------------------------------------------
(0 rows)
-- with a CTE in a view
WITH cte AS (SELECT * FROM num_db)
SELECT 1 FROM authors_reference r JOIN cte ON (r.id = cte.datid) LIMIT 1;
DEBUG: found no worker with all shard placements
DEBUG: generating subplan XXX_1 for CTE cte: SELECT datid FROM (SELECT s.datid FROM (public.number1() s(datid) LEFT JOIN pg_database d ON (((s.datid)::oid OPERATOR(pg_catalog.=) d.oid)))) num_db
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT 1 FROM (public.authors_reference r JOIN (SELECT intermediate_result.datid FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(datid integer)) cte ON ((r.id OPERATOR(pg_catalog.=) cte.datid))) LIMIT 1
DEBUG: Creating router plan
DEBUG: Plan is router executable
?column?
---------------------------------------------------------------------
(0 rows)
-- hide changes between major versions
RESET client_min_messages;
-- with pg_stat_activity view
WITH pg_stat_activity AS (
SELECT
pg_stat_activity.datid,
pg_stat_activity.application_name,
pg_stat_activity.query
FROM pg_catalog.pg_stat_activity
)
SELECT 1 FROM authors_reference r LEFT JOIN pg_stat_activity ON (r.id = pg_stat_activity.datid) LIMIT 1;
?column?
---------------------------------------------------------------------
(0 rows)
SET client_min_messages TO DEBUG2;
-- CTEs with where false
-- terse because distribution column inference varies between pg11 & pg12
\set VERBOSITY terse
@ -2525,6 +2595,8 @@ DROP FUNCTION author_articles_max_id();
DROP FUNCTION author_articles_id_word_count();
DROP MATERIALIZED VIEW mv_articles_hash_empty;
DROP MATERIALIZED VIEW mv_articles_hash_data;
DROP VIEW num_db;
DROP FUNCTION number1();
DROP TABLE articles_hash;
DROP TABLE articles_single_shard_hash;
DROP TABLE authors_hash;


@ -1197,6 +1197,25 @@ ALTER TABLE existing_schema.non_existent_table SET SCHEMA non_existent_schema;
ERROR: relation "existing_schema.non_existent_table" does not exist
ALTER TABLE existing_schema.table_set_schema SET SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
-- test ALTER TABLE IF EXISTS SET SCHEMA with nonexisting schemas and table
ALTER TABLE IF EXISTS non_existent_schema.table_set_schema SET SCHEMA another_existing_schema;
NOTICE: relation "table_set_schema" does not exist, skipping
ALTER TABLE IF EXISTS non_existent_schema.non_existent_table SET SCHEMA another_existing_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
ALTER TABLE IF EXISTS non_existent_schema.table_set_schema SET SCHEMA another_non_existent_schema;
NOTICE: relation "table_set_schema" does not exist, skipping
ALTER TABLE IF EXISTS non_existent_schema.non_existent_table SET SCHEMA another_non_existent_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
ALTER TABLE IF EXISTS existing_schema.non_existent_table SET SCHEMA another_existing_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
ALTER TABLE IF EXISTS existing_schema.non_existent_table SET SCHEMA non_existent_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
ALTER TABLE IF EXISTS existing_schema.table_set_schema SET SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
ALTER TABLE IF EXISTS non_existent_table SET SCHEMA another_existing_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
ALTER TABLE IF EXISTS non_existent_table SET SCHEMA non_existent_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
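The IF EXISTS variants above turn a missing table into a NOTICE instead of an ERROR, while a missing target schema still errors. A minimal sketch (hypothetical schema and table names) of the same pattern in an idempotent migration script:
CREATE SCHEMA IF NOT EXISTS archive;
ALTER TABLE IF EXISTS public.old_events SET SCHEMA archive;  -- NOTICE and no-op if the table is gone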
DROP SCHEMA existing_schema, another_existing_schema CASCADE;
NOTICE: drop cascades to table existing_schema.table_set_schema
-- test ALTER TABLE SET SCHEMA with interesting names


@ -413,7 +413,7 @@ DEBUG: generating subplan XXX_1 for subquery SELECT user_id, (random() OPERATOR
DEBUG: Creating router plan
DEBUG: Plan is router executable
DEBUG: generating subplan XXX_2 for subquery SELECT sub1.id, (random() OPERATOR(pg_catalog.*) (0)::double precision) FROM (SELECT users_ref_test_table.id FROM public.users_ref_test_table) sub1 UNION SELECT intermediate_result.user_id, intermediate_result."?column?" FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(user_id integer, "?column?" double precision)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT id, "?column?" FROM (SELECT intermediate_result.id, intermediate_result."?column?" FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(id integer, "?column?" double precision)) sub ORDER BY id DESC
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT id, "?column?" FROM (SELECT intermediate_result.id, intermediate_result."?column?" FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(id integer, "?column?" double precision)) sub(id, "?column?") ORDER BY id DESC
DEBUG: Creating router plan
DEBUG: Plan is router executable
id | ?column?

Some files were not shown because too many files have changed in this diff.