Commit normalized test output for better diffs (#3336)

We have a `normalize.sed` script that normalizes both the expected file and the
actual output file before diffing them. This makes sure that we don't get random
test failures and don't have to update our test output all the time. This PR
takes that one step further and actually commits the normalized files. That way,
whenever we DO have to update our test output files, only relevant changes will
be visible in the diff.
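
For illustration, a minimal sketch of that workflow (assuming a checkout of this
branch; `ci/normalize_expected.sh` is the script added below):

```bash
# Re-apply normalize.sed to every committed expected file, the same way CI does.
ci/normalize_expected.sh
# Because the expected files are committed in normalized form, this prints
# nothing on a clean tree; any diff that does show up is a real change.
git diff src/test/regress/expected/
```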

The other change this PR makes is that it strips trailing whitespace during
normalization. This works well with our editorconfig settings.

As an added benefit of committing these files, it's also much more visible what
new normalization rules will result in. The original changes that were proposed
here were a bit too broad and changed output that was not intended to be
changed: https://github.com/citusdata/citus/pull/3161#discussion_r360928922
Because these changes are now in the diff of the commit, they are much easier to
spot.

Finally, the Plan number normalization rules were also added to this PR, because
they are useful even without the CTE inlining PR.
pull/3358/head
Jelte Fennema 2020-01-06 09:56:31 +01:00 committed by GitHub
commit de75243000
376 changed files with 31895 additions and 31862 deletions

View File

@ -34,6 +34,12 @@ jobs:
- run:
name: 'Check if changed'
command: git diff --cached --exit-code
- run:
name: 'Normalize test output'
command: ci/normalize_expected.sh
- run:
name: 'Check if changed'
command: git diff --cached --exit-code
check-sql-snapshots:
docker:
- image: 'citus/extbuilder:latest'

ci/normalize_expected.sh (new executable file, +7 lines)
View File

@ -0,0 +1,7 @@
#!/bin/sh
set -eu
for f in $(git ls-tree -r HEAD --name-only src/test/regress/expected/*.out); do
sed -Ef src/test/regress/bin/normalize.sed < "$f" > "$f.modified"
mv "$f.modified" "$f"
done

View File

@ -22,3 +22,7 @@
# python
*.pyc
# output from diff normalization that shouldn't be committed
*.unmodified
*.modified

View File

@ -0,0 +1,99 @@
# How our testing works
We use the test tooling of postgres to run our tests. This tooling is simple
but effective. The basics: it runs a series of `.sql` scripts, captures their
output, and stores that in `results/$sqlfilename.out`. It then compares the
actual output to the expected output with a simple `diff` command:
```bash
diff results/$sqlfilename.out expected/$sqlfilename.out
```
## Schedules
Which SQL scripts to run is defined in a schedule file, e.g. `multi_schedule`
or `multi_mx_schedule` (a sketch of the format follows).
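These are plain `pg_regress` schedule files: each `test:` line names one or more
tests, and tests that share a line run in parallel. The test names below are
illustrative, not an exact excerpt:
```bash
# Show the first few lines of a schedule file (names here are examples):
head -n 2 src/test/regress/multi_schedule
# test: multi_cluster_management
# test: multi_test_helpers multi_utility_warnings
```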
## Makefile
In our `Makefile` we have rules to run the different types of test schedules.
You can run them from the root of the repository like so:
```bash
# e.g. the multi_schedule
make install -j9 && make -C src/test/regress/ check-multi
```
Take a look at the makefile for a list of all the testing targets.
### Running a specific test
Often you want to run a specific test and don't want to run everything. You can
use one of the following commands to do so:
```bash
# If your tests needs almost no setup you can use check-minimal
make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_utility_warnings'
# Often tests need some testing data; if you get missing table errors using
# check-minimal you should try check-base
make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='with_prepare'
# Sometimes this is still not enough and some other test needs to be run before
# the test you want to run. You can do so by adding it to EXTRA_TESTS too.
make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='add_coordinator coordinator_shouldhaveshards'
```
## Normalization
The output of tests is sadly not completely predictable. Still, we want to
compare the output of different runs and only error when the important things
are different. To do this we don't use the regular system `diff` to compare
files. Instead we use `src/test/regress/bin/diff`, which does the following
things:
1. Change the `$sqlfilename.out` file by running it through `sed` using the
   `src/test/regress/bin/normalize.sed` file. This replaces values that keep
   changing across runs, e.g. port numbers or transaction numbers, with an
   `XXX` string (see the example after this list).
2. Back up the original output to `$sqlfilename.out.unmodified` in case it's
   needed for debugging.
3. Compare the normalized `results` and `expected` files with the system `diff`
   command.
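As a sketch of what such a normalization rule does (the real rules live in
`normalize.sed`; this one mirrors the port normalization visible in the diffs
below):
```bash
# Port numbers differ between runs, so a rule rewrites them to a placeholder:
echo 'CONTEXT: while executing command on localhost:57637' \
  | sed -E 's/localhost:[0-9]+/localhost:xxxxx/'
# prints: CONTEXT: while executing command on localhost:xxxxx
```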
## Updating the expected test output
Sometimes you add a test to an existing file, or test output changes in a way
that's not bad (possibly even good, e.g. when support for new queries is added).
In those cases you want to update the expected test output.
The way to do this is simple: run the test and copy the new `.out` file from
the `results` directory to the `expected` directory, e.g.:
```bash
make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_utility_warnings'
cp src/test/regress/{results,expected}/multi_utility_warnings.out
```
## Adding a new test file
Adding a new test file is quite simple (see the sketch after this list):
1. Write the SQL file in the `sql` directory
2. Add it to a schedule file, to make sure it's run in CI
3. Run the test
4. Check that the output is as expected
5. Copy the `.out` file from `results` to `expected`
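A sketch of those steps for a hypothetical test called `my_new_test`:
```bash
# 1. Write the SQL file
$EDITOR src/test/regress/sql/my_new_test.sql
# 2. Add it to a schedule file so it runs in CI
echo 'test: my_new_test' >> src/test/regress/multi_schedule
# 3. + 4. Run it and check that results/my_new_test.out looks as expected
make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='my_new_test'
# 5. Accept the output
cp src/test/regress/{results,expected}/my_new_test.out
```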
## Isolation testing
See [`src/test/regress/spec/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/spec/README.md)
## Upgrade testing
See [`src/test/regress/upgrade/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/upgrade/README.md)
## Failure testing
See [`src/test/regress/mitmscripts/README.md`](https://github.com/citusdata/citus/blob/master/src/test/regress/mitmscripts/README.md)
## Perl test setup script
To automatically set up a citus cluster in tests we use our
`src/test/regress/pg_regress_multi.pl` script. This sets up a citus cluster and
then starts the standard postgres test tooling. You almost never have to change
this file.

View File

@ -9,6 +9,7 @@
#
# Note that src/test/regress/Makefile adds this directory to $PATH so
# pg_regress uses this diff tool instead of the system diff tool.
set -eu -o pipefail
file1="${@:(-2):1}"
file2="${@:(-1):1}"
@ -29,13 +30,17 @@ then
DIFF=/usr/bin/diff
fi
if test -z "$VANILLATEST"
if test -z "${VANILLATEST:-}"
then
sed -Ef $BASEDIR/normalize.sed < $file1 > $file1.modified
sed -Ef $BASEDIR/normalize.sed < $file2 > $file2.modified
$DIFF -w $args $file1.modified $file2.modified
touch "$file1" # when adding a new test the expected file does not exist
sed -Ef $BASEDIR/normalize.sed < $file1 > "$file1.modified"
mv "$file1" "$file1.unmodified"
mv "$file1.modified" "$file1"
sed -Ef $BASEDIR/normalize.sed < "$file2" > "$file2.modified"
mv "$file2" "$file2.unmodified"
mv "$file2.modified" "$file2"
$DIFF -w $args $file1 $file2
exitcode=$?
rm -f $file1.modified $file2.modified
exit $exitcode
else
exec $DIFF -w $args $file1 $file2

View File

@ -15,8 +15,7 @@ s/assigned task [0-9]+ to node/assigned task to node/
s/node group [12] (but|does)/node group \1/
# Differing names can have differing table column widths
s/(-+\|)+-+/---/g
s/.*-------------.*/---------------------------------------------------------------------/g
s/^-[+-]{2,}$/---------------------------------------------------------------------/g
# In foreign_key_to_reference_table, normalize shard table names, etc in
# the generated plan
@ -45,9 +44,6 @@ s/name_len_12345678901234567890123456789012345678_fcd8ab6f_[0-9]+/name_len_12345
# normalize pkey constraints in multi_insert_select.sql
s/"(raw_events_second_user_id_value_1_key_|agg_events_user_id_value_1_agg_key_)[0-9]+"/"\1xxxxxxx"/g
# normalize failed task ids
s/ERROR: failed to execute task [0-9]+/ERROR: failed to execute task X/g
# ignore could not consume warnings
/WARNING: could not consume data from worker node/d
@ -65,6 +61,9 @@ s/"(ref_table_[0-9]_|ref_table_[0-9]_value_fkey_)[0-9]+"/"\1xxxxxxx"/g
/^LINE [0-9]+:.*$/d
/^ *\^$/d
# Remove trailing whitespace
s/ *$//g
# pg12 changes
s/Partitioned table "/Table "/g
s/\) TABLESPACE pg_default$/\)/g
@ -76,3 +75,10 @@ s/_id_other_column_ref_fkey/_id_fkey/g
# intermediate_results
s/(ERROR.*)pgsql_job_cache\/([0-9]+_[0-9]+_[0-9]+)\/(.*).data/\1pgsql_job_cache\/xx_x_xxx\/\3.data/g
# Plan numbers are not very stable, so we normalize those
# subplan numbers are quite stable so we keep those
s/DEBUG: Plan [0-9]+/DEBUG: Plan XXX/g
s/generating subplan [0-9]+\_/generating subplan XXX\_/g
s/read_intermediate_result\('[0-9]+_/read_intermediate_result('XXX_/g
s/Subplan [0-9]+\_/Subplan XXX\_/g

View File

@ -6,7 +6,7 @@ SET citus.shard_replication_factor TO 1;
SET citus.next_shard_id TO 801009000;
SELECT create_distributed_table('test','x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -19,7 +19,7 @@ SET citus.task_executor_type TO 'adaptive';
BEGIN;
SELECT count(*) FROM test a JOIN (SELECT x, pg_sleep(0.1) FROM test) b USING (x);
count
-------
---------------------------------------------------------------------
2
(1 row)
@ -28,7 +28,7 @@ SELECT sum(result::bigint) FROM run_command_on_workers($$
WHERE pid <> pg_backend_pid() AND query LIKE '%8010090%'
$$);
sum
-----
---------------------------------------------------------------------
2
(1 row)
@ -38,7 +38,7 @@ SET citus.executor_slow_start_interval TO '10ms';
BEGIN;
SELECT count(*) FROM test a JOIN (SELECT x, pg_sleep(0.1) FROM test) b USING (x);
count
-------
---------------------------------------------------------------------
2
(1 row)
@ -47,7 +47,7 @@ SELECT sum(result::bigint) FROM run_command_on_workers($$
WHERE pid <> pg_backend_pid() AND query LIKE '%8010090%'
$$);
sum
-----
---------------------------------------------------------------------
4
(1 row)

View File

@ -6,7 +6,7 @@ SET citus.enable_repartition_joins TO true;
CREATE TABLE ab(a int, b int);
SELECT create_distributed_table('ab', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -14,39 +14,39 @@ INSERT INTO ab SELECT *,* FROM generate_series(1,10);
SELECT COUNT(*) FROM ab k, ab l
WHERE k.a = l.b;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT COUNT(*) FROM ab k, ab l, ab m, ab t
WHERE k.a = l.b AND k.a = m.b AND t.b = l.a;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT count(*) FROM (SELECT k.a FROM ab k, ab l WHERE k.a = l.b) first, (SELECT * FROM ab) second WHERE first.a = second.b;
count
-------
---------------------------------------------------------------------
10
(1 row)
BEGIN;
SELECT count(*) FROM (SELECT k.a FROM ab k, ab l WHERE k.a = l.b) first, (SELECT * FROM ab) second WHERE first.a = second.b;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT count(*) FROM (SELECT k.a FROM ab k, ab l WHERE k.a = l.b) first, (SELECT * FROM ab) second WHERE first.a = second.b;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT count(*) FROM (SELECT k.a FROM ab k, ab l WHERE k.a = l.b) first, (SELECT * FROM ab) second WHERE first.a = second.b;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -63,19 +63,19 @@ CREATE TABLE single_hash_repartition_second (id int, sum int, avg float);
CREATE TABLE ref_table (id int, sum int, avg float);
SELECT create_distributed_table('single_hash_repartition_first', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('single_hash_repartition_second', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -87,7 +87,7 @@ FROM
WHERE
r1.id = t1.id AND t2.sum = t1.id;
QUERY PLAN
----------------------------------------------------------------------
---------------------------------------------------------------------
Aggregate (cost=0.00..0.00 rows=0 width=0)
-> Custom Scan (Citus Adaptive) (cost=0.00..0.00 rows=0 width=0)
Task Count: 4
@ -105,7 +105,7 @@ FROM
WHERE
t1.id = t2.id AND t1.sum = t3.id;
QUERY PLAN
----------------------------------------------------------------------
---------------------------------------------------------------------
Aggregate (cost=0.00..0.00 rows=0 width=0)
-> Custom Scan (Citus Adaptive) (cost=0.00..0.00 rows=0 width=0)
Task Count: 4

View File

@ -5,7 +5,7 @@ SELECT master_add_node('localhost', :master_port, groupid => 0) AS master_nodeid
-- adding the same node again should return the existing nodeid
SELECT master_add_node('localhost', :master_port, groupid => 0) = :master_nodeid;
?column?
----------
---------------------------------------------------------------------
t
(1 row)
@ -14,9 +14,9 @@ SELECT master_add_node('localhost', 12345, groupid => 0) = :master_nodeid;
ERROR: group 0 already has a primary node
-- start_metadata_sync_to_node() for coordinator should raise a notice
SELECT start_metadata_sync_to_node('localhost', :master_port);
NOTICE: localhost:57636 is the coordinator and already contains metadata, skipping syncing the metadata
NOTICE: localhost:xxxxx is the coordinator and already contains metadata, skipping syncing the metadata
start_metadata_sync_to_node
-----------------------------
---------------------------------------------------------------------
(1 row)

View File

@ -40,27 +40,27 @@ create aggregate sum2_strict (int) (
);
select create_distributed_function('sum2(int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
select create_distributed_function('sum2_strict(int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
create table aggdata (id int, key int, val int, valf float8);
select create_distributed_table('aggdata', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
insert into aggdata (id, key, val, valf) values (1, 1, 2, 11.2), (2, 1, NULL, 2.1), (3, 2, 2, 3.22), (4, 2, 3, 4.23), (5, 2, 5, 5.25), (6, 3, 4, 63.4), (7, 5, NULL, 75), (8, 6, NULL, NULL), (9, 6, NULL, 96), (10, 7, 8, 1078), (11, 9, 0, 1.19);
select key, sum2(val), sum2_strict(val), stddev(valf) from aggdata group by key order by key;
key | sum2 | sum2_strict | stddev
-----+------+-------------+------------------
---------------------------------------------------------------------
1 | | 4 | 6.43467170879758
2 | 20 | 20 | 1.01500410508201
3 | 8 | 8 |
@ -73,7 +73,7 @@ select key, sum2(val), sum2_strict(val), stddev(valf) from aggdata group by key
-- FILTER supported
select key, sum2(val) filter (where valf < 5), sum2_strict(val) filter (where valf < 5) from aggdata group by key order by key;
key | sum2 | sum2_strict
-----+------+-------------
---------------------------------------------------------------------
1 | |
2 | 10 | 10
3 | 0 |
@ -89,7 +89,7 @@ ERROR: cannot compute aggregate (distinct)
DETAIL: table partitioning is unsuitable for aggregate (distinct)
select id, sum2(distinct val), sum2_strict(distinct val) from aggdata group by id order by id;
id | sum2 | sum2_strict
----+------+-------------
---------------------------------------------------------------------
1 | 4 | 4
2 | |
3 | 4 | 4
@ -109,7 +109,7 @@ ERROR: unsupported aggregate function sum2
-- Test handling a lack of intermediate results
select sum2(val), sum2_strict(val) from aggdata where valf = 0;
sum2 | sum2_strict
------+-------------
---------------------------------------------------------------------
0 |
(1 row)
@ -137,13 +137,13 @@ CREATE AGGREGATE last (
);
SELECT create_distributed_function('first(anyelement)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_function('last(anyelement)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -154,7 +154,7 @@ ERROR: unsupported aggregate function first
SELECT id, first(val ORDER BY key), last(val ORDER BY key)
FROM aggdata GROUP BY id ORDER BY id;
id | first | last
----+-------+------
---------------------------------------------------------------------
1 | 2 | 2
2 | |
3 | 2 | 2
@ -187,16 +187,16 @@ create aggregate sumstring(text) (
);
select sumstring(valf::text) from aggdata where valf is not null;
ERROR: function "aggregate_support.sumstring(text)" does not exist
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
select create_distributed_function('sumstring(text)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
select sumstring(valf::text) from aggdata where valf is not null;
sumstring
-----------
---------------------------------------------------------------------
1339.59
(1 row)
@ -214,13 +214,13 @@ create aggregate array_collect_sort(el int) (
);
select create_distributed_function('array_collect_sort(int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
select array_collect_sort(val) from aggdata;
array_collect_sort
-------------------------------------
---------------------------------------------------------------------
{0,2,2,3,4,5,8,NULL,NULL,NULL,NULL}
(1 row)
@ -230,7 +230,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
select run_command_on_workers($$create user notsuper$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -242,7 +242,7 @@ grant all on schema aggregate_support to notsuper;
grant all on all tables in schema aggregate_support to notsuper;
$$);
run_command_on_workers
---------------------------
---------------------------------------------------------------------
(localhost,57637,t,GRANT)
(localhost,57638,t,GRANT)
(2 rows)
@ -250,7 +250,7 @@ $$);
set role notsuper;
select array_collect_sort(val) from aggdata;
array_collect_sort
-------------------------------------
---------------------------------------------------------------------
{0,2,2,3,4,5,8,NULL,NULL,NULL,NULL}
(1 row)

View File

@ -5,7 +5,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE ROLE alter_role_1 WITH LOGIN;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -17,13 +17,13 @@ ERROR: conflicting or redundant options
ALTER ROLE alter_role_1 WITH SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN REPLICATION BYPASSRLS CONNECTION LIMIT 66 VALID UNTIL '2032-05-05';
SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1';
row
---------------------------------------
---------------------------------------------------------------------
(alter_role_1,t,t,t,t,t,t,t,66,,2032)
(1 row)
SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
-------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(alter_role_1,t,t,t,t,t,t,t,66,,2032)")
(localhost,57638,t,"(alter_role_1,t,t,t,t,t,t,t,66,,2032)")
(2 rows)
@ -32,13 +32,13 @@ SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcr
ALTER ROLE alter_role_1 WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOLOGIN NOREPLICATION NOBYPASSRLS CONNECTION LIMIT 0 VALID UNTIL '2052-05-05';
SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1';
row
--------------------------------------
---------------------------------------------------------------------
(alter_role_1,f,f,f,f,f,f,f,0,,2052)
(1 row)
SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(alter_role_1,f,f,f,f,f,f,f,0,,2052)")
(localhost,57638,t,"(alter_role_1,f,f,f,f,f,f,f,0,,2052)")
(2 rows)
@ -52,13 +52,13 @@ ERROR: role "alter_role_2" does not exist
ALTER ROLE CURRENT_USER WITH CONNECTION LIMIT 123;
SELECT rolconnlimit FROM pg_authid WHERE rolname = CURRENT_USER;
rolconnlimit
--------------
---------------------------------------------------------------------
123
(1 row)
SELECT run_command_on_workers($$SELECT rolconnlimit FROM pg_authid WHERE rolname = CURRENT_USER;$$);
run_command_on_workers
-------------------------
---------------------------------------------------------------------
(localhost,57637,t,123)
(localhost,57638,t,123)
(2 rows)
@ -67,13 +67,13 @@ SELECT run_command_on_workers($$SELECT rolconnlimit FROM pg_authid WHERE rolname
ALTER ROLE SESSION_USER WITH CONNECTION LIMIT 124;
SELECT rolconnlimit FROM pg_authid WHERE rolname = SESSION_USER;
rolconnlimit
--------------
---------------------------------------------------------------------
124
(1 row)
SELECT run_command_on_workers($$SELECT rolconnlimit FROM pg_authid WHERE rolname = SESSION_USER;$$);
run_command_on_workers
-------------------------
---------------------------------------------------------------------
(localhost,57637,t,124)
(localhost,57638,t,124)
(2 rows)
@ -82,13 +82,13 @@ SELECT run_command_on_workers($$SELECT rolconnlimit FROM pg_authid WHERE rolname
ALTER ROLE alter_role_1 WITH PASSWORD NULL;
SELECT rolpassword is NULL FROM pg_authid WHERE rolname = 'alter_role_1';
?column?
----------
---------------------------------------------------------------------
t
(1 row)
SELECT run_command_on_workers($$SELECT rolpassword is NULL FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,t)
(localhost,57638,t,t)
(2 rows)
@ -96,13 +96,13 @@ SELECT run_command_on_workers($$SELECT rolpassword is NULL FROM pg_authid WHERE
ALTER ROLE alter_role_1 WITH PASSWORD 'test1';
SELECT rolpassword FROM pg_authid WHERE rolname = 'alter_role_1';
rolpassword
-------------------------------------
---------------------------------------------------------------------
md52f9cc8d65e37edcc45c4a489bdfc699d
(1 row)
SELECT run_command_on_workers($$SELECT rolpassword FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
---------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,md52f9cc8d65e37edcc45c4a489bdfc699d)
(localhost,57638,t,md52f9cc8d65e37edcc45c4a489bdfc699d)
(2 rows)
@ -110,13 +110,13 @@ SELECT run_command_on_workers($$SELECT rolpassword FROM pg_authid WHERE rolname
ALTER ROLE alter_role_1 WITH ENCRYPTED PASSWORD 'test2';
SELECT rolpassword FROM pg_authid WHERE rolname = 'alter_role_1';
rolpassword
-------------------------------------
---------------------------------------------------------------------
md5e17f7818c5ec023fa87bdb97fd3e842e
(1 row)
SELECT run_command_on_workers($$SELECT rolpassword FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
---------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,md5e17f7818c5ec023fa87bdb97fd3e842e)
(localhost,57638,t,md5e17f7818c5ec023fa87bdb97fd3e842e)
(2 rows)
@ -124,13 +124,13 @@ SELECT run_command_on_workers($$SELECT rolpassword FROM pg_authid WHERE rolname
ALTER ROLE alter_role_1 WITH ENCRYPTED PASSWORD 'md59cce240038b7b335c6aa9674a6f13e72';
SELECT rolpassword FROM pg_authid WHERE rolname = 'alter_role_1';
rolpassword
-------------------------------------
---------------------------------------------------------------------
md59cce240038b7b335c6aa9674a6f13e72
(1 row)
SELECT run_command_on_workers($$SELECT rolpassword FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
---------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,md59cce240038b7b335c6aa9674a6f13e72)
(localhost,57638,t,md59cce240038b7b335c6aa9674a6f13e72)
(2 rows)
@ -141,7 +141,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE ROLE "alter_role'1" WITH LOGIN;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -149,13 +149,13 @@ SELECT run_command_on_workers($$CREATE ROLE "alter_role'1" WITH LOGIN;$$);
ALTER ROLE "alter_role'1" CREATEROLE;
SELECT rolcreaterole FROM pg_authid WHERE rolname = 'alter_role''1';
rolcreaterole
---------------
---------------------------------------------------------------------
t
(1 row)
SELECT run_command_on_workers($$SELECT rolcreaterole FROM pg_authid WHERE rolname = 'alter_role''1'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,t)
(localhost,57638,t,t)
(2 rows)
@ -165,7 +165,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE ROLE "alter_role""1" WITH LOGIN;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -173,13 +173,13 @@ SELECT run_command_on_workers($$CREATE ROLE "alter_role""1" WITH LOGIN;$$);
ALTER ROLE "alter_role""1" CREATEROLE;
SELECT rolcreaterole FROM pg_authid WHERE rolname = 'alter_role"1';
rolcreaterole
---------------
---------------------------------------------------------------------
t
(1 row)
SELECT run_command_on_workers($$SELECT rolcreaterole FROM pg_authid WHERE rolname = 'alter_role"1'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,t)
(localhost,57638,t,t)
(2 rows)
@ -188,51 +188,51 @@ SELECT run_command_on_workers($$SELECT rolcreaterole FROM pg_authid WHERE rolnam
ALTER ROLE alter_role_1 WITH SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN REPLICATION BYPASSRLS CONNECTION LIMIT 66 VALID UNTIL '2032-05-05' PASSWORD 'test3';
SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1';
row
--------------------------------------------------------------------------
---------------------------------------------------------------------
(alter_role_1,t,t,t,t,t,t,t,66,md5ead5c53df946838b1291bba7757f41a7,2032)
(1 row)
SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(alter_role_1,t,t,t,t,t,t,t,66,md5ead5c53df946838b1291bba7757f41a7,2032)")
(localhost,57638,t,"(alter_role_1,t,t,t,t,t,t,t,66,md5ead5c53df946838b1291bba7757f41a7,2032)")
(2 rows)
SELECT master_remove_node('localhost', :worker_1_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
ALTER ROLE alter_role_1 WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOLOGIN NOREPLICATION NOBYPASSRLS CONNECTION LIMIT 0 VALID UNTIL '2052-05-05' PASSWORD 'test4';
SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1';
row
-------------------------------------------------------------------------
---------------------------------------------------------------------
(alter_role_1,f,f,f,f,f,f,f,0,md5be308f25c7b1a2d50c85cf7e6f074df9,2052)
(1 row)
SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
-----------------------------------------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57638,t,"(alter_role_1,f,f,f,f,f,f,f,0,md5be308f25c7b1a2d50c85cf7e6f074df9,2052)")
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_1_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1';
row
-------------------------------------------------------------------------
---------------------------------------------------------------------
(alter_role_1,f,f,f,f,f,f,f,0,md5be308f25c7b1a2d50c85cf7e6f074df9,2052)
(1 row)
SELECT run_command_on_workers($$SELECT row(rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication, rolbypassrls, rolconnlimit, rolpassword, EXTRACT (year FROM rolvaliduntil)) FROM pg_authid WHERE rolname = 'alter_role_1'$$);
run_command_on_workers
-----------------------------------------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(alter_role_1,f,f,f,f,f,f,f,0,md5be308f25c7b1a2d50c85cf7e6f074df9,2052)")
(localhost,57638,t,"(alter_role_1,f,f,f,f,f,f,f,0,md5be308f25c7b1a2d50c85cf7e6f074df9,2052)")
(2 rows)

View File

@ -3,13 +3,13 @@
--
SELECT start_metadata_sync_to_node('localhost', :worker_1_port);
start_metadata_sync_to_node
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT start_metadata_sync_to_node('localhost', :worker_2_port);
start_metadata_sync_to_node
-----------------------------
---------------------------------------------------------------------
(1 row)

View File

@ -4,7 +4,7 @@ SET search_path TO bool_agg;
CREATE TABLE bool_test (id int, val int, flag bool, kind int);
SELECT create_distributed_table('bool_agg.bool_test','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -12,13 +12,13 @@ INSERT INTO bool_test VALUES (1, 1, true, 99), (2, 2, false, 99), (2, 3, true, 8
-- mix of true and false
SELECT bool_and(flag), bool_or(flag), every(flag) FROM bool_test;
bool_and | bool_or | every
----------+---------+-------
---------------------------------------------------------------------
f | t | f
(1 row)
SELECT kind, bool_and(flag), bool_or(flag), every(flag) FROM bool_test GROUP BY kind ORDER BY 2;
kind | bool_and | bool_or | every
------+----------+---------+-------
---------------------------------------------------------------------
99 | f | t | f
88 | t | t | t
(2 rows)
@ -26,13 +26,13 @@ SELECT kind, bool_and(flag), bool_or(flag), every(flag) FROM bool_test GROUP BY
-- expressions in aggregate
SELECT bool_or(val > 2 OR id < 2), bool_and(val < 3) FROM bool_test;
bool_or | bool_and
---------+----------
---------------------------------------------------------------------
t | f
(1 row)
SELECT kind, bool_or(val > 2 OR id < 2), bool_and(val < 3) FROM bool_test GROUP BY kind ORDER BY 3;
kind | bool_or | bool_and
------+---------+----------
---------------------------------------------------------------------
88 | t | f
99 | t | t
(2 rows)
@ -40,13 +40,13 @@ SELECT kind, bool_or(val > 2 OR id < 2), bool_and(val < 3) FROM bool_test GROUP
-- 1 & 3, 1 | 3
SELECT bit_and(val), bit_or(val) FROM bool_test WHERE flag;
bit_and | bit_or
---------+--------
---------------------------------------------------------------------
1 | 3
(1 row)
SELECT flag, bit_and(val), bit_or(val) FROM bool_test GROUP BY flag ORDER BY flag;
flag | bit_and | bit_or
------+---------+--------
---------------------------------------------------------------------
f | 2 | 2
t | 1 | 3
(2 rows)

View File

@ -8,7 +8,7 @@ CREATE TABLE stock (
);
SELECT create_distributed_table('stock','s_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -20,7 +20,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
QUERY PLAN
-----------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Sort
Sort Key: s_i_id
InitPlan 1 (returns $0)
@ -29,28 +29,28 @@ order by s_i_id;
Group Key: s_i_id
Filter: ((pg_catalog.sum(worker_column_3))::bigint > $0)
-> Custom Scan (Citus Adaptive)
-> Distributed Subplan 1_1
-> Distributed Subplan XXX_1
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
-> Distributed Subplan 1_2
-> Distributed Subplan XXX_2
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: stock.s_i_id
InitPlan 1 (returns $0)
@ -66,7 +66,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
QUERY PLAN
-----------------------------------------------------------------------------------------
---------------------------------------------------------------------
Sort
Sort Key: s_i_id
InitPlan 1 (returns $0)
@ -75,19 +75,19 @@ order by s_i_id;
Group Key: s_i_id
Filter: ((pg_catalog.sum(worker_column_3))::bigint > $0)
-> Custom Scan (Citus Adaptive)
-> Distributed Subplan 4_1
-> Distributed Subplan XXX_1
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: stock.s_i_id
-> Seq Scan on stock_1640000 stock
@ -99,26 +99,26 @@ from stock
group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock);
QUERY PLAN
-----------------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate
Group Key: s_i_id
Filter: ((pg_catalog.sum(worker_column_3))::bigint > $0)
InitPlan 1 (returns $0)
-> Function Scan on read_intermediate_result intermediate_result
-> Custom Scan (Citus Adaptive)
-> Distributed Subplan 6_1
-> Distributed Subplan XXX_1
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: stock.s_i_id
-> Seq Scan on stock_1640000 stock
@ -130,7 +130,7 @@ group by s_i_id
having (select true)
order by s_i_id;
QUERY PLAN
-------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Sort (cost=0.00..0.00 rows=0 width=0)
Sort Key: remote_scan.s_i_id
InitPlan 1 (returns $0)
@ -142,7 +142,7 @@ order by s_i_id;
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate (cost=40.60..42.60 rows=200 width=12)
Group Key: s.s_i_id
-> Seq Scan on stock_1640000 s (cost=0.00..30.40 rows=2040 width=8)
@ -153,7 +153,7 @@ from stock s
group by s_i_id
having (select true);
QUERY PLAN
-------------------------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate (cost=0.00..0.00 rows=0 width=0)
Group Key: remote_scan.s_i_id
Filter: $0
@ -163,7 +163,7 @@ having (select true);
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate (cost=40.60..42.60 rows=200 width=12)
Group Key: s.s_i_id
-> Seq Scan on stock_1640000 s (cost=0.00..30.40 rows=2040 width=8)
@ -176,7 +176,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
(0 rows)
INSERT INTO stock SELECT c, c, c FROM generate_series(1, 5) as c;
@ -187,7 +187,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
3 | 3
4 | 4
5 | 5
@ -199,7 +199,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
3 | 3
4 | 4
5 | 5
@ -212,7 +212,7 @@ group by s_i_id
having (select true)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -227,7 +227,7 @@ group by s_i_id
having (select false)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
(0 rows)
select s_i_id, sum(s_order_cnt) as ordercount
@ -236,7 +236,7 @@ group by s_i_id
having (select true)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -250,7 +250,7 @@ group by s_i_id
having (select false)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
(0 rows)
select s_i_id, sum(s_order_cnt) as ordercount
@ -259,7 +259,7 @@ group by s_i_id
having (select true)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -310,7 +310,7 @@ insert into stock VALUES
SELECT create_distributed_table('stock','s_w_id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -328,7 +328,7 @@ having sum(s_order_cnt) >
and n_name = 'GERMANY')
order by ordercount desc;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
33 | 1
1 | 1
(2 rows)
@ -349,7 +349,7 @@ having sum(s_order_cnt) >
and n_name = 'GERMANY')
order by ordercount desc;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 100001
(1 row)

View File

@ -11,7 +11,7 @@ CREATE TABLE stock (
);
SELECT create_distributed_table('stock','s_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -25,7 +25,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
QUERY PLAN
-----------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Sort
Sort Key: s_i_id
InitPlan 1 (returns $0)
@ -34,28 +34,28 @@ order by s_i_id;
Group Key: s_i_id
Filter: ((pg_catalog.sum(worker_column_3))::bigint > $0)
-> Custom Scan (Citus Adaptive)
-> Distributed Subplan 1_1
-> Distributed Subplan XXX_1
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
-> Distributed Subplan 1_2
-> Distributed Subplan XXX_2
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: stock.s_i_id
InitPlan 1 (returns $0)
@ -71,7 +71,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
QUERY PLAN
-----------------------------------------------------------------------------------------
---------------------------------------------------------------------
Sort
Sort Key: s_i_id
InitPlan 1 (returns $0)
@ -80,19 +80,19 @@ order by s_i_id;
Group Key: s_i_id
Filter: ((pg_catalog.sum(worker_column_3))::bigint > $0)
-> Custom Scan (Citus Adaptive)
-> Distributed Subplan 4_1
-> Distributed Subplan XXX_1
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: stock.s_i_id
-> Seq Scan on stock_1640000 stock
@ -104,26 +104,26 @@ from stock
group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock);
QUERY PLAN
-----------------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate
Group Key: s_i_id
Filter: ((pg_catalog.sum(worker_column_3))::bigint > $0)
InitPlan 1 (returns $0)
-> Function Scan on read_intermediate_result intermediate_result
-> Custom Scan (Citus Adaptive)
-> Distributed Subplan 6_1
-> Distributed Subplan XXX_1
-> Aggregate
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Aggregate
-> Seq Scan on stock_1640000 stock
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: stock.s_i_id
-> Seq Scan on stock_1640000 stock
@ -135,7 +135,7 @@ group by s_i_id
having (select true)
order by s_i_id;
QUERY PLAN
-------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Sort (cost=0.00..0.00 rows=0 width=0)
Sort Key: remote_scan.s_i_id
InitPlan 1 (returns $0)
@ -147,7 +147,7 @@ order by s_i_id;
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate (cost=40.60..42.60 rows=200 width=12)
Group Key: s.s_i_id
-> Seq Scan on stock_1640000 s (cost=0.00..30.40 rows=2040 width=8)
@ -158,7 +158,7 @@ from stock s
group by s_i_id
having (select true);
QUERY PLAN
-------------------------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate (cost=0.00..0.00 rows=0 width=0)
Group Key: remote_scan.s_i_id
Filter: $0
@ -168,7 +168,7 @@ having (select true);
Task Count: 4
Tasks Shown: One of 4
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate (cost=40.60..42.60 rows=200 width=12)
Group Key: s.s_i_id
-> Seq Scan on stock_1640000 s (cost=0.00..30.40 rows=2040 width=8)
@ -181,7 +181,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
(0 rows)
INSERT INTO stock SELECT c, c, c FROM generate_series(1, 5) as c;
@ -192,7 +192,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
3 | 3
4 | 4
5 | 5
@ -204,7 +204,7 @@ group by s_i_id
having sum(s_order_cnt) > (select max(s_order_cnt) - 3 as having_query from stock)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
3 | 3
4 | 4
5 | 5
@ -217,7 +217,7 @@ group by s_i_id
having (select true)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -232,7 +232,7 @@ group by s_i_id
having (select false)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
(0 rows)
select s_i_id, sum(s_order_cnt) as ordercount
@ -241,7 +241,7 @@ group by s_i_id
having (select true)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -255,7 +255,7 @@ group by s_i_id
having (select false)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
(0 rows)
select s_i_id, sum(s_order_cnt) as ordercount
@ -264,7 +264,7 @@ group by s_i_id
having (select true)
order by s_i_id;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -320,7 +320,7 @@ insert into stock VALUES
SELECT create_distributed_table('stock','s_w_id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -340,7 +340,7 @@ having sum(s_order_cnt) >
and n_name = 'GERMANY')
order by ordercount desc;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
33 | 1
1 | 1
(2 rows)
@ -361,7 +361,7 @@ having sum(s_order_cnt) >
and n_name = 'GERMANY')
order by ordercount desc;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
1 | 100001
(1 row)

View File

@ -62,31 +62,31 @@ create table supplier (
);
SELECT create_distributed_table('order_line','ol_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('stock','s_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('item');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('nation');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('supplier');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -103,7 +103,7 @@ select s_i_id
AND s_i_id = ol_i_id
order by s_i_id;
s_i_id
--------
---------------------------------------------------------------------
1
2
3
@ -151,7 +151,7 @@ where su_suppkey in
and n_name = 'Germany'
order by su_name;
su_name | su_address
---------+------------
---------------------------------------------------------------------
(0 rows)
-- Fallback to public tables with prefilled data
@ -185,7 +185,7 @@ where s_suppkey in
and n_name = 'GERMANY'
order by s_name;
s_name | s_address
---------------------------+-------------------------------------
---------------------------------------------------------------------
Supplier#000000033 | gfeKpYw3400L0SDywXA6Ya1Qmq1w6YB9f3R
(1 row)
@ -206,7 +206,7 @@ where s_suppkey in
and n_name = 'GERMANY'
order by s_name;
s_name | s_address
---------------------------+-------------------------------------
---------------------------------------------------------------------
Supplier#000000033 | gfeKpYw3400L0SDywXA6Ya1Qmq1w6YB9f3R
Supplier#000000044 | kERxlLDnlIZJdN66zAPHklyL
(2 rows)

View File

@ -146,73 +146,73 @@ CREATE TABLE supplier (
);
SELECT create_distributed_table('order_line','ol_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('new_order','no_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('stock','s_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('oorder','o_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('history','h_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('customer','c_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('district','d_w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('warehouse','w_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('item');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('region');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('nation');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('supplier');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -246,7 +246,7 @@ WHERE ol_delivery_d > '2007-01-02 00:00:00.000000'
GROUP BY ol_number
ORDER BY ol_number;
ol_number | sum_qty | sum_amount | avg_qty | avg_amount | count_order
-----------+---------+------------+------------------------+------------------------+-------------
---------------------------------------------------------------------
0 | 0 | 0.00 | 0.00000000000000000000 | 0.00000000000000000000 | 1
1 | 1 | 1.00 | 1.00000000000000000000 | 1.00000000000000000000 | 1
2 | 2 | 2.00 | 2.0000000000000000 | 2.0000000000000000 | 1
@ -302,7 +302,7 @@ ORDER BY
su_name,
i_id;
su_suppkey | su_name | n_name | i_id | i_name | su_address | su_phone | su_comment
------------+---------------------------+---------------------------+------+----------+------------+-----------------+-------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
9 | abc | Germany | 3 | Keyboard | def | ghi | jkl
4 | abc | The Netherlands | 2 | Keyboard | def | ghi | jkl
(2 rows)
@ -339,7 +339,7 @@ ORDER BY
revenue DESC,
o_entry_d;
ol_o_id | ol_w_id | ol_d_id | revenue | o_entry_d
---------+---------+---------+---------+--------------------------
---------------------------------------------------------------------
10 | 10 | 10 | 10.00 | Fri Oct 17 00:00:00 2008
9 | 9 | 9 | 9.00 | Fri Oct 17 00:00:00 2008
8 | 8 | 8 | 8.00 | Fri Oct 17 00:00:00 2008
@ -370,7 +370,7 @@ WHERE o_entry_d >= '2007-01-02 00:00:00.000000'
GROUP BY o_ol_cnt
ORDER BY o_ol_cnt;
o_ol_cnt | order_count
----------+-------------
---------------------------------------------------------------------
1 | 11
(1 row)
@ -407,7 +407,7 @@ WHERE c_id = o_c_id
GROUP BY n_name
ORDER BY revenue DESC;
n_name | revenue
---------------------------+---------
---------------------------------------------------------------------
Germany | 3.00
The Netherlands | 2.00
(2 rows)
@ -420,7 +420,7 @@ WHERE ol_delivery_d >= '1999-01-01 00:00:00.000000'
AND ol_delivery_d < '2020-01-01 00:00:00.000000'
AND ol_quantity BETWEEN 1 AND 100000;
revenue
---------
---------------------------------------------------------------------
55.00
(1 row)
@ -463,7 +463,7 @@ ORDER BY
cust_nation,
l_year;
supp_nation | cust_nation | l_year | revenue
-------------+-------------+--------+---------
---------------------------------------------------------------------
9 | C | 2008 | 3.00
(1 row)
@ -502,7 +502,7 @@ WHERE i_id = s_i_id
GROUP BY extract(YEAR FROM o_entry_d)
ORDER BY l_year;
l_year | mkt_share
--------+------------------------
---------------------------------------------------------------------
2008 | 0.50000000000000000000
(1 row)
@ -534,7 +534,7 @@ ORDER BY
n_name,
l_year DESC;
n_name | l_year | sum_profit
---------------------------+--------+------------
---------------------------------------------------------------------
Germany | 2008 | 3.00
The Netherlands | 2008 | 2.00
United States | 2008 | 1.00
@ -570,7 +570,7 @@ GROUP BY
n_name
ORDER BY revenue DESC;
c_id | c_last | revenue | c_city | c_phone | n_name
------+--------+---------+-----------+------------------+---------------------------
---------------------------------------------------------------------
10 | John | 10.00 | Some City | +1 000 0000000 | Cambodia
9 | John | 9.00 | Some City | +1 000 0000000 | Cambodia
8 | John | 8.00 | Some City | +1 000 0000000 | Cambodia
@ -607,7 +607,7 @@ HAVING sum(s_order_cnt) >
AND n_name = 'Germany')
ORDER BY ordercount DESC;
s_i_id | ordercount
--------+------------
---------------------------------------------------------------------
3 | 3
(1 row)
@ -627,7 +627,7 @@ WHERE ol_w_id = o_w_id
GROUP BY o_ol_cnt
ORDER BY o_ol_cnt;
o_ol_cnt | high_line_count | low_line_count
----------+-----------------+----------------
---------------------------------------------------------------------
1 | 2 | 9
(1 row)
@ -650,7 +650,7 @@ ORDER BY
custdist DESC,
c_count DESC;
c_count | custdist
---------+----------
---------------------------------------------------------------------
0 | 9
1 | 2
(2 rows)
@ -665,7 +665,7 @@ WHERE ol_i_id = i_id
AND ol_delivery_d >= '2007-01-02 00:00:00.000000'
AND ol_delivery_d < '2020-01-02 00:00:00.000000';
promo_revenue
------------------------
---------------------------------------------------------------------
0.00000000000000000000
(1 row)
@ -694,7 +694,7 @@ WHERE su_suppkey = supplier_no
AND total_revenue = (SELECT max(total_revenue) FROM revenue)
ORDER BY su_suppkey;
su_suppkey | su_name | su_address | su_phone | total_revenue
------------+---------------------------+------------+-----------------+---------------
---------------------------------------------------------------------
9 | abc | def | ghi | 3.00
(1 row)
@ -719,7 +719,7 @@ GROUP BY
i_price
ORDER BY supplier_cnt DESC;
i_name | brand | i_price | supplier_cnt
----------+-------+---------+--------------
---------------------------------------------------------------------
Keyboard | co | 50.00 | 3
(1 row)
@ -739,7 +739,7 @@ FROM
GROUP BY i_id) t
WHERE ol_i_id = t.i_id;
avg_yearly
---------------------
---------------------------------------------------------------------
27.5000000000000000
(1 row)
@ -776,7 +776,7 @@ ORDER BY
sum(ol_amount) DESC,
o_entry_d;
c_last | o_id | o_entry_d | o_ol_cnt | sum
--------+------+--------------------------+----------+-------
---------------------------------------------------------------------
John | 10 | Fri Oct 17 00:00:00 2008 | 1 | 10.00
John | 9 | Fri Oct 17 00:00:00 2008 | 1 | 9.00
John | 8 | Fri Oct 17 00:00:00 2008 | 1 | 8.00
@ -809,7 +809,7 @@ WHERE ( ol_i_id = i_id
AND i_price BETWEEN 1 AND 400000
AND ol_w_id IN (1,5,3));
revenue
---------
---------------------------------------------------------------------
7.00
(1 row)
@ -838,7 +838,7 @@ WHERE su_suppkey in
AND n_name = 'Germany'
ORDER BY su_name;
su_name | su_address
---------------------------+------------
---------------------------------------------------------------------
abc | def
(1 row)
@ -873,7 +873,7 @@ ORDER BY
numwait desc,
su_name;
su_name | numwait
---------+---------
---------------------------------------------------------------------
(0 rows)
-- Query 22
@ -896,7 +896,7 @@ WHERE substr(c_phone,1,1) in ('1','2','3','4','5','6','7')
GROUP BY substr(c_state,1,1)
ORDER BY substr(c_state,1,1);
country | numcust | totacctbal
---------+---------+------------
---------------------------------------------------------------------
(0 rows)
SET client_min_messages TO WARNING;

@ -5,14 +5,14 @@ SET search_path TO coordinator_shouldhaveshards;
SET client_min_messages TO WARNING;
SELECT 1 FROM master_add_node('localhost', :master_port, groupid => 0);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
RESET client_min_messages;
SELECT 1 FROM master_set_node_property('localhost', :master_port, 'shouldhaveshards', true);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
@ -20,14 +20,14 @@ SET citus.shard_replication_factor TO 1;
CREATE TABLE test (x int, y int);
SELECT create_distributed_table('test','x', colocate_with := 'none');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard JOIN pg_dist_placement USING (shardid)
WHERE logicalrelid = 'test'::regclass AND groupid = 0;
count
-------
---------------------------------------------------------------------
2
(1 row)
@ -37,20 +37,20 @@ INSERT INTO test SELECT s,s FROM generate_series(2,100) s;
INSERT INTO test VALUES (1, 1);
SELECT y FROM test WHERE x = 1;
y
---
---------------------------------------------------------------------
1
(1 row)
-- multi-shard queries connect to localhost
SELECT count(*) FROM test;
count
-------
---------------------------------------------------------------------
100
(1 row)
WITH a AS (SELECT * FROM test) SELECT count(*) FROM test;
count
-------
---------------------------------------------------------------------
100
(1 row)
@ -58,13 +58,13 @@ WITH a AS (SELECT * FROM test) SELECT count(*) FROM test;
BEGIN;
SELECT y FROM test WHERE x = 1;
y
---
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM test;
count
-------
---------------------------------------------------------------------
100
(1 row)
@ -72,13 +72,13 @@ END;
BEGIN;
SELECT y FROM test WHERE x = 1;
y
---
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM test;
count
-------
---------------------------------------------------------------------
100
(1 row)
@ -89,7 +89,7 @@ ALTER TABLE test ADD COLUMN z int;
BEGIN;
SELECT y FROM test WHERE x = 1;
y
---
---------------------------------------------------------------------
1
(1 row)
@ -102,7 +102,7 @@ BEGIN;
ALTER TABLE test DROP COLUMN z;
SELECT y FROM test WHERE x = 1;
y
---
---------------------------------------------------------------------
1
(1 row)
@ -112,7 +112,7 @@ DROP TABLE test;
DROP SCHEMA coordinator_shouldhaveshards CASCADE;
SELECT 1 FROM master_set_node_property('localhost', :master_port, 'shouldhaveshards', false);
?column?
----------
---------------------------------------------------------------------
1
(1 row)

@ -5,7 +5,7 @@ INSERT INTO tt1 VALUES(1,2),(2,3),(3,4);
SELECT create_distributed_table('tt1','id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -14,7 +14,7 @@ INSERT INTO tt2 VALUES(3,3),(4,4),(5,5);
SELECT create_distributed_table('tt2','id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -41,7 +41,7 @@ FROM cte_1
WHERE cte_1.id = tt1.id;
SELECT * FROM tt1 ORDER BY id;
id | value_1
----+---------
---------------------------------------------------------------------
1 | 2
2 | 6
3 | 4
@ -65,7 +65,7 @@ UPDATE tt1
SET value_1 = (SELECT max(id) + abs(2 + 3.5) FROM cte_1);
SELECT * FROM tt1 ORDER BY id;
id | value_1
----+---------
---------------------------------------------------------------------
1 | 9
2 | 9
3 | 9
@ -89,7 +89,7 @@ UPDATE tt1
SET value_1 = (SELECT max(id) + abs(2 + 3.5) FROM cte_1);
SELECT * FROM tt1 ORDER BY id;
id | value_1
----+---------
---------------------------------------------------------------------
1 | 9
2 | 9
3 | 9
@ -115,7 +115,7 @@ USING cte_1
WHERE tt1.id < cte_1.id;
SELECT * FROM tt1 ORDER BY id;
id | value_1
----+---------
---------------------------------------------------------------------
3 | 4
(1 row)
@ -135,7 +135,7 @@ USING cte_1
WHERE tt1.id < cte_1.id;
SELECT * FROM tt1 ORDER BY id;
id | value_1
----+---------
---------------------------------------------------------------------
(0 rows)
ROLLBACK;

@ -5,7 +5,7 @@ INSERT INTO tt1 VALUES(1,2),(2,3),(3,4);
SELECT create_distributed_table('tt1','id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -14,7 +14,7 @@ INSERT INTO tt2 VALUES(3,3),(4,4),(5,5);
SELECT create_distributed_table('tt2','id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)

@ -15,13 +15,13 @@ CREATE TABLE raw_table (day date, user_id int);
CREATE TABLE daily_uniques(day date, unique_users hll);
SELECT create_distributed_table('raw_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('daily_uniques', 'day');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -39,7 +39,7 @@ FROM (
SELECT hll_add_agg(hll_hash_integer(user_id)) AS agg
FROM raw_table)a;
hll_cardinality
-----------------
---------------------------------------------------------------------
19
(1 row)
@ -55,7 +55,7 @@ WHERE day >= '2018-06-20' and day <= '2018-06-30'
ORDER BY 2 DESC,1
LIMIT 10;
day | hll_cardinality
------------+-----------------
---------------------------------------------------------------------
06-20-2018 | 19
06-21-2018 | 19
06-22-2018 | 19
@ -73,7 +73,7 @@ SELECT hll_cardinality(hll_union_agg(unique_users))
FROM daily_uniques
WHERE day >= '2018-05-24'::date AND day <= '2018-05-31'::date;
hll_cardinality
-----------------
---------------------------------------------------------------------
19
(1 row)
@ -83,7 +83,7 @@ WHERE day >= '2018-06-23' AND day <= '2018-07-01'
GROUP BY 1
ORDER BY 1;
month | hll_cardinality
-------+-----------------
---------------------------------------------------------------------
6 | 19
7 | 13
(2 rows)
@ -109,30 +109,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
SET hll.force_groupagg to ON;
@ -143,30 +143,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
-- Test disabling hash_agg with operator on coordinator query
@ -178,30 +178,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
SET hll.force_groupagg to ON;
@ -212,30 +212,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
-- Test disabling hash_agg with expression on coordinator query
@ -247,30 +247,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
SET hll.force_groupagg to ON;
@ -281,30 +281,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
-- Test disabling hash_agg with having
@ -316,30 +316,30 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(23 rows)
SET hll.force_groupagg to ON;
@ -351,34 +351,34 @@ FROM
GROUP BY(1)
HAVING hll_cardinality(hll_union_agg(unique_users)) > 1;
QUERY PLAN
----------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Seq Scan on daily_uniques_360615 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Seq Scan on daily_uniques_360616 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Seq Scan on daily_uniques_360617 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Seq Scan on daily_uniques_360618 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(27 rows)
DROP TABLE raw_table;
@ -396,13 +396,13 @@ CREATE TABLE customer_reviews (day date, user_id int, review int);
CREATE TABLE popular_reviewer(day date, reviewers jsonb);
SELECT create_distributed_table('customer_reviews', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('popular_reviewer', 'day');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -422,7 +422,7 @@ FROM (
)a
ORDER BY 2 DESC, 1;
item | frequency
------+-----------
---------------------------------------------------------------------
1 | 7843
2 | 7843
3 | 6851
@ -447,7 +447,7 @@ WHERE day >= '2018-06-20' and day <= '2018-06-30'
ORDER BY 3 DESC, 1, 2
LIMIT 10;
day | item | frequency
------------+------+-----------
---------------------------------------------------------------------
06-20-2018 | 1 | 248
06-20-2018 | 2 | 248
06-21-2018 | 1 | 248
@ -469,7 +469,7 @@ FROM (
)a
ORDER BY 2 DESC, 1;
item | frequency
------+-----------
---------------------------------------------------------------------
1 | 1240
2 | 1240
0 | 992
@ -489,7 +489,7 @@ FROM (
)a
ORDER BY 1, 3 DESC, 2;
month | item | frequency
-------+------+-----------
---------------------------------------------------------------------
6 | 1 | 1054
6 | 2 | 1054
6 | 3 | 992
@ -509,14 +509,10 @@ FROM popular_reviewer
WHERE day >= '2018-05-24'::date AND day <= '2018-05-31'::date
ORDER BY 2 DESC, 1;
ERROR: set-valued function called in context that cannot accept a set
LINE 1: SELECT (topn(topn_union_agg(reviewers), 10)).*
^
SELECT (topn(topn_add_agg(user_id::text), 10)).*
FROM customer_reviews
ORDER BY 2 DESC, 1;
ERROR: set-valued function called in context that cannot accept a set
LINE 1: SELECT (topn(topn_add_agg(user_id::text), 10)).*
^
-- The following is going to be supported after window function support
SELECT day, (topn(agg, 10)).*
FROM (

@ -10,7 +10,7 @@ WHERE name = 'hll'
\gset
:create_cmd;
hll_present
-------------
---------------------------------------------------------------------
f
(1 row)
@ -18,18 +18,14 @@ SET citus.shard_count TO 4;
CREATE TABLE raw_table (day date, user_id int);
CREATE TABLE daily_uniques(day date, unique_users hll);
ERROR: type "hll" does not exist
LINE 1: CREATE TABLE daily_uniques(day date, unique_users hll);
^
SELECT create_distributed_table('raw_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('daily_uniques', 'day');
ERROR: relation "daily_uniques" does not exist
LINE 1: SELECT create_distributed_table('daily_uniques', 'day');
^
INSERT INTO raw_table
SELECT day, user_id % 19
FROM generate_series('2018-05-24'::timestamp, '2018-06-24'::timestamp, '1 day'::interval) as f(day),
@ -44,8 +40,6 @@ FROM (
SELECT hll_add_agg(hll_hash_integer(user_id)) AS agg
FROM raw_table)a;
ERROR: function hll_hash_integer(integer) does not exist
LINE 3: SELECT hll_add_agg(hll_hash_integer(user_id)) AS agg
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
-- Aggregate the data into daily_uniques
INSERT INTO daily_uniques
@ -53,8 +47,6 @@ INSERT INTO daily_uniques
FROM raw_table
GROUP BY 1;
ERROR: relation "daily_uniques" does not exist
LINE 1: INSERT INTO daily_uniques
^
-- Basic hll_cardinality check on aggregated data
SELECT day, hll_cardinality(unique_users)
FROM daily_uniques
@ -62,36 +54,26 @@ WHERE day >= '2018-06-20' and day <= '2018-06-30'
ORDER BY 2 DESC,1
LIMIT 10;
ERROR: relation "daily_uniques" does not exist
LINE 2: FROM daily_uniques
^
-- Union aggregated data for one week
SELECT hll_cardinality(hll_union_agg(unique_users))
FROM daily_uniques
WHERE day >= '2018-05-24'::date AND day <= '2018-05-31'::date;
ERROR: relation "daily_uniques" does not exist
LINE 2: FROM daily_uniques
^
SELECT EXTRACT(MONTH FROM day) AS month, hll_cardinality(hll_union_agg(unique_users))
FROM daily_uniques
WHERE day >= '2018-06-23' AND day <= '2018-07-01'
GROUP BY 1
ORDER BY 1;
ERROR: relation "daily_uniques" does not exist
LINE 2: FROM daily_uniques
^
-- These are going to be supported after window function support
SELECT day, hll_cardinality(hll_union_agg(unique_users) OVER seven_days)
FROM daily_uniques
WINDOW seven_days AS (ORDER BY day ASC ROWS 6 PRECEDING);
ERROR: relation "daily_uniques" does not exist
LINE 2: FROM daily_uniques
^
SELECT day, (hll_cardinality(hll_union_agg(unique_users) OVER two_days)) - hll_cardinality(unique_users) AS lost_uniques
FROM daily_uniques
WINDOW two_days AS (ORDER BY day ASC ROWS 1 PRECEDING);
ERROR: relation "daily_uniques" does not exist
LINE 2: FROM daily_uniques
^
-- Test disabling hash_agg on coordinator query
SET citus.explain_all_tasks to true;
SET hll.force_groupagg to OFF;
@ -102,8 +84,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
SET hll.force_groupagg to ON;
EXPLAIN(COSTS OFF)
SELECT
@ -112,8 +92,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
-- Test disabling hash_agg with operator on coordinator query
SET hll.force_groupagg to OFF;
EXPLAIN(COSTS OFF)
@ -123,8 +101,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
SET hll.force_groupagg to ON;
EXPLAIN(COSTS OFF)
SELECT
@ -133,8 +109,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
-- Test disabling hash_agg with expression on coordinator query
SET hll.force_groupagg to OFF;
EXPLAIN(COSTS OFF)
@ -144,8 +118,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
SET hll.force_groupagg to ON;
EXPLAIN(COSTS OFF)
SELECT
@ -154,8 +126,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
-- Test disabling hash_agg with having
SET hll.force_groupagg to OFF;
EXPLAIN(COSTS OFF)
@ -165,8 +135,6 @@ FROM
daily_uniques
GROUP BY(1);
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
SET hll.force_groupagg to ON;
EXPLAIN(COSTS OFF)
SELECT
@ -176,8 +144,6 @@ FROM
GROUP BY(1)
HAVING hll_cardinality(hll_union_agg(unique_users)) > 1;
ERROR: relation "daily_uniques" does not exist
LINE 5: daily_uniques
^
DROP TABLE raw_table;
DROP TABLE daily_uniques;
ERROR: table "daily_uniques" does not exist
@ -191,7 +157,7 @@ WHERE name = 'topn'
\gset
:create_topn;
topn_present
--------------
---------------------------------------------------------------------
f
(1 row)
@ -199,13 +165,13 @@ CREATE TABLE customer_reviews (day date, user_id int, review int);
CREATE TABLE popular_reviewer(day date, reviewers jsonb);
SELECT create_distributed_table('customer_reviews', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('popular_reviewer', 'day');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -225,8 +191,6 @@ FROM (
)a
ORDER BY 2 DESC, 1;
ERROR: function topn_add_agg(text) does not exist
LINE 3: SELECT topn_add_agg(user_id::text) AS agg
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
-- Aggregate the data into popular_reviewer
INSERT INTO popular_reviewer
@ -234,8 +198,6 @@ INSERT INTO popular_reviewer
FROM customer_reviews
GROUP BY 1;
ERROR: function topn_add_agg(text) does not exist
LINE 2: SELECT day, topn_add_agg(user_id::text)
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
-- Basic topn check on aggregated data
SELECT day, (topn(reviewers, 10)).*
@ -244,8 +206,6 @@ WHERE day >= '2018-06-20' and day <= '2018-06-30'
ORDER BY 3 DESC, 1, 2
LIMIT 10;
ERROR: function topn(jsonb, integer) does not exist
LINE 1: SELECT day, (topn(reviewers, 10)).*
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
-- Union aggregated data for one week
SELECT (topn(agg, 10)).*
@ -256,8 +216,6 @@ FROM (
)a
ORDER BY 2 DESC, 1;
ERROR: function topn_union_agg(jsonb) does not exist
LINE 3: SELECT topn_union_agg(reviewers) AS agg
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
SELECT month, (topn(agg, 5)).*
FROM (
@ -269,8 +227,6 @@ FROM (
)a
ORDER BY 1, 3 DESC, 2;
ERROR: function topn_union_agg(jsonb) does not exist
LINE 3: SELECT EXTRACT(MONTH FROM day) AS month, topn_union_agg(rev...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
-- TODO the following queries will be supported after we fix #2265
-- They work for PG9.6 but not for PG10
@ -279,15 +235,11 @@ FROM popular_reviewer
WHERE day >= '2018-05-24'::date AND day <= '2018-05-31'::date
ORDER BY 2 DESC, 1;
ERROR: function topn_union_agg(jsonb) does not exist
LINE 1: SELECT (topn(topn_union_agg(reviewers), 10)).*
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
SELECT (topn(topn_add_agg(user_id::text), 10)).*
FROM customer_reviews
ORDER BY 2 DESC, 1;
ERROR: function topn_add_agg(text) does not exist
LINE 1: SELECT (topn(topn_add_agg(user_id::text), 10)).*
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
-- The following is going to be supported after window function support
SELECT day, (topn(agg, 10)).*
@ -299,8 +251,6 @@ FROM (
ORDER BY 3 DESC, 1, 2
LIMIT 10;
ERROR: function topn_union_agg(jsonb) does not exist
LINE 3: SELECT day, topn_union_agg(reviewers) OVER seven_days AS ag...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
SELECT day, (topn(topn_add_agg(user_id::text) OVER seven_days, 10)).*
FROM customer_reviews
@ -308,8 +258,6 @@ WINDOW seven_days AS (ORDER BY day ASC ROWS 6 PRECEDING)
ORDER BY 3 DESC, 1, 2
LIMIT 10;
ERROR: function topn_add_agg(text) does not exist
LINE 1: SELECT day, (topn(topn_add_agg(user_id::text) OVER seven_day...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
DROP TABLE customer_reviews;
DROP TABLE popular_reviewer;

@ -15,13 +15,13 @@ CREATE TABLE raw_table (day date, user_id int);
CREATE TABLE daily_uniques(day date, unique_users hll);
SELECT create_distributed_table('raw_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('daily_uniques', 'day');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -39,7 +39,7 @@ FROM (
SELECT hll_add_agg(hll_hash_integer(user_id)) AS agg
FROM raw_table)a;
hll_cardinality
-----------------
---------------------------------------------------------------------
19
(1 row)
@ -55,7 +55,7 @@ WHERE day >= '2018-06-20' and day <= '2018-06-30'
ORDER BY 2 DESC,1
LIMIT 10;
day | hll_cardinality
------------+-----------------
---------------------------------------------------------------------
06-20-2018 | 19
06-21-2018 | 19
06-22-2018 | 19
@ -73,7 +73,7 @@ SELECT hll_cardinality(hll_union_agg(unique_users))
FROM daily_uniques
WHERE day >= '2018-05-24'::date AND day <= '2018-05-31'::date;
hll_cardinality
-----------------
---------------------------------------------------------------------
19
(1 row)
@ -83,7 +83,7 @@ WHERE day >= '2018-06-23' AND day <= '2018-07-01'
GROUP BY 1
ORDER BY 1;
month | hll_cardinality
-------+-----------------
---------------------------------------------------------------------
6 | 19
7 | 13
(2 rows)
@ -109,32 +109,32 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate
Group Key: remote_scan.day
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(25 rows)
SET hll.force_groupagg to ON;
@ -145,7 +145,7 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------------
---------------------------------------------------------------------
GroupAggregate
Group Key: remote_scan.day
-> Sort
@ -154,25 +154,25 @@ GROUP BY(1);
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(27 rows)
-- Test disabling hash_agg with operator on coordinator query
@ -184,32 +184,32 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate
Group Key: remote_scan.day
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(25 rows)
SET hll.force_groupagg to ON;
@ -220,7 +220,7 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------------
---------------------------------------------------------------------
GroupAggregate
Group Key: remote_scan.day
-> Sort
@ -229,25 +229,25 @@ GROUP BY(1);
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(27 rows)
-- Test disabling hash_agg with expression on coordinator query
@ -259,32 +259,32 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate
Group Key: remote_scan.day
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(25 rows)
SET hll.force_groupagg to ON;
@ -295,7 +295,7 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------------
---------------------------------------------------------------------
GroupAggregate
Group Key: remote_scan.day
-> Sort
@ -304,25 +304,25 @@ GROUP BY(1);
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(27 rows)
-- Test disabling hash_agg with having
@ -334,32 +334,32 @@ FROM
daily_uniques
GROUP BY(1);
QUERY PLAN
------------------------------------------------------------------------
---------------------------------------------------------------------
HashAggregate
Group Key: remote_scan.day
-> Custom Scan (Citus Adaptive)
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> HashAggregate
Group Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(25 rows)
SET hll.force_groupagg to ON;
@ -371,7 +371,7 @@ FROM
GROUP BY(1)
HAVING hll_cardinality(hll_union_agg(unique_users)) > 1;
QUERY PLAN
----------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
GroupAggregate
Group Key: remote_scan.day
Filter: (hll_cardinality(hll_union_agg(remote_scan.worker_column_3)) > '1'::double precision)
@ -381,37 +381,37 @@ HAVING hll_cardinality(hll_union_agg(unique_users)) > 1;
Task Count: 4
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> GroupAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Sort
Sort Key: day
-> Seq Scan on daily_uniques_360261 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> GroupAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Sort
Sort Key: day
-> Seq Scan on daily_uniques_360262 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> GroupAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Sort
Sort Key: day
-> Seq Scan on daily_uniques_360263 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
-> Task
Node: host=localhost port=57638 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> GroupAggregate
Group Key: day
Filter: (hll_cardinality(hll_union_agg(unique_users)) > '1'::double precision)
-> Sort
Sort Key: day
-> Seq Scan on daily_uniques_360264 daily_uniques
-> Seq Scan on daily_uniques_xxxxxxx daily_uniques
(40 rows)
DROP TABLE raw_table;
@ -429,13 +429,13 @@ CREATE TABLE customer_reviews (day date, user_id int, review int);
CREATE TABLE popular_reviewer(day date, reviewers jsonb);
SELECT create_distributed_table('customer_reviews', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('popular_reviewer', 'day');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -455,7 +455,7 @@ FROM (
)a
ORDER BY 2 DESC, 1;
item | frequency
------+-----------
---------------------------------------------------------------------
1 | 7843
2 | 7843
3 | 6851
@ -480,7 +480,7 @@ WHERE day >= '2018-06-20' and day <= '2018-06-30'
ORDER BY 3 DESC, 1, 2
LIMIT 10;
day | item | frequency
------------+------+-----------
---------------------------------------------------------------------
06-20-2018 | 1 | 248
06-20-2018 | 2 | 248
06-21-2018 | 1 | 248
@ -502,7 +502,7 @@ FROM (
)a
ORDER BY 2 DESC, 1;
item | frequency
------+-----------
---------------------------------------------------------------------
1 | 1240
2 | 1240
0 | 992
@ -522,7 +522,7 @@ FROM (
)a
ORDER BY 1, 3 DESC, 2;
month | item | frequency
-------+------+-----------
---------------------------------------------------------------------
6 | 1 | 1054
6 | 2 | 1054
6 | 3 | 992
@ -542,7 +542,7 @@ FROM popular_reviewer
WHERE day >= '2018-05-24'::date AND day <= '2018-05-31'::date
ORDER BY 2 DESC, 1;
item | frequency
------+-----------
---------------------------------------------------------------------
1 | 1240
2 | 1240
0 | 992
@ -556,7 +556,7 @@ SELECT (topn(topn_add_agg(user_id::text), 10)).*
FROM customer_reviews
ORDER BY 2 DESC, 1;
item | frequency
------+-----------
---------------------------------------------------------------------
1 | 7843
2 | 7843
3 | 6851

@ -10,7 +10,7 @@ SET search_path TO disabled_object_propagation;
CREATE TABLE t1 (a int PRIMARY KEY , b int);
SELECT create_distributed_table('t1','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -19,7 +19,7 @@ CREATE TYPE tt1 AS (a int , b int);
CREATE TABLE t2 (a int PRIMARY KEY, b tt1);
SELECT create_distributed_table('t2', 'a');
ERROR: type "disabled_object_propagation.tt1" does not exist
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
SELECT 1 FROM run_command_on_workers($$
BEGIN;
SET LOCAL citus.enable_ddl_propagation TO off;
@ -27,14 +27,14 @@ SELECT 1 FROM run_command_on_workers($$
COMMIT;
$$);
?column?
----------
---------------------------------------------------------------------
1
1
(2 rows)
SELECT create_distributed_table('t2', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -43,7 +43,7 @@ CREATE TYPE tt2 AS ENUM ('a', 'b');
CREATE TABLE t3 (a int PRIMARY KEY, b tt2);
SELECT create_distributed_table('t3', 'a');
ERROR: type "disabled_object_propagation.tt2" does not exist
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
SELECT 1 FROM run_command_on_workers($$
BEGIN;
SET LOCAL citus.enable_ddl_propagation TO off;
@ -51,14 +51,14 @@ SELECT 1 FROM run_command_on_workers($$
COMMIT;
$$);
?column?
----------
---------------------------------------------------------------------
1
1
(2 rows)
SELECT create_distributed_table('t3', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -70,7 +70,7 @@ CREATE TYPE tt3 AS (a int, b int);
CREATE TABLE t4 (a int PRIMARY KEY, b tt3);
SELECT create_distributed_table('t4','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -79,7 +79,7 @@ COMMIT;
-- verify the type is distributed
SELECT count(*) FROM citus.pg_dist_object WHERE objid = 'disabled_object_propagation.tt3'::regtype::oid;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -98,7 +98,7 @@ SELECT row(nspname, typname, usename)
WHERE typname = 'tt3';
$$);
run_command_on_workers
------------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(disabled_object_propagation,tt3,postgres)")
(localhost,57638,t,"(disabled_object_propagation,tt3,postgres)")
(2 rows)
@ -113,7 +113,7 @@ SELECT run_command_on_workers($$
GROUP BY pg_type.typname;
$$);
run_command_on_workers
------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(tt3,""a int4, b int4"")")
(localhost,57638,t,"(tt3,""a int4, b int4"")")
(2 rows)

@ -4,7 +4,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE USER collationuser;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -24,7 +24,7 @@ JOIN pg_authid a ON a.oid = c.collowner
WHERE collname like 'german_phonebook%'
ORDER BY 1,2,3;
collname | nspname | rolname
------------------+-----------------+----------
---------------------------------------------------------------------
german_phonebook | collation_tests | postgres
(1 row)
@ -36,20 +36,20 @@ INSERT INTO test_propagate VALUES (1, 'aesop', U&'\00E4sop'), (2, U&'Vo\1E9Er',
SELECT create_distributed_table('test_propagate', 'id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- Test COLLATE is pushed down
SELECT * FROM collation_tests.test_propagate WHERE t2 < 'b';
id | t1 | t2
----+-------+------
---------------------------------------------------------------------
1 | aesop | äsop
(1 row)
SELECT * FROM collation_tests.test_propagate WHERE t2 < 'b' COLLATE "C";
id | t1 | t2
----+------+-------
---------------------------------------------------------------------
2 | Voẞr | Vossr
(1 row)
@ -57,7 +57,7 @@ SELECT * FROM collation_tests.test_propagate WHERE t2 < 'b' COLLATE "C";
CREATE TABLE test_range(key text COLLATE german_phonebook, val int);
SELECT create_distributed_table('test_range', 'key', 'range');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -77,7 +77,7 @@ SELECT * FROM test_range WHERE key > 'Ab' AND key < U&'\00E4z';
DEBUG: Creating router plan
DEBUG: Plan is router executable
key | val
------+-----
---------------------------------------------------------------------
äsop | 1
(1 row)
@ -89,7 +89,7 @@ JOIN pg_authid a ON a.oid = c.collowner
WHERE collname like 'german_phonebook%'
ORDER BY 1,2,3;
collname | nspname | rolname
-------------------------------+-----------------+----------
---------------------------------------------------------------------
german_phonebook | collation_tests | postgres
german_phonebook_unpropagated | collation_tests | postgres
(2 rows)
@ -106,7 +106,7 @@ JOIN pg_authid a ON a.oid = c.collowner
WHERE collname like 'german_phonebook%'
ORDER BY 1,2,3;
collname | nspname | rolname
-------------------------------+------------------+---------------
---------------------------------------------------------------------
german_phonebook2 | collation_tests2 | collationuser
german_phonebook_unpropagated | collation_tests | postgres
(2 rows)
@ -128,7 +128,7 @@ DROP SCHEMA collation_tests2 CASCADE;
DROP USER collationuser;
SELECT run_command_on_workers($$DROP USER collationuser;$$);
run_command_on_workers
---------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP ROLE")
(localhost,57638,t,"DROP ROLE")
(2 rows)

@ -1,7 +1,7 @@
CREATE SCHEMA collation_conflict;
SELECT run_command_on_workers($$CREATE SCHEMA collation_conflict;$$);
run_command_on_workers
-------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE SCHEMA")
(localhost,57638,t,"CREATE SCHEMA")
(2 rows)
@ -21,7 +21,7 @@ CREATE COLLATION caseinsensitive (
CREATE TABLE tblcoll(val text COLLATE caseinsensitive);
SELECT create_reference_table('tblcoll');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -33,7 +33,7 @@ JOIN pg_authid a ON a.oid = c.collowner
WHERE collname like 'caseinsensitive%'
ORDER BY 1,2,3;
collname | nspname | rolname
-----------------+--------------------+----------
---------------------------------------------------------------------
caseinsensitive | collation_conflict | postgres
(1 row)
@ -59,7 +59,7 @@ CREATE COLLATION caseinsensitive (
CREATE TABLE tblcoll(val text COLLATE caseinsensitive);
SELECT create_reference_table('tblcoll');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -71,7 +71,7 @@ JOIN pg_authid a ON a.oid = c.collowner
WHERE collname like 'caseinsensitive%'
ORDER BY 1,2,3;
collname | nspname | rolname
---------------------------------+--------------------+----------
---------------------------------------------------------------------
caseinsensitive | collation_conflict | postgres
caseinsensitive(citus_backup_0) | collation_conflict | postgres
(2 rows)
@ -81,13 +81,13 @@ SET search_path TO collation_conflict;
-- now test worker_create_or_replace_object directly
SELECT worker_create_or_replace_object($$CREATE COLLATION collation_conflict.caseinsensitive (provider = 'icu', lc_collate = 'und-u-ks-level2', lc_ctype = 'und-u-ks-level2')$$);
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
f
(1 row)
SELECT worker_create_or_replace_object($$CREATE COLLATION collation_conflict.caseinsensitive (provider = 'icu', lc_collate = 'und-u-ks-level2', lc_ctype = 'und-u-ks-level2')$$);
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
f
(1 row)

@ -4,7 +4,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE USER functionuser;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -131,7 +131,7 @@ SET citus.replication_model TO 'statement';
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('statement_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -141,7 +141,7 @@ SET citus.replication_model TO 'streaming';
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('streaming_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -149,7 +149,7 @@ SELECT create_distributed_table('streaming_table','id');
-- at the start of the test
select bool_or(hasmetadata) from pg_dist_node WHERE isactive AND noderole = 'primary';
bool_or
---------
---------------------------------------------------------------------
f
(1 row)
@ -157,21 +157,21 @@ select bool_or(hasmetadata) from pg_dist_node WHERE isactive AND noderole = 'pr
-- distribution_argument_index and colocationid
SELECT create_distributed_function('"add_mi''xed_param_names"(int, int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT distribution_argument_index is NULL, colocationid is NULL from citus.pg_dist_object
WHERE objid = 'add_mi''xed_param_names(int, int)'::regprocedure;
?column? | ?column?
----------+----------
---------------------------------------------------------------------
t | t
(1 row)
-- also show that we can use the function
SELECT * FROM run_command_on_workers('SELECT function_tests."add_mi''xed_param_names"(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 5
localhost | 57638 | t | 5
(2 rows)
@ -180,7 +180,7 @@ SELECT * FROM run_command_on_workers('SELECT function_tests."add_mi''xed_param_n
-- since the function doesn't have a parameter
select bool_or(hasmetadata) from pg_dist_node WHERE isactive AND noderole = 'primary';
bool_or
---------
---------------------------------------------------------------------
f
(1 row)
@ -203,52 +203,52 @@ END;
-- try to co-locate with a table that uses streaming replication
SELECT create_distributed_function('dup(int)', '$1', colocate_with := 'streaming_table');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM run_command_on_workers('SELECT function_tests.dup(42);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+-------------------
---------------------------------------------------------------------
localhost | 57637 | t | (42,"42 is text")
localhost | 57638 | t | (42,"42 is text")
(2 rows)
SELECT create_distributed_function('add(int,int)', '$1', colocate_with := 'streaming_table');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM run_command_on_workers('SELECT function_tests.add(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 5
localhost | 57638 | t | 5
(2 rows)
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
-- distribute aggregate
SELECT create_distributed_function('sum2(int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_function('my_rank("any")');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_function('agg_names(dup_result,dup_result)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -258,21 +258,21 @@ SELECT create_distributed_function('agg_names(dup_result,dup_result)');
ALTER FUNCTION add(int,int) CALLED ON NULL INPUT IMMUTABLE SECURITY INVOKER PARALLEL UNSAFE LEAKPROOF COST 5;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) RETURNS NULL ON NULL INPUT STABLE SECURITY DEFINER PARALLEL RESTRICTED;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) STRICT VOLATILE PARALLEL SAFE;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -280,49 +280,49 @@ SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
ALTER FUNCTION add(int,int) SET client_min_messages TO warning;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) SET client_min_messages TO error;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) SET client_min_messages TO debug;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) RESET client_min_messages;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) SET "citus.setting;'" TO 'hello '' world';
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) RESET "citus.setting;'";
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER FUNCTION add(int,int) SET search_path TO 'sch'';ma', public;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -333,7 +333,7 @@ ERROR: unsupported ALTER FUNCTION ... SET ... FROM CURRENT for a distributed fu
HINT: SET FROM CURRENT is not supported for distributed functions, instead use the SET ... TO ... syntax with a constant value.
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -342,7 +342,7 @@ ERROR: unsupported ALTER FUNCTION ... SET ... FROM CURRENT for a distributed fu
HINT: SET FROM CURRENT is not supported for distributed functions, instead use the SET ... TO ... syntax with a constant value.
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -351,7 +351,7 @@ ERROR: unsupported ALTER FUNCTION ... SET ... FROM CURRENT for a distributed fu
HINT: SET FROM CURRENT is not supported for distributed functions, instead use the SET ... TO ... syntax with a constant value.
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -359,20 +359,20 @@ SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
ALTER FUNCTION add(int,int) RENAME TO add2;
SELECT public.verify_function_is_same_on_workers('function_tests.add2(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT * FROM run_command_on_workers('SELECT function_tests.add(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+----------------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: function function_tests.add(integer, integer) does not exist
localhost | 57638 | f | ERROR: function function_tests.add(integer, integer) does not exist
(2 rows)
SELECT * FROM run_command_on_workers('SELECT function_tests.add2(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 5
localhost | 57638 | t | 5
(2 rows)
@ -381,7 +381,7 @@ ALTER FUNCTION add2(int,int) RENAME TO add;
ALTER AGGREGATE sum2(int) RENAME TO sum27;
SELECT * FROM run_command_on_workers($$SELECT 1 from pg_proc where proname = 'sum27';$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 1
localhost | 57638 | t | 1
(2 rows)
@ -391,7 +391,7 @@ ALTER AGGREGATE sum27(int) RENAME TO sum2;
ALTER FUNCTION add(int,int) OWNER TO functionuser;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -404,7 +404,7 @@ JOIN pg_namespace ON (pg_namespace.oid = pronamespace and nspname = 'function_te
WHERE proname = 'add';
$$);
run_command_on_workers
---------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(functionuser,function_tests,add)")
(localhost,57638,t,"(functionuser,function_tests,add)")
(2 rows)
@ -417,7 +417,7 @@ JOIN pg_namespace ON (pg_namespace.oid = pronamespace and nspname = 'function_te
WHERE proname = 'sum2';
$$);
run_command_on_workers
----------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(functionuser,function_tests,sum2)")
(localhost,57638,t,"(functionuser,function_tests,sum2)")
(2 rows)
@ -427,20 +427,20 @@ $$);
ALTER FUNCTION add(int,int) SET SCHEMA function_tests2;
SELECT public.verify_function_is_same_on_workers('function_tests2.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT * FROM run_command_on_workers('SELECT function_tests.add(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+----------------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: function function_tests.add(integer, integer) does not exist
localhost | 57638 | f | ERROR: function function_tests.add(integer, integer) does not exist
(2 rows)
SELECT * FROM run_command_on_workers('SELECT function_tests2.add(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 5
localhost | 57638 | t | 5
(2 rows)
@ -455,13 +455,13 @@ AS 'select $1 * $2;' -- I know, this is not an add, but the output will tell us
RETURNS NULL ON NULL INPUT;
SELECT public.verify_function_is_same_on_workers('function_tests.add(int,int)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT * FROM run_command_on_workers('SELECT function_tests.add(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 6
localhost | 57638 | t | 6
(2 rows)
@ -478,7 +478,7 @@ DROP FUNCTION add(int,int);
-- call should fail as function should have been dropped
SELECT * FROM run_command_on_workers('SELECT function_tests.add(2,3);') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+----------------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: function function_tests.add(integer, integer) does not exist
localhost | 57638 | f | ERROR: function function_tests.add(integer, integer) does not exist
(2 rows)
@ -487,7 +487,7 @@ DROP AGGREGATE function_tests2.sum2(int);
-- call should fail as aggregate should have been dropped
SELECT * FROM run_command_on_workers('SELECT function_tests2.sum2(id) FROM (select 1 id, 2) subq;') ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+---------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: function function_tests2.sum2(integer) does not exist
localhost | 57638 | f | ERROR: function function_tests2.sum2(integer) does not exist
(2 rows)
@ -495,8 +495,6 @@ SELECT * FROM run_command_on_workers('SELECT function_tests2.sum2(id) FROM (sele
-- postgres doesn't accept parameter names in the regprocedure input
SELECT create_distributed_function('add_with_param_names(val1 int, int)', 'val1');
ERROR: syntax error at or near "int"
LINE 1: SELECT create_distributed_function('add_with_param_names(val...
^
CONTEXT: invalid type name "val1 int"
-- invalid distribution_arg_name
SELECT create_distributed_function('add_with_param_names(int, int)', distribution_arg_name:='test');
@ -541,7 +539,7 @@ HINT: Either provide a valid function argument name or a valid "$paramIndex" to
BEGIN;
SELECT create_distributed_function('add_with_param_names(int, int)', distribution_arg_name:='val1');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -549,7 +547,7 @@ ROLLBACK;
-- make sure that none of the nodes have the function because we've rollbacked
SELECT run_command_on_workers($$SELECT count(*) FROM pg_proc WHERE proname='add_with_param_names';$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,0)
(localhost,57638,t,0)
(2 rows)
@ -557,28 +555,28 @@ SELECT run_command_on_workers($$SELECT count(*) FROM pg_proc WHERE proname='add_
-- make sure that none of the active and primary nodes hasmetadata
select bool_or(hasmetadata) from pg_dist_node WHERE isactive AND noderole = 'primary';
bool_or
---------
---------------------------------------------------------------------
t
(1 row)
-- valid distribution with distribution_arg_name
SELECT create_distributed_function('add_with_param_names(int, int)', distribution_arg_name:='val1');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
-- make sure that the primary nodes are now metadata synced
select bool_and(hasmetadata) from pg_dist_node WHERE isactive AND noderole = 'primary';
bool_and
----------
---------------------------------------------------------------------
t
(1 row)
-- make sure that both of the nodes have the function because we've succeeded
SELECT run_command_on_workers($$SELECT count(*) FROM pg_proc WHERE proname='add_with_param_names';$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,1)
(localhost,57638,t,1)
(2 rows)
@ -586,14 +584,14 @@ SELECT run_command_on_workers($$SELECT count(*) FROM pg_proc WHERE proname='add_
-- valid distribution with distribution_arg_name -- case insensitive
SELECT create_distributed_function('add_with_param_names(int, int)', distribution_arg_name:='VaL1');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
-- valid distribution with distribution_arg_index
SELECT create_distributed_function('add_with_param_names(int, int)','$1');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -603,7 +601,7 @@ CREATE TABLE replicated_table_func_test (a int);
SET citus.replication_model TO "statement";
SELECT create_distributed_table('replicated_table_func_test', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -613,7 +611,7 @@ DETAIL: Citus currently only supports colocating function with distributed tabl
HINT: When distributing tables make sure that citus.replication_model = 'streaming'
SELECT public.wait_until_metadata_sync();
wait_until_metadata_sync
--------------------------
---------------------------------------------------------------------
(1 row)
@ -624,13 +622,13 @@ CREATE TABLE replicated_table_func_test_2 (a bigint);
SET citus.replication_model TO "streaming";
SELECT create_distributed_table('replicated_table_func_test_2', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_function('add_with_param_names(int, int)', 'val1', colocate_with:='replicated_table_func_test_2');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -645,7 +643,7 @@ ERROR: relation replicated_table_func_test_3 is not distributed
-- a function cannot be colocated with a reference table
SELECT create_reference_table('replicated_table_func_test_3');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -657,13 +655,13 @@ CREATE TABLE replicated_table_func_test_4 (a int);
SET citus.replication_model TO "streaming";
SELECT create_distributed_table('replicated_table_func_test_4', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_function('add_with_param_names(int, int)', '$1', colocate_with:='replicated_table_func_test_4');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -673,7 +671,7 @@ FROM pg_dist_partition, citus.pg_dist_object as objects
WHERE pg_dist_partition.logicalrelid = 'replicated_table_func_test_4'::regclass AND
objects.objid = 'add_with_param_names(int, int)'::regprocedure;
table_and_function_colocated
------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -681,7 +679,7 @@ WHERE pg_dist_partition.logicalrelid = 'replicated_table_func_test_4'::regclass
-- group preserved, because we're using the default shard creation settings
SELECT create_distributed_function('add_with_param_names(int, int)', 'val1');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -690,7 +688,7 @@ FROM pg_dist_partition, citus.pg_dist_object as objects
WHERE pg_dist_partition.logicalrelid = 'replicated_table_func_test_4'::regclass AND
objects.objid = 'add_with_param_names(int, int)'::regprocedure;
table_and_function_colocated
------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -700,7 +698,7 @@ WHERE pg_dist_partition.logicalrelid = 'replicated_table_func_test_4'::regclass
-- to coerce the values
SELECT create_distributed_function('add_numeric(numeric, numeric)', '$1', colocate_with:='replicated_table_func_test_4');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -709,13 +707,13 @@ FROM pg_dist_partition, citus.pg_dist_object as objects
WHERE pg_dist_partition.logicalrelid = 'replicated_table_func_test_4'::regclass AND
objects.objid = 'add_numeric(numeric, numeric)'::regprocedure;
table_and_function_colocated
------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT create_distributed_function('add_text(text, text)', '$1', colocate_with:='replicated_table_func_test_4');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -724,7 +722,7 @@ FROM pg_dist_partition, citus.pg_dist_object as objects
WHERE pg_dist_partition.logicalrelid = 'replicated_table_func_test_4'::regclass AND
objects.objid = 'add_text(text, text)'::regprocedure;
table_and_function_colocated
------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -741,7 +739,7 @@ HINT: Provide a distributed table via "colocate_with" option to create_distribu
-- sync metadata to workers for consistent results when clearing objects
SELECT public.wait_until_metadata_sync();
wait_until_metadata_sync
--------------------------
---------------------------------------------------------------------
(1 row)
@ -750,7 +748,7 @@ SET citus.shard_count TO 4;
CREATE TABLE test (id int, name text);
SELECT create_distributed_table('test','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -765,7 +763,7 @@ END;
$$ LANGUAGE plpgsql;
SELECT create_distributed_function('increment(int)', '$1', colocate_with := 'test');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -780,13 +778,13 @@ END;
$$ LANGUAGE plpgsql;
SELECT test_func_calls_dist_func();
test_func_calls_dist_func
---------------------------
---------------------------------------------------------------------
(1 row)
SELECT test_func_calls_dist_func();
test_func_calls_dist_func
---------------------------
---------------------------------------------------------------------
(1 row)
@ -794,7 +792,7 @@ SELECT test_func_calls_dist_func();
INSERT INTO test SELECT increment(3);
SELECT * FROM test ORDER BY id;
id | name
----+-------
---------------------------------------------------------------------
3 | three
4 |
(2 rows)
@ -806,7 +804,7 @@ DROP SCHEMA function_tests2 CASCADE;
-- clear objects
SELECT stop_metadata_sync_to_node(nodename,nodeport) FROM pg_dist_node WHERE isactive AND noderole = 'primary';
stop_metadata_sync_to_node
----------------------------
---------------------------------------------------------------------
(2 rows)
@ -829,7 +827,7 @@ DROP SCHEMA function_tests2 CASCADE;
DROP USER functionuser;
SELECT run_command_on_workers($$DROP USER functionuser$$);
run_command_on_workers
---------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP ROLE")
(localhost,57638,t,"DROP ROLE")
(2 rows)

@ -3,7 +3,7 @@
CREATE SCHEMA proc_conflict;
SELECT run_command_on_workers($$CREATE SCHEMA proc_conflict;$$);
run_command_on_workers
-------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE SCHEMA")
(localhost,57638,t,"CREATE SCHEMA")
(2 rows)
@ -32,7 +32,7 @@ CREATE AGGREGATE existing_agg(int) (
);
SELECT create_distributed_function('existing_agg(int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -45,7 +45,7 @@ WITH data (val) AS (
)
SELECT existing_agg(val) FROM data;
existing_agg
--------------
---------------------------------------------------------------------
78
(1 row)
@ -58,7 +58,7 @@ WITH data (val) AS (
)
SELECT existing_agg(val) FROM data;
existing_agg
--------------
---------------------------------------------------------------------
78
(1 row)
@ -91,7 +91,7 @@ CREATE AGGREGATE existing_agg(int) (
);
SELECT create_distributed_function('existing_agg(int)');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -104,7 +104,7 @@ WITH data (val) AS (
)
SELECT existing_agg(val) FROM data;
existing_agg
--------------
---------------------------------------------------------------------
76
(1 row)
@ -117,7 +117,7 @@ WITH data (val) AS (
)
SELECT existing_agg(val) FROM data;
existing_agg
--------------
---------------------------------------------------------------------
76
(1 row)
@ -129,13 +129,13 @@ END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
SELECT worker_create_or_replace_object('CREATE AGGREGATE proc_conflict.existing_agg(integer) (STYPE = integer,SFUNC = proc_conflict.existing_func2)');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT worker_create_or_replace_object('CREATE AGGREGATE proc_conflict.existing_agg(integer) (STYPE = integer,SFUNC = proc_conflict.existing_func2)');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
f
(1 row)

@ -4,7 +4,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE USER procedureuser;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -25,7 +25,7 @@ ALTER SYSTEM SET citus.metadata_sync_interval TO 3000;
ALTER SYSTEM SET citus.metadata_sync_retry_interval TO 500;
SELECT pg_reload_conf();
pg_reload_conf
----------------
---------------------------------------------------------------------
t
(1 row)
@ -39,32 +39,32 @@ SET citus.replication_model TO 'streaming';
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('colocation_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_function('raise_info(text)', '$1', colocate_with := 'colocation_table');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT wait_until_metadata_sync();
wait_until_metadata_sync
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM run_command_on_workers($$CALL procedure_tests.raise_info('hello');$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | CALL
localhost | 57638 | t | CALL
(2 rows)
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -74,14 +74,14 @@ SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(tex
ALTER PROCEDURE raise_info(text) SECURITY INVOKER;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER PROCEDURE raise_info(text) SECURITY DEFINER;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -89,28 +89,28 @@ SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(tex
ALTER PROCEDURE raise_info(text) SET client_min_messages TO warning;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER PROCEDURE raise_info(text) SET client_min_messages TO error;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER PROCEDURE raise_info(text) SET client_min_messages TO debug;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
ALTER PROCEDURE raise_info(text) RESET client_min_messages;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -118,20 +118,20 @@ SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(tex
ALTER PROCEDURE raise_info(text) RENAME TO raise_info2;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info2(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT * FROM run_command_on_workers($$CALL procedure_tests.raise_info('hello');$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+----------------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: procedure procedure_tests.raise_info(unknown) does not exist
localhost | 57638 | f | ERROR: procedure procedure_tests.raise_info(unknown) does not exist
(2 rows)
SELECT * FROM run_command_on_workers($$CALL procedure_tests.raise_info2('hello');$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | CALL
localhost | 57638 | t | CALL
(2 rows)
@ -141,7 +141,7 @@ ALTER PROCEDURE raise_info2(text) RENAME TO raise_info;
ALTER PROCEDURE raise_info(text) OWNER TO procedureuser;
SELECT public.verify_function_is_same_on_workers('procedure_tests.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -153,7 +153,7 @@ JOIN pg_namespace ON (pg_namespace.oid = pronamespace)
WHERE proname = 'raise_info';
$$);
run_command_on_workers
------------------------------------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(procedureuser,procedure_tests,raise_info)")
(localhost,57638,t,"(procedureuser,procedure_tests,raise_info)")
(2 rows)
@ -163,20 +163,20 @@ $$);
ALTER PROCEDURE raise_info(text) SET SCHEMA procedure_tests2;
SELECT public.verify_function_is_same_on_workers('procedure_tests2.raise_info(text)');
verify_function_is_same_on_workers
------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT * FROM run_command_on_workers($$CALL procedure_tests.raise_info('hello');$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+----------------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: procedure procedure_tests.raise_info(unknown) does not exist
localhost | 57638 | f | ERROR: procedure procedure_tests.raise_info(unknown) does not exist
(2 rows)
SELECT * FROM run_command_on_workers($$CALL procedure_tests2.raise_info('hello');$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | CALL
localhost | 57638 | t | CALL
(2 rows)
@ -186,7 +186,7 @@ DROP PROCEDURE raise_info(text);
-- call should fail as procedure should have been dropped
SELECT * FROM run_command_on_workers($$CALL procedure_tests.raise_info('hello');$$) ORDER BY 1,2;
nodename | nodeport | success | result
-----------+----------+---------+----------------------------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: procedure procedure_tests.raise_info(unknown) does not exist
localhost | 57638 | f | ERROR: procedure procedure_tests.raise_info(unknown) does not exist
(2 rows)
@ -195,7 +195,7 @@ SET client_min_messages TO error; -- suppress cascading objects dropping
DROP SCHEMA procedure_tests CASCADE;
SELECT run_command_on_workers($$DROP SCHEMA procedure_tests CASCADE;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP SCHEMA")
(localhost,57638,t,"DROP SCHEMA")
(2 rows)
@ -203,7 +203,7 @@ SELECT run_command_on_workers($$DROP SCHEMA procedure_tests CASCADE;$$);
DROP SCHEMA procedure_tests2 CASCADE;
SELECT run_command_on_workers($$DROP SCHEMA procedure_tests2 CASCADE;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP SCHEMA")
(localhost,57638,t,"DROP SCHEMA")
(2 rows)
@ -211,7 +211,7 @@ SELECT run_command_on_workers($$DROP SCHEMA procedure_tests2 CASCADE;$$);
DROP USER procedureuser;
SELECT run_command_on_workers($$DROP USER procedureuser;$$);
run_command_on_workers
---------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP ROLE")
(localhost,57638,t,"DROP ROLE")
(2 rows)

@ -4,7 +4,7 @@ NOTICE: not propagating CREATE ROLE/USER commands to worker nodes
HINT: Connect to worker nodes directly to manually create all necessary users and roles.
SELECT run_command_on_workers($$CREATE USER typeuser;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE ROLE")
(localhost,57638,t,"CREATE ROLE")
(2 rows)
@ -18,14 +18,14 @@ CREATE TYPE tc1 AS (a int, b int);
CREATE TABLE t1 (a int PRIMARY KEY, b tc1);
SELECT create_distributed_table('t1','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO t1 VALUES (1, (2,3)::tc1);
SELECT * FROM t1;
a | b
---+-------
---------------------------------------------------------------------
1 | (2,3)
(1 row)
@ -38,14 +38,14 @@ CREATE TYPE te1 AS ENUM ('one', 'two', 'three');
CREATE TABLE t2 (a int PRIMARY KEY, b te1);
SELECT create_distributed_table('t2','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO t2 VALUES (1, 'two');
SELECT * FROM t2;
a | b
---+-----
---------------------------------------------------------------------
1 | two
(1 row)
@ -56,7 +56,7 @@ ALTER TYPE te1_newname ADD VALUE 'four';
UPDATE t2 SET b = 'four';
SELECT * FROM t2;
a | b
---+------
---------------------------------------------------------------------
1 | four
(1 row)
@ -69,14 +69,14 @@ CREATE TYPE tc2 AS (a int, b int);
CREATE TABLE t3 (a int PRIMARY KEY, b tc2);
SELECT create_distributed_table('t3','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO t3 VALUES (4, (5,6)::tc2);
SELECT * FROM t3;
a | b
---+-------
---------------------------------------------------------------------
4 | (5,6)
(1 row)
@ -87,14 +87,14 @@ CREATE TYPE te2 AS ENUM ('yes', 'no');
CREATE TABLE t4 (a int PRIMARY KEY, b te2);
SELECT create_distributed_table('t4','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO t4 VALUES (1, 'yes');
SELECT * FROM t4;
a | b
---+-----
---------------------------------------------------------------------
1 | yes
(1 row)
@ -103,13 +103,13 @@ COMMIT;
-- verify order of enum labels
SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'type_tests.te2'::regtype;
string_agg
------------
---------------------------------------------------------------------
yes,no
(1 row)
SELECT run_command_on_workers($$SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'type_tests.te2'::regtype;$$);
run_command_on_workers
------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"yes,no")
(localhost,57638,t,"yes,no")
(2 rows)
@ -125,7 +125,7 @@ RESET citus.enable_ddl_propagation;
CREATE TABLE t5 (a int PRIMARY KEY, b tc5[], c te3);
SELECT create_distributed_table('t5','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -145,7 +145,7 @@ INSERT INTO t5 VALUES (1, NULL, 'a', 'd', (1,2,(4,5)::tc6c)::tc6);
ALTER TYPE tc6 RENAME ATTRIBUTE b TO d;
SELECT (e::tc6).d FROM t5 ORDER BY 1;
d
---
---------------------------------------------------------------------
2
(1 row)
@ -153,13 +153,13 @@ SELECT (e::tc6).d FROM t5 ORDER BY 1;
ALTER TYPE te4 OWNER TO typeuser;
SELECT typname, usename FROM pg_type, pg_user where typname = 'te4' and typowner = usesysid;
typname | usename
---------+----------
---------------------------------------------------------------------
te4 | typeuser
(1 row)
SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_user where typname = 'te4' and typowner = usesysid;$$);
run_command_on_workers
--------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(te4,typeuser)")
(localhost,57638,t,"(te4,typeuser)")
(2 rows)
@ -167,13 +167,13 @@ SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_us
ALTER TYPE tc6 OWNER TO typeuser;
SELECT typname, usename FROM pg_type, pg_user where typname = 'tc6' and typowner = usesysid;
typname | usename
---------+----------
---------------------------------------------------------------------
tc6 | typeuser
(1 row)
SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_user where typname = 'tc6' and typowner = usesysid;$$);
run_command_on_workers
--------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(tc6,typeuser)")
(localhost,57638,t,"(tc6,typeuser)")
(2 rows)
@ -191,7 +191,7 @@ RESET citus.enable_ddl_propagation;
CREATE TABLE t6 (a int, b tc8, c te6);
SELECT create_distributed_table('t6', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -199,52 +199,52 @@ RESET ROLE;
-- test ownership of all types
SELECT typname, usename FROM pg_type, pg_user where typname = 'tc7' and typowner = usesysid;
typname | usename
---------+----------
---------------------------------------------------------------------
tc7 | typeuser
(1 row)
SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_user where typname = 'tc7' and typowner = usesysid;$$);
run_command_on_workers
--------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(tc7,typeuser)")
(localhost,57638,t,"(tc7,typeuser)")
(2 rows)
SELECT typname, usename FROM pg_type, pg_user where typname = 'te5' and typowner = usesysid;
typname | usename
---------+----------
---------------------------------------------------------------------
te5 | typeuser
(1 row)
SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_user where typname = 'te5' and typowner = usesysid;$$);
run_command_on_workers
--------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(te5,typeuser)")
(localhost,57638,t,"(te5,typeuser)")
(2 rows)
SELECT typname, usename FROM pg_type, pg_user where typname = 'tc8' and typowner = usesysid;
typname | usename
---------+----------
---------------------------------------------------------------------
tc8 | typeuser
(1 row)
SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_user where typname = 'tc8' and typowner = usesysid;$$);
run_command_on_workers
--------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(tc8,typeuser)")
(localhost,57638,t,"(tc8,typeuser)")
(2 rows)
SELECT typname, usename FROM pg_type, pg_user where typname = 'te6' and typowner = usesysid;
typname | usename
---------+----------
---------------------------------------------------------------------
te6 | typeuser
(1 row)
SELECT run_command_on_workers($$SELECT row(typname, usename) FROM pg_type, pg_user where typname = 'te6' and typowner = usesysid;$$);
run_command_on_workers
--------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"(te6,typeuser)")
(localhost,57638,t,"(te6,typeuser)")
(2 rows)
@ -258,12 +258,12 @@ NOTICE: drop cascades to column b of table t5
-- test if the types are deleted
SELECT typname FROM pg_type, pg_user where typname IN ('te3','tc3','tc4','tc5') and typowner = usesysid ORDER BY typname;
typname
---------
---------------------------------------------------------------------
(0 rows)
SELECT run_command_on_workers($$SELECT typname FROM pg_type, pg_user where typname IN ('te3','tc3','tc4','tc5') and typowner = usesysid ORDER BY typname;$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,"")
(localhost,57638,t,"")
(2 rows)
@ -302,7 +302,7 @@ CREATE TYPE distributed_enum_type AS ENUM ('a', 'c');
CREATE TABLE type_proc (a int, b distributed_composite_type, c distributed_enum_type);
SELECT create_distributed_table('type_proc','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -331,13 +331,13 @@ CREATE TYPE feature_flag_enum_type AS ENUM ('a', 'b');
-- verify types do not exist on workers
SELECT count(*) FROM pg_type where typname IN ('feature_flag_composite_type', 'feature_flag_enum_type');
count
-------
---------------------------------------------------------------------
2
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM pg_type where typname IN ('feature_flag_composite_type', 'feature_flag_enum_type');$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,0)
(localhost,57638,t,0)
(2 rows)
@ -346,19 +346,19 @@ SELECT run_command_on_workers($$SELECT count(*) FROM pg_type where typname IN ('
CREATE TABLE feature_flag_table (a int PRIMARY KEY, b feature_flag_composite_type, c feature_flag_enum_type);
SELECT create_distributed_table('feature_flag_table','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_type where typname IN ('feature_flag_composite_type', 'feature_flag_enum_type');
count
-------
---------------------------------------------------------------------
2
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM pg_type where typname IN ('feature_flag_composite_type', 'feature_flag_enum_type');$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,2)
(localhost,57638,t,2)
(2 rows)
@ -369,7 +369,7 @@ SET client_min_messages TO error; -- suppress cascading objects dropping
DROP SCHEMA type_tests CASCADE;
SELECT run_command_on_workers($$DROP SCHEMA type_tests CASCADE;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP SCHEMA")
(localhost,57638,t,"DROP SCHEMA")
(2 rows)
@ -377,7 +377,7 @@ SELECT run_command_on_workers($$DROP SCHEMA type_tests CASCADE;$$);
DROP SCHEMA type_tests2 CASCADE;
SELECT run_command_on_workers($$DROP SCHEMA type_tests2 CASCADE;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP SCHEMA")
(localhost,57638,t,"DROP SCHEMA")
(2 rows)
@ -385,7 +385,7 @@ SELECT run_command_on_workers($$DROP SCHEMA type_tests2 CASCADE;$$);
DROP USER typeuser;
SELECT run_command_on_workers($$DROP USER typeuser;$$);
run_command_on_workers
---------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP ROLE")
(localhost,57638,t,"DROP ROLE")
(2 rows)

@ -2,7 +2,7 @@ SET citus.next_shard_id TO 20020000;
CREATE SCHEMA type_conflict;
SELECT run_command_on_workers($$CREATE SCHEMA type_conflict;$$);
run_command_on_workers
-------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"CREATE SCHEMA")
(localhost,57638,t,"CREATE SCHEMA")
(2 rows)
@ -34,14 +34,14 @@ SET search_path TO type_conflict;
AND attnum > 0
ORDER BY attnum;
relname | attname | typname
-------------+---------+----------------------------------
---------------------------------------------------------------------
local_table | a | int4
local_table | b | my_precious_type(citus_backup_0)
(2 rows)
SELECT * FROM local_table;
a | b
----+----------------------------
---------------------------------------------------------------------
42 | ("always bring a towel",t)
(1 row)
@ -50,37 +50,37 @@ SET search_path TO type_conflict;
-- make sure worker_create_or_replace correctly generates new names while types are existing
SELECT worker_create_or_replace_object('CREATE TYPE type_conflict.multi_conflicting_type AS (a int, b int);');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT worker_create_or_replace_object('CREATE TYPE type_conflict.multi_conflicting_type AS (a int, b int, c int);');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT worker_create_or_replace_object('CREATE TYPE type_conflict.multi_conflicting_type AS (a int, b int, c int, d int);');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT worker_create_or_replace_object('CREATE TYPE type_conflict.multi_conflicting_type_with_a_really_long_name_that_truncates AS (a int, b int);');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT worker_create_or_replace_object('CREATE TYPE type_conflict.multi_conflicting_type_with_a_really_long_name_that_truncates AS (a int, b int, c int);');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT worker_create_or_replace_object('CREATE TYPE type_conflict.multi_conflicting_type_with_a_really_long_name_that_truncates AS (a int, b int, c int, d int);');
worker_create_or_replace_object
---------------------------------
---------------------------------------------------------------------
t
(1 row)
@ -94,7 +94,7 @@ FROM pg_attribute
WHERE pg_type.typname LIKE 'multi_conflicting_type%'
GROUP BY pg_type.typname;
typname | fields
-----------------------------------------------------------------+--------------------------------
---------------------------------------------------------------------
multi_conflicting_type | a int4, b int4, c int4, d int4
multi_conflicting_type(citus_backup_0) | a int4, b int4
multi_conflicting_type(citus_backup_1) | a int4, b int4, c int4

@ -1,7 +1,7 @@
SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int > 11 AS version_above_eleven;
version_above_eleven
----------------------
---------------------------------------------------------------------
t
(1 row)
@ -15,14 +15,14 @@ CREATE TYPE xact_enum_edit AS ENUM ('yes', 'no');
CREATE TABLE t1 (a int PRIMARY KEY, b xact_enum_edit);
SELECT create_distributed_table('t1','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO t1 VALUES (1, 'yes');
SELECT * FROM t1;
a | b
---+-----
---------------------------------------------------------------------
1 | yes
(1 row)
@ -33,13 +33,13 @@ ABORT;
-- maybe should not be on the workers
SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;
string_agg
------------
---------------------------------------------------------------------
yes,no
(1 row)
SELECT run_command_on_workers($$SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;$$);
run_command_on_workers
------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"yes,no")
(localhost,57638,t,"yes,no")
(2 rows)
@ -50,13 +50,13 @@ COMMIT;
-- maybe should be on the workers (pg12 and above)
SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;
string_agg
--------------
---------------------------------------------------------------------
yes,no,maybe
(1 row)
SELECT run_command_on_workers($$SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;$$);
run_command_on_workers
------------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"yes,no,maybe")
(localhost,57638,t,"yes,no,maybe")
(2 rows)
@ -66,7 +66,7 @@ SET client_min_messages TO error; -- suppress cascading objects dropping
DROP SCHEMA xact_enum_type CASCADE;
SELECT run_command_on_workers($$DROP SCHEMA xact_enum_type CASCADE;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP SCHEMA")
(localhost,57638,t,"DROP SCHEMA")
(2 rows)

@ -1,7 +1,7 @@
SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int > 11 AS version_above_eleven;
version_above_eleven
----------------------
---------------------------------------------------------------------
f
(1 row)
@ -15,14 +15,14 @@ CREATE TYPE xact_enum_edit AS ENUM ('yes', 'no');
CREATE TABLE t1 (a int PRIMARY KEY, b xact_enum_edit);
SELECT create_distributed_table('t1','a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO t1 VALUES (1, 'yes');
SELECT * FROM t1;
a | b
---+-----
---------------------------------------------------------------------
1 | yes
(1 row)
@ -34,13 +34,13 @@ ABORT;
-- maybe should not be on the workers
SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;
string_agg
------------
---------------------------------------------------------------------
yes,no
(1 row)
SELECT run_command_on_workers($$SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;$$);
run_command_on_workers
------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"yes,no")
(localhost,57638,t,"yes,no")
(2 rows)
@ -52,13 +52,13 @@ COMMIT;
-- maybe should be on the workers (pg12 and above)
SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;
string_agg
------------
---------------------------------------------------------------------
yes,no
(1 row)
SELECT run_command_on_workers($$SELECT string_agg(enumlabel, ',' ORDER BY enumsortorder ASC) FROM pg_enum WHERE enumtypid = 'xact_enum_type.xact_enum_edit'::regtype;$$);
run_command_on_workers
------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"yes,no")
(localhost,57638,t,"yes,no")
(2 rows)
@ -68,7 +68,7 @@ SET client_min_messages TO error; -- suppress cascading objects dropping
DROP SCHEMA xact_enum_type CASCADE;
SELECT run_command_on_workers($$DROP SCHEMA xact_enum_type CASCADE;$$);
run_command_on_workers
-----------------------------------
---------------------------------------------------------------------
(localhost,57637,t,"DROP SCHEMA")
(localhost,57638,t,"DROP SCHEMA")
(2 rows)

@ -4,21 +4,21 @@ SET citus.next_shard_id TO 2370000;
CREATE TABLE recursive_dml_queries.distributed_table (tenant_id text, dept int, info jsonb);
SELECT create_distributed_table('distributed_table', 'tenant_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE recursive_dml_queries.second_distributed_table (tenant_id text, dept int, info jsonb);
SELECT create_distributed_table('second_distributed_table', 'tenant_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE recursive_dml_queries.reference_table (id text, name text);
SELECT create_reference_table('reference_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -52,10 +52,10 @@ WHERE
foo.avg_tenant_id::int::text = reference_table.id
RETURNING
reference_table.name;
DEBUG: generating subplan 4_1 for subquery SELECT avg((tenant_id)::integer) AS avg_tenant_id FROM recursive_dml_queries.second_distributed_table
DEBUG: Plan 4 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.reference_table SET name = ('new_'::text OPERATOR(pg_catalog.||) reference_table.name) FROM (SELECT intermediate_result.avg_tenant_id FROM read_intermediate_result('4_1'::text, 'binary'::citus_copy_format) intermediate_result(avg_tenant_id numeric)) foo WHERE (((foo.avg_tenant_id)::integer)::text OPERATOR(pg_catalog.=) reference_table.id) RETURNING reference_table.name
DEBUG: generating subplan XXX_1 for subquery SELECT avg((tenant_id)::integer) AS avg_tenant_id FROM recursive_dml_queries.second_distributed_table
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.reference_table SET name = ('new_'::text OPERATOR(pg_catalog.||) reference_table.name) FROM (SELECT intermediate_result.avg_tenant_id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(avg_tenant_id numeric)) foo WHERE (((foo.avg_tenant_id)::integer)::text OPERATOR(pg_catalog.=) reference_table.id) RETURNING reference_table.name
name
-------------
---------------------------------------------------------------------
new_user_50
(1 row)
@ -85,10 +85,10 @@ WHERE
AND second_distributed_table.dept IN (2)
RETURNING
second_distributed_table.tenant_id, second_distributed_table.dept;
DEBUG: generating subplan 6_1 for subquery SELECT DISTINCT ON (tenant_id) tenant_id, max(dept) AS max_dept FROM (SELECT second_distributed_table.dept, second_distributed_table.tenant_id FROM recursive_dml_queries.second_distributed_table, recursive_dml_queries.distributed_table WHERE (distributed_table.tenant_id OPERATOR(pg_catalog.=) second_distributed_table.tenant_id)) foo_inner GROUP BY tenant_id ORDER BY tenant_id DESC
DEBUG: Plan 6 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.second_distributed_table SET dept = (foo.max_dept OPERATOR(pg_catalog.*) 2) FROM (SELECT intermediate_result.tenant_id, intermediate_result.max_dept FROM read_intermediate_result('6_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text, max_dept integer)) foo WHERE ((foo.tenant_id OPERATOR(pg_catalog.<>) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) 2)) RETURNING second_distributed_table.tenant_id, second_distributed_table.dept
DEBUG: generating subplan XXX_1 for subquery SELECT DISTINCT ON (tenant_id) tenant_id, max(dept) AS max_dept FROM (SELECT second_distributed_table.dept, second_distributed_table.tenant_id FROM recursive_dml_queries.second_distributed_table, recursive_dml_queries.distributed_table WHERE (distributed_table.tenant_id OPERATOR(pg_catalog.=) second_distributed_table.tenant_id)) foo_inner GROUP BY tenant_id ORDER BY tenant_id DESC
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.second_distributed_table SET dept = (foo.max_dept OPERATOR(pg_catalog.*) 2) FROM (SELECT intermediate_result.tenant_id, intermediate_result.max_dept FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text, max_dept integer)) foo WHERE ((foo.tenant_id OPERATOR(pg_catalog.<>) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) 2)) RETURNING second_distributed_table.tenant_id, second_distributed_table.dept
tenant_id | dept
-----------+------
---------------------------------------------------------------------
12 | 18
2 | 18
22 | 18
@@ -135,9 +135,9 @@ FROM
WHERE
foo.tenant_id != second_distributed_table.tenant_id
AND second_distributed_table.dept IN (3);
DEBUG: generating subplan 8_1 for subquery SELECT second_distributed_table.tenant_id FROM recursive_dml_queries.second_distributed_table, recursive_dml_queries.distributed_table WHERE ((distributed_table.tenant_id OPERATOR(pg_catalog.=) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) ANY (ARRAY[4, 5])))
DEBUG: generating subplan 8_2 for subquery SELECT DISTINCT foo_inner_1.tenant_id FROM (SELECT second_distributed_table.dept, second_distributed_table.tenant_id FROM recursive_dml_queries.second_distributed_table, recursive_dml_queries.distributed_table WHERE ((distributed_table.tenant_id OPERATOR(pg_catalog.=) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) ANY (ARRAY[3, 4])))) foo_inner_1, (SELECT intermediate_result.tenant_id FROM read_intermediate_result('8_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text)) foo_inner_2 WHERE (foo_inner_1.tenant_id OPERATOR(pg_catalog.<>) foo_inner_2.tenant_id)
DEBUG: Plan 8 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.second_distributed_table SET dept = ((foo.tenant_id)::integer OPERATOR(pg_catalog./) 4) FROM (SELECT intermediate_result.tenant_id FROM read_intermediate_result('8_2'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text)) foo WHERE ((foo.tenant_id OPERATOR(pg_catalog.<>) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) 3))
DEBUG: generating subplan XXX_1 for subquery SELECT second_distributed_table.tenant_id FROM recursive_dml_queries.second_distributed_table, recursive_dml_queries.distributed_table WHERE ((distributed_table.tenant_id OPERATOR(pg_catalog.=) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) ANY (ARRAY[4, 5])))
DEBUG: generating subplan XXX_2 for subquery SELECT DISTINCT foo_inner_1.tenant_id FROM (SELECT second_distributed_table.dept, second_distributed_table.tenant_id FROM recursive_dml_queries.second_distributed_table, recursive_dml_queries.distributed_table WHERE ((distributed_table.tenant_id OPERATOR(pg_catalog.=) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) ANY (ARRAY[3, 4])))) foo_inner_1, (SELECT intermediate_result.tenant_id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text)) foo_inner_2 WHERE (foo_inner_1.tenant_id OPERATOR(pg_catalog.<>) foo_inner_2.tenant_id)
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.second_distributed_table SET dept = ((foo.tenant_id)::integer OPERATOR(pg_catalog./) 4) FROM (SELECT intermediate_result.tenant_id FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text)) foo WHERE ((foo.tenant_id OPERATOR(pg_catalog.<>) second_distributed_table.tenant_id) AND (second_distributed_table.dept OPERATOR(pg_catalog.=) 3))
-- we currently do not allow local tables in modification queries
UPDATE
distributed_table
@@ -154,10 +154,10 @@ WHERE
foo.avg_tenant_id::int::text = distributed_table.tenant_id
RETURNING
distributed_table.*;
DEBUG: generating subplan 11_1 for subquery SELECT avg((id)::integer) AS avg_tenant_id FROM recursive_dml_queries.local_table
DEBUG: Plan 11 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = (foo.avg_tenant_id)::integer FROM (SELECT intermediate_result.avg_tenant_id FROM read_intermediate_result('11_1'::text, 'binary'::citus_copy_format) intermediate_result(avg_tenant_id numeric)) foo WHERE (((foo.avg_tenant_id)::integer)::text OPERATOR(pg_catalog.=) distributed_table.tenant_id) RETURNING distributed_table.tenant_id, distributed_table.dept, distributed_table.info
DEBUG: generating subplan XXX_1 for subquery SELECT avg((id)::integer) AS avg_tenant_id FROM recursive_dml_queries.local_table
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = (foo.avg_tenant_id)::integer FROM (SELECT intermediate_result.avg_tenant_id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(avg_tenant_id numeric)) foo WHERE (((foo.avg_tenant_id)::integer)::text OPERATOR(pg_catalog.=) distributed_table.tenant_id) RETURNING distributed_table.tenant_id, distributed_table.dept, distributed_table.info
tenant_id | dept | info
-----------+------+------------------------
---------------------------------------------------------------------
50 | 50 | {"f1": 50, "f2": 2500}
(1 row)
@@ -177,10 +177,10 @@ WHERE
foo.avg_tenant_id::int::text = distributed_table.tenant_id
RETURNING
distributed_table.*;
DEBUG: generating subplan 12_1 for subquery SELECT avg((tenant_id)::integer) AS avg_tenant_id FROM (SELECT distributed_table.tenant_id, reference_table.name FROM recursive_dml_queries.distributed_table, recursive_dml_queries.reference_table WHERE ((distributed_table.dept)::text OPERATOR(pg_catalog.=) reference_table.id) ORDER BY reference_table.name DESC, distributed_table.tenant_id DESC) tenant_ids
DEBUG: Plan 12 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = (foo.avg_tenant_id)::integer FROM (SELECT intermediate_result.avg_tenant_id FROM read_intermediate_result('12_1'::text, 'binary'::citus_copy_format) intermediate_result(avg_tenant_id numeric)) foo WHERE (((foo.avg_tenant_id)::integer)::text OPERATOR(pg_catalog.=) distributed_table.tenant_id) RETURNING distributed_table.tenant_id, distributed_table.dept, distributed_table.info
DEBUG: generating subplan XXX_1 for subquery SELECT avg((tenant_id)::integer) AS avg_tenant_id FROM (SELECT distributed_table.tenant_id, reference_table.name FROM recursive_dml_queries.distributed_table, recursive_dml_queries.reference_table WHERE ((distributed_table.dept)::text OPERATOR(pg_catalog.=) reference_table.id) ORDER BY reference_table.name DESC, distributed_table.tenant_id DESC) tenant_ids
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = (foo.avg_tenant_id)::integer FROM (SELECT intermediate_result.avg_tenant_id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(avg_tenant_id numeric)) foo WHERE (((foo.avg_tenant_id)::integer)::text OPERATOR(pg_catalog.=) distributed_table.tenant_id) RETURNING distributed_table.tenant_id, distributed_table.dept, distributed_table.info
tenant_id | dept | info
-----------+------+------------------------
---------------------------------------------------------------------
50 | 50 | {"f1": 50, "f2": 2500}
(1 row)
@@ -213,7 +213,7 @@ foo_inner_1 JOIN LATERAL
ON (foo_inner_2.tenant_id != foo_inner_1.tenant_id)
ORDER BY foo_inner_1.tenant_id;
tenant_id
-----------
---------------------------------------------------------------------
14
24
34
@@ -261,7 +261,7 @@ FROM
ON (foo_inner_2.tenant_id != foo_inner_1.tenant_id)
) as foo
RETURNING *;
DEBUG: generating subplan 15_1 for subquery SELECT dept FROM recursive_dml_queries.second_distributed_table
DEBUG: generating subplan XXX_1 for subquery SELECT dept FROM recursive_dml_queries.second_distributed_table
ERROR: complex joins are only supported when all distributed tables are co-located and joined on their distribution columns
-- again a correlated subquery
-- this time distribution key eq. exists
@@ -297,8 +297,8 @@ ERROR: complex joins are only supported when all distributed tables are co-loca
INSERT INTO
second_distributed_table (tenant_id, dept)
VALUES ('3', (WITH vals AS (SELECT 3) select * from vals));
DEBUG: generating subplan 20_1 for CTE vals: SELECT 3
DEBUG: Plan 20 query after replacing subqueries and CTEs: INSERT INTO recursive_dml_queries.second_distributed_table (tenant_id, dept) VALUES ('3'::text, (SELECT vals."?column?" FROM (SELECT intermediate_result."?column?" FROM read_intermediate_result('20_1'::text, 'binary'::citus_copy_format) intermediate_result("?column?" integer)) vals))
DEBUG: generating subplan XXX_1 for CTE vals: SELECT 3
DEBUG: Plan XXX query after replacing subqueries and CTEs: INSERT INTO recursive_dml_queries.second_distributed_table (tenant_id, dept) VALUES ('3'::text, (SELECT vals."?column?" FROM (SELECT intermediate_result."?column?" FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result("?column?" integer)) vals))
ERROR: subqueries are not supported within INSERT queries
HINT: Try rewriting your queries with 'INSERT INTO ... SELECT' syntax.
INSERT INTO
@@ -321,8 +321,8 @@ UPDATE distributed_table
SET dept = 5
FROM cte_1
WHERE distributed_table.tenant_id < cte_1.tenant_id;
DEBUG: generating subplan 22_1 for CTE cte_1: WITH cte_2 AS (SELECT second_distributed_table.tenant_id AS cte2_id FROM recursive_dml_queries.second_distributed_table WHERE (second_distributed_table.dept OPERATOR(pg_catalog.>=) 2)) UPDATE recursive_dml_queries.distributed_table SET dept = 10 RETURNING tenant_id, dept, info
DEBUG: Plan 22 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = 5 FROM (SELECT intermediate_result.tenant_id, intermediate_result.dept, intermediate_result.info FROM read_intermediate_result('22_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text, dept integer, info jsonb)) cte_1 WHERE (distributed_table.tenant_id OPERATOR(pg_catalog.<) cte_1.tenant_id)
DEBUG: generating subplan XXX_1 for CTE cte_1: WITH cte_2 AS (SELECT second_distributed_table.tenant_id AS cte2_id FROM recursive_dml_queries.second_distributed_table WHERE (second_distributed_table.dept OPERATOR(pg_catalog.>=) 2)) UPDATE recursive_dml_queries.distributed_table SET dept = 10 RETURNING tenant_id, dept, info
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = 5 FROM (SELECT intermediate_result.tenant_id, intermediate_result.dept, intermediate_result.info FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text, dept integer, info jsonb)) cte_1 WHERE (distributed_table.tenant_id OPERATOR(pg_catalog.<) cte_1.tenant_id)
WITH cte_1 AS (
WITH cte_2 AS (
SELECT tenant_id as cte2_id
@@ -337,8 +337,8 @@ UPDATE distributed_table
SET dept = 5
FROM cte_1
WHERE distributed_table.tenant_id < cte_1.tenant_id;
DEBUG: generating subplan 24_1 for CTE cte_1: WITH cte_2 AS (SELECT second_distributed_table.tenant_id AS cte2_id FROM recursive_dml_queries.second_distributed_table WHERE (second_distributed_table.dept OPERATOR(pg_catalog.>=) 2)) UPDATE recursive_dml_queries.distributed_table SET dept = 10 RETURNING tenant_id, dept, info
DEBUG: Plan 24 query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = 5 FROM (SELECT intermediate_result.tenant_id, intermediate_result.dept, intermediate_result.info FROM read_intermediate_result('24_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text, dept integer, info jsonb)) cte_1 WHERE (distributed_table.tenant_id OPERATOR(pg_catalog.<) cte_1.tenant_id)
DEBUG: generating subplan XXX_1 for CTE cte_1: WITH cte_2 AS (SELECT second_distributed_table.tenant_id AS cte2_id FROM recursive_dml_queries.second_distributed_table WHERE (second_distributed_table.dept OPERATOR(pg_catalog.>=) 2)) UPDATE recursive_dml_queries.distributed_table SET dept = 10 RETURNING tenant_id, dept, info
DEBUG: Plan XXX query after replacing subqueries and CTEs: UPDATE recursive_dml_queries.distributed_table SET dept = 5 FROM (SELECT intermediate_result.tenant_id, intermediate_result.dept, intermediate_result.info FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(tenant_id text, dept integer, info jsonb)) cte_1 WHERE (distributed_table.tenant_id OPERATOR(pg_catalog.<) cte_1.tenant_id)
-- we don't support updating local table with a join with
-- distributed tables
UPDATE
View File
@@ -1,17 +1,17 @@
------
---------------------------------------------------------------------
-- THIS TEST SHOULD IDEALLY BE EXECUTED AT THE END OF
-- THE REGRESSION TEST SUITE TO MAKE SURE THAT WE
-- CLEAR ALL INTERMEDIATE RESULTS ON BOTH THE COORDINATOR
-- AND ON THE WORKERS. HOWEVER, WE HAVE SOME ISSUES AROUND
-- WINDOWS SUPPORT SO WE DISABLE THIS TEST ON WINDOWS
------
---------------------------------------------------------------------
SELECT pg_ls_dir('base/pgsql_job_cache') WHERE citus_version() NOT ILIKE '%windows%';
pg_ls_dir
-----------
---------------------------------------------------------------------
(0 rows)
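This file shows the other blanket rule in the commit: every psql separator row, whose dash count tracks the widths of the columns being printed, becomes one fixed 70-dash line, so cosmetic width changes (new PostgreSQL versions, longer values) stop producing diffs. A rule of roughly this shape does it (illustrative, not the committed pattern):

```sed
# Sketch only: fold any header-separator row (dashes, optionally joined by
# "+" between columns) into a single fixed-width line.
s/^-+(\+-+)*$/---------------------------------------------------------------------/
```

The same folding is why the several-hundred-dash `dump_network_traffic` separator later in this commit collapses to the identical line.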
SELECT * FROM run_command_on_workers($$SELECT pg_ls_dir('base/pgsql_job_cache') r WHERE citus_version() NOT ILIKE '%windows%'$$) WHERE result <> '';
nodename | nodeport | success | result
----------+----------+---------+--------
---------------------------------------------------------------------
(0 rows)
View File
@@ -15,7 +15,7 @@ WHERE name = 'uuid-ossp'
-- show that the extension is created on both nodes
SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname = 'uuid-ossp'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,1)
(localhost,57638,t,1)
(2 rows)
@@ -26,7 +26,7 @@ RESET client_min_messages;
-- show that the extension is dropped from both nodes
SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname = 'uuid-ossp'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,0)
(localhost,57638,t,0)
(2 rows)
@@ -34,7 +34,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname
-- show that extension recreation on new nodes works also fine with extension names that require escaping
SELECT 1 from master_remove_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
@@ -51,14 +51,14 @@ WHERE name = 'uuid-ossp'
-- and add the other node
SELECT 1 from master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
-- show that the extension exists on both nodes
SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname = 'uuid-ossp'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,1)
(localhost,57638,t,1)
(2 rows)
View File
@@ -13,14 +13,14 @@ WHERE name = 'uuid-ossp'
\gset
:uuid_present_command;
uuid_ossp_present
-------------------
---------------------------------------------------------------------
f
(1 row)
-- show that the extension is created on both nodes
SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname = 'uuid-ossp'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,0)
(localhost,57638,t,0)
(2 rows)
@@ -32,7 +32,7 @@ RESET client_min_messages;
-- show that the extension is dropped from both nodes
SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname = 'uuid-ossp'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,0)
(localhost,57638,t,0)
(2 rows)
@@ -40,7 +40,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname
-- show that extension recreation on new nodes works also fine with extension names that require escaping
SELECT 1 from master_remove_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
@@ -55,21 +55,21 @@ WHERE name = 'uuid-ossp'
\gset
:uuid_present_command;
uuid_ossp_present
-------------------
---------------------------------------------------------------------
f
(1 row)
-- and add the other node
SELECT 1 from master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
-- show that the extension exists on both nodes
SELECT run_command_on_workers($$SELECT count(*) FROM pg_extension WHERE extname = 'uuid-ossp'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,0)
(localhost,57638,t,0)
(2 rows)
View File
@@ -14,14 +14,14 @@ INSERT INTO test VALUES
SELECT create_reference_table('ref');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test', 'x');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -34,7 +34,7 @@ FROM
WHERE t2.y * 2 = a.a
ORDER BY 1,2,3;
y | x | x | a | b
---+---+---+---+---
---------------------------------------------------------------------
2 | 1 | 1 | 4 | 4
2 | 1 | 2 | 4 | 4
2 | 2 | 1 | 4 | 4
@@ -54,7 +54,7 @@ FROM
WHERE t2.y - a.a - b.b = 0
ORDER BY 1,2,3;
y | x | x | a | b | a | b
---+---+---+---+---+---+---
---------------------------------------------------------------------
(0 rows)
-- The join clause is wider than it used to be, causing this query to be recognized by the LogicalPlanner as a repartition join.

View File

@@ -1,6 +1,6 @@
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -14,26 +14,26 @@ ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART 100;
CREATE TABLE copy_test (key int, value int);
SELECT create_distributed_table('copy_test', 'key', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT citus.dump_network_traffic();
dump_network_traffic
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
(0,coordinator,"[initial message]")
(0,worker,"['AuthenticationOk()', 'ParameterStatus(application_name=citus)', 'ParameterStatus(client_encoding=UTF8)', 'ParameterStatus(DateStyle=ISO, MDY)', 'ParameterStatus(integer_datetimes=on)', 'ParameterStatus(IntervalStyle=postgres)', 'ParameterStatus(is_superuser=on)', 'ParameterStatus(server_encoding=UTF8)', 'ParameterStatus(server_version=XXX)', 'ParameterStatus(session_authorization=postgres)', 'ParameterStatus(standard_conforming_strings=on)', 'ParameterStatus(TimeZone=XXX)', 'BackendKeyData(XXX)', 'ReadyForQuery(state=idle)']")
(0,coordinator,"[""Query(query=SELECT worker_apply_shard_ddl_command (100400, 'CREATE TABLE public.copy_test (key integer, value integer)'))""]")
@@ -59,14 +59,14 @@ SELECT citus.dump_network_traffic();
---- all of the following tests test behavior with 2 shard placements ----
SHOW citus.shard_replication_factor;
citus.shard_replication_factor
--------------------------------
---------------------------------------------------------------------
2
(1 row)
---- kill the connection when we try to create the shard ----
SELECT citus.mitmproxy('conn.onQuery(query="worker_apply_shard_ddl_command").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -74,52 +74,52 @@ COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' W
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
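The failure-test hunks above add a third normalization: ports in `host:port` error context (here `localhost:9060`, the mitmproxy port) become `localhost:xxxxx`, which keeps output stable when the proxy or workers listen elsewhere. Only the colon form is masked, which is why bare `nodeport` values such as `9060` survive in the placement tables. A sketch of the assumed shape, not the committed rule:

```sed
# Sketch only: mask ports in "localhost:<port>" messages; bare port columns
# in query results carry no "localhost:" prefix and are left intact.
s/localhost:[0-9]+/localhost:xxxxx/g
```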
---- kill the connection when we try to start a transaction ----
SELECT citus.mitmproxy('conn.onQuery(query="assign_distributed_transaction_id").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
ERROR: failure on connection marked as essential: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ERROR: failure on connection marked as essential: localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
---- kill the connection when we start the COPY ----
SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -127,60 +127,60 @@ COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' W
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
---- kill the connection when we send the data ----
SELECT citus.mitmproxy('conn.onCopyData().kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
ERROR: failed to COPY to shard 100404 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
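Shard ids in error text get the same placeholder: `failed to COPY to shard 100404` becomes `shard xxxxx`, because ids are allocated from `citus.next_shard_id` and shift whenever earlier tests create shards. The `pg_dist_shard` listings keep their numeric ids, since only the `shard <n>` phrase in messages is matched. Roughly (illustrative):

```sed
# Sketch only: shard ids in error/warning text depend on test ordering.
s/shard [0-9]+/shard xxxxx/g
```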
SELECT citus.mitmproxy('conn.onQuery(query="SELECT|COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(1) FROM copy_test;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
count
-------
---------------------------------------------------------------------
4
(1 row)
---- cancel the connection when we send the data ----
SELECT citus.mitmproxy('conn.onQuery(query="SELECT|COPY").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -190,7 +190,7 @@ SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
@@ -200,7 +200,7 @@ ERROR: canceling statement due to user request
---- kill the connection when we try to get the size of the table ----
SELECT citus.mitmproxy('conn.onQuery(query="pg_table_size").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -208,29 +208,29 @@ COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' W
WARNING: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
ERROR: failure on connection marked as essential: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ERROR: failure on connection marked as essential: localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
---- kill the connection when we try to get the min, max of the table ----
SELECT citus.mitmproxy('conn.onQuery(query="SELECT min\(key\), max\(key\)").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -238,43 +238,43 @@ COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' W
WARNING: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
ERROR: failure on connection marked as essential: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ERROR: failure on connection marked as essential: localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
(2 rows)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
---- kill the connection when we try to COMMIT ----
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: failed to commit transaction on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: failed to commit transaction on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
WHERE (s.shardid = p.shardid) AND s.logicalrelid = 'copy_test'::regclass
ORDER BY placementid;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue | shardid | shardstate | shardlength | nodename | nodeport | placementid
--------------+---------+--------------+---------------+---------------+---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 57637 | 100
copy_test | 100400 | t | 0 | 3 | 100400 | 1 | 8192 | localhost | 9060 | 101
copy_test | 100408 | t | 0 | 3 | 100408 | 1 | 8192 | localhost | 57637 | 112
@@ -283,14 +283,14 @@ SELECT * FROM pg_dist_shard s, pg_dist_shard_placement p
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
8
(1 row)
-- ==== Clean up, we're done here ====
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
View File
@@ -1,6 +1,6 @@
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -15,26 +15,26 @@ ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART 100;
CREATE TABLE copy_test (key int, value int);
SELECT create_distributed_table('copy_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT citus.dump_network_traffic();
dump_network_traffic
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
(0,coordinator,"[initial message]")
(0,worker,"['AuthenticationOk()', 'ParameterStatus(application_name=citus)', 'ParameterStatus(client_encoding=UTF8)', 'ParameterStatus(DateStyle=ISO, MDY)', 'ParameterStatus(integer_datetimes=on)', 'ParameterStatus(IntervalStyle=postgres)', 'ParameterStatus(is_superuser=on)', 'ParameterStatus(server_encoding=UTF8)', 'ParameterStatus(server_version=XXX)', 'ParameterStatus(session_authorization=postgres)', 'ParameterStatus(standard_conforming_strings=on)', 'ParameterStatus(TimeZone=XXX)', 'BackendKeyData(XXX)', 'ReadyForQuery(state=idle)']")
(0,coordinator,"[""Query(query=BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(0, XX, 'XXXX-XX-XX XX:XX:XX.XXXXXX-XX');)""]")
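Note that `server_version=XXX`, `BackendKeyData(XXX)`, and the `XXXX-XX-XX XX:XX:XX.XXXXXX-XX` timestamp are identical on both sides of this hunk: those values were already masked before this PR, and whether that happens in `normalize.sed` or in the traffic-capture tooling is not visible in this diff. For the timestamp, a rule of this shape would yield the placeholder shown (purely hypothetical):

```sed
# Hypothetical sketch: replace run-dependent transaction timestamps,
# e.g. "2020-01-06 09:56:31.123456+01", with the fixed placeholder.
s/[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+[+-][0-9]{2}/XXXX-XX-XX XX:XX:XX.XXXXXX-XX/g
```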
@@ -55,21 +55,21 @@ SELECT citus.dump_network_traffic();
-- the query should abort
SELECT citus.mitmproxy('conn.onQuery(query="assign_distributed_transaction").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY copy_test, line 1: "0, 0"
ERROR: failure on connection marked as essential: localhost:9060
ERROR: failure on connection marked as essential: localhost:xxxxx
CONTEXT: COPY copy_test, line 1: "0, 0"
-- ==== kill the connection when we try to start the COPY ====
-- the query should abort
SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -77,33 +77,33 @@ COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' W
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY copy_test, line 1: "0, 0"
-- ==== kill the connection when we first start sending data ====
-- the query should abort
SELECT citus.mitmproxy('conn.onCopyData().killall()'); -- raw rows from the client
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
ERROR: failed to COPY to shard 100400 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
-- ==== kill the connection when the worker confirms it's received the data ====
-- the query should abort
SELECT citus.mitmproxy('conn.onCommandComplete(command="COPY").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
ERROR: failed to COPY to shard 100400 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
-- ==== kill the connection when we try to send COMMIT ====
-- the query should succeed, and the placement should be marked inactive
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -111,32 +111,32 @@ SELECT count(1) FROM pg_dist_shard_placement WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'copy_test'::regclass
) AND shardstate = 3;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT$").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: failed to commit transaction on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: failed to commit transaction on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- the shard is marked invalid
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -144,13 +144,13 @@ SELECT count(1) FROM pg_dist_shard_placement WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'copy_test'::regclass
) AND shardstate = 3;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(1) FROM copy_test;
count
-------
---------------------------------------------------------------------
8
(1 row)
@@ -170,7 +170,7 @@ CONTEXT: COPY copy_test, line 5: "10"
-- kill the connection if the coordinator sends COMMIT. It doesn't, so nothing changes
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT$").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -179,13 +179,13 @@ ERROR: missing data for column "value"
CONTEXT: COPY copy_test, line 5: "10"
SELECT * FROM copy_test ORDER BY key, value;
key | value
-----+-------
---------------------------------------------------------------------
(0 rows)
-- ==== clean up some more to prepare for tests with only one replica ====
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -195,7 +195,7 @@ SELECT * FROM pg_dist_shard_placement WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'copy_test'::regclass
) ORDER BY nodeport, placementid;
shardid | shardstate | shardlength | nodename | nodeport | placementid
---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
100400 | 1 | 0 | localhost | 9060 | 100
100400 | 3 | 0 | localhost | 57637 | 101
(2 rows)
@@ -204,7 +204,7 @@ SELECT * FROM pg_dist_shard_placement WHERE shardid IN (
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -214,12 +214,12 @@ SELECT * FROM copy_test;
-- the worker is unreachable
SELECT citus.mitmproxy('conn.killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@@ -228,13 +228,13 @@ ERROR: could not connect to any active placements
CONTEXT: COPY copy_test, line 1: "0, 0"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -244,25 +244,25 @@ SELECT * FROM copy_test;
-- the first message fails
SELECT citus.mitmproxy('conn.onQuery(query="assign_distributed_transaction_id").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY copy_test, line 1: "0, 0"
ERROR: failure on connection marked as essential: localhost:9060
ERROR: failure on connection marked as essential: localhost:xxxxx
CONTEXT: COPY copy_test, line 1: "0, 0"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -272,7 +272,7 @@ SELECT * FROM copy_test;
-- the COPY message fails
SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -280,17 +280,17 @@ COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' W
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY copy_test, line 1: "0, 0"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -300,21 +300,21 @@ SELECT * FROM copy_test;
-- the COPY data fails
SELECT citus.mitmproxy('conn.onCopyData().killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
ERROR: failed to COPY to shard 100400 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -324,27 +324,27 @@ SELECT * FROM copy_test;
-- the COMMIT fails
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT$").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: failed to commit transaction on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: failed to commit transaction on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: could not commit transaction for shard 100400 on any active node
CONTEXT: while executing command on localhost:xxxxx
WARNING: could not commit transaction for shard xxxxx on any active node
ERROR: could not commit transaction on any active node
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -356,7 +356,7 @@ SELECT * FROM pg_dist_shard_placement WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'copy_test'::regclass
) ORDER BY nodeport, placementid;
shardid | shardstate | shardlength | nodename | nodeport | placementid
---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
100400 | 1 | 0 | localhost | 9060 | 100
100400 | 3 | 0 | localhost | 57637 | 101
(2 rows)
@@ -364,21 +364,21 @@ SELECT * FROM pg_dist_shard_placement WHERE shardid IN (
-- the COMMIT makes it through but the connection dies before we get a response
SELECT citus.mitmproxy('conn.onCommandComplete(command="COMMIT").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
COPY copy_test FROM PROGRAM 'echo 0, 0 && echo 1, 1 && echo 2, 4 && echo 3, 9' WITH CSV;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: failed to commit transaction on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: failed to commit transaction on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: could not commit transaction for shard 100400 on any active node
CONTEXT: while executing command on localhost:xxxxx
WARNING: could not commit transaction for shard xxxxx on any active node
ERROR: could not commit transaction on any active node
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -386,14 +386,14 @@ SELECT * FROM pg_dist_shard_placement WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'copy_test'::regclass
) ORDER BY nodeport, placementid;
shardid | shardstate | shardlength | nodename | nodeport | placementid
---------+------------+-------------+-----------+----------+-------------
---------------------------------------------------------------------
100400 | 1 | 0 | localhost | 9060 | 100
100400 | 3 | 0 | localhost | 57637 | 101
(2 rows)
SELECT * FROM copy_test;
key | value
-----+-------
---------------------------------------------------------------------
0 | 0
1 | 1
2 | 4
@@ -407,7 +407,7 @@ SELECT * FROM copy_test;
-- ==== Clean up, we're done here ====
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
View File
@@ -6,7 +6,7 @@
--
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -15,7 +15,7 @@ SET citus.next_shard_id TO 200000;
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 9060
localhost | 57637
(2 rows)
@@ -23,7 +23,7 @@ ORDER BY 1, 2;
-- verify there are no tables that could prevent add/remove node operations
SELECT * FROM pg_dist_partition;
logicalrelid | partmethod | partkey | colocationid | repmodel
--------------+------------+---------+--------------+----------
---------------------------------------------------------------------
(0 rows)
CREATE SCHEMA add_remove_node;
@@ -31,14 +31,14 @@ SET SEARCH_PATH=add_remove_node;
CREATE TABLE user_table(user_id int, user_name text);
SELECT create_reference_table('user_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE event_table(user_id int, event_id int, event_name text);
SELECT create_distributed_table('event_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -47,22 +47,22 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
200000 | 1
(2 rows)
SELECT master_disable_node('localhost', :worker_2_proxy_port);
NOTICE: Node localhost:9060 has active shard placements. Some queries may fail after this operation. Use SELECT master_activate_node('localhost', 9060) to activate this node back.
NOTICE: Node localhost:xxxxx has active shard placements. Some queries may fail after this operation. Use SELECT master_activate_node('localhost', 9060) to activate this node back.
master_disable_node
---------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -71,26 +71,26 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
-- fail activate node by failing reference table creation
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_activate_node('localhost', :worker_2_proxy_port);
NOTICE: Replicating reference table "user_table" to the node localhost:9060
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -98,7 +98,7 @@ SELECT citus.mitmproxy('conn.allow()');
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -107,14 +107,14 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
-- fail create schema command
SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -122,12 +122,12 @@ SELECT master_activate_node('localhost', :worker_2_proxy_port);
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- verify node is not activated
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -136,25 +136,25 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
-- fail activate node by failing reference table creation
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_activate_node('localhost', :worker_2_proxy_port);
NOTICE: Replicating reference table "user_table" to the node localhost:9060
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
ERROR: canceling statement due to user request
-- verify node is not activated
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -163,13 +163,13 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -180,7 +180,7 @@ ERROR: you cannot remove the primary node of a node group which has shard place
DROP TABLE event_table;
SELECT master_remove_node('localhost', :worker_2_proxy_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
@@ -188,7 +188,7 @@ SELECT master_remove_node('localhost', :worker_2_proxy_port);
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -197,7 +197,7 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
@@ -206,13 +206,13 @@ ORDER BY placementid;
-- be injected failure through network
SELECT master_add_inactive_node('localhost', :worker_2_proxy_port);
master_add_inactive_node
--------------------------
---------------------------------------------------------------------
3
(1 row)
SELECT master_remove_node('localhost', :worker_2_proxy_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
@@ -221,7 +221,7 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
@@ -229,21 +229,21 @@ ORDER BY placementid;
-- to newly added node.
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_add_node('localhost', :worker_2_proxy_port);
NOTICE: Replicating reference table "user_table" to the node localhost:9060
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- verify node is not added
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -252,24 +252,24 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_add_node('localhost', :worker_2_proxy_port);
NOTICE: Replicating reference table "user_table" to the node localhost:9060
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
ERROR: canceling statement due to user request
-- verify node is not added
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 57637
(1 row)
@@ -278,21 +278,21 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
(1 row)
-- reset cluster to original state
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_add_node('localhost', :worker_2_proxy_port);
NOTICE: Replicating reference table "user_table" to the node localhost:9060
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
master_add_node
-----------------
---------------------------------------------------------------------
6
(1 row)
@@ -300,7 +300,7 @@ NOTICE: Replicating reference table "user_table" to the node localhost:9060
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 9060
localhost | 57637
(2 rows)
@@ -310,7 +310,7 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
200000 | 1
(2 rows)
@@ -318,38 +318,38 @@ ORDER BY placementid;
-- fail master_add_node by failing copy out operation
SELECT master_remove_node('localhost', :worker_1_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_add_node('localhost', :worker_1_port);
NOTICE: Replicating reference table "user_table" to the node localhost:57637
ERROR: could not copy table "user_table_200000" from "localhost:9060"
CONTEXT: while executing command on localhost:57637
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
ERROR: could not copy table "user_table_200000" from "localhost:xxxxx"
CONTEXT: while executing command on localhost:xxxxx
-- verify node is not added
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 9060
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_add_node('localhost', :worker_1_port);
NOTICE: Replicating reference table "user_table" to the node localhost:57637
NOTICE: Replicating reference table "user_table" to the node localhost:xxxxx
master_add_node
-----------------
---------------------------------------------------------------------
8
(1 row)
@@ -357,7 +357,7 @@ NOTICE: Replicating reference table "user_table" to the node localhost:57637
SELECT * FROM master_get_active_worker_nodes()
ORDER BY 1, 2;
node_name | node_port
-----------+-----------
---------------------------------------------------------------------
localhost | 9060
localhost | 57637
(2 rows)
@@ -367,7 +367,7 @@ FROM pg_dist_placement p JOIN pg_dist_shard s USING (shardid)
WHERE s.logicalrelid = 'user_table'::regclass
ORDER BY placementid;
shardid | shardstate
---------+------------
---------------------------------------------------------------------
200000 | 1
200000 | 1
(2 rows)
@@ -378,7 +378,7 @@ NOTICE: drop cascades to table add_remove_node.user_table
SELECT * FROM run_command_on_workers('DROP SCHEMA IF EXISTS add_remove_node CASCADE')
ORDER BY nodeport;
nodename | nodeport | success | result
-----------+----------+---------+-------------
---------------------------------------------------------------------
localhost | 9060 | t | DROP SCHEMA
localhost | 57637 | t | DROP SCHEMA
(2 rows)


@@ -7,7 +7,7 @@
--
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -24,7 +24,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -39,15 +39,15 @@ DETAIL: Distributed relations cannot have UNIQUE, EXCLUDE, or PRIMARY KEY const
SET citus.node_connection_timeout TO 400;
SELECT citus.mitmproxy('conn.delay(500)');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
ALTER TABLE products ADD CONSTRAINT p_key PRIMARY KEY(product_no);
ERROR: could not establish any connections to the node localhost:9060 after 400 ms
ERROR: could not establish any connections to the node localhost:xxxxx after 400 ms
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -62,19 +62,19 @@ INSERT INTO r1 (id, name) VALUES
SELECT create_reference_table('r1');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.delay(500)');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -89,13 +89,13 @@ SET citus.task_assignment_policy TO 'round-robin';
-- test would be if one of the queries does not return the result but an error.
SELECT name FROM r1 WHERE id = 2;
name
------
---------------------------------------------------------------------
bar
(1 row)
SELECT name FROM r1 WHERE id = 2;
name
------
---------------------------------------------------------------------
bar
(1 row)
@@ -103,13 +103,13 @@ SELECT name FROM r1 WHERE id = 2;
-- connection to have been delayed and thus caused a timeout
SELECT citus.dump_network_traffic();
dump_network_traffic
-------------------------------------
---------------------------------------------------------------------
(0,coordinator,"[initial message]")
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -119,7 +119,7 @@ SELECT citus.mitmproxy('conn.allow()');
SET citus.force_max_query_parallelization TO ON;
SELECT citus.mitmproxy('conn.delay(500)');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -127,13 +127,13 @@ SELECT citus.mitmproxy('conn.delay(500)');
-- test would be if one of the queries does not return the result but an error.
SELECT count(*) FROM products;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM products;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -141,13 +141,13 @@ SELECT count(*) FROM products;
-- is the worker
SELECT citus.dump_network_traffic() ORDER BY 1 OFFSET 1;
dump_network_traffic
-------------------------------------
---------------------------------------------------------------------
(1,coordinator,"[initial message]")
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -155,7 +155,7 @@ SET citus.shard_replication_factor TO 1;
CREATE TABLE single_replicatated(key int);
SELECT create_distributed_table('single_replicatated', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -164,19 +164,19 @@ SELECT create_distributed_table('single_replicatated', 'key');
SET citus.force_max_query_parallelization TO ON;
SELECT citus.mitmproxy('conn.delay(500)');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM single_replicatated;
ERROR: could not establish any connections to the node localhost:9060 after 400 ms
ERROR: could not establish any connections to the node localhost:xxxxx after 400 ms
SET citus.force_max_query_parallelization TO OFF;
-- one similar test, but this time on modification queries
-- to see that connection establishement failures could
-- mark placement INVALID
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -189,13 +189,13 @@ WHERE
shardstate = 3 AND
shardid IN (SELECT shardid from pg_dist_shard where logicalrelid = 'products'::regclass);
invalid_placement_count
-------------------------
---------------------------------------------------------------------
0
(1 row)
SELECT citus.mitmproxy('conn.delay(500)');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -209,14 +209,14 @@ WHERE
shardstate = 3 AND
shardid IN (SELECT shardid from pg_dist_shard where logicalrelid = 'products'::regclass);
invalid_placement_count
-------------------------
---------------------------------------------------------------------
1
(1 row)
-- show that INSERT went through
SELECT count(*) FROM products WHERE product_no = 100;
count
-------
---------------------------------------------------------------------
1
(1 row)
@@ -224,14 +224,14 @@ RESET client_min_messages;
-- verify get_global_active_transactions works when a timeout happens on a connection
SELECT get_global_active_transactions();
WARNING: could not establish connection after 400 ms
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
get_global_active_transactions
--------------------------------
---------------------------------------------------------------------
(0 rows)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)


@@ -6,7 +6,7 @@ SET search_path TO 'copy_distributed_table';
SET citus.next_shard_id TO 1710000;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -17,7 +17,7 @@ SET citus.max_cached_conns_per_worker to 0;
CREATE TABLE test_table(id int, value_1 int);
SELECT create_distributed_table('test_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -31,12 +31,12 @@ CREATE VIEW unhealthy_shard_count AS
-- Just kill the connection after sending the first query to the worker.
SELECT citus.mitmproxy('conn.kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\COPY test_table FROM stdin delimiter ',';
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@@ -45,46 +45,46 @@ ERROR: could not connect to any active placements
CONTEXT: COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Now, kill the connection while copying the data
SELECT citus.mitmproxy('conn.onCopyData().kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\COPY test_table FROM stdin delimiter ',';
ERROR: failed to COPY to shard 1710000 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -92,7 +92,7 @@ SELECT count(*) FROM test_table;
-- instead of killing it.
SELECT citus.mitmproxy('conn.onCopyData().cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -100,53 +100,53 @@ SELECT citus.mitmproxy('conn.onCopyData().cancel(' || pg_backend_pid() || ')');
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- kill the connection after worker sends command complete message
SELECT citus.mitmproxy('conn.onCommandComplete(command="COPY 1").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\COPY test_table FROM stdin delimiter ',';
ERROR: failed to COPY to shard 1710002 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- similar to above one, but cancel the connection on command complete
SELECT citus.mitmproxy('conn.onCommandComplete(command="COPY 1").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -154,47 +154,47 @@ SELECT citus.mitmproxy('conn.onCommandComplete(command="COPY 1").cancel(' || pg_
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- kill the connection on PREPARE TRANSACTION
SELECT citus.mitmproxy('conn.onQuery(query="PREPARE TRANSACTION").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\COPY test_table FROM stdin delimiter ',';
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -203,7 +203,7 @@ SET client_min_messages TO ERROR;
-- kill on command complete on COMMIT PREPARE, command should succeed
SELECT citus.mitmproxy('conn.onCommandComplete(command="COMMIT PREPARED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -211,19 +211,19 @@ SELECT citus.mitmproxy('conn.onCommandComplete(command="COMMIT PREPARED").kill()
SET client_min_messages TO NOTICE;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
4
(1 row)
@@ -231,7 +231,7 @@ TRUNCATE TABLE test_table;
-- kill on ROLLBACK, command could be rollbacked
SELECT citus.mitmproxy('conn.onQuery(query="ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -239,22 +239,22 @@ BEGIN;
\COPY test_table FROM stdin delimiter ',';
ROLLBACK;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -265,40 +265,40 @@ SET citus.shard_replication_factor TO 2;
CREATE TABLE test_table_2(id int, value_1 int);
SELECT create_distributed_table('test_table_2','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\COPY test_table_2 FROM stdin delimiter ',';
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: COPY test_table_2, line 1: "1,2"
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: COPY test_table_2, line 2: "3,4"
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: COPY test_table_2, line 3: "6,7"
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: COPY test_table_2, line 5: "9,10"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -309,7 +309,7 @@ SELECT pds.logicalrelid, pdsd.shardid, pdsd.shardstate
WHERE pds.logicalrelid = 'test_table_2'::regclass
ORDER BY shardid, nodeport;
logicalrelid | shardid | shardstate
--------------+---------+------------
---------------------------------------------------------------------
test_table_2 | 1710004 | 3
test_table_2 | 1710004 | 1
test_table_2 | 1710005 | 3
@@ -325,7 +325,7 @@ DROP TABLE test_table_2;
CREATE TABLE test_table_2(id int, value_1 int);
SELECT create_distributed_table('test_table_2','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -333,7 +333,7 @@ SELECT create_distributed_table('test_table_2','id');
-- The query should abort
SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -341,11 +341,11 @@ SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").killall()');
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY test_table_2, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -356,7 +356,7 @@ SELECT pds.logicalrelid, pdsd.shardid, pdsd.shardstate
WHERE pds.logicalrelid = 'test_table_2'::regclass
ORDER BY shardid, nodeport;
logicalrelid | shardid | shardstate
--------------+---------+------------
---------------------------------------------------------------------
test_table_2 | 1710008 | 1
test_table_2 | 1710008 | 1
test_table_2 | 1710009 | 1
@@ -372,24 +372,24 @@ DROP TABLE test_table_2;
CREATE TABLE test_table_2(id int, value_1 int);
SELECT create_distributed_table('test_table_2','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- When kill on copying data, it will be rollbacked and placements won't be labaled as invalid.
-- Note that now we sent data to shard 210007, yet it is not marked as invalid.
-- Note that now we sent data to shard xxxxx, yet it is not marked as invalid.
-- You can check the issue about this behaviour: https://github.com/citusdata/citus/issues/1933
SELECT citus.mitmproxy('conn.onCopyData().kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\COPY test_table_2 FROM stdin delimiter ',';
ERROR: failed to COPY to shard 1710012 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -400,7 +400,7 @@ SELECT pds.logicalrelid, pdsd.shardid, pdsd.shardstate
WHERE pds.logicalrelid = 'test_table_2'::regclass
ORDER BY shardid, nodeport;
logicalrelid | shardid | shardstate
--------------+---------+------------
---------------------------------------------------------------------
test_table_2 | 1710012 | 1
test_table_2 | 1710012 | 1
test_table_2 | 1710013 | 1


@@ -8,14 +8,14 @@ SET citus.next_shard_id TO 130000;
SET client_min_messages TO ERROR;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table(id int, value_1 int);
SELECT create_reference_table('test_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@@ -30,63 +30,63 @@ CREATE VIEW unhealthy_shard_count AS
-- response we get from the worker
SELECT citus.mitmproxy('conn.kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\copy test_table FROM STDIN DELIMITER ','
ERROR: failure on connection marked as essential: localhost:9060
ERROR: failure on connection marked as essential: localhost:xxxxx
CONTEXT: COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- kill as soon as the coordinator sends begin
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\copy test_table FROM STDIN DELIMITER ','
ERROR: failure on connection marked as essential: localhost:9060
ERROR: failure on connection marked as essential: localhost:xxxxx
CONTEXT: COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- cancel as soon as the coordinator sends begin
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -95,26 +95,26 @@ ERROR: canceling statement due to user request
CONTEXT: COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- kill as soon as the coordinator sends COPY command
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -122,30 +122,30 @@ SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- cancel as soon as the coordinator sends COPY command
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -154,53 +154,53 @@ ERROR: canceling statement due to user request
CONTEXT: COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- kill as soon as the worker sends CopyComplete
SELECT citus.mitmproxy('conn.onCommandComplete(command="^COPY 3").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\copy test_table FROM STDIN DELIMITER ','
ERROR: failed to COPY to shard 130000 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- cancel as soon as the coordinator sends CopyData
SELECT citus.mitmproxy('conn.onCommandComplete(command="^COPY 3").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -208,19 +208,19 @@ SELECT citus.mitmproxy('conn.onCommandComplete(command="^COPY 3").cancel(' || p
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -228,7 +228,7 @@ SELECT count(*) FROM test_table;
-- the query should abort
SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").killall()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -236,58 +236,58 @@ SELECT citus.mitmproxy('conn.onQuery(query="FROM STDIN WITH").killall()');
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COPY test_table, line 1: "1,2"
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- killing on PREPARE should be fine, everything should be rollbacked
SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE TRANSACTION").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\copy test_table FROM STDIN DELIMITER ','
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- cancelling on PREPARE should be fine, everything should be rollbacked
SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE TRANSACTION").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -295,19 +295,19 @@ SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE TRANSACTION").cancel(' ||
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -315,33 +315,33 @@ SELECT count(*) FROM test_table;
-- and all the workers committed
SELECT citus.mitmproxy('conn.onCommandComplete(command="^COMMIT PREPARED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\copy test_table FROM STDIN DELIMITER ','
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- we shouldn't have any prepared transactions in the workers
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
0
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
3
(1 row)
@@ -349,14 +349,14 @@ TRUNCATE test_table;
-- kill as soon as the coordinator sends COMMIT
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
\copy test_table FROM STDIN DELIMITER ','
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -366,19 +366,19 @@ SELECT citus.mitmproxy('conn.allow()');
-- we expect to see 1 recovered prepared transactions.
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
1
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
3
(1 row)
@@ -387,7 +387,7 @@ TRUNCATE test_table;
-- sends the ROLLBACK so the command can be rollbacked
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -396,22 +396,22 @@ SET LOCAL client_min_messages TO WARNING;
\copy test_table FROM STDIN DELIMITER ','
ROLLBACK;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -420,7 +420,7 @@ SELECT count(*) FROM test_table;
-- both on the distributed table and the placements
SELECT citus.mitmproxy('conn.onCommandComplete(command="^ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -429,28 +429,28 @@ SET LOCAL client_min_messages TO WARNING;
\copy test_table FROM STDIN DELIMITER ','
ROLLBACK;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
0
(1 row)
SELECT * FROM unhealthy_shard_count;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table;
count
-------
---------------------------------------------------------------------
0
(1 row)


@@ -4,7 +4,7 @@
-- failure.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -14,14 +14,14 @@ SET SEARCH_PATH=index_schema;
CREATE TABLE index_test(id int, value_1 int, value_2 int);
SELECT create_distributed_table('index_test', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- kill the connection when create command is issued
SELECT citus.mitmproxy('conn.onQuery(query="CREATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -31,7 +31,7 @@ DETAIL: CONCURRENTLY-enabled index commands can fail partially, leaving behind
HINT: Use DROP INDEX CONCURRENTLY IF EXISTS to remove the invalid index, then retry the original command.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -39,7 +39,7 @@ SELECT citus.mitmproxy('conn.allow()');
SELECT * FROM run_command_on_workers($$SELECT count(*) FROM pg_indexes WHERE indexname LIKE 'idx_index_test%' $$)
WHERE nodeport = :worker_2_proxy_port;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 9060 | t | 0
(1 row)
@@ -47,14 +47,14 @@ DROP TABLE index_test;
CREATE TABLE index_test(id int, value_1 int, value_2 int);
SELECT create_reference_table('index_test');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
-- kill the connection when create command is issued
SELECT citus.mitmproxy('conn.onQuery(query="CREATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -64,7 +64,7 @@ DETAIL: CONCURRENTLY-enabled index commands can fail partially, leaving behind
HINT: Use DROP INDEX CONCURRENTLY IF EXISTS to remove the invalid index, then retry the original command.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -72,7 +72,7 @@ DROP TABLE index_test;
CREATE TABLE index_test(id int, value_1 int, value_2 int);
SELECT create_distributed_table('index_test', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -81,7 +81,7 @@ SELECT create_distributed_table('index_test', 'id');
-- therefore dump_network_traffic() calls are not made
SELECT citus.mitmproxy('conn.onQuery(query="CREATE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -91,7 +91,7 @@ DETAIL: CONCURRENTLY-enabled index commands can fail partially, leaving behind
HINT: Use DROP INDEX CONCURRENTLY IF EXISTS to remove the invalid index, then retry the original command.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -99,14 +99,14 @@ DROP TABLE index_test;
CREATE TABLE index_test(id int, value_1 int, value_2 int);
SELECT create_reference_table('index_test');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
-- cancel the connection when create command is issued
SELECT citus.mitmproxy('conn.onQuery(query="CREATE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -116,7 +116,7 @@ DETAIL: CONCURRENTLY-enabled index commands can fail partially, leaving behind
HINT: Use DROP INDEX CONCURRENTLY IF EXISTS to remove the invalid index, then retry the original command.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -124,7 +124,7 @@ DROP TABLE index_test;
CREATE TABLE index_test(id int, value_1 int, value_2 int);
SELECT create_distributed_table('index_test', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@@ -132,7 +132,7 @@ CREATE INDEX CONCURRENTLY idx_index_test ON index_test(id, value_1);
-- kill the connection when create command is issued
SELECT citus.mitmproxy('conn.onQuery(query="DROP INDEX CONCURRENTLY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -142,7 +142,7 @@ DETAIL: CONCURRENTLY-enabled index commands can fail partially, leaving behind
HINT: Use DROP INDEX CONCURRENTLY IF EXISTS to remove the invalid index, then retry the original command.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -150,7 +150,7 @@ SELECT citus.mitmproxy('conn.allow()');
SELECT * FROM run_command_on_workers($$SELECT count(*) FROM pg_indexes WHERE indexname LIKE 'idx_index_test%' $$)
WHERE nodeport = :worker_2_proxy_port;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 9060 | t | 4
(1 row)
@@ -161,7 +161,7 @@ NOTICE: drop cascades to table index_schema.index_test
SELECT * FROM run_command_on_workers($$SELECT count(*) FROM pg_indexes WHERE indexname LIKE 'idx_index_test%' $$)
WHERE nodeport = :worker_2_proxy_port;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 9060 | t | 0
(1 row)


@@ -6,7 +6,7 @@ SET search_path TO 'failure_reference_table';
SET citus.next_shard_id TO 10000000;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -20,43 +20,43 @@ INSERT INTO ref_table VALUES(1),(2),(3);
-- out and not create any placement
SELECT citus.mitmproxy('conn.onQuery().kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Kill after creating transaction on worker node
SELECT citus.mitmproxy('conn.onCommandComplete(command="BEGIN").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Cancel after creating transaction on worker node
SELECT citus.mitmproxy('conn.onCommandComplete(command="BEGIN").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -64,32 +64,32 @@ SELECT create_reference_table('ref_table');
ERROR: canceling statement due to user request
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Kill after copying data to worker node
SELECT citus.mitmproxy('conn.onCommandComplete(command="SELECT 1").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Cancel after copying data to worker node
SELECT citus.mitmproxy('conn.onCommandComplete(command="SELECT 1").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -97,30 +97,30 @@ SELECT create_reference_table('ref_table');
ERROR: canceling statement due to user request
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Kill after copying data to worker node
SELECT citus.mitmproxy('conn.onCommandComplete(command="COPY 3").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
NOTICE: Copying data from local table...
ERROR: failed to COPY to shard 10000005 on localhost:9060
ERROR: failed to COPY to shard xxxxx on localhost:xxxxx
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- Cancel after copying data to worker node
SELECT citus.mitmproxy('conn.onCommandComplete(command="COPY 3").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -129,7 +129,7 @@ NOTICE: Copying data from local table...
ERROR: canceling statement due to user request
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -139,41 +139,41 @@ SET client_min_messages TO ERROR;
-- prepared transaction afterwards.
SELECT citus.mitmproxy('conn.onCommandComplete(command="PREPARE TRANSACTION").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT count(*) FROM pg_dist_shard_placement;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
1
(1 row)
-- Kill after commiting prepared, this should succeed
SELECT citus.mitmproxy('conn.onCommandComplete(command="COMMIT PREPARED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('ref_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT shardid, nodeport, shardstate FROM pg_dist_shard_placement ORDER BY shardid, nodeport;
shardid | nodeport | shardstate
----------+----------+------------
---------------------------------------------------------------------
10000008 | 9060 | 1
10000008 | 57637 | 1
(2 rows)
@@ -181,7 +181,7 @@ SELECT shardid, nodeport, shardstate FROM pg_dist_shard_placement ORDER BY shard
SET client_min_messages TO NOTICE;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -193,7 +193,7 @@ INSERT INTO ref_table VALUES(1),(2),(3);
-- Test in transaction
SELECT citus.mitmproxy('conn.onQuery().kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -202,13 +202,13 @@ SELECT create_reference_table('ref_table');
WARNING: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
ERROR: failure on connection marked as essential: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ERROR: failure on connection marked as essential: localhost:xxxxx
COMMIT;
-- kill on ROLLBACK, should be rollbacked
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -216,23 +216,23 @@ BEGIN;
SELECT create_reference_table('ref_table');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
ROLLBACK;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT * FROM pg_dist_shard_placement ORDER BY shardid, nodeport;
shardid | shardstate | shardlength | nodename | nodeport | placementid
---------+------------+-------------+----------+----------+-------------
---------------------------------------------------------------------
(0 rows)
-- cancel when the coordinator send ROLLBACK, should be rollbacked. We ignore cancellations
-- during the ROLLBACK.
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -240,14 +240,14 @@ BEGIN;
SELECT create_reference_table('ref_table');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
ROLLBACK;
SELECT * FROM pg_dist_shard_placement ORDER BY shardid, nodeport;
shardid | shardstate | shardlength | nodename | nodeport | placementid
---------+------------+-------------+----------+----------+-------------
---------------------------------------------------------------------
(0 rows)
DROP SCHEMA failure_reference_table CASCADE;


@@ -5,7 +5,7 @@ CREATE SCHEMA failure_create_table;
SET search_path TO 'failure_create_table';
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -15,30 +15,30 @@ CREATE TABLE test_table(id int, value_1 int);
-- Kill connection before sending query to the worker
SELECT citus.mitmproxy('conn.kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@@ -49,7 +49,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- https://github.com/citusdata/citus/pull/1652
SELECT citus.mitmproxy('conn.onQuery(query="^CREATE SCHEMA").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@@ -57,22 +57,22 @@ SELECT create_distributed_table('test_table', 'id');
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.schemata WHERE schema_name = 'failure_create_table'$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,1)
(2 rows)
@@ -84,30 +84,30 @@ DROP TYPE schema_proc;
-- Now, kill the connection while opening transaction on workers.
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@@ -115,30 +115,30 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- Now, kill the connection after sending create table command with worker_apply_shard_ddl_command UDF
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_shard_ddl_command").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@@ -149,31 +149,31 @@ BEGIN;
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_shard_ddl_command").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table', 'id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
COMMIT;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@@ -183,7 +183,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- shard creation.
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -191,19 +191,19 @@ SELECT create_distributed_table('test_table','id');
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
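The other change repeated in every hunk is the column-separator line that psql prints under a result header. Its width tracks the column names, so renaming a function or widening a column used to touch every expected file that called it; normalizing the separator to a fixed run of dashes makes those diffs vanish. A hedged sketch of such a rule, with the caveat that the real pattern in `normalize.sed` may differ:

```bash
# Hypothetical sketch: collapse any pure separator line (dashes optionally
# joined by '+' between columns) into one fixed-width separator.
sed -E 's/^-+(\+-+)*$/---------------------------------------------------------------------/' \
    expected/failure_create_table.out
```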
@ -214,43 +214,43 @@ CREATE TABLE test_table(id int, value_1 int);
CREATE TABLE temp_table(id int, value_1 int);
SELECT create_distributed_table('temp_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table','id',colocate_with=>'temp_table');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -258,19 +258,19 @@ SELECT create_distributed_table('test_table','id',colocate_with=>'temp_table');
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -278,35 +278,35 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- Kill and cancel the connection after worker sends "PREPARE TRANSACTION" ack with colocate_with option
SELECT citus.mitmproxy('conn.onCommandComplete(command="PREPARE TRANSACTION").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table','id',colocate_with=>'temp_table');
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
SELECT citus.mitmproxy('conn.onCommandComplete(command="PREPARE TRANSACTION").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -314,19 +314,19 @@ SELECT create_distributed_table('test_table','id',colocate_with=>'temp_table');
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -341,32 +341,32 @@ CREATE TABLE test_table(id int, value_1 int);
-- Kill connection before sending query to the worker
SELECT citus.mitmproxy('conn.kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ROLLBACK;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -378,32 +378,32 @@ DROP TYPE schema_proc;
-- Now, kill the connection while creating transaction on workers in transaction.
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ROLLBACK;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -414,7 +414,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- executor. So, we'll have two output files
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -424,25 +424,25 @@ ERROR: canceling statement due to user request
COMMIT;
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
1
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
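The `recover_prepared_transactions()` calls are the interesting probe in the 2PC scenarios: when the proxy kills a connection after a worker has acknowledged `PREPARE TRANSACTION`, that worker is left holding a prepared transaction, and the function resolves it and reports how many it dealt with (1 above). A sketch of inspecting this by hand, assuming the worker and coordinator ports these tests use (57637 and 57636):

```bash
# Hypothetical manual check: an orphaned prepared transaction shows up in
# pg_prepared_xacts on the worker until the coordinator recovers it.
psql -h localhost -p 57637 -c 'SELECT gid, prepared FROM pg_prepared_xacts;'
psql -h localhost -p 57636 -c 'SELECT recover_prepared_transactions();'
```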
@ -457,32 +457,32 @@ SET citus.multi_shard_commit_protocol TO "1pc";
-- Kill connection before sending query to the worker with 1pc.
SELECT citus.mitmproxy('conn.kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ROLLBACK;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -490,32 +490,32 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- Kill connection while sending create table command with 1pc.
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ROLLBACK;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -527,32 +527,32 @@ DROP TYPE schema_proc;
-- Now, kill the connection while opening transactions on workers with 1pc. Transaction will be opened due to BEGIN.
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT create_distributed_table('test_table','id');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ROLLBACK;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -562,7 +562,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- shard creation unless the executor is used. So, we'll have two output files
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -572,25 +572,25 @@ ERROR: canceling statement due to user request
COMMIT;
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
0
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
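Note the contrast with the earlier 2PC run: under `citus.multi_shard_commit_protocol = '1pc'` there is no PREPARE step, so after the same cancellation `recover_prepared_transactions()` finds nothing to clean up (0 here, versus 1 above). A sketch of flipping the protocol for a session, using the GUC exactly as it appears in these tests:

```bash
# Hypothetical session sketch: under 1pc a mid-commit failure cannot leave
# a prepared transaction behind, so recovery has nothing to do.
psql -p 57636 <<'SQL'
SET citus.multi_shard_commit_protocol TO '1pc';
SHOW citus.multi_shard_commit_protocol;
SELECT recover_prepared_transactions();
SQL
```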
@ -603,43 +603,43 @@ SET citus.multi_shard_commit_protocol TO "2pc";
CREATE TABLE test_table_2(id int, value_1 int);
SELECT master_create_distributed_table('test_table_2', 'id', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
-- Kill connection before sending query to the worker
SELECT citus.mitmproxy('conn.onQuery(query="^BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('test_table_2', 4, 2);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -647,28 +647,28 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- Kill the connection after worker sends "PREPARE TRANSACTION" ack
SELECT citus.mitmproxy('conn.onCommandComplete(command="^PREPARE TRANSACTION").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('test_table_2', 4, 2);
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)
@ -676,7 +676,7 @@ SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables W
-- Cancel the connection after sending prepare transaction in master_create_worker_shards
SELECT citus.mitmproxy('conn.onCommandComplete(command="PREPARE TRANSACTION").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -685,25 +685,25 @@ ERROR: canceling statement due to user request
-- Show that there is no pending transaction
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
1
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM pg_dist_shard;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT run_command_on_workers($$SELECT count(*) FROM information_schema.tables WHERE table_schema = 'failure_create_table' and table_name LIKE 'test_table%' ORDER BY 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,9060,t,0)
(localhost,57637,t,0)
(2 rows)


@ -8,13 +8,13 @@ CREATE TABLE users_table (user_id int, user_name text);
CREATE TABLE events_table(user_id int, event_id int, event_type int);
SELECT create_distributed_table('users_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('events_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -22,7 +22,7 @@ CREATE TABLE users_table_local AS SELECT * FROM users_table;
-- kill at the first copy (push)
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -52,11 +52,11 @@ FROM
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- kill at the second copy (pull)
SELECT citus.mitmproxy('conn.onQuery(query="SELECT user_id FROM cte_failure.events_table_16000002").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -83,14 +83,14 @@ FROM
ORDER BY 1 DESC LIMIT 5
) as foo
WHERE foo.user_id = cte.user_id;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- kill at the third copy (pull)
SELECT citus.mitmproxy('conn.onQuery(query="SELECT DISTINCT users_table.user").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -117,14 +117,14 @@ FROM
ORDER BY 1 DESC LIMIT 5
) as foo
WHERE foo.user_id = cte.user_id;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- cancel at the first copy (push)
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -155,7 +155,7 @@ ERROR: canceling statement due to user request
-- cancel at the second copy (pull)
SELECT citus.mitmproxy('conn.onQuery(query="SELECT user_id FROM").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -186,7 +186,7 @@ ERROR: canceling statement due to user request
-- cancel at the third copy (pull)
SELECT citus.mitmproxy('conn.onQuery(query="SELECT DISTINCT users_table.user").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -217,7 +217,7 @@ ERROR: canceling statement due to user request
-- distributed update tests
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -226,7 +226,7 @@ INSERT INTO users_table VALUES (1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E');
INSERT INTO events_table VALUES (1,1,1), (1,2,1), (1,3,1), (2,1, 4), (3, 4,1), (5, 1, 2), (5, 2, 1), (5, 2,2);
SELECT * FROM users_table ORDER BY 1, 2;
user_id | user_name
---------+-----------
---------------------------------------------------------------------
1 | A
2 | B
3 | C
@ -240,7 +240,7 @@ INSERT INTO users_table SELECT * FROM cte_delete;
-- verify contents are the same
SELECT * FROM users_table ORDER BY 1, 2;
user_id | user_name
---------+-----------
---------------------------------------------------------------------
1 | A
2 | B
3 | C
@ -251,26 +251,26 @@ SELECT * FROM users_table ORDER BY 1, 2;
-- kill connection during deletion
SELECT citus.mitmproxy('conn.onQuery(query="^DELETE FROM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
WITH cte_delete as (DELETE FROM users_table WHERE user_name in ('A', 'D') RETURNING *)
INSERT INTO users_table SELECT * FROM cte_delete;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify contents are the same
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM users_table ORDER BY 1, 2;
user_id | user_name
---------+-----------
---------------------------------------------------------------------
1 | A
2 | B
3 | C
@ -281,7 +281,7 @@ SELECT * FROM users_table ORDER BY 1, 2;
-- kill connection during insert
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -290,17 +290,17 @@ INSERT INTO users_table SELECT * FROM cte_delete;
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- verify contents are the same
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM users_table ORDER BY 1, 2;
user_id | user_name
---------+-----------
---------------------------------------------------------------------
1 | A
2 | B
3 | C
@ -311,7 +311,7 @@ SELECT * FROM users_table ORDER BY 1, 2;
-- cancel during deletion
SELECT citus.mitmproxy('conn.onQuery(query="^DELETE FROM").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -321,13 +321,13 @@ ERROR: canceling statement due to user request
-- verify contents are the same
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM users_table ORDER BY 1, 2;
user_id | user_name
---------+-----------
---------------------------------------------------------------------
1 | A
2 | B
3 | C
@ -338,7 +338,7 @@ SELECT * FROM users_table ORDER BY 1, 2;
-- cancel during insert
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -348,13 +348,13 @@ ERROR: canceling statement due to user request
-- verify contents are the same
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM users_table ORDER BY 1, 2;
user_id | user_name
---------+-----------
---------------------------------------------------------------------
1 | A
2 | B
3 | C
@ -365,7 +365,7 @@ SELECT * FROM users_table ORDER BY 1, 2;
-- test sequential delete/insert
SELECT citus.mitmproxy('conn.onQuery(query="^DELETE FROM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -373,7 +373,7 @@ BEGIN;
SET LOCAL citus.multi_shard_modify_mode = 'sequential';
WITH cte_delete as (DELETE FROM users_table WHERE user_name in ('A', 'D') RETURNING *)
INSERT INTO users_table SELECT * FROM cte_delete;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -381,7 +381,7 @@ END;
RESET SEARCH_PATH;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)

File diff suppressed because it is too large
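To inspect a suppressed file like this one locally, you can normalize a freshly produced result and diff it against the committed expected file yourself. A sketch, run from the repository root; the test name is a guess (`failure_cte_subquery`, based on the `cte_failure` schema above):

```bash
# Hypothetical one-off check: normalize a fresh result and compare it with
# the committed (already normalized) expected output.
t=failure_cte_subquery
sed -Ef src/test/regress/bin/normalize.sed \
    < "src/test/regress/results/$t.out" > "/tmp/$t.normalized"
diff -u "src/test/regress/expected/$t.out" "/tmp/$t.normalized"
```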


@ -5,7 +5,7 @@
--
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -18,20 +18,20 @@ CREATE TABLE events_table(user_id int, event_id int, event_type int);
CREATE TABLE events_summary(user_id int, event_id int, event_count int);
SELECT create_distributed_table('events_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('events_summary', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO events_table VALUES (1, 1, 3 ), (1, 2, 1), (1, 3, 2), (2, 4, 3), (3, 5, 1), (4, 7, 1), (4, 1, 9), (4, 3, 2);
SELECT count(*) FROM events_summary;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -39,32 +39,32 @@ SELECT count(*) FROM events_summary;
-- kill worker query
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO events_summary SELECT user_id, event_id, count(*) FROM events_table GROUP BY 1,2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_summary;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- cancel worker query
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -73,52 +73,52 @@ ERROR: canceling statement due to user request
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_summary;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- test self insert/select
SELECT count(*) FROM events_table;
count
-------
---------------------------------------------------------------------
8
(1 row)
-- kill worker query
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO events_table SELECT * FROM events_table;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_table;
count
-------
---------------------------------------------------------------------
8
(1 row)
-- cancel worker query
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO insert_select_pushdown").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -127,20 +127,20 @@ ERROR: canceling statement due to user request
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_table;
count
-------
---------------------------------------------------------------------
8
(1 row)
RESET SEARCH_PATH;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
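All of these scenarios are driven by the same small mitmproxy DSL visible in the `citus.mitmproxy(...)` calls: `conn.onQuery(query=...)` and `conn.onCommandComplete(command=...)` pick a trigger by regex, `.after(n)` skips the first n matches, `.kill()` or `.cancel(pid)` chooses the failure, and `conn.allow()` puts the proxy back into pass-through mode. A sketch of a hand-driven session, assuming the coordinator listens on 57636 as in these tests:

```bash
# Hypothetical session sketch: arm a failure on the second INSERT sent to a
# worker, trigger it, then disarm the proxy again.
psql -p 57636 -c "SELECT citus.mitmproxy('conn.onQuery(query=\"^INSERT\").after(1).kill()');"
psql -p 57636 -c "INSERT INTO events_table SELECT * FROM events_table;"  # fails
psql -p 57636 -c "SELECT citus.mitmproxy('conn.allow()');"
```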


@ -15,32 +15,32 @@ CREATE TABLE events_reference(event_type int, event_count int);
CREATE TABLE events_reference_distributed(event_type int, event_count int);
SELECT create_distributed_table('events_table', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('events_summary', 'event_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('events_reference');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('events_reference_distributed', 'event_type');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO events_table VALUES (1, 1, 3 ), (1, 2, 1), (1, 3, 2), (2, 4, 3), (3, 5, 1), (4, 7, 1), (4, 1, 9), (4, 3, 2);
SELECT count(*) FROM events_summary;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -48,7 +48,7 @@ SELECT count(*) FROM events_summary;
-- kill coordinator pull query
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -56,11 +56,11 @@ INSERT INTO events_summary SELECT event_id, event_type, count(*) FROM events_tab
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- kill data push
SELECT citus.mitmproxy('conn.onQuery(query="^COPY coordinator_insert_select").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -68,11 +68,11 @@ INSERT INTO events_summary SELECT event_id, event_type, count(*) FROM events_tab
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- cancel coordinator pull query
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -81,7 +81,7 @@ ERROR: canceling statement due to user request
-- cancel data push
SELECT citus.mitmproxy('conn.onQuery(query="^COPY coordinator_insert_select").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -90,13 +90,13 @@ ERROR: canceling statement due to user request
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_summary;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -104,7 +104,7 @@ SELECT count(*) FROM events_summary;
-- kill coordinator pull query
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -112,11 +112,11 @@ INSERT INTO events_reference SELECT event_type, count(*) FROM events_table GROUP
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- kill data push
SELECT citus.mitmproxy('conn.onQuery(query="^COPY coordinator_insert_select").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -124,11 +124,11 @@ INSERT INTO events_reference SELECT event_type, count(*) FROM events_table GROUP
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- cancel coordinator pull query
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -137,7 +137,7 @@ ERROR: canceling statement due to user request
-- cancel data push
SELECT citus.mitmproxy('conn.onQuery(query="^COPY coordinator_insert_select").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -146,13 +146,13 @@ ERROR: canceling statement due to user request
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_reference;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -162,7 +162,7 @@ INSERT INTO events_reference SELECT event_type, count(*) FROM events_table GROUP
-- kill coordinator pull query
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -170,11 +170,11 @@ INSERT INTO events_reference_distributed SELECT * FROM events_reference;
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- kill data push
SELECT citus.mitmproxy('conn.onQuery(query="^COPY coordinator_insert_select").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -182,11 +182,11 @@ INSERT INTO events_reference_distributed SELECT * FROM events_reference;
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- cancel coordinator pull query
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -195,7 +195,7 @@ ERROR: canceling statement due to user request
-- cancel data push
SELECT citus.mitmproxy('conn.onQuery(query="^COPY coordinator_insert_select").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -204,20 +204,20 @@ ERROR: canceling statement due to user request
--verify nothing is modified
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FROM events_reference_distributed;
count
-------
---------------------------------------------------------------------
0
(1 row)
RESET SEARCH_PATH;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)


@ -1,6 +1,6 @@
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -11,14 +11,14 @@ ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART 100;
CREATE TABLE dml_test (id integer, name text);
SELECT create_distributed_table('dml_test', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
COPY dml_test FROM STDIN WITH CSV;
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
@ -27,13 +27,13 @@ SELECT citus.clear_network_traffic();
-- fail at DELETE
SELECT citus.mitmproxy('conn.onQuery(query="^DELETE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
DELETE FROM dml_test WHERE id = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -49,7 +49,7 @@ COMMIT;
--- shouldn't see any changes performed in failed transaction
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -59,7 +59,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- cancel at DELETE
SELECT citus.mitmproxy('conn.onQuery(query="^DELETE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -78,7 +78,7 @@ COMMIT;
--- shouldn't see any changes performed in failed transaction
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -88,7 +88,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- fail at INSERT
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -96,7 +96,7 @@ BEGIN;
DELETE FROM dml_test WHERE id = 1;
DELETE FROM dml_test WHERE id = 2;
INSERT INTO dml_test VALUES (5, 'Epsilon');
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -108,7 +108,7 @@ COMMIT;
--- shouldn't see any changes before failed INSERT
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -118,7 +118,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- cancel at INSERT
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -135,7 +135,7 @@ COMMIT;
--- shouldn't see any changes before failed INSERT
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -145,7 +145,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- fail at UPDATE
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -154,7 +154,7 @@ DELETE FROM dml_test WHERE id = 1;
DELETE FROM dml_test WHERE id = 2;
INSERT INTO dml_test VALUES (5, 'Epsilon');
UPDATE dml_test SET name = 'alpha' WHERE id = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -164,7 +164,7 @@ COMMIT;
--- shouldn't see any changes after failed UPDATE
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -174,7 +174,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- cancel at UPDATE
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -190,7 +190,7 @@ COMMIT;
--- shouldn't see any changes after failed UPDATE
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -200,7 +200,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- fail at PREPARE TRANSACTION
SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE TRANSACTION").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -222,31 +222,31 @@ COMMIT;
false
);
master_run_on_worker
---------------------------
---------------------------------------------------------------------
(localhost,57636,t,BEGIN)
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT shardid FROM pg_dist_shard_placement WHERE shardstate = 3;
shardid
---------
---------------------------------------------------------------------
(0 rows)
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
0
(1 row)
-- shouldn't see any changes after failed PREPARE
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -256,7 +256,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- cancel at PREPARE TRANSACTION
SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE TRANSACTION").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -271,25 +271,25 @@ COMMIT;
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT shardid FROM pg_dist_shard_placement WHERE shardstate = 3;
shardid
---------
---------------------------------------------------------------------
(0 rows)
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
0
(1 row)
-- shouldn't see any changes after failed PREPARE
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -299,7 +299,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- fail at COMMIT
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -316,25 +316,25 @@ COMMIT;
SET client_min_messages TO DEFAULT;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT shardid FROM pg_dist_shard_placement WHERE shardstate = 3;
shardid
---------
---------------------------------------------------------------------
(0 rows)
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
1
(1 row)
-- should see changes, because of txn recovery
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+---------
---------------------------------------------------------------------
3 | gamma
4 | Delta
5 | Epsilon
@ -343,7 +343,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- cancel at COMMITs are ignored by Postgres
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -357,7 +357,7 @@ COMMIT;
-- should see changes, because cancellation is ignored
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+---------
---------------------------------------------------------------------
3 | gamma
4 | Delta
5 | Epsilon
@ -371,7 +371,7 @@ SET citus.shard_replication_factor = 2; -- two placements
CREATE TABLE dml_test (id integer, name text);
SELECT create_distributed_table('dml_test', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -380,7 +380,7 @@ COPY dml_test FROM STDIN WITH CSV;
-- fail at COMMIT (actually COMMIT this time, as no 2pc in use)
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -392,14 +392,14 @@ UPDATE dml_test SET name = 'alpha' WHERE id = 1;
UPDATE dml_test SET name = 'gamma' WHERE id = 3;
COMMIT;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: failed to commit transaction on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: failed to commit transaction on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
--- should see all changes, but they only went to one placement (other is unhealthy)
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+---------
---------------------------------------------------------------------
3 | gamma
4 | Delta
5 | Epsilon
@ -407,13 +407,13 @@ SELECT * FROM dml_test ORDER BY id ASC;
SELECT shardid FROM pg_dist_shard_placement WHERE shardstate = 3;
shardid
---------
---------------------------------------------------------------------
103402
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
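This is the one scenario where a failure leaves a visible mark in the metadata: with `citus.shard_replication_factor = 2`, a commit that fails on one of two placements marks only the failed placement invalid (`shardstate = 3`) while the data stays readable from the healthy copy. A sketch of the metadata probe; it only extends the query the test itself runs with the node columns, which are standard `pg_dist_shard_placement` columns:

```bash
# Sketch: show which placements were marked invalid (shardstate = 3) and on
# which worker they live.
psql -p 57636 -c "SELECT shardid, nodename, nodeport, shardstate
                  FROM pg_dist_shard_placement WHERE shardstate = 3;"
```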
@ -424,7 +424,7 @@ SET citus.shard_replication_factor = 1;
CREATE TABLE dml_test (id integer, name text);
SELECT create_reference_table('dml_test');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -432,7 +432,7 @@ COPY dml_test FROM STDIN WITH CSV;
-- fail at COMMIT (by failing to PREPARE)
SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -444,11 +444,11 @@ UPDATE dml_test SET name = 'alpha' WHERE id = 1;
UPDATE dml_test SET name = 'gamma' WHERE id = 3;
COMMIT;
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
--- shouldn't see any changes after failed COMMIT
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -458,7 +458,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- cancel at COMMIT (by cancelling on PREPARE)
SELECT citus.mitmproxy('conn.onQuery(query="^PREPARE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -473,7 +473,7 @@ ERROR: canceling statement due to user request
--- shouldn't see any changes after cancelled PREPARE
SELECT * FROM dml_test ORDER BY id ASC;
id | name
----+-------
---------------------------------------------------------------------
1 | Alpha
2 | Beta
3 | Gamma
@ -483,7 +483,7 @@ SELECT * FROM dml_test ORDER BY id ASC;
-- allow connection to allow DROP
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)


@ -11,7 +11,7 @@ SET citus.shard_replication_factor TO 1;
SELECT pg_backend_pid() as pid \gset
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -19,13 +19,13 @@ CREATE TABLE distributed_table(key int, value int);
CREATE TABLE reference_table(value int);
SELECT create_distributed_table('distributed_table', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('reference_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -38,12 +38,12 @@ SELECT create_reference_table('reference_table');
-- Failure and cancellation on multi-row INSERT that hits the same shard with the same value
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO distributed_table VALUES (1,1), (1,2), (1,3);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -53,12 +53,12 @@ DETAIL: server closed the connection unexpectedly
-- Failure and cancellation on multi-row INSERT that hits the same shard with different values
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO distributed_table VALUES (1,7), (5,8);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -68,18 +68,18 @@ DETAIL: server closed the connection unexpectedly
-- Failure and cancellation multi-row INSERT that hits multiple shards in a single worker
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO distributed_table VALUES (1,11), (6,12);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -88,18 +88,18 @@ ERROR: canceling statement due to user request
-- Failure and cancellation multi-row INSERT that hits multiple shards in a single worker, happening on the second query
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO distributed_table VALUES (1,15), (6,16);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -108,18 +108,18 @@ ERROR: canceling statement due to user request
-- Failure and cancellation multi-row INSERT that hits multiple shards in multiple workers
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO distributed_table VALUES (2,19),(1,20);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -128,7 +128,7 @@ ERROR: canceling statement due to user request
-- one test for the reference tables for completeness
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -136,7 +136,7 @@ INSERT INTO reference_table VALUES (1), (2), (3), (4);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -145,7 +145,7 @@ ERROR: canceling statement due to user request
-- cancel the second insert over the same connection
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").after(1).cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -154,17 +154,17 @@ ERROR: canceling statement due to user request
-- we've either failed or cancelled all queries, so should be empty
SELECT * FROM distributed_table;
key | value
-----+-------
---------------------------------------------------------------------
(0 rows)
SELECT * FROM reference_table;
value
-------
---------------------------------------------------------------------
(0 rows)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
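Kills and cancellations read differently in the output: `.kill()` severs the TCP connection, producing `connection error` / `server closed the connection unexpectedly`, while `.cancel(pid)` asks the given coordinator backend to cancel itself, producing `canceling statement due to user request` and leaving the connection usable. The pid is captured once with `\gset`, exactly as at the top of these test files. A sketch:

```bash
# Hypothetical sketch: capture the backend pid as a psql variable, then arm
# a cancellation that fires when the proxied INSERT is seen.
psql -p 57636 <<'SQL'
SELECT pg_backend_pid() AS pid \gset
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
INSERT INTO distributed_table VALUES (1,1), (2,2);  -- canceled by the proxy
SELECT citus.mitmproxy('conn.allow()');
SQL
```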


@ -10,7 +10,7 @@ SET citus.shard_replication_factor TO 1;
SET citus.max_cached_conns_per_worker TO 0;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -19,19 +19,19 @@ CREATE TABLE r1(a int, b int PRIMARY KEY);
CREATE TABLE t2(a int REFERENCES t1(a) ON DELETE CASCADE, b int REFERENCES r1(b) ON DELETE CASCADE, c int);
SELECT create_distributed_table('t1', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('r1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('t2', 'a');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -42,13 +42,13 @@ INSERT INTO t2 VALUES (1, 1, 1), (1, 2, 1), (2, 1, 2), (2, 2, 4), (3, 1, 3), (3,
SELECT pg_backend_pid() as pid \gset
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
SHOW citus.multi_shard_commit_protocol ;
citus.multi_shard_commit_protocol
-----------------------------------
---------------------------------------------------------------------
2pc
(1 row)
@ -57,46 +57,46 @@ SHOW citus.multi_shard_commit_protocol ;
-- test both kill and cancellation
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- issue a multi shard delete
DELETE FROM t2 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
-- kill just one connection
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
DELETE FROM t2 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
-- cancellation
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -106,14 +106,14 @@ ERROR: canceling statement due to user request
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
-- cancel just one connection
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -122,7 +122,7 @@ ERROR: canceling statement due to user request
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
@ -133,52 +133,52 @@ SELECT count(*) FROM t2;
-- test both kill and cancellation
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- issue a multi shard update
UPDATE t2 SET c = 4 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
-- kill just one connection
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
UPDATE t2 SET c = 4 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
-- cancellation
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -188,14 +188,14 @@ ERROR: canceling statement due to user request
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
-- cancel just one connection
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -204,7 +204,7 @@ ERROR: canceling statement due to user request
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
@ -215,46 +215,46 @@ SET citus.multi_shard_commit_protocol TO '1PC';
-- test both kill and cancellation
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- issue a multi shard delete
DELETE FROM t2 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
-- kill just one connection
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
DELETE FROM t2 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
-- cancellation
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -264,14 +264,14 @@ ERROR: canceling statement due to user request
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
-- cancel just one connection
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM multi_shard.t2_201005").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -280,7 +280,7 @@ ERROR: canceling statement due to user request
-- verify nothing is deleted
SELECT count(*) FROM t2;
count
-------
---------------------------------------------------------------------
7
(1 row)
@ -291,52 +291,52 @@ SELECT count(*) FROM t2;
-- test both kill and cancellation
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- issue a multi shard update
UPDATE t2 SET c = 4 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
-- kill just one connection
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
UPDATE t2 SET c = 4 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
-- cancellation
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -346,14 +346,14 @@ ERROR: canceling statement due to user request
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
-- cancel just one connection
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t2_201005").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -362,7 +362,7 @@ ERROR: canceling statement due to user request
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 2) AS b2, count(*) FILTER (WHERE c = 4) AS c4 FROM t2;
b2 | c4
----+----
---------------------------------------------------------------------
3 | 1
(1 row)
@ -378,57 +378,57 @@ RESET citus.multi_shard_commit_protocol;
-- test coverage
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- check counts before delete
SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b2
----
---------------------------------------------------------------------
3
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
DELETE FROM r1 WHERE a = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is deleted
SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b2
----
---------------------------------------------------------------------
3
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="DELETE FROM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
DELETE FROM t2 WHERE b = 2;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is deleted
SELECT count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b2
----
---------------------------------------------------------------------
3
(1 row)
-- test update with subquery pull
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -436,13 +436,13 @@ CREATE TABLE t3 AS SELECT * FROM t2;
SELECT create_distributed_table('t3', 'a');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM t3 ORDER BY 1, 2, 3;
a | b | c
---+---+---
---------------------------------------------------------------------
1 | 1 | 1
1 | 2 | 1
2 | 1 | 2
@ -454,7 +454,7 @@ SELECT * FROM t3 ORDER BY 1, 2, 3;
SELECT citus.mitmproxy('conn.onQuery(query="^COPY").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -465,17 +465,17 @@ RETURNING *;
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
--- verify nothing is updated
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM t3 ORDER BY 1, 2, 3;
a | b | c
---+---+---
---------------------------------------------------------------------
1 | 1 | 1
1 | 2 | 1
2 | 1 | 2
@ -488,7 +488,7 @@ SELECT * FROM t3 ORDER BY 1, 2, 3;
-- kill update part
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE multi_shard.t3_201009").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -496,20 +496,20 @@ UPDATE t3 SET c = q.c FROM (
SELECT b, max(c) as c FROM t2 GROUP BY b) q
WHERE t3.b = q.b
RETURNING *;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
--- verify nothing is updated
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM t3 ORDER BY 1, 2, 3;
a | b | c
---+---+---
---------------------------------------------------------------------
1 | 1 | 1
1 | 2 | 1
2 | 1 | 2
@ -525,7 +525,7 @@ SELECT * FROM t3 ORDER BY 1, 2, 3;
SET citus.shard_replication_factor to 2;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -534,32 +534,32 @@ CREATE TABLE t3 AS SELECT * FROM t2;
SELECT create_distributed_table('t3', 'a');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
-- prevent update of one replica of one shard
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE multi_shard.t3_201013").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
UPDATE t3 SET b = 2 WHERE b = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
@ -567,13 +567,13 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO
BEGIN;
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
@ -581,13 +581,13 @@ UPDATE t2 SET b = 2 WHERE b = 1;
-- verify update is performed on t2
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b1 | b2
----+----
---------------------------------------------------------------------
0 | 6
(1 row)
-- following will fail
UPDATE t3 SET b = 2 WHERE b = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -595,25 +595,25 @@ END;
-- verify everything is rolled back
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
UPDATE t3 SET b = 1 WHERE b = 2 RETURNING *;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
@ -621,19 +621,19 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO
SET citus.multi_shard_commit_protocol TO '1PC';
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
UPDATE t3 SET b = 2 WHERE b = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- verify nothing is updated
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
@ -641,13 +641,13 @@ SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FRO
BEGIN;
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
@ -655,13 +655,13 @@ UPDATE t2 SET b = 2 WHERE b = 1;
-- verify update is performed on t2
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b1 | b2
----+----
---------------------------------------------------------------------
0 | 6
(1 row)
-- following will fail
UPDATE t3 SET b = 2 WHERE b = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
@ -669,19 +669,19 @@ END;
-- verify everything is rolled back
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t2;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
SELECT count(*) FILTER (WHERE b = 1) b1, count(*) FILTER (WHERE b = 2) AS b2 FROM t3;
b1 | b2
----+----
---------------------------------------------------------------------
3 | 3
(1 row)
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)

View File

@ -10,14 +10,14 @@ SET citus.replication_model TO 'streaming';
SELECT pg_backend_pid() as pid \gset
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
CREATE TABLE t1 (id int PRIMARY KEY);
SELECT create_distributed_table('t1', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -25,14 +25,14 @@ INSERT INTO t1 SELECT x FROM generate_series(1,100) AS f(x);
-- Initial metadata status
SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port;
hasmetadata
-------------
---------------------------------------------------------------------
f
(1 row)
-- Failure to set groupid in the worker
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE pg_dist_local_group SET groupid").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -40,7 +40,7 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE pg_dist_local_group SET groupid").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -48,11 +48,11 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- Failure to drop all tables in pg_dist_partition
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT worker_drop_distributed_table").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -60,7 +60,7 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT worker_drop_distributed_table").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -68,11 +68,11 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- Failure to truncate pg_dist_node in the worker
SELECT citus.mitmproxy('conn.onQuery(query="^TRUNCATE pg_dist_node CASCADE").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -80,7 +80,7 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^TRUNCATE pg_dist_node CASCADE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -88,11 +88,11 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- Failure to populate pg_dist_node in the worker
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO pg_dist_node").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -100,7 +100,7 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT INTO pg_dist_node").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -108,30 +108,30 @@ SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- Verify that coordinator knows worker does not have valid metadata
SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port;
hasmetadata
-------------
---------------------------------------------------------------------
f
(1 row)
-- Verify we can sync metadata after unsuccessful attempts
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT start_metadata_sync_to_node('localhost', :worker_2_proxy_port);
start_metadata_sync_to_node
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port;
hasmetadata
-------------
---------------------------------------------------------------------
t
(1 row)
@ -139,7 +139,7 @@ SELECT hasmetadata FROM pg_dist_node WHERE nodeport=:worker_2_proxy_port;
CREATE TABLE t2 (id int PRIMARY KEY);
SELECT citus.mitmproxy('conn.onParse(query="^INSERT INTO pg_dist_placement").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -147,10 +147,10 @@ SELECT create_distributed_table('t2', 'id');
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
SELECT citus.mitmproxy('conn.onParse(query="^INSERT INTO pg_dist_shard").cancel(' || :pid || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -161,7 +161,7 @@ SELECT count(*) > 0 AS is_table_distributed
FROM pg_dist_partition
WHERE logicalrelid='t2'::regclass;
is_table_distributed
----------------------
---------------------------------------------------------------------
f
(1 row)
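Every failure case in these files follows the same three-step shape: arm the proxy for the next matching query, run the statement and record the error, then re-open the connection. A sketch of that pattern (the table name is a placeholder; the mitmproxy DSL calls are exactly the ones used above):

```bash
# Sketch of the kill/cancel test pattern; my_dist_table is a placeholder.
psql <<'SQL'
SELECT pg_backend_pid() AS pid \gset
-- cancel our own backend as soon as the proxied worker sees the INSERT
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").cancel(' || :pid || ')');
INSERT INTO my_dist_table VALUES (1, 2);  -- ERROR: canceling statement due to user request
-- let traffic flow again so the next test starts from a clean proxy
SELECT citus.mitmproxy('conn.allow()');
SQL
```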

View File

@ -1,98 +1,98 @@
SET citus.next_shard_id TO 100500;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
CREATE TABLE ref_table (key int, value int);
SELECT create_reference_table('ref_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
\copy ref_table FROM stdin delimiter ',';
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT COUNT(*) FROM ref_table;
count
-------
---------------------------------------------------------------------
4
(1 row)
-- verify behavior of single INSERT; should fail to execute
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO ref_table VALUES (5, 6);
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT COUNT(*) FROM ref_table WHERE key=5;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- verify behavior of UPDATE ... RETURNING; should not execute
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
UPDATE ref_table SET key=7 RETURNING value;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT COUNT(*) FROM ref_table WHERE key=7;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- verify fix to #2214; should raise error and fail to execute
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
DELETE FROM ref_table WHERE key=5;
UPDATE ref_table SET key=value;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
COMMIT;
SELECT COUNT(*) FROM ref_table WHERE key=value;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- all shards should still be healthy
SELECT COUNT(*) FROM pg_dist_shard_placement WHERE shardstate = 3;
count
-------
---------------------------------------------------------------------
0
(1 row)
-- ==== Clean up, we're done here ====
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)

View File

@ -5,7 +5,7 @@
-- as invalid and continue with a WARNING.
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -19,7 +19,7 @@ CREATE TABLE artists (
);
SELECT create_distributed_table('artists', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -31,7 +31,7 @@ INSERT INTO artists VALUES (4, 'William Kurelek');
-- simply fail at SAVEPOINT
SELECT citus.mitmproxy('conn.onQuery(query="^SAVEPOINT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -39,15 +39,15 @@ BEGIN;
INSERT INTO artists VALUES (5, 'Asher Lev');
SAVEPOINT s1;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: connection error: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection error: localhost:xxxxx
DETAIL: connection not open
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
DELETE FROM artists WHERE id=4;
ERROR: current transaction is aborted, commands ignored until end of transaction block
RELEASE SAVEPOINT s1;
@ -55,14 +55,14 @@ ERROR: current transaction is aborted, commands ignored until end of transactio
COMMIT;
SELECT * FROM artists WHERE id IN (4, 5);
id | name
----+-----------------
---------------------------------------------------------------------
4 | William Kurelek
(1 row)
-- fail at RELEASE
SELECT citus.mitmproxy('conn.onQuery(query="^RELEASE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -73,28 +73,28 @@ DELETE FROM artists WHERE id=4;
RELEASE SAVEPOINT s1;
WARNING: AbortSubTransaction while in COMMIT state
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: connection error: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection error: localhost:xxxxx
DETAIL: connection not open
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: savepoint "savepoint_2" does not exist
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ROLLBACK;
SELECT * FROM artists WHERE id IN (4, 5);
id | name
----+-----------------
---------------------------------------------------------------------
4 | William Kurelek
(1 row)
-- fail at ROLLBACK
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -104,21 +104,21 @@ SAVEPOINT s1;
DELETE FROM artists WHERE id=4;
ROLLBACK TO SAVEPOINT s1;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COMMIT;
ERROR: could not make changes to shard 100950 on any node
ERROR: could not make changes to shard xxxxx on any node
SELECT * FROM artists WHERE id IN (4, 5);
id | name
----+-----------------
---------------------------------------------------------------------
4 | William Kurelek
(1 row)
-- fail at second RELEASE
SELECT citus.mitmproxy('conn.onQuery(query="^RELEASE").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -131,26 +131,26 @@ INSERT INTO artists VALUES (5, 'Jacob Kahn');
RELEASE SAVEPOINT s2;
WARNING: AbortSubTransaction while in COMMIT state
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
WARNING: connection error: localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection error: localhost:xxxxx
DETAIL: connection not open
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
ERROR: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COMMIT;
SELECT * FROM artists WHERE id IN (4, 5);
id | name
----+-----------------
---------------------------------------------------------------------
4 | William Kurelek
(1 row)
-- fail at second ROLLBACK
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -162,20 +162,20 @@ SAVEPOINT s2;
DELETE FROM artists WHERE id=5;
ROLLBACK TO SAVEPOINT s2;
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
WARNING: connection not open
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
COMMIT;
ERROR: could not make changes to shard 100950 on any node
ERROR: could not make changes to shard xxxxx on any node
SELECT * FROM artists WHERE id IN (4, 5);
id | name
----+-----------------
---------------------------------------------------------------------
4 | William Kurelek
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^RELEASE").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -192,12 +192,12 @@ RELEASE SAVEPOINT s2;
COMMIT;
SELECT * FROM artists WHERE id=7;
id | name
----+------
---------------------------------------------------------------------
(0 rows)
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -213,14 +213,14 @@ ROLLBACK TO SAVEPOINT s1;
WARNING: connection not open
WARNING: connection not open
WARNING: connection not open
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
WARNING: connection not open
WARNING: connection not open
COMMIT;
ERROR: could not make changes to shard 100950 on any node
ERROR: could not make changes to shard xxxxx on any node
SELECT * FROM artists WHERE id=6;
id | name
----+------
---------------------------------------------------------------------
(0 rows)
-- replication factor > 1
@ -233,14 +233,14 @@ SET citus.shard_count = 1;
SET citus.shard_replication_factor = 2; -- single shard, on both workers
SELECT create_distributed_table('researchers', 'lab_id', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- simply fail at SAVEPOINT
SELECT citus.mitmproxy('conn.onQuery(query="^SAVEPOINT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -248,7 +248,7 @@ BEGIN;
INSERT INTO researchers VALUES (7, 4, 'Jan Plaza');
SAVEPOINT s1;
WARNING: connection not open
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
WARNING: connection not open
WARNING: connection not open
ERROR: connection not open
@ -262,7 +262,7 @@ COMMIT;
-- should see correct results from healthy placement and one bad placement
SELECT * FROM researchers WHERE lab_id = 4;
id | lab_id | name
----+--------+------
---------------------------------------------------------------------
(0 rows)
UPDATE pg_dist_shard_placement SET shardstate = 1
@ -270,14 +270,14 @@ WHERE shardstate = 3 AND shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'researchers'::regclass
) RETURNING placementid;
placementid
-------------
---------------------------------------------------------------------
(0 rows)
TRUNCATE researchers;
-- fail at rollback
SELECT citus.mitmproxy('conn.onQuery(query="^ROLLBACK").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -290,11 +290,11 @@ WARNING: connection not open
WARNING: connection not open
RELEASE SAVEPOINT s1;
COMMIT;
ERROR: failure on connection marked as essential: localhost:9060
ERROR: failure on connection marked as essential: localhost:xxxxx
-- should see correct results from healthy placement and one bad placement
SELECT * FROM researchers WHERE lab_id = 4;
id | lab_id | name
----+--------+------
---------------------------------------------------------------------
(0 rows)
UPDATE pg_dist_shard_placement SET shardstate = 1
@ -302,14 +302,14 @@ WHERE shardstate = 3 AND shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'researchers'::regclass
) RETURNING placementid;
placementid
-------------
---------------------------------------------------------------------
(0 rows)
TRUNCATE researchers;
-- fail at release
SELECT citus.mitmproxy('conn.onQuery(query="^RELEASE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -321,7 +321,7 @@ ROLLBACK TO s1;
RELEASE SAVEPOINT s1;
WARNING: AbortSubTransaction while in COMMIT state
WARNING: connection not open
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
WARNING: connection not open
WARNING: connection not open
WARNING: savepoint "savepoint_3" does not exist
@ -330,7 +330,7 @@ COMMIT;
-- should see correct results from healthy placement and one bad placement
SELECT * FROM researchers WHERE lab_id = 4;
id | lab_id | name
----+--------+------
---------------------------------------------------------------------
(0 rows)
UPDATE pg_dist_shard_placement SET shardstate = 1
@ -338,14 +338,14 @@ WHERE shardstate = 3 AND shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'researchers'::regclass
) RETURNING placementid;
placementid
-------------
---------------------------------------------------------------------
(0 rows)
TRUNCATE researchers;
-- clean up
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
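Shard ids get the same treatment as ports: the `could not make changes to shard xxxxx` hunks above mask ids that come from a sequence (`citus.next_shard_id`) and shift whenever earlier tests change. An illustrative rule of that shape (again, not the verbatim normalize.sed):

```bash
# Illustrative shard-id masking; ids are assigned from a sequence and so
# differ between runs and orderings -- they are not diff-relevant.
sed -E 's/shard [0-9]+/shard xxxxx/g' results/example_test.out
```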

View File

@ -1,19 +1,19 @@
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
-- add the workers
SELECT master_add_node('localhost', :worker_1_port);
master_add_node
-----------------
---------------------------------------------------------------------
1
(1 row)
SELECT master_add_node('localhost', :worker_2_proxy_port); -- an mitmproxy which forwards to the second worker
master_add_node
-----------------
---------------------------------------------------------------------
2
(1 row)

View File

@ -1,12 +1,12 @@
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
@ -15,25 +15,25 @@ SET citus.shard_replication_factor = 2;
CREATE TABLE mod_test (key int, value text);
SELECT create_distributed_table('mod_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- verify behavior of single INSERT; should mark shard as failed
SELECT citus.mitmproxy('conn.onQuery(query="^INSERT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO mod_test VALUES (2, 6);
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT COUNT(*) FROM mod_test WHERE key=2;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -43,7 +43,7 @@ WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'mod_test'::regclass
) AND shardstate = 3 RETURNING placementid;
placementid
-------------
---------------------------------------------------------------------
125
(1 row)
@ -51,30 +51,30 @@ TRUNCATE mod_test;
-- verify behavior of UPDATE ... RETURNING; should mark as failed
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
INSERT INTO mod_test VALUES (2, 6);
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
UPDATE mod_test SET value='ok' WHERE key=2 RETURNING key;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
key
-----
---------------------------------------------------------------------
2
(1 row)
SELECT COUNT(*) FROM mod_test WHERE value='ok';
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -84,7 +84,7 @@ WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'mod_test'::regclass
) AND shardstate = 3 RETURNING placementid;
placementid
-------------
---------------------------------------------------------------------
125
(1 row)
@ -93,7 +93,7 @@ TRUNCATE mod_test;
-- should succeed but mark a placement as failed
SELECT citus.mitmproxy('conn.onQuery(query="^UPDATE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -102,14 +102,14 @@ INSERT INTO mod_test VALUES (2, 6);
INSERT INTO mod_test VALUES (2, 7);
DELETE FROM mod_test WHERE key=2 AND value = '7';
UPDATE mod_test SET value='ok' WHERE key=2;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
COMMIT;
SELECT COUNT(*) FROM mod_test WHERE key=2;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -119,7 +119,7 @@ WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'mod_test'::regclass
) AND shardstate = 3 RETURNING placementid;
placementid
-------------
---------------------------------------------------------------------
125
(1 row)

View File

@ -1,12 +1,12 @@
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
@ -15,7 +15,7 @@ SET citus.shard_replication_factor = 2;
CREATE TABLE select_test (key int, value text);
SELECT create_distributed_table('select_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -23,58 +23,58 @@ SELECT create_distributed_table('select_test', 'key');
INSERT INTO select_test VALUES (3, 'test data');
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM select_test WHERE key = 3;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
key | value
-----+-----------
---------------------------------------------------------------------
3 | test data
(1 row)
SELECT * FROM select_test WHERE key = 3;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
key | value
-----+-----------
---------------------------------------------------------------------
3 | test data
(1 row)
-- kill after first SELECT; txn should work (though placement marked bad)
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
BEGIN;
INSERT INTO select_test VALUES (3, 'more data');
SELECT * FROM select_test WHERE key = 3;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
key | value
-----+-----------
---------------------------------------------------------------------
3 | test data
3 | more data
(2 rows)
INSERT INTO select_test VALUES (3, 'even more data');
SELECT * FROM select_test WHERE key = 3;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
key | value
-----+----------------
---------------------------------------------------------------------
3 | test data
3 | more data
3 | even more data
@ -92,7 +92,7 @@ TRUNCATE select_test;
INSERT INTO select_test VALUES (3, 'test data');
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -103,7 +103,7 @@ ERROR: canceling statement due to user request
-- cancel after first SELECT; txn should fail and nothing should be marked as invalid
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -118,7 +118,7 @@ WHERE shardid IN (
SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'select_test'::regclass
);
shardstate
------------
---------------------------------------------------------------------
1
(1 row)
@ -127,7 +127,7 @@ TRUNCATE select_test;
-- error after second SELECT; txn should fail
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").after(1).cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -135,7 +135,7 @@ BEGIN;
INSERT INTO select_test VALUES (3, 'more data');
SELECT * FROM select_test WHERE key = 3;
key | value
-----+-----------
---------------------------------------------------------------------
3 | more data
(1 row)
@ -146,7 +146,7 @@ COMMIT;
-- error after second SELECT; txn should work (though placement marked bad)
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").after(1).reset()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -154,18 +154,18 @@ BEGIN;
INSERT INTO select_test VALUES (3, 'more data');
SELECT * FROM select_test WHERE key = 3;
key | value
-----+-----------
---------------------------------------------------------------------
3 | more data
(1 row)
INSERT INTO select_test VALUES (3, 'even more data');
SELECT * FROM select_test WHERE key = 3;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
key | value
-----+----------------
---------------------------------------------------------------------
3 | more data
3 | even more data
(2 rows)
@ -173,13 +173,13 @@ DETAIL: server closed the connection unexpectedly
COMMIT;
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").after(2).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT recover_prepared_transactions();
recover_prepared_transactions
-------------------------------
---------------------------------------------------------------------
0
(1 row)
@ -187,7 +187,7 @@ SELECT recover_prepared_transactions();
ERROR: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
CONTEXT: while executing command on localhost:9060
CONTEXT: while executing command on localhost:xxxxx
-- bug from https://github.com/citusdata/citus/issues/1926
SET citus.max_cached_conns_per_worker TO 0; -- purge cache
DROP TABLE select_test;
@ -196,7 +196,7 @@ SET citus.shard_replication_factor = 1;
CREATE TABLE select_test (key int, value text);
SELECT create_distributed_table('select_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -204,31 +204,31 @@ SET citus.max_cached_conns_per_worker TO 1; -- allow connection to be cached
INSERT INTO select_test VALUES (1, 'test data');
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").after(1).kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM select_test WHERE key = 1;
key | value
-----+-----------
---------------------------------------------------------------------
1 | test data
(1 row)
SELECT * FROM select_test WHERE key = 1;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-- now the same test with query cancellation
SELECT citus.mitmproxy('conn.onQuery(query="^SELECT").after(1).cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
SELECT * FROM select_test WHERE key = 1;
key | value
-----+-----------
---------------------------------------------------------------------
1 | test data
(1 row)

View File

@ -6,7 +6,7 @@ ALTER SYSTEM SET citus.recover_2pc_interval TO -1;
ALTER SYSTEM set citus.enable_statistics_collection TO false;
SELECT pg_reload_conf();
pg_reload_conf
----------------
---------------------------------------------------------------------
t
(1 row)

File diff suppressed because it is too large

View File

@ -4,7 +4,7 @@
SET citus.next_shard_id TO 12000000;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -14,41 +14,41 @@ SET citus.multi_shard_commit_protocol TO '1pc';
CREATE TABLE vacuum_test (key int, value int);
SELECT create_distributed_table('vacuum_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
VACUUM vacuum_test;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
ANALYZE vacuum_test;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -58,7 +58,7 @@ ANALYZE vacuum_test;
SELECT shardid, shardstate FROM pg_dist_shard_placement where shardstate != 1 AND
shardid in ( SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'vacuum_test'::regclass);
shardid | shardstate
----------+------------
---------------------------------------------------------------------
12000000 | 3
(1 row)
@ -69,7 +69,7 @@ WHERE shardid IN (
-- the same tests with cancel
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -77,7 +77,7 @@ VACUUM vacuum_test;
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -86,38 +86,38 @@ ERROR: canceling statement due to user request
-- cancel during COMMIT should be ignored
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
ANALYZE vacuum_test;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
CREATE TABLE other_vacuum_test (key int, value int);
SELECT create_distributed_table('other_vacuum_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
VACUUM vacuum_test, other_vacuum_test;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -126,7 +126,7 @@ ERROR: canceling statement due to user request
-- ==== Clean up, we're done here ====
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)

View File

@ -4,7 +4,7 @@
SET citus.next_shard_id TO 12000000;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -14,41 +14,41 @@ SET citus.multi_shard_commit_protocol TO '1pc';
CREATE TABLE vacuum_test (key int, value int);
SELECT create_distributed_table('vacuum_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.clear_network_traffic();
clear_network_traffic
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
VACUUM vacuum_test;
ERROR: connection error: localhost:9060
ERROR: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
ANALYZE vacuum_test;
WARNING: connection error: localhost:9060
WARNING: connection error: localhost:xxxxx
DETAIL: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -58,7 +58,7 @@ ANALYZE vacuum_test;
SELECT shardid, shardstate FROM pg_dist_shard_placement where shardstate != 1 AND
shardid in ( SELECT shardid FROM pg_dist_shard WHERE logicalrelid = 'vacuum_test'::regclass);
shardid | shardstate
----------+------------
---------------------------------------------------------------------
12000000 | 3
(1 row)
@ -69,7 +69,7 @@ WHERE shardid IN (
-- the same tests with cancel
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -77,7 +77,7 @@ VACUUM vacuum_test;
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="^ANALYZE").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
@ -86,48 +86,44 @@ ERROR: canceling statement due to user request
-- cancel during COMMIT should be ignored
SELECT citus.mitmproxy('conn.onQuery(query="^COMMIT").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
ANALYZE vacuum_test;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
CREATE TABLE other_vacuum_test (key int, value int);
SELECT create_distributed_table('other_vacuum_test', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").kill()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
VACUUM vacuum_test, other_vacuum_test;
ERROR: syntax error at or near ","
LINE 1: VACUUM vacuum_test, other_vacuum_test;
^
SELECT citus.mitmproxy('conn.onQuery(query="^VACUUM.*other").cancel(' || pg_backend_pid() || ')');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
VACUUM vacuum_test, other_vacuum_test;
ERROR: syntax error at or near ","
LINE 1: VACUUM vacuum_test, other_vacuum_test;
^
-- ==== Clean up, we're done here ====
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
---------------------------------------------------------------------
(1 row)
View File
@ -10,7 +10,7 @@ SET citus.shard_replication_factor TO 1;
CREATE TABLE modify_fast_path(key int, value_1 int, value_2 text);
SELECT create_distributed_table('modify_fast_path', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -18,14 +18,14 @@ SET citus.shard_replication_factor TO 2;
CREATE TABLE modify_fast_path_replication_2(key int, value_1 int, value_2 text);
SELECT create_distributed_table('modify_fast_path_replication_2', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE modify_fast_path_reference(key int, value_1 int, value_2 text);
SELECT create_reference_table('modify_fast_path_reference');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -116,7 +116,7 @@ DEBUG: Creating router plan
DEBUG: Plan is router executable
DETAIL: distribution column value: 1
key | value_1 | value_2
-----+---------+---------
---------------------------------------------------------------------
1 | 1 |
(1 row)
@ -125,7 +125,7 @@ DEBUG: Creating router plan
DEBUG: Plan is router executable
DETAIL: distribution column value: 2
value_1 | key
---------+-----
---------------------------------------------------------------------
1 | 2
(1 row)
@ -135,7 +135,7 @@ DEBUG: Creating router plan
DEBUG: Plan is router executable
DETAIL: distribution column value: 2
?column? | ?column?
----------+----------
---------------------------------------------------------------------
15 | 16
(1 row)
@ -146,18 +146,18 @@ ERROR: non-IMMUTABLE functions are not allowed in the RETURNING clause
-- modifying ctes are not supported via fast-path
WITH t1 AS (DELETE FROM modify_fast_path WHERE key = 1), t2 AS (SELECT * FROM modify_fast_path) SELECT * FROM t2;
DEBUG: data-modifying statements are not supported in the WITH clauses of distributed queries
DEBUG: generating subplan 22_1 for CTE t1: DELETE FROM fast_path_router_modify.modify_fast_path WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM fast_path_router_modify.modify_fast_path WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: Distributed planning for a fast-path router query
DEBUG: Creating router plan
DEBUG: Plan is router executable
DETAIL: distribution column value: 1
DEBUG: generating subplan 22_2 for CTE t2: SELECT key, value_1, value_2 FROM fast_path_router_modify.modify_fast_path
DEBUG: generating subplan XXX_2 for CTE t2: SELECT key, value_1, value_2 FROM fast_path_router_modify.modify_fast_path
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: Plan 22 query after replacing subqueries and CTEs: SELECT key, value_1, value_2 FROM (SELECT intermediate_result.key, intermediate_result.value_1, intermediate_result.value_2 FROM read_intermediate_result('22_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value_1 integer, value_2 text)) t2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT key, value_1, value_2 FROM (SELECT intermediate_result.key, intermediate_result.value_1, intermediate_result.value_2 FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value_1 integer, value_2 text)) t2
DEBUG: Creating router plan
DEBUG: Plan is router executable
key | value_1 | value_2
-----+---------+---------
---------------------------------------------------------------------
(0 rows)
-- for update/share is supported via fast-path when replication factor = 1 or reference table
@ -167,7 +167,7 @@ DEBUG: Creating router plan
DEBUG: Plan is router executable
DETAIL: distribution column value: 1
key | value_1 | value_2
-----+---------+---------
---------------------------------------------------------------------
(0 rows)
SELECT * FROM modify_fast_path WHERE key = 1 FOR SHARE;
@ -176,7 +176,7 @@ DEBUG: Creating router plan
DEBUG: Plan is router executable
DETAIL: distribution column value: 1
key | value_1 | value_2
-----+---------+---------
---------------------------------------------------------------------
(0 rows)
SELECT * FROM modify_fast_path_reference WHERE key = 1 FOR UPDATE;
@ -184,7 +184,7 @@ DEBUG: Distributed planning for a fast-path router query
DEBUG: Creating router plan
DEBUG: Plan is router executable
key | value_1 | value_2
-----+---------+---------
---------------------------------------------------------------------
(0 rows)
SELECT * FROM modify_fast_path_reference WHERE key = 1 FOR SHARE;
@ -192,7 +192,7 @@ DEBUG: Distributed planning for a fast-path router query
DEBUG: Creating router plan
DEBUG: Plan is router executable
key | value_1 | value_2
-----+---------+---------
---------------------------------------------------------------------
(0 rows)
-- for update/share is not supported via fast-path when replication factor > 1
@ -282,7 +282,7 @@ DETAIL: distribution column value: 1
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
@ -298,7 +298,7 @@ DETAIL: distribution column value: 2
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
@ -314,7 +314,7 @@ DETAIL: distribution column value: 3
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
@ -330,7 +330,7 @@ DETAIL: distribution column value: 4
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
@ -346,7 +346,7 @@ DETAIL: distribution column value: 5
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
@ -365,7 +365,7 @@ DETAIL: distribution column value: 6
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
@ -381,7 +381,7 @@ DETAIL: distribution column value: 6
CONTEXT: SQL statement "DELETE FROM modify_fast_path WHERE key = $1 AND value_1 = $2"
PL/pgSQL function modify_fast_path_plpsql(integer,integer) line 3 at SQL statement
modify_fast_path_plpsql
-------------------------
---------------------------------------------------------------------
(1 row)
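The `XXX` placeholders above are the Plan-number normalization at work: distributed plan ids are drawn from a counter, so running tests in a different order would otherwise churn every DEBUG line that mentions a plan or its intermediate results. A hedged sketch of rules that would produce this output (again assuming ERE syntax; the committed rules may differ):

```sed
# Distributed plan ids come from a counter; mask them in planner debug
# messages and in the derived intermediate-result names.
s/generating subplan [0-9]+_/generating subplan XXX_/
s/Plan [0-9]+ query after replacing subqueries and CTEs/Plan XXX query after replacing subqueries and CTEs/
s/read_intermediate_result\('[0-9]+_/read_intermediate_result('XXX_/g
s/Subplan [0-9]+_/Subplan XXX_/g
```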
View File
@ -11,28 +11,28 @@ SET citus.shard_replication_factor TO 1;
CREATE TABLE transitive_reference_table(id int PRIMARY KEY);
SELECT create_reference_table('transitive_reference_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE reference_table(id int PRIMARY KEY, value_1 int);
SELECT create_reference_table('reference_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE on_update_fkey_table(id int PRIMARY KEY, value_1 int);
SELECT create_distributed_table('on_update_fkey_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE unrelated_dist_table(id int PRIMARY KEY, value_1 int);
SELECT create_distributed_table('unrelated_dist_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -48,13 +48,13 @@ SET client_min_messages TO DEBUG1;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -62,13 +62,13 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -77,31 +77,31 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 15;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 16;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 17;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 18;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -109,31 +109,31 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 15;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 16;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 17;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE id = 18;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -142,7 +142,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -151,7 +151,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -161,7 +161,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -173,7 +173,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -186,7 +186,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -198,7 +198,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -211,7 +211,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -222,7 +222,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -235,7 +235,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -246,7 +246,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -259,13 +259,13 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM unrelated_dist_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -277,13 +277,13 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM unrelated_dist_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -297,13 +297,13 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM unrelated_dist_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -315,13 +315,13 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM unrelated_dist_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -334,7 +334,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -343,7 +343,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -356,13 +356,13 @@ DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -373,13 +373,13 @@ DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "transitive_reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -391,25 +391,25 @@ DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 99;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 199;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 299;
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 399;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -420,25 +420,25 @@ DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "transitive_reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 99;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 199;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 299;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 101 AND id = 399;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -471,9 +471,9 @@ BEGIN;
DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "transitive_reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
UPDATE on_update_fkey_table SET value_1 = 101 WHERE id = 1;
ERROR: insert or update on table "on_update_fkey_table_2380002" violates foreign key constraint "fkey_2380002"
ERROR: insert or update on table "on_update_fkey_table_xxxxxxx" violates foreign key constraint "fkey_xxxxxxx"
DETAIL: Key (value_1)=(101) is not present in table "reference_table_2380001".
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
UPDATE on_update_fkey_table SET value_1 = 101 WHERE id = 2;
ERROR: current transaction is aborted, commands ignored until end of transaction block
UPDATE on_update_fkey_table SET value_1 = 101 WHERE id = 3;
@ -525,7 +525,7 @@ BEGIN;
DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "transitive_reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
COPY on_update_fkey_table FROM STDIN WITH CSV;
ERROR: insert or update on table "on_update_fkey_table_2380004" violates foreign key constraint "fkey_2380004"
ERROR: insert or update on table "on_update_fkey_table_xxxxxxx" violates foreign key constraint "fkey_xxxxxxx"
DETAIL: Key (value_1)=(101) is not present in table "reference_table_2380001".
ROLLBACK;
-- case 2.8: UPDATE to a reference table is followed by TRUNCATE
@ -550,7 +550,7 @@ DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -561,7 +561,7 @@ DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "transitive_reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -571,7 +571,7 @@ BEGIN;
ALTER TABLE reference_table ALTER COLUMN id SET DATA TYPE int;
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -580,7 +580,7 @@ BEGIN;
ALTER TABLE transitive_reference_table ALTER COLUMN id SET DATA TYPE int;
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -700,20 +700,20 @@ DEBUG: validating foreign key constraint "fkey"
TRUNCATE on_update_fkey_table;
DEBUG: building index "on_update_fkey_table_pkey" on table "on_update_fkey_table" serially
ROLLBACK;
-----
---------------------------------------------------------------------
--- Now, start testing the other way around
-----
---------------------------------------------------------------------
-- case 4.1: SELECT to a dist table is followed by a SELECT to a reference table
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -721,13 +721,13 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -736,7 +736,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -748,7 +748,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -761,7 +761,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -772,7 +772,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -784,7 +784,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -798,7 +798,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -815,7 +815,7 @@ SET client_min_messages to LOG;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -826,7 +826,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE value_1 = 99;
count
-------
---------------------------------------------------------------------
10
(1 row)
@ -839,7 +839,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE id = 9;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -849,7 +849,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table WHERE id = 9;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -864,7 +864,7 @@ BEGIN;
UPDATE on_update_fkey_table SET value_1 = 16 WHERE value_1 = 15;
SELECT count(*) FROM reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -873,7 +873,7 @@ BEGIN;
UPDATE on_update_fkey_table SET value_1 = 16 WHERE value_1 = 15;
SELECT count(*) FROM transitive_reference_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -1034,14 +1034,14 @@ BEGIN;
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int, FOREIGN KEY(value_1) REFERENCES test_table_1(id));
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1055,14 +1055,14 @@ BEGIN;
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE tt4(id int PRIMARY KEY, value_1 int, FOREIGN KEY(id) REFERENCES tt4(id));
SELECT create_distributed_table('tt4', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1084,21 +1084,21 @@ BEGIN;
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE tt4(id int PRIMARY KEY, value_1 int, FOREIGN KEY(id) REFERENCES tt4(id));
SELECT create_distributed_table('tt4', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int, FOREIGN KEY(value_1) REFERENCES test_table_1(id), FOREIGN KEY(id) REFERENCES tt4(id));
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1113,14 +1113,14 @@ BEGIN;
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int);
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1140,14 +1140,14 @@ BEGIN;
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int);
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1163,14 +1163,14 @@ BEGIN;
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int);
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -1190,14 +1190,14 @@ BEGIN;
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int);
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE test_table_1(id int PRIMARY KEY);
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -1212,7 +1212,7 @@ ROLLBACK;
BEGIN;
SELECT count(*) FROM on_update_fkey_table;
count
-------
---------------------------------------------------------------------
1001
(1 row)
@ -1244,7 +1244,7 @@ BEGIN;
SELECT create_reference_table('test_table_1');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -1268,7 +1268,7 @@ BEGIN;
SELECT create_reference_table('test_table_1');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -1288,13 +1288,13 @@ BEGIN;
CREATE TABLE test_table_2(id int PRIMARY KEY, value_1 int, FOREIGN KEY(value_1) REFERENCES test_table_1(id));
SELECT create_reference_table('test_table_1');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1303,13 +1303,13 @@ BEGIN;
ALTER TABLE test_table_2 ADD CONSTRAINT check_val CHECK (id > 0);
SELECT count(*) FROM test_table_2;
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM test_table_1;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -1328,7 +1328,7 @@ DEBUG: CREATE TABLE / PRIMARY KEY will create implicit index "reference_table_p
DEBUG: building index "reference_table_pkey" on table "reference_table" serially
SELECT create_reference_table('reference_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -1337,7 +1337,7 @@ DEBUG: CREATE TABLE / PRIMARY KEY will create implicit index "distributed_table
DEBUG: building index "distributed_table_pkey" on table "distributed_table" serially
SELECT create_distributed_table('distributed_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1359,12 +1359,12 @@ DEBUG: Collecting INSERT ... SELECT results on coordinator
-- see https://github.com/citusdata/citus_docs/issues/664 for the discussion
WITH t1 AS (DELETE FROM reference_table RETURNING id)
DELETE FROM distributed_table USING t1 WHERE value_1 = t1.id RETURNING *;
DEBUG: generating subplan 170_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan 170 query after replacing subqueries and CTEs: DELETE FROM test_fkey_to_ref_in_tx.distributed_table USING (SELECT intermediate_result.id FROM read_intermediate_result('170_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1 WHERE (distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id) RETURNING distributed_table.id, distributed_table.value_1, t1.id
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan XXX query after replacing subqueries and CTEs: DELETE FROM test_fkey_to_ref_in_tx.distributed_table USING (SELECT intermediate_result.id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1 WHERE (distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id) RETURNING distributed_table.id, distributed_table.value_1, t1.id
DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
id | value_1 | id
----+---------+----
---------------------------------------------------------------------
(0 rows)
-- load some more data for one more test with real-time selects
@ -1380,12 +1380,12 @@ DEBUG: Collecting INSERT ... SELECT results on coordinator
-- see https://github.com/citusdata/citus_docs/issues/664 for the discussion
WITH t1 AS (DELETE FROM reference_table RETURNING id)
SELECT count(*) FROM distributed_table, t1 WHERE value_1 = t1.id;
DEBUG: generating subplan 174_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan 174 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM test_fkey_to_ref_in_tx.distributed_table, (SELECT intermediate_result.id FROM read_intermediate_result('174_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1 WHERE (distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id)
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM test_fkey_to_ref_in_tx.distributed_table, (SELECT intermediate_result.id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1 WHERE (distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id)
DEBUG: switching to sequential query execution mode
DETAIL: Reference relation "reference_table" is modified, which might lead to data inconsistencies or distributed deadlocks via parallel accesses to hash distributed relations due to foreign keys. Any parallel modification to those hash distributed relations in the same transaction can only be executed in sequential query execution mode
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -1394,17 +1394,17 @@ DETAIL: Reference relation "reference_table" is modified, which might lead to d
WITH t1 AS (DELETE FROM distributed_table RETURNING id),
t2 AS (DELETE FROM reference_table RETURNING id)
SELECT count(*) FROM distributed_table, t1, t2 WHERE value_1 = t1.id AND value_1 = t2.id;
DEBUG: generating subplan 176_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: generating subplan 176_2 for CTE t2: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan 176 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM test_fkey_to_ref_in_tx.distributed_table, (SELECT intermediate_result.id FROM read_intermediate_result('176_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1, (SELECT intermediate_result.id FROM read_intermediate_result('176_2'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t2 WHERE ((distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id) AND (distributed_table.value_1 OPERATOR(pg_catalog.=) t2.id))
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: generating subplan XXX_2 for CTE t2: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM test_fkey_to_ref_in_tx.distributed_table, (SELECT intermediate_result.id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1, (SELECT intermediate_result.id FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t2 WHERE ((distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id) AND (distributed_table.value_1 OPERATOR(pg_catalog.=) t2.id))
ERROR: cannot execute DML on reference relation "reference_table" because there was a parallel DML access to distributed relation "distributed_table" in the same transaction
HINT: Try re-running the transaction with "SET LOCAL citus.multi_shard_modify_mode TO 'sequential';"
-- similarly this should fail since we first access a distributed
-- table via t1, and then access the reference table in the main query
WITH t1 AS (DELETE FROM distributed_table RETURNING id)
DELETE FROM reference_table RETURNING id;
DEBUG: generating subplan 179_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: Plan 179 query after replacing subqueries and CTEs: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: Plan XXX query after replacing subqueries and CTEs: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
ERROR: cannot execute DML on reference relation "reference_table" because there was a parallel DML access to distributed relation "distributed_table" in the same transaction
HINT: Try re-running the transaction with "SET LOCAL citus.multi_shard_modify_mode TO 'sequential';"
-- finally, make sure that we can execute the same queries
@ -1414,11 +1414,11 @@ BEGIN;
WITH t1 AS (DELETE FROM distributed_table RETURNING id),
t2 AS (DELETE FROM reference_table RETURNING id)
SELECT count(*) FROM distributed_table, t1, t2 WHERE value_1 = t1.id AND value_1 = t2.id;
DEBUG: generating subplan 181_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: generating subplan 181_2 for CTE t2: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan 181 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM test_fkey_to_ref_in_tx.distributed_table, (SELECT intermediate_result.id FROM read_intermediate_result('181_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1, (SELECT intermediate_result.id FROM read_intermediate_result('181_2'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t2 WHERE ((distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id) AND (distributed_table.value_1 OPERATOR(pg_catalog.=) t2.id))
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: generating subplan XXX_2 for CTE t2: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM test_fkey_to_ref_in_tx.distributed_table, (SELECT intermediate_result.id FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t1, (SELECT intermediate_result.id FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(id integer)) t2 WHERE ((distributed_table.value_1 OPERATOR(pg_catalog.=) t1.id) AND (distributed_table.value_1 OPERATOR(pg_catalog.=) t2.id))
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -1427,10 +1427,10 @@ BEGIN;
SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
WITH t1 AS (DELETE FROM distributed_table RETURNING id)
DELETE FROM reference_table RETURNING id;
DEBUG: generating subplan 184_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: Plan 184 query after replacing subqueries and CTEs: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
DEBUG: generating subplan XXX_1 for CTE t1: DELETE FROM test_fkey_to_ref_in_tx.distributed_table RETURNING id
DEBUG: Plan XXX query after replacing subqueries and CTEs: DELETE FROM test_fkey_to_ref_in_tx.reference_table RETURNING id
id
----
---------------------------------------------------------------------
(0 rows)
ROLLBACK;
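One more rule family is visible in this file: shard-local object names embed the shard id (`on_update_fkey_table_2380002`, `fkey_2380002`), which depends on `citus.next_shard_id` and would churn whenever shard layout changes. One way to express the masking seen above, scoped so that the DETAIL lines, which the hunks show keeping the literal name, are untouched (a sketch, not the committed patterns):

```sed
# Mask shard-id suffixes where they leak into error messages; DETAIL lines
# ("... is not present in table ...") keep the literal name in the hunks
# above, so they are deliberately not matched here.
s/on table "([a-z0-9_]+)_[0-9]{6,}"/on table "\1_xxxxxxx"/g
s/constraint "([a-z0-9_]+)_[0-9]{6,}"/constraint "\1_xxxxxxx"/g
```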
File diff suppressed because it is too large
View File
@ -9,19 +9,19 @@ CREATE TABLE test_table_2(id bigint, val1 int);
CREATE TABLE test_table_3(id int, val1 bigint);
SELECT create_distributed_table('test_table_1', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table_3', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -31,7 +31,7 @@ INSERT INTO test_table_3 VALUES(1,1),(3,3),(4,5);
-- Simple full outer join
SELECT id FROM test_table_1 FULL JOIN test_table_3 using(id) ORDER BY 1;
id
----
---------------------------------------------------------------------
1
2
3
@ -41,7 +41,7 @@ SELECT id FROM test_table_1 FULL JOIN test_table_3 using(id) ORDER BY 1;
-- Get all columns as the result of the full join
SELECT * FROM test_table_1 FULL JOIN test_table_3 using(id) ORDER BY 1;
id | val1 | val1
----+------+------
---------------------------------------------------------------------
1 | 1 | 1
2 | 2 |
3 | 3 | 3
@ -56,7 +56,7 @@ SELECT * FROM
USING(id)
ORDER BY 1;
id
----
---------------------------------------------------------------------
1
2
3
@ -72,7 +72,7 @@ SELECT * FROM
USING(id, val1)
ORDER BY 1;
id | val1
----+------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -83,7 +83,7 @@ SELECT * FROM
-- Full join using multiple columns
SELECT * FROM test_table_1 FULL JOIN test_table_3 USING(id, val1) ORDER BY 1;
id | val1
----+------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -98,7 +98,7 @@ GROUP BY id
ORDER BY 2
ASC LIMIT 3;
count | avg_value | not_null
-------+-----------+----------
---------------------------------------------------------------------
1 | 2 | t
1 | 6 | t
1 | 12 | t
@ -109,7 +109,7 @@ FROM test_table_1 FULL JOIN test_table_3 USING(id, val1)
GROUP BY test_table_1.id
ORDER BY 1;
max
-----
---------------------------------------------------------------------
1
2
3
@ -122,7 +122,7 @@ FROM test_table_1 LEFT JOIN test_table_3 USING(id, val1)
GROUP BY test_table_1.id
ORDER BY 1;
max
-----
---------------------------------------------------------------------
1
2
3
@ -139,7 +139,7 @@ INSERT INTO test_table_3 VALUES(7, NULL);
-- Get all columns as the result of the full join
SELECT * FROM test_table_1 FULL JOIN test_table_3 using(id) ORDER BY 1;
id | val1 | val1
----+------+------
---------------------------------------------------------------------
1 | 1 | 1
2 | 2 |
3 | 3 | 3
@ -150,7 +150,7 @@ SELECT * FROM test_table_1 FULL JOIN test_table_3 using(id) ORDER BY 1;
-- Get the same result (with multiple id)
SELECT * FROM test_table_1 FULL JOIN test_table_3 ON (test_table_1.id = test_table_3.id) ORDER BY 1;
id | val1 | id | val1
----+------+----+------
---------------------------------------------------------------------
1 | 1 | 1 | 1
2 | 2 | |
3 | 3 | 3 | 3
@ -161,7 +161,7 @@ SELECT * FROM test_table_1 FULL JOIN test_table_3 ON (test_table_1.id = test_tab
-- Full join using multiple columns
SELECT * FROM test_table_1 FULL JOIN test_table_3 USING(id, val1) ORDER BY 1;
id | val1
----+------
---------------------------------------------------------------------
1 | 1
2 | 2
3 | 3
@ -179,13 +179,13 @@ CREATE TABLE test_table_1(id int, val1 text);
CREATE TABLE test_table_2(id int, val1 varchar(30));
SELECT create_distributed_table('test_table_1', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_table_2', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -194,7 +194,7 @@ INSERT INTO test_table_2 VALUES(2,'val_2'),(3,'val_3'),(4,'val_4'), (5, NULL);
-- Simple full outer join
SELECT id FROM test_table_1 FULL JOIN test_table_2 using(id) ORDER BY 1;
id
----
---------------------------------------------------------------------
1
2
3
@ -205,7 +205,7 @@ SELECT id FROM test_table_1 FULL JOIN test_table_2 using(id) ORDER BY 1;
-- Get all columns as the result of the full join
SELECT * FROM test_table_1 FULL JOIN test_table_2 using(id) ORDER BY 1;
id | val1 | val1
----+-------+-------
---------------------------------------------------------------------
1 | val_1 |
2 | val_2 | val_2
3 | val_3 | val_3
@ -221,7 +221,7 @@ SELECT * FROM
USING(id, val1)
ORDER BY 1,2;
id | val1
----+-------
---------------------------------------------------------------------
1 | val_1
2 | val_2
3 | val_3
@ -235,7 +235,7 @@ SELECT * FROM
-- Full join using multiple columns
SELECT * FROM test_table_1 FULL JOIN test_table_2 USING(id, val1) ORDER BY 1,2;
id | val1
----+-------
---------------------------------------------------------------------
1 | val_1
2 | val_2
3 | val_3

View File

@ -7,28 +7,28 @@ SET citus.shard_replication_factor = 1;
CREATE TABLE table_1 (key int, value text);
SELECT create_distributed_table('table_1', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table_2 (key int, value text);
SELECT create_distributed_table('table_2', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table_3 (key int, value text);
SELECT create_distributed_table('table_3', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE ref_table (key int, value text);
SELECT create_reference_table('ref_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -47,12 +47,12 @@ SELECT
count(*)
FROM
some_values_1 JOIN table_2 USING (key);
DEBUG: generating subplan 5_1 for CTE some_values_1: SELECT key FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan 5 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key FROM read_intermediate_result('5_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Subplan 5_1 will be sent to localhost:57637
DEBUG: Subplan 5_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
2
(1 row)
@ -65,11 +65,11 @@ SELECT
count(*)
FROM
some_values_1 JOIN table_2 USING (key) WHERE table_2.key = 1;
DEBUG: generating subplan 7_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan 7 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('7_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Subplan 7_1 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -82,12 +82,12 @@ SELECT
count(*)
FROM
some_values_1 JOIN ref_table USING (key);
DEBUG: generating subplan 9_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan 9 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('9_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.ref_table USING (key))
DEBUG: Subplan 9_1 will be sent to localhost:57637
DEBUG: Subplan 9_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.ref_table USING (key))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
2
(1 row)
@ -101,13 +101,13 @@ SELECT
count(*)
FROM
some_values_2 JOIN table_2 USING (key) WHERE table_2.key = 1;
DEBUG: generating subplan 11_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 11_2 for CTE some_values_2: SELECT key, random() AS random FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('11_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1
DEBUG: Plan 11 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('11_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Subplan 11_1 will be sent to localhost:57638
DEBUG: Subplan 11_2 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT key, random() AS random FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -121,14 +121,14 @@ SELECT
count(*)
FROM
some_values_2 JOIN table_2 USING (key) WHERE table_2.key = 3;
DEBUG: generating subplan 14_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 14_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('14_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Plan 14 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('14_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Subplan 14_1 will be sent to localhost:57637
DEBUG: Subplan 14_1 will be sent to localhost:57638
DEBUG: Subplan 14_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
1
(1 row)
@@ -143,14 +143,14 @@ SELECT
count(*)
FROM
(some_values_2 JOIN table_2 USING (key)) JOIN some_values_1 USING (key) WHERE table_2.key = 3;
DEBUG: generating subplan 17_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 17_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('17_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Plan 17 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('17_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('17_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Subplan 17_1 will be sent to localhost:57638
DEBUG: Subplan 17_1 will be sent to localhost:57637
DEBUG: Subplan 17_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
1
(1 row)
@@ -165,14 +165,14 @@ SELECT
count(*)
FROM
(some_values_2 JOIN table_2 USING (key)) JOIN some_values_1 USING (key) WHERE table_2.key = 3;
DEBUG: generating subplan 20_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 20_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('20_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan 20 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('20_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('20_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Subplan 20_1 will be sent to localhost:57638
DEBUG: Subplan 20_1 will be sent to localhost:57637
DEBUG: Subplan 20_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -188,13 +188,13 @@ SELECT
count(*)
FROM
(some_values_2 JOIN table_2 USING (key)) JOIN some_values_1 USING (key) WHERE table_2.key = 1;
DEBUG: generating subplan 23_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 23_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('23_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan 23 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('23_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('23_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Subplan 23_1 will be sent to localhost:57637
DEBUG: Subplan 23_2 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 1)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -207,15 +207,15 @@ SELECT
count(*)
FROM
(some_values_2 JOIN table_2 USING (key)) JOIN some_values_1 USING (key) WHERE table_2.key != 3;
DEBUG: generating subplan 26_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 26_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('26_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Plan 26 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('26_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('26_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.<>) 3)
DEBUG: Subplan 26_1 will be sent to localhost:57637
DEBUG: Subplan 26_1 will be sent to localhost:57638
DEBUG: Subplan 26_2 will be sent to localhost:57637
DEBUG: Subplan 26_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.<>) 3)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
1
(1 row)
@@ -230,15 +230,15 @@ SELECT
count(*)
FROM
(some_values_2 JOIN table_2 USING (key)) JOIN some_values_1 USING (key) WHERE table_2.key != 3;
DEBUG: generating subplan 29_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 29_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('29_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Plan 29 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('29_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('29_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.<>) 3)
DEBUG: Subplan 29_1 will be sent to localhost:57637
DEBUG: Subplan 29_1 will be sent to localhost:57638
DEBUG: Subplan 29_2 will be sent to localhost:57637
DEBUG: Subplan 29_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 3)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.<>) 3)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -251,12 +251,12 @@ SELECT
count(*)
FROM
(some_values_1 JOIN ref_table USING (key)) JOIN table_2 USING (key);
DEBUG: generating subplan 32_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.ref_table WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan 32 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('32_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.ref_table USING (key)) JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Subplan 32_1 will be sent to localhost:57637
DEBUG: Subplan 32_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.ref_table WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.ref_table USING (key)) JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
2
(1 row)
@@ -269,7 +269,7 @@ SELECT
FROM
(some_values_1 JOIN ref_table USING (key)) JOIN table_2 USING (key) WHERE table_2.key = 1;
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -285,13 +285,13 @@ SELECT
count(*)
FROM
some_values_2;
DEBUG: generating subplan 35_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 35_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('35_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan 35 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('35_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2
DEBUG: Subplan 35_1 will be sent to localhost:57637
DEBUG: Subplan 35_2 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -311,17 +311,17 @@ SELECT
count(*)
FROM
top_cte JOIN table_2 USING (key);
DEBUG: generating subplan 38_1 for CTE top_cte: WITH some_values_1 AS (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (table_1.value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))), some_values_2 AS (SELECT some_values_1.key, random() AS random FROM (some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)) SELECT DISTINCT key FROM some_values_2
DEBUG: generating subplan 39_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 39_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('39_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan 39 query after replacing subqueries and CTEs: SELECT DISTINCT key FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('39_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2
DEBUG: Plan 38 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key FROM read_intermediate_result('38_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) top_cte JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Subplan 38_1 will be sent to localhost:57637
DEBUG: Subplan 38_1 will be sent to localhost:57638
DEBUG: Subplan 39_1 will be sent to localhost:57637
DEBUG: Subplan 39_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE top_cte: WITH some_values_1 AS (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (table_1.value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))), some_values_2 AS (SELECT some_values_1.key, random() AS random FROM (some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)) SELECT DISTINCT key FROM some_values_2
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT DISTINCT key FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) top_cte JOIN intermediate_result_pruning.table_2 USING (key))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -341,16 +341,16 @@ SELECT
count(*)
FROM
top_cte JOIN table_2 USING (key) WHERE table_2.key = 2;
DEBUG: generating subplan 42_1 for CTE top_cte: WITH some_values_1 AS (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (table_1.value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))), some_values_2 AS (SELECT some_values_1.key, random() AS random FROM (some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)) SELECT DISTINCT key FROM some_values_2
DEBUG: generating subplan 43_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 43_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('43_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan 43 query after replacing subqueries and CTEs: SELECT DISTINCT key FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('43_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2
DEBUG: Plan 42 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key FROM read_intermediate_result('42_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) top_cte JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 2)
DEBUG: Subplan 42_1 will be sent to localhost:57638
DEBUG: Subplan 43_1 will be sent to localhost:57637
DEBUG: Subplan 43_2 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE top_cte: WITH some_values_1 AS (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (table_1.value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))), some_values_2 AS (SELECT some_values_1.key, random() AS random FROM (some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)) SELECT DISTINCT key FROM some_values_2
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT DISTINCT key FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) top_cte JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (table_2.key OPERATOR(pg_catalog.=) 2)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -363,18 +363,18 @@ WITH some_values_1 AS
some_values_3 AS
(SELECT key FROM (some_values_2 JOIN table_2 USING (key)) JOIN some_values_1 USING (key))
SELECT * FROM some_values_3 JOIN ref_table ON (true);
DEBUG: generating subplan 46_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 46_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('46_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan 46_3 for CTE some_values_3: SELECT some_values_2.key FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('46_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('46_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key))
DEBUG: Plan 46 query after replacing subqueries and CTEs: SELECT some_values_3.key, ref_table.key, ref_table.value FROM ((SELECT intermediate_result.key FROM read_intermediate_result('46_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) some_values_3 JOIN intermediate_result_pruning.ref_table ON (true))
DEBUG: Subplan 46_1 will be sent to localhost:57637
DEBUG: Subplan 46_1 will be sent to localhost:57638
DEBUG: Subplan 46_2 will be sent to localhost:57637
DEBUG: Subplan 46_2 will be sent to localhost:57638
DEBUG: Subplan 46_3 will be sent to localhost:57637
DEBUG: Subplan 46_3 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT some_values_1.key, random() AS random FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 JOIN intermediate_result_pruning.table_2 USING (key)) WHERE (some_values_1.key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_3 for CTE some_values_3: SELECT some_values_2.key FROM (((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN intermediate_result_pruning.table_2 USING (key)) JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT some_values_3.key, ref_table.key, ref_table.value FROM ((SELECT intermediate_result.key FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) some_values_3 JOIN intermediate_result_pruning.ref_table ON (true))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
key | key | value
-----+-----+-------
---------------------------------------------------------------------
(0 rows)
-- join on intermediate results, so should only
@@ -384,13 +384,13 @@ WITH some_values_1 AS
some_values_2 AS
(SELECT key, random() FROM table_2 WHERE value IN ('3', '4'))
SELECT count(*) FROM some_values_2 JOIN some_values_1 USING (key);
DEBUG: generating subplan 50_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 50_2 for CTE some_values_2: SELECT key, random() AS random FROM intermediate_result_pruning.table_2 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan 50 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('50_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('50_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key))
DEBUG: Subplan 50_1 will be sent to localhost:57638
DEBUG: Subplan 50_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT key, random() AS random FROM intermediate_result_pruning.table_2 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
2
(1 row)
@@ -401,13 +401,13 @@ WITH some_values_1 AS
some_values_2 AS
(SELECT key, random() FROM table_2 WHERE value IN ('3', '4'))
SELECT count(*) FROM some_values_2 JOIN some_values_1 USING (key) WHERE false;
DEBUG: generating subplan 53_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 53_2 for CTE some_values_2: SELECT key, random() AS random FROM intermediate_result_pruning.table_2 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan 53 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('53_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('53_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE false
DEBUG: Subplan 53_1 will be sent to localhost:57637
DEBUG: Subplan 53_2 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_2: SELECT key, random() AS random FROM intermediate_result_pruning.table_2 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM ((SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_2 JOIN (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1 USING (key)) WHERE false
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -423,13 +423,13 @@ SELECT
count(*)
FROM
some_values_3;
DEBUG: generating subplan 56_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan 56_2 for CTE some_values_3: SELECT key, random() AS random FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('56_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1
DEBUG: Plan 56 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('56_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_3
DEBUG: Subplan 56_1 will be sent to localhost:57638
DEBUG: Subplan 56_2 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE some_values_1: SELECT key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (value OPERATOR(pg_catalog.=) ANY (ARRAY['3'::text, '4'::text]))
DEBUG: generating subplan XXX_2 for CTE some_values_3: SELECT key, random() AS random FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_1
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) some_values_3
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
2
(1 row)
@@ -472,24 +472,24 @@ SELECT count(*) FROM
) as level_6, table_1 WHERE table_1.key::int = level_6.min::int
GROUP BY table_1.value
) as bar;
DEBUG: generating subplan 59_1 for subquery SELECT count(*) AS cnt, value FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1) GROUP BY value
DEBUG: generating subplan 59_2 for subquery SELECT avg((table_2.value)::integer) AS avg FROM (SELECT level_1.cnt FROM (SELECT intermediate_result.cnt, intermediate_result.value FROM read_intermediate_result('59_1'::text, 'binary'::citus_copy_format) intermediate_result(cnt bigint, value text)) level_1, intermediate_result_pruning.table_1 WHERE ((table_1.key OPERATOR(pg_catalog.=) level_1.cnt) AND (table_1.key OPERATOR(pg_catalog.=) 3))) level_2, intermediate_result_pruning.table_2 WHERE ((table_2.key OPERATOR(pg_catalog.=) level_2.cnt) AND (table_2.key OPERATOR(pg_catalog.=) 5)) GROUP BY level_2.cnt
DEBUG: generating subplan 59_3 for subquery SELECT max(table_1.value) AS mx_val_1 FROM (SELECT intermediate_result.avg FROM read_intermediate_result('59_2'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) level_3, intermediate_result_pruning.table_1 WHERE (((table_1.value)::numeric OPERATOR(pg_catalog.=) level_3.avg) AND (table_1.key OPERATOR(pg_catalog.=) 6)) GROUP BY level_3.avg
DEBUG: generating subplan 59_4 for subquery SELECT avg((table_2.value)::integer) AS avg_ev_type FROM (SELECT intermediate_result.mx_val_1 FROM read_intermediate_result('59_3'::text, 'binary'::citus_copy_format) intermediate_result(mx_val_1 text)) level_4, intermediate_result_pruning.table_2 WHERE ((level_4.mx_val_1)::integer OPERATOR(pg_catalog.=) table_2.key) GROUP BY level_4.mx_val_1
DEBUG: generating subplan 59_5 for subquery SELECT min(table_1.value) AS min FROM (SELECT intermediate_result.avg_ev_type FROM read_intermediate_result('59_4'::text, 'binary'::citus_copy_format) intermediate_result(avg_ev_type numeric)) level_5, intermediate_result_pruning.table_1 WHERE ((level_5.avg_ev_type OPERATOR(pg_catalog.=) (table_1.key)::numeric) AND (table_1.key OPERATOR(pg_catalog.>) 111)) GROUP BY level_5.avg_ev_type
DEBUG: generating subplan 59_6 for subquery SELECT avg((level_6.min)::integer) AS avg FROM (SELECT intermediate_result.min FROM read_intermediate_result('59_5'::text, 'binary'::citus_copy_format) intermediate_result(min text)) level_6, intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) (level_6.min)::integer) GROUP BY table_1.value
DEBUG: Plan 59 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.avg FROM read_intermediate_result('59_6'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) bar
DEBUG: Subplan 59_1 will be sent to localhost:57638
DEBUG: Subplan 59_2 will be sent to localhost:57637
DEBUG: Subplan 59_3 will be sent to localhost:57637
DEBUG: Subplan 59_3 will be sent to localhost:57638
DEBUG: Subplan 59_4 will be sent to localhost:57637
DEBUG: Subplan 59_4 will be sent to localhost:57638
DEBUG: Subplan 59_5 will be sent to localhost:57637
DEBUG: Subplan 59_5 will be sent to localhost:57638
DEBUG: Subplan 59_6 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for subquery SELECT count(*) AS cnt, value FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1) GROUP BY value
DEBUG: generating subplan XXX_2 for subquery SELECT avg((table_2.value)::integer) AS avg FROM (SELECT level_1.cnt FROM (SELECT intermediate_result.cnt, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(cnt bigint, value text)) level_1, intermediate_result_pruning.table_1 WHERE ((table_1.key OPERATOR(pg_catalog.=) level_1.cnt) AND (table_1.key OPERATOR(pg_catalog.=) 3))) level_2, intermediate_result_pruning.table_2 WHERE ((table_2.key OPERATOR(pg_catalog.=) level_2.cnt) AND (table_2.key OPERATOR(pg_catalog.=) 5)) GROUP BY level_2.cnt
DEBUG: generating subplan XXX_3 for subquery SELECT max(table_1.value) AS mx_val_1 FROM (SELECT intermediate_result.avg FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) level_3, intermediate_result_pruning.table_1 WHERE (((table_1.value)::numeric OPERATOR(pg_catalog.=) level_3.avg) AND (table_1.key OPERATOR(pg_catalog.=) 6)) GROUP BY level_3.avg
DEBUG: generating subplan XXX_4 for subquery SELECT avg((table_2.value)::integer) AS avg_ev_type FROM (SELECT intermediate_result.mx_val_1 FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(mx_val_1 text)) level_4, intermediate_result_pruning.table_2 WHERE ((level_4.mx_val_1)::integer OPERATOR(pg_catalog.=) table_2.key) GROUP BY level_4.mx_val_1
DEBUG: generating subplan XXX_5 for subquery SELECT min(table_1.value) AS min FROM (SELECT intermediate_result.avg_ev_type FROM read_intermediate_result('XXX_4'::text, 'binary'::citus_copy_format) intermediate_result(avg_ev_type numeric)) level_5, intermediate_result_pruning.table_1 WHERE ((level_5.avg_ev_type OPERATOR(pg_catalog.=) (table_1.key)::numeric) AND (table_1.key OPERATOR(pg_catalog.>) 111)) GROUP BY level_5.avg_ev_type
DEBUG: generating subplan XXX_6 for subquery SELECT avg((level_6.min)::integer) AS avg FROM (SELECT intermediate_result.min FROM read_intermediate_result('XXX_5'::text, 'binary'::citus_copy_format) intermediate_result(min text)) level_6, intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) (level_6.min)::integer) GROUP BY table_1.value
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.avg FROM read_intermediate_result('XXX_6'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) bar
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_4 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_4 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_5 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_5 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_6 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -531,21 +531,21 @@ SELECT count(*) FROM
WHERE table_1.key::int = level_6.min::int AND table_1.key = 4
GROUP BY table_1.value
) as bar;
DEBUG: generating subplan 66_1 for subquery SELECT count(*) AS cnt, value FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1) GROUP BY value
DEBUG: generating subplan 66_2 for subquery SELECT avg((table_2.value)::integer) AS avg FROM (SELECT level_1.cnt FROM (SELECT intermediate_result.cnt, intermediate_result.value FROM read_intermediate_result('66_1'::text, 'binary'::citus_copy_format) intermediate_result(cnt bigint, value text)) level_1, intermediate_result_pruning.table_1 WHERE ((table_1.key OPERATOR(pg_catalog.=) level_1.cnt) AND (table_1.key OPERATOR(pg_catalog.=) 3))) level_2, intermediate_result_pruning.table_2 WHERE ((table_2.key OPERATOR(pg_catalog.=) level_2.cnt) AND (table_2.key OPERATOR(pg_catalog.=) 5)) GROUP BY level_2.cnt
DEBUG: generating subplan 66_3 for subquery SELECT max(table_1.value) AS mx_val_1 FROM (SELECT intermediate_result.avg FROM read_intermediate_result('66_2'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) level_3, intermediate_result_pruning.table_1 WHERE (((table_1.value)::numeric OPERATOR(pg_catalog.=) level_3.avg) AND (table_1.key OPERATOR(pg_catalog.=) 6)) GROUP BY level_3.avg
DEBUG: generating subplan 66_4 for subquery SELECT avg((table_2.value)::integer) AS avg_ev_type FROM (SELECT intermediate_result.mx_val_1 FROM read_intermediate_result('66_3'::text, 'binary'::citus_copy_format) intermediate_result(mx_val_1 text)) level_4, intermediate_result_pruning.table_2 WHERE (((level_4.mx_val_1)::integer OPERATOR(pg_catalog.=) table_2.key) AND (table_2.key OPERATOR(pg_catalog.=) 1)) GROUP BY level_4.mx_val_1
DEBUG: generating subplan 66_5 for subquery SELECT min(table_1.value) AS min FROM (SELECT intermediate_result.avg_ev_type FROM read_intermediate_result('66_4'::text, 'binary'::citus_copy_format) intermediate_result(avg_ev_type numeric)) level_5, intermediate_result_pruning.table_1 WHERE ((level_5.avg_ev_type OPERATOR(pg_catalog.=) (table_1.key)::numeric) AND (table_1.key OPERATOR(pg_catalog.=) 111)) GROUP BY level_5.avg_ev_type
DEBUG: generating subplan 66_6 for subquery SELECT avg((level_6.min)::integer) AS avg FROM (SELECT intermediate_result.min FROM read_intermediate_result('66_5'::text, 'binary'::citus_copy_format) intermediate_result(min text)) level_6, intermediate_result_pruning.table_1 WHERE ((table_1.key OPERATOR(pg_catalog.=) (level_6.min)::integer) AND (table_1.key OPERATOR(pg_catalog.=) 4)) GROUP BY table_1.value
DEBUG: Plan 66 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.avg FROM read_intermediate_result('66_6'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) bar
DEBUG: Subplan 66_1 will be sent to localhost:57638
DEBUG: Subplan 66_2 will be sent to localhost:57637
DEBUG: Subplan 66_3 will be sent to localhost:57637
DEBUG: Subplan 66_4 will be sent to localhost:57638
DEBUG: Subplan 66_5 will be sent to localhost:57638
DEBUG: Subplan 66_6 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for subquery SELECT count(*) AS cnt, value FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1) GROUP BY value
DEBUG: generating subplan XXX_2 for subquery SELECT avg((table_2.value)::integer) AS avg FROM (SELECT level_1.cnt FROM (SELECT intermediate_result.cnt, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(cnt bigint, value text)) level_1, intermediate_result_pruning.table_1 WHERE ((table_1.key OPERATOR(pg_catalog.=) level_1.cnt) AND (table_1.key OPERATOR(pg_catalog.=) 3))) level_2, intermediate_result_pruning.table_2 WHERE ((table_2.key OPERATOR(pg_catalog.=) level_2.cnt) AND (table_2.key OPERATOR(pg_catalog.=) 5)) GROUP BY level_2.cnt
DEBUG: generating subplan XXX_3 for subquery SELECT max(table_1.value) AS mx_val_1 FROM (SELECT intermediate_result.avg FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) level_3, intermediate_result_pruning.table_1 WHERE (((table_1.value)::numeric OPERATOR(pg_catalog.=) level_3.avg) AND (table_1.key OPERATOR(pg_catalog.=) 6)) GROUP BY level_3.avg
DEBUG: generating subplan XXX_4 for subquery SELECT avg((table_2.value)::integer) AS avg_ev_type FROM (SELECT intermediate_result.mx_val_1 FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(mx_val_1 text)) level_4, intermediate_result_pruning.table_2 WHERE (((level_4.mx_val_1)::integer OPERATOR(pg_catalog.=) table_2.key) AND (table_2.key OPERATOR(pg_catalog.=) 1)) GROUP BY level_4.mx_val_1
DEBUG: generating subplan XXX_5 for subquery SELECT min(table_1.value) AS min FROM (SELECT intermediate_result.avg_ev_type FROM read_intermediate_result('XXX_4'::text, 'binary'::citus_copy_format) intermediate_result(avg_ev_type numeric)) level_5, intermediate_result_pruning.table_1 WHERE ((level_5.avg_ev_type OPERATOR(pg_catalog.=) (table_1.key)::numeric) AND (table_1.key OPERATOR(pg_catalog.=) 111)) GROUP BY level_5.avg_ev_type
DEBUG: generating subplan XXX_6 for subquery SELECT avg((level_6.min)::integer) AS avg FROM (SELECT intermediate_result.min FROM read_intermediate_result('XXX_5'::text, 'binary'::citus_copy_format) intermediate_result(min text)) level_6, intermediate_result_pruning.table_1 WHERE ((table_1.key OPERATOR(pg_catalog.=) (level_6.min)::integer) AND (table_1.key OPERATOR(pg_catalog.=) 4)) GROUP BY table_1.value
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT intermediate_result.avg FROM read_intermediate_result('XXX_6'::text, 'binary'::citus_copy_format) intermediate_result(avg numeric)) bar
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_4 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_5 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_6 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@@ -554,13 +554,13 @@ DEBUG: Subplan 66_6 will be sent to localhost:57637
(SELECT key FROM table_1 WHERE key = 1)
INTERSECT
(SELECT key FROM table_1 WHERE key = 2);
DEBUG: generating subplan 73_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan 73_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan 73 query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('73_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('73_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: Subplan 73_1 will be sent to localhost:57638
DEBUG: Subplan 73_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
key
-----
---------------------------------------------------------------------
(0 rows)
-- the intermediate results should just hit a single worker
@@ -579,18 +579,18 @@ cte_2 AS
SELECT * FROM cte_1
UNION
SELECT * FROM cte_2;
DEBUG: generating subplan 76_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan 77_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan 77_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan 77 query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('77_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('77_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan 76_2 for CTE cte_2: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 3) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 4)
DEBUG: Plan 76 query after replacing subqueries and CTEs: SELECT cte_1.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('76_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 UNION SELECT cte_2.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('76_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_2
DEBUG: Subplan 76_1 will be sent to localhost:57638
DEBUG: Subplan 77_1 will be sent to localhost:57637
DEBUG: Subplan 77_2 will be sent to localhost:57637
DEBUG: Subplan 76_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan XXX_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan XXX_2 for CTE cte_2: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 3) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 4)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT cte_1.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 UNION SELECT cte_2.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_2
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
key
-----
---------------------------------------------------------------------
(0 rows)
-- one final test with SET operations, where
@@ -608,19 +608,19 @@ cte_2 AS
SELECT count(*) FROM table_1 JOIN cte_1 USING (key)
)
SELECT * FROM cte_2;
DEBUG: generating subplan 81_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan 82_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan 82_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan 82 query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('82_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('82_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan 81_2 for CTE cte_2: SELECT count(*) AS count FROM (intermediate_result_pruning.table_1 JOIN (SELECT intermediate_result.key FROM read_intermediate_result('81_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 USING (key))
DEBUG: Plan 81 query after replacing subqueries and CTEs: SELECT count FROM (SELECT intermediate_result.count FROM read_intermediate_result('81_2'::text, 'binary'::citus_copy_format) intermediate_result(count bigint)) cte_2
DEBUG: Subplan 81_1 will be sent to localhost:57637
DEBUG: Subplan 81_1 will be sent to localhost:57638
DEBUG: Subplan 82_1 will be sent to localhost:57637
DEBUG: Subplan 82_2 will be sent to localhost:57637
DEBUG: Subplan 81_2 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan XXX_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan XXX_2 for CTE cte_2: SELECT count(*) AS count FROM (intermediate_result_pruning.table_1 JOIN (SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 USING (key))
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count FROM (SELECT intermediate_result.count FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(count bigint)) cte_2
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -634,12 +634,12 @@ FROM
(SELECT key, random() FROM table_2) as bar
WHERE
foo.key != bar.key;
DEBUG: generating subplan 86_1 for subquery SELECT key, random() AS random FROM intermediate_result_pruning.table_2
DEBUG: Plan 86 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1) foo, (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('86_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) bar WHERE (foo.key OPERATOR(pg_catalog.<>) bar.key)
DEBUG: Subplan 86_1 will be sent to localhost:57637
DEBUG: Subplan 86_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for subquery SELECT key, random() AS random FROM intermediate_result_pruning.table_2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1) foo, (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) bar WHERE (foo.key OPERATOR(pg_catalog.<>) bar.key)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
14
(1 row)
@ -652,11 +652,11 @@ FROM
(SELECT key, random() FROM table_2) as bar
WHERE
foo.key != bar.key;
DEBUG: generating subplan 88_1 for subquery SELECT key, random() AS random FROM intermediate_result_pruning.table_2
DEBUG: Plan 88 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1)) foo, (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('88_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) bar WHERE (foo.key OPERATOR(pg_catalog.<>) bar.key)
DEBUG: Subplan 88_1 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for subquery SELECT key, random() AS random FROM intermediate_result_pruning.table_2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM (SELECT table_1.key, random() AS random FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1)) foo, (SELECT intermediate_result.key, intermediate_result.random FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, random double precision)) bar WHERE (foo.key OPERATOR(pg_catalog.<>) bar.key)
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
4
(1 row)
@ -673,17 +673,17 @@ raw_data AS (
DELETE FROM table_2 WHERE key >= (SELECT min(key) FROM select_data WHERE key > 1) RETURNING *
)
SELECT * FROM raw_data;
DEBUG: generating subplan 90_1 for CTE select_data: SELECT key, value FROM intermediate_result_pruning.table_1
DEBUG: generating subplan 90_2 for CTE raw_data: DELETE FROM intermediate_result_pruning.table_2 WHERE (key OPERATOR(pg_catalog.>=) (SELECT min(select_data.key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('90_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE (select_data.key OPERATOR(pg_catalog.>) 1))) RETURNING key, value
DEBUG: generating subplan 92_1 for subquery SELECT min(key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('90_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE (key OPERATOR(pg_catalog.>) 1)
DEBUG: Plan 92 query after replacing subqueries and CTEs: DELETE FROM intermediate_result_pruning.table_2 WHERE (key OPERATOR(pg_catalog.>=) (SELECT intermediate_result.min FROM read_intermediate_result('92_1'::text, 'binary'::citus_copy_format) intermediate_result(min integer))) RETURNING key, value
DEBUG: Plan 90 query after replacing subqueries and CTEs: SELECT key, value FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('90_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) raw_data
DEBUG: Subplan 90_1 will be sent to localhost:57637
DEBUG: Subplan 90_2 will be sent to localhost:57638
DEBUG: Subplan 92_1 will be sent to localhost:57637
DEBUG: Subplan 92_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE select_data: SELECT key, value FROM intermediate_result_pruning.table_1
DEBUG: generating subplan XXX_2 for CTE raw_data: DELETE FROM intermediate_result_pruning.table_2 WHERE (key OPERATOR(pg_catalog.>=) (SELECT min(select_data.key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE (select_data.key OPERATOR(pg_catalog.>) 1))) RETURNING key, value
DEBUG: generating subplan XXX_1 for subquery SELECT min(key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE (key OPERATOR(pg_catalog.>) 1)
DEBUG: Plan XXX query after replacing subqueries and CTEs: DELETE FROM intermediate_result_pruning.table_2 WHERE (key OPERATOR(pg_catalog.>=) (SELECT intermediate_result.min FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(min integer))) RETURNING key, value
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT key, value FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) raw_data
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
key | value
-----+-------
---------------------------------------------------------------------
3 | 3
4 | 4
5 | 5
@ -703,17 +703,17 @@ raw_data AS (
DELETE FROM table_2 WHERE value::int >= (SELECT min(key) FROM select_data WHERE key > 1 + random()) RETURNING *
)
SELECT * FROM raw_data;
DEBUG: generating subplan 94_1 for CTE select_data: SELECT key, value FROM intermediate_result_pruning.table_1
DEBUG: generating subplan 94_2 for CTE raw_data: DELETE FROM intermediate_result_pruning.table_2 WHERE ((value)::integer OPERATOR(pg_catalog.>=) (SELECT min(select_data.key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('94_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE ((select_data.key)::double precision OPERATOR(pg_catalog.>) ((1)::double precision OPERATOR(pg_catalog.+) random())))) RETURNING key, value
DEBUG: generating subplan 96_1 for subquery SELECT min(key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('94_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE ((key)::double precision OPERATOR(pg_catalog.>) ((1)::double precision OPERATOR(pg_catalog.+) random()))
DEBUG: Plan 96 query after replacing subqueries and CTEs: DELETE FROM intermediate_result_pruning.table_2 WHERE ((value)::integer OPERATOR(pg_catalog.>=) (SELECT intermediate_result.min FROM read_intermediate_result('96_1'::text, 'binary'::citus_copy_format) intermediate_result(min integer))) RETURNING key, value
DEBUG: Plan 94 query after replacing subqueries and CTEs: SELECT key, value FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('94_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) raw_data
DEBUG: Subplan 94_1 will be sent to localhost:57637
DEBUG: Subplan 94_2 will be sent to localhost:57638
DEBUG: Subplan 96_1 will be sent to localhost:57637
DEBUG: Subplan 96_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE select_data: SELECT key, value FROM intermediate_result_pruning.table_1
DEBUG: generating subplan XXX_2 for CTE raw_data: DELETE FROM intermediate_result_pruning.table_2 WHERE ((value)::integer OPERATOR(pg_catalog.>=) (SELECT min(select_data.key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE ((select_data.key)::double precision OPERATOR(pg_catalog.>) ((1)::double precision OPERATOR(pg_catalog.+) random())))) RETURNING key, value
DEBUG: generating subplan XXX_1 for subquery SELECT min(key) AS min FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) select_data WHERE ((key)::double precision OPERATOR(pg_catalog.>) ((1)::double precision OPERATOR(pg_catalog.+) random()))
DEBUG: Plan XXX query after replacing subqueries and CTEs: DELETE FROM intermediate_result_pruning.table_2 WHERE ((value)::integer OPERATOR(pg_catalog.>=) (SELECT intermediate_result.min FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(min integer))) RETURNING key, value
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT key, value FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) raw_data
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
key | value
-----+-------
---------------------------------------------------------------------
3 | 3
4 | 4
5 | 5
@ -731,14 +731,14 @@ raw_data AS (
DELETE FROM table_2 WHERE value::int >= (SELECT min(key) FROM table_1 WHERE key > random()) AND key = 6 RETURNING *
)
SELECT * FROM raw_data;
DEBUG: generating subplan 98_1 for CTE raw_data: DELETE FROM intermediate_result_pruning.table_2 WHERE (((value)::integer OPERATOR(pg_catalog.>=) (SELECT min(table_1.key) AS min FROM intermediate_result_pruning.table_1 WHERE ((table_1.key)::double precision OPERATOR(pg_catalog.>) random()))) AND (key OPERATOR(pg_catalog.=) 6)) RETURNING key, value
DEBUG: generating subplan 99_1 for subquery SELECT min(key) AS min FROM intermediate_result_pruning.table_1 WHERE ((key)::double precision OPERATOR(pg_catalog.>) random())
DEBUG: Plan 99 query after replacing subqueries and CTEs: DELETE FROM intermediate_result_pruning.table_2 WHERE (((value)::integer OPERATOR(pg_catalog.>=) (SELECT intermediate_result.min FROM read_intermediate_result('99_1'::text, 'binary'::citus_copy_format) intermediate_result(min integer))) AND (key OPERATOR(pg_catalog.=) 6)) RETURNING key, value
DEBUG: Plan 98 query after replacing subqueries and CTEs: SELECT key, value FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('98_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) raw_data
DEBUG: Subplan 98_1 will be sent to localhost:57637
DEBUG: Subplan 99_1 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE raw_data: DELETE FROM intermediate_result_pruning.table_2 WHERE (((value)::integer OPERATOR(pg_catalog.>=) (SELECT min(table_1.key) AS min FROM intermediate_result_pruning.table_1 WHERE ((table_1.key)::double precision OPERATOR(pg_catalog.>) random()))) AND (key OPERATOR(pg_catalog.=) 6)) RETURNING key, value
DEBUG: generating subplan XXX_1 for subquery SELECT min(key) AS min FROM intermediate_result_pruning.table_1 WHERE ((key)::double precision OPERATOR(pg_catalog.>) random())
DEBUG: Plan XXX query after replacing subqueries and CTEs: DELETE FROM intermediate_result_pruning.table_2 WHERE (((value)::integer OPERATOR(pg_catalog.>=) (SELECT intermediate_result.min FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(min integer))) AND (key OPERATOR(pg_catalog.=) 6)) RETURNING key, value
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT key, value FROM (SELECT intermediate_result.key, intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text)) raw_data
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
key | value
-----+-------
---------------------------------------------------------------------
6 | 6
(1 row)
@ -756,9 +756,9 @@ INSERT INTO table_1
SELECT * FROM table_2 where value IN (SELECT value FROM table_1 WHERE random() > 1) AND key = 1;
DEBUG: volatile functions are not allowed in distributed INSERT ... SELECT queries
DEBUG: Collecting INSERT ... SELECT results on coordinator
DEBUG: generating subplan 104_1 for subquery SELECT value FROM intermediate_result_pruning.table_1 WHERE (random() OPERATOR(pg_catalog.>) (1)::double precision)
DEBUG: Plan 104 query after replacing subqueries and CTEs: SELECT key, value FROM intermediate_result_pruning.table_2 WHERE ((value OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.value FROM read_intermediate_result('104_1'::text, 'binary'::citus_copy_format) intermediate_result(value text))) AND (key OPERATOR(pg_catalog.=) 1))
DEBUG: Subplan 104_1 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for subquery SELECT value FROM intermediate_result_pruning.table_1 WHERE (random() OPERATOR(pg_catalog.>) (1)::double precision)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT key, value FROM intermediate_result_pruning.table_2 WHERE ((value OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.value FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(value text))) AND (key OPERATOR(pg_catalog.=) 1))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
-- a similar query, with more complex subquery
INSERT INTO table_1
SELECT * FROM table_2 where key = 1 AND
@ -780,18 +780,18 @@ INSERT INTO table_1
SELECT * FROM cte_2);
DEBUG: Set operations are not allowed in distributed INSERT ... SELECT queries
DEBUG: Collecting INSERT ... SELECT results on coordinator
DEBUG: generating subplan 107_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan 108_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan 108_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan 108 query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('108_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('108_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan 107_2 for CTE cte_2: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 3) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 4)
DEBUG: generating subplan 107_3 for subquery SELECT cte_1.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('107_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 UNION SELECT cte_2.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('107_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_2
DEBUG: Plan 107 query after replacing subqueries and CTEs: SELECT key, value FROM intermediate_result_pruning.table_2 WHERE ((key OPERATOR(pg_catalog.=) 1) AND ((value)::integer OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.key FROM read_intermediate_result('107_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer))))
DEBUG: Subplan 107_1 will be sent to localhost:57637
DEBUG: Subplan 108_1 will be sent to localhost:57638
DEBUG: Subplan 108_2 will be sent to localhost:57638
DEBUG: Subplan 107_2 will be sent to localhost:57637
DEBUG: Subplan 107_3 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan XXX_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan XXX_2 for CTE cte_2: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 3) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 4)
DEBUG: generating subplan XXX_3 for subquery SELECT cte_1.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 UNION SELECT cte_2.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT key, value FROM intermediate_result_pruning.table_2 WHERE ((key OPERATOR(pg_catalog.=) 1) AND ((value)::integer OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.key FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer))))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
-- same query, cte is on the FROM clause
-- and this time the final query (and top-level intermediate result)
-- hits all the shards because table_2.key != 1
@ -817,19 +817,19 @@ INSERT INTO table_1
foo.key = table_2.value::int;
DEBUG: Set operations are not allowed in distributed INSERT ... SELECT queries
DEBUG: Collecting INSERT ... SELECT results on coordinator
DEBUG: generating subplan 114_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan 115_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan 115_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan 115 query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('115_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('115_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan 114_2 for CTE cte_2: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 3) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 4)
DEBUG: generating subplan 114_3 for subquery SELECT cte_1.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('114_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 UNION SELECT cte_2.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('114_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_2
DEBUG: Plan 114 query after replacing subqueries and CTEs: SELECT table_2.key, table_2.value FROM intermediate_result_pruning.table_2, (SELECT intermediate_result.key FROM read_intermediate_result('114_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) foo WHERE ((table_2.key OPERATOR(pg_catalog.<>) 1) AND (foo.key OPERATOR(pg_catalog.=) (table_2.value)::integer))
DEBUG: Subplan 114_1 will be sent to localhost:57637
DEBUG: Subplan 115_1 will be sent to localhost:57638
DEBUG: Subplan 115_2 will be sent to localhost:57638
DEBUG: Subplan 114_2 will be sent to localhost:57637
DEBUG: Subplan 114_3 will be sent to localhost:57637
DEBUG: Subplan 114_3 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE cte_1: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 1) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 2)
DEBUG: generating subplan XXX_1 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 1)
DEBUG: generating subplan XXX_2 for subquery SELECT key FROM intermediate_result_pruning.table_1 WHERE (key OPERATOR(pg_catalog.=) 2)
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer) INTERSECT SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)
DEBUG: generating subplan XXX_2 for CTE cte_2: SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 3) INTERSECT SELECT table_1.key FROM intermediate_result_pruning.table_1 WHERE (table_1.key OPERATOR(pg_catalog.=) 4)
DEBUG: generating subplan XXX_3 for subquery SELECT cte_1.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_1 UNION SELECT cte_2.key FROM (SELECT intermediate_result.key FROM read_intermediate_result('XXX_2'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) cte_2
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT table_2.key, table_2.value FROM intermediate_result_pruning.table_2, (SELECT intermediate_result.key FROM read_intermediate_result('XXX_3'::text, 'binary'::citus_copy_format) intermediate_result(key integer)) foo WHERE ((table_2.key OPERATOR(pg_catalog.<>) 1) AND (foo.key OPERATOR(pg_catalog.=) (table_2.value)::integer))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_2 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_3 will be sent to localhost:xxxxx
-- append partitioned/heap-type
SET citus.replication_model TO statement;
-- do not print out 'building index pg_toast_xxxxx_index' messages
@ -838,37 +838,37 @@ CREATE TABLE range_partitioned(range_column text, data int);
SET client_min_messages TO DEBUG1;
SELECT create_distributed_table('range_partitioned', 'range_column', 'range');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_empty_shard('range_partitioned');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1480013
(1 row)
SELECT master_create_empty_shard('range_partitioned');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1480014
(1 row)
SELECT master_create_empty_shard('range_partitioned');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1480015
(1 row)
SELECT master_create_empty_shard('range_partitioned');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1480016
(1 row)
SELECT master_create_empty_shard('range_partitioned');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1480017
(1 row)
@ -885,11 +885,11 @@ FROM
WHERE
range_column = 'A' AND
data IN (SELECT data FROM range_partitioned);
DEBUG: generating subplan 120_1 for subquery SELECT data FROM intermediate_result_pruning.range_partitioned
DEBUG: Plan 120 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM intermediate_result_pruning.range_partitioned WHERE ((range_column OPERATOR(pg_catalog.=) 'A'::text) AND (data OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.data FROM read_intermediate_result('120_1'::text, 'binary'::citus_copy_format) intermediate_result(data integer))))
DEBUG: Subplan 120_1 will be sent to localhost:57637
DEBUG: generating subplan XXX_1 for subquery SELECT data FROM intermediate_result_pruning.range_partitioned
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM intermediate_result_pruning.range_partitioned WHERE ((range_column OPERATOR(pg_catalog.=) 'A'::text) AND (data OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.data FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(data integer))))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -901,12 +901,12 @@ FROM
WHERE
range_column >= 'A' AND range_column <= 'K' AND
data IN (SELECT data FROM range_partitioned);
DEBUG: generating subplan 122_1 for subquery SELECT data FROM intermediate_result_pruning.range_partitioned
DEBUG: Plan 122 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM intermediate_result_pruning.range_partitioned WHERE ((range_column OPERATOR(pg_catalog.>=) 'A'::text) AND (range_column OPERATOR(pg_catalog.<=) 'K'::text) AND (data OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.data FROM read_intermediate_result('122_1'::text, 'binary'::citus_copy_format) intermediate_result(data integer))))
DEBUG: Subplan 122_1 will be sent to localhost:57637
DEBUG: Subplan 122_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for subquery SELECT data FROM intermediate_result_pruning.range_partitioned
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM intermediate_result_pruning.range_partitioned WHERE ((range_column OPERATOR(pg_catalog.>=) 'A'::text) AND (range_column OPERATOR(pg_catalog.<=) 'K'::text) AND (data OPERATOR(pg_catalog.=) ANY (SELECT intermediate_result.data FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(data integer))))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -921,12 +921,12 @@ FROM
WHERE
range_column IN ('A', 'E') AND
range_partitioned.data IN (SELECT data FROM some_data);
DEBUG: generating subplan 124_1 for CTE some_data: SELECT data FROM intermediate_result_pruning.range_partitioned
DEBUG: Plan 124 query after replacing subqueries and CTEs: SELECT count(*) AS count FROM intermediate_result_pruning.range_partitioned WHERE ((range_column OPERATOR(pg_catalog.=) ANY (ARRAY['A'::text, 'E'::text])) AND (data OPERATOR(pg_catalog.=) ANY (SELECT some_data.data FROM (SELECT intermediate_result.data FROM read_intermediate_result('124_1'::text, 'binary'::citus_copy_format) intermediate_result(data integer)) some_data)))
DEBUG: Subplan 124_1 will be sent to localhost:57637
DEBUG: Subplan 124_1 will be sent to localhost:57638
DEBUG: generating subplan XXX_1 for CTE some_data: SELECT data FROM intermediate_result_pruning.range_partitioned
DEBUG: Plan XXX query after replacing subqueries and CTEs: SELECT count(*) AS count FROM intermediate_result_pruning.range_partitioned WHERE ((range_column OPERATOR(pg_catalog.=) ANY (ARRAY['A'::text, 'E'::text])) AND (data OPERATOR(pg_catalog.=) ANY (SELECT some_data.data FROM (SELECT intermediate_result.data FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(data integer)) some_data)))
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
DEBUG: Subplan XXX_1 will be sent to localhost:xxxxx
count
-------
---------------------------------------------------------------------
0
(1 row)
View File
@ -10,13 +10,13 @@ CREATE OR REPLACE FUNCTION pg_catalog.store_intermediate_result_on_node(nodename
BEGIN;
SELECT create_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,5) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
5
(1 row)
SELECT * FROM read_intermediate_result('squares', 'binary') AS res (x int, x2 int);
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -28,7 +28,7 @@ COMMIT;
-- in separate transactions, the result is no longer available
SELECT create_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,5) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
5
(1 row)
@ -38,7 +38,7 @@ BEGIN;
CREATE TABLE interesting_squares (user_id text, interested_in text);
SELECT create_distributed_table('interesting_squares', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -46,7 +46,7 @@ INSERT INTO interesting_squares VALUES ('jon', '2'), ('jon', '5'), ('jack', '3')
-- put an intermediate result on all workers
SELECT broadcast_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,5) s');
broadcast_intermediate_result
-------------------------------
---------------------------------------------------------------------
5
(1 row)
@ -56,7 +56,7 @@ FROM interesting_squares JOIN (SELECT * FROM read_intermediate_result('squares',
WHERE user_id = 'jon'
ORDER BY x;
x | x2
---+----
---------------------------------------------------------------------
2 | 4
5 | 25
(2 rows)
@ -66,7 +66,7 @@ BEGIN;
-- put an intermediate result on all workers
SELECT broadcast_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,5) s');
broadcast_intermediate_result
-------------------------------
---------------------------------------------------------------------
5
(1 row)
@ -76,7 +76,7 @@ FROM interesting_squares
JOIN (SELECT * FROM read_intermediate_result('squares', 'binary') AS res (x int, x2 int)) squares ON (x::text = interested_in)
ORDER BY x;
x | x2
---+----
---------------------------------------------------------------------
2 | 4
3 | 9
5 | 25
@ -111,7 +111,7 @@ SET client_min_messages TO DEFAULT;
BEGIN;
SELECT create_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,5) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
5
(1 row)
@ -122,12 +122,12 @@ END;
BEGIN;
SELECT create_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,5) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
5
(1 row)
SELECT * FROM read_intermediate_result('squares', 'csv') AS res (x int, x2 int);
ERROR: invalid input syntax for type integer: "PGCOPY"
ERROR: invalid input syntax for integer: "PGCOPY"
END;
-- try a composite type
CREATE TYPE intermediate_results.square_type AS (x text, x2 int);
@ -140,7 +140,7 @@ INSERT INTO stored_squares VALUES ('jon', '(5,25)'::intermediate_results.square_
BEGIN;
SELECT create_intermediate_result('stored_squares', 'SELECT square FROM stored_squares');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
4
(1 row)
@ -150,13 +150,13 @@ COMMIT;
BEGIN;
SELECT create_intermediate_result('stored_squares', 'SELECT square FROM stored_squares');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
4
(1 row)
SELECT * FROM read_intermediate_result('stored_squares', 'text') AS res (s intermediate_results.square_type);
s
--------
---------------------------------------------------------------------
(2,4)
(3,9)
(4,16)
@ -168,7 +168,7 @@ BEGIN;
-- put an intermediate result in text format on all workers
SELECT broadcast_intermediate_result('stored_squares', 'SELECT square, metadata FROM stored_squares');
broadcast_intermediate_result
-------------------------------
---------------------------------------------------------------------
4
(1 row)
@ -179,7 +179,7 @@ SELECT * FROM interesting_squares JOIN (
) squares
ON ((s).x = interested_in) WHERE user_id = 'jon' ORDER BY 1,2;
user_id | interested_in | s | m
---------+---------------+--------+--------------
---------------------------------------------------------------------
jon | 2 | (2,4) | {"value": 2}
jon | 5 | (5,25) | {"value": 5}
(2 rows)
@ -191,7 +191,7 @@ SELECT * FROM interesting_squares JOIN (
) squares
ON ((s).x = interested_in) ORDER BY 1,2;
user_id | interested_in | s | m
---------+---------------+--------+--------------
---------------------------------------------------------------------
jack | 3 | (3,9) | {"value": 3}
jon | 2 | (2,4) | {"value": 2}
jon | 5 | (5,25) | {"value": 5}
@ -202,39 +202,39 @@ BEGIN;
-- accurate row count estimates for primitive types
SELECT create_intermediate_result('squares', 'SELECT s, s*s FROM generate_series(1,632) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
632
(1 row)
EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_result('squares', 'binary') AS res (x int, x2 int);
QUERY PLAN
-----------------------------------------------------------------------------------
---------------------------------------------------------------------
Function Scan on read_intermediate_result res (cost=0.00..4.55 rows=632 width=8)
(1 row)
-- less accurate results for variable types
SELECT create_intermediate_result('hellos', $$SELECT s, 'hello-'||s FROM generate_series(1,63) s$$);
create_intermediate_result
----------------------------
---------------------------------------------------------------------
63
(1 row)
EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_result('hellos', 'binary') AS res (x int, y text);
QUERY PLAN
-----------------------------------------------------------------------------------
---------------------------------------------------------------------
Function Scan on read_intermediate_result res (cost=0.00..0.32 rows=30 width=36)
(1 row)
-- not very accurate results for text encoding
SELECT create_intermediate_result('stored_squares', 'SELECT square FROM stored_squares');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
4
(1 row)
EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_result('stored_squares', 'text') AS res (s intermediate_results.square_type);
QUERY PLAN
----------------------------------------------------------------------------------
---------------------------------------------------------------------
Function Scan on read_intermediate_result res (cost=0.00..0.01 rows=1 width=32)
(1 row)
@ -246,7 +246,7 @@ TO PROGRAM
WITH (FORMAT text);
SELECT * FROM squares ORDER BY x;
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -272,19 +272,19 @@ SELECT create_intermediate_result('squares_1', 'SELECT s, s*s FROM generate_seri
create_intermediate_result('squares_2', 'SELECT s, s*s FROM generate_series(4,6) s'),
create_intermediate_result('squares_3', 'SELECT s, s*s FROM generate_series(7,10) s');
create_intermediate_result | create_intermediate_result | create_intermediate_result
----------------------------+----------------------------+----------------------------
---------------------------------------------------------------------
3 | 3 | 4
(1 row)
SELECT count(*) FROM read_intermediate_results(ARRAY[]::text[], 'binary') AS res (x int, x2 int);
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT * FROM read_intermediate_results(ARRAY['squares_1']::text[], 'binary') AS res (x int, x2 int);
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -292,7 +292,7 @@ SELECT * FROM read_intermediate_results(ARRAY['squares_1']::text[], 'binary') AS
SELECT * FROM read_intermediate_results(ARRAY['squares_1', 'squares_2', 'squares_3']::text[], 'binary') AS res (x int, x2 int);
x | x2
----+-----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -309,7 +309,7 @@ COMMIT;
-- in separate transactions, the result is no longer available
SELECT create_intermediate_result('squares_1', 'SELECT s, s*s FROM generate_series(1,5) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
5
(1 row)
@ -319,7 +319,7 @@ ERROR: result "squares_1" does not exist
BEGIN;
SELECT create_intermediate_result('squares_1', 'SELECT s, s*s FROM generate_series(1,3) s');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
3
(1 row)
@ -336,13 +336,13 @@ ROLLBACK TO SAVEPOINT s1;
-- after rollbacks we should be able to run valid read_intermediate_results still.
SELECT count(*) FROM read_intermediate_results(ARRAY['squares_1']::text[], 'binary') AS res (x int, x2 int);
count
-------
---------------------------------------------------------------------
3
(1 row)
SELECT count(*) FROM read_intermediate_results(ARRAY[]::text[], 'binary') AS res (x int, x2 int);
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -357,7 +357,7 @@ SELECT broadcast_intermediate_result('stored_squares_1',
broadcast_intermediate_result('stored_squares_2',
'SELECT s, s*s, ROW(2::text, 3) FROM generate_series(4,6) s');
broadcast_intermediate_result | broadcast_intermediate_result
-------------------------------+-------------------------------
---------------------------------------------------------------------
3 | 3
(1 row)
@ -368,7 +368,7 @@ SELECT * FROM interesting_squares JOIN (
) squares
ON (squares.x::text = interested_in) WHERE user_id = 'jon' ORDER BY 1,2;
user_id | interested_in | x | x2 | z
---------+---------------+---+----+-------
---------------------------------------------------------------------
jon | 2 | 2 | 4 | (1,2)
jon | 5 | 5 | 25 | (2,3)
(2 rows)
@ -380,13 +380,13 @@ BEGIN;
SELECT create_intermediate_result('squares_1', 'SELECT s, s*s FROM generate_series(1,632) s'),
create_intermediate_result('squares_2', 'SELECT s, s*s FROM generate_series(633,1024) s');
create_intermediate_result | create_intermediate_result
----------------------------+----------------------------
---------------------------------------------------------------------
632 | 392
(1 row)
EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_results(ARRAY['squares_1', 'squares_2'], 'binary') AS res (x int, x2 int);
QUERY PLAN
-------------------------------------------------------------------------------------
---------------------------------------------------------------------
Function Scan on read_intermediate_results res (cost=0.00..7.37 rows=1024 width=8)
(1 row)
@ -394,26 +394,26 @@ EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_results(ARRAY['squares_1', 's
SELECT create_intermediate_result('hellos_1', $$SELECT s, 'hello-'||s FROM generate_series(1,63) s$$),
create_intermediate_result('hellos_2', $$SELECT s, 'hello-'||s FROM generate_series(64,129) s$$);
create_intermediate_result | create_intermediate_result
----------------------------+----------------------------
---------------------------------------------------------------------
63 | 66
(1 row)
EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_results(ARRAY['hellos_1', 'hellos_2'], 'binary') AS res (x int, y text);
QUERY PLAN
------------------------------------------------------------------------------------
---------------------------------------------------------------------
Function Scan on read_intermediate_results res (cost=0.00..0.66 rows=62 width=36)
(1 row)
-- not very accurate results for text encoding
SELECT create_intermediate_result('stored_squares', 'SELECT square FROM stored_squares');
create_intermediate_result
----------------------------
---------------------------------------------------------------------
4
(1 row)
EXPLAIN (COSTS ON) SELECT * FROM read_intermediate_results(ARRAY['stored_squares'], 'text') AS res (s intermediate_results.square_type);
QUERY PLAN
-----------------------------------------------------------------------------------
---------------------------------------------------------------------
Function Scan on read_intermediate_results res (cost=0.00..0.01 rows=1 width=32)
(1 row)
@ -425,19 +425,19 @@ END;
BEGIN;
SELECT broadcast_intermediate_result('squares_1', 'SELECT s, s*s FROM generate_series(1, 5) s');
broadcast_intermediate_result
-------------------------------
---------------------------------------------------------------------
5
(1 row)
SELECT * FROM fetch_intermediate_results(ARRAY['squares_1']::text[], 'localhost', :worker_2_port);
fetch_intermediate_results
----------------------------
---------------------------------------------------------------------
111
(1 row)
SELECT * FROM read_intermediate_result('squares_1', 'binary') AS res (x int, x2 int);
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -447,13 +447,13 @@ SELECT * FROM read_intermediate_result('squares_1', 'binary') AS res (x int, x2
SELECT * FROM fetch_intermediate_results(ARRAY['squares_1']::text[], 'localhost', :worker_1_port);
fetch_intermediate_results
----------------------------
---------------------------------------------------------------------
111
(1 row)
SELECT * FROM read_intermediate_result('squares_1', 'binary') AS res (x int, x2 int);
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -467,14 +467,14 @@ BEGIN;
SELECT store_intermediate_result_on_node('localhost', :worker_1_port,
'squares_1', 'SELECT s, s*s FROM generate_series(1, 2) s');
store_intermediate_result_on_node
-----------------------------------
---------------------------------------------------------------------
(1 row)
SELECT store_intermediate_result_on_node('localhost', :worker_1_port,
'squares_2', 'SELECT s, s*s FROM generate_series(3, 4) s');
store_intermediate_result_on_node
-----------------------------------
---------------------------------------------------------------------
(1 row)
@ -485,8 +485,8 @@ ERROR: result "squares_1" does not exist
ROLLBACK TO SAVEPOINT s1;
-- fetch from worker 2 should fail
SELECT * FROM fetch_intermediate_results(ARRAY['squares_1', 'squares_2']::text[], 'localhost', :worker_2_port);
ERROR: could not open file "base/pgsql_job_cache/10_0_200/squares_1.data": No such file or directory
CONTEXT: while executing command on localhost:57638
ERROR: could not open file "base/pgsql_job_cache/xx_x_xxx/squares_1.data": No such file or directory
CONTEXT: while executing command on localhost:xxxxx
ROLLBACK TO SAVEPOINT s1;
-- still, results aren't available on coordinator yet
SELECT * FROM read_intermediate_results(ARRAY['squares_1', 'squares_2']::text[], 'binary') AS res (x int, x2 int);
@ -495,13 +495,13 @@ ROLLBACK TO SAVEPOINT s1;
-- fetch from worker 1 should succeed
SELECT * FROM fetch_intermediate_results(ARRAY['squares_1', 'squares_2']::text[], 'localhost', :worker_1_port);
fetch_intermediate_results
----------------------------
---------------------------------------------------------------------
114
(1 row)
SELECT * FROM read_intermediate_results(ARRAY['squares_1', 'squares_2']::text[], 'binary') AS res (x int, x2 int);
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -511,13 +511,13 @@ SELECT * FROM read_intermediate_results(ARRAY['squares_1', 'squares_2']::text[],
-- fetching again should succeed
SELECT * FROM fetch_intermediate_results(ARRAY['squares_1', 'squares_2']::text[], 'localhost', :worker_1_port);
fetch_intermediate_results
----------------------------
---------------------------------------------------------------------
114
(1 row)
SELECT * FROM read_intermediate_results(ARRAY['squares_1', 'squares_2']::text[], 'binary') AS res (x int, x2 int);
x | x2
---+----
---------------------------------------------------------------------
1 | 1
2 | 4
3 | 9
@ -528,7 +528,7 @@ ROLLBACK TO SAVEPOINT s1;
-- empty result id list should succeed
SELECT * FROM fetch_intermediate_results(ARRAY[]::text[], 'localhost', :worker_1_port);
fetch_intermediate_results
----------------------------
---------------------------------------------------------------------
0
(1 row)
View File
@ -234,7 +234,7 @@ step s1-commit:
COMMIT;
step s2-remove-node-1: <... completed>
error in steps s1-commit s2-remove-node-1: ERROR: node at "localhost:57637" does not exist
error in steps s1-commit s2-remove-node-1: ERROR: node at "localhost:xxxxx" does not exist
step s1-show-nodes:
SELECT nodename, nodeport, isactive FROM pg_dist_node ORDER BY nodename, nodeport;
View File
@ -411,7 +411,7 @@ step s2-commit:
COMMIT;
step s1-insert-table-2: <... completed>
error in steps s2-commit s1-insert-table-2: ERROR: insert or update on table "ref_table_2_102048" violates foreign key constraint "ref_table_2_value_fkey_102048"
error in steps s2-commit s1-insert-table-2: ERROR: insert or update on table "ref_table_2_xxxxxxx" violates foreign key constraint "ref_table_2_value_fkey_xxxxxxx"
step s1-commit:
COMMIT;
@ -498,7 +498,7 @@ step s2-commit:
COMMIT;
step s1-insert-table-3: <... completed>
error in steps s2-commit s1-insert-table-3: ERROR: insert or update on table "ref_table_3_102058" violates foreign key constraint "ref_table_3_value_fkey_102058"
error in steps s2-commit s1-insert-table-3: ERROR: insert or update on table "ref_table_3_xxxxxxx" violates foreign key constraint "ref_table_3_value_fkey_xxxxxxx"
step s1-commit:
COMMIT;
View File
@ -159,7 +159,7 @@ step s2-commit:
COMMIT;
step s1-noshards: <... completed>
error in steps s2-commit s1-noshards: ERROR: node at "localhost:57637" does not exist
error in steps s2-commit s1-noshards: ERROR: node at "localhost:xxxxx" does not exist
step s1-commit:
COMMIT;
View File
@ -59,7 +59,7 @@ UNION
(select count(*) as c from cte5)
) as foo;
sum
-----
---------------------------------------------------------------------
91
(1 row)
@ -179,7 +179,7 @@ cte4 AS (
SELECT * FROM cte UNION ALL
SELECT * FROM cte4 ORDER BY 1,2,3,4,5 LIMIT 5;
user_id | time | value_1 | value_2 | value_3 | value_4
---------+---------------------------------+---------+---------+---------+---------
---------------------------------------------------------------------
1 | Wed Nov 22 18:49:42.327403 2017 | 3 | 2 | 1 |
1 | Wed Nov 22 19:03:01.772353 2017 | 4 | 1 | 2 |
1 | Wed Nov 22 19:07:03.846437 2017 | 1 | 2 | 5 |
@ -210,7 +210,7 @@ ORDER BY
1,2
LIMIT 10;
user_id | value_2
---------+---------
---------------------------------------------------------------------
1 | 0
1 | 0
1 | 0
@ -233,7 +233,7 @@ cte2 AS (
)
SELECT cte.user_id, cte.value_2 FROM cte,cte2 ORDER BY 1,2 LIMIT 10;
user_id | value_2
---------+---------
---------------------------------------------------------------------
1 | 0
1 | 0
1 | 0
@ -253,7 +253,7 @@ WITH cte AS
)
SELECT * FROM cte ORDER BY 1,2,3,4,5 LIMIT 10;
user_id | time | value_1 | value_2 | value_3 | value_4
---------+---------------------------------+---------+---------+---------+---------
---------------------------------------------------------------------
1 | Wed Nov 22 22:51:43.132261 2017 | 4 | 0 | 3 |
1 | Thu Nov 23 03:32:50.803031 2017 | 3 | 2 | 1 |
1 | Thu Nov 23 09:26:42.145043 2017 | 1 | 3 | 3 |
@ -283,7 +283,7 @@ WITH cte AS (
)
SELECT * FROM cte ORDER BY 1,2,3,4,5 LIMIT 10;
user_id | time | event_type | value_2 | value_3
---------+---------------------------------+------------+---------+---------
---------------------------------------------------------------------
1 | Wed Nov 22 22:51:43.132261 2017 | 0 | 2 | 0
1 | Wed Nov 22 22:51:43.132261 2017 | 0 | 5 | 1
1 | Wed Nov 22 22:51:43.132261 2017 | 1 | 1 | 1
View File
@ -7,21 +7,21 @@ SET citus.next_shard_id TO 1470000;
CREATE TABLE reference_table (key int PRIMARY KEY);
SELECT create_reference_table('reference_table');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE distributed_table (key int PRIMARY KEY , value text, age bigint CHECK (age > 10), FOREIGN KEY (key) REFERENCES reference_table(key) ON DELETE CASCADE);
SELECT create_distributed_table('distributed_table','key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE second_distributed_table (key int PRIMARY KEY , value text, FOREIGN KEY (key) REFERENCES distributed_table(key) ON DELETE CASCADE);
SELECT create_distributed_table('second_distributed_table','key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -40,7 +40,7 @@ CREATE TABLE collections_list (
) PARTITION BY LIST (collection_id );
SELECT create_distributed_table('collections_list', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -80,38 +80,38 @@ $$ LANGUAGE plpgsql;
-- we'll use these values in the tests
SELECT shard_of_distribution_column_is_local(1);
shard_of_distribution_column_is_local
---------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT shard_of_distribution_column_is_local(6);
shard_of_distribution_column_is_local
---------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT shard_of_distribution_column_is_local(500);
shard_of_distribution_column_is_local
---------------------------------------
---------------------------------------------------------------------
t
(1 row)
SELECT shard_of_distribution_column_is_local(701);
shard_of_distribution_column_is_local
---------------------------------------
---------------------------------------------------------------------
t
(1 row)
-- distribution key values of 11 and 12 are REMOTE to shards
SELECT shard_of_distribution_column_is_local(11);
shard_of_distribution_column_is_local
---------------------------------------
---------------------------------------------------------------------
f
(1 row)
SELECT shard_of_distribution_column_is_local(12);
shard_of_distribution_column_is_local
---------------------------------------
---------------------------------------------------------------------
f
(1 row)
@ -123,7 +123,7 @@ SET citus.log_local_commands TO ON;
SELECT count(*) FROM distributed_table WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -132,20 +132,20 @@ LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_e
-- favors parallel execution even if everything is local to node
SELECT count(*) FROM distributed_table WHERE key IN (1,6);
count
-------
---------------------------------------------------------------------
1
(1 row)
-- queries that hit any remote shards should NOT use local execution
SELECT count(*) FROM distributed_table WHERE key IN (1,11);
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM distributed_table;
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -180,7 +180,7 @@ ON CONFLICT(key) DO UPDATE SET value = '22'
RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) SELECT distributed_table.key, distributed_table.value, distributed_table.age FROM local_shard_execution.distributed_table_1470001 distributed_table, local_shard_execution.second_distributed_table_1470005 second_distributed_table WHERE (((distributed_table.key OPERATOR(pg_catalog.=) 1) AND (distributed_table.key OPERATOR(pg_catalog.=) second_distributed_table.key)) AND ((worker_hash(distributed_table.key) OPERATOR(pg_catalog.>=) '-2147483648'::integer) AND (worker_hash(distributed_table.key) OPERATOR(pg_catalog.<=) '-1073741825'::integer))) ON CONFLICT(key) DO UPDATE SET value = '22'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 22 | 20
(1 row)
@ -195,7 +195,7 @@ WHERE
ON CONFLICT(key) DO UPDATE SET value = '22'
RETURNING *;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
(0 rows)
-- INSERT..SELECT via coordinator consists of two steps, select + COPY
@ -209,12 +209,12 @@ INSERT INTO distributed_table SELECT * FROM distributed_table ON CONFLICT DO NOT
-- though going through distributed execution
EXPLAIN (COSTS OFF) SELECT * FROM distributed_table WHERE key = 1 AND age = 20;
QUERY PLAN
------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Index Scan using distributed_table_pkey_1470001 on distributed_table_1470001 distributed_table
Index Cond: (key = 1)
Filter: (age = 20)
@ -222,12 +222,12 @@ EXPLAIN (COSTS OFF) SELECT * FROM distributed_table WHERE key = 1 AND age = 20;
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) SELECT * FROM distributed_table WHERE key = 1 AND age = 20;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive) (actual rows=1 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Index Scan using distributed_table_pkey_1470001 on distributed_table_1470001 distributed_table (actual rows=1 loops=1)
Index Cond: (key = 1)
Filter: (age = 20)
@ -235,12 +235,12 @@ EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) SELECT * FROM distribute
EXPLAIN (COSTS OFF) DELETE FROM distributed_table WHERE key = 1 AND age = 20;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Delete on distributed_table_1470001 distributed_table
-> Index Scan using distributed_table_pkey_1470001 on distributed_table_1470001 distributed_table
Index Cond: (key = 1)
@ -249,12 +249,12 @@ EXPLAIN (COSTS OFF) DELETE FROM distributed_table WHERE key = 1 AND age = 20;
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) DELETE FROM distributed_table WHERE key = 1 AND age = 20;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
Custom Scan (Citus Adaptive) (actual rows=0 loops=1)
Task Count: 1
Tasks Shown: All
-> Task
Node: host=localhost port=57637 dbname=regression
Node: host=localhost port=xxxxx dbname=regression
-> Delete on distributed_table_1470001 distributed_table (actual rows=0 loops=1)
-> Index Scan using distributed_table_pkey_1470001 on distributed_table_1470001 distributed_table (actual rows=0 loops=1)
Index Cond: (key = 1)
@ -265,13 +265,13 @@ EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) DELETE FROM distributed_ta
SELECT * FROM distributed_table WHERE key = 1 AND age = 20 ORDER BY 1,2,3;
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE ((key OPERATOR(pg_catalog.=) 1) AND (age OPERATOR(pg_catalog.=) 20)) ORDER BY key, value, age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
(0 rows)
SELECT * FROM second_distributed_table WHERE key = 1 ORDER BY 1,2;
LOG: executing the command locally: SELECT key, value FROM local_shard_execution.second_distributed_table_1470005 second_distributed_table WHERE (key OPERATOR(pg_catalog.=) 1) ORDER BY key, value
key | value
-----+-------
---------------------------------------------------------------------
(0 rows)
-- Put rows back for other tests
@ -295,14 +295,14 @@ BEGIN;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '29' RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '29'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 29 | 20
(1 row)
SELECT * FROM distributed_table WHERE key = 1 ORDER BY 1,2,3;
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1) ORDER BY key, value, age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 29 | 20
(1 row)
@ -311,7 +311,7 @@ ROLLBACK;
SELECT * FROM distributed_table WHERE key = 1 ORDER BY 1,2,3;
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1) ORDER BY key, value, age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 22 | 20
(1 row)
@ -320,7 +320,7 @@ BEGIN;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '29' RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '29'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 29 | 20
(1 row)
@ -332,7 +332,7 @@ LOG: executing the command locally: DELETE FROM local_shard_execution.distribut
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.second_distributed_table_1470005 second_distributed_table WHERE true
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.second_distributed_table_1470007 second_distributed_table WHERE true
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -341,19 +341,19 @@ ROLLBACK;
SELECT * FROM distributed_table WHERE key = 1 ORDER BY 1,2,3;
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1) ORDER BY key, value, age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 22 | 20
(1 row)
SELECT count(*) FROM second_distributed_table;
count
-------
---------------------------------------------------------------------
2
(1 row)
SELECT * FROM second_distributed_table;
key | value
-----+-------
---------------------------------------------------------------------
1 | 1
6 | '6'
(2 rows)
@ -365,7 +365,7 @@ BEGIN;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '23' RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '23'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 23 | 20
(1 row)
@ -374,7 +374,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.distribut
SELECT * FROM distributed_table WHERE key = 1 ORDER BY 1,2,3;
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1) ORDER BY key, value, age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 23 | 20
(1 row)
@ -384,7 +384,7 @@ LOG: executing the command locally: SELECT key, value, age FROM local_shard_exe
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (value OPERATOR(pg_catalog.=) '23'::text)
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (value OPERATOR(pg_catalog.=) '23'::text)
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 23 | 20
(1 row)
@ -398,7 +398,7 @@ LOG: executing the command locally: DELETE FROM local_shard_execution.distribut
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (value OPERATOR(pg_catalog.=) '23'::text)
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (value OPERATOR(pg_catalog.=) '23'::text)
key | value | age
-----+-------+-----
---------------------------------------------------------------------
(0 rows)
COMMIT;
@ -406,7 +406,7 @@ COMMIT;
SELECT * FROM distributed_table WHERE key = 1 ORDER BY 1,2,3;
LOG: executing the command locally: SELECT key, value, age FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1) ORDER BY key, value, age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
(0 rows)
-- if we start with a distributed execution, we should keep
@ -417,7 +417,7 @@ BEGIN;
-- locally, it is not going to be executed locally
SELECT * FROM distributed_table WHERE key = 1 ORDER BY 1,2,3;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
(0 rows)
-- but we can still execute parallel queries, even if
@ -427,7 +427,7 @@ NOTICE: truncate cascades to table "second_distributed_table"
-- TRUNCATE cascaded into second_distributed_table
SELECT count(*) FROM second_distributed_table;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -442,7 +442,7 @@ BEGIN;
-- done distributed execution
SELECT * FROM distributed_table WHERE key = 500 ORDER BY 1,2,3;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
500 | 500 | 25
(1 row)
@ -452,7 +452,7 @@ NOTICE: truncate cascades to table "second_distributed_table"
-- ensure that TRUNCATE made it
SELECT * FROM distributed_table WHERE key = 500 ORDER BY 1,2,3;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
(0 rows)
ROLLBACK;
@ -469,14 +469,14 @@ LOG: executing the command locally: DELETE FROM local_shard_execution.reference
SELECT count(*) FROM distributed_table WHERE key = 701;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 701)
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM second_distributed_table WHERE key = 701;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.second_distributed_table_1470005 second_distributed_table WHERE (key OPERATOR(pg_catalog.=) 701)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -485,7 +485,7 @@ LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_e
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.>) 700)
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.>) 700)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -499,21 +499,21 @@ BEGIN;
SELECT count(*) FROM distributed_table WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
0
(1 row)
SELECT count(*) FROM distributed_table WHERE key = 6;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.=) 6)
count
-------
---------------------------------------------------------------------
1
(1 row)
SELECT count(*) FROM distributed_table WHERE key = 500;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.=) 500)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -523,7 +523,7 @@ BEGIN;
SELECT count(*) FROM distributed_table WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -538,7 +538,7 @@ BEGIN;
SELECT count(*) FROM distributed_table WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -553,7 +553,7 @@ BEGIN;
SELECT count(*) FROM distributed_table WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -567,7 +567,7 @@ BEGIN;
SELECT count(*) FROM distributed_table WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -579,7 +579,7 @@ ROLLBACK;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '29' RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '29'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 11 | 21
(1 row)
@ -594,7 +594,7 @@ ROLLBACK;
BEGIN;
INSERT INTO distributed_table VALUES (11, '111',29) ON CONFLICT(key) DO UPDATE SET value = '29' RETURNING *;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
11 | 29 | 121
(1 row)
@ -607,7 +607,7 @@ ROLLBACK;
BEGIN;
INSERT INTO distributed_table VALUES (11, '111',29) ON CONFLICT(key) DO UPDATE SET value = '29' RETURNING *;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
11 | 29 | 121
(1 row)
@ -686,9 +686,9 @@ WITH local_insert AS (INSERT INTO distributed_table VALUES (1, '11',21) ON CONFL
distributed_local_mixed AS (SELECT * FROM reference_table WHERE key IN (SELECT key FROM local_insert))
SELECT * FROM local_insert, distributed_local_mixed;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '29'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
LOG: executing the command locally: SELECT key FROM local_shard_execution.reference_table_1470000 reference_table WHERE (key OPERATOR(pg_catalog.=) ANY (SELECT local_insert.key FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.age FROM read_intermediate_result('81_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, age bigint)) local_insert))
LOG: executing the command locally: SELECT key FROM local_shard_execution.reference_table_1470000 reference_table WHERE (key OPERATOR(pg_catalog.=) ANY (SELECT local_insert.key FROM (SELECT intermediate_result.key, intermediate_result.value, intermediate_result.age FROM read_intermediate_result('XXX_1'::text, 'binary'::citus_copy_format) intermediate_result(key integer, value text, age bigint)) local_insert))
key | value | age | key
-----+-------+-----+-----
---------------------------------------------------------------------
1 | 11 | 21 | 1
(1 row)
@ -698,7 +698,7 @@ WITH distributed_local_mixed AS (SELECT * FROM distributed_table),
local_insert AS (INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '29' RETURNING *)
SELECT * FROM local_insert, distributed_local_mixed ORDER BY 1,2,3,4,5;
key | value | age | key | value | age
-----+-------+-----+-----+-------+-----
---------------------------------------------------------------------
1 | 29 | 21 | 1 | 11 | 21
(1 row)
@ -712,7 +712,7 @@ WHERE
distributed_table.key = all_data.key AND distributed_table.key = 1;
LOG: executing the command locally: WITH all_data AS (SELECT distributed_table_1.key, distributed_table_1.value, distributed_table_1.age FROM local_shard_execution.distributed_table_1470001 distributed_table_1 WHERE (distributed_table_1.key OPERATOR(pg_catalog.=) 1)) SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table, all_data WHERE ((distributed_table.key OPERATOR(pg_catalog.=) all_data.key) AND (distributed_table.key OPERATOR(pg_catalog.=) 1))
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -731,7 +731,7 @@ WHERE
ORDER BY
1 DESC;
key
-----
---------------------------------------------------------------------
1
(1 row)
@ -749,7 +749,7 @@ WHERE
distributed_table.key = all_data.key AND distributed_table.key = 1
AND EXISTS (SELECT * FROM all_data);
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -764,7 +764,7 @@ FROM
WHERE
distributed_table.key = all_data.age AND distributed_table.key = 1;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -774,7 +774,7 @@ TRUNCATE reference_table, distributed_table, second_distributed_table;
INSERT INTO reference_table VALUES (1),(2),(3),(4),(5),(6) RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.reference_table_1470000 AS citus_table_alias (key) VALUES (1), (2), (3), (4), (5), (6) RETURNING citus_table_alias.key
key
-----
---------------------------------------------------------------------
1
2
3
@ -787,7 +787,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.reference
INSERT INTO distributed_table VALUES (1, '11',21), (5,'55',22) ON CONFLICT(key) DO UPDATE SET value = (EXCLUDED.value::int + 1)::text RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1,'11'::text,'21'::bigint), (5,'55'::text,'22'::bigint) ON CONFLICT(key) DO UPDATE SET value = (((excluded.value)::integer OPERATOR(pg_catalog.+) 1))::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 11 | 21
5 | 55 | 22
(2 rows)
@ -797,7 +797,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.distribut
-- because the command is a multi-shard query
INSERT INTO distributed_table VALUES (1, '11',21), (2,'22',22), (3,'33',33), (4,'44',44),(5,'55',55) ON CONFLICT(key) DO UPDATE SET value = (EXCLUDED.value::int + 1)::text RETURNING *;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 12 | 21
2 | 22 | 22
3 | 33 | 33
@ -813,42 +813,42 @@ BEGIN;
EXECUTE local_prepare_no_param;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_no_param;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_no_param;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_no_param;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_no_param;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_no_param;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -856,42 +856,42 @@ LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_e
EXECUTE local_prepare_param(1);
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_param(5);
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 5)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_param(6);
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.=) 6)
count
-------
---------------------------------------------------------------------
0
(1 row)
EXECUTE local_prepare_param(1);
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_param(5);
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.=) 5)
count
-------
---------------------------------------------------------------------
1
(1 row)
EXECUTE local_prepare_param(6);
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.=) 6)
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -900,7 +900,7 @@ LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_e
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.<>) 1)
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.<>) 1)
count
-------
---------------------------------------------------------------------
4
(1 row)
@ -912,42 +912,42 @@ BEGIN;
EXECUTE local_insert_prepare_no_param;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_no_param;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_no_param;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_no_param;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_no_param;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_no_param;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
@ -955,42 +955,42 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.distribut
EXECUTE local_insert_prepare_param(1);
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_param(5);
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (5, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
5 | 2928 | 22 | 6 | 292830 | 330
(1 row)
EXECUTE local_insert_prepare_param(6);
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470003 AS citus_table_alias (key, value, age) VALUES (6, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
6 | 11 | 21 | 7 | 1130 | 315
(1 row)
EXECUTE local_insert_prepare_param(1);
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
1 | 2928 | 21 | 2 | 292830 | 315
(1 row)
EXECUTE local_insert_prepare_param(5);
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (5, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
5 | 2928 | 22 | 6 | 292830 | 330
(1 row)
EXECUTE local_insert_prepare_param(6);
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470003 AS citus_table_alias (key, value, age) VALUES (6, '11'::text, '21'::bigint) ON CONFLICT(key) DO UPDATE SET value = '2928'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age, (citus_table_alias.key OPERATOR(pg_catalog.+) 1), (citus_table_alias.value OPERATOR(pg_catalog.||) '30'::text), (citus_table_alias.age OPERATOR(pg_catalog.*) 15)
key | value | age | ?column? | ?column? | ?column?
-----+-------+-----+----------+----------+----------
---------------------------------------------------------------------
6 | 2928 | 21 | 7 | 292830 | 315
(1 row)
@ -999,7 +999,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.distribut
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE (key OPERATOR(pg_catalog.<>) 2)
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (key OPERATOR(pg_catalog.<>) 2)
count
-------
---------------------------------------------------------------------
5
(1 row)
@ -1069,7 +1069,7 @@ BEGIN;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '100' RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '100'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 100 | 21
(1 row)
@ -1083,7 +1083,7 @@ ROLLBACK;
-- we've rolled back everything
SELECT count(*) FROM distributed_table WHERE value = '200';
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -1091,14 +1091,14 @@ SELECT count(*) FROM distributed_table WHERE value = '200';
INSERT INTO reference_table VALUES (500) RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.reference_table_1470000 (key) VALUES (500) RETURNING key
key
-----
---------------------------------------------------------------------
500
(1 row)
DELETE FROM reference_table WHERE key = 500 RETURNING *;
LOG: executing the command locally: DELETE FROM local_shard_execution.reference_table_1470000 reference_table WHERE (key OPERATOR(pg_catalog.=) 500) RETURNING key
key
-----
---------------------------------------------------------------------
500
(1 row)
@ -1108,7 +1108,7 @@ BEGIN;
DELETE FROM distributed_table;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '100' RETURNING *;
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 11 | 21
(1 row)
@ -1119,7 +1119,7 @@ BEGIN;
INSERT INTO distributed_table VALUES (1, '11',21) ON CONFLICT(key) DO UPDATE SET value = '100' RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.distributed_table_1470001 AS citus_table_alias (key, value, age) VALUES (1, '11'::text, 21) ON CONFLICT(key) DO UPDATE SET value = '100'::text RETURNING citus_table_alias.key, citus_table_alias.value, citus_table_alias.age
key | value | age
-----+-------+-----
---------------------------------------------------------------------
1 | 100 | 21
(1 row)
@ -1146,7 +1146,7 @@ BEGIN;
DELETE FROM reference_table WHERE key = 500 RETURNING *;
LOG: executing the command locally: DELETE FROM local_shard_execution.reference_table_1470000 reference_table WHERE (key OPERATOR(pg_catalog.=) 500) RETURNING key
key
-----
---------------------------------------------------------------------
500
(1 row)
@ -1170,7 +1170,7 @@ BEGIN;
SET LOCAL client_min_messages TO INFO;
SELECT count(*) FROM distributed_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -1184,7 +1184,7 @@ SELECT * FROM distributed_table WHERE key = 500;
SELECT * FROM v_local_query_execution;
LOG: executing the command locally: SELECT key, value, age FROM (SELECT distributed_table.key, distributed_table.value, distributed_table.age FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE (distributed_table.key OPERATOR(pg_catalog.=) 500)) v_local_query_execution
key | value | age
-----+-------+-----
---------------------------------------------------------------------
500 | 500 | 25
(1 row)
@ -1195,7 +1195,7 @@ SELECT * FROM distributed_table;
SELECT * FROM v_local_query_execution_2 WHERE key = 500;
LOG: executing the command locally: SELECT key, value, age FROM (SELECT distributed_table.key, distributed_table.value, distributed_table.age FROM local_shard_execution.distributed_table_1470003 distributed_table) v_local_query_execution_2 WHERE (key OPERATOR(pg_catalog.=) 500)
key | value | age
-----+-------+-----
---------------------------------------------------------------------
500 | 500 | 25
(1 row)
@ -1205,7 +1205,7 @@ BEGIN;
SAVEPOINT my_savepoint;
SELECT count(*) FROM distributed_table;
count
-------
---------------------------------------------------------------------
101
(1 row)
@ -1223,7 +1223,7 @@ LOG: executing the command locally: DELETE FROM local_shard_execution.distribut
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470001 distributed_table WHERE true
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.distributed_table_1470003 distributed_table WHERE true
count
-------
---------------------------------------------------------------------
100
(1 row)
@ -1235,7 +1235,7 @@ COMMIT;
INSERT INTO collections_list (collection_id) VALUES (0) RETURNING *;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470011 (key, ser, collection_id) VALUES ('3940649673949185'::bigint, '3940649673949185'::bigint, 0) RETURNING key, ser, ts, collection_id, value
key | ser | ts | collection_id | value
------------------+------------------+----+---------------+-------
---------------------------------------------------------------------
3940649673949185 | 3940649673949185 | | 0 |
(1 row)
@ -1245,7 +1245,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.collectio
SELECT count(*) FROM collections_list_0 WHERE key = 1;
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.collections_list_0_1470013 collections_list_0 WHERE (key OPERATOR(pg_catalog.=) 1)
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -1253,7 +1253,7 @@ LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_e
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.collections_list_1470009 collections_list WHERE true
LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_execution.collections_list_1470011 collections_list WHERE true
count
-------
---------------------------------------------------------------------
2
(1 row)
@ -1261,7 +1261,7 @@ LOG: executing the command locally: SELECT count(*) AS count FROM local_shard_e
LOG: executing the command locally: SELECT key, ser, ts, collection_id, value FROM local_shard_execution.collections_list_1470009 collections_list WHERE true
LOG: executing the command locally: SELECT key, ser, ts, collection_id, value FROM local_shard_execution.collections_list_1470011 collections_list WHERE true
key | ser | ts | collection_id | value
------------------+------------------+----+---------------+-------
---------------------------------------------------------------------
1 | 3940649673949186 | | 0 |
3940649673949185 | 3940649673949185 | | 0 |
(2 rows)
@ -1274,92 +1274,92 @@ ALTER SEQUENCE collections_list_key_seq NO MINVALUE NO MAXVALUE;
PREPARE serial_prepared_local AS INSERT INTO collections_list (collection_id) VALUES (0) RETURNING key, ser;
SELECT setval('collections_list_key_seq', 4);
setval
--------
---------------------------------------------------------------------
4
(1 row)
EXECUTE serial_prepared_local;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470009 (key, ser, collection_id) VALUES ('5'::bigint, '3940649673949187'::bigint, 0) RETURNING key, ser
key | ser
-----+------------------
---------------------------------------------------------------------
5 | 3940649673949187
(1 row)
SELECT setval('collections_list_key_seq', 5);
setval
--------
---------------------------------------------------------------------
5
(1 row)
EXECUTE serial_prepared_local;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470011 (key, ser, collection_id) VALUES ('6'::bigint, '3940649673949188'::bigint, 0) RETURNING key, ser
key | ser
-----+------------------
---------------------------------------------------------------------
6 | 3940649673949188
(1 row)
SELECT setval('collections_list_key_seq', 499);
setval
--------
---------------------------------------------------------------------
499
(1 row)
EXECUTE serial_prepared_local;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470011 (key, ser, collection_id) VALUES ('500'::bigint, '3940649673949189'::bigint, 0) RETURNING key, ser
key | ser
-----+------------------
---------------------------------------------------------------------
500 | 3940649673949189
(1 row)
SELECT setval('collections_list_key_seq', 700);
setval
--------
---------------------------------------------------------------------
700
(1 row)
EXECUTE serial_prepared_local;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470009 (key, ser, collection_id) VALUES ('701'::bigint, '3940649673949190'::bigint, 0) RETURNING key, ser
key | ser
-----+------------------
---------------------------------------------------------------------
701 | 3940649673949190
(1 row)
SELECT setval('collections_list_key_seq', 708);
setval
--------
---------------------------------------------------------------------
708
(1 row)
EXECUTE serial_prepared_local;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470011 (key, ser, collection_id) VALUES ('709'::bigint, '3940649673949191'::bigint, 0) RETURNING key, ser
key | ser
-----+------------------
---------------------------------------------------------------------
709 | 3940649673949191
(1 row)
SELECT setval('collections_list_key_seq', 709);
setval
--------
---------------------------------------------------------------------
709
(1 row)
EXECUTE serial_prepared_local;
LOG: executing the command locally: INSERT INTO local_shard_execution.collections_list_1470009 (key, ser, collection_id) VALUES ('710'::bigint, '3940649673949192'::bigint, 0) RETURNING key, ser
key | ser
-----+------------------
---------------------------------------------------------------------
710 | 3940649673949192
(1 row)
-- and, one remote test
SELECT setval('collections_list_key_seq', 10);
setval
--------
---------------------------------------------------------------------
10
(1 row)
EXECUTE serial_prepared_local;
key | ser
-----+------------------
---------------------------------------------------------------------
11 | 3940649673949193
(1 row)
@ -1369,7 +1369,7 @@ EXECUTE serial_prepared_local;
WITH distributed_local_mixed AS (INSERT INTO reference_table VALUES (1000) RETURNING *) SELECT * FROM distributed_local_mixed;
LOG: executing the command locally: INSERT INTO local_shard_execution.reference_table_1470000 (key) VALUES (1000) RETURNING key
key
------
---------------------------------------------------------------------
1000
(1 row)
@ -1389,7 +1389,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.distribut
LOG: executing the command locally: DELETE FROM local_shard_execution.distributed_table_1470001 distributed_table RETURNING key
LOG: executing the command locally: DELETE FROM local_shard_execution.distributed_table_1470003 distributed_table RETURNING key
key
-----
---------------------------------------------------------------------
1
2
(2 rows)
@ -1409,7 +1409,7 @@ LOG: executing the command locally: INSERT INTO local_shard_execution.reference
DELETE FROM reference_table RETURNING key;
LOG: executing the command locally: DELETE FROM local_shard_execution.reference_table_1470000 reference_table RETURNING key
key
-----
---------------------------------------------------------------------
1
2
(2 rows)
@ -1428,7 +1428,7 @@ CREATE TABLE event_responses (
);
SELECT create_distributed_table('event_responses', 'event_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1442,7 +1442,7 @@ END;
$fn$;
SELECT create_distributed_function('register_for_event(int,int,invite_resp)', 'p_event_id', 'event_responses');
create_distributed_function
-----------------------------
---------------------------------------------------------------------
(1 row)
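
All of the hunks above come down to a handful of normalization rules applied to the expected files. As a rough illustration, rules along these lines would produce the rewrites visible in this diff; they are sketches of the kinds of patterns involved, not the literal contents of `src/test/regress/bin/normalize.sed`, and they assume the `sed -E` (extended regex) mode that `ci/normalize_expected.sh` already uses:

```sed
# Sketch only -- see src/test/regress/bin/normalize.sed for the real rules.
# Collapse psql's result-table separator lines to one fixed-width line,
# so column-width changes stop showing up as diff noise.
s/^-[+-]*$/---------------------------------------------------------------------/
# Mask worker ports in EXPLAIN output and in error messages.
s/port=[0-9]+/port=xxxxx/g
s/localhost:[0-9]+/localhost:xxxxx/g
# Mask intermediate result ids, which differ between runs.
s/read_intermediate_result\('[0-9]+_/read_intermediate_result('XXX_/g
# Strip trailing whitespace, matching our editorconfig settings.
s/[ \t]+$//
```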


@ -1,6 +1,6 @@
---
---------------------------------------------------------------------
--- materialized_view
---
---------------------------------------------------------------------
-- This file contains test cases for materialized view support.
-- materialized views work
-- insert into... select works with views
@ -10,14 +10,14 @@ CREATE VIEW air_shipped_lineitems AS SELECT * FROM lineitem_hash_part WHERE l_sh
CREATE TABLE temp_lineitem(LIKE lineitem_hash_part);
SELECT create_distributed_table('temp_lineitem', 'l_orderkey', 'hash', 'lineitem_hash_part');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO temp_lineitem SELECT * FROM air_shipped_lineitems;
SELECT count(*) FROM temp_lineitem;
count
-------
---------------------------------------------------------------------
1706
(1 row)
@ -25,7 +25,7 @@ SELECT count(*) FROM temp_lineitem;
INSERT INTO temp_lineitem SELECT * FROM air_shipped_lineitems WHERE l_shipmode = 'MAIL';
SELECT count(*) FROM temp_lineitem;
count
-------
---------------------------------------------------------------------
1706
(1 row)
@ -34,7 +34,7 @@ CREATE MATERIALIZED VIEW mode_counts
AS SELECT l_shipmode, count(*) FROM temp_lineitem GROUP BY l_shipmode;
SELECT * FROM mode_counts WHERE l_shipmode = 'AIR' ORDER BY 2 DESC, 1 LIMIT 10;
l_shipmode | count
------------+-------
---------------------------------------------------------------------
AIR | 1706
(1 row)
@ -45,7 +45,7 @@ ERROR: relation mode_counts is not distributed
INSERT INTO temp_lineitem SELECT * FROM air_shipped_lineitems;
SELECT * FROM mode_counts WHERE l_shipmode = 'AIR' ORDER BY 2 DESC, 1 LIMIT 10;
l_shipmode | count
------------+-------
---------------------------------------------------------------------
AIR | 1706
(1 row)
@ -53,7 +53,7 @@ SELECT * FROM mode_counts WHERE l_shipmode = 'AIR' ORDER BY 2 DESC, 1 LIMIT 10;
REFRESH MATERIALIZED VIEW mode_counts;
SELECT * FROM mode_counts WHERE l_shipmode = 'AIR' ORDER BY 2 DESC, 1 LIMIT 10;
l_shipmode | count
------------+-------
---------------------------------------------------------------------
AIR | 3412
(1 row)
@ -67,7 +67,7 @@ WHERE lineitem_hash_part.l_orderkey=orders_hash_part.o_orderkey AND lineitem_has
REFRESH MATERIALIZED VIEW materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
6
(1 row)
@ -80,7 +80,7 @@ WHERE lineitem_hash_part.l_orderkey=orders_hash_part.o_orderkey;
REFRESH MATERIALIZED VIEW materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -94,7 +94,7 @@ WHERE lineitem_hash_part.l_orderkey=orders_hash_part.o_orderkey;
REFRESH MATERIALIZED VIEW materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -111,7 +111,7 @@ FROM orders_hash_part JOIN (
REFRESH MATERIALIZED VIEW materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
2985
(1 row)
@ -124,7 +124,7 @@ WHERE lineitem_hash_part.l_orderkey=orders_reference.o_orderkey;
REFRESH MATERIALIZED VIEW materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -139,21 +139,21 @@ WHERE lineitem_local_to_hash_part.l_orderkey=orders_local_to_hash_part.o_orderke
SELECT create_distributed_table('lineitem_local_to_hash_part', 'l_orderkey');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('orders_local_to_hash_part', 'o_orderkey');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
REFRESH MATERIALIZED VIEW materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -168,7 +168,7 @@ WHERE lineitem_hash_part.l_orderkey=orders_hash_part.o_orderkey;
REFRESH MATERIALIZED VIEW materialized_view WITH DATA;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -193,7 +193,7 @@ CREATE UNIQUE INDEX materialized_view_index ON materialized_view (o_orderdate);
REFRESH MATERIALIZED VIEW CONCURRENTLY materialized_view;
SELECT count(*) FROM materialized_view;
count
-------
---------------------------------------------------------------------
1699
(1 row)
@ -206,13 +206,13 @@ CREATE TABLE large (id int, tenant_id int);
CREATE TABLE small (id int, tenant_id int);
SELECT create_distributed_table('large','tenant_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('small','tenant_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -227,7 +227,7 @@ UPDATE large SET id=20 FROM small_view WHERE small_view.id=large.id;
ERROR: materialized views in modify queries are not supported
SELECT * FROM large ORDER BY 1, 2;
id | tenant_id
----+-----------
---------------------------------------------------------------------
1 | 2
2 | 3
5 | 4
@ -239,7 +239,7 @@ UPDATE large SET id=28 FROM small_view WHERE small_view.id=large.id and small_vi
ERROR: materialized views in modify queries are not supported
SELECT * FROM large ORDER BY 1, 2;
id | tenant_id
----+-----------
---------------------------------------------------------------------
1 | 2
2 | 3
5 | 4
@ -250,7 +250,7 @@ SELECT * FROM large ORDER BY 1, 2;
DELETE FROM large WHERE tenant_id in (SELECT tenant_id FROM small_view);
SELECT * FROM large ORDER BY 1, 2;
id | tenant_id
----+-----------
---------------------------------------------------------------------
6 | 5
(1 row)
@ -268,13 +268,13 @@ CREATE TABLE large_partitioned_p2 PARTITION OF large_partitioned FOR VALUES FROM
CREATE TABLE large_partitioned_p3 PARTITION OF large_partitioned FOR VALUES FROM (20) TO (100);
SELECT create_distributed_table('large_partitioned','tenant_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('small','tenant_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -291,7 +291,7 @@ UPDATE large_partitioned SET id=20 FROM small_view WHERE small_view.id=large_par
ERROR: materialized views in modify queries are not supported
SELECT * FROM large_partitioned ORDER BY 1, 2;
id | tenant_id
----+-----------
---------------------------------------------------------------------
1 | 2
2 | 3
5 | 4
@ -305,7 +305,7 @@ SELECT * FROM large_partitioned ORDER BY 1, 2;
DELETE FROM large_partitioned WHERE id in (SELECT id FROM small_view);
SELECT * FROM large_partitioned ORDER BY 1, 2;
id | tenant_id
----+-----------
---------------------------------------------------------------------
2 | 3
5 | 4
26 | 32
@ -322,13 +322,13 @@ DELETE FROM large_partitioned WHERE id in (SELECT * FROM all_small_view_ids);
-- make sure that materialized view in a CTE/subquery can be joined with a distributed table
WITH cte AS (SELECT *, random() FROM small_view) SELECT count(*) FROM cte JOIN small USING(id);
count
-------
---------------------------------------------------------------------
4
(1 row)
SELECT count(*) FROM (SELECT *, random() FROM small_view) as subquery JOIN small USING(id);
count
-------
---------------------------------------------------------------------
4
(1 row)


@ -8,7 +8,7 @@ INSERT INTO pg_dist_shard_placement
(1, 1, 1, 0, 'localhost', :worker_1_port);
-- if there are no worker nodes which match the shards this should fail
ALTER EXTENSION citus UPDATE TO '7.0-3';
ERROR: There is no node at "localhost:57637"
ERROR: There is no node at "localhost:xxxxx"
CONTEXT: PL/pgSQL function citus.find_groupid_for_node(text,integer) line 6 at RAISE
-- if you add a matching worker the upgrade should succeed
INSERT INTO pg_dist_node (nodename, nodeport, groupid)
@ -16,7 +16,7 @@ INSERT INTO pg_dist_node (nodename, nodeport, groupid)
ALTER EXTENSION citus UPDATE TO '7.0-3';
SELECT * FROM pg_dist_placement;
placementid | shardid | shardstate | shardlength | groupid
-------------+---------+------------+-------------+---------
---------------------------------------------------------------------
1 | 1 | 1 | 0 | 1
(1 row)


@ -12,7 +12,7 @@ WHERE name = 'hll'
-- Try to execute count(distinct) when approximate distincts aren't enabled
SELECT count(distinct l_orderkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
2985
(1 row)
@ -20,58 +20,58 @@ SELECT count(distinct l_orderkey) FROM lineitem;
SET citus.count_distinct_error_rate = 0.1;
SELECT count(distinct l_orderkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
2612
(1 row)
SET citus.count_distinct_error_rate = 0.01;
SELECT count(distinct l_orderkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
2967
(1 row)
-- Check approximate count(distinct) for different data types
SELECT count(distinct l_partkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
11654
(1 row)
SELECT count(distinct l_extendedprice) FROM lineitem;
count
-------
---------------------------------------------------------------------
11691
(1 row)
SELECT count(distinct l_shipdate) FROM lineitem;
count
-------
---------------------------------------------------------------------
2483
(1 row)
SELECT count(distinct l_comment) FROM lineitem;
count
-------
---------------------------------------------------------------------
11788
(1 row)
-- Check that we can execute approximate count(distinct) on complex expressions
SELECT count(distinct (l_orderkey * 2 + 1)) FROM lineitem;
count
-------
---------------------------------------------------------------------
2980
(1 row)
SELECT count(distinct extract(month from l_shipdate)) AS my_month FROM lineitem;
my_month
----------
---------------------------------------------------------------------
12
(1 row)
SELECT count(distinct l_partkey) / count(distinct l_orderkey) FROM lineitem;
?column?
----------
---------------------------------------------------------------------
3
(1 row)
@ -80,14 +80,14 @@ SELECT count(distinct l_partkey) / count(distinct l_orderkey) FROM lineitem;
SELECT count(distinct l_orderkey) FROM lineitem
WHERE octet_length(l_comment) + octet_length('randomtext'::text) > 40;
count
-------
---------------------------------------------------------------------
2355
(1 row)
SELECT count(DISTINCT l_orderkey) FROM lineitem, orders
WHERE l_orderkey = o_orderkey AND l_quantity < 5;
count
-------
---------------------------------------------------------------------
835
(1 row)
@ -97,7 +97,7 @@ SELECT count(DISTINCT l_orderkey) as distinct_order_count, l_quantity FROM linei
ORDER BY distinct_order_count ASC, l_quantity ASC
LIMIT 10;
distinct_order_count | l_quantity
----------------------+------------
---------------------------------------------------------------------
210 | 29.00
216 | 13.00
217 | 16.00
@ -123,7 +123,7 @@ CREATE TABLE test_count_distinct_schema.nation_hash(
);
SELECT create_distributed_table('test_count_distinct_schema.nation_hash', 'n_nationkey', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -132,7 +132,7 @@ SET search_path TO public;
SET citus.count_distinct_error_rate TO 0.01;
SELECT COUNT (DISTINCT n_regionkey) FROM test_count_distinct_schema.nation_hash;
count
-------
---------------------------------------------------------------------
3
(1 row)
@ -140,7 +140,7 @@ SELECT COUNT (DISTINCT n_regionkey) FROM test_count_distinct_schema.nation_hash;
SET search_path TO test_count_distinct_schema;
SELECT COUNT (DISTINCT n_regionkey) FROM nation_hash;
count
-------
---------------------------------------------------------------------
3
(1 row)
@ -161,7 +161,7 @@ SELECT l_returnflag, count(DISTINCT l_shipdate) as count_distinct, count(*) as t
ORDER BY total
LIMIT 10;
l_returnflag | count_distinct | total
--------------+----------------+-------
---------------------------------------------------------------------
R | 1103 | 2901
A | 1108 | 2944
N | 1265 | 6155
@ -177,7 +177,7 @@ SELECT
ORDER BY 2 DESC, 1 DESC
LIMIT 10;
l_orderkey | count | count | count
------------+-------+-------+-------
---------------------------------------------------------------------
12005 | 4 | 4 | 4
5409 | 4 | 4 | 4
4964 | 4 | 4 | 4
@ -194,7 +194,7 @@ SELECT
SET citus.count_distinct_error_rate = 0.0;
SELECT count(distinct l_orderkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
2985
(1 row)

View File
@ -10,14 +10,14 @@ WHERE name = 'hll'
\gset
:create_cmd;
hll_present
-------------
---------------------------------------------------------------------
f
(1 row)
-- Try to execute count(distinct) when approximate distincts aren't enabled
SELECT count(distinct l_orderkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
2985
(1 row)
@ -83,7 +83,7 @@ CREATE TABLE test_count_distinct_schema.nation_hash(
);
SELECT create_distributed_table('test_count_distinct_schema.nation_hash', 'n_nationkey', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -131,7 +131,7 @@ HINT: You need to have the hll extension loaded.
SET citus.count_distinct_error_rate = 0.0;
SELECT count(distinct l_orderkey) FROM lineitem;
count
-------
---------------------------------------------------------------------
2985
(1 row)

View File
@ -14,7 +14,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -31,14 +31,14 @@ INSERT INTO products VALUES(1, 'product_1', 1);
INSERT INTO products VALUES(1, 'product_1', 1);
ERROR: duplicate key value violates unique constraint "p_key_1450001"
DETAIL: Key (product_no)=(1) already exists.
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
ALTER TABLE products DROP CONSTRAINT p_key;
INSERT INTO products VALUES(1, 'product_1', 1);
-- Can not create constraint since it conflicts with the existing data
ALTER TABLE products ADD CONSTRAINT p_key PRIMARY KEY(product_no);
ERROR: could not create unique index "p_key_1450001"
DETAIL: Key (product_no)=(1) is duplicated.
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE products;
-- Check "PRIMARY KEY CONSTRAINT" with reference table
CREATE TABLE products_ref (
@ -48,7 +48,7 @@ CREATE TABLE products_ref (
);
SELECT create_reference_table('products_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -62,7 +62,7 @@ INSERT INTO products_ref VALUES(1, 'product_1', 1);
INSERT INTO products_ref VALUES(1, 'product_1', 1);
ERROR: duplicate key value violates unique constraint "p_key_1450032"
DETAIL: Key (product_no)=(1) already exists.
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE products_ref;
-- Check "PRIMARY KEY CONSTRAINT" on append table
CREATE TABLE products_append (
@ -72,7 +72,7 @@ CREATE TABLE products_append (
);
SELECT create_distributed_table('products_append', 'product_no', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -98,7 +98,7 @@ DROP TABLE products_append;
CREATE TABLE unique_test_table(id int, name varchar(20));
SELECT create_distributed_table('unique_test_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -113,16 +113,16 @@ ALTER TABLE unique_test_table ADD CONSTRAINT unn_id UNIQUE(id);
INSERT INTO unique_test_table VALUES(1, 'Ahmet');
INSERT INTO unique_test_table VALUES(1, 'Mehmet');
ERROR: duplicate key value violates unique constraint "unn_id_1450035"
DETAIL: Key (id)=(1) already exists.
CONTEXT: while executing command on localhost:57638
DETAIL: Key (id)=(X) already exists.
CONTEXT: while executing command on localhost:xxxxx
ALTER TABLE unique_test_table DROP CONSTRAINT unn_id;
-- Insert row which will conflict with the next unique constraint command
INSERT INTO unique_test_table VALUES(1, 'Mehmet');
-- Can not create constraint since it conflicts with the existing data
ALTER TABLE unique_test_table ADD CONSTRAINT unn_id UNIQUE(id);
ERROR: could not create unique index "unn_id_1450035"
DETAIL: Key (id)=(1) is duplicated.
CONTEXT: while executing command on localhost:57637
DETAIL: Key (id)=(X) is duplicated.
CONTEXT: while executing command on localhost:xxxxx
-- Can create unique constraint over multiple columns which must include
-- distribution column
ALTER TABLE unique_test_table ADD CONSTRAINT unn_id_name UNIQUE(id, name);
@ -130,13 +130,13 @@ ALTER TABLE unique_test_table ADD CONSTRAINT unn_id_name UNIQUE(id, name);
INSERT INTO unique_test_table VALUES(1, 'Mehmet');
ERROR: duplicate key value violates unique constraint "unn_id_name_1450035"
DETAIL: Key (id, name)=(1, Mehmet) already exists.
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE unique_test_table;
-- Check "UNIQUE CONSTRAINT" with reference table
CREATE TABLE unique_test_table_ref(id int, name varchar(20));
SELECT create_reference_table('unique_test_table_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -147,8 +147,8 @@ ALTER TABLE unique_test_table_ref ADD CONSTRAINT unn_id UNIQUE(id);
INSERT INTO unique_test_table_ref VALUES(1, 'Ahmet');
INSERT INTO unique_test_table_ref VALUES(1, 'Mehmet');
ERROR: duplicate key value violates unique constraint "unn_id_1450066"
DETAIL: Key (id)=(1) already exists.
CONTEXT: while executing command on localhost:57637
DETAIL: Key (id)=(X) already exists.
CONTEXT: while executing command on localhost:xxxxx
-- We can add unique constraint with multiple columns
ALTER TABLE unique_test_table_ref DROP CONSTRAINT unn_id;
ALTER TABLE unique_test_table_ref ADD CONSTRAINT unn_id_name UNIQUE(id,name);
@ -159,7 +159,7 @@ DROP TABLE unique_test_table_ref;
CREATE TABLE unique_test_table_append(id int, name varchar(20));
SELECT create_distributed_table('unique_test_table_append', 'id', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -179,7 +179,7 @@ HINT: Consider using hash partitioning.
-- Error out. Table can not have two rows with the same id.
\COPY unique_test_table_append FROM STDIN DELIMITER AS ',';
ERROR: duplicate key value violates unique constraint "unn_id_1450067"
DETAIL: Key (id)=(1) already exists.
DETAIL: Key (id)=(X) already exists.
DROP TABLE unique_test_table_append;
-- Check "CHECK CONSTRAINT"
CREATE TABLE products (
@ -190,7 +190,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -202,12 +202,12 @@ ALTER TABLE products ADD CONSTRAINT p_multi_check CHECK(price > discounted_price
INSERT INTO products VALUES(1, 'product_1', -1, -2);
ERROR: new row for relation "products_1450069" violates check constraint "p_check_1450069"
DETAIL: Failing row contains (1, product_1, -1, -2).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
INSERT INTO products VALUES(1, 'product_1', 5, 3);
INSERT INTO products VALUES(1, 'product_1', 2, 3);
ERROR: new row for relation "products_1450069" violates check constraint "p_multi_check_1450069"
DETAIL: Failing row contains (1, product_1, 2, 3).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE products;
-- Check "CHECK CONSTRAINT" with reference table
CREATE TABLE products_ref (
@ -218,7 +218,7 @@ CREATE TABLE products_ref (
);
SELECT create_reference_table('products_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -230,12 +230,12 @@ ALTER TABLE products_ref ADD CONSTRAINT p_multi_check CHECK(price > discounted_p
INSERT INTO products_ref VALUES(1, 'product_1', -1, -2);
ERROR: new row for relation "products_ref_1450100" violates check constraint "p_check_1450100"
DETAIL: Failing row contains (1, product_1, -1, -2).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
INSERT INTO products_ref VALUES(1, 'product_1', 5, 3);
INSERT INTO products_ref VALUES(1, 'product_1', 2, 3);
ERROR: new row for relation "products_ref_1450100" violates check constraint "p_multi_check_1450100"
DETAIL: Failing row contains (1, product_1, 2, 3).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE products_ref;
-- Check "CHECK CONSTRAINT" with append table
CREATE TABLE products_append (
@ -246,7 +246,7 @@ CREATE TABLE products_append (
);
SELECT create_distributed_table('products_append', 'product_no', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -266,7 +266,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -285,7 +285,7 @@ INSERT INTO products VALUES(2,'product_2', 5);
INSERT INTO products VALUES(2,'product_2', 5);
ERROR: conflicting key value violates exclusion constraint "exc_pno_name_1450126"
DETAIL: Key (product_no, name)=(2, product_2) conflicts with existing key (product_no, name)=(2, product_2).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE products;
-- Check "EXCLUSION CONSTRAINT" with reference table
CREATE TABLE products_ref (
@ -295,7 +295,7 @@ CREATE TABLE products_ref (
);
SELECT create_reference_table('products_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -309,7 +309,7 @@ INSERT INTO products_ref VALUES(1,'product_2', 10);
INSERT INTO products_ref VALUES(2,'product_2', 5);
ERROR: conflicting key value violates exclusion constraint "exc_name_1450134"
DETAIL: Key (name)=(product_2) conflicts with existing key (name)=(product_2).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
DROP TABLE products_ref;
-- Check "EXCLUSION CONSTRAINT" with append table
CREATE TABLE products_append (
@ -319,7 +319,7 @@ CREATE TABLE products_append (
);
SELECT create_distributed_table('products_append', 'product_no','append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -349,7 +349,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -358,7 +358,7 @@ ALTER TABLE products ALTER COLUMN name SET NOT NULL;
INSERT INTO products VALUES(1,NULL,5);
ERROR: null value in column "name" violates not-null constraint
DETAIL: Failing row contains (1, null, 5).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
INSERT INTO products VALUES(NULL,'product_1', 5);
ERROR: cannot perform an INSERT with NULL in the partition column
DROP TABLE products;
@ -370,7 +370,7 @@ CREATE TABLE products_ref (
);
SELECT create_reference_table('products_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -379,7 +379,7 @@ ALTER TABLE products_ref ALTER COLUMN name SET NOT NULL;
INSERT INTO products_ref VALUES(1,NULL,5);
ERROR: null value in column "name" violates not-null constraint
DETAIL: Failing row contains (1, null, 5).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
INSERT INTO products_ref VALUES(NULL,'product_1', 5);
DROP TABLE products_ref;
-- Check "NOT NULL" with append table
@ -390,7 +390,7 @@ CREATE TABLE products_append (
);
SELECT create_distributed_table('products_append', 'product_no', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -406,7 +406,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -439,7 +439,7 @@ CREATE TABLE products (
);
SELECT create_distributed_table('products', 'product_no');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -459,13 +459,13 @@ ROLLBACK;
-- There should be no constraint on master and worker(s)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='products'::regclass;
Constraint | Definition
------------+------------
---------------------------------------------------------------------
(0 rows)
\c - - - :worker_1_port
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.products_1450202'::regclass;
Constraint | Definition
------------+------------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -479,13 +479,13 @@ ROLLBACK;
-- There should be no constraint on master and worker(s)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='products'::regclass;
Constraint | Definition
------------+------------
---------------------------------------------------------------------
(0 rows)
\c - - - :worker_1_port
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.products_1450202'::regclass;
Constraint | Definition
------------+------------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -498,7 +498,7 @@ CREATE UNIQUE INDEX CONCURRENTLY alter_pk_idx ON sc1.alter_add_prim_key(x);
ALTER TABLE sc1.alter_add_prim_key ADD CONSTRAINT alter_pk_idx PRIMARY KEY USING INDEX alter_pk_idx;
SELECT create_distributed_table('sc1.alter_add_prim_key', 'x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -517,7 +517,7 @@ SELECT (run_command_on_workers($$
ORDER BY
1,2,3,4;
nodename | nodeport | success | result
-----------+----------+---------+----------------------
---------------------------------------------------------------------
localhost | 57637 | t | alter_pk_idx_1450234
localhost | 57638 | t | alter_pk_idx_1450234
(2 rows)
@ -527,7 +527,7 @@ CREATE TABLE sc2.alter_add_prim_key(x int, y int);
SET search_path TO 'sc2';
SELECT create_distributed_table('alter_add_prim_key', 'x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -548,7 +548,7 @@ SELECT (run_command_on_workers($$
ORDER BY
1,2,3,4;
nodename | nodeport | success | result
-----------+----------+---------+----------------------
---------------------------------------------------------------------
localhost | 57637 | t | alter_pk_idx_1450236
localhost | 57638 | t | alter_pk_idx_1450236
(2 rows)
@ -561,7 +561,7 @@ SET search_path TO 'sc3';
SELECT create_distributed_table('alter_add_prim_key', 'x');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -583,7 +583,7 @@ SELECT (run_command_on_workers($$
ORDER BY
1,2,3,4;
nodename | nodeport | success | result
-----------+----------+---------+----------------------
---------------------------------------------------------------------
localhost | 57637 | t | a_constraint_1450238
localhost | 57638 | t | a_constraint_1450238
(2 rows)
@ -604,7 +604,7 @@ SELECT (run_command_on_workers($$
ORDER BY
1,2,3,4;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t |
localhost | 57638 | t |
(2 rows)

View File
@ -10,7 +10,7 @@ $$;
-- Check multi_cat_agg() aggregate which is used to implement array_agg()
SELECT array_cat_agg(i) FROM (VALUES (ARRAY[1,2]), (NULL), (ARRAY[3,4])) AS t(i);
array_cat_agg
---------------
---------------------------------------------------------------------
{1,2,3,4}
(1 row)
@ -25,7 +25,7 @@ ERROR: array_agg with order by is unsupported
SELECT array_sort(array_agg(l_partkey)) FROM lineitem GROUP BY l_orderkey
ORDER BY l_orderkey LIMIT 10;
array_sort
--------------------------------------------------
---------------------------------------------------------------------
{2132,15635,24027,63700,67310,155190}
{106170}
{4297,19036,29380,62143,128449,183095}
@ -41,7 +41,7 @@ SELECT array_sort(array_agg(l_partkey)) FROM lineitem GROUP BY l_orderkey
SELECT array_sort(array_agg(l_extendedprice)) FROM lineitem GROUP BY l_orderkey
ORDER BY l_orderkey LIMIT 10;
array_sort
-----------------------------------------------------------------
---------------------------------------------------------------------
{13309.60,21168.23,22824.48,28955.64,45983.16,49620.16}
{44694.46}
{2618.76,28733.64,32986.52,39890.88,46796.47,54058.05}
@ -57,7 +57,7 @@ SELECT array_sort(array_agg(l_extendedprice)) FROM lineitem GROUP BY l_orderkey
SELECT array_sort(array_agg(l_shipdate)) FROM lineitem GROUP BY l_orderkey
ORDER BY l_orderkey LIMIT 10;
array_sort
--------------------------------------------------------------------------------
---------------------------------------------------------------------
{01-29-1996,01-30-1996,03-13-1996,03-30-1996,04-12-1996,04-21-1996}
{01-28-1997}
{10-29-1993,11-09-1993,12-04-1993,12-14-1993,01-16-1994,02-02-1994}
@ -73,7 +73,7 @@ SELECT array_sort(array_agg(l_shipdate)) FROM lineitem GROUP BY l_orderkey
SELECT array_sort(array_agg(l_shipmode)) FROM lineitem GROUP BY l_orderkey
ORDER BY l_orderkey LIMIT 10;
array_sort
----------------------------------------------------------------------------------------------
---------------------------------------------------------------------
{"AIR ","FOB ","MAIL ","MAIL ","REG AIR ","TRUCK "}
{"RAIL "}
{"AIR ","FOB ","RAIL ","RAIL ","SHIP ","TRUCK "}
@ -89,7 +89,7 @@ SELECT array_sort(array_agg(l_shipmode)) FROM lineitem GROUP BY l_orderkey
-- Check that we can execute array_agg() within other functions
SELECT array_length(array_agg(l_orderkey), 1) FROM lineitem;
array_length
--------------
---------------------------------------------------------------------
12000
(1 row)
@ -101,7 +101,7 @@ SELECT l_quantity, count(*), avg(l_extendedprice), array_sort(array_agg(l_orderk
WHERE l_quantity < 5 AND l_orderkey > 5500 AND l_orderkey < 9500
GROUP BY l_quantity ORDER BY l_quantity;
l_quantity | count | avg | array_sort
------------+-------+-----------------------+--------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
1.00 | 17 | 1477.1258823529411765 | {5543,5633,5634,5698,5766,5856,5857,5986,8997,9026,9158,9184,9220,9222,9348,9383,9476}
2.00 | 19 | 3078.4242105263157895 | {5506,5540,5573,5669,5703,5730,5798,5831,5893,5920,5923,9030,9058,9123,9124,9188,9344,9441,9476}
3.00 | 14 | 4714.0392857142857143 | {5509,5543,5605,5606,5827,9124,9157,9184,9223,9254,9349,9414,9475,9477}
@ -112,7 +112,7 @@ SELECT l_quantity, array_sort(array_agg(extract (month FROM o_orderdate))) AS my
FROM lineitem, orders WHERE l_orderkey = o_orderkey AND l_quantity < 5
AND l_orderkey > 5500 AND l_orderkey < 9500 GROUP BY l_quantity ORDER BY l_quantity;
l_quantity | my_month
------------+------------------------------------------------
---------------------------------------------------------------------
1.00 | {2,3,4,4,4,5,5,5,6,7,7,7,7,9,9,11,11}
2.00 | {1,3,5,5,5,5,6,6,6,7,7,8,10,10,11,11,11,12,12}
3.00 | {3,4,5,6,7,7,8,8,8,9,9,10,11,11}
@ -123,7 +123,7 @@ SELECT l_quantity, array_sort(array_agg(l_orderkey * 2 + 1)) FROM lineitem WHERE
AND octet_length(l_comment) + octet_length('randomtext'::text) > 40
AND l_orderkey > 5500 AND l_orderkey < 9500 GROUP BY l_quantity ORDER BY l_quantity;
l_quantity | array_sort
------------+---------------------------------------------
---------------------------------------------------------------------
1.00 | {11269,11397,11713,11715,11973,18317,18445}
2.00 | {11847,18061,18247,18953}
3.00 | {18249,18315,18699,18951,18955}
@ -134,14 +134,14 @@ SELECT l_quantity, array_sort(array_agg(l_orderkey * 2 + 1)) FROM lineitem WHERE
SELECT array_agg(case when l_quantity > 20 then l_quantity else NULL end)
FROM lineitem WHERE l_orderkey < 10;
array_agg
--------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
{NULL,36.00,NULL,28.00,24.00,32.00,38.00,45.00,49.00,27.00,NULL,28.00,26.00,30.00,NULL,26.00,50.00,37.00,NULL,NULL,46.00,28.00,38.00,35.00,NULL}
(1 row)
-- Check that we return NULL in case there are no input rows to array_agg()
SELECT array_agg(l_orderkey) FROM lineitem WHERE l_quantity < 0;
array_agg
-----------
---------------------------------------------------------------------
(1 row)

View File
@ -26,7 +26,7 @@ ORDER BY
l_returnflag,
l_linestatus;
sum_qty | sum_base_price | sum_disc_price | sum_charge | avg_qty | avg_price | avg_disc | count_order | l_returnflag | l_linestatus
-----------+----------------+----------------+------------------+---------------------+--------------------+------------------------+-------------+--------------+--------------
---------------------------------------------------------------------
75465.00 | 113619873.63 | 107841287.0728 | 112171153.245923 | 25.6334918478260870 | 38593.707075407609 | 0.05055027173913043478 | 2944 | A | F
2022.00 | 3102551.45 | 2952540.7118 | 3072642.770652 | 26.6052631578947368 | 40823.045394736842 | 0.05263157894736842105 | 76 | N | F
149778.00 | 224706948.16 | 213634857.6854 | 222134071.929801 | 25.4594594594594595 | 38195.979629440762 | 0.04939486656467788543 | 5883 | N | O
@ -46,7 +46,7 @@ SELECT
FROM
lineitem;
avg
---------------------
---------------------------------------------------------------------
35.3570440077497924
(1 row)
@ -59,7 +59,7 @@ SELECT
FROM
lineitem;
avg
-----
---------------------------------------------------------------------
(1 row)

View File
@ -5,19 +5,19 @@
-- our partitioned table.
SELECT count(*) FROM lineitem;
count
-------
---------------------------------------------------------------------
12000
(1 row)
SELECT sum(l_extendedprice) FROM lineitem;
sum
--------------
---------------------------------------------------------------------
457702024.50
(1 row)
SELECT avg(l_extendedprice) FROM lineitem;
avg
--------------------
---------------------------------------------------------------------
38141.835375000000
(1 row)
@ -26,7 +26,7 @@ BEGIN;
SET TRANSACTION READ ONLY;
SELECT count(*) FROM lineitem;
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -34,7 +34,7 @@ COMMIT;
-- Verify temp tables which are used for final result aggregation don't persist.
SELECT count(*) FROM pg_class WHERE relname LIKE 'pg_merge_job_%' AND relkind = 'r';
count
-------
---------------------------------------------------------------------
0
(1 row)

View File
@ -1,8 +1,8 @@
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Vanilla funnel query
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
INSERT INTO agg_results (user_id, value_1_agg)
SELECT user_id, array_length(events_table, 1)
FROM (
@ -21,15 +21,15 @@ FROM (
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
2 | 2 | 1.5000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Funnel grouped by whether or not a user has done an event
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
INSERT INTO agg_results (user_id, value_1_agg, value_2_agg )
SELECT user_id, sum(array_length(events_table, 1)), length(hasdone_event)
FROM (
@ -70,15 +70,15 @@ FROM (
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
4 | 2 | 1.5000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Funnel, grouped by the number of times a user has done an event
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
INSERT INTO agg_results (user_id, value_1_agg, value_2_agg)
SELECT
user_id,
@ -147,17 +147,17 @@ ORDER BY
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
7 | 3 | 1.7142857142857143
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Most recently seen users_table events_table
------------------------------------
---------------------------------------------------------------------
-- Note that we don't use ORDER BY/LIMIT yet
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results (user_id, agg_time, value_2_agg)
SELECT
@ -188,15 +188,15 @@ ORDER BY user_lastseen DESC;
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
3 | 3 | 2.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Count the number of distinct users_table who are in segment X and Y and Z
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results (user_id)
SELECT DISTINCT user_id
@ -207,15 +207,15 @@ WHERE user_id IN (SELECT user_id FROM users_table WHERE value_1 >= 1 AND value_1
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
5 | 5 | 3.8000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Count the number of distinct users_table who are in at least two of X and Y and Z segments
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id)
SELECT user_id
@ -228,15 +228,15 @@ HAVING count(distinct value_1) >= 2;
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
6 | 6 | 3.5000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find customers who have done X, and satisfy other customer specific criteria
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -246,15 +246,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
20 | 6 | 3.7500000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who havent done X, and satisfy other customer specific criteria
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -264,15 +264,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
4 | 2 | 4.2500000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X and Y, and satisfy other customer specific criteria
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -283,15 +283,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
29 | 5 | 3.1034482758620690
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X and havent done Y, and satisfy other customer specific criteria
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -301,15 +301,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
11 | 1 | 5.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X more than 2 times, and satisfy other customer specific criteria
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_2_agg)
SELECT user_id,
@ -329,15 +329,15 @@ INSERT INTO agg_results(user_id, value_2_agg)
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
4 | 2 | 3.5000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find me all users_table who logged in more than once
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_1_agg)
SELECT user_id, value_1 from
@ -348,15 +348,15 @@ SELECT user_id, value_1 from
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+------------------------
---------------------------------------------------------------------
1 | 1 | 1.00000000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find me all users_table who has done some event and has filters
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id)
Select user_id
@ -371,15 +371,15 @@ And user_id in
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
11 | 4 | 3.1818181818181818
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Which events_table did people who has done some specific events_table
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_1_agg)
SELECT user_id, event_type FROM events_table
@ -388,15 +388,15 @@ GROUP BY user_id, event_type;
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
34 | 6 | 3.4411764705882353
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find me all the users_table who has done some event more than three times
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id)
select user_id from
@ -410,15 +410,15 @@ where event_type = 4 group by user_id having count(*) > 3
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
4 | 4 | 2.5000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find my assets that have the highest probability and fetch their metadata
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results;
INSERT INTO agg_results(user_id, value_1_agg, value_3_agg)
SELECT
@ -438,7 +438,7 @@ FROM
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
3488 | 6 | 3.5372706422018349
(1 row)
@ -462,7 +462,7 @@ FROM
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
6 | 6 | 3.5000000000000000
(1 row)
@ -486,7 +486,7 @@ FROM
ORDER BY 1, 2;
SELECT count(*), count(DISTINCT user_id), avg(user_id), avg(value_1_agg) FROM agg_results;
count | count | avg | avg
-------+-------+--------------------+------------------------
---------------------------------------------------------------------
6 | 6 | 3.5000000000000000 | 0.16666666666666666667
(1 row)
@ -509,7 +509,7 @@ FROM
ORDER BY 1, 2;
SELECT count(*), count(DISTINCT user_id), avg(user_id), avg(value_1_agg) FROM agg_results;
count | count | avg | avg
-------+-------+--------------------+------------------------
---------------------------------------------------------------------
6 | 6 | 3.5000000000000000 | 0.16666666666666666667
(1 row)

View File
@ -1,8 +1,8 @@
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Vanilla funnel query -- single shard
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id, value_1_agg)
SELECT user_id, array_length(events_table, 1)
@ -23,15 +23,15 @@ WHERE user_id = 2;
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
1 | 1 | 2.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Vanilla funnel query -- two shards
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id, value_1_agg)
SELECT user_id, array_length(events_table, 1)
@ -52,15 +52,15 @@ WHERE (user_id = 1 OR user_id = 2);
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
2 | 2 | 1.5000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Funnel grouped by whether or not a user has done an event -- single shard query
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id, value_1_agg, value_2_agg )
SELECT user_id, sum(array_length(events_table, 1)), length(hasdone_event)
@ -100,11 +100,11 @@ FROM (
WHERE t1.user_id = 2
GROUP BY t1.user_id, hasdone_event
) t GROUP BY user_id, hasdone_event;
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Funnel grouped by whether or not a user has done an event -- two shards query
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id, value_1_agg, value_2_agg )
SELECT user_id, sum(array_length(events_table, 1)), length(hasdone_event)
@ -145,17 +145,17 @@ FROM (
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
1 | 1 | 2.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Most recently seen users_table events_table -- single shard query
------------------------------------
---------------------------------------------------------------------
-- Note that we don't use ORDER BY/LIMIT yet
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id, agg_time, value_2_agg)
SELECT
@ -187,17 +187,17 @@ ORDER BY user_lastseen DESC;
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
1 | 1 | 5.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Most recently seen users_table events_table -- two shards query
------------------------------------
---------------------------------------------------------------------
-- Note that we don't use ORDER BY/LIMIT yet
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id, agg_time, value_2_agg)
SELECT
@ -230,15 +230,15 @@ ORDER BY user_lastseen DESC;
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
2 | 2 | 3.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Count the number of distinct users_table who are in segment X and Y and Z -- single shard
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id)
SELECT DISTINCT user_id
@ -250,15 +250,15 @@ WHERE user_id IN (SELECT user_id FROM users_table WHERE value_1 >= 1 AND value_1
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+------------------------
---------------------------------------------------------------------
1 | 1 | 1.00000000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Count the number of distinct users_table who are in segment X and Y and Z -- two shards
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second (user_id)
SELECT DISTINCT user_id
@ -270,15 +270,15 @@ WHERE user_id IN (SELECT user_id FROM users_table WHERE value_1 >= 1 AND value_1
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+------------------------
---------------------------------------------------------------------
1 | 1 | 1.00000000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find customers who have done X, and satisfy other customer specific criteria -- single shard
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -289,15 +289,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
7 | 1 | 2.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Find customers who have done X, and satisfy other customer specific criteria -- two shards
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -308,15 +308,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
10 | 2 | 1.7000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X and havent done Y, and satisfy other customer specific criteria -- single shard
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -327,15 +327,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+------------------------
---------------------------------------------------------------------
6 | 1 | 1.00000000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X and havent done Y, and satisfy other customer specific criteria -- two shards
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second(user_id, value_2_agg)
SELECT user_id, value_2 FROM users_table WHERE
@ -346,15 +346,15 @@ SELECT user_id, value_2 FROM users_table WHERE
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
20 | 2 | 1.7000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X more than 2 times, and satisfy other customer specific criteria -- single shard
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second(user_id, value_2_agg)
SELECT user_id,
@ -376,15 +376,15 @@ INSERT INTO agg_results_second(user_id, value_2_agg)
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
2 | 1 | 3.0000000000000000
(1 row)
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
-- Customers who have done X more than 2 times, and satisfy other customer specific criteria -- two shards
------------------------------------
------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
TRUNCATE agg_results_second;
INSERT INTO agg_results_second(user_id, value_2_agg)
SELECT user_id,
@ -405,7 +405,7 @@ INSERT INTO agg_results_second(user_id, value_2_agg)
-- get some statistics from the aggregated results to ensure the results are correct
SELECT count(*), count(DISTINCT user_id), avg(user_id) FROM agg_results_second;
count | count | avg
-------+-------+--------------------
---------------------------------------------------------------------
4 | 2 | 3.5000000000000000
(1 row)


@ -7,13 +7,13 @@ SET citus.binary_master_copy_format TO 'on';
SET citus.task_executor_type TO 'task-tracker';
SELECT count(*) FROM lineitem;
count
-------
---------------------------------------------------------------------
12000
(1 row)
SELECT l_shipmode FROM lineitem WHERE l_partkey = 67310 OR l_partkey = 155190;
l_shipmode
------------
---------------------------------------------------------------------
TRUCK
MAIL
(2 rows)
@ -21,13 +21,13 @@ SELECT l_shipmode FROM lineitem WHERE l_partkey = 67310 OR l_partkey = 155190;
RESET citus.task_executor_type;
SELECT count(*) FROM lineitem;
count
-------
---------------------------------------------------------------------
12000
(1 row)
SELECT l_shipmode FROM lineitem WHERE l_partkey = 67310 OR l_partkey = 155190;
l_shipmode
------------
---------------------------------------------------------------------
TRUCK
MAIL
(2 rows)


@ -7,13 +7,13 @@ CREATE TABLE mci_1.test (test_id integer NOT NULL, data int);
CREATE TABLE mci_2.test (test_id integer NOT NULL, data int);
SELECT create_distributed_table('mci_1.test', 'test_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('mci_2.test', 'test_id', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -21,7 +21,7 @@ INSERT INTO mci_1.test VALUES (1,2), (3,4);
-- move shards into other append-distributed table
SELECT run_command_on_placements('mci_1.test', 'ALTER TABLE %s SET SCHEMA mci_2');
run_command_on_placements
-------------------------------------------
---------------------------------------------------------------------
(localhost,57637,1601000,t,"ALTER TABLE")
(localhost,57638,1601000,t,"ALTER TABLE")
(localhost,57637,1601001,t,"ALTER TABLE")
@ -37,7 +37,7 @@ SET logicalrelid = 'mci_2.test'::regclass, shardminvalue = NULL, shardmaxvalue =
WHERE logicalrelid = 'mci_1.test'::regclass;
SELECT * FROM mci_2.test ORDER BY test_id;
test_id | data
---------+------
---------------------------------------------------------------------
1 | 2
3 | 4
(2 rows)


@ -15,16 +15,16 @@ SELECT * FROM master_run_on_worker(ARRAY['localhost']::text[], ARRAY['666']::int
ARRAY['select count(*) from pg_dist_shard']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+------------------------------------
localhost | 666 | f | failed to connect to localhost:666
---------------------------------------------------------------------
localhost | 666 | f | failed to connect to localhost:xxxxx
(1 row)
SELECT * FROM master_run_on_worker(ARRAY['localhost']::text[], ARRAY['666']::int[],
ARRAY['select count(*) from pg_dist_shard']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+------------------------------------
localhost | 666 | f | failed to connect to localhost:666
---------------------------------------------------------------------
localhost | 666 | f | failed to connect to localhost:xxxxx
(1 row)
RESET client_min_messages;
@ -38,7 +38,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from pg_dist_shard']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 0
(1 row)
@ -48,7 +48,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select * from pg_dist_shard']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | expected a single column in query target
(1 row)
@ -57,7 +57,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select a from generate_series(1,2) a']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+---------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | expected a single row in query result
(1 row)
@ -68,7 +68,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'select a from generate_series(2,2) a']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 1
localhost | 57637 | t | 2
(2 rows)
@ -80,7 +80,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'select a from generate_series(1,2) a']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+---------------------------------------
---------------------------------------------------------------------
localhost | 57637 | t | 1
localhost | 57637 | f | expected a single row in query result
(2 rows)
@ -92,7 +92,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'select a from generate_series(1,2) a']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+---------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | expected a single row in query result
localhost | 57637 | f | expected a single row in query result
(2 rows)
@ -104,7 +104,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'create table second_table(a int, b int)']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+--------------
---------------------------------------------------------------------
localhost | 57637 | t | CREATE TABLE
localhost | 57637 | t | CREATE TABLE
(2 rows)
@ -114,7 +114,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['insert into first_table select a,a from generate_series(1,20) a']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+-------------
---------------------------------------------------------------------
localhost | 57637 | t | INSERT 0 20
(1 row)
@ -122,7 +122,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from first_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 20
(1 row)
@ -131,7 +131,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['insert into second_table select * from first_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+-------------
---------------------------------------------------------------------
localhost | 57637 | t | INSERT 0 20
(1 row)
@ -139,7 +139,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['insert into second_table select * from first_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+-------------
---------------------------------------------------------------------
localhost | 57637 | t | INSERT 0 20
(1 row)
@ -148,7 +148,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from second_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 40
(1 row)
@ -163,7 +163,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['create index first_table_index on first_table(a)']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+--------------
---------------------------------------------------------------------
localhost | 57637 | t | CREATE INDEX
(1 row)
@ -172,7 +172,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['drop table first_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+------------
---------------------------------------------------------------------
localhost | 57637 | t | DROP TABLE
(1 row)
@ -180,7 +180,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['drop table second_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+------------
---------------------------------------------------------------------
localhost | 57637 | t | DROP TABLE
(1 row)
@ -189,7 +189,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from second_table']::text[],
false);
node_name | node_port | success | result
-----------+-----------+---------+------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: relation "second_table" does not exist
(1 row)
@ -201,7 +201,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from pg_dist_shard']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 0
(1 row)
@ -211,7 +211,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select * from pg_dist_shard']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | expected a single column in query target
(1 row)
@ -220,7 +220,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select a from generate_series(1,2) a']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+---------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | expected a single row in query result
(1 row)
@ -231,7 +231,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'select a from generate_series(2,2) a']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 1
localhost | 57637 | t | 2
(2 rows)
@ -243,7 +243,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'select a from generate_series(1,2) a']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+---------------------------------------
---------------------------------------------------------------------
localhost | 57637 | t | 1
localhost | 57637 | f | expected a single row in query result
(2 rows)
@ -255,7 +255,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'select a from generate_series(1,2) a']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+---------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | expected a single row in query result
localhost | 57637 | f | expected a single row in query result
(2 rows)
@ -267,7 +267,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name, :node_name]::text[],
'create table second_table(a int, b int)']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+--------------
---------------------------------------------------------------------
localhost | 57637 | t | CREATE TABLE
localhost | 57637 | t | CREATE TABLE
(2 rows)
@ -283,7 +283,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['insert into first_table select a,a from generate_series(1,20) a']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+-------------
---------------------------------------------------------------------
localhost | 57637 | t | INSERT 0 20
(1 row)
@ -291,7 +291,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from first_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 20
(1 row)
@ -300,7 +300,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['insert into second_table select * from first_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+-------------
---------------------------------------------------------------------
localhost | 57637 | t | INSERT 0 20
(1 row)
@ -308,7 +308,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['insert into second_table select * from first_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+-------------
---------------------------------------------------------------------
localhost | 57637 | t | INSERT 0 20
(1 row)
@ -317,7 +317,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from second_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 40
(1 row)
@ -326,7 +326,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['create index first_table_index on first_table(a)']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+--------------
---------------------------------------------------------------------
localhost | 57637 | t | CREATE INDEX
(1 row)
@ -335,7 +335,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['drop table first_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+------------
---------------------------------------------------------------------
localhost | 57637 | t | DROP TABLE
(1 row)
@ -343,7 +343,7 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['drop table second_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+------------
---------------------------------------------------------------------
localhost | 57637 | t | DROP TABLE
(1 row)
@ -352,21 +352,21 @@ SELECT * FROM master_run_on_worker(ARRAY[:node_name]::text[], ARRAY[:node_port]:
ARRAY['select count(*) from second_table']::text[],
true);
node_name | node_port | success | result
-----------+-----------+---------+------------------------------------------------
---------------------------------------------------------------------
localhost | 57637 | f | ERROR: relation "second_table" does not exist
(1 row)
-- run_command_on_XXX tests
SELECT * FROM run_command_on_workers('select 1') ORDER BY 2 ASC;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 1
localhost | 57638 | t | 1
(2 rows)
SELECT * FROM run_command_on_workers('select count(*) from pg_dist_partition') ORDER BY 2 ASC;
nodename | nodeport | success | result
-----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | t | 0
localhost | 57638 | t | 0
(2 rows)
@ -376,13 +376,13 @@ SET citus.shard_count TO 5;
CREATE TABLE check_placements (key int);
SELECT create_distributed_table('check_placements', 'key', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM run_command_on_placements('check_placements', 'select 1');
nodename | nodeport | shardid | success | result
-----------+----------+---------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | 1240000 | t | 1
localhost | 57638 | 1240000 | t | 1
localhost | 57637 | 1240001 | t | 1
@ -399,7 +399,7 @@ UPDATE pg_dist_shard_placement SET shardstate = 3
WHERE shardid % 2 = 0 AND nodeport = :worker_1_port;
SELECT * FROM run_command_on_placements('check_placements', 'select 1');
nodename | nodeport | shardid | success | result
-----------+----------+---------+---------+--------
---------------------------------------------------------------------
localhost | 57638 | 1240000 | t | 1
localhost | 57637 | 1240001 | t | 1
localhost | 57638 | 1240001 | t | 1
@ -414,7 +414,7 @@ DROP TABLE check_placements CASCADE;
CREATE TABLE check_colocated (key int);
SELECT create_distributed_table('check_colocated', 'key', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -422,7 +422,7 @@ SET citus.shard_count TO 4;
CREATE TABLE second_table (key int);
SELECT create_distributed_table('second_table', 'key', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -436,7 +436,7 @@ SET citus.shard_count TO 5;
CREATE TABLE second_table (key int);
SELECT create_distributed_table('second_table', 'key', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -450,14 +450,14 @@ SET citus.shard_count TO 5;
CREATE TABLE second_table (key int);
SELECT create_distributed_table('second_table', 'key', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM run_command_on_colocated_placements('check_colocated', 'second_table',
'select 1');
nodename | nodeport | shardid1 | shardid2 | success | result
-----------+----------+----------+----------+---------+--------
---------------------------------------------------------------------
localhost | 57637 | 1240005 | 1240019 | t | 1
localhost | 57638 | 1240005 | 1240019 | t | 1
localhost | 57637 | 1240006 | 1240020 | t | 1
@ -477,13 +477,13 @@ SET citus.shard_count TO 5;
CREATE TABLE check_shards (key int);
SELECT create_distributed_table('check_shards', 'key', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM run_command_on_shards('check_shards', 'select 1');
shardid | success | result
---------+---------+--------
---------------------------------------------------------------------
1240024 | t | 1
1240025 | t | 1
1240026 | t | 1
@ -495,7 +495,7 @@ UPDATE pg_dist_shard_placement SET shardstate = 3 WHERE shardid % 2 = 0;
SELECT * FROM run_command_on_shards('check_shards', 'select 1');
NOTICE: some shards do not have active placements
shardid | success | result
---------+---------+--------
---------------------------------------------------------------------
1240025 | t | 1
1240027 | t | 1
(2 rows)


@ -10,20 +10,20 @@ DETAIL: There are no active worker nodes.
-- add the nodes to the cluster
SELECT 1 FROM master_add_node('localhost', :worker_1_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
-- get the active nodes
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57638)
(localhost,57637)
(2 rows)
@ -31,14 +31,14 @@ SELECT master_get_active_worker_nodes();
-- try to add a node that is already in the cluster
SELECT * FROM master_add_node('localhost', :worker_1_port);
master_add_node
-----------------
---------------------------------------------------------------------
1
(1 row)
-- get the active nodes
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57638)
(localhost,57637)
(2 rows)
@ -46,33 +46,33 @@ SELECT master_get_active_worker_nodes();
-- try to remove a node (with no placements)
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
-- verify that the node has been deleted
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57637)
(1 row)
-- try to disable a node with no placements and see that the node is removed
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT master_disable_node('localhost', :worker_2_port);
master_disable_node
---------------------
---------------------------------------------------------------------
(1 row)
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57637)
(1 row)
@ -81,21 +81,21 @@ SET citus.shard_count TO 16;
SET citus.shard_replication_factor TO 1;
SELECT * FROM master_activate_node('localhost', :worker_2_port);
master_activate_node
----------------------
---------------------------------------------------------------------
3
(1 row)
CREATE TABLE cluster_management_test (col_1 text, col_2 int);
SELECT create_distributed_table('cluster_management_test', 'col_1', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- see that there are some active placements in the candidate node
SELECT shardid, shardstate, nodename, nodeport FROM pg_dist_shard_placement WHERE nodeport=:worker_2_port;
shardid | shardstate | nodename | nodeport
---------+------------+-----------+----------
---------------------------------------------------------------------
1220001 | 1 | localhost | 57638
1220003 | 1 | localhost | 57638
1220005 | 1 | localhost | 57638
@ -111,7 +111,7 @@ SELECT master_remove_node('localhost', :worker_2_port);
ERROR: you cannot remove the primary node of a node group which has shard placements
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57638)
(localhost,57637)
(2 rows)
@ -121,15 +121,15 @@ INSERT INTO test_reference_table VALUES (1, '1');
-- try to disable a node with active placements and see that the node is removed
-- observe that a notification is displayed
SELECT master_disable_node('localhost', :worker_2_port);
NOTICE: Node localhost:57638 has active shard placements. Some queries may fail after this operation. Use SELECT master_activate_node('localhost', 57638) to activate this node back.
NOTICE: Node localhost:xxxxx has active shard placements. Some queries may fail after this operation. Use SELECT master_activate_node('localhost', 57638) to activate this node back.
master_disable_node
---------------------
---------------------------------------------------------------------
(1 row)
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57637)
(1 row)
@ -170,49 +170,49 @@ SET ROLE node_metadata_user;
BEGIN;
SELECT 1 FROM master_add_inactive_node('localhost', :worker_2_port + 1);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_activate_node('localhost', :worker_2_port + 1);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_disable_node('localhost', :worker_2_port + 1);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_remove_node('localhost', :worker_2_port + 1);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port + 1);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_add_secondary_node('localhost', :worker_2_port + 2, 'localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT master_update_node(nodeid, 'localhost', :worker_2_port + 3) FROM pg_dist_node WHERE nodeport = :worker_2_port;
master_update_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT nodename, nodeport, noderole FROM pg_dist_node ORDER BY nodeport;
nodename | nodeport | noderole
-----------+----------+-----------
---------------------------------------------------------------------
localhost | 57637 | primary
localhost | 57639 | primary
localhost | 57640 | secondary
@ -223,14 +223,14 @@ ABORT;
\c - postgres - :master_port
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57637)
(1 row)
-- restore the node for next tests
SELECT * FROM master_activate_node('localhost', :worker_2_port);
master_activate_node
----------------------
---------------------------------------------------------------------
3
(1 row)
@ -242,7 +242,7 @@ SELECT groupid AS worker_2_group FROM pg_dist_node WHERE nodeport=:worker_2_port
UPDATE pg_dist_placement SET shardstate=3 WHERE groupid=:worker_2_group;
SELECT shardid, shardstate, nodename, nodeport FROM pg_dist_shard_placement WHERE nodeport=:worker_2_port;
shardid | shardstate | nodename | nodeport
---------+------------+-----------+----------
---------------------------------------------------------------------
1220001 | 3 | localhost | 57638
1220003 | 3 | localhost | 57638
1220005 | 3 | localhost | 57638
@ -258,7 +258,7 @@ SELECT master_remove_node('localhost', :worker_2_port);
ERROR: you cannot remove the primary node of a node group which has shard placements
SELECT master_get_active_worker_nodes();
master_get_active_worker_nodes
--------------------------------
---------------------------------------------------------------------
(localhost,57638)
(localhost,57637)
(2 rows)
@ -266,7 +266,7 @@ SELECT master_get_active_worker_nodes();
-- clean-up
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
@ -286,7 +286,7 @@ UPDATE pg_dist_placement SET groupid = :new_group WHERE groupid = :worker_2_grou
-- test that you are allowed to remove secondary nodes even if there are placements
SELECT 1 FROM master_add_node('localhost', 9990, groupid => :new_group, noderole => 'secondary');
?column?
----------
---------------------------------------------------------------------
1
(1 row)
@ -294,7 +294,7 @@ SELECT master_remove_node('localhost', :worker_2_port);
ERROR: you cannot remove the primary node of a node group which has shard placements
SELECT master_remove_node('localhost', 9990);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
@ -303,60 +303,60 @@ DROP TABLE cluster_management_test;
-- check that adding/removing nodes are propagated to nodes with metadata
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT start_metadata_sync_to_node('localhost', :worker_1_port);
start_metadata_sync_to_node
-----------------------------
---------------------------------------------------------------------
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
\c - - - :worker_1_port
SELECT nodename, nodeport FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:worker_2_port;
nodename | nodeport
-----------+----------
---------------------------------------------------------------------
localhost | 57638
(1 row)
\c - - - :master_port
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
\c - - - :worker_1_port
SELECT nodename, nodeport FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:worker_2_port;
nodename | nodeport
----------+----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
-- check that added nodes are not propagated to nodes without metadata
SELECT stop_metadata_sync_to_node('localhost', :worker_1_port);
stop_metadata_sync_to_node
----------------------------
---------------------------------------------------------------------
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
\c - - - :worker_1_port
SELECT nodename, nodeport FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:worker_2_port;
nodename | nodeport
----------+----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -365,13 +365,13 @@ SELECT
master_remove_node('localhost', :worker_1_port),
master_remove_node('localhost', :worker_2_port);
master_remove_node | master_remove_node
--------------------+--------------------
---------------------------------------------------------------------
|
(1 row)
SELECT count(1) FROM pg_dist_node;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -380,13 +380,13 @@ SELECT
master_add_node('localhost', :worker_1_port),
master_add_node('localhost', :worker_2_port);
master_add_node | master_add_node
-----------------+-----------------
---------------------------------------------------------------------
11 | 12
(1 row)
SELECT * FROM pg_dist_node ORDER BY nodeid;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+-----------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
---------------------------------------------------------------------
11 | 9 | localhost | 57637 | default | f | t | primary | default | f | t
12 | 10 | localhost | 57638 | default | f | t | primary | default | f | t
(2 rows)
@ -395,84 +395,84 @@ SELECT * FROM pg_dist_node ORDER BY nodeid;
BEGIN;
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
COMMIT;
SELECT nodename, nodeport FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:worker_2_port;
nodename | nodeport
----------+----------
---------------------------------------------------------------------
(0 rows)
SELECT start_metadata_sync_to_node('localhost', :worker_1_port);
start_metadata_sync_to_node
-----------------------------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
COMMIT;
SELECT nodename, nodeport FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:worker_2_port;
nodename | nodeport
-----------+----------
---------------------------------------------------------------------
localhost | 57638
(1 row)
\c - - - :worker_1_port
SELECT nodename, nodeport FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:worker_2_port;
nodename | nodeport
-----------+----------
---------------------------------------------------------------------
localhost | 57638
(1 row)
\c - - - :master_port
SELECT master_remove_node(nodename, nodeport) FROM pg_dist_node;
master_remove_node
--------------------
---------------------------------------------------------------------
(2 rows)
SELECT 1 FROM master_add_node('localhost', :worker_1_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
@ -480,21 +480,21 @@ SELECT 1 FROM master_add_node('localhost', :worker_2_port);
SET citus.shard_count TO 4;
SELECT master_remove_node('localhost', :worker_2_port);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
BEGIN;
SELECT 1 FROM master_add_node('localhost', :worker_2_port);
?column?
----------
---------------------------------------------------------------------
1
(1 row)
CREATE TABLE temp(col1 text, col2 int);
SELECT create_distributed_table('temp', 'col1');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -503,7 +503,7 @@ INSERT INTO temp VALUES ('row2', 2);
COMMIT;
SELECT col1, col2 FROM temp ORDER BY col1;
col1 | col2
------+------
---------------------------------------------------------------------
row1 | 1
row2 | 2
(2 rows)
@ -517,7 +517,7 @@ WHERE
AND pg_dist_shard.logicalrelid = 'temp'::regclass
AND pg_dist_shard_placement.nodeport = :worker_2_port;
count
-------
---------------------------------------------------------------------
4
(1 row)
@ -530,13 +530,13 @@ DELETE FROM pg_dist_node;
\c - - - :master_port
SELECT stop_metadata_sync_to_node('localhost', :worker_1_port);
stop_metadata_sync_to_node
----------------------------
---------------------------------------------------------------------
(1 row)
SELECT stop_metadata_sync_to_node('localhost', :worker_2_port);
stop_metadata_sync_to_node
----------------------------
---------------------------------------------------------------------
(1 row)
@ -551,45 +551,45 @@ ERROR: group 14 already has a primary node
SELECT groupid AS worker_2_group FROM pg_dist_node WHERE nodeport = :worker_2_port \gset
SELECT 1 FROM master_add_node('localhost', 9998, groupid => :worker_1_group, noderole => 'secondary');
?column?
----------
---------------------------------------------------------------------
1
(1 row)
SELECT 1 FROM master_add_node('localhost', 9997, groupid => :worker_1_group, noderole => 'unavailable');
?column?
----------
---------------------------------------------------------------------
1
(1 row)
-- add_inactive_node also works with secondaries
SELECT 1 FROM master_add_inactive_node('localhost', 9996, groupid => :worker_2_group, noderole => 'secondary');
?column?
----------
---------------------------------------------------------------------
1
(1 row)
-- check that you can add a secondary to a non-default cluster, activate it, and remove it
SELECT master_add_inactive_node('localhost', 9999, groupid => :worker_2_group, nodecluster => 'olap', noderole => 'secondary');
master_add_inactive_node
--------------------------
---------------------------------------------------------------------
22
(1 row)
SELECT master_activate_node('localhost', 9999);
master_activate_node
----------------------
---------------------------------------------------------------------
22
(1 row)
SELECT master_disable_node('localhost', 9999);
master_disable_node
---------------------
---------------------------------------------------------------------
(1 row)
SELECT master_remove_node('localhost', 9999);
master_remove_node
--------------------
---------------------------------------------------------------------
(1 row)
@ -615,7 +615,7 @@ DETAIL: Failing row contains (16, 14, localhost, 57637, default, f, t, primary,
SELECT groupid AS worker_2_group FROM pg_dist_node WHERE nodeport = :worker_2_port \gset
SELECT master_add_node('localhost', 8888, groupid => :worker_1_group, noderole => 'secondary', nodecluster=> 'olap');
master_add_node
-----------------
---------------------------------------------------------------------
25
(1 row)
@ -628,13 +628,13 @@ SELECT master_add_node('localhost', 8887, groupid => :worker_1_group, noderole =
'overflow'
);
master_add_node
-----------------
---------------------------------------------------------------------
26
(1 row)
SELECT * FROM pg_dist_node WHERE nodeport=8887;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+-----------+----------+----------+-------------+----------+-----------+-----------------------------------------------------------------+----------------+------------------
---------------------------------------------------------------------
26 | 14 | localhost | 8887 | default | f | t | secondary | thisisasixtyfourcharacterstringrepeatedfourtimestomake256chars. | f | t
(1 row)
@ -643,21 +643,21 @@ SELECT * FROM pg_dist_node WHERE nodeport=8887;
-- master_add_secondary_node lets you skip looking up the groupid
SELECT master_add_secondary_node('localhost', 9995, 'localhost', :worker_1_port);
master_add_secondary_node
---------------------------
---------------------------------------------------------------------
27
(1 row)
SELECT master_add_secondary_node('localhost', 9994, primaryname => 'localhost', primaryport => :worker_2_port);
master_add_secondary_node
---------------------------
---------------------------------------------------------------------
28
(1 row)
SELECT master_add_secondary_node('localhost', 9993, 'localhost', 2000);
ERROR: node at "localhost:2000" does not exist
ERROR: node at "localhost:xxxxx" does not exist
SELECT master_add_secondary_node('localhost', 9992, 'localhost', :worker_1_port, nodecluster => 'second-cluster');
master_add_secondary_node
---------------------------
---------------------------------------------------------------------
29
(1 row)
@ -671,26 +671,26 @@ ERROR: there is already another node with the specified hostname and port
-- master_update_node moves a node
SELECT master_update_node(:worker_1_node, 'somehost', 9000);
master_update_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM pg_dist_node WHERE nodeid = :worker_1_node;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+----------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
---------------------------------------------------------------------
16 | 14 | somehost | 9000 | default | f | t | primary | default | f | t
(1 row)
-- cleanup
SELECT master_update_node(:worker_1_node, 'localhost', :worker_1_port);
master_update_node
--------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM pg_dist_node WHERE nodeid = :worker_1_node;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+-----------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
---------------------------------------------------------------------
16 | 14 | localhost | 57637 | default | f | t | primary | default | f | t
(1 row)
@ -698,14 +698,14 @@ SET citus.shard_replication_factor TO 1;
CREATE TABLE test_dist (x int, y int);
SELECT create_distributed_table('test_dist', 'x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- testing behaviour when setting shouldhaveshards to false on partially empty node
SELECT * from master_set_node_property('localhost', :worker_2_port, 'shouldhaveshards', false);
master_set_node_property
--------------------------
---------------------------------------------------------------------
(1 row)
@ -715,25 +715,25 @@ CREATE TABLE test_dist_colocated_with_non_colocated (x int, y int);
CREATE TABLE test_ref (a int, b int);
SELECT create_distributed_table('test_dist_colocated', 'x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_dist_non_colocated', 'x', colocate_with => 'none');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_dist_colocated_with_non_colocated', 'x', colocate_with => 'test_dist_non_colocated');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('test_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -742,7 +742,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist_colocated'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 2
57638 | 2
(2 rows)
@ -752,7 +752,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist_non_colocated'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 4
(1 row)
@ -762,7 +762,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist_colocated_with_non_colocated'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 4
(1 row)
@ -771,7 +771,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_ref'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 1
57638 | 1
(2 rows)
@ -781,7 +781,7 @@ DROP TABLE test_dist, test_ref, test_dist_colocated, test_dist_non_colocated, te
-- testing behaviour when setting shouldhaveshards to false on fully empty node
SELECT * from master_set_node_property('localhost', :worker_2_port, 'shouldhaveshards', false);
master_set_node_property
--------------------------
---------------------------------------------------------------------
(1 row)
@ -791,13 +791,13 @@ CREATE TABLE test_dist_non_colocated (x int, y int);
CREATE TABLE test_ref (a int, b int);
SELECT create_distributed_table('test_dist', 'x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_reference_table('test_ref');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -806,7 +806,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 4
(1 row)
@ -815,14 +815,14 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_ref'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 1
57638 | 1
(2 rows)
SELECT * from master_set_node_property('localhost', :worker_2_port, 'shouldhaveshards', true);
master_set_node_property
--------------------------
---------------------------------------------------------------------
(1 row)
@ -832,7 +832,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 4
(1 row)
@ -841,20 +841,20 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_ref'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 1
57638 | 1
(2 rows)
SELECT create_distributed_table('test_dist_colocated', 'x');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('test_dist_non_colocated', 'x', colocate_with => 'none');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -864,7 +864,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist_colocated'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 4
(1 row)
@ -874,7 +874,7 @@ SELECT nodeport, count(*)
FROM pg_dist_shard JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'test_dist_non_colocated'::regclass GROUP BY nodeport ORDER BY nodeport;
nodeport | count
----------+-------
---------------------------------------------------------------------
57637 | 2
57638 | 2
(2 rows)


@ -18,7 +18,7 @@ WHERE
colocationid = (SELECT colocationid FROM pg_dist_partition WHERE logicalrelid = 'table1_group1'::regclass)
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300000 | table1_group1 | 57637 | 1000 | 1
1300000 | table1_group1 | 57638 | 1000 | 3
1300001 | table1_group1 | 57637 | 1000 | 1
@ -40,7 +40,7 @@ ORDER BY s.shardid, sp.nodeport;
-- repair colocated shards
SELECT master_copy_shard_placement(1300000, 'localhost', :worker_1_port, 'localhost', :worker_2_port);
master_copy_shard_placement
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -54,7 +54,7 @@ WHERE
colocationid = (SELECT colocationid FROM pg_dist_partition WHERE logicalrelid = 'table1_group1'::regclass)
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300000 | table1_group1 | 57637 | 1000 | 1
1300000 | table1_group1 | 57638 | 1000 | 1
1300001 | table1_group1 | 57637 | 1000 | 1
@ -84,7 +84,7 @@ WHERE
p.logicalrelid = 'table5_groupX'::regclass
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300016 | table5_groupx | 57637 | 0 | 1
1300016 | table5_groupx | 57638 | 0 | 3
1300017 | table5_groupx | 57637 | 0 | 1
@ -98,7 +98,7 @@ ORDER BY s.shardid, sp.nodeport;
-- repair NOT colocated shard
SELECT master_copy_shard_placement(1300016, 'localhost', :worker_1_port, 'localhost', :worker_2_port);
master_copy_shard_placement
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -112,7 +112,7 @@ WHERE
p.logicalrelid = 'table5_groupX'::regclass
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300016 | table5_groupx | 57637 | 0 | 1
1300016 | table5_groupx | 57638 | 0 | 1
1300017 | table5_groupx | 57637 | 0 | 1
@ -134,7 +134,7 @@ WHERE
p.logicalrelid = 'table6_append'::regclass
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300020 | table6_append | 57637 | 0 | 1
1300020 | table6_append | 57638 | 0 | 3
1300021 | table6_append | 57637 | 0 | 1
@ -144,7 +144,7 @@ ORDER BY s.shardid, sp.nodeport;
-- repair shard in append distributed table
SELECT master_copy_shard_placement(1300020, 'localhost', :worker_1_port, 'localhost', :worker_2_port);
master_copy_shard_placement
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -158,7 +158,7 @@ WHERE
p.logicalrelid = 'table6_append'::regclass
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300020 | table6_append | 57637 | 0 | 1
1300020 | table6_append | 57638 | 0 | 1
1300021 | table6_append | 57637 | 0 | 1
@ -178,7 +178,7 @@ WHERE
colocationid = (SELECT colocationid FROM pg_dist_partition WHERE logicalrelid = 'table1_group1'::regclass)
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300000 | table1_group1 | 57637 | 1000 | 3
1300000 | table1_group1 | 57638 | 1000 | 3
1300001 | table1_group1 | 57637 | 1000 | 1
@ -210,7 +210,7 @@ WHERE
colocationid = (SELECT colocationid FROM pg_dist_partition WHERE logicalrelid = 'table1_group1'::regclass)
ORDER BY s.shardid, sp.nodeport;
shardid | logicalrelid | nodeport | colocationid | shardstate
---------+---------------+----------+--------------+------------
---------------------------------------------------------------------
1300000 | table1_group1 | 57637 | 1000 | 3
1300000 | table1_group1 | 57638 | 1000 | 3
1300001 | table1_group1 | 57637 | 1000 | 1


@ -59,280 +59,280 @@ CREATE FUNCTION find_shard_interval_index(bigint)
CREATE TABLE table1_group1 ( id int );
SELECT master_create_distributed_table('table1_group1', 'id', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('table1_group1', 4, 2);
master_create_worker_shards
-----------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_group1 ( id int );
SELECT master_create_distributed_table('table2_group1', 'id', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('table2_group1', 4, 2);
master_create_worker_shards
-----------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table3_group2 ( id int );
SELECT master_create_distributed_table('table3_group2', 'id', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('table3_group2', 4, 2);
master_create_worker_shards
-----------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table4_group2 ( id int );
SELECT master_create_distributed_table('table4_group2', 'id', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('table4_group2', 4, 2);
master_create_worker_shards
-----------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table5_groupX ( id int );
SELECT master_create_distributed_table('table5_groupX', 'id', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_worker_shards('table5_groupX', 4, 2);
master_create_worker_shards
-----------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table6_append ( id int );
SELECT master_create_distributed_table('table6_append', 'id', 'append');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_empty_shard('table6_append');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1300020
(1 row)
SELECT master_create_empty_shard('table6_append');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
1300021
(1 row)
-- make table1_group1 and table2_group1 co-located manually
SELECT colocation_test_colocate_tables('table1_group1', 'table2_group1');
colocation_test_colocate_tables
---------------------------------
---------------------------------------------------------------------
t
(1 row)
-- check co-location id
SELECT get_table_colocation_id('table1_group1');
get_table_colocation_id
-------------------------
---------------------------------------------------------------------
1000
(1 row)
SELECT get_table_colocation_id('table5_groupX');
get_table_colocation_id
-------------------------
---------------------------------------------------------------------
0
(1 row)
SELECT get_table_colocation_id('table6_append');
get_table_colocation_id
-------------------------
---------------------------------------------------------------------
0
(1 row)
-- check self table co-location
SELECT tables_colocated('table1_group1', 'table1_group1');
tables_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
SELECT tables_colocated('table5_groupX', 'table5_groupX');
tables_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
SELECT tables_colocated('table6_append', 'table6_append');
tables_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
-- check table co-location with same co-location group
SELECT tables_colocated('table1_group1', 'table2_group1');
tables_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
-- check table co-location with different co-location group
SELECT tables_colocated('table1_group1', 'table3_group2');
tables_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
-- check table co-location with invalid co-location group
SELECT tables_colocated('table1_group1', 'table5_groupX');
tables_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
SELECT tables_colocated('table1_group1', 'table6_append');
tables_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
-- check self shard co-location
SELECT shards_colocated(1300000, 1300000);
shards_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
SELECT shards_colocated(1300016, 1300016);
shards_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
SELECT shards_colocated(1300020, 1300020);
shards_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
-- check shard co-location with same co-location group
SELECT shards_colocated(1300000, 1300004);
shards_colocated
------------------
---------------------------------------------------------------------
t
(1 row)
-- check shard co-location with same table different co-location group
SELECT shards_colocated(1300000, 1300001);
shards_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
-- check shard co-location with different co-location group
SELECT shards_colocated(1300000, 1300005);
shards_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
-- check shard co-location with invalid co-location group
SELECT shards_colocated(1300000, 1300016);
shards_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
SELECT shards_colocated(1300000, 1300020);
shards_colocated
------------------
---------------------------------------------------------------------
f
(1 row)
-- check co-located table list
SELECT UNNEST(get_colocated_table_array('table1_group1'))::regclass ORDER BY 1;
unnest
---------------
---------------------------------------------------------------------
table1_group1
table2_group1
(2 rows)
SELECT UNNEST(get_colocated_table_array('table5_groupX'))::regclass ORDER BY 1;
unnest
---------------
---------------------------------------------------------------------
table5_groupx
(1 row)
SELECT UNNEST(get_colocated_table_array('table6_append'))::regclass ORDER BY 1;
unnest
---------------
---------------------------------------------------------------------
table6_append
(1 row)
-- check co-located shard list
SELECT UNNEST(get_colocated_shard_array(1300000))::regclass ORDER BY 1;
unnest
---------
---------------------------------------------------------------------
1300000
1300004
(2 rows)
SELECT UNNEST(get_colocated_shard_array(1300016))::regclass ORDER BY 1;
unnest
---------
---------------------------------------------------------------------
1300016
(1 row)
SELECT UNNEST(get_colocated_shard_array(1300020))::regclass ORDER BY 1;
unnest
---------
---------------------------------------------------------------------
1300020
(1 row)
-- check FindShardIntervalIndex function
SELECT find_shard_interval_index(1300000);
find_shard_interval_index
---------------------------
---------------------------------------------------------------------
0
(1 row)
SELECT find_shard_interval_index(1300001);
find_shard_interval_index
---------------------------
---------------------------------------------------------------------
1
(1 row)
SELECT find_shard_interval_index(1300002);
find_shard_interval_index
---------------------------
---------------------------------------------------------------------
2
(1 row)
SELECT find_shard_interval_index(1300003);
find_shard_interval_index
---------------------------
---------------------------------------------------------------------
3
(1 row)
SELECT find_shard_interval_index(1300016);
find_shard_interval_index
---------------------------
---------------------------------------------------------------------
0
(1 row)
@ -341,14 +341,14 @@ SET citus.shard_count = 2;
CREATE TABLE table1_groupA ( id int );
SELECT create_distributed_table('table1_groupA', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_groupA ( id int );
SELECT create_distributed_table('table2_groupA', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -357,14 +357,14 @@ SET citus.shard_replication_factor = 1;
CREATE TABLE table1_groupB ( id int );
SELECT create_distributed_table('table1_groupB', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_groupB ( id int );
SELECT create_distributed_table('table2_groupB', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -376,14 +376,14 @@ SET citus.shard_replication_factor to DEFAULT;
CREATE TABLE table1_groupC ( id text );
SELECT create_distributed_table('table1_groupC', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_groupC ( id text );
SELECT create_distributed_table('table2_groupC', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -392,14 +392,14 @@ SET citus.shard_count = 8;
CREATE TABLE table1_groupD ( id int );
SELECT create_distributed_table('table1_groupD', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_groupD ( id int );
SELECT create_distributed_table('table2_groupD', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -407,14 +407,14 @@ SELECT create_distributed_table('table2_groupD', 'id');
CREATE TABLE table_append ( id int );
SELECT create_distributed_table('table_append', 'id', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table_range ( id int );
SELECT create_distributed_table('table_range', 'id', 'range');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -423,7 +423,7 @@ CREATE FOREIGN TABLE table3_groupD ( id int ) SERVER fake_fdw_server;
SELECT create_distributed_table('table3_groupD', 'id');
NOTICE: foreign-data wrapper "fake_fdw" does not have an extension defined
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -432,7 +432,7 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
3 | 4 | 2 | 23 | 0
4 | 2 | 2 | 23 | 0
5 | 2 | 1 | 23 | 0
@ -444,7 +444,7 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY logicalrelid;
logicalrelid | colocationid
---------------+--------------
---------------------------------------------------------------------
table1_groupa | 4
table2_groupa | 4
table1_groupb | 5
@ -460,7 +460,7 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
DROP TABLE table1_groupA;
SELECT * FROM pg_dist_colocation WHERE colocationid = 4;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
4 | 2 | 2 | 23 | 0
(1 row)
@ -468,7 +468,7 @@ SELECT * FROM pg_dist_colocation WHERE colocationid = 4;
DROP TABLE table2_groupA;
SELECT * FROM pg_dist_colocation WHERE colocationid = 4;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
4 | 2 | 2 | 23 | 0
(1 row)
@ -477,14 +477,14 @@ SET citus.shard_count = 2;
CREATE TABLE table1_groupE ( id int );
SELECT create_distributed_table('table1_groupE', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_groupE ( id int );
SELECT create_distributed_table('table2_groupE', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -492,7 +492,7 @@ SELECT create_distributed_table('table2_groupE', 'id');
CREATE TABLE table3_groupE ( dummy_column text, id int );
SELECT create_distributed_table('table3_groupE', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -501,7 +501,7 @@ CREATE SCHEMA schema_colocation;
CREATE TABLE schema_colocation.table4_groupE ( id int );
SELECT create_distributed_table('schema_colocation.table4_groupE', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -509,28 +509,28 @@ SELECT create_distributed_table('schema_colocation.table4_groupE', 'id');
CREATE TABLE table1_group_none_1 ( id int );
SELECT create_distributed_table('table1_group_none_1', 'id', colocate_with => 'none');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_group_none_1 ( id int );
SELECT create_distributed_table('table2_group_none_1', 'id', colocate_with => 'table1_group_none_1');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table1_group_none_2 ( id int );
SELECT create_distributed_table('table1_group_none_2', 'id', colocate_with => 'none');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table4_groupE ( id int );
SELECT create_distributed_table('table4_groupE', 'id', colocate_with => 'default');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -539,7 +539,7 @@ SET citus.shard_count = 3;
CREATE TABLE table1_group_none_3 ( id int );
SELECT create_distributed_table('table1_group_none_3', 'id', colocate_with => 'NONE');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -547,7 +547,7 @@ SELECT create_distributed_table('table1_group_none_3', 'id', colocate_with => 'N
CREATE TABLE table1_group_default ( id int );
SELECT create_distributed_table('table1_group_default', 'id', colocate_with => 'DEFAULT');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -556,7 +556,7 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
3 | 4 | 2 | 23 | 0
4 | 2 | 2 | 23 | 0
5 | 2 | 1 | 23 | 0
@ -569,7 +569,7 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid, logicalrelid;
logicalrelid | colocationid
---------------------------------+--------------
---------------------------------------------------------------------
table1_groupe | 4
table2_groupe | 4
table3_groupe | 4
@ -606,7 +606,7 @@ SELECT create_distributed_table('table_failing', 'id', colocate_with => '');
ERROR: invalid name syntax
SELECT create_distributed_table('table_failing', 'id', colocate_with => NULL);
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -619,14 +619,14 @@ DETAIL: Distribution column types don't match for table1_groupe and table_bigin
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='public.table3_groupE_1300062'::regclass;
Column | Type | Modifiers
--------------+---------+-----------
---------------------------------------------------------------------
dummy_column | text |
id | integer |
(2 rows)
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='schema_colocation.table4_groupE_1300064'::regclass;
Column | Type | Modifiers
--------+---------+-----------
---------------------------------------------------------------------
id | integer |
(1 row)
@ -635,14 +635,14 @@ SET citus.next_shard_id TO 1300080;
CREATE TABLE table1_groupF ( id int );
SELECT create_reference_table('table1_groupF');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_groupF ( id int );
SELECT create_reference_table('table2_groupF');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -651,7 +651,7 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
3 | 4 | 2 | 23 | 0
4 | 2 | 2 | 23 | 0
5 | 2 | 1 | 23 | 0
@ -677,7 +677,7 @@ ORDER BY
table1,
table2;
table1 | table2 | colocated
---------------------------------+---------------------------------+-----------
---------------------------------------------------------------------
table1_group1 | table2_group1 | t
table1_groupb | table2_groupb | t
table1_groupc | table2_groupc | t
@ -718,7 +718,7 @@ ORDER BY
shardid,
nodeport;
logicalrelid | shardid | shardstorage | nodeport | shardminvalue | shardmaxvalue
---------------------------------+---------+--------------+----------+---------------+---------------
---------------------------------------------------------------------
table1_groupb | 1300026 | t | 57637 | -2147483648 | -1
table1_groupb | 1300027 | t | 57638 | 0 | 2147483647
table2_groupb | 1300028 | t | 57637 | -2147483648 | -1
@ -840,14 +840,14 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
(0 rows)
SELECT logicalrelid, colocationid FROM pg_dist_partition
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid, logicalrelid;
logicalrelid | colocationid
--------------+--------------
---------------------------------------------------------------------
(0 rows)
-- first check failing cases
@ -859,7 +859,7 @@ ERROR: cannot colocate tables table1_groupb and table1_groupd
DETAIL: Shard counts don't match for table1_groupb and table1_groupd.
SELECT mark_tables_colocated('table1_groupB', ARRAY['table1_groupE']);
ERROR: cannot colocate tables table1_groupb and table1_groupe
DETAIL: Shard 1300026 of table1_groupb and shard 1300058 of table1_groupe have different number of shard placements.
DETAIL: Shard 1300026 of table1_groupb and shard xxxxx of table1_groupe have different number of shard placements.
SELECT mark_tables_colocated('table1_groupB', ARRAY['table1_groupF']);
ERROR: cannot colocate tables table1_groupb and table1_groupf
DETAIL: Replication models don't match for table1_groupb and table1_groupf.
@ -871,51 +871,51 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
(0 rows)
SELECT logicalrelid, colocationid FROM pg_dist_partition
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid, logicalrelid;
logicalrelid | colocationid
--------------+--------------
---------------------------------------------------------------------
(0 rows)
-- check successfully colocated tables
SELECT mark_tables_colocated('table1_groupB', ARRAY['table2_groupB']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT mark_tables_colocated('table1_groupC', ARRAY['table2_groupC']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT mark_tables_colocated('table1_groupD', ARRAY['table2_groupD']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT mark_tables_colocated('table1_groupE', ARRAY['table2_groupE', 'table3_groupE']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
SELECT mark_tables_colocated('table1_groupF', ARRAY['table2_groupF']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
-- check to colocate with itself
SELECT mark_tables_colocated('table1_groupB', ARRAY['table1_groupB']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
@ -923,14 +923,14 @@ SET citus.shard_count = 2;
CREATE TABLE table1_group_none ( id int );
SELECT create_distributed_table('table1_group_none', 'id', colocate_with => 'NONE');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE table2_group_none ( id int );
SELECT create_distributed_table('table2_group_none', 'id', colocate_with => 'NONE');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -939,7 +939,7 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
2 | 2 | 1 | 23 | 0
3 | 2 | 2 | 25 | 100
4 | 8 | 2 | 23 | 0
@ -950,7 +950,7 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid, logicalrelid;
logicalrelid | colocationid
-------------------+--------------
---------------------------------------------------------------------
table1_groupb | 2
table2_groupb | 2
table1_groupc | 3
@ -967,14 +967,14 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
-- move the all tables in colocation group 5 to colocation group 7
SELECT mark_tables_colocated('table1_group_none', ARRAY['table1_groupE', 'table2_groupE', 'table3_groupE']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
-- move a table with a colocation id which is already not in pg_dist_colocation
SELECT mark_tables_colocated('table1_group_none', ARRAY['table2_group_none']);
mark_tables_colocated
-----------------------
---------------------------------------------------------------------
(1 row)
@ -983,7 +983,7 @@ SELECT * FROM pg_dist_colocation
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid;
colocationid | shardcount | replicationfactor | distributioncolumntype | distributioncolumncollation
--------------+------------+-------------------+------------------------+-----------------------------
---------------------------------------------------------------------
2 | 2 | 1 | 23 | 0
3 | 2 | 2 | 25 | 100
4 | 8 | 2 | 23 | 0
@ -993,7 +993,7 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
WHERE colocationid >= 1 AND colocationid < 1000
ORDER BY colocationid, logicalrelid;
logicalrelid | colocationid
-------------------+--------------
---------------------------------------------------------------------
table1_groupb | 2
table2_groupb | 2
table1_groupc | 3
@ -1011,7 +1011,7 @@ SELECT logicalrelid, colocationid FROM pg_dist_partition
CREATE TABLE table1_groupG ( id int );
SELECT create_distributed_table('table1_groupG', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -1025,7 +1025,7 @@ CREATE TABLE table2_groupG ( id int );
ERROR: relation "table2_groupg" already exists
SELECT create_distributed_table('table2_groupG', 'id', colocate_with => 'NONE');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
View File
@ -4,44 +4,44 @@
-- Check that we can correctly handle complex expressions and aggregates.
SELECT sum(l_quantity) / avg(l_quantity) FROM lineitem;
?column?
------------------------
---------------------------------------------------------------------
12000.0000000000000000
(1 row)
SELECT sum(l_quantity) / (10 * avg(l_quantity)) FROM lineitem;
?column?
-----------------------
---------------------------------------------------------------------
1200.0000000000000000
(1 row)
SELECT (sum(l_quantity) / (10 * avg(l_quantity))) + 11 FROM lineitem;
?column?
-----------------------
---------------------------------------------------------------------
1211.0000000000000000
(1 row)
SELECT avg(l_quantity) as average FROM lineitem;
average
---------------------
---------------------------------------------------------------------
25.4462500000000000
(1 row)
SELECT 100 * avg(l_quantity) as average_times_hundred FROM lineitem;
average_times_hundred
-----------------------
---------------------------------------------------------------------
2544.6250000000000000
(1 row)
SELECT 100 * avg(l_quantity) / 10 as average_times_ten FROM lineitem;
average_times_ten
----------------------
---------------------------------------------------------------------
254.4625000000000000
(1 row)
SELECT l_quantity, 10 * count(*) count_quantity FROM lineitem
GROUP BY l_quantity ORDER BY count_quantity, l_quantity;
l_quantity | count_quantity
------------+----------------
---------------------------------------------------------------------
44.00 | 2150
38.00 | 2160
45.00 | 2180
@ -98,42 +98,42 @@ SELECT l_quantity, 10 * count(*) count_quantity FROM lineitem
SELECT count(*) FROM lineitem
WHERE octet_length(l_comment || l_comment) > 40;
count
-------
---------------------------------------------------------------------
8148
(1 row)
SELECT count(*) FROM lineitem
WHERE octet_length(concat(l_comment, l_comment)) > 40;
count
-------
---------------------------------------------------------------------
8148
(1 row)
SELECT count(*) FROM lineitem
WHERE octet_length(l_comment) + octet_length('randomtext'::text) > 40;
count
-------
---------------------------------------------------------------------
4611
(1 row)
SELECT count(*) FROM lineitem
WHERE octet_length(l_comment) + 10 > 40;
count
-------
---------------------------------------------------------------------
4611
(1 row)
SELECT count(*) FROM lineitem
WHERE (l_receiptdate::timestamp - l_shipdate::timestamp) > interval '5 days';
count
-------
---------------------------------------------------------------------
10008
(1 row)
-- can push down queries where no columns present on the WHERE clause
SELECT count(*) FROM lineitem WHERE random() = -0.1;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -141,7 +141,7 @@ SELECT count(*) FROM lineitem WHERE random() = -0.1;
SELECT count(*) FROM lineitem
WHERE (l_partkey > 10000) is true;
count
-------
---------------------------------------------------------------------
11423
(1 row)
@ -149,7 +149,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE l_partkey = ANY(ARRAY[19353, 19354, 19355]);
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -157,7 +157,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE l_partkey = ALL(ARRAY[19353]);
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -165,7 +165,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE ARRAY[19353, 19354, 19355] @> ARRAY[l_partkey];
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -173,7 +173,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE (l_quantity/100)::int::bool::text::bool;
count
-------
---------------------------------------------------------------------
260
(1 row)
@ -181,7 +181,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE (CASE WHEN l_orderkey > 4000 THEN l_partkey / 100 > 1 ELSE false END);
count
-------
---------------------------------------------------------------------
7948
(1 row)
@ -189,7 +189,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE COALESCE((l_partkey/50000)::bool, false);
count
-------
---------------------------------------------------------------------
9122
(1 row)
@ -197,7 +197,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE NULLIF((l_partkey/50000)::bool, false);
count
-------
---------------------------------------------------------------------
9122
(1 row)
@ -205,7 +205,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM orders
WHERE o_comment IS NOT null;
count
-------
---------------------------------------------------------------------
2985
(1 row)
@ -213,7 +213,7 @@ SELECT count(*) FROM orders
SELECT count(*) FROM lineitem
WHERE isfinite(l_shipdate);
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -221,7 +221,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE 0 != 0;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -229,7 +229,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE l_partkey IS DISTINCT FROM 50040;
count
-------
---------------------------------------------------------------------
11999
(1 row)
@ -237,7 +237,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE row(l_partkey, 2, 3) > row(2000, 2, 3);
count
-------
---------------------------------------------------------------------
11882
(1 row)
@ -252,7 +252,7 @@ SELECT count(*) FROM lineitem
l_partkey IS DISTINCT FROM 50040 AND
row(l_partkey, 2, 3) > row(2000, 2, 3);
count
-------
---------------------------------------------------------------------
137
(1 row)
@ -264,7 +264,7 @@ SELECT l_linenumber FROM lineitem
l_linenumber
LIMIT 1;
l_linenumber
--------------
---------------------------------------------------------------------
1
(1 row)
@ -277,7 +277,7 @@ SELECT count(*) * l_discount as total_discount, count(*), sum(l_tax), l_discount
ORDER BY
total_discount DESC, sum(l_tax) DESC;
total_discount | count | sum | l_discount
----------------+-------+-------+------------
---------------------------------------------------------------------
104.80 | 1048 | 41.08 | 0.10
98.55 | 1095 | 44.15 | 0.09
90.64 | 1133 | 45.94 | 0.08
@ -300,7 +300,7 @@ SELECT l_linenumber FROM lineitem
l_linenumber
LIMIT 1;
l_linenumber
--------------
---------------------------------------------------------------------
2
(1 row)
@ -315,7 +315,7 @@ SELECT max(l_linenumber), min(l_discount), l_receiptdate FROM lineitem
l_receiptdate
LIMIT 1;
max | min | l_receiptdate
-----+------+---------------
---------------------------------------------------------------------
3 | 0.07 | 01-09-1992
(1 row)
@ -323,21 +323,21 @@ SELECT max(l_linenumber), min(l_discount), l_receiptdate FROM lineitem
SELECT count(*) FROM lineitem, orders
WHERE l_orderkey = o_orderkey AND l_quantity < 5;
count
-------
---------------------------------------------------------------------
951
(1 row)
SELECT count(*) FROM lineitem
JOIN orders ON l_orderkey = o_orderkey AND l_quantity < 5;
count
-------
---------------------------------------------------------------------
951
(1 row)
SELECT count(*) FROM lineitem JOIN orders ON l_orderkey = o_orderkey
WHERE l_quantity < 5;
count
-------
---------------------------------------------------------------------
951
(1 row)
@ -348,7 +348,7 @@ ERROR: complex joins are only supported when all distributed tables are joined
-- the subquery is recursively planned since it contains OFFSET, which is not pushdownable
SELECT * FROM (SELECT o_custkey FROM orders GROUP BY o_custkey ORDER BY o_custkey OFFSET 20) sq ORDER BY 1 LIMIT 5;
o_custkey
-----------
---------------------------------------------------------------------
35
37
38
@ -359,7 +359,7 @@ SELECT * FROM (SELECT o_custkey FROM orders GROUP BY o_custkey ORDER BY o_custke
-- the subquery is recursively planned since it contains OFFSET, which is not pushdownable
SELECT * FROM (SELECT o_orderkey FROM orders ORDER BY o_orderkey OFFSET 20) sq ORDER BY 1 LIMIT 5;
o_orderkey
------------
---------------------------------------------------------------------
69
70
71
@ -370,7 +370,7 @@ SELECT * FROM (SELECT o_orderkey FROM orders ORDER BY o_orderkey OFFSET 20) sq O
-- Simple LIMIT/OFFSET with ORDER BY
SELECT o_orderkey FROM orders ORDER BY o_orderkey LIMIT 10 OFFSET 20;
o_orderkey
------------
---------------------------------------------------------------------
69
70
71
@ -397,7 +397,7 @@ ORDER BY
customer_keys.o_custkey DESC
LIMIT 10 OFFSET 20;
o_custkey | total_order_count
-----------+-------------------
---------------------------------------------------------------------
1466 | 1
1465 | 2
1463 | 4
@ -430,7 +430,7 @@ SELECT o_custkey, COUNT(*) AS ccnt FROM orders GROUP BY o_custkey ORDER BY ccnt
-- OFFSET without LIMIT
SELECT o_custkey FROM orders ORDER BY o_custkey OFFSET 2980;
o_custkey
-----------
---------------------------------------------------------------------
1498
1498
1499
@ -451,7 +451,7 @@ ORDER BY 1, 2, 3
LIMIT 10 OFFSET 20;
DEBUG: push down of limit count: 30
l_partkey | o_custkey | l_quantity
-----------+-----------+------------
---------------------------------------------------------------------
655 | 58 | 50.00
669 | 319 | 34.00
699 | 1255 | 50.00
@ -479,7 +479,7 @@ SELECT
ORDER BY 2 DESC, 1 DESC
LIMIT 10;
l_orderkey | sum | sum | count | count | max | max
------------+-----------+-----------+-------+-------+-----------+----------
---------------------------------------------------------------------
12804 | 440012.71 | 45788.16 | 7 | 1 | 94398.00 | 45788.16
9863 | 412560.63 | 175647.63 | 7 | 3 | 85723.77 | 50769.14
2567 | 412076.77 | 59722.26 | 7 | 1 | 94894.00 | 9784.02
@ -506,7 +506,7 @@ SELECT
ORDER BY 2 DESC, 1 DESC
LIMIT 10;
l_orderkey | sum | sum | count | count | max | max
------------+-----------+-----------+-------+-------+----------+----------
---------------------------------------------------------------------
9863 | 412560.63 | 175647.63 | 7 | 3 | 85723.77 | 50769.14
12039 | 407048.94 | 76406.30 | 7 | 2 | 94471.02 | 19679.30
5606 | 403595.91 | 36531.51 | 7 | 2 | 94890.18 | 30582.75
View File
@ -4,44 +4,44 @@
-- Check that we can correctly handle complex expressions and aggregates.
SELECT sum(l_quantity) / avg(l_quantity) FROM lineitem;
?column?
------------------------
---------------------------------------------------------------------
12000.0000000000000000
(1 row)
SELECT sum(l_quantity) / (10 * avg(l_quantity)) FROM lineitem;
?column?
-----------------------
---------------------------------------------------------------------
1200.0000000000000000
(1 row)
SELECT (sum(l_quantity) / (10 * avg(l_quantity))) + 11 FROM lineitem;
?column?
-----------------------
---------------------------------------------------------------------
1211.0000000000000000
(1 row)
SELECT avg(l_quantity) as average FROM lineitem;
average
---------------------
---------------------------------------------------------------------
25.4462500000000000
(1 row)
SELECT 100 * avg(l_quantity) as average_times_hundred FROM lineitem;
average_times_hundred
-----------------------
---------------------------------------------------------------------
2544.6250000000000000
(1 row)
SELECT 100 * avg(l_quantity) / 10 as average_times_ten FROM lineitem;
average_times_ten
----------------------
---------------------------------------------------------------------
254.4625000000000000
(1 row)
SELECT l_quantity, 10 * count(*) count_quantity FROM lineitem
GROUP BY l_quantity ORDER BY count_quantity, l_quantity;
l_quantity | count_quantity
------------+----------------
---------------------------------------------------------------------
44.00 | 2150
38.00 | 2160
45.00 | 2180
@ -98,42 +98,42 @@ SELECT l_quantity, 10 * count(*) count_quantity FROM lineitem
SELECT count(*) FROM lineitem
WHERE octet_length(l_comment || l_comment) > 40;
count
-------
---------------------------------------------------------------------
8148
(1 row)
SELECT count(*) FROM lineitem
WHERE octet_length(concat(l_comment, l_comment)) > 40;
count
-------
---------------------------------------------------------------------
8148
(1 row)
SELECT count(*) FROM lineitem
WHERE octet_length(l_comment) + octet_length('randomtext'::text) > 40;
count
-------
---------------------------------------------------------------------
4611
(1 row)
SELECT count(*) FROM lineitem
WHERE octet_length(l_comment) + 10 > 40;
count
-------
---------------------------------------------------------------------
4611
(1 row)
SELECT count(*) FROM lineitem
WHERE (l_receiptdate::timestamp - l_shipdate::timestamp) > interval '5 days';
count
-------
---------------------------------------------------------------------
10008
(1 row)
-- can push down queries where no columns present on the WHERE clause
SELECT count(*) FROM lineitem WHERE random() = -0.1;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -141,7 +141,7 @@ SELECT count(*) FROM lineitem WHERE random() = -0.1;
SELECT count(*) FROM lineitem
WHERE (l_partkey > 10000) is true;
count
-------
---------------------------------------------------------------------
11423
(1 row)
@ -149,7 +149,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE l_partkey = ANY(ARRAY[19353, 19354, 19355]);
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -157,7 +157,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE l_partkey = ALL(ARRAY[19353]);
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -165,7 +165,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE ARRAY[19353, 19354, 19355] @> ARRAY[l_partkey];
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -173,7 +173,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE (l_quantity/100)::int::bool::text::bool;
count
-------
---------------------------------------------------------------------
260
(1 row)
@ -181,7 +181,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE (CASE WHEN l_orderkey > 4000 THEN l_partkey / 100 > 1 ELSE false END);
count
-------
---------------------------------------------------------------------
7948
(1 row)
@ -189,7 +189,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE COALESCE((l_partkey/50000)::bool, false);
count
-------
---------------------------------------------------------------------
9122
(1 row)
@ -197,7 +197,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE NULLIF((l_partkey/50000)::bool, false);
count
-------
---------------------------------------------------------------------
9122
(1 row)
@ -205,7 +205,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM orders
WHERE o_comment IS NOT null;
count
-------
---------------------------------------------------------------------
2985
(1 row)
@ -213,7 +213,7 @@ SELECT count(*) FROM orders
SELECT count(*) FROM lineitem
WHERE isfinite(l_shipdate);
count
-------
---------------------------------------------------------------------
12000
(1 row)
@ -221,7 +221,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE 0 != 0;
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -229,7 +229,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE l_partkey IS DISTINCT FROM 50040;
count
-------
---------------------------------------------------------------------
11999
(1 row)
@ -237,7 +237,7 @@ SELECT count(*) FROM lineitem
SELECT count(*) FROM lineitem
WHERE row(l_partkey, 2, 3) > row(2000, 2, 3);
count
-------
---------------------------------------------------------------------
11882
(1 row)
@ -252,7 +252,7 @@ SELECT count(*) FROM lineitem
l_partkey IS DISTINCT FROM 50040 AND
row(l_partkey, 2, 3) > row(2000, 2, 3);
count
-------
---------------------------------------------------------------------
137
(1 row)
@ -264,7 +264,7 @@ SELECT l_linenumber FROM lineitem
l_linenumber
LIMIT 1;
l_linenumber
--------------
---------------------------------------------------------------------
1
(1 row)
@ -277,7 +277,7 @@ SELECT count(*) * l_discount as total_discount, count(*), sum(l_tax), l_discount
ORDER BY
total_discount DESC, sum(l_tax) DESC;
total_discount | count | sum | l_discount
----------------+-------+-------+------------
---------------------------------------------------------------------
104.80 | 1048 | 41.08 | 0.10
98.55 | 1095 | 44.15 | 0.09
90.64 | 1133 | 45.94 | 0.08
@ -300,7 +300,7 @@ SELECT l_linenumber FROM lineitem
l_linenumber
LIMIT 1;
l_linenumber
--------------
---------------------------------------------------------------------
2
(1 row)
@ -315,7 +315,7 @@ SELECT max(l_linenumber), min(l_discount), l_receiptdate FROM lineitem
l_receiptdate
LIMIT 1;
max | min | l_receiptdate
-----+------+---------------
---------------------------------------------------------------------
3 | 0.07 | 01-09-1992
(1 row)
@ -323,21 +323,21 @@ SELECT max(l_linenumber), min(l_discount), l_receiptdate FROM lineitem
SELECT count(*) FROM lineitem, orders
WHERE l_orderkey = o_orderkey AND l_quantity < 5;
count
-------
---------------------------------------------------------------------
951
(1 row)
SELECT count(*) FROM lineitem
JOIN orders ON l_orderkey = o_orderkey AND l_quantity < 5;
count
-------
---------------------------------------------------------------------
951
(1 row)
SELECT count(*) FROM lineitem JOIN orders ON l_orderkey = o_orderkey
WHERE l_quantity < 5;
count
-------
---------------------------------------------------------------------
951
(1 row)
@ -356,7 +356,7 @@ DETAIL: Subqueries with offset are not supported yet
-- Simple LIMIT/OFFSET with ORDER BY
SELECT o_orderkey FROM orders ORDER BY o_orderkey LIMIT 10 OFFSET 20;
o_orderkey
------------
---------------------------------------------------------------------
69
70
71
@ -383,7 +383,7 @@ ORDER BY
customer_keys.o_custkey DESC
LIMIT 10 OFFSET 20;
o_custkey | total_order_count
-----------+-------------------
---------------------------------------------------------------------
1466 | 1
1465 | 2
1463 | 4
@ -416,7 +416,7 @@ SELECT o_custkey, COUNT(*) AS ccnt FROM orders GROUP BY o_custkey ORDER BY ccnt
-- OFFSET without LIMIT
SELECT o_custkey FROM orders ORDER BY o_custkey OFFSET 2980;
o_custkey
-----------
---------------------------------------------------------------------
1498
1498
1499
@ -437,7 +437,7 @@ ORDER BY 1, 2, 3
LIMIT 10 OFFSET 20;
DEBUG: push down of limit count: 30
l_partkey | o_custkey | l_quantity
-----------+-----------+------------
---------------------------------------------------------------------
655 | 58 | 50.00
669 | 319 | 34.00
699 | 1255 | 50.00
@ -465,7 +465,7 @@ SELECT
ORDER BY 2 DESC, 1 DESC
LIMIT 10;
l_orderkey | sum | sum | count | count | max | max
------------+-----------+-----------+-------+-------+-----------+----------
---------------------------------------------------------------------
12804 | 440012.71 | 45788.16 | 7 | 1 | 94398.00 | 45788.16
9863 | 412560.63 | 175647.63 | 7 | 3 | 85723.77 | 50769.14
2567 | 412076.77 | 59722.26 | 7 | 1 | 94894.00 | 9784.02
@ -492,7 +492,7 @@ SELECT
ORDER BY 2 DESC, 1 DESC
LIMIT 10;
l_orderkey | sum | sum | count | count | max | max
------------+-----------+-----------+-------+-------+----------+----------
---------------------------------------------------------------------
9863 | 412560.63 | 175647.63 | 7 | 3 | 85723.77 | 50769.14
12039 | 407048.94 | 76406.30 | 7 | 2 | 94471.02 | 19679.30
5606 | 403595.91 | 36531.51 | 7 | 2 | 94890.18 | 30582.75
View File
@ -11,7 +11,7 @@ SELECT count(*) count_quantity, l_quantity FROM lineitem WHERE l_quantity < 32.0
GROUP BY l_quantity
ORDER BY count_quantity ASC, l_quantity DESC;
count_quantity | l_quantity
----------------+------------
---------------------------------------------------------------------
219 | 13.00
222 | 29.00
227 | 3.00
@ -49,7 +49,7 @@ SELECT count(*) count_quantity, l_quantity FROM lineitem WHERE l_quantity < 32.0
GROUP BY l_quantity
ORDER BY count_quantity DESC, l_quantity ASC;
count_quantity | l_quantity
----------------+------------
---------------------------------------------------------------------
273 | 28.00
264 | 30.00
261 | 23.00
View File
@ -51,8 +51,6 @@ ERROR: column "bad_column" of relation "table_to_distribute" does not exist
-- use unrecognized partition type
SELECT create_distributed_table('table_to_distribute', 'name', 'unrecognized');
ERROR: invalid input value for enum citus.distribution_type: "unrecognized"
LINE 1: ..._distributed_table('table_to_distribute', 'name', 'unrecogni...
^
-- use a partition column of a type lacking any default operator class
SELECT create_distributed_table('table_to_distribute', 'json_data', 'hash');
ERROR: data type json has no default operator class for specified partition method
@ -64,14 +62,14 @@ DETAIL: Partition column types must have a hash function defined to use hash pa
-- distribute table and inspect side effects
SELECT master_create_distributed_table('table_to_distribute', 'name', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT partmethod, partkey FROM pg_dist_partition
WHERE logicalrelid = 'table_to_distribute'::regclass;
partmethod | partkey
------------+--------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------
h | {VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1 :varcollid 100 :varlevelsup 0 :varnoold 1 :varoattno 1 :location -1}
(1 row)
@ -88,7 +86,7 @@ HINT: Add more worker nodes or try again with a lower replication factor.
-- finally, create shards and inspect metadata
SELECT master_create_worker_shards('table_to_distribute', 16, 1);
master_create_worker_shards
-----------------------------
---------------------------------------------------------------------
(1 row)
@ -96,7 +94,7 @@ SELECT shardstorage, shardminvalue, shardmaxvalue FROM pg_dist_shard
WHERE logicalrelid = 'table_to_distribute'::regclass
ORDER BY (shardminvalue::integer) ASC;
shardstorage | shardminvalue | shardmaxvalue
--------------+---------------+---------------
---------------------------------------------------------------------
t | -2147483648 | -1879048193
t | -1879048192 | -1610612737
t | -1610612736 | -1342177281
@ -122,13 +120,13 @@ SELECT count(*) AS shard_count,
WHERE logicalrelid='table_to_distribute'::regclass
GROUP BY shard_size;
shard_count | shard_size
-------------+------------
---------------------------------------------------------------------
16 | 268435455
(1 row)
SELECT COUNT(*) FROM pg_class WHERE relname LIKE 'table_to_distribute%' AND relkind = 'r';
count
-------
---------------------------------------------------------------------
1
(1 row)
@ -138,7 +136,7 @@ ERROR: table "table_to_distribute" has already had shards created for it
-- test list sorting
SELECT sort_names('sumedh', 'jason', 'ozgun');
sort_names
------------
---------------------------------------------------------------------
jason +
ozgun +
sumedh +
@ -147,7 +145,7 @@ SELECT sort_names('sumedh', 'jason', 'ozgun');
SELECT COUNT(*) FROM pg_class WHERE relname LIKE 'throwaway%' AND relkind = 'r';
count
-------
---------------------------------------------------------------------
0
(1 row)
@ -163,7 +161,7 @@ SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('foreign_table_to_distribute', 'id', 'hash');
NOTICE: foreign-data wrapper "fake_fdw" does not have an extension defined
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -171,7 +169,7 @@ SELECT shardstorage, shardminvalue, shardmaxvalue FROM pg_dist_shard
WHERE logicalrelid = 'foreign_table_to_distribute'::regclass
ORDER BY (shardminvalue::integer) ASC;
shardstorage | shardminvalue | shardmaxvalue
--------------+---------------+---------------
---------------------------------------------------------------------
f | -2147483648 | -1879048193
f | -1879048192 | -1610612737
f | -1610612736 | -1342177281
@ -199,7 +197,7 @@ CREATE TABLE weird_shard_count
SET citus.shard_count TO 7;
SELECT create_distributed_table('weird_shard_count', 'id', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -209,7 +207,7 @@ SELECT shardmaxvalue::integer - shardminvalue::integer AS shard_size
WHERE logicalrelid = 'weird_shard_count'::regclass
ORDER BY shardminvalue::integer ASC;
shard_size
------------
---------------------------------------------------------------------
613566755
613566755
613566755
View File
@ -30,7 +30,7 @@ WARNING: table "lineitem" has a UNIQUE or EXCLUDE constraint
DETAIL: UNIQUE constraints, EXCLUDE constraints, and PRIMARY KEYs on append-partitioned tables cannot be enforced.
HINT: Consider using hash partitioning.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -51,7 +51,7 @@ WARNING: table "orders" has a UNIQUE or EXCLUDE constraint
DETAIL: UNIQUE constraints, EXCLUDE constraints, and PRIMARY KEYs on append-partitioned tables cannot be enforced.
HINT: Consider using hash partitioning.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -68,7 +68,7 @@ CREATE TABLE orders_reference (
PRIMARY KEY(o_orderkey) );
SELECT create_reference_table('orders_reference');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -83,7 +83,7 @@ CREATE TABLE customer (
c_comment varchar(117) not null);
SELECT create_reference_table('customer');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -98,7 +98,7 @@ CREATE TABLE customer_append (
c_comment varchar(117) not null);
SELECT create_distributed_table('customer_append', 'c_custkey', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -109,7 +109,7 @@ CREATE TABLE nation (
n_comment varchar(152));
SELECT create_reference_table('nation');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -125,7 +125,7 @@ CREATE TABLE part (
p_comment varchar(23) not null);
SELECT create_reference_table('part');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -141,7 +141,7 @@ CREATE TABLE part_append (
p_comment varchar(23) not null);
SELECT create_distributed_table('part_append', 'p_partkey', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -157,7 +157,7 @@ CREATE TABLE supplier
);
SELECT create_reference_table('supplier');
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -175,7 +175,7 @@ CREATE TABLE supplier_single_shard
);
SELECT create_distributed_table('supplier_single_shard', 's_suppkey', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -191,13 +191,13 @@ HINT: Try again after reducing "citus.shard_replication_factor" to one or setti
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('mx_table_test', 'col1');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='mx_table_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
s
(1 row)
@ -209,13 +209,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='s_table'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -236,13 +236,13 @@ SELECT create_distributed_table('repmodel_test', 'a', 'append');
NOTICE: using statement-based replication
DETAIL: Streaming replication is supported only for hash-distributed tables.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -252,13 +252,13 @@ SELECT create_distributed_table('repmodel_test', 'a', 'range');
NOTICE: using statement-based replication
DETAIL: Streaming replication is supported only for hash-distributed tables.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -271,13 +271,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -288,13 +288,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -305,13 +305,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -323,13 +323,13 @@ SELECT create_distributed_table('repmodel_test', 'a', 'append');
NOTICE: using statement-based replication
DETAIL: Streaming replication is supported only for hash-distributed tables.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -339,13 +339,13 @@ SELECT create_distributed_table('repmodel_test', 'a', 'range');
NOTICE: using statement-based replication
DETAIL: Streaming replication is supported only for hash-distributed tables.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -356,13 +356,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -373,13 +373,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -390,13 +390,13 @@ NOTICE: using statement-based replication
DETAIL: The current replication_model setting is 'streaming', which is not supported by master_create_distributed_table.
HINT: Use create_distributed_table to use the streaming replication model.
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT repmodel FROM pg_dist_partition WHERE logicalrelid='repmodel_test'::regclass;
repmodel
----------
---------------------------------------------------------------------
c
(1 row)
@ -424,13 +424,13 @@ HINT: Empty your table before distributing it.
SELECT create_distributed_table('data_load_test', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM data_load_test ORDER BY col1;
col1 | col2 | col3
------+-------+------
---------------------------------------------------------------------
132 | hello | 1
243 | world | 2
(2 rows)
@ -440,39 +440,39 @@ DROP TABLE data_load_test;
CREATE TABLE no_shard_test (col1 int, col2 text);
SELECT create_distributed_table('no_shard_test', 'col1', 'append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM no_shard_test WHERE col1 > 1;
col1 | col2
------+------
---------------------------------------------------------------------
(0 rows)
DROP TABLE no_shard_test;
CREATE TABLE no_shard_test (col1 int, col2 text);
SELECT create_distributed_table('no_shard_test', 'col1', 'range');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM no_shard_test WHERE col1 > 1;
col1 | col2
------+------
---------------------------------------------------------------------
(0 rows)
DROP TABLE no_shard_test;
CREATE TABLE no_shard_test (col1 int, col2 text);
SELECT master_create_distributed_table('no_shard_test', 'col1', 'hash');
master_create_distributed_table
---------------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM no_shard_test WHERE col1 > 1;
col1 | col2
------+------
---------------------------------------------------------------------
(0 rows)
DROP TABLE no_shard_test;
@ -483,7 +483,7 @@ INSERT INTO data_load_test VALUES (132, 'hello');
SELECT create_distributed_table('data_load_test', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -491,7 +491,7 @@ INSERT INTO data_load_test VALUES (243, 'world');
END;
SELECT * FROM data_load_test ORDER BY col1;
col1 | col2 | col3
------+-------+------
---------------------------------------------------------------------
132 | hello | 1
243 | world | 2
(2 rows)
@ -504,7 +504,7 @@ INSERT INTO data_load_test1 VALUES (132, 'hello');
SELECT create_distributed_table('data_load_test1', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -513,7 +513,7 @@ INSERT INTO data_load_test2 VALUES (132, 'world');
SELECT create_distributed_table('data_load_test2', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -521,7 +521,7 @@ SELECT a.col2 ||' '|| b.col2
FROM data_load_test1 a JOIN data_load_test2 b USING (col1)
WHERE col1 = 132;
?column?
-------------
---------------------------------------------------------------------
hello world
(1 row)
@ -531,7 +531,7 @@ END;
\c - - - :worker_1_port
SELECT relname FROM pg_class WHERE relname LIKE 'data_load_test%';
relname
---------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -542,7 +542,7 @@ INSERT INTO data_load_test VALUES (132, 'hello');
SELECT create_distributed_table('data_load_test', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -556,7 +556,7 @@ INSERT INTO data_load_test VALUES (132, 'hello');
SELECT create_distributed_table('data_load_test', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -569,7 +569,7 @@ INSERT INTO data_load_test VALUES (132, 'hello');
SELECT create_distributed_table('data_load_test', 'col1');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -584,13 +584,13 @@ ALTER TABLE data_load_test DROP COLUMN col1;
SELECT create_distributed_table('data_load_test', 'col3');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM data_load_test ORDER BY col2;
col2 | col3 | CoL4")
-------+-------+--------
---------------------------------------------------------------------
hello | world |
world | hello |
(2 rows)
@ -598,7 +598,7 @@ SELECT * FROM data_load_test ORDER BY col2;
-- make sure the tuple went to the right shard
SELECT * FROM data_load_test WHERE col3 = 'world';
col2 | col3 | CoL4")
-------+-------+--------
---------------------------------------------------------------------
hello | world |
(1 row)
@ -608,14 +608,14 @@ SET citus.shard_count to 4;
CREATE TABLE lineitem_hash_part (like lineitem);
SELECT create_distributed_table('lineitem_hash_part', 'l_orderkey');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE orders_hash_part (like orders);
SELECT create_distributed_table('orders_hash_part', 'o_orderkey');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -626,13 +626,13 @@ CREATE UNLOGGED TABLE unlogged_table
);
SELECT create_distributed_table('unlogged_table', 'key');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT * FROM master_get_table_ddl_events('unlogged_table');
master_get_table_ddl_events
--------------------------------------------------------------------
---------------------------------------------------------------------
CREATE UNLOGGED TABLE public.unlogged_table (key text, value text)
ALTER TABLE public.unlogged_table OWNER TO postgres
(2 rows)
@ -640,7 +640,7 @@ SELECT * FROM master_get_table_ddl_events('unlogged_table');
\c - - - :worker_1_port
SELECT relpersistence FROM pg_class WHERE relname LIKE 'unlogged_table_%';
relpersistence
----------------
---------------------------------------------------------------------
u
u
u
@ -653,7 +653,7 @@ BEGIN;
CREATE TABLE rollback_table(id int, name varchar(20));
SELECT create_distributed_table('rollback_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -662,7 +662,7 @@ ROLLBACK;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = (SELECT oid FROM pg_class WHERE relname LIKE 'rollback_table%');
Column | Type | Modifiers
--------+------+-----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -676,7 +676,7 @@ INSERT INTO rollback_table VALUES(3, 'Name_3');
SELECT create_distributed_table('rollback_table','id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -685,7 +685,7 @@ ROLLBACK;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = (SELECT oid FROM pg_class WHERE relname LIKE 'rollback_table%');
Column | Type | Modifiers
--------+------+-----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -693,7 +693,7 @@ BEGIN;
CREATE TABLE rollback_table(id int, name varchar(20));
SELECT create_distributed_table('rollback_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -703,7 +703,7 @@ COMMIT;
-- Check the table is created
SELECT count(*) FROM rollback_table;
count
-------
---------------------------------------------------------------------
3
(1 row)
@ -712,7 +712,7 @@ BEGIN;
CREATE TABLE rollback_table(id int, name varchar(20));
SELECT create_distributed_table('rollback_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -722,7 +722,7 @@ ROLLBACK;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = (SELECT oid FROM pg_class WHERE relname LIKE 'rollback_table%');
Column | Type | Modifiers
--------+------+-----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -730,14 +730,14 @@ BEGIN;
CREATE TABLE tt1(id int);
SELECT create_distributed_table('tt1','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE tt2(id int);
SELECT create_distributed_table('tt2','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -748,13 +748,13 @@ COMMIT;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = 'public.tt1_360069'::regclass;
Column | Type | Modifiers
--------+---------+-----------
---------------------------------------------------------------------
id | integer |
(1 row)
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = 'public.tt2_360073'::regclass;
Column | Type | Modifiers
--------+---------+-----------
---------------------------------------------------------------------
id | integer |
(1 row)
@ -767,13 +767,13 @@ BEGIN;
CREATE TABLE append_tt1(id int);
SELECT create_distributed_table('append_tt1','id','append');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT master_create_empty_shard('append_tt1');
master_create_empty_shard
---------------------------
---------------------------------------------------------------------
360077
(1 row)
@ -782,7 +782,7 @@ ROLLBACK;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = 'public.append_tt1_360077'::regclass;
Column | Type | Modifiers
--------+---------+-----------
---------------------------------------------------------------------
id | integer |
(1 row)
@ -791,7 +791,7 @@ SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = 'public.appen
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = (SELECT oid from pg_class WHERE relname LIKE 'public.tt1%');
Column | Type | Modifiers
--------+------+-----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -803,14 +803,14 @@ INSERT INTO tt1 VALUES(1);
SELECT create_distributed_table('tt1','id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
INSERT INTO tt1 VALUES(2);
SELECT * FROM tt1 WHERE id = 1;
id
----
---------------------------------------------------------------------
1
(1 row)
@ -819,7 +819,7 @@ COMMIT;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = 'public.tt1_360078'::regclass;
Column | Type | Modifiers
--------+---------+-----------
---------------------------------------------------------------------
id | integer |
(1 row)
@ -829,7 +829,7 @@ BEGIN;
CREATE TABLE tt1(id int);
SELECT create_distributed_table('tt1','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -839,7 +839,7 @@ COMMIT;
\c - - - :worker_1_port
SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = (SELECT oid from pg_class WHERE relname LIKE 'tt1%');
Column | Type | Modifiers
--------+------+-----------
---------------------------------------------------------------------
(0 rows)
\c - - - :master_port
@ -849,7 +849,7 @@ SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid = (SELECT oid f
CREATE TABLE sample_table(id int);
SELECT create_distributed_table('sample_table','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -859,7 +859,7 @@ CREATE TABLE stage_table (LIKE sample_table);
SELECT create_distributed_table('stage_table', 'id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -867,7 +867,7 @@ INSERT INTO sample_table SELECT * FROM stage_table;
DROP TABLE stage_table;
SELECT * FROM sample_table WHERE id = 3;
id
----
---------------------------------------------------------------------
3
(1 row)
@ -875,7 +875,7 @@ COMMIT;
-- Show that rows of sample_table are updated
SELECT count(*) FROM sample_table;
count
-------
---------------------------------------------------------------------
4
(1 row)
@ -886,7 +886,7 @@ BEGIN;
CREATE TABLE tt1(id int);
SELECT create_distributed_table('tt1','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -894,7 +894,7 @@ SELECT create_distributed_table('tt1','id');
CREATE TABLE tt2(like tt1);
SELECT create_distributed_table('tt2','id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -902,20 +902,20 @@ SELECT create_distributed_table('tt2','id');
INSERT INTO tt1 SELECT * FROM tt2;
SELECT * FROM tt1 WHERE id = 3;
id
----
---------------------------------------------------------------------
3
(1 row)
SELECT * FROM tt2 WHERE id = 6;
id
----
---------------------------------------------------------------------
6
(1 row)
END;
SELECT count(*) FROM tt1;
count
-------
---------------------------------------------------------------------
6
(1 row)
@ -930,7 +930,7 @@ insert into sc.ref SELECT s FROM generate_series(0, 100) s;
SELECT create_reference_table('sc.ref');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -939,7 +939,7 @@ insert into sc.hash SELECT s FROM generate_series(0, 100) s;
SELECT create_distributed_table('sc.hash', 'a');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -952,7 +952,7 @@ insert into sc2.hash SELECT s FROM generate_series(0, 100) s;
SELECT create_distributed_table('sc2.hash', 'a');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -961,7 +961,7 @@ insert into sc2.ref SELECT s FROM generate_series(0, 100) s;
SELECT create_reference_table('sc2.ref');
NOTICE: Copying data from local table...
create_reference_table
------------------------
---------------------------------------------------------------------
(1 row)
@ -977,14 +977,14 @@ CREATE TABLE sc3.alter_replica_table
ALTER TABLE sc3.alter_replica_table REPLICA IDENTITY USING INDEX alter_replica_table_pkey;
SELECT create_distributed_table('sc3.alter_replica_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
COMMIT;
SELECT run_command_on_workers($$SELECT relreplident FROM pg_class join information_schema.tables AS tables ON (pg_class.relname=tables.table_name) WHERE relname LIKE 'alter_replica_table_%' AND table_schema='sc3' LIMIT 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,i)
(localhost,57638,t,i)
(2 rows)
@ -1002,14 +1002,14 @@ ALTER TABLE alter_replica_table REPLICA IDENTITY USING INDEX alter_replica_table
SELECT create_distributed_table('alter_replica_table', 'id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
COMMIT;
SELECT run_command_on_workers($$SELECT relreplident FROM pg_class join information_schema.tables AS tables ON (pg_class.relname=tables.table_name) WHERE relname LIKE 'alter_replica_table_%' AND table_schema='sc4' LIMIT 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,i)
(localhost,57638,t,i)
(2 rows)
@ -1027,14 +1027,14 @@ ALTER TABLE sc5.alter_replica_table REPLICA IDENTITY FULL;
SELECT create_distributed_table('sc5.alter_replica_table', 'id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
COMMIT;
SELECT run_command_on_workers($$SELECT relreplident FROM pg_class join information_schema.tables AS tables ON (pg_class.relname=tables.table_name) WHERE relname LIKE 'alter_replica_table_%' AND table_schema='sc5' LIMIT 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,f)
(localhost,57638,t,f)
(2 rows)
@ -1052,14 +1052,14 @@ ALTER TABLE sc6.alter_replica_table REPLICA IDENTITY USING INDEX unique_idx;
SELECT create_distributed_table('sc6.alter_replica_table', 'id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
COMMIT;
SELECT run_command_on_workers($$SELECT relreplident FROM pg_class join information_schema.tables AS tables ON (pg_class.relname=tables.table_name) WHERE relname LIKE 'alter_replica_table_%' AND table_schema='sc6' LIMIT 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,i)
(localhost,57638,t,i)
(2 rows)
@ -1076,14 +1076,14 @@ ALTER TABLE alter_replica_table REPLICA IDENTITY USING INDEX unique_idx;
SELECT create_distributed_table('alter_replica_table', 'id');
NOTICE: Copying data from local table...
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
COMMIT;
SELECT run_command_on_workers($$SELECT relreplident FROM pg_class join information_schema.tables AS tables ON (pg_class.relname=tables.table_name) WHERE relname LIKE 'alter_replica_table_%' AND table_schema='public' LIMIT 1$$);
run_command_on_workers
------------------------
---------------------------------------------------------------------
(localhost,57637,t,i)
(localhost,57638,t,i)
(2 rows)

View File

@ -13,7 +13,7 @@ WARNING: table "uniq_cns_append_tables" has a UNIQUE or EXCLUDE constraint
DETAIL: UNIQUE constraints, EXCLUDE constraints, and PRIMARY KEYs on append-partitioned tables cannot be enforced.
HINT: Consider using hash partitioning.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -28,7 +28,7 @@ WARNING: table "excl_cns_append_tables" has a UNIQUE or EXCLUDE constraint
DETAIL: UNIQUE constraints, EXCLUDE constraints, and PRIMARY KEYs on append-partitioned tables cannot be enforced.
HINT: Consider using hash partitioning.
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -71,7 +71,7 @@ CREATE TABLE pk_on_part_col
);
SELECT create_distributed_table('pk_on_part_col', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -82,7 +82,7 @@ CREATE TABLE uq_part_col
);
SELECT create_distributed_table('uq_part_col', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -94,7 +94,7 @@ CREATE TABLE uq_two_columns
);
SELECT create_distributed_table('uq_two_columns', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -102,7 +102,7 @@ INSERT INTO uq_two_columns (partition_col, other_col) VALUES (1,1);
INSERT INTO uq_two_columns (partition_col, other_col) VALUES (1,1);
ERROR: duplicate key value violates unique constraint "uq_two_columns_partition_col_other_col_key_365008"
DETAIL: Key (partition_col, other_col)=(1, 1) already exists.
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_on_part_col
(
partition_col integer,
@ -111,7 +111,7 @@ CREATE TABLE ex_on_part_col
);
SELECT create_distributed_table('ex_on_part_col', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -119,7 +119,7 @@ INSERT INTO ex_on_part_col (partition_col, other_col) VALUES (1,1);
INSERT INTO ex_on_part_col (partition_col, other_col) VALUES (1,2);
ERROR: conflicting key value violates exclusion constraint "ex_on_part_col_partition_col_excl_365012"
DETAIL: Key (partition_col)=(1) conflicts with existing key (partition_col)=(1).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_on_two_columns
(
partition_col integer,
@ -128,7 +128,7 @@ CREATE TABLE ex_on_two_columns
);
SELECT create_distributed_table('ex_on_two_columns', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -136,7 +136,7 @@ INSERT INTO ex_on_two_columns (partition_col, other_col) VALUES (1,1);
INSERT INTO ex_on_two_columns (partition_col, other_col) VALUES (1,1);
ERROR: conflicting key value violates exclusion constraint "ex_on_two_columns_partition_col_other_col_excl_365016"
DETAIL: Key (partition_col, other_col)=(1, 1) conflicts with existing key (partition_col, other_col)=(1, 1).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_on_two_columns_prt
(
partition_col integer,
@ -145,7 +145,7 @@ CREATE TABLE ex_on_two_columns_prt
);
SELECT create_distributed_table('ex_on_two_columns_prt', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -155,7 +155,7 @@ INSERT INTO ex_on_two_columns_prt (partition_col, other_col) VALUES (1,101);
INSERT INTO ex_on_two_columns_prt (partition_col, other_col) VALUES (1,101);
ERROR: conflicting key value violates exclusion constraint "ex_on_two_columns_prt_partition_col_other_col_excl_365020"
DETAIL: Key (partition_col, other_col)=(1, 101) conflicts with existing key (partition_col, other_col)=(1, 101).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_wrong_operator
(
partition_col tsrange,
@ -173,7 +173,7 @@ CREATE TABLE ex_overlaps
);
SELECT create_distributed_table('ex_overlaps', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -181,7 +181,7 @@ INSERT INTO ex_overlaps (partition_col, other_col) VALUES ('[2016-01-01 00:00:00
INSERT INTO ex_overlaps (partition_col, other_col) VALUES ('[2016-01-01 00:00:00, 2016-02-01 00:00:00]', '[2016-01-15 00:00:00, 2016-02-01 00:00:00]');
ERROR: conflicting key value violates exclusion constraint "ex_overlaps_other_col_partition_col_excl_365027"
DETAIL: Key (other_col, partition_col)=(["2016-01-15 00:00:00","2016-02-01 00:00:00"], ["2016-01-01 00:00:00","2016-02-01 00:00:00"]) conflicts with existing key (other_col, partition_col)=(["2016-01-01 00:00:00","2016-02-01 00:00:00"], ["2016-01-01 00:00:00","2016-02-01 00:00:00"]).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
-- now show that Citus can distribute unique and EXCLUDE constraints that
-- include the partition column, for hash-partitioned tables.
-- However, EXCLUDE constraints must include the partition column with
@ -194,7 +194,7 @@ CREATE TABLE pk_on_part_col_named
);
SELECT create_distributed_table('pk_on_part_col_named', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -205,7 +205,7 @@ CREATE TABLE uq_part_col_named
);
SELECT create_distributed_table('uq_part_col_named', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -217,7 +217,7 @@ CREATE TABLE uq_two_columns_named
);
SELECT create_distributed_table('uq_two_columns_named', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -225,7 +225,7 @@ INSERT INTO uq_two_columns_named (partition_col, other_col) VALUES (1,1);
INSERT INTO uq_two_columns_named (partition_col, other_col) VALUES (1,1);
ERROR: duplicate key value violates unique constraint "uq_two_columns_named_uniq_365036"
DETAIL: Key (partition_col, other_col)=(1, 1) already exists.
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_on_part_col_named
(
partition_col integer,
@ -234,7 +234,7 @@ CREATE TABLE ex_on_part_col_named
);
SELECT create_distributed_table('ex_on_part_col_named', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -242,7 +242,7 @@ INSERT INTO ex_on_part_col_named (partition_col, other_col) VALUES (1,1);
INSERT INTO ex_on_part_col_named (partition_col, other_col) VALUES (1,2);
ERROR: conflicting key value violates exclusion constraint "ex_on_part_col_named_exclude_365040"
DETAIL: Key (partition_col)=(1) conflicts with existing key (partition_col)=(1).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_on_two_columns_named
(
partition_col integer,
@ -251,7 +251,7 @@ CREATE TABLE ex_on_two_columns_named
);
SELECT create_distributed_table('ex_on_two_columns_named', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -259,7 +259,7 @@ INSERT INTO ex_on_two_columns_named (partition_col, other_col) VALUES (1,1);
INSERT INTO ex_on_two_columns_named (partition_col, other_col) VALUES (1,1);
ERROR: conflicting key value violates exclusion constraint "ex_on_two_columns_named_exclude_365044"
DETAIL: Key (partition_col, other_col)=(1, 1) conflicts with existing key (partition_col, other_col)=(1, 1).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_multiple_excludes
(
partition_col integer,
@ -270,7 +270,7 @@ CREATE TABLE ex_multiple_excludes
);
SELECT create_distributed_table('ex_multiple_excludes', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -278,11 +278,11 @@ INSERT INTO ex_multiple_excludes (partition_col, other_col, other_other_col) VAL
INSERT INTO ex_multiple_excludes (partition_col, other_col, other_other_col) VALUES (1,1,2);
ERROR: conflicting key value violates exclusion constraint "ex_multiple_excludes_excl1_365048"
DETAIL: Key (partition_col, other_col)=(1, 1) conflicts with existing key (partition_col, other_col)=(1, 1).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
INSERT INTO ex_multiple_excludes (partition_col, other_col, other_other_col) VALUES (1,2,1);
ERROR: conflicting key value violates exclusion constraint "ex_multiple_excludes_excl2_365048"
DETAIL: Key (partition_col, other_other_col)=(1, 1) conflicts with existing key (partition_col, other_other_col)=(1, 1).
CONTEXT: while executing command on localhost:57638
CONTEXT: while executing command on localhost:xxxxx
CREATE TABLE ex_wrong_operator_named
(
partition_col tsrange,
@ -300,7 +300,7 @@ CREATE TABLE ex_overlaps_named
);
SELECT create_distributed_table('ex_overlaps_named', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -308,7 +308,7 @@ INSERT INTO ex_overlaps_named (partition_col, other_col) VALUES ('[2016-01-01 00
INSERT INTO ex_overlaps_named (partition_col, other_col) VALUES ('[2016-01-01 00:00:00, 2016-02-01 00:00:00]', '[2016-01-15 00:00:00, 2016-02-01 00:00:00]');
ERROR: conflicting key value violates exclusion constraint "ex_overlaps_operator_named_exclude_365055"
DETAIL: Key (other_col, partition_col)=(["2016-01-15 00:00:00","2016-02-01 00:00:00"], ["2016-01-01 00:00:00","2016-02-01 00:00:00"]) conflicts with existing key (other_col, partition_col)=(["2016-01-01 00:00:00","2016-02-01 00:00:00"], ["2016-01-01 00:00:00","2016-02-01 00:00:00"]).
CONTEXT: while executing command on localhost:57637
CONTEXT: while executing command on localhost:xxxxx
-- now show that Citus allows unique constraints on range-partitioned tables.
CREATE TABLE uq_range_tables
(
@ -317,7 +317,7 @@ CREATE TABLE uq_range_tables
);
SELECT create_distributed_table('uq_range_tables', 'partition_col', 'range');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -330,7 +330,7 @@ CREATE TABLE check_example
);
SELECT create_distributed_table('check_example', 'partition_col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -338,13 +338,13 @@ SELECT create_distributed_table('check_example', 'partition_col', 'hash');
SELECT "Column", "Type", "Definition" FROM index_attrs WHERE
relid = 'check_example_partition_col_key_365056'::regclass;
Column | Type | Definition
---------------+---------+---------------
---------------------------------------------------------------------
partition_col | integer | partition_col
(1 row)
SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.check_example_365056'::regclass;
Constraint | Definition
-------------------------------------+-------------------------------------
---------------------------------------------------------------------
check_example_other_col_check | CHECK (other_col >= 100)
check_example_other_other_col_check | CHECK (abs(other_other_col) >= 100)
(2 rows)
@ -376,21 +376,21 @@ SET citus.shard_replication_factor = 1;
CREATE TABLE raw_table_1 (user_id int, UNIQUE(user_id));
SELECT create_distributed_table('raw_table_1', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
CREATE TABLE raw_table_2 (user_id int REFERENCES raw_table_1(user_id), UNIQUE(user_id));
SELECT create_distributed_table('raw_table_2', 'user_id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
-- see that the constraint exists
SELECT "Constraint", "Definition" FROM table_fkeys WHERE relid='raw_table_2'::regclass;
Constraint | Definition
--------------------------+-------------------------------------------------------
---------------------------------------------------------------------
raw_table_2_user_id_fkey | FOREIGN KEY (user_id) REFERENCES raw_table_1(user_id)
(1 row)
@ -405,7 +405,7 @@ NOTICE: drop cascades to constraint raw_table_2_user_id_fkey on table raw_table
-- see that the constraint also dropped
SELECT "Constraint", "Definition" FROM table_fkeys WHERE relid='raw_table_2'::regclass;
Constraint | Definition
------------+------------
---------------------------------------------------------------------
(0 rows)
-- drop the table as well

View File

@ -11,7 +11,7 @@ CREATE TABLE multi_task_table
);
SELECT create_distributed_table('multi_task_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -21,13 +21,13 @@ INSERT INTO multi_task_table VALUES(3, 'elem_3');
-- Shouldn't log anything when the log level is 'off'
SHOW citus.multi_task_query_log_level;
citus.multi_task_query_log_level
----------------------------------
---------------------------------------------------------------------
off
(1 row)
SELECT * FROM multi_task_table ORDER BY 1;
id | name
----+--------
---------------------------------------------------------------------
1 | elem_1
2 | elem_2
3 | elem_3
@ -39,7 +39,7 @@ SELECT * FROM multi_task_table ORDER BY 1;
NOTICE: multi-task query about to be executed
HINT: Queries are split to multiple tasks if they have to be split into several queries on the workers.
id | name
----+--------
---------------------------------------------------------------------
1 | elem_1
2 | elem_2
3 | elem_3
@ -49,7 +49,7 @@ SELECT AVG(id) AS avg_id FROM multi_task_table;
NOTICE: multi-task query about to be executed
HINT: Queries are split to multiple tasks if they have to be split into several queries on the workers.
avg_id
--------------------
---------------------------------------------------------------------
2.0000000000000000
(1 row)
@ -71,13 +71,13 @@ CREATE TABLE summary_table
);
SELECT create_distributed_table('raw_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('summary_table', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -103,7 +103,7 @@ INSERT INTO summary_table SELECT id, SUM(order_count) FROM raw_table WHERE id =
SET citus.multi_task_query_log_level to DEFAULT;
SELECT * FROM summary_table ORDER BY 1,2;
id | order_sum
----+-----------
---------------------------------------------------------------------
1 | 35
1 | 35
2 | 40
@ -127,7 +127,7 @@ ROLLBACK;
SET citus.multi_task_query_log_level to DEFAULT;
SELECT * FROM summary_table ORDER BY 1,2;
id | order_sum
----+-----------
---------------------------------------------------------------------
1 | 35
1 | 35
2 | 40
@ -139,7 +139,7 @@ SET citus.multi_task_query_log_level TO notice;
-- Shouldn't log since it is a router select query
SELECT * FROM raw_table WHERE ID = 1;
id | order_count
----+-------------
---------------------------------------------------------------------
1 | 15
1 | 20
(2 rows)
@ -158,13 +158,13 @@ CREATE TABLE tt2
);
SELECT create_distributed_table('tt1', 'id');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
SELECT create_distributed_table('tt2', 'name');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -178,7 +178,7 @@ SELECT tt1.id, tt2.count from tt1,tt2 where tt1.id = tt2.id;
NOTICE: multi-task query about to be executed
HINT: Queries are split to multiple tasks if they have to be split into several queries on the workers.
id | count
----+-------
---------------------------------------------------------------------
1 | 5
2 | 15
(2 rows)

View File

@ -17,7 +17,7 @@ SELECT run_command_on_coordinator_and_workers($cf$
RETURNS NULL ON NULL INPUT;
$cf$);
run_command_on_coordinator_and_workers
----------------------------------------
---------------------------------------------------------------------
(1 row)
@ -29,7 +29,7 @@ SELECT run_command_on_coordinator_and_workers($cf$
RETURNS NULL ON NULL INPUT;
$cf$);
run_command_on_coordinator_and_workers
----------------------------------------
---------------------------------------------------------------------
(1 row)
@ -43,7 +43,7 @@ SELECT run_command_on_coordinator_and_workers($co$
);
$co$);
run_command_on_coordinator_and_workers
----------------------------------------
---------------------------------------------------------------------
(1 row)
@ -75,7 +75,7 @@ CREATE TABLE composite_type_partitioned_table
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('composite_type_partitioned_table', 'col', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -87,14 +87,14 @@ INSERT INTO composite_type_partitioned_table VALUES (4, '(7, 8)'::test_composit
INSERT INTO composite_type_partitioned_table VALUES (5, '(9, 10)'::test_composite_type);
SELECT * FROM composite_type_partitioned_table WHERE col = '(7, 8)'::test_composite_type;
id | col
----+-------
---------------------------------------------------------------------
4 | (7,8)
(1 row)
UPDATE composite_type_partitioned_table SET id = 6 WHERE col = '(7, 8)'::test_composite_type;
SELECT * FROM composite_type_partitioned_table WHERE col = '(7, 8)'::test_composite_type;
id | col
----+-------
---------------------------------------------------------------------
6 | (7,8)
(1 row)
@ -106,7 +106,7 @@ CREATE TABLE bugs (
);
SELECT create_distributed_table('bugs', 'status', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -118,7 +118,7 @@ INSERT INTO bugs VALUES (4, 'closed');
INSERT INTO bugs VALUES (5, 'open');
SELECT * FROM bugs WHERE status = 'closed'::bug_status;
id | status
----+--------
---------------------------------------------------------------------
3 | closed
4 | closed
(2 rows)
@ -127,7 +127,7 @@ UPDATE bugs SET status = 'closed'::bug_status WHERE id = 2;
ERROR: modifying the partition value of rows is not allowed
SELECT * FROM bugs WHERE status = 'open'::bug_status;
id | status
----+--------
---------------------------------------------------------------------
2 | open
5 | open
(2 rows)
@ -140,7 +140,7 @@ CREATE TABLE varchar_hash_partitioned_table
);
SELECT create_distributed_table('varchar_hash_partitioned_table', 'name', 'hash');
create_distributed_table
--------------------------
---------------------------------------------------------------------
(1 row)
@ -152,14 +152,14 @@ INSERT INTO varchar_hash_partitioned_table VALUES (4, 'Sumedh');
INSERT INTO varchar_hash_partitioned_table VALUES (5, 'Marco');
SELECT * FROM varchar_hash_partitioned_table WHERE id = 1;
id | name
----+-------
---------------------------------------------------------------------
1 | Jason
(1 row)
UPDATE varchar_hash_partitioned_table SET id = 6 WHERE name = 'Jason';
SELECT * FROM varchar_hash_partitioned_table WHERE id = 6;
id | name
----+-------
---------------------------------------------------------------------
6 | Jason
(1 row)

Some files were not shown because too many files have changed in this diff
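
Every hunk above is the product of just two kinds of normalization: psql column-separator rows (runs of dashes joined by `+`) are unified into one fixed-width dash line, and ephemeral worker ports in `CONTEXT: while executing command on localhost:NNNNN` lines are masked as `localhost:xxxxx`. Below is a minimal sketch of ERE sed rules that would produce these rewrites; the patterns are illustrative assumptions, not the exact contents of `src/test/regress/bin/normalize.sed`:

```sed
# Assumption: collapse any header-separator row ("----", "----+----", ...)
# into one fixed dash line, so column-width changes don't show up in diffs.
s/^-[-+]*-$/---------------------------------------------------------------------/
# Assumption: mask ephemeral worker ports in error context lines.
s/localhost:[0-9]+/localhost:xxxxx/g
```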