PG16 compatibility - more test output fixes (#7108)

PG16 compatibility - part 7

Check out part 1 42d956888d
part 2 0d503dd5ac
part 3 907d72e60d
part 4 7c6b4ce103
part 5 6056cb2c29
part 6 b36c431abb

This commit is part of the series of PG16 compatibility commits. It adjusts our
tests to stay compatible with the following changes in PG16:

- PG16 removed logic for converting a table to a view 
Relevant PG commit:
b23cd185fd5410e5204683933f848d4583e34b35
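For context, the pattern that no longer works (removed from the undistribute_table test below) was:

    CREATE TABLE rule_table_1 (a INT);
    CREATE TABLE rule_table_2 (a INT);
    -- before PG16, this rule silently turned rule_table_1 into a view
    CREATE RULE "_RETURN" AS ON SELECT TO rule_table_1 DO INSTEAD SELECT * FROM rule_table_2;

We simply drop this part of the test, since the behavior it exercised no longer exists.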

- Fix changed error message in certificate verification 
Relevant PG commit:
8eda7314652703a2ae30d6c4a69c378f6813a7f2
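Since the new wording only appears on PG16, a rule in our output normalizer rewrites the old message into the new one (see the sed diff below):

    s/provide the file or change sslmode/provide the file, use the system's trusted roots with sslrootcert=system, or change sslmode/g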

- Fix backend type order in tests 
Relevant PG commit:
0c679464a837079acc75ff1d45eaa83f79e05690
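Rather than hard-coding the enum values, the test now picks them based on the server version (see the mx_hide_shard_names diffs below), e.g. for the background worker type:

    \if :server_version_ge_16
    SELECT 5 AS bgworker \gset
    \else
    SELECT 4 AS bgworker \gset
    \endif
    SELECT set_backend_type(:bgworker);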

- Reduce log level to omit extra NOTICE in create collation in PG16 
Relevant PG commit:
a14e75eb0b6a73821e0d66c0d407372ec8376105
That commit made the LOCALE parameter apply regardless of the
provider used, and it prints the following notice:
NOTICE:  using standard form "und-u-ks-level2" for ICU locale "@colStrength=secondary"
We suppress this notice to avoid output changes between PG versions.
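The tests wrap the statement with a lower log level so that no version emits the notice (see the pg12 collation diffs below):

    SET client_min_messages TO WARNING;
    CREATE COLLATION test_pg12.case_insensitive (
        provider = icu,
        locale = '@colStrength=secondary',
        deterministic = false
    );
    RESET client_min_messages;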

- Fix columnar_memory test 
TopMemoryContext now has more children contexts
Possible relevant PG commit:
9d3ebba729ebaf5882a92f0f5f662a3312037605
memusage is now around 8.5MB, whereas it was less than 8MB before.
To avoid differences between PG versions, I changed the test to compare
against less than 9MB. That still clearly reflects the improvement from
28MB.
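The updated check in the columnar_memory test (diff below) becomes:

    SELECT
        (SELECT max(memusage) < 9 * 1024 * 1024 FROM t WHERE tag='large batch') AS large_batch_ok,
        (SELECT max(memusage) < 9 * 1024 * 1024 FROM t WHERE tag='first batch') AS first_batch_ok;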

- Alternative test output for GRANTOR values in pg_auth_members 
grantor changed in PG16
Relevant PG commit:
ce6b672e4455820a0348214be0da1a024c3f619f
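In this test, the version-dependent grantor is folded into a boolean that holds on both PG15 and PG16, so one expected file serves both versions (see the diff below):

    SELECT roleid::regrole::text AS role, member::regrole::text,
           (grantor::regrole::text IN ('postgres', 'non_dist_role_1', 'dist_role_1')) AS grantor,
           admin_option
    FROM pg_auth_members
    WHERE roleid::regrole::text LIKE '%dist\_%' ORDER BY 1, 2;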

- Remove redundant grouping columns from our tests 
Relevant PG commit:
8d83a5d0a2673174dc478e707de1f502935391a5
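Here the removed column is always redundant: c5 is the constant -1::float, so PG16 drops it from the grouping key anyway (visible in the Group Key changes below), and the tests now omit it from GROUP BY on all versions:

    -GROUP BY c1, c2, c3, c4, c5, c6
    +GROUP BY c1, c2, c3, c4, c6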

- Fix tests where the order of Filters in EXPLAIN output changed
Relevant PG commit:
2489d76c4906f4461a364ca8ad7e0751ead8aa0d
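For example, in the columnar chunk filtering output the operands of a pushed-down join clause now appear in the opposite order:

    -Filter: ((x1 > 15000) AND (r1.id1 = id) AND ((x1)::text > '000000'::text))
    +Filter: ((x1 > 15000) AND (id = r1.id1) AND ((x1)::text > '000000'::text))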

More PG16 compatibility commits are coming soon ...
Naisila Puka 2023-08-09 18:04:32 +03:00 committed by GitHub
parent b36c431abb
commit ee3153fe50
24 changed files with 1236 additions and 96 deletions

View File

@@ -301,5 +301,6 @@ s/(NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA p
 # (This is not preprocessor directive, but a reminder for the developer that will drop PG14&15 support )
 s/, password_required=false//g
+s/provide the file or change sslmode/provide the file, use the system's trusted roots with sslrootcert=system, or change sslmode/g
 #endif /* PG_VERSION_NUM < PG_VERSION_16 */

View File

@@ -1,6 +1,10 @@
 --
 -- Test chunk filtering in columnar using min/max values in stripe skip lists.
 --
+-- It has an alternative test output file
+-- because PG16 changed the order of some Filters in EXPLAIN
+-- Relevant PG commit:
+-- https://github.com/postgres/postgres/commit/2489d76c4906f4461a364ca8ad7e0751ead8aa0d
 --
 -- filtered_row_count returns number of rows filtered by the WHERE clause.
 -- If chunks get filtered by columnar, less rows are passed to WHERE
@@ -370,10 +374,10 @@ SELECT * FROM r1, coltest WHERE
 Filter: ((n1 % 10) = 0)
 Rows Removed by Filter: 1
 -> Custom Scan (ColumnarScan) on coltest (actual rows=1 loops=4)
-Filter: ((x1 > 15000) AND (r1.id1 = id) AND ((x1)::text > '000000'::text))
+Filter: ((x1 > 15000) AND (id = r1.id1) AND ((x1)::text > '000000'::text))
 Rows Removed by Filter: 999
 Columnar Projected Columns: id, x1, x2, x3
-Columnar Chunk Group Filters: ((x1 > 15000) AND (r1.id1 = id))
+Columnar Chunk Group Filters: ((x1 > 15000) AND (id = r1.id1))
 Columnar Chunk Groups Removed by Filter: 19
 (10 rows)
@@ -413,10 +417,10 @@ SELECT * FROM r1, r2, r3, r4, r5, r6, r7, coltest WHERE
 -> Seq Scan on r2 (actual rows=5 loops=5)
 -> Seq Scan on r3 (actual rows=5 loops=5)
 -> Custom Scan (ColumnarScan) on coltest (actual rows=1 loops=5)
-Filter: (r1.id1 = id)
+Filter: (id = r1.id1)
 Rows Removed by Filter: 999
 Columnar Projected Columns: id, x1, x2, x3
-Columnar Chunk Group Filters: (r1.id1 = id)
+Columnar Chunk Group Filters: (id = r1.id1)
 Columnar Chunk Groups Removed by Filter: 19
 -> Seq Scan on r4 (actual rows=1 loops=5)
 -> Seq Scan on r5 (actual rows=1 loops=1)
@@ -588,10 +592,10 @@ DETAIL: parameterized by rels {r3}; 2 clauses pushed down
 -> Nested Loop (actual rows=3 loops=1)
 -> Seq Scan on r1 (actual rows=5 loops=1)
 -> Custom Scan (ColumnarScan) on coltest (actual rows=1 loops=5)
-Filter: ((r1.n1 > x1) AND (r1.id1 = id))
+Filter: ((r1.n1 > x1) AND (id = r1.id1))
 Rows Removed by Filter: 799
 Columnar Projected Columns: id, x1, x2, x3
-Columnar Chunk Group Filters: ((r1.n1 > x1) AND (r1.id1 = id))
+Columnar Chunk Group Filters: ((r1.n1 > x1) AND (id = r1.id1))
 Columnar Chunk Groups Removed by Filter: 19
 -> Seq Scan on r2 (actual rows=5 loops=3)
 -> Seq Scan on r3 (actual rows=5 loops=3)
@@ -618,10 +622,10 @@ SELECT * FROM r1, coltest_part WHERE
 -> Seq Scan on r1 (actual rows=5 loops=1)
 -> Append (actual rows=1 loops=5)
 -> Custom Scan (ColumnarScan) on coltest_part0 coltest_part_1 (actual rows=1 loops=3)
-Filter: ((r1.n1 > x1) AND (r1.id1 = id))
+Filter: ((r1.n1 > x1) AND (id = r1.id1))
 Rows Removed by Filter: 999
 Columnar Projected Columns: id, x1, x2, x3
-Columnar Chunk Group Filters: ((r1.n1 > x1) AND (r1.id1 = id))
+Columnar Chunk Group Filters: ((r1.n1 > x1) AND (id = r1.id1))
 Columnar Chunk Groups Removed by Filter: 9
 -> Seq Scan on coltest_part1 coltest_part_2 (actual rows=0 loops=2)
 Filter: ((r1.n1 > x1) AND (r1.id1 = id))

File diff suppressed because it is too large

View File

@@ -77,10 +77,10 @@ FROM columnar_test_helpers.columnar_store_memory_stats();
 top_growth | 1
 -- before this change, max mem usage while executing inserts was 28MB and
--- with this change it's less than 8MB.
+-- with this change it's less than 9MB.
 SELECT
-(SELECT max(memusage) < 8 * 1024 * 1024 FROM t WHERE tag='large batch') AS large_batch_ok,
-(SELECT max(memusage) < 8 * 1024 * 1024 FROM t WHERE tag='first batch') AS first_batch_ok;
+(SELECT max(memusage) < 9 * 1024 * 1024 FROM t WHERE tag='large batch') AS large_batch_ok,
+(SELECT max(memusage) < 9 * 1024 * 1024 FROM t WHERE tag='first batch') AS first_batch_ok;
 -[ RECORD 1 ]--+--
 large_batch_ok | t
 first_batch_ok | t

View File

@@ -244,13 +244,13 @@ SELECT 1 FROM master_add_node('localhost', :worker_2_port);
 1
 (1 row)
-SELECT roleid::regrole::text AS role, member::regrole::text, grantor::regrole::text, admin_option FROM pg_auth_members WHERE roleid::regrole::text LIKE '%dist\_%' ORDER BY 1, 2;
+SELECT roleid::regrole::text AS role, member::regrole::text, (grantor::regrole::text IN ('postgres', 'non_dist_role_1', 'dist_role_1')) AS grantor, admin_option FROM pg_auth_members WHERE roleid::regrole::text LIKE '%dist\_%' ORDER BY 1, 2;
 role | member | grantor | admin_option
 ---------------------------------------------------------------------
-dist_role_1 | dist_role_2 | non_dist_role_1 | f
-dist_role_3 | non_dist_role_3 | postgres | f
-non_dist_role_1 | non_dist_role_2 | dist_role_1 | f
-non_dist_role_4 | dist_role_4 | postgres | f
+dist_role_1 | dist_role_2 | t | f
+dist_role_3 | non_dist_role_3 | t | f
+non_dist_role_1 | non_dist_role_2 | t | f
+non_dist_role_4 | dist_role_4 | t | f
 (4 rows)
 SELECT objid::regrole FROM pg_catalog.pg_dist_object WHERE classid='pg_authid'::regclass::oid AND objid::regrole::text LIKE '%dist\_%' ORDER BY 1;

View File

@@ -1214,7 +1214,7 @@ SELECT c1, c2, c3, c4, -1::float AS c5,
 sum(cardinality),
 sum(sum)
 FROM source_table
-GROUP BY c1, c2, c3, c4, c5, c6
+GROUP BY c1, c2, c3, c4, c6
 ON CONFLICT(c1, c2, c3, c4, c5, c6)
 DO UPDATE SET
 cardinality = enriched.cardinality + excluded.cardinality,
@@ -1232,7 +1232,7 @@ SELECT c1, c2, c3, c4, -1::float AS c5,
 sum(cardinality),
 sum(sum)
 FROM source_table
-GROUP BY c1, c2, c3, c4, c5, c6
+GROUP BY c1, c2, c3, c4, c6
 ON CONFLICT(c1, c2, c3, c4, c5, c6)
 DO UPDATE SET
 cardinality = enriched.cardinality + excluded.cardinality,
@@ -1247,7 +1247,7 @@ DO UPDATE SET
 -> Task
 Node: host=localhost port=xxxxx dbname=regression
 -> HashAggregate
-Group Key: c1, c2, c3, c4, '-1'::double precision, insert_select_repartition.dist_func(c1, 4)
+Group Key: c1, c2, c3, c4, insert_select_repartition.dist_func(c1, 4)
 -> Seq Scan on source_table_4213644 source_table
 (10 rows)

View File

@@ -1214,7 +1214,7 @@ SELECT c1, c2, c3, c4, -1::float AS c5,
 sum(cardinality),
 sum(sum)
 FROM source_table
-GROUP BY c1, c2, c3, c4, c5, c6
+GROUP BY c1, c2, c3, c4, c6
 ON CONFLICT(c1, c2, c3, c4, c5, c6)
 DO UPDATE SET
 cardinality = enriched.cardinality + excluded.cardinality,
@@ -1232,7 +1232,7 @@ SELECT c1, c2, c3, c4, -1::float AS c5,
 sum(cardinality),
 sum(sum)
 FROM source_table
-GROUP BY c1, c2, c3, c4, c5, c6
+GROUP BY c1, c2, c3, c4, c6
 ON CONFLICT(c1, c2, c3, c4, c5, c6)
 DO UPDATE SET
 cardinality = enriched.cardinality + excluded.cardinality,
@@ -1247,7 +1247,7 @@ DO UPDATE SET
 -> Task
 Node: host=localhost port=xxxxx dbname=regression
 -> HashAggregate
-Group Key: c1, c2, c3, c4, '-1'::double precision, insert_select_repartition.dist_func(c1, 4)
+Group Key: c1, c2, c3, c4, insert_select_repartition.dist_func(c1, 4)
 -> Seq Scan on source_table_4213644 source_table
 (10 rows)

View File

@@ -1232,31 +1232,20 @@ WHERE o_orderkey IN (1, 2)
 -> Seq Scan on lineitem_hash_partitioned_630004 lineitem_hash_partitioned
 (13 rows)
+SELECT public.coordinator_plan($Q$
 EXPLAIN (COSTS OFF)
 SELECT count(*)
 FROM orders_hash_partitioned
 FULL OUTER JOIN lineitem_hash_partitioned ON (o_orderkey = l_orderkey)
 WHERE o_orderkey IN (1, 2)
 AND l_orderkey IN (2, 3);
-QUERY PLAN
+$Q$);
+coordinator_plan
 ---------------------------------------------------------------------
 Aggregate
 -> Custom Scan (Citus Adaptive)
 Task Count: 3
-Tasks Shown: One of 3
--> Task
-Node: host=localhost port=xxxxx dbname=regression
--> Aggregate
--> Nested Loop
-Join Filter: (orders_hash_partitioned.o_orderkey = lineitem_hash_partitioned.l_orderkey)
--> Seq Scan on orders_hash_partitioned_630000 orders_hash_partitioned
-Filter: (o_orderkey = ANY ('{1,2}'::integer[]))
--> Materialize
--> Bitmap Heap Scan on lineitem_hash_partitioned_630004 lineitem_hash_partitioned
-Recheck Cond: (l_orderkey = ANY ('{2,3}'::integer[]))
--> Bitmap Index Scan on lineitem_hash_partitioned_pkey_630004
-Index Cond: (l_orderkey = ANY ('{2,3}'::integer[]))
-(16 rows)
+(3 rows)
 SET citus.task_executor_type TO DEFAULT;
 DROP TABLE lineitem_hash_partitioned;

View File

@@ -120,7 +120,7 @@ EXPLAIN (COSTS FALSE)
 SELECT sum(l_extendedprice * l_discount) as revenue
 FROM lineitem_hash, orders_hash
 WHERE o_orderkey = l_orderkey
-GROUP BY l_orderkey, o_orderkey, l_shipmode HAVING sum(l_quantity) > 24
+GROUP BY l_orderkey, l_shipmode HAVING sum(l_quantity) > 24
 ORDER BY 1 DESC LIMIT 3;
 QUERY PLAN
 ---------------------------------------------------------------------
@@ -136,7 +136,7 @@ EXPLAIN (COSTS FALSE)
 -> Sort
 Sort Key: (sum((lineitem_hash.l_extendedprice * lineitem_hash.l_discount))) DESC
 -> HashAggregate
-Group Key: lineitem_hash.l_orderkey, orders_hash.o_orderkey, lineitem_hash.l_shipmode
+Group Key: lineitem_hash.l_orderkey, lineitem_hash.l_shipmode
 Filter: (sum(lineitem_hash.l_quantity) > '24'::numeric)
 -> Hash Join
 Hash Cond: (orders_hash.o_orderkey = lineitem_hash.l_orderkey)

View File

@@ -148,7 +148,7 @@ SELECT pg_reload_conf();
 CREATE SUBSCRIPTION subs_01 CONNECTION 'host=''localhost'' port=57637'
 PUBLICATION pub_01 WITH (citus_use_authinfo=true);
 ERROR: could not connect to the publisher: root certificate file "/non/existing/certificate.crt" does not exist
-Either provide the file or change sslmode to disable server certificate verification.
+Either provide the file, use the system's trusted roots with sslrootcert=system, or change sslmode to disable server certificate verification.
 ALTER SYSTEM RESET citus.node_conninfo;
 SELECT pg_reload_conf();
 pg_reload_conf

View File

@@ -425,9 +425,25 @@ SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_name
 test_table_2_1130000
 (4 rows)
+-- PG16 added one more backend type B_STANDALONE_BACKEND
+-- and also alphabetized the backend types, hence the orders changed
+-- Relevant PG commit:
+-- https://github.com/postgres/postgres/commit/0c679464a837079acc75ff1d45eaa83f79e05690
+SHOW server_version \gset
+SELECT substring(:'server_version', '\d+')::int >= 16 AS server_version_ge_16
+\gset
+\if :server_version_ge_16
+SELECT 4 AS client_backend \gset
+SELECT 5 AS bgworker \gset
+SELECT 12 AS walsender \gset
+\else
+SELECT 3 AS client_backend \gset
+SELECT 4 AS bgworker \gset
+SELECT 9 AS walsender \gset
+\endif
 -- say, we set it to bgworker
 -- the shards and indexes do not show up
-SELECT set_backend_type(4);
+SELECT set_backend_type(:bgworker);
 NOTICE: backend type switched to: background worker
 set_backend_type
 ---------------------------------------------------------------------
@@ -445,7 +461,7 @@ SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_name
 -- or, we set it to walsender
 -- the shards and indexes do not show up
-SELECT set_backend_type(9);
+SELECT set_backend_type(:walsender);
 NOTICE: backend type switched to: walsender
 set_backend_type
 ---------------------------------------------------------------------
@@ -480,7 +496,7 @@ SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_name
 RESET application_name;
 -- but, client backends to see the shards
-SELECT set_backend_type(3);
+SELECT set_backend_type(:client_backend);
 NOTICE: backend type switched to: client backend
 set_backend_type
 ---------------------------------------------------------------------

View File

@@ -1062,7 +1062,7 @@ SELECT count(*) FROM keyval1 GROUP BY key HAVING sum(value) > (SELECT sum(value)
 (26 rows)
 EXPLAIN (COSTS OFF)
-SELECT count(*) FROM keyval1 k1 WHERE k1.key = 2 GROUP BY key HAVING sum(value) > (SELECT sum(value) FROM keyval2 k2 WHERE k2.key = 2 GROUP BY key ORDER BY 1 DESC LIMIT 1);
+SELECT count(*) FROM keyval1 k1 WHERE k1.key = 2 HAVING sum(value) > (SELECT sum(value) FROM keyval2 k2 WHERE k2.key = 2 ORDER BY 1 DESC LIMIT 1);
 QUERY PLAN
 ---------------------------------------------------------------------
 Custom Scan (Citus Adaptive)
@@ -1070,20 +1070,18 @@ SELECT count(*) FROM keyval1 k1 WHERE k1.key = 2 GROUP BY key HAVING sum(value)
 Tasks Shown: All
 -> Task
 Node: host=localhost port=xxxxx dbname=regression
--> GroupAggregate
-Group Key: k1.key
+-> Aggregate
 Filter: (sum(k1.value) > $0)
 InitPlan 1 (returns $0)
 -> Limit
 -> Sort
 Sort Key: (sum(k2.value)) DESC
--> GroupAggregate
-Group Key: k2.key
+-> Aggregate
 -> Seq Scan on keyval2_xxxxxxx k2
 Filter: (key = 2)
 -> Seq Scan on keyval1_xxxxxxx k1
 Filter: (key = 2)
-(18 rows)
+(16 rows)
 -- Simple join subquery pushdown
 SELECT

View File

@@ -370,11 +370,13 @@ SELECT DISTINCT y FROM test;
 (1 row)
 -- non deterministic collations
+SET client_min_messages TO WARNING;
 CREATE COLLATION test_pg12.case_insensitive (
 provider = icu,
 locale = '@colStrength=secondary',
 deterministic = false
 );
+RESET client_min_messages;
 CREATE TABLE col_test (
 id int,
 val text collate case_insensitive

View File

@@ -400,22 +400,6 @@ NOTICE: renaming the new table to undistribute_table.dist_type_table
 (1 row)
--- test CREATE RULE with ON SELECT
-CREATE TABLE rule_table_1 (a INT);
-CREATE TABLE rule_table_2 (a INT);
-SELECT create_distributed_table('rule_table_2', 'a');
-create_distributed_table
----------------------------------------------------------------------
-(1 row)
-CREATE RULE "_RETURN" AS ON SELECT TO rule_table_1 DO INSTEAD SELECT * FROM rule_table_2;
--- the CREATE RULE turns rule_table_1 into a view
-ALTER EXTENSION plpgsql ADD VIEW rule_table_1;
-NOTICE: Citus does not propagate adding/dropping member objects
-HINT: You can add/drop the member objects on the workers as well.
-SELECT undistribute_table('rule_table_2');
-ERROR: cannot alter table because an extension depends on it
 -- test CREATE RULE without ON SELECT
 CREATE TABLE rule_table_3 (a INT);
 CREATE TABLE rule_table_4 (a INT);
@@ -444,9 +428,6 @@ NOTICE: renaming the new table to undistribute_table.rule_table_4
 ALTER EXTENSION plpgsql DROP VIEW extension_view;
 NOTICE: Citus does not propagate adding/dropping member objects
 HINT: You can add/drop the member objects on the workers as well.
-ALTER EXTENSION plpgsql DROP VIEW rule_table_1;
-NOTICE: Citus does not propagate adding/dropping member objects
-HINT: You can add/drop the member objects on the workers as well.
 ALTER EXTENSION plpgsql DROP TABLE rule_table_3;
 NOTICE: Citus does not propagate adding/dropping member objects
 HINT: You can add/drop the member objects on the workers as well.
@@ -456,11 +437,9 @@ DETAIL: drop cascades to view undis_view1
 drop cascades to view undis_view2
 drop cascades to view another_schema.undis_view3
 DROP SCHEMA undistribute_table, another_schema CASCADE;
-NOTICE: drop cascades to 7 other objects
+NOTICE: drop cascades to 5 other objects
 DETAIL: drop cascades to table extension_table
 drop cascades to view extension_view
 drop cascades to table dist_type_table
-drop cascades to table rule_table_2
-drop cascades to view rule_table_1
 drop cascades to table rule_table_3
 drop cascades to table rule_table_4

View File

@@ -1,6 +1,10 @@
 --
 -- Test chunk filtering in columnar using min/max values in stripe skip lists.
 --
+-- It has an alternative test output file
+-- because PG16 changed the order of some Filters in EXPLAIN
+-- Relevant PG commit:
+-- https://github.com/postgres/postgres/commit/2489d76c4906f4461a364ca8ad7e0751ead8aa0d
 --

View File

@@ -77,10 +77,10 @@ SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.03 THEN 1
 FROM columnar_test_helpers.columnar_store_memory_stats();
 -- before this change, max mem usage while executing inserts was 28MB and
--- with this change it's less than 8MB.
+-- with this change it's less than 9MB.
 SELECT
-(SELECT max(memusage) < 8 * 1024 * 1024 FROM t WHERE tag='large batch') AS large_batch_ok,
-(SELECT max(memusage) < 8 * 1024 * 1024 FROM t WHERE tag='first batch') AS first_batch_ok;
+(SELECT max(memusage) < 9 * 1024 * 1024 FROM t WHERE tag='large batch') AS large_batch_ok,
+(SELECT max(memusage) < 9 * 1024 * 1024 FROM t WHERE tag='first batch') AS first_batch_ok;
 \x

View File

@@ -117,7 +117,7 @@ GRANT non_dist_role_4 TO dist_role_4;
 SELECT 1 FROM master_add_node('localhost', :worker_2_port);
-SELECT roleid::regrole::text AS role, member::regrole::text, grantor::regrole::text, admin_option FROM pg_auth_members WHERE roleid::regrole::text LIKE '%dist\_%' ORDER BY 1, 2;
+SELECT roleid::regrole::text AS role, member::regrole::text, (grantor::regrole::text IN ('postgres', 'non_dist_role_1', 'dist_role_1')) AS grantor, admin_option FROM pg_auth_members WHERE roleid::regrole::text LIKE '%dist\_%' ORDER BY 1, 2;
 SELECT objid::regrole FROM pg_catalog.pg_dist_object WHERE classid='pg_authid'::regclass::oid AND objid::regrole::text LIKE '%dist\_%' ORDER BY 1;
 \c - - - :worker_1_port

View File

@@ -611,7 +611,7 @@ SELECT c1, c2, c3, c4, -1::float AS c5,
 sum(cardinality),
 sum(sum)
 FROM source_table
-GROUP BY c1, c2, c3, c4, c5, c6
+GROUP BY c1, c2, c3, c4, c6
 ON CONFLICT(c1, c2, c3, c4, c5, c6)
 DO UPDATE SET
 cardinality = enriched.cardinality + excluded.cardinality,
@@ -625,7 +625,7 @@ SELECT c1, c2, c3, c4, -1::float AS c5,
 sum(cardinality),
 sum(sum)
 FROM source_table
-GROUP BY c1, c2, c3, c4, c5, c6
+GROUP BY c1, c2, c3, c4, c6
 ON CONFLICT(c1, c2, c3, c4, c5, c6)
 DO UPDATE SET
 cardinality = enriched.cardinality + excluded.cardinality,

View File

@@ -336,12 +336,14 @@ FULL OUTER JOIN lineitem_hash_partitioned ON (o_orderkey = l_orderkey)
 WHERE o_orderkey IN (1, 2)
 OR l_orderkey IN (2, 3);
+SELECT public.coordinator_plan($Q$
 EXPLAIN (COSTS OFF)
 SELECT count(*)
 FROM orders_hash_partitioned
 FULL OUTER JOIN lineitem_hash_partitioned ON (o_orderkey = l_orderkey)
 WHERE o_orderkey IN (1, 2)
 AND l_orderkey IN (2, 3);
+$Q$);
 SET citus.task_executor_type TO DEFAULT;

View File

@@ -43,7 +43,7 @@ EXPLAIN (COSTS FALSE)
 SELECT sum(l_extendedprice * l_discount) as revenue
 FROM lineitem_hash, orders_hash
 WHERE o_orderkey = l_orderkey
-GROUP BY l_orderkey, o_orderkey, l_shipmode HAVING sum(l_quantity) > 24
+GROUP BY l_orderkey, l_shipmode HAVING sum(l_quantity) > 24
 ORDER BY 1 DESC LIMIT 3;
 EXPLAIN (COSTS FALSE)

View File

@@ -226,14 +226,32 @@ RESET citus.enable_metadata_sync;
 -- the shards and indexes do not show up
 SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_names'::regnamespace ORDER BY relname;
+-- PG16 added one more backend type B_STANDALONE_BACKEND
+-- and also alphabetized the backend types, hence the orders changed
+-- Relevant PG commit:
+-- https://github.com/postgres/postgres/commit/0c679464a837079acc75ff1d45eaa83f79e05690
+SHOW server_version \gset
+SELECT substring(:'server_version', '\d+')::int >= 16 AS server_version_ge_16
+\gset
+\if :server_version_ge_16
+SELECT 4 AS client_backend \gset
+SELECT 5 AS bgworker \gset
+SELECT 12 AS walsender \gset
+\else
+SELECT 3 AS client_backend \gset
+SELECT 4 AS bgworker \gset
+SELECT 9 AS walsender \gset
+\endif
 -- say, we set it to bgworker
 -- the shards and indexes do not show up
-SELECT set_backend_type(4);
+SELECT set_backend_type(:bgworker);
 SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_names'::regnamespace ORDER BY relname;
 -- or, we set it to walsender
 -- the shards and indexes do not show up
-SELECT set_backend_type(9);
+SELECT set_backend_type(:walsender);
 SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_names'::regnamespace ORDER BY relname;
 -- unless the application name starts with citus_shard
@@ -242,7 +260,7 @@ SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_name
 RESET application_name;
 -- but, client backends to see the shards
-SELECT set_backend_type(3);
+SELECT set_backend_type(:client_backend);
 SELECT relname FROM pg_catalog.pg_class WHERE relnamespace = 'mx_hide_shard_names'::regnamespace ORDER BY relname;

View File

@@ -676,7 +676,7 @@ EXPLAIN (COSTS OFF)
 SELECT count(*) FROM keyval1 GROUP BY key HAVING sum(value) > (SELECT sum(value) FROM keyval2 GROUP BY key ORDER BY 1 DESC LIMIT 1);
 EXPLAIN (COSTS OFF)
-SELECT count(*) FROM keyval1 k1 WHERE k1.key = 2 GROUP BY key HAVING sum(value) > (SELECT sum(value) FROM keyval2 k2 WHERE k2.key = 2 GROUP BY key ORDER BY 1 DESC LIMIT 1);
+SELECT count(*) FROM keyval1 k1 WHERE k1.key = 2 HAVING sum(value) > (SELECT sum(value) FROM keyval2 k2 WHERE k2.key = 2 ORDER BY 1 DESC LIMIT 1);
 -- Simple join subquery pushdown
 SELECT

View File

@@ -242,11 +242,13 @@ COMMIT;
 SELECT DISTINCT y FROM test;
 -- non deterministic collations
+SET client_min_messages TO WARNING;
 CREATE COLLATION test_pg12.case_insensitive (
 provider = icu,
 locale = '@colStrength=secondary',
 deterministic = false
 );
+RESET client_min_messages;
 CREATE TABLE col_test (
 id int,

View File

@@ -131,18 +131,6 @@ SELECT create_distributed_table('dist_type_table', 'a');
 SELECT undistribute_table('dist_type_table');
--- test CREATE RULE with ON SELECT
-CREATE TABLE rule_table_1 (a INT);
-CREATE TABLE rule_table_2 (a INT);
-SELECT create_distributed_table('rule_table_2', 'a');
-CREATE RULE "_RETURN" AS ON SELECT TO rule_table_1 DO INSTEAD SELECT * FROM rule_table_2;
--- the CREATE RULE turns rule_table_1 into a view
-ALTER EXTENSION plpgsql ADD VIEW rule_table_1;
-SELECT undistribute_table('rule_table_2');
 -- test CREATE RULE without ON SELECT
 CREATE TABLE rule_table_3 (a INT);
 CREATE TABLE rule_table_4 (a INT);
@@ -155,7 +143,6 @@ ALTER EXTENSION plpgsql ADD TABLE rule_table_3;
 SELECT undistribute_table('rule_table_4');
 ALTER EXTENSION plpgsql DROP VIEW extension_view;
-ALTER EXTENSION plpgsql DROP VIEW rule_table_1;
 ALTER EXTENSION plpgsql DROP TABLE rule_table_3;
 DROP TABLE view_table CASCADE;