From 87a1b631e8d3e2fbc1cd360181f88382d1704f92 Mon Sep 17 00:00:00 2001
From: Onur Tirtir
Date: Mon, 18 Aug 2025 10:29:27 +0300
Subject: [PATCH] Not automatically create citus_columnar when creating citus
 extension (#8081)

DESCRIPTION: Not automatically create citus_columnar when there are no relations using it.

Previously, we always created citus_columnar when creating citus with version >= 11.1, as follows:

* Detach the SQL objects owned by old columnar, i.e., "drop" them from citus, but don't actually drop them from the database.
  * "Old columnar" is the columnar that we had as part of citus before Citus 11.1, i.e., before the access method and its catalog were split into citus_columnar.
* Create citus_columnar and attach the SQL objects left over from old columnar to it, so that we can keep supporting the columnar tables the user had before Citus 11.1 with citus_columnar.

The first part is unchanged; however, we no longer create citus_columnar automatically if the user doesn't have any relations using columnar. For this reason, when updating Citus to 13.2, if these SQL objects are not owned by any extension and there are no relations using the columnar access method, we drop them. The net effect is the same as if we had automatically created citus_columnar and the user had dropped it later, so dropping them should not cause any issues.

(**Update:** It turns out we had made some assumptions in citus, e.g., citus_finish_pg_upgrade() still assumed the columnar metadata exists and tried to apply some fixes to it, so this PR fixes those as well. See the last section of this PR description.)

Ideally, I was hoping to simply remove the code in extension.c that decides to automatically create citus_columnar when creating citus; however, that turned out not to be possible, for the following reasons:

* We still need to automatically create it for the servers that use the columnar access method.
* When that's not the case, we need to clean up the leftover SQL objects from old columnar; otherwise we would keep them around for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we should clean its objects up and let the user decide whether to create citus_columnar later, when they really need it.

---

Similarly, this PR also makes several changes in the test suite, because we no longer always want citus_columnar created in citus tests:

* Columnar-specific test targets, which cover **41** test sql files, now always install columnar by default, via "--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus-specific test targets, because by default we don't want citus_columnar created during citus tests.
* Excluding citus_columnar-specific tests, we have **601** sql files as citus tests, and in **27** of them we manually create citus_columnar at the very beginning of the test, because these tests exercise some Citus functionality together with columnar tables.

Also, the before and after schedules for the PG upgrade tests are now duplicated, so we have two versions of each: one with columnar tests and one without. To choose between them, check-pg-upgrade now supports a "test-with-columnar" option, which can be set to "true"; any other value logically indicates "false". In CI, we run the check-pg-upgrade test target with both options. The purpose is to ensure that we can also test PG upgrades where citus_columnar is not created in the cluster before the upgrade.

Finally, this adds more tests to multi_extension.sql to cover Citus upgrade scenarios with / without columnar tables / the citus_columnar extension.

---

Also, it seems citus_finish_pg_upgrade assumed that citus_columnar is always created, but we should never have made such an assumption.
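In other words, the columnar step inside citus_finish_pg_upgrade needs to be guarded rather than run unconditionally. A minimal PL/pgSQL sketch of the idea (illustrative only, not the exact patch text; the actual patch uses a stricter pg_depend-based check to verify that the function really belongs to citus_columnar):

```sql
-- Illustrative sketch: only run the columnar post-upgrade work
-- when the columnar access method actually exists.
IF EXISTS (SELECT 1 FROM pg_am WHERE amname = 'columnar') THEN
    PERFORM pg_catalog.columnar_finish_pg_upgrade();
END IF;
```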
To fix that, this PR moves the columnar-specific post-PG-upgrade work from citus to a new columnar UDF, columnar_finish_pg_upgrade. To avoid breaking existing customer / managed-service scripts, citus_finish_pg_upgrade continues to automatically perform the post-PG-upgrade work for columnar, but now only if the columnar access method exists.
---
 .github/workflows/build_and_test.yml | 10 +-
 src/backend/columnar/citus_columnar.control | 2 +-
 .../sql/citus_columnar--12.2-1--13.2-1.sql | 3 +
 .../citus_columnar--13.2-1--12.2-1.sql | 3 +
 .../columnar_finish_pg_upgrade/13.2-1.sql | 13 +
 .../columnar_finish_pg_upgrade/latest.sql | 13 +
 src/backend/distributed/commands/extension.c | 134 ++++++++-
 src/backend/distributed/metadata/dependency.c | 5 +-
 .../distributed/sql/citus--13.1-1--13.2-1.sql | 69 +++++
 .../sql/downgrades/citus--13.2-1--13.1-1.sql | 8 +
 .../udfs/citus_finish_pg_upgrade/13.2-1.sql | 260 ++++++++++++++++++
 .../udfs/citus_finish_pg_upgrade/latest.sql | 37 ++-
 .../distributed/commands/utility_hook.h | 1 -
 src/test/regress/Makefile | 41 ++-
 src/test/regress/README.md | 9 +
 ...> after_pg_upgrade_with_columnar_schedule} | 0
 ...after_pg_upgrade_without_columnar_schedule | 11 +
 src/test/regress/base_schedule | 2 +-
 ...before_pg_upgrade_with_columnar_schedule} | 0
 ...efore_pg_upgrade_without_columnar_schedule | 16 ++
 src/test/regress/citus_tests/config.py | 12 +-
 src/test/regress/citus_tests/run_test.py | 17 +-
 .../regress/citus_tests/test/test_columnar.py | 8 +
 .../regress/citus_tests/upgrade/README.md | 13 +-
 .../citus_tests/upgrade/pg_upgrade_test.py | 29 +-
 src/test/regress/columnar_schedule | 7 +-
 .../expected/alter_distributed_table.out | 5 +
 .../alter_table_set_access_method.out | 5 +
 .../expected/citus_depended_object.out | 5 +
 .../citus_non_blocking_split_columnar.out | 13 +
 ...citus_split_shard_columnar_partitioned.out | 5 +
 .../expected/columnar_chunk_filtering.out | 10 +-
 .../expected/columnar_chunk_filtering_0.out | 10 +-
 .../expected/columnar_citus_integration.out | 60 ++++
 src/test/regress/expected/columnar_drop.out | 3 -
 .../regress/expected/columnar_indexes.out | 47 ----
 src/test/regress/expected/columnar_memory.out | 4 +-
 .../expected/columnar_partitioning.out | 33 +--
 .../expected/columnar_test_helpers.out | 20 ++
 .../create_distributed_table_concurrently.out | 5 +
 .../drop_column_partitioned_table.out | 5 +
 .../expected/dropped_columns_create_load.out | 6 -
 .../ensure_citus_columnar_not_exists.out | 28 ++
 .../regress/expected/follower_single_node.out | 3 +
 .../regress/expected/generated_identity.out | 5 +
 .../insert_select_into_local_table.out | 5 +
 src/test/regress/expected/issue_5248.out | 5 +
 src/test/regress/expected/merge.out | 5 +
 .../regress/expected/merge_repartition1.out | 5 +
 .../regress/expected/multi_drop_extension.out | 5 +
 src/test/regress/expected/multi_extension.out | 148 +++++++++-
 .../multi_fix_partition_shard_index_names.out | 38 +--
 src/test/regress/expected/multi_multiuser.out | 5 +
 .../expected/multi_tenant_isolation.out | 5 +
 .../multi_tenant_isolation_nonblocking.out | 5 +
 src/test/regress/expected/mx_regular_user.out | 5 +
 src/test/regress/expected/pg12.out | 5 +
 src/test/regress/expected/pg14.out | 5 +
 src/test/regress/expected/pg17.out | 67 +++--
 src/test/regress/expected/pg17_0.out | 3 +
 src/test/regress/expected/pg_dump.out | 5 +
 .../regress/expected/recurring_outer_join.out | 5 +
 ...me_public_to_citus_schema_and_recreate.out | 2 +
 .../expected/upgrade_columnar_before.out | 3 +
 .../expected/upgrade_list_citus_objects.out | 5 +-
 src/test/regress/minimal_columnar_schedule | 1 +
 src/test/regress/minimal_schedule | 2 +-
 src/test/regress/multi_1_schedule | 2 +
 src/test/regress/split_schedule | 2 +-
 .../regress/sql/alter_distributed_table.sql | 7 +
 .../sql/alter_table_set_access_method.sql | 7 +
 .../regress/sql/citus_depended_object.sql | 7 +
 .../sql/citus_non_blocking_split_columnar.sql | 11 +
 ...citus_split_shard_columnar_partitioned.sql | 7 +
 .../regress/sql/columnar_chunk_filtering.sql | 2 +-
 .../sql/columnar_citus_integration.sql | 51 ++++
 src/test/regress/sql/columnar_indexes.sql | 40 ---
 src/test/regress/sql/columnar_memory.sql | 4 +-
 .../regress/sql/columnar_partitioning.sql | 6 +-
 .../regress/sql/columnar_test_helpers.sql | 21 ++
 .../create_distributed_table_concurrently.sql | 7 +
 .../sql/drop_column_partitioned_table.sql | 7 +
 .../sql/dropped_columns_create_load.sql | 1 -
 .../sql/ensure_citus_columnar_not_exists.sql | 9 +
 src/test/regress/sql/follower_single_node.sql | 4 +
 src/test/regress/sql/generated_identity.sql | 7 +
 .../sql/insert_select_into_local_table.sql | 7 +
 src/test/regress/sql/issue_5248.sql | 7 +
 src/test/regress/sql/merge.sql | 7 +
 src/test/regress/sql/merge_repartition1.sql | 7 +
 src/test/regress/sql/multi_drop_extension.sql | 7 +
 src/test/regress/sql/multi_extension.sql | 76 ++++-
 .../multi_fix_partition_shard_index_names.sql | 15 +-
 src/test/regress/sql/multi_multiuser.sql | 7 +
 .../regress/sql/multi_tenant_isolation.sql | 7 +
 .../multi_tenant_isolation_nonblocking.sql | 7 +
 src/test/regress/sql/mx_regular_user.sql | 7 +
 src/test/regress/sql/pg12.sql | 6 +
 src/test/regress/sql/pg14.sql | 7 +
 src/test/regress/sql/pg17.sql | 31 ++-
 src/test/regress/sql/pg_dump.sql | 7 +
 src/test/regress/sql/recurring_outer_join.sql | 7 +
 ...me_public_to_citus_schema_and_recreate.sql | 2 +
 .../regress/sql/upgrade_columnar_before.sql | 4 +
 .../sql/upgrade_list_citus_objects.sql | 1 +
 105 files changed, 1486 insertions(+), 264 deletions(-)
 create mode 100644 src/backend/columnar/sql/citus_columnar--12.2-1--13.2-1.sql
 create mode 100644 src/backend/columnar/sql/downgrades/citus_columnar--13.2-1--12.2-1.sql
 create mode 100644 src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/13.2-1.sql
 create mode 100644 src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/latest.sql
 create mode 100644 src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/13.2-1.sql
 rename src/test/regress/{after_pg_upgrade_schedule => after_pg_upgrade_with_columnar_schedule} (100%)
 create mode 100644 src/test/regress/after_pg_upgrade_without_columnar_schedule
 rename src/test/regress/{before_pg_upgrade_schedule => before_pg_upgrade_with_columnar_schedule} (100%)
 create mode 100644 src/test/regress/before_pg_upgrade_without_columnar_schedule
 create mode 100644 src/test/regress/expected/ensure_citus_columnar_not_exists.out
 create mode 100644 src/test/regress/expected/rename_public_to_citus_schema_and_recreate.out
 create mode 100644 src/test/regress/minimal_columnar_schedule
 create mode 100644 src/test/regress/sql/ensure_citus_columnar_not_exists.sql
 create mode 100644 src/test/regress/sql/rename_public_to_citus_schema_and_recreate.sql

diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml
index d0f6a4cd5..31ecabf0d 100644
--- a/.github/workflows/build_and_test.yml
+++ b/.github/workflows/build_and_test.yml
@@ -331,7 +331,15 @@ jobs:
           make -C src/test/regress \
             check-pg-upgrade \
             old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
-            new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin
+            new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin \
+            test-with-columnar=false
+
+          gosu circleci \
+            make -C src/test/regress \
+              check-pg-upgrade \
+              old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
+              new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin \
+              test-with-columnar=true
 - name: Copy pg_upgrade logs for newData dir
   run: |-
     mkdir -p /tmp/pg_upgrade_newData_logs
diff --git a/src/backend/columnar/citus_columnar.control b/src/backend/columnar/citus_columnar.control
index c96d839d1..9047037a0 100644
--- a/src/backend/columnar/citus_columnar.control
+++ b/src/backend/columnar/citus_columnar.control
@@ -1,6 +1,6 @@
 # Columnar extension
 comment = 'Citus Columnar extension'
-default_version = '12.2-1'
+default_version = '13.2-1'
 module_pathname = '$libdir/citus_columnar'
 relocatable = false
 schema = pg_catalog
diff --git a/src/backend/columnar/sql/citus_columnar--12.2-1--13.2-1.sql b/src/backend/columnar/sql/citus_columnar--12.2-1--13.2-1.sql
new file mode 100644
index 000000000..037a70930
--- /dev/null
+++ b/src/backend/columnar/sql/citus_columnar--12.2-1--13.2-1.sql
@@ -0,0 +1,3 @@
+-- citus_columnar--12.2-1--13.2-1.sql
+
+#include "udfs/columnar_finish_pg_upgrade/13.2-1.sql"
diff --git a/src/backend/columnar/sql/downgrades/citus_columnar--13.2-1--12.2-1.sql b/src/backend/columnar/sql/downgrades/citus_columnar--13.2-1--12.2-1.sql
new file mode 100644
index 000000000..8258e7d2b
--- /dev/null
+++ b/src/backend/columnar/sql/downgrades/citus_columnar--13.2-1--12.2-1.sql
@@ -0,0 +1,3 @@
+-- citus_columnar--13.2-1--12.2-1.sql
+
+DROP FUNCTION IF EXISTS pg_catalog.columnar_finish_pg_upgrade();
diff --git a/src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/13.2-1.sql b/src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/13.2-1.sql
new file mode 100644
index 000000000..287deb7b6
--- /dev/null
+++ b/src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/13.2-1.sql
@@ -0,0 +1,13 @@
+CREATE OR REPLACE FUNCTION pg_catalog.columnar_finish_pg_upgrade()
+    RETURNS void
+    LANGUAGE plpgsql
+    SET search_path = pg_catalog
+    AS $cppu$
+BEGIN
+    -- set dependencies for columnar table access method
+    PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
+END;
+$cppu$;
+
+COMMENT ON FUNCTION pg_catalog.columnar_finish_pg_upgrade()
+    IS 'perform tasks to properly complete a Postgres upgrade for columnar extension';
diff --git a/src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/latest.sql b/src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/latest.sql
new file mode 100644
index 000000000..287deb7b6
--- /dev/null
+++ b/src/backend/columnar/sql/udfs/columnar_finish_pg_upgrade/latest.sql
@@ -0,0 +1,13 @@
+CREATE OR REPLACE FUNCTION pg_catalog.columnar_finish_pg_upgrade()
+    RETURNS void
+    LANGUAGE plpgsql
+    SET search_path = pg_catalog
+    AS $cppu$
+BEGIN
+    -- set dependencies for columnar table access method
+    PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
+END;
+$cppu$;
+
+COMMENT ON FUNCTION pg_catalog.columnar_finish_pg_upgrade()
+    IS 'perform tasks to properly complete a Postgres upgrade for columnar extension';
diff --git a/src/backend/distributed/commands/extension.c b/src/backend/distributed/commands/extension.c
index 17f9ff575..d5bb50317 100644
--- a/src/backend/distributed/commands/extension.c
+++ b/src/backend/distributed/commands/extension.c
@@ -25,6 +25,12 @@
 #include "utils/lsyscache.h"
 #include "utils/syscache.h"
+#include "pg_version_constants.h"
+
+#if PG_VERSION_NUM < PG_VERSION_17
+#include "catalog/pg_am_d.h"
+#endif
+
 #include "citus_version.h"
 #include "columnar/columnar.h"
@@ -52,6 +58,10 @@ static void MarkExistingObjectDependenciesDistributedIfSupported(void);
 static List * GetAllViews(void);
 static bool ShouldPropagateExtensionCommand(Node *parseTree);
 static bool IsAlterExtensionSetSchemaCitus(Node *parseTree);
+static bool HasAnyRelationsUsingOldColumnar(void);
+static Oid GetOldColumnarAMIdIfExists(void);
+static bool AccessMethodDependsOnAnyExtensions(Oid accessMethodId);
+static bool HasAnyRelationsUsingAccessMethod(Oid accessMethodId);
 static Node * RecreateExtensionStmt(Oid extensionOid);
 static List * GenerateGrantCommandsOnExtensionDependentFDWs(Oid extensionId);
@@ -783,7 +793,8 @@ PreprocessCreateExtensionStmtForCitusColumnar(Node *parsetree)
     /*citus version >= 11.1 requires install citus_columnar first*/
     if (versionNumber >= 1110 && !CitusHasBeenLoaded())
     {
-        if (get_extension_oid("citus_columnar", true) == InvalidOid)
+        if (get_extension_oid("citus_columnar", true) == InvalidOid &&
+            (versionNumber < 1320 || HasAnyRelationsUsingOldColumnar()))
         {
             CreateExtensionWithVersion("citus_columnar", NULL);
         }
@@ -894,9 +905,10 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
     double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion));
     /*alter extension citus update to version >= 11.1-1, and no citus_columnar installed */
-    if (newVersionNumber >= 1110 && citusColumnarOid == InvalidOid)
+    if (newVersionNumber >= 1110 && citusColumnarOid == InvalidOid &&
+        (newVersionNumber < 1320 || HasAnyRelationsUsingOldColumnar()))
     {
-        /*it's upgrade citus to 11.1-1 or further version */
+        /*it's upgrade citus to 11.1-1 or further version and there are relations using old columnar */
         CreateExtensionWithVersion("citus_columnar", CITUS_COLUMNAR_INTERNAL_VERSION);
     }
     else if (newVersionNumber < 1110 && citusColumnarOid != InvalidOid)
@@ -911,7 +923,8 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
     int versionNumber = (int) (100 * strtod(CITUS_MAJORVERSION, NULL));
     if (versionNumber >= 1110)
     {
-        if (citusColumnarOid == InvalidOid)
+        if (citusColumnarOid == InvalidOid &&
+            (versionNumber < 1320 || HasAnyRelationsUsingOldColumnar()))
         {
             CreateExtensionWithVersion("citus_columnar", CITUS_COLUMNAR_INTERNAL_VERSION);
@@ -921,6 +934,117 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
 }
+
+/*
+ * HasAnyRelationsUsingOldColumnar returns true if there are any relations
+ * using the old columnar access method.
+ */
+static bool
+HasAnyRelationsUsingOldColumnar(void)
+{
+    Oid oldColumnarAMId = GetOldColumnarAMIdIfExists();
+    return OidIsValid(oldColumnarAMId) &&
+           HasAnyRelationsUsingAccessMethod(oldColumnarAMId);
+}
+
+
+/*
+ * GetOldColumnarAMIdIfExists returns the oid of the old columnar access
+ * method, i.e., the columnar access method that we had as part of "citus"
+ * extension before we split it into "citus_columnar" at version 11.1, if
+ * it exists. Otherwise, it returns InvalidOid.
+ *
+ * We know that it's "old columnar" only if the access method doesn't depend
+ * on any extensions. This is because, in citus--11.0-4--11.1-1.sql, we
+ * detach the columnar objects (including the access method) from citus
+ * in preparation for splitting of the columnar into a separate extension.
+ */
+static Oid
+GetOldColumnarAMIdIfExists(void)
+{
+    Oid columnarAMId = get_am_oid("columnar", true);
+    if (OidIsValid(columnarAMId) && !AccessMethodDependsOnAnyExtensions(columnarAMId))
+    {
+        return columnarAMId;
+    }
+
+    return InvalidOid;
+}
+
+
+/*
+ * AccessMethodDependsOnAnyExtensions returns true if the access method
+ * with the given accessMethodId depends on any extensions.
+ */
+static bool
+AccessMethodDependsOnAnyExtensions(Oid accessMethodId)
+{
+    ScanKeyData key[3];
+
+    Relation pgDepend = table_open(DependRelationId, AccessShareLock);
+
+    ScanKeyInit(&key[0],
+                Anum_pg_depend_classid,
+                BTEqualStrategyNumber, F_OIDEQ,
+                ObjectIdGetDatum(AccessMethodRelationId));
+    ScanKeyInit(&key[1],
+                Anum_pg_depend_objid,
+                BTEqualStrategyNumber, F_OIDEQ,
+                ObjectIdGetDatum(accessMethodId));
+    ScanKeyInit(&key[2],
+                Anum_pg_depend_objsubid,
+                BTEqualStrategyNumber, F_INT4EQ,
+                Int32GetDatum(0));
+
+    SysScanDesc scan = systable_beginscan(pgDepend, DependDependerIndexId, true,
+                                          NULL, 3, key);
+
+    bool result = false;
+
+    HeapTuple heapTuple = NULL;
+    while (HeapTupleIsValid(heapTuple = systable_getnext(scan)))
+    {
+        Form_pg_depend dependForm = (Form_pg_depend) GETSTRUCT(heapTuple);
+
+        if (dependForm->refclassid == ExtensionRelationId)
+        {
+            result = true;
+            break;
+        }
+    }
+
+    systable_endscan(scan);
+    table_close(pgDepend, AccessShareLock);
+
+    return result;
+}
+
+
+/*
+ * HasAnyRelationsUsingAccessMethod returns true if there are any relations
+ * using the access method with the given accessMethodId.
+ */
+static bool
+HasAnyRelationsUsingAccessMethod(Oid accessMethodId)
+{
+    ScanKeyData key[1];
+    Relation pgClass = table_open(RelationRelationId, AccessShareLock);
+    ScanKeyInit(&key[0],
+                Anum_pg_class_relam,
+                BTEqualStrategyNumber, F_OIDEQ,
+                ObjectIdGetDatum(accessMethodId));
+
+    SysScanDesc scan = systable_beginscan(pgClass, InvalidOid, false, NULL, 1, key);
+
+    bool result = HeapTupleIsValid(systable_getnext(scan));
+
+    systable_endscan(scan);
+    table_close(pgClass, AccessShareLock);
+
+    return result;
+}
+
+
 /*
  * PostprocessAlterExtensionCitusStmtForCitusColumnar process the case when upgrade citus
  * to version that support citus_columnar, or downgrade citus to lower version that
@@ -959,7 +1083,7 @@
 {
     /*alter extension citus update, need upgrade citus_columnar from Y to Z*/
     int versionNumber = (int) (100 * strtod(CITUS_MAJORVERSION, NULL));
-    if (versionNumber >= 1110)
+    if (versionNumber >= 1110 && citusColumnarOid != InvalidOid)
     {
         char *curColumnarVersion = get_extension_version(citusColumnarOid);
         if (strcmp(curColumnarVersion, CITUS_COLUMNAR_INTERNAL_VERSION) == 0)
diff --git a/src/backend/distributed/metadata/dependency.c b/src/backend/distributed/metadata/dependency.c
index 36db39bab..ee334f1b0 100644
--- a/src/backend/distributed/metadata/dependency.c
+++ b/src/backend/distributed/metadata/dependency.c
@@ -1249,8 +1249,9 @@ IsObjectAddressOwnedByCitus(const ObjectAddress *objectAddress)
         return false;
     }
-    bool ownedByCitus = extObjectAddress.objectId == citusId;
-    bool ownedByCitusColumnar = extObjectAddress.objectId == citusColumnarId;
+    bool ownedByCitus = OidIsValid(citusId) && extObjectAddress.objectId == citusId;
+    bool ownedByCitusColumnar = OidIsValid(citusColumnarId) &&
+                                extObjectAddress.objectId == citusColumnarId;
     return ownedByCitus || ownedByCitusColumnar;
 }
diff --git a/src/backend/distributed/sql/citus--13.1-1--13.2-1.sql b/src/backend/distributed/sql/citus--13.1-1--13.2-1.sql
index 2f507eb24..f415fff88 100644
--- a/src/backend/distributed/sql/citus--13.1-1--13.2-1.sql
+++ b/src/backend/distributed/sql/citus--13.1-1--13.2-1.sql
@@ -1,3 +1,72 @@
 -- citus--13.1-1--13.2-1
 -- bump version to 13.2-1
 #include "udfs/worker_last_saved_explain_analyze/13.2-1.sql"
+
+#include "udfs/citus_finish_pg_upgrade/13.2-1.sql"
+
+DO $drop_leftover_old_columnar_objects$
+BEGIN
+    -- If old columnar exists, i.e., the columnar access method that we had before Citus 11.1,
+    -- and we don't have any relations using the old columnar, then we want to drop the columnar
+    -- objects. This is because, we don't want to automatically create the "citus_columnar"
+    -- extension together with the "citus" extension anymore. And for the cases where we don't
+    -- want to automatically create the "citus_columnar" extension, there is no point of keeping
+    -- the columnar objects that we had before Citus 11.1 around.
+    IF (
+        SELECT EXISTS (
+            SELECT 1 FROM pg_am
+            WHERE
+                -- looking for an access method whose name is "columnar" ..
+                pg_am.amname = 'columnar' AND
+                -- .. and there should *NOT* be such a dependency edge in pg_depend, where ..
+                NOT EXISTS (
+                    SELECT 1 FROM pg_depend
+                    WHERE
+                        -- .. the depender is columnar access method (2601 = access method class) ..
+                        pg_depend.classid = 2601 AND pg_depend.objid = pg_am.oid AND pg_depend.objsubid = 0 AND
+                        -- .. and the dependee is an extension (3079 = extension class)
+                        pg_depend.refclassid = 3079 AND pg_depend.refobjsubid = 0
+                    LIMIT 1
+                ) AND
+                -- .. and there should *NOT* be any relations using it
+                NOT EXISTS (
+                    SELECT 1
+                    FROM pg_class
+                    WHERE pg_class.relam = pg_am.oid
+                    LIMIT 1
+                )
+        )
+    )
+    THEN
+        -- Below we drop the columnar objects in such an order that the objects that depend on
+        -- other objects are dropped first.
+
+        DROP VIEW IF EXISTS columnar.options;
+        DROP VIEW IF EXISTS columnar.stripe;
+        DROP VIEW IF EXISTS columnar.chunk_group;
+        DROP VIEW IF EXISTS columnar.chunk;
+        DROP VIEW IF EXISTS columnar.storage;
+
+        DROP ACCESS METHOD IF EXISTS columnar;
+
+        DROP SEQUENCE IF EXISTS columnar_internal.storageid_seq;
+
+        DROP TABLE IF EXISTS columnar_internal.options;
+        DROP TABLE IF EXISTS columnar_internal.stripe;
+        DROP TABLE IF EXISTS columnar_internal.chunk_group;
+        DROP TABLE IF EXISTS columnar_internal.chunk;
+
+        DROP FUNCTION IF EXISTS columnar_internal.columnar_handler;
+
+        DROP FUNCTION IF EXISTS pg_catalog.alter_columnar_table_set;
+        DROP FUNCTION IF EXISTS pg_catalog.alter_columnar_table_reset;
+        DROP FUNCTION IF EXISTS columnar.get_storage_id;
+
+        DROP FUNCTION IF EXISTS citus_internal.upgrade_columnar_storage;
+        DROP FUNCTION IF EXISTS citus_internal.downgrade_columnar_storage;
+        DROP FUNCTION IF EXISTS citus_internal.columnar_ensure_am_depends_catalog;
+
+        DROP SCHEMA IF EXISTS columnar;
+        DROP SCHEMA IF EXISTS columnar_internal;
+    END IF;
+END $drop_leftover_old_columnar_objects$;
diff --git a/src/backend/distributed/sql/downgrades/citus--13.2-1--13.1-1.sql b/src/backend/distributed/sql/downgrades/citus--13.2-1--13.1-1.sql
index de26b790a..032c45e60 100644
--- a/src/backend/distributed/sql/downgrades/citus--13.2-1--13.1-1.sql
+++ b/src/backend/distributed/sql/downgrades/citus--13.2-1--13.1-1.sql
@@ -3,3 +3,11 @@
 DROP FUNCTION IF EXISTS pg_catalog.worker_last_saved_explain_analyze();
 #include "../udfs/worker_last_saved_explain_analyze/9.4-1.sql"
+
+#include "../udfs/citus_finish_pg_upgrade/13.1-1.sql"
+
+-- Note that we intentionally don't add the old columnar objects back to the "citus"
+-- extension in this downgrade script, even if they were present in the older version.
+--
+-- If the user wants to create "citus_columnar" extension later, "citus_columnar"
+-- will anyway properly create them at the scope of that extension.
diff --git a/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/13.2-1.sql b/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/13.2-1.sql new file mode 100644 index 000000000..38daeb86c --- /dev/null +++ b/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/13.2-1.sql @@ -0,0 +1,260 @@ +CREATE OR REPLACE FUNCTION pg_catalog.citus_finish_pg_upgrade() + RETURNS void + LANGUAGE plpgsql + SET search_path = pg_catalog + AS $cppu$ +DECLARE + table_name regclass; + command text; + trigger_name text; +BEGIN + + + IF substring(current_Setting('server_version'), '\d+')::int >= 14 THEN + EXECUTE $cmd$ + -- disable propagation to prevent EnsureCoordinator errors + -- the aggregate created here does not depend on Citus extension (yet) + -- since we add the dependency with the next command + SET citus.enable_ddl_propagation TO OFF; + CREATE AGGREGATE array_cat_agg(anycompatiblearray) (SFUNC = array_cat, STYPE = anycompatiblearray); + COMMENT ON AGGREGATE array_cat_agg(anycompatiblearray) + IS 'concatenate input arrays into a single array'; + RESET citus.enable_ddl_propagation; + $cmd$; + ELSE + EXECUTE $cmd$ + SET citus.enable_ddl_propagation TO OFF; + CREATE AGGREGATE array_cat_agg(anyarray) (SFUNC = array_cat, STYPE = anyarray); + COMMENT ON AGGREGATE array_cat_agg(anyarray) + IS 'concatenate input arrays into a single array'; + RESET citus.enable_ddl_propagation; + $cmd$; + END IF; + + -- + -- Citus creates the array_cat_agg but because of a compatibility + -- issue between pg13-pg14, we drop and create it during upgrade. + -- And as Citus creates it, there needs to be a dependency to the + -- Citus extension, so we create that dependency here. + -- We are not using: + -- ALTER EXENSION citus DROP/CREATE AGGREGATE array_cat_agg + -- because we don't have an easy way to check if the aggregate + -- exists with anyarray type or anycompatiblearray type. 
+ + INSERT INTO pg_depend + SELECT + 'pg_proc'::regclass::oid as classid, + (SELECT oid FROM pg_proc WHERE proname = 'array_cat_agg') as objid, + 0 as objsubid, + 'pg_extension'::regclass::oid as refclassid, + (select oid from pg_extension where extname = 'citus') as refobjid, + 0 as refobjsubid , + 'e' as deptype; + + -- PG16 has its own any_value, so only create it pre PG16. + -- We can remove this part when we drop support for PG16 + IF substring(current_Setting('server_version'), '\d+')::int < 16 THEN + EXECUTE $cmd$ + -- disable propagation to prevent EnsureCoordinator errors + -- the aggregate created here does not depend on Citus extension (yet) + -- since we add the dependency with the next command + SET citus.enable_ddl_propagation TO OFF; + CREATE OR REPLACE FUNCTION pg_catalog.any_value_agg ( anyelement, anyelement ) + RETURNS anyelement AS $$ + SELECT CASE WHEN $1 IS NULL THEN $2 ELSE $1 END; + $$ LANGUAGE SQL STABLE; + + CREATE AGGREGATE pg_catalog.any_value ( + sfunc = pg_catalog.any_value_agg, + combinefunc = pg_catalog.any_value_agg, + basetype = anyelement, + stype = anyelement + ); + COMMENT ON AGGREGATE pg_catalog.any_value(anyelement) IS + 'Returns the value of any row in the group. It is mostly useful when you know there will be only 1 element.'; + RESET citus.enable_ddl_propagation; + -- + -- Citus creates the any_value aggregate but because of a compatibility + -- issue between pg15-pg16 -- any_value is created in PG16, we drop + -- and create it during upgrade IF upgraded version is less than 16. + -- And as Citus creates it, there needs to be a dependency to the + -- Citus extension, so we create that dependency here. 
+ + INSERT INTO pg_depend + SELECT + 'pg_proc'::regclass::oid as classid, + (SELECT oid FROM pg_proc WHERE proname = 'any_value_agg') as objid, + 0 as objsubid, + 'pg_extension'::regclass::oid as refclassid, + (select oid from pg_extension where extname = 'citus') as refobjid, + 0 as refobjsubid , + 'e' as deptype; + + INSERT INTO pg_depend + SELECT + 'pg_proc'::regclass::oid as classid, + (SELECT oid FROM pg_proc WHERE proname = 'any_value') as objid, + 0 as objsubid, + 'pg_extension'::regclass::oid as refclassid, + (select oid from pg_extension where extname = 'citus') as refobjid, + 0 as refobjsubid , + 'e' as deptype; + $cmd$; + END IF; + + -- + -- restore citus catalog tables + -- + INSERT INTO pg_catalog.pg_dist_partition SELECT * FROM public.pg_dist_partition; + + -- if we are upgrading from PG14/PG15 to PG16+, + -- we need to regenerate the partkeys because they will include varnullingrels as well. + UPDATE pg_catalog.pg_dist_partition + SET partkey = column_name_to_column(pg_dist_partkeys_pre_16_upgrade.logicalrelid, col_name) + FROM public.pg_dist_partkeys_pre_16_upgrade + WHERE pg_dist_partkeys_pre_16_upgrade.logicalrelid = pg_dist_partition.logicalrelid; + DROP TABLE public.pg_dist_partkeys_pre_16_upgrade; + + INSERT INTO pg_catalog.pg_dist_shard SELECT * FROM public.pg_dist_shard; + INSERT INTO pg_catalog.pg_dist_placement SELECT * FROM public.pg_dist_placement; + INSERT INTO pg_catalog.pg_dist_node_metadata SELECT * FROM public.pg_dist_node_metadata; + INSERT INTO pg_catalog.pg_dist_node SELECT * FROM public.pg_dist_node; + INSERT INTO pg_catalog.pg_dist_local_group SELECT * FROM public.pg_dist_local_group; + INSERT INTO pg_catalog.pg_dist_transaction SELECT * FROM public.pg_dist_transaction; + INSERT INTO pg_catalog.pg_dist_colocation SELECT * FROM public.pg_dist_colocation; + INSERT INTO pg_catalog.pg_dist_cleanup SELECT * FROM public.pg_dist_cleanup; + INSERT INTO pg_catalog.pg_dist_schema SELECT schemaname::regnamespace, colocationid FROM 
public.pg_dist_schema; + -- enterprise catalog tables + INSERT INTO pg_catalog.pg_dist_authinfo SELECT * FROM public.pg_dist_authinfo; + INSERT INTO pg_catalog.pg_dist_poolinfo SELECT * FROM public.pg_dist_poolinfo; + + -- Temporarily disable trigger to check for validity of functions while + -- inserting. The current contents of the table might be invalid if one of + -- the functions was removed by the user without also removing the + -- rebalance strategy. Obviously that's not great, but it should be no + -- reason to fail the upgrade. + ALTER TABLE pg_catalog.pg_dist_rebalance_strategy DISABLE TRIGGER pg_dist_rebalance_strategy_validation_trigger; + INSERT INTO pg_catalog.pg_dist_rebalance_strategy SELECT + name, + default_strategy, + shard_cost_function::regprocedure::regproc, + node_capacity_function::regprocedure::regproc, + shard_allowed_on_node_function::regprocedure::regproc, + default_threshold, + minimum_threshold, + improvement_threshold + FROM public.pg_dist_rebalance_strategy; + ALTER TABLE pg_catalog.pg_dist_rebalance_strategy ENABLE TRIGGER pg_dist_rebalance_strategy_validation_trigger; + + -- + -- drop backup tables + -- + DROP TABLE public.pg_dist_authinfo; + DROP TABLE public.pg_dist_colocation; + DROP TABLE public.pg_dist_local_group; + DROP TABLE public.pg_dist_node; + DROP TABLE public.pg_dist_node_metadata; + DROP TABLE public.pg_dist_partition; + DROP TABLE public.pg_dist_placement; + DROP TABLE public.pg_dist_poolinfo; + DROP TABLE public.pg_dist_shard; + DROP TABLE public.pg_dist_transaction; + DROP TABLE public.pg_dist_rebalance_strategy; + DROP TABLE public.pg_dist_cleanup; + DROP TABLE public.pg_dist_schema; + -- + -- reset sequences + -- + PERFORM setval('pg_catalog.pg_dist_shardid_seq', (SELECT MAX(shardid)+1 AS max_shard_id FROM pg_dist_shard), false); + PERFORM setval('pg_catalog.pg_dist_placement_placementid_seq', (SELECT MAX(placementid)+1 AS max_placement_id FROM pg_dist_placement), false); + PERFORM 
setval('pg_catalog.pg_dist_groupid_seq', (SELECT MAX(groupid)+1 AS max_group_id FROM pg_dist_node), false); + PERFORM setval('pg_catalog.pg_dist_node_nodeid_seq', (SELECT MAX(nodeid)+1 AS max_node_id FROM pg_dist_node), false); + PERFORM setval('pg_catalog.pg_dist_colocationid_seq', (SELECT MAX(colocationid)+1 AS max_colocation_id FROM pg_dist_colocation), false); + PERFORM setval('pg_catalog.pg_dist_operationid_seq', (SELECT MAX(operation_id)+1 AS max_operation_id FROM pg_dist_cleanup), false); + PERFORM setval('pg_catalog.pg_dist_cleanup_recordid_seq', (SELECT MAX(record_id)+1 AS max_record_id FROM pg_dist_cleanup), false); + PERFORM setval('pg_catalog.pg_dist_clock_logical_seq', (SELECT last_value FROM public.pg_dist_clock_logical_seq), false); + DROP TABLE public.pg_dist_clock_logical_seq; + + + + -- + -- register triggers + -- + FOR table_name IN SELECT logicalrelid FROM pg_catalog.pg_dist_partition JOIN pg_class ON (logicalrelid = oid) WHERE relkind <> 'f' + LOOP + trigger_name := 'truncate_trigger_' || table_name::oid; + command := 'create trigger ' || trigger_name || ' after truncate on ' || table_name || ' execute procedure pg_catalog.citus_truncate_trigger()'; + EXECUTE command; + command := 'update pg_trigger set tgisinternal = true where tgname = ' || quote_literal(trigger_name); + EXECUTE command; + END LOOP; + + -- + -- set dependencies + -- + INSERT INTO pg_depend + SELECT + 'pg_class'::regclass::oid as classid, + p.logicalrelid::regclass::oid as objid, + 0 as objsubid, + 'pg_extension'::regclass::oid as refclassid, + (select oid from pg_extension where extname = 'citus') as refobjid, + 0 as refobjsubid , + 'n' as deptype + FROM pg_catalog.pg_dist_partition p; + + -- If citus_columnar extension exists, then perform the post PG-upgrade work for columnar as well. + -- + -- First look if pg_catalog.columnar_finish_pg_upgrade function exists as part of the citus_columnar + -- extension. 
(We check whether it's part of the extension just for security reasons). If it does, then + -- call it. If not, then look for columnar_internal.columnar_ensure_am_depends_catalog function as + -- part of the citus_columnar extension. If so, then call it. We alternatively check for the latter UDF + -- just because pg_catalog.columnar_finish_pg_upgrade function is introduced in citus_columnar 13.2-1 + -- and as of today all it does is call columnar_internal.columnar_ensure_am_depends_catalog function. + IF EXISTS ( + SELECT 1 FROM pg_depend + JOIN pg_proc ON (pg_depend.objid = pg_proc.oid) + JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid) + JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid) + WHERE + -- Looking if pg_catalog.columnar_finish_pg_upgrade function exists and + -- if there is a dependency record from it (proc class = 1255) .. + pg_depend.classid = 1255 AND pg_namespace.nspname = 'pg_catalog' AND pg_proc.proname = 'columnar_finish_pg_upgrade' AND + -- .. to citus_columnar extension (3079 = extension class), if it exists. + pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar' + ) + THEN PERFORM pg_catalog.columnar_finish_pg_upgrade(); + ELSIF EXISTS ( + SELECT 1 FROM pg_depend + JOIN pg_proc ON (pg_depend.objid = pg_proc.oid) + JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid) + JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid) + WHERE + -- Looking if columnar_internal.columnar_ensure_am_depends_catalog function exists and + -- if there is a dependency record from it (proc class = 1255) .. + pg_depend.classid = 1255 AND pg_namespace.nspname = 'columnar_internal' AND pg_proc.proname = 'columnar_ensure_am_depends_catalog' AND + -- .. to citus_columnar extension (3079 = extension class), if it exists. 
+ pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar' + ) + THEN PERFORM columnar_internal.columnar_ensure_am_depends_catalog(); + END IF; + + -- restore pg_dist_object from the stable identifiers + TRUNCATE pg_catalog.pg_dist_object; + INSERT INTO pg_catalog.pg_dist_object (classid, objid, objsubid, distribution_argument_index, colocationid) + SELECT + address.classid, + address.objid, + address.objsubid, + naming.distribution_argument_index, + naming.colocationid + FROM + public.pg_dist_object naming, + pg_catalog.pg_get_object_address(naming.type, naming.object_names, naming.object_args) address; + + DROP TABLE public.pg_dist_object; +END; +$cppu$; + +COMMENT ON FUNCTION pg_catalog.citus_finish_pg_upgrade() + IS 'perform tasks to restore citus settings from a location that has been prepared before pg_upgrade'; diff --git a/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/latest.sql b/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/latest.sql index 4d3a17bd4..38daeb86c 100644 --- a/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/latest.sql +++ b/src/backend/distributed/sql/udfs/citus_finish_pg_upgrade/latest.sql @@ -203,8 +203,41 @@ BEGIN 'n' as deptype FROM pg_catalog.pg_dist_partition p; - -- set dependencies for columnar table access method - PERFORM columnar_internal.columnar_ensure_am_depends_catalog(); + -- If citus_columnar extension exists, then perform the post PG-upgrade work for columnar as well. + -- + -- First look if pg_catalog.columnar_finish_pg_upgrade function exists as part of the citus_columnar + -- extension. (We check whether it's part of the extension just for security reasons). If it does, then + -- call it. If not, then look for columnar_internal.columnar_ensure_am_depends_catalog function as + -- part of the citus_columnar extension. If so, then call it. 
We alternatively check for the latter UDF + -- just because pg_catalog.columnar_finish_pg_upgrade function is introduced in citus_columnar 13.2-1 + -- and as of today all it does is call columnar_internal.columnar_ensure_am_depends_catalog function. + IF EXISTS ( + SELECT 1 FROM pg_depend + JOIN pg_proc ON (pg_depend.objid = pg_proc.oid) + JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid) + JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid) + WHERE + -- Looking if pg_catalog.columnar_finish_pg_upgrade function exists and + -- if there is a dependency record from it (proc class = 1255) .. + pg_depend.classid = 1255 AND pg_namespace.nspname = 'pg_catalog' AND pg_proc.proname = 'columnar_finish_pg_upgrade' AND + -- .. to citus_columnar extension (3079 = extension class), if it exists. + pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar' + ) + THEN PERFORM pg_catalog.columnar_finish_pg_upgrade(); + ELSIF EXISTS ( + SELECT 1 FROM pg_depend + JOIN pg_proc ON (pg_depend.objid = pg_proc.oid) + JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid) + JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid) + WHERE + -- Looking if columnar_internal.columnar_ensure_am_depends_catalog function exists and + -- if there is a dependency record from it (proc class = 1255) .. + pg_depend.classid = 1255 AND pg_namespace.nspname = 'columnar_internal' AND pg_proc.proname = 'columnar_ensure_am_depends_catalog' AND + -- .. to citus_columnar extension (3079 = extension class), if it exists. 
+ pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar' + ) + THEN PERFORM columnar_internal.columnar_ensure_am_depends_catalog(); + END IF; -- restore pg_dist_object from the stable identifiers TRUNCATE pg_catalog.pg_dist_object; diff --git a/src/include/distributed/commands/utility_hook.h b/src/include/distributed/commands/utility_hook.h index 52fcf7091..42b41d557 100644 --- a/src/include/distributed/commands/utility_hook.h +++ b/src/include/distributed/commands/utility_hook.h @@ -112,6 +112,5 @@ extern void UndistributeDisconnectedCitusLocalTables(void); extern void NotifyUtilityHookConstraintDropped(void); extern void ResetConstraintDropped(void); extern void ExecuteDistributedDDLJob(DDLJob *ddlJob); -extern void ColumnarTableSetOptionsHook(Oid relationId, ColumnarOptions options); #endif /* MULTI_UTILITY_H */ diff --git a/src/test/regress/Makefile b/src/test/regress/Makefile index 4bdc7a1b8..f874ecb55 100644 --- a/src/test/regress/Makefile +++ b/src/test/regress/Makefile @@ -146,7 +146,6 @@ check-isolation-custom-schedule-vg: all $(isolation_test_files) --valgrind --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS) - check-empty: all $(pg_regress_multi_check) --load-extension=citus \ -- $(MULTI_REGRESS_OPTS) $(EXTRA_TESTS) @@ -180,11 +179,6 @@ check-multi-1-vg: all --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/multi_1_schedule $(EXTRA_TESTS) -check-columnar-vg: all - $(pg_regress_multi_check) --load-extension=citus --valgrind \ - --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ - -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule 
$(EXTRA_TESTS) - check-isolation: all $(isolation_test_files) $(pg_regress_multi_check) --load-extension=citus --isolationtester \ -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/isolation_schedule $(EXTRA_TESTS) @@ -227,14 +221,42 @@ check-operations: all $(pg_regress_multi_check) --load-extension=citus --worker-count=6 \ -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/operations_schedule $(EXTRA_TESTS) +check-columnar-minimal: + $(pg_regress_multi_check) --load-extension=citus_columnar \ + -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/minimal_columnar_schedule $(EXTRA_TESTS) + check-columnar: all - $(pg_regress_multi_check) --load-extension=citus_columnar --load-extension=citus \ + $(pg_regress_multi_check) --load-extension=citus_columnar \ + -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule $(EXTRA_TESTS) + +check-columnar-vg: all + $(pg_regress_multi_check) --load-extension=citus_columnar --valgrind \ + --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule $(EXTRA_TESTS) check-columnar-isolation: all $(isolation_test_files) - $(pg_regress_multi_check) --load-extension=citus --isolationtester \ + $(pg_regress_multi_check) --load-extension=citus_columnar --isolationtester \ -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/columnar_isolation_schedule $(EXTRA_TESTS) +check-columnar-custom-schedule: all + $(pg_regress_multi_check) --load-extension=citus_columnar \ + -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS) + +check-columnar-custom-schedule-vg: all + $(pg_regress_multi_check) --load-extension=citus_columnar \ + --valgrind --pg_ctl-timeout=360 --connection-timeout=500000 \ + --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ + -- 
$(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS) + +check-columnar-isolation-custom-schedule: all $(isolation_test_files) + $(pg_regress_multi_check) --load-extension=citus_columnar --isolationtester \ + -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS) + +check-columnar-isolation-custom-schedule-vg: all $(isolation_test_files) + $(pg_regress_multi_check) --load-extension=citus_columnar --isolationtester \ + --valgrind --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ + -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS) + check-split: all $(pg_regress_multi_check) --load-extension=citus \ -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/split_schedule $(EXTRA_TESTS) @@ -252,7 +274,7 @@ check-enterprise-failure: all -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/enterprise_failure_schedule $(EXTRA_TESTS) check-pg-upgrade: all - $(pg_upgrade_check) --old-bindir=$(old-bindir) --new-bindir=$(new-bindir) --pgxsdir=$(pgxsdir) + $(pg_upgrade_check) --old-bindir=$(old-bindir) --new-bindir=$(new-bindir) --pgxsdir=$(pgxsdir) $(if $(filter true,$(test-with-columnar)),--test-with-columnar) check-arbitrary-configs: all ${arbitrary_config_check} --bindir=$(bindir) --pgxsdir=$(pgxsdir) --parallel=$(parallel) --configs=$(CONFIGS) --seed=$(seed) @@ -301,4 +323,3 @@ clean distclean maintainer-clean: rm -rf input/ output/ rm -rf tmp_check/ rm -rf tmp_citus_test/ - diff --git a/src/test/regress/README.md b/src/test/regress/README.md index 2b68bf905..0fcc2154f 100644 --- a/src/test/regress/README.md +++ b/src/test/regress/README.md @@ -54,6 +54,8 @@ of the following commands to do so: ```bash # If your tests needs almost no setup you can use check-minimal make install -j9 && make -C src/test/regress/ check-minimal 
EXTRA_TESTS='multi_utility_warnings' +# For columnar specific tests, use check-columnar-minimal instead of check-minimal +make install -j9 && make -C src/test/regress/ check-columnar-minimal # Often tests need some testing data, if you get missing table errors using # check-minimal you should try check-base make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='with_prepare' @@ -92,6 +94,13 @@ make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_u cp src/test/regress/{results,expected}/multi_utility_warnings.out ``` +Or if it's a columnar test, you can use: + +```bash +make install -j9 && make -C src/test/regress/ check-columnar-minimal EXTRA_TESTS='multi_utility_warnings' +cp src/test/regress/{results,expected}/multi_utility_warnings.out +``` + ## Adding a new test file Adding a new test file is quite simple: diff --git a/src/test/regress/after_pg_upgrade_schedule b/src/test/regress/after_pg_upgrade_with_columnar_schedule similarity index 100% rename from src/test/regress/after_pg_upgrade_schedule rename to src/test/regress/after_pg_upgrade_with_columnar_schedule diff --git a/src/test/regress/after_pg_upgrade_without_columnar_schedule b/src/test/regress/after_pg_upgrade_without_columnar_schedule new file mode 100644 index 000000000..02535ab7e --- /dev/null +++ b/src/test/regress/after_pg_upgrade_without_columnar_schedule @@ -0,0 +1,11 @@ +test: upgrade_basic_after upgrade_ref2ref_after upgrade_type_after upgrade_distributed_function_after upgrade_rebalance_strategy_after upgrade_list_citus_objects upgrade_autoconverted_after upgrade_citus_stat_activity upgrade_citus_locks upgrade_single_shard_table_after upgrade_schema_based_sharding_after upgrade_basic_after_non_mixed + +# This test cannot be run with run_test.py currently due to its dependence on +# the specific PG versions that we use to run upgrade tests. 
For now we leave +# it out of the parallel line, so that flaky test detection can at least work +# for the other tests. +test: upgrade_distributed_triggers_after + +# The last test ensures that citus_columnar was not automatically created and that +# the upgrade went fine without it. +test: ensure_citus_columnar_not_exists diff --git a/src/test/regress/base_schedule b/src/test/regress/base_schedule index 65f439acc..f2a809c87 100644 --- a/src/test/regress/base_schedule +++ b/src/test/regress/base_schedule @@ -1,7 +1,7 @@ # ---------- # Only run few basic tests to set up a testing environment # ---------- -test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw columnar_test_helpers failure_test_helpers +test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw failure_test_helpers test: multi_cluster_management test: multi_test_catalog_views test: multi_create_table diff --git a/src/test/regress/before_pg_upgrade_schedule b/src/test/regress/before_pg_upgrade_with_columnar_schedule similarity index 100% rename from src/test/regress/before_pg_upgrade_schedule rename to src/test/regress/before_pg_upgrade_with_columnar_schedule diff --git a/src/test/regress/before_pg_upgrade_without_columnar_schedule b/src/test/regress/before_pg_upgrade_without_columnar_schedule new file mode 100644 index 000000000..7f4f204f0 --- /dev/null +++ b/src/test/regress/before_pg_upgrade_without_columnar_schedule @@ -0,0 +1,16 @@ +# The basic tests run analyze, which depends on shard numbers +test: multi_test_helpers multi_test_helpers_superuser upgrade_basic_before_non_mixed +test: multi_test_catalog_views +test: upgrade_basic_before +test: upgrade_ref2ref_before +test: upgrade_type_before +test: upgrade_distributed_function_before upgrade_rebalance_strategy_before +test: upgrade_autoconverted_before upgrade_single_shard_table_before upgrade_schema_based_sharding_before +test: upgrade_citus_stat_activity +test: upgrade_citus_locks +test: 
upgrade_distributed_triggers_before + +# The last test, i.e., upgrade_columnar_before, in before_pg_upgrade_with_columnar_schedule +# renames public schema to citus_schema and re-creates public schema, so we also do the same +# here to have compatible output in after schedule tests for both schedules. +test: rename_public_to_citus_schema_and_recreate diff --git a/src/test/regress/citus_tests/config.py b/src/test/regress/citus_tests/config.py index 436163ff1..2e3375856 100644 --- a/src/test/regress/citus_tests/config.py +++ b/src/test/regress/citus_tests/config.py @@ -25,8 +25,14 @@ ARBITRARY_SCHEDULE_NAMES = [ "single_shard_table_prep_schedule", ] -BEFORE_PG_UPGRADE_SCHEDULE = "./before_pg_upgrade_schedule" -AFTER_PG_UPGRADE_SCHEDULE = "./after_pg_upgrade_schedule" +BEFORE_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE = "./before_pg_upgrade_with_columnar_schedule" +BEFORE_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE = ( + "./before_pg_upgrade_without_columnar_schedule" +) +AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE = "./after_pg_upgrade_with_columnar_schedule" +AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE = ( + "./after_pg_upgrade_without_columnar_schedule" +) CREATE_SCHEDULE = "./create_schedule" POSTGRES_SCHEDULE = "./postgres_schedule" @@ -104,6 +110,7 @@ class CitusBaseClusterConfig(object, metaclass=NewInitCaller): self.is_mx = True self.is_citus = True self.all_null_dist_key = False + self.test_with_columnar = False self.name = type(self).__name__ self.settings = { "shared_preload_libraries": "citus", @@ -402,3 +409,4 @@ class PGUpgradeConfig(CitusBaseClusterConfig): self.old_datadir = self.temp_dir + "/oldData" self.new_datadir = self.temp_dir + "/newData" self.user = SUPER_USER_NAME + self.test_with_columnar = arguments["--test-with-columnar"] diff --git a/src/test/regress/citus_tests/run_test.py b/src/test/regress/citus_tests/run_test.py index fff261372..593c37bdd 100755 --- a/src/test/regress/citus_tests/run_test.py +++ b/src/test/regress/citus_tests/run_test.py @@ -209,7 +209,7 
@@ DEPS = { ), "limit_intermediate_size": TestDeps("base_schedule"), "columnar_drop": TestDeps( - "minimal_schedule", + "minimal_columnar_schedule", ["columnar_create", "columnar_load"], repeatable=False, ), @@ -335,6 +335,15 @@ def run_schedule_with_multiregress(test_name, schedule, dependencies, args): "failure" ): make_recipe = "check-failure-custom-schedule" + elif test_name.startswith("columnar"): + if dependencies.schedule is None: + # Columnar isolation tests don't depend on a base schedule, + # so this must be a columnar isolation test. + make_recipe = "check-columnar-isolation-custom-schedule" + elif dependencies.schedule == "minimal_columnar_schedule": + make_recipe = "check-columnar-custom-schedule" + else: + raise Exception("Columnar test could not be found in any schedule") else: make_recipe = "check-custom-schedule" @@ -352,6 +361,9 @@ def run_schedule_with_multiregress(test_name, schedule, dependencies, args): def default_base_schedule(test_schedule, args): if "isolation" in test_schedule: + if "columnar" in test_schedule: + # we don't have prerequisites for columnar isolation tests + return None return "base_isolation_schedule" if "failure" in test_schedule: @@ -374,6 +386,9 @@ def default_base_schedule(test_schedule, args): if "pg_upgrade" in test_schedule: return "minimal_pg_upgrade_schedule" + if "columnar" in test_schedule: + return "minimal_columnar_schedule" + if test_schedule in ARBITRARY_SCHEDULE_NAMES: print(f"WARNING: Arbitrary config schedule ({test_schedule}) is not supported.") sys.exit(0) diff --git a/src/test/regress/citus_tests/test/test_columnar.py b/src/test/regress/citus_tests/test/test_columnar.py index 7366cd432..465dcffdd 100644 --- a/src/test/regress/citus_tests/test/test_columnar.py +++ b/src/test/regress/citus_tests/test/test_columnar.py @@ -3,6 +3,8 @@ import pytest def test_freezing(coord): + coord.sql("CREATE EXTENSION IF NOT EXISTS citus_columnar") + + coord.configure("vacuum_freeze_min_age = 50000", 
"vacuum_freeze_table_age = 50000") coord.restart() @@ -38,8 +40,12 @@ def test_freezing(coord): ) assert frozen_age < 70_000, "columnar table was not frozen" + coord.sql("DROP EXTENSION citus_columnar CASCADE") + def test_recovery(coord): + coord.sql("CREATE EXTENSION IF NOT EXISTS citus_columnar") + # create columnar table and insert simple data to verify the data survives a crash coord.sql("CREATE TABLE t1 (a int, b text) USING columnar") coord.sql( @@ -115,3 +121,5 @@ def test_recovery(coord): row_count = coord.sql_value("SELECT count(*) FROM t1") assert row_count == 1007, "columnar didn't recover after copy" + + coord.sql("DROP EXTENSION citus_columnar CASCADE") diff --git a/src/test/regress/citus_tests/upgrade/README.md b/src/test/regress/citus_tests/upgrade/README.md index 1efd4e91d..154354a25 100644 --- a/src/test/regress/citus_tests/upgrade/README.md +++ b/src/test/regress/citus_tests/upgrade/README.md @@ -30,9 +30,14 @@ Before running the script, make sure that: - Finally run upgrade test in `citus/src/test/regress`: ```bash - pipenv run make check-pg-upgrade old-bindir= new-bindir= + pipenv run make check-pg-upgrade old-bindir= new-bindir= test-with-columnar= ``` +When test-with-columnar is provided as true, before_pg_upgrade_with_columnar_schedule / +after_pg_upgrade_with_columnar_schedule is used before / after upgrading Postgres during the +tests and before_pg_upgrade_without_columnar_schedule / after_pg_upgrade_without_columnar_schedule +is used otherwise. + To see full command list: ```bash @@ -43,9 +48,9 @@ How the postgres upgrade test works: - Temporary folder `tmp_upgrade` is created in `src/test/regress/`, if one exists it is removed first. - Database is initialized and citus cluster is created(1 coordinator + 2 workers) with old postgres. -- `before_pg_upgrade_schedule` is run with `pg_regress`. This schedule sets up any +- `before_pg_upgrade_with_columnar_schedule` / `before_pg_upgrade_without_columnar_schedule` is run with `pg_regress`. 
This schedule sets up any objects and data that will be tested for preservation after the upgrade. It -- `after_pg_upgrade_schedule` is run with `pg_regress` to verify that the output +- `after_pg_upgrade_with_columnar_schedule` / `after_pg_upgrade_without_columnar_schedule` is run with `pg_regress` to verify that the output of those tests is the same before the upgrade as after. - `citus_prepare_pg_upgrade` is run in coordinators and workers. - Old database is stopped. @@ -53,7 +58,7 @@ How the postgres upgrade test works: - Postgres upgrade is performed. - New database is started in both coordinators and workers. - `citus_finish_pg_upgrade` is run in coordinators and workers to finalize the upgrade step. -- `after_pg_upgrade_schedule` is run with `pg_regress` to verify that the previously created tables, and data still exist. Router and realtime queries are used to verify this. +- `after_pg_upgrade_with_columnar_schedule` / `after_pg_upgrade_without_columnar_schedule` is run with `pg_regress` to verify that the previously created tables and data still exist. Router and realtime queries are used to verify this. 
### Writing new PG upgrade tests diff --git a/src/test/regress/citus_tests/upgrade/pg_upgrade_test.py b/src/test/regress/citus_tests/upgrade/pg_upgrade_test.py index f4ee4301c..8c217d62b 100755 --- a/src/test/regress/citus_tests/upgrade/pg_upgrade_test.py +++ b/src/test/regress/citus_tests/upgrade/pg_upgrade_test.py @@ -2,12 +2,13 @@ """upgrade_test Usage: - upgrade_test --old-bindir= --new-bindir= --pgxsdir= + upgrade_test --old-bindir= --new-bindir= --pgxsdir= [--test-with-columnar] Options: --old-bindir= The old PostgreSQL executable directory(ex: '~/.pgenv/pgsql-10.4/bin') --new-bindir= The new PostgreSQL executable directory(ex: '~/.pgenv/pgsql-11.3/bin') - --pgxsdir= Path to the PGXS directory(ex: ~/.pgenv/src/postgresql-11.3) + --pgxsdir= Path to the PGXS directory(ex: ~/.pgenv/src/postgresql-11.3) + --test-with-columnar Enable automatically creating citus_columnar extension """ import atexit @@ -26,8 +27,10 @@ import utils # noqa: E402 from utils import USER # noqa: E402 from config import ( # noqa: E402 - AFTER_PG_UPGRADE_SCHEDULE, - BEFORE_PG_UPGRADE_SCHEDULE, + AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE, + AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE, + BEFORE_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE, + BEFORE_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE, PGUpgradeConfig, ) @@ -85,13 +88,21 @@ def main(config): config.old_bindir, config.pg_srcdir, config.coordinator_port(), - BEFORE_PG_UPGRADE_SCHEDULE, + ( + BEFORE_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE + if config.test_with_columnar + else BEFORE_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE + ), ) common.run_pg_regress( config.old_bindir, config.pg_srcdir, config.coordinator_port(), - AFTER_PG_UPGRADE_SCHEDULE, + ( + AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE + if config.test_with_columnar + else AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE + ), ) citus_prepare_pg_upgrade(config.old_bindir, config.node_name_to_ports.values()) @@ -127,7 +138,11 @@ def main(config): config.new_bindir, config.pg_srcdir, config.coordinator_port(), - 
AFTER_PG_UPGRADE_SCHEDULE, + ( + AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE + if config.test_with_columnar + else AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE + ), ) diff --git a/src/test/regress/columnar_schedule b/src/test/regress/columnar_schedule index 602af0fc7..4c36e4ddd 100644 --- a/src/test/regress/columnar_schedule +++ b/src/test/regress/columnar_schedule @@ -1,8 +1,5 @@ -test: multi_test_helpers multi_test_helpers_superuser columnar_test_helpers -test: multi_cluster_management -test: multi_test_catalog_views +test: columnar_test_helpers -test: remove_coordinator_from_metadata test: columnar_create test: columnar_load test: columnar_query @@ -36,5 +33,3 @@ test: columnar_recursive test: columnar_transactions test: columnar_matview test: columnar_memory -test: columnar_citus_integration -test: check_mx diff --git a/src/test/regress/expected/alter_distributed_table.out b/src/test/regress/expected/alter_distributed_table.out index 9d968dbb1..aee720a45 100644 --- a/src/test/regress/expected/alter_distributed_table.out +++ b/src/test/regress/expected/alter_distributed_table.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA alter_distributed_table; SET search_path TO alter_distributed_table; SET citus.shard_count TO 4; @@ -1259,3 +1262,5 @@ RESET search_path; DROP SCHEMA alter_distributed_table CASCADE; DROP SCHEMA schema_to_test_alter_dist_table CASCADE; DROP USER alter_dist_table_test_user; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/alter_table_set_access_method.out b/src/test/regress/expected/alter_table_set_access_method.out index 938c6bc0d..d24a81744 100644 --- a/src/test/regress/expected/alter_table_set_access_method.out +++ b/src/test/regress/expected/alter_table_set_access_method.out @@ -1,6 +1,9 @@ -- -- ALTER_TABLE_SET_ACCESS_METHOD -- +SET client_min_messages TO WARNING; +CREATE EXTENSION 
IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE TABLE alter_am_pg_version_table (a INT); SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar'); NOTICE: creating a new table for public.alter_am_pg_version_table @@ -810,3 +813,5 @@ select alter_table_set_access_method('view_test_view','columnar'); ERROR: you cannot alter access method of a view SET client_min_messages TO WARNING; DROP SCHEMA alter_table_set_access_method CASCADE; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/citus_depended_object.out b/src/test/regress/expected/citus_depended_object.out index 88eca1f5a..08cb6263c 100644 --- a/src/test/regress/expected/citus_depended_object.out +++ b/src/test/regress/expected/citus_depended_object.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; -- create the udf is_citus_depended_object that is needed for the tests CREATE OR REPLACE FUNCTION pg_catalog.is_citus_depended_object(oid,oid) @@ -193,3 +196,5 @@ drop cascades to table no_hide_pg_proc drop cascades to table hide_pg_proc drop cascades to function citus_depended_proc(noderole) drop cascades to function citus_independed_proc(integer) +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/citus_non_blocking_split_columnar.out b/src/test/regress/expected/citus_non_blocking_split_columnar.out index 2d20fbc8a..0d5c74254 100644 --- a/src/test/regress/expected/citus_non_blocking_split_columnar.out +++ b/src/test/regress/expected/citus_non_blocking_split_columnar.out @@ -1,9 +1,20 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA "citus_split_non_blocking_schema_columnar_partitioned"; SET search_path TO "citus_split_non_blocking_schema_columnar_partitioned"; SET citus.next_shard_id TO 
8970000; SET citus.next_placement_id TO 8770000; SET citus.shard_count TO 1; SET citus.shard_replication_factor TO 1; +-- remove coordinator if it is added to pg_dist_node +SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0 +FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port; + ?column? +--------------------------------------------------------------------- + t +(1 row) + -- Disable Deferred drop auto cleanup to avoid flaky tests. ALTER SYSTEM SET citus.defer_shard_delete_interval TO -1; SELECT pg_reload_conf(); @@ -841,3 +852,5 @@ drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.colo drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.colocated_partitioned_table drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.reference_table --END : Cleanup +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/citus_split_shard_columnar_partitioned.out b/src/test/regress/expected/citus_split_shard_columnar_partitioned.out index 97162e387..5dc0c1ebc 100644 --- a/src/test/regress/expected/citus_split_shard_columnar_partitioned.out +++ b/src/test/regress/expected/citus_split_shard_columnar_partitioned.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA "citus_split_test_schema_columnar_partitioned"; SET search_path TO "citus_split_test_schema_columnar_partitioned"; SET citus.next_shard_id TO 8970000; @@ -841,3 +844,5 @@ drop cascades to table citus_split_test_schema_columnar_partitioned.colocated_di drop cascades to table citus_split_test_schema_columnar_partitioned.colocated_partitioned_table drop cascades to table citus_split_test_schema_columnar_partitioned.reference_table --END : Cleanup +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git 
a/src/test/regress/expected/columnar_chunk_filtering.out b/src/test/regress/expected/columnar_chunk_filtering.out index f952eb27b..d3c403eeb 100644 --- a/src/test/regress/expected/columnar_chunk_filtering.out +++ b/src/test/regress/expected/columnar_chunk_filtering.out @@ -977,7 +977,7 @@ DETAIL: unparameterized; 1 clauses pushed down (1 row) SET hash_mem_multiplier = 1.0; -SELECT public.explain_with_pg16_subplan_format($Q$ +SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$ EXPLAIN (analyze on, costs off, timing off, summary off) SELECT sum(a) FROM pushdown_test where ( @@ -993,15 +993,15 @@ or $Q$) as "QUERY PLAN"; NOTICE: columnar planner: adding CustomScan path for pushdown_test DETAIL: unparameterized; 0 clauses pushed down -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement NOTICE: columnar planner: cannot push down clause: must match 'Var Expr' or 'Expr Var' HINT: Var must only reference this rel, and Expr must not reference this rel -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement NOTICE: columnar planner: cannot push down clause: must not contain a subplan -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement NOTICE: columnar planner: adding CustomScan path for pushdown_test DETAIL: unparameterized; 1 clauses pushed down -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) 
line XX at FOR over EXECUTE statement QUERY PLAN --------------------------------------------------------------------- Aggregate (actual rows=1 loops=1) diff --git a/src/test/regress/expected/columnar_chunk_filtering_0.out b/src/test/regress/expected/columnar_chunk_filtering_0.out index 57b30b8b1..83fee1c24 100644 --- a/src/test/regress/expected/columnar_chunk_filtering_0.out +++ b/src/test/regress/expected/columnar_chunk_filtering_0.out @@ -977,7 +977,7 @@ DETAIL: unparameterized; 1 clauses pushed down (1 row) SET hash_mem_multiplier = 1.0; -SELECT public.explain_with_pg16_subplan_format($Q$ +SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$ EXPLAIN (analyze on, costs off, timing off, summary off) SELECT sum(a) FROM pushdown_test where ( @@ -993,15 +993,15 @@ or $Q$) as "QUERY PLAN"; NOTICE: columnar planner: adding CustomScan path for pushdown_test DETAIL: unparameterized; 0 clauses pushed down -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement NOTICE: columnar planner: cannot push down clause: must match 'Var Expr' or 'Expr Var' HINT: Var must only reference this rel, and Expr must not reference this rel -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement NOTICE: columnar planner: cannot push down clause: must not contain a subplan -CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement NOTICE: columnar planner: adding CustomScan path for pushdown_test DETAIL: unparameterized; 1 clauses pushed down -CONTEXT: PL/pgSQL 
function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement +CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement QUERY PLAN --------------------------------------------------------------------- Aggregate (actual rows=1 loops=1) diff --git a/src/test/regress/expected/columnar_citus_integration.out b/src/test/regress/expected/columnar_citus_integration.out index fb7d9201e..d0ad99d7e 100644 --- a/src/test/regress/expected/columnar_citus_integration.out +++ b/src/test/regress/expected/columnar_citus_integration.out @@ -1,3 +1,14 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; +-- remove coordinator if it is added to pg_dist_node +SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0 +FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port; + ?column? +--------------------------------------------------------------------- + t +(1 row) + SELECT success, result FROM run_command_on_all_nodes($cmd$ ALTER SYSTEM SET columnar.compression TO 'none' $cmd$); @@ -993,5 +1004,54 @@ SELECT COUNT(*) FROM weird_col_explain; Columnar Projected Columns: (9 rows) +-- some tests with distributed & partitioned tables -- +CREATE TABLE dist_part_table( + dist_col INT, + part_col TIMESTAMPTZ, + col1 TEXT +) PARTITION BY RANGE (part_col); +-- create an index before creating a columnar partition +CREATE INDEX dist_part_table_btree ON dist_part_table (col1); +-- columnar partition +CREATE TABLE p0 PARTITION OF dist_part_table +FOR VALUES FROM ('2020-01-01') TO ('2020-02-01') +USING columnar; +SELECT create_distributed_table('dist_part_table', 'dist_col'); + create_distributed_table +--------------------------------------------------------------------- + +(1 row) + +-- columnar partition +CREATE TABLE p1 PARTITION OF dist_part_table +FOR VALUES FROM ('2020-02-01') TO ('2020-03-01') +USING columnar; +-- row 
partition +CREATE TABLE p2 PARTITION OF dist_part_table +FOR VALUES FROM ('2020-03-01') TO ('2020-04-01'); +INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1)); +ERROR: INSERT has more expressions than target columns +-- insert into columnar partitions +INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2)); +ERROR: INSERT has more expressions than target columns +INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3)); +ERROR: INSERT has more expressions than target columns +-- create another index after creating a columnar partition +CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col); +-- verify that indexes are created on columnar partitions +SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p0'; + ?column? +--------------------------------------------------------------------- + t +(1 row) + +SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1'; + ?column? +--------------------------------------------------------------------- + t +(1 row) + SET client_min_messages TO WARNING; DROP SCHEMA columnar_citus_integration CASCADE; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/columnar_drop.out b/src/test/regress/expected/columnar_drop.out index 2e7998b69..9a13cabe4 100644 --- a/src/test/regress/expected/columnar_drop.out +++ b/src/test/regress/expected/columnar_drop.out @@ -38,9 +38,6 @@ SELECT :columnar_stripes_before_drop - count(distinct storage_id) FROM columnar. SELECT current_database() datname \gset CREATE DATABASE db_to_drop; -NOTICE: Citus partially supports CREATE DATABASE for distributed databases -DETAIL: Citus does not propagate CREATE DATABASE command to other nodes -HINT: You can manually create a database and its extensions on other nodes. 
\c db_to_drop CREATE EXTENSION citus_columnar; SELECT oid::text databaseoid FROM pg_database WHERE datname = current_database() \gset diff --git a/src/test/regress/expected/columnar_indexes.out b/src/test/regress/expected/columnar_indexes.out index cd05108b2..d5e4b1cbb 100644 --- a/src/test/regress/expected/columnar_indexes.out +++ b/src/test/regress/expected/columnar_indexes.out @@ -395,53 +395,6 @@ SELECT b=980 FROM include_test WHERE a = 980; t (1 row) --- some tests with distributed & partitioned tables -- -CREATE TABLE dist_part_table( - dist_col INT, - part_col TIMESTAMPTZ, - col1 TEXT -) PARTITION BY RANGE (part_col); --- create an index before creating a columnar partition -CREATE INDEX dist_part_table_btree ON dist_part_table (col1); --- columnar partition -CREATE TABLE p0 PARTITION OF dist_part_table -FOR VALUES FROM ('2020-01-01') TO ('2020-02-01') -USING columnar; -SELECT create_distributed_table('dist_part_table', 'dist_col'); - create_distributed_table ---------------------------------------------------------------------- - -(1 row) - --- columnar partition -CREATE TABLE p1 PARTITION OF dist_part_table -FOR VALUES FROM ('2020-02-01') TO ('2020-03-01') -USING columnar; --- row partition -CREATE TABLE p2 PARTITION OF dist_part_table -FOR VALUES FROM ('2020-03-01') TO ('2020-04-01'); -INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1)); -ERROR: INSERT has more expressions than target columns --- insert into columnar partitions -INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2)); -ERROR: INSERT has more expressions than target columns -INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3)); -ERROR: INSERT has more expressions than target columns --- create another index after creating a columnar partition -CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col); --- verify that indexes are created on columnar partitions -SELECT COUNT(*)=2 FROM pg_indexes WHERE 
tablename = 'p0'; - ?column? ---------------------------------------------------------------------- - t -(1 row) - -SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1'; - ?column? ---------------------------------------------------------------------- - t -(1 row) - -- unsupported index types -- -- gin -- CREATE TABLE testjsonb (j JSONB) USING columnar; diff --git a/src/test/regress/expected/columnar_memory.out b/src/test/regress/expected/columnar_memory.out index 229502437..06396cd36 100644 --- a/src/test/regress/expected/columnar_memory.out +++ b/src/test/regress/expected/columnar_memory.out @@ -71,7 +71,9 @@ write_clear_outside_xact | t INSERT INTO t SELECT i, 'last batch', 0 /* no need to record memusage per row */ FROM generate_series(1, 50000) i; -SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.03 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth +-- FIXME: Somehow, after we stopped creating the citus extension for columnar tests, +-- we started observing 38% growth in TopMemoryContext here.
+SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.40 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth FROM columnar_test_helpers.columnar_store_memory_stats(); -[ RECORD 1 ]- top_growth | 1 diff --git a/src/test/regress/expected/columnar_partitioning.out b/src/test/regress/expected/columnar_partitioning.out index cd530b3f9..bd7bbdc25 100644 --- a/src/test/regress/expected/columnar_partitioning.out +++ b/src/test/regress/expected/columnar_partitioning.out @@ -54,36 +54,9 @@ SELECT count(*), sum(i), min(i), max(i) FROM parent; (1 row) -- set older partitions as columnar -SELECT alter_table_set_access_method('p0','columnar'); -NOTICE: creating a new table for public.p0 -NOTICE: moving the data of public.p0 -NOTICE: dropping the old public.p0 -NOTICE: renaming the new table to public.p0 - alter_table_set_access_method ---------------------------------------------------------------------- - -(1 row) - -SELECT alter_table_set_access_method('p1','columnar'); -NOTICE: creating a new table for public.p1 -NOTICE: moving the data of public.p1 -NOTICE: dropping the old public.p1 -NOTICE: renaming the new table to public.p1 - alter_table_set_access_method ---------------------------------------------------------------------- - -(1 row) - -SELECT alter_table_set_access_method('p3','columnar'); -NOTICE: creating a new table for public.p3 -NOTICE: moving the data of public.p3 -NOTICE: dropping the old public.p3 -NOTICE: renaming the new table to public.p3 - alter_table_set_access_method ---------------------------------------------------------------------- - -(1 row) - +ALTER TABLE p0 SET ACCESS METHOD columnar; +ALTER TABLE p1 SET ACCESS METHOD columnar; +ALTER TABLE p3 SET ACCESS METHOD columnar; -- should also be parallel plan EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent; QUERY PLAN diff --git a/src/test/regress/expected/columnar_test_helpers.out b/src/test/regress/expected/columnar_test_helpers.out index 
9a9e21057..f4f179e55 100644 --- a/src/test/regress/expected/columnar_test_helpers.out +++ b/src/test/regress/expected/columnar_test_helpers.out @@ -126,3 +126,23 @@ BEGIN PERFORM pg_sleep(0.001); END LOOP; END; $$ language plpgsql; +-- When the pg version is >= 17, this function reformats EXPLAIN output to conform +-- to how pg <= 16 EXPLAIN shows ANY in an expression. When 17 is +-- the minimum supported pg version, this function can be retired. The commit +-- that changed how ANY expressions appear in EXPLAIN is: +-- https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0398fcb +CREATE OR REPLACE FUNCTION explain_with_pg16_subplan_format(explain_command text, out query_plan text) +RETURNS SETOF TEXT AS $$ +DECLARE + pgversion int = 0; +BEGIN + pgversion = substring(version(), '\d+')::int ; + FOR query_plan IN execute explain_command LOOP + IF pgversion >= 17 THEN + IF query_plan ~ 'SubPlan \d+\).col' THEN + query_plan = regexp_replace(query_plan, '\(ANY \(\w+ = \(SubPlan (\d+)\).col1\)\)', '(SubPlan \1)', 'g'); + END IF; + END IF; + RETURN NEXT; + END LOOP; +END; $$ language plpgsql; diff --git a/src/test/regress/expected/create_distributed_table_concurrently.out b/src/test/regress/expected/create_distributed_table_concurrently.out index 1bf366fb3..607a4fafc 100644 --- a/src/test/regress/expected/create_distributed_table_concurrently.out +++ b/src/test/regress/expected/create_distributed_table_concurrently.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; create schema create_distributed_table_concurrently; set search_path to create_distributed_table_concurrently; set citus.shard_replication_factor to 1; @@ -313,3 +316,5 @@ select count(*) from test_columnar_2; -- columnar tests -- set client_min_messages to warning; drop schema create_distributed_table_concurrently cascade; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git
a/src/test/regress/expected/drop_column_partitioned_table.out b/src/test/regress/expected/drop_column_partitioned_table.out index 7151071e9..824b34ff0 100644 --- a/src/test/regress/expected/drop_column_partitioned_table.out +++ b/src/test/regress/expected/drop_column_partitioned_table.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA drop_column_partitioned_table; SET search_path TO drop_column_partitioned_table; SET citus.shard_replication_factor TO 1; @@ -394,3 +397,5 @@ ORDER BY 1,2; \c - - - :master_port SET client_min_messages TO WARNING; DROP SCHEMA drop_column_partitioned_table CASCADE; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/dropped_columns_create_load.out b/src/test/regress/expected/dropped_columns_create_load.out index 1d5bbf4da..0b2e48ea6 100644 --- a/src/test/regress/expected/dropped_columns_create_load.out +++ b/src/test/regress/expected/dropped_columns_create_load.out @@ -61,9 +61,3 @@ CREATE TABLE sensors_2004( col_to_drop_4 date, measureid integer NOT NULL, eventdatetime date NOT NULL, measure_data jsonb NOT NULL); ALTER TABLE sensors ATTACH PARTITION sensors_2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01'); ALTER TABLE sensors DROP COLUMN col_to_drop_4; -SELECT alter_table_set_access_method('sensors_2004', 'columnar'); - alter_table_set_access_method ---------------------------------------------------------------------- - -(1 row) - diff --git a/src/test/regress/expected/ensure_citus_columnar_not_exists.out b/src/test/regress/expected/ensure_citus_columnar_not_exists.out new file mode 100644 index 000000000..1992fd287 --- /dev/null +++ b/src/test/regress/expected/ensure_citus_columnar_not_exists.out @@ -0,0 +1,28 @@ +-- When there are no relations using the columnar access method, we don't automatically create +-- "citus_columnar" extension together with "citus" extension. 
+SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + citus_columnar_not_exists +--------------------------------------------------------------------- + t +(1 row) + +-- Likewise, we should not have any columnar objects left over from "old columnar", i.e., the +-- columnar access method that we had before Citus 11.1. +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; + columnar_am_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; + columnar_catalog_schemas_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + columnar_utilities_not_exists +--------------------------------------------------------------------- + t +(1 row) + diff --git a/src/test/regress/expected/follower_single_node.out b/src/test/regress/expected/follower_single_node.out index 5c26e55e1..c6e2c4907 100644 --- a/src/test/regress/expected/follower_single_node.out +++ b/src/test/regress/expected/follower_single_node.out @@ -1,4 +1,7 @@ \c - - - :master_port +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA single_node; SET search_path TO single_node; SET citus.shard_count TO 4; diff --git a/src/test/regress/expected/generated_identity.out b/src/test/regress/expected/generated_identity.out index b1102b781..3155bb769 100644 --- a/src/test/regress/expected/generated_identity.out +++ b/src/test/regress/expected/generated_identity.out @@ -1,3 +1,6 @@ 
+SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA generated_identities; SET search_path TO generated_identities; SET client_min_messages to ERROR; @@ -673,3 +676,5 @@ ORDER BY table_name, id; SET client_min_messages TO WARNING; DROP SCHEMA generated_identities CASCADE; DROP USER identity_test_user; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/insert_select_into_local_table.out b/src/test/regress/expected/insert_select_into_local_table.out index 0e919b7cd..a5d2c4c65 100644 --- a/src/test/regress/expected/insert_select_into_local_table.out +++ b/src/test/regress/expected/insert_select_into_local_table.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA insert_select_into_local_table; SET search_path TO insert_select_into_local_table; SET citus.shard_count = 4; @@ -1113,3 +1116,5 @@ ROLLBACK; \set VERBOSITY terse DROP SCHEMA insert_select_into_local_table CASCADE; NOTICE: drop cascades to 13 other objects +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/issue_5248.out b/src/test/regress/expected/issue_5248.out index d5946089f..536764aa7 100644 --- a/src/test/regress/expected/issue_5248.out +++ b/src/test/regress/expected/issue_5248.out @@ -1,6 +1,9 @@ -- -- ISSUE_5248 -- +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA issue_5248; SET search_path TO issue_5248; SET citus.shard_count TO 4; @@ -216,3 +219,5 @@ ERROR: cannot push down subquery on the target list DETAIL: Subqueries in the SELECT part of the query can only be pushed down if they happen before aggregates and window functions SET client_min_messages TO WARNING; DROP SCHEMA issue_5248 CASCADE; +SET client_min_messages TO 
WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/merge.out b/src/test/regress/expected/merge.out index 1e5e85242..8bee2e524 100644 --- a/src/test/regress/expected/merge.out +++ b/src/test/regress/expected/merge.out @@ -6,6 +6,9 @@ NOTICE: schema "merge_schema" does not exist, skipping --WHEN NOT MATCHED --WHEN MATCHED AND --WHEN MATCHED +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA merge_schema; SET search_path TO merge_schema; SET citus.shard_count TO 4; @@ -4352,3 +4355,5 @@ drop cascades to table t1_4000190 drop cascades to table s1_4000191 drop cascades to table t1 and 7 other objects (see server log for list) +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/merge_repartition1.out b/src/test/regress/expected/merge_repartition1.out index ac718f73c..c75415897 100644 --- a/src/test/regress/expected/merge_repartition1.out +++ b/src/test/regress/expected/merge_repartition1.out @@ -3,6 +3,9 @@ -- and compare the final results of the target tables in Postgres and Citus. -- The results should match. This process is repeated for various combinations -- of MERGE SQL. 
+SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; DROP SCHEMA IF EXISTS merge_repartition1_schema CASCADE; NOTICE: schema "merge_repartition1_schema" does not exist, skipping CREATE SCHEMA merge_repartition1_schema; @@ -1339,3 +1342,5 @@ drop cascades to function check_data(text,text,text,text) drop cascades to function compare_data() drop cascades to table citus_target drop cascades to table citus_source +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/multi_drop_extension.out b/src/test/regress/expected/multi_drop_extension.out index 909ad2f87..da7d5587f 100644 --- a/src/test/regress/expected/multi_drop_extension.out +++ b/src/test/regress/expected/multi_drop_extension.out @@ -2,6 +2,9 @@ -- MULTI_DROP_EXTENSION -- -- Tests around dropping and recreating the extension +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; SET citus.next_shard_id TO 550000; CREATE TABLE testtableddl(somecol int, distributecol text NOT NULL); SELECT create_distributed_table('testtableddl', 'distributecol', 'append'); @@ -168,3 +171,5 @@ SELECT * FROM testtableddl; (0 rows) DROP TABLE testtableddl; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/multi_extension.out b/src/test/regress/expected/multi_extension.out index defe41f0d..b9f4b621c 100644 --- a/src/test/regress/expected/multi_extension.out +++ b/src/test/regress/expected/multi_extension.out @@ -120,7 +120,41 @@ ORDER BY 1, 2; -- DROP EXTENSION pre-created by the regression suite DROP EXTENSION citus; -DROP EXTENSION citus_columnar; +SET client_min_messages TO WARNING; +DROP EXTENSION IF EXISTS citus_columnar; +RESET client_min_messages; +CREATE EXTENSION citus; +-- When there are no relations using the columnar access method, we don't automatically create +-- 
"citus_columnar" extension together with "citus" extension anymore. And as this will always +-- be the case for a fresh "CREATE EXTENSION citus", we know that we should definitely not have +-- "citus_columnar" extension created. +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + citus_columnar_not_exists +--------------------------------------------------------------------- + t +(1 row) + +-- Likewise, we should not have any columnar objects left over from "old columnar", i.e., the +-- columnar access method that we had before Citus 11.1. +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; + columnar_am_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; + columnar_catalog_schemas_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + columnar_utilities_not_exists +--------------------------------------------------------------------- + t +(1 row) + +DROP EXTENSION citus; \c -- these tests switch between citus versions and call ddl's that require pg_dist_object to be created SET citus.enable_metadata_sync TO 'false'; @@ -759,6 +793,99 @@ SELECT * FROM multi_extension.print_extension_changes(); | view public.citus_tables (2 rows) +-- Update Citus to 13.2-1 and make sure that we don't automatically create +-- citus_columnar extension as we don't have any relations created using columnar. 
+ALTER EXTENSION citus UPDATE TO '13.2-1'; +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + citus_columnar_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; + columnar_am_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; + columnar_catalog_schemas_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + columnar_utilities_not_exists +--------------------------------------------------------------------- + t +(1 row) + +-- Unfortunately, our downgrade scripts seem to assume that citus_columnar exists. +-- Seems this has always been the case since the introduction of citus_columnar, +-- so we need to create citus_columnar before the downgrade. +CREATE EXTENSION citus_columnar; +ALTER EXTENSION citus UPDATE TO '11.1-1'; +-- Update Citus to 13.2-1 and make sure that already having citus_columnar extension +-- doesn't cause any issues. 
+ALTER EXTENSION citus UPDATE TO '13.2-1'; +SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists; + citus_columnar_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_exists; + columnar_am_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_exists; + columnar_catalog_schemas_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_exists; + columnar_utilities_exists +--------------------------------------------------------------------- + t +(1 row) + +ALTER EXTENSION citus UPDATE TO '11.1-1'; +DROP EXTENSION citus_columnar; +-- Update Citus to 13.2-1 and make sure that NOT having citus_columnar extension +-- doesn't cause any issues. 
+ALTER EXTENSION citus UPDATE TO '13.2-1'; +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + citus_columnar_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; + columnar_am_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; + columnar_catalog_schemas_not_exists +--------------------------------------------------------------------- + t +(1 row) + +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + columnar_utilities_not_exists +--------------------------------------------------------------------- + t +(1 row) + +-- Downgrade back to 10.0-4 for the rest of the tests. +-- +-- same here - to downgrade Citus, first we need to create citus_columnar +CREATE EXTENSION citus_columnar; +ALTER EXTENSION citus UPDATE TO '10.0-4'; -- not print "HINT: " to hide current lib version \set VERBOSITY terse CREATE TABLE columnar_table(a INT, b INT) USING columnar; @@ -1010,7 +1137,7 @@ FROM pg_dist_node_metadata; partitioned_citus_table_exists_pre_11 | is_null --------------------------------------------------------------------- - | t + f | f (1 row) -- Test downgrade to 10.2-5 from 11.0-1 @@ -1212,6 +1339,14 @@ SELECT * FROM multi_extension.print_extension_changes(); | view citus_locks (57 rows) +-- Make sure that citus_columnar is automatically created while updating Citus to 11.1-1 +-- as we created columnar tables using the columnar access method before. 
+SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists; + citus_columnar_exists +--------------------------------------------------------------------- + t +(1 row) + -- Test downgrade to 11.1-1 from 11.2-1 ALTER EXTENSION citus UPDATE TO '11.2-1'; ALTER EXTENSION citus UPDATE TO '11.1-1'; @@ -1501,10 +1636,10 @@ SELECT * FROM multi_extension.print_extension_changes(); -- Snapshot of state at 13.2-1 ALTER EXTENSION citus UPDATE TO '13.2-1'; SELECT * FROM multi_extension.print_extension_changes(); - previous_object | current_object + previous_object | current_object --------------------------------------------------------------------- function worker_last_saved_explain_analyze() TABLE(explain_analyze_output text, execution_duration double precision) | - | function worker_last_saved_explain_analyze() TABLE(explain_analyze_output text, execution_duration double precision, execution_ntuples double precision, execution_nloops double precision) + | function worker_last_saved_explain_analyze() TABLE(explain_analyze_output text, execution_duration double precision, execution_ntuples double precision, execution_nloops double precision) (2 rows) DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff; @@ -1621,13 +1756,11 @@ NOTICE: version "9.1-1" of extension "citus" is already installed ALTER EXTENSION citus UPDATE; -- re-create in newest version DROP EXTENSION citus; -DROP EXTENSION citus_columnar; \c CREATE EXTENSION citus; -- test cache invalidation in workers \c - - - :worker_1_port DROP EXTENSION citus; -DROP EXTENSION citus_columnar; SET citus.enable_version_checks TO 'false'; SET columnar.enable_version_checks TO 'false'; CREATE EXTENSION citus VERSION '8.0-1'; @@ -1999,8 +2132,7 @@ SELECT citus_add_local_table_to_metadata('test'); DROP TABLE test; -- Verify that we don't consider the schemas created by extensions as tenant schemas. 
--- Easiest way of verifying this is to drop and re-create columnar extension. -DROP EXTENSION citus_columnar; +-- Easiest way of verifying this is to create the columnar extension. SET citus.enable_schema_based_sharding TO ON; CREATE EXTENSION citus_columnar; SELECT COUNT(*)=0 FROM pg_dist_schema diff --git a/src/test/regress/expected/multi_fix_partition_shard_index_names.out b/src/test/regress/expected/multi_fix_partition_shard_index_names.out index 975a49351..b63040fe2 100644 --- a/src/test/regress/expected/multi_fix_partition_shard_index_names.out +++ b/src/test/regress/expected/multi_fix_partition_shard_index_names.out @@ -524,10 +524,11 @@ SELECT tablename, indexname FROM pg_indexes WHERE schemaname = 'fix_idx_names' O -- index, only the new index should be fixed -- create only one shard & one partition so that the output easier to check SET citus.next_shard_id TO 915000; +ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1370100; SET citus.shard_count TO 1; SET citus.shard_replication_factor TO 1; CREATE TABLE parent_table (dist_col int, another_col int, partition_col timestamp, name text) PARTITION BY RANGE (partition_col); -SELECT create_distributed_table('parent_table', 'dist_col'); +SELECT create_distributed_table('parent_table', 'dist_col', colocate_with=>'none'); create_distributed_table --------------------------------------------------------------------- @@ -634,40 +635,25 @@ ALTER INDEX p1_dist_col_idx3 RENAME TO p1_dist_col_idx3_renamed; ALTER INDEX p1_pkey RENAME TO p1_pkey_renamed; ALTER INDEX p1_dist_col_partition_col_key RENAME TO p1_dist_col_partition_col_key_renamed; ALTER INDEX p1_dist_col_idx RENAME TO p1_dist_col_idx_renamed; +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; +SET search_path TO fix_idx_names, public; +SET columnar.compression TO 'zstd'; -- should be able to create a new partition that is columnar SET citus.log_remote_commands TO ON; CREATE TABLE
p2(dist_col int NOT NULL, another_col int, partition_col timestamp NOT NULL, name text) USING columnar; ALTER TABLE parent_table ATTACH PARTITION p2 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'); NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "x.y-z"; -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing COMMIT -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "x.y-z"; -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing COMMIT -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT 
citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data; -DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'CREATE TABLE fix_idx_names.p2 (dist_col integer NOT NULL, another_col integer, partition_col timestamp without time zone NOT NULL, name text) USING columnar') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing ALTER TABLE fix_idx_names.p2_915002 SET (columnar.chunk_group_row_limit = 10000, columnar.stripe_row_limit = 150000, columnar.compression_level = 3, columnar.compression = 'zstd'); DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'ALTER TABLE fix_idx_names.p2 OWNER TO postgres') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx +NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); +DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' @@ -692,9 +678,9 @@ NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing SET 
citus.enable_ddl_propagation TO 'off' DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') +NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370100, 's') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx -NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') +NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370100, 's') DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx @@ -742,3 +728,5 @@ ALTER TABLE parent_table DROP CONSTRAINT pkey_cst CASCADE; ALTER TABLE parent_table DROP CONSTRAINT unique_cst CASCADE; SET client_min_messages TO WARNING; DROP SCHEMA fix_idx_names CASCADE; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/multi_multiuser.out b/src/test/regress/expected/multi_multiuser.out index d6216a961..6dde96e1b 100644 --- a/src/test/regress/expected/multi_multiuser.out +++ b/src/test/regress/expected/multi_multiuser.out @@ -3,6 +3,9 @@ -- -- Test user permissions. 
-- +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; SET citus.next_shard_id TO 1420000; SET citus.shard_replication_factor TO 1; CREATE TABLE test (id integer, val integer); @@ -549,3 +552,5 @@ DROP USER full_access; DROP USER read_access; DROP USER no_access; DROP ROLE some_role; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/multi_tenant_isolation.out b/src/test/regress/expected/multi_tenant_isolation.out index 5af7acac8..991f09d0f 100644 --- a/src/test/regress/expected/multi_tenant_isolation.out +++ b/src/test/regress/expected/multi_tenant_isolation.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; -- -- MULTI_TENANT_ISOLATION -- @@ -1228,3 +1231,5 @@ SELECT count(*) FROM pg_catalog.pg_dist_partition WHERE colocationid > 0; TRUNCATE TABLE pg_catalog.pg_dist_colocation; ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1; ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placement_id; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/multi_tenant_isolation_nonblocking.out b/src/test/regress/expected/multi_tenant_isolation_nonblocking.out index 3daac7dac..0d4ecb1eb 100644 --- a/src/test/regress/expected/multi_tenant_isolation_nonblocking.out +++ b/src/test/regress/expected/multi_tenant_isolation_nonblocking.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; -- -- MULTI_TENANT_ISOLATION -- @@ -1282,3 +1285,5 @@ SELECT public.wait_for_resource_cleanup(); (1 row) +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/mx_regular_user.out b/src/test/regress/expected/mx_regular_user.out index 24af36179..b4912a154 100644 
--- a/src/test/regress/expected/mx_regular_user.out +++ b/src/test/regress/expected/mx_regular_user.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA "Mx Regular User"; SET search_path TO "Mx Regular User"; -- add coordinator in idempotent way @@ -689,3 +692,5 @@ drop cascades to table "Mx Regular User".on_delete_fkey_table drop cascades to table "Mx Regular User".local_table_in_the_metadata drop cascades to type "Mx Regular User".test_type drop cascades to table "Mx Regular User".composite_key +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/pg12.out b/src/test/regress/expected/pg12.out index acc0c3f63..b334b8875 100644 --- a/src/test/regress/expected/pg12.out +++ b/src/test/regress/expected/pg12.out @@ -1,6 +1,9 @@ SET citus.shard_replication_factor to 1; SET citus.next_shard_id TO 60000; SET citus.next_placement_id TO 60000; +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; create schema test_pg12; set search_path to test_pg12; -- Test generated columns @@ -651,3 +654,5 @@ drop schema test_pg12 cascade; NOTICE: drop cascades to 16 other objects \set VERBOSITY default SET citus.shard_replication_factor to 2; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/pg14.out b/src/test/regress/expected/pg14.out index bbfd5dafa..b6af0a570 100644 --- a/src/test/regress/expected/pg14.out +++ b/src/test/regress/expected/pg14.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; create schema pg14; set search_path to pg14; SET citus.shard_replication_factor TO 1; @@ -1468,3 +1471,5 @@ drop extension postgres_fdw cascade; drop schema pg14 cascade; DROP ROLE role_1, r1; reset client_min_messages; +SET 
client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/pg17.out b/src/test/regress/expected/pg17.out index 41b82b067..f70062eaa 100644 --- a/src/test/regress/expected/pg17.out +++ b/src/test/regress/expected/pg17.out @@ -4,6 +4,9 @@ SHOW server_version \gset SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17 \gset +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; -- PG17 has the capabilty to pull up a correlated ANY subquery to a join if -- the subquery only refers to its immediate parent query. Previously, the -- subquery needed to be implemented as a SubPlan node, typically as a @@ -1787,45 +1790,49 @@ SELECT create_distributed_table('test_partitioned_alter', 'id'); (1 row) -- Step 4: Verify that the table and partitions are created and distributed correctly on the coordinator -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partitioned_alter'; - relname | relam + relname | amname --------------------------------------------------------------------- - test_partitioned_alter | 2 + test_partitioned_alter | heap (1 row) -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname IN ('test_partition_1', 'test_partition_2') ORDER BY relname; - relname | relam + relname | amname --------------------------------------------------------------------- - test_partition_1 | 2 - test_partition_2 | 2 + test_partition_1 | heap + test_partition_2 | heap (2 rows) -- Step 4 (Repeat on a Worker Node): Verify that the table and partitions are created correctly \c - - - :worker_1_port SET search_path TO pg17; -- Verify the table's access method on the worker node -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partitioned_alter'; - relname | relam + 
relname | amname --------------------------------------------------------------------- - test_partitioned_alter | 2 + test_partitioned_alter | heap (1 row) -- Verify the partitions' access methods on the worker node -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname IN ('test_partition_1', 'test_partition_2') ORDER BY relname; - relname | relam + relname | amname --------------------------------------------------------------------- - test_partition_1 | 2 - test_partition_2 | 2 + test_partition_1 | heap + test_partition_2 | heap (2 rows) \c - - - :master_port @@ -1838,46 +1845,50 @@ ALTER TABLE test_partitioned_alter SET ACCESS METHOD columnar; -- option for partitioned tables. Existing partitions are not modified. -- Reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a2290429eac3217b0c7b0b485db9c2bcc72 -- Verify the parent table's access method -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partitioned_alter'; - relname | relam + relname | amname --------------------------------------------------------------------- - test_partitioned_alter | 16413 + test_partitioned_alter | columnar (1 row) -- Verify the partitions' access methods -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname IN ('test_partition_1', 'test_partition_2') ORDER BY relname; - relname | relam + relname | amname --------------------------------------------------------------------- - test_partition_1 | 2 - test_partition_2 | 2 + test_partition_1 | heap + test_partition_2 | heap (2 rows) -- Step 6: Verify the change is applied to future partitions CREATE TABLE test_partition_3 PARTITION OF test_partitioned_alter FOR VALUES FROM (200) TO (300); -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partition_3'; - relname | 
relam + relname | amname --------------------------------------------------------------------- - test_partition_3 | 16413 + test_partition_3 | columnar (1 row) -- Step 6 (Repeat on a Worker Node): Verify that the new partition is created correctly \c - - - :worker_1_port SET search_path TO pg17; -- Verify the new partition's access method on the worker node -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partition_3'; - relname | relam + relname | amname --------------------------------------------------------------------- - test_partition_3 | 16413 + test_partition_3 | columnar (1 row) \c - - - :master_port @@ -3169,3 +3180,5 @@ DROP SCHEMA pg17 CASCADE; RESET client_min_messages; DROP ROLE regress_maintain; DROP ROLE regress_no_maintain; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/pg17_0.out b/src/test/regress/expected/pg17_0.out index 6f65f6099..697c97a15 100644 --- a/src/test/regress/expected/pg17_0.out +++ b/src/test/regress/expected/pg17_0.out @@ -4,6 +4,9 @@ SHOW server_version \gset SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17 \gset +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; -- PG17 has the capabilty to pull up a correlated ANY subquery to a join if -- the subquery only refers to its immediate parent query. 
Previously, the -- subquery needed to be implemented as a SubPlan node, typically as a diff --git a/src/test/regress/expected/pg_dump.out b/src/test/regress/expected/pg_dump.out index 9a297f2c5..0cbe84018 100644 --- a/src/test/regress/expected/pg_dump.out +++ b/src/test/regress/expected/pg_dump.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE TEMPORARY TABLE output (line text); CREATE SCHEMA dumper; SET search_path TO 'dumper'; @@ -173,3 +176,5 @@ SELECT tablename FROM pg_tables WHERE schemaname = 'dumper' ORDER BY 1; DROP SCHEMA dumper CASCADE; NOTICE: drop cascades to table data +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/recurring_outer_join.out b/src/test/regress/expected/recurring_outer_join.out index 0764f05dc..e020c6814 100644 --- a/src/test/regress/expected/recurring_outer_join.out +++ b/src/test/regress/expected/recurring_outer_join.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; CREATE SCHEMA recurring_outer_join; SET search_path TO recurring_outer_join; SET citus.next_shard_id TO 1520000; @@ -2003,3 +2006,5 @@ DEBUG: performing repartitioned INSERT ... 
SELECT ROLLBACK; SET client_min_messages TO ERROR; DROP SCHEMA recurring_outer_join CASCADE; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/expected/rename_public_to_citus_schema_and_recreate.out b/src/test/regress/expected/rename_public_to_citus_schema_and_recreate.out new file mode 100644 index 000000000..39c497ef4 --- /dev/null +++ b/src/test/regress/expected/rename_public_to_citus_schema_and_recreate.out @@ -0,0 +1,2 @@ +ALTER SCHEMA public RENAME TO citus_schema; +CREATE SCHEMA public; diff --git a/src/test/regress/expected/upgrade_columnar_before.out b/src/test/regress/expected/upgrade_columnar_before.out index a4895c770..fd0e7993e 100644 --- a/src/test/regress/expected/upgrade_columnar_before.out +++ b/src/test/regress/expected/upgrade_columnar_before.out @@ -1,3 +1,6 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; -- Test if relying on topological sort of the objects, not their names, works -- fine when re-creating objects during pg_upgrade. DO diff --git a/src/test/regress/expected/upgrade_list_citus_objects.out b/src/test/regress/expected/upgrade_list_citus_objects.out index 030228fe3..8848a489d 100644 --- a/src/test/regress/expected/upgrade_list_citus_objects.out +++ b/src/test/regress/expected/upgrade_list_citus_objects.out @@ -5,6 +5,7 @@ -- Here we create a table with only the basic extension types -- in order to avoid printing extra ones for now -- This can be removed when we drop PG16 support. 
+SET search_path TO citus_schema; CREATE TABLE extension_basic_types (description text); INSERT INTO extension_basic_types VALUES ('type citus.distribution_type'), ('type citus.shard_transfer_mode'), @@ -381,8 +382,7 @@ ORDER BY 1; view citus_lock_waits view citus_locks view citus_nodes - view citus_schema.citus_schemas - view citus_schema.citus_tables + view citus_schemas view citus_shard_indexes_on_worker view citus_shards view citus_shards_on_worker @@ -391,6 +391,7 @@ ORDER BY 1; view citus_stat_statements view citus_stat_tenants view citus_stat_tenants_local + view citus_tables view pg_dist_shard_placement view time_partitions (362 rows) diff --git a/src/test/regress/minimal_columnar_schedule b/src/test/regress/minimal_columnar_schedule new file mode 100644 index 000000000..25b823538 --- /dev/null +++ b/src/test/regress/minimal_columnar_schedule @@ -0,0 +1 @@ +test: columnar_test_helpers diff --git a/src/test/regress/minimal_schedule b/src/test/regress/minimal_schedule index 8b0cfff70..6a458728f 100644 --- a/src/test/regress/minimal_schedule +++ b/src/test/regress/minimal_schedule @@ -1,2 +1,2 @@ test: minimal_cluster_management -test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw columnar_test_helpers multi_test_catalog_views tablespace +test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw multi_test_catalog_views tablespace diff --git a/src/test/regress/multi_1_schedule b/src/test/regress/multi_1_schedule index 6a54e82ad..315e555eb 100644 --- a/src/test/regress/multi_1_schedule +++ b/src/test/regress/multi_1_schedule @@ -51,6 +51,8 @@ test: citus_schema_distribute_undistribute # because it checks the value of stats_reset column before calling the function. 
test: stat_counters +test: columnar_citus_integration + test: multi_test_catalog_views test: multi_table_ddl test: multi_alias diff --git a/src/test/regress/split_schedule b/src/test/regress/split_schedule index 53c422eab..b854de467 100644 --- a/src/test/regress/split_schedule +++ b/src/test/regress/split_schedule @@ -1,6 +1,6 @@ # Split Shard tests. # Include tests from 'minimal_schedule' for setup. -test: multi_test_helpers multi_test_helpers_superuser columnar_test_helpers +test: multi_test_helpers multi_test_helpers_superuser test: multi_cluster_management test: remove_coordinator_from_metadata test: multi_test_catalog_views diff --git a/src/test/regress/sql/alter_distributed_table.sql b/src/test/regress/sql/alter_distributed_table.sql index 0577421de..ac4082912 100644 --- a/src/test/regress/sql/alter_distributed_table.sql +++ b/src/test/regress/sql/alter_distributed_table.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA alter_distributed_table; SET search_path TO alter_distributed_table; SET citus.shard_count TO 4; @@ -482,3 +486,6 @@ RESET search_path; DROP SCHEMA alter_distributed_table CASCADE; DROP SCHEMA schema_to_test_alter_dist_table CASCADE; DROP USER alter_dist_table_test_user; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/alter_table_set_access_method.sql b/src/test/regress/sql/alter_table_set_access_method.sql index 24dc89fe4..d92fe2f9a 100644 --- a/src/test/regress/sql/alter_table_set_access_method.sql +++ b/src/test/regress/sql/alter_table_set_access_method.sql @@ -2,6 +2,10 @@ -- ALTER_TABLE_SET_ACCESS_METHOD -- +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE TABLE alter_am_pg_version_table (a INT); SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar'); DROP TABLE 
alter_am_pg_version_table; @@ -288,3 +292,6 @@ select alter_table_set_access_method('view_test_view','columnar'); SET client_min_messages TO WARNING; DROP SCHEMA alter_table_set_access_method CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/citus_depended_object.sql b/src/test/regress/sql/citus_depended_object.sql index 4f35acb1e..a01b0fa9a 100644 --- a/src/test/regress/sql/citus_depended_object.sql +++ b/src/test/regress/sql/citus_depended_object.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + -- create the udf is_citus_depended_object that is needed for the tests CREATE OR REPLACE FUNCTION pg_catalog.is_citus_depended_object(oid,oid) @@ -149,3 +153,6 @@ FROM (VALUES ('master_add_node'), ('format'), -- drop the namespace with all its objects DROP SCHEMA citus_dependend_object CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/citus_non_blocking_split_columnar.sql b/src/test/regress/sql/citus_non_blocking_split_columnar.sql index ead6d3f37..5b0665060 100644 --- a/src/test/regress/sql/citus_non_blocking_split_columnar.sql +++ b/src/test/regress/sql/citus_non_blocking_split_columnar.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA "citus_split_non_blocking_schema_columnar_partitioned"; SET search_path TO "citus_split_non_blocking_schema_columnar_partitioned"; SET citus.next_shard_id TO 8970000; @@ -5,6 +9,10 @@ SET citus.next_placement_id TO 8770000; SET citus.shard_count TO 1; SET citus.shard_replication_factor TO 1; +-- remove coordinator if it is added to pg_dist_node +SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0 +FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port; + -- Disable Deferred drop auto cleanup to 
avoid flaky tests. ALTER SYSTEM SET citus.defer_shard_delete_interval TO -1; SELECT pg_reload_conf(); @@ -306,3 +314,6 @@ SELECT public.wait_for_resource_cleanup(); SELECT pg_reload_conf(); DROP SCHEMA "citus_split_non_blocking_schema_columnar_partitioned" CASCADE; --END : Cleanup + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/citus_split_shard_columnar_partitioned.sql b/src/test/regress/sql/citus_split_shard_columnar_partitioned.sql index f1b3a3d13..bee287293 100644 --- a/src/test/regress/sql/citus_split_shard_columnar_partitioned.sql +++ b/src/test/regress/sql/citus_split_shard_columnar_partitioned.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA "citus_split_test_schema_columnar_partitioned"; SET search_path TO "citus_split_test_schema_columnar_partitioned"; SET citus.next_shard_id TO 8970000; @@ -306,3 +310,6 @@ SELECT public.wait_for_resource_cleanup(); SELECT pg_reload_conf(); DROP SCHEMA "citus_split_test_schema_columnar_partitioned" CASCADE; --END : Cleanup + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/columnar_chunk_filtering.sql b/src/test/regress/sql/columnar_chunk_filtering.sql index 6c90e1943..08ac2b627 100644 --- a/src/test/regress/sql/columnar_chunk_filtering.sql +++ b/src/test/regress/sql/columnar_chunk_filtering.sql @@ -415,7 +415,7 @@ SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 2000 SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 200000-1010); SET hash_mem_multiplier = 1.0; -SELECT public.explain_with_pg16_subplan_format($Q$ +SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$ EXPLAIN (analyze on, costs off, timing off, summary off) SELECT sum(a) FROM pushdown_test where ( diff --git a/src/test/regress/sql/columnar_citus_integration.sql 
b/src/test/regress/sql/columnar_citus_integration.sql index 514508795..82ded52c9 100644 --- a/src/test/regress/sql/columnar_citus_integration.sql +++ b/src/test/regress/sql/columnar_citus_integration.sql @@ -1,4 +1,12 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + +-- remove coordinator if it is added to pg_dist_node +SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0 +FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port; + SELECT success, result FROM run_command_on_all_nodes($cmd$ ALTER SYSTEM SET columnar.compression TO 'none' $cmd$); @@ -440,5 +448,48 @@ WHERE "bbbbbbbbbbbbbbbbbbbbbbbbb\!bbbb'bbbbbbbbbbbbbbbbbbbbb''bbbbbbbb" * 2 > EXPLAIN (COSTS OFF, SUMMARY OFF) SELECT COUNT(*) FROM weird_col_explain; +-- some tests with distributed & partitioned tables -- + +CREATE TABLE dist_part_table( + dist_col INT, + part_col TIMESTAMPTZ, + col1 TEXT +) PARTITION BY RANGE (part_col); + +-- create an index before creating a columnar partition +CREATE INDEX dist_part_table_btree ON dist_part_table (col1); + +-- columnar partition +CREATE TABLE p0 PARTITION OF dist_part_table +FOR VALUES FROM ('2020-01-01') TO ('2020-02-01') +USING columnar; + +SELECT create_distributed_table('dist_part_table', 'dist_col'); + +-- columnar partition +CREATE TABLE p1 PARTITION OF dist_part_table +FOR VALUES FROM ('2020-02-01') TO ('2020-03-01') +USING columnar; + +-- row partition +CREATE TABLE p2 PARTITION OF dist_part_table +FOR VALUES FROM ('2020-03-01') TO ('2020-04-01'); + +INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1)); + +-- insert into columnar partitions +INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2)); +INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3)); + +-- create another index after creating a columnar partition +CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col); + +-- verify 
that indexes are created on columnar partitions +SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p0'; +SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1'; + SET client_min_messages TO WARNING; DROP SCHEMA columnar_citus_integration CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/columnar_indexes.sql b/src/test/regress/sql/columnar_indexes.sql index 28716c970..afb56e01c 100644 --- a/src/test/regress/sql/columnar_indexes.sql +++ b/src/test/regress/sql/columnar_indexes.sql @@ -265,46 +265,6 @@ ROLLBACK; -- make sure that we read the correct value for "b" when doing index only scan SELECT b=980 FROM include_test WHERE a = 980; --- some tests with distributed & partitioned tables -- - -CREATE TABLE dist_part_table( - dist_col INT, - part_col TIMESTAMPTZ, - col1 TEXT -) PARTITION BY RANGE (part_col); - --- create an index before creating a columnar partition -CREATE INDEX dist_part_table_btree ON dist_part_table (col1); - --- columnar partition -CREATE TABLE p0 PARTITION OF dist_part_table -FOR VALUES FROM ('2020-01-01') TO ('2020-02-01') -USING columnar; - -SELECT create_distributed_table('dist_part_table', 'dist_col'); - --- columnar partition -CREATE TABLE p1 PARTITION OF dist_part_table -FOR VALUES FROM ('2020-02-01') TO ('2020-03-01') -USING columnar; - --- row partition -CREATE TABLE p2 PARTITION OF dist_part_table -FOR VALUES FROM ('2020-03-01') TO ('2020-04-01'); - -INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1)); - --- insert into columnar partitions -INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2)); -INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3)); - --- create another index after creating a columnar partition -CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col); - --- verify that indexes are created on columnar partitions -SELECT COUNT(*)=2 FROM pg_indexes WHERE 
tablename = 'p0'; -SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1'; - -- unsupported index types -- -- gin -- diff --git a/src/test/regress/sql/columnar_memory.sql b/src/test/regress/sql/columnar_memory.sql index 5f29eb1e3..4be6b011a 100644 --- a/src/test/regress/sql/columnar_memory.sql +++ b/src/test/regress/sql/columnar_memory.sql @@ -73,7 +73,9 @@ INSERT INTO t SELECT i, 'last batch', 0 /* no need to record memusage per row */ FROM generate_series(1, 50000) i; -SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.03 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth +-- FIXME: Somehow, after we stopped creating the citus extension for columnar tests, +-- we started observing 38% growth in TopMemoryContext here. +SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.40 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth FROM columnar_test_helpers.columnar_store_memory_stats(); -- before this change, max mem usage while executing inserts was 28MB and diff --git a/src/test/regress/sql/columnar_partitioning.sql b/src/test/regress/sql/columnar_partitioning.sql index 01a9e892e..8e91b8919 100644 --- a/src/test/regress/sql/columnar_partitioning.sql +++ b/src/test/regress/sql/columnar_partitioning.sql @@ -43,9 +43,9 @@ EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent; SELECT count(*), sum(i), min(i), max(i) FROM parent; -- set older partitions as columnar -SELECT alter_table_set_access_method('p0','columnar'); -SELECT alter_table_set_access_method('p1','columnar'); -SELECT alter_table_set_access_method('p3','columnar'); +ALTER TABLE p0 SET ACCESS METHOD columnar; +ALTER TABLE p1 SET ACCESS METHOD columnar; +ALTER TABLE p3 SET ACCESS METHOD columnar; -- should also be parallel plan EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent; diff --git a/src/test/regress/sql/columnar_test_helpers.sql b/src/test/regress/sql/columnar_test_helpers.sql index
d88f8b88f..9cff79bbe 100644 --- a/src/test/regress/sql/columnar_test_helpers.sql +++ b/src/test/regress/sql/columnar_test_helpers.sql @@ -137,3 +137,24 @@ BEGIN PERFORM pg_sleep(0.001); END LOOP; END; $$ language plpgsql; + +-- On pg version >= 17, this function reformats EXPLAIN output to conform to how +-- pg <= 16 EXPLAIN shows ANY in an expression. When 17 is +-- the minimum supported pg version this function can be retired. The commit +-- that changed how ANY expressions appear in EXPLAIN is: +-- https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0398fcb +CREATE OR REPLACE FUNCTION explain_with_pg16_subplan_format(explain_command text, out query_plan text) +RETURNS SETOF TEXT AS $$ +DECLARE + pgversion int = 0; +BEGIN + pgversion = substring(version(), '\d+')::int ; + FOR query_plan IN execute explain_command LOOP + IF pgversion >= 17 THEN + IF query_plan ~ 'SubPlan \d+\).col' THEN + query_plan = regexp_replace(query_plan, '\(ANY \(\w+ = \(SubPlan (\d+)\).col1\)\)', '(SubPlan \1)', 'g'); + END IF; + END IF; + RETURN NEXT; + END LOOP; +END; $$ language plpgsql; diff --git a/src/test/regress/sql/create_distributed_table_concurrently.sql b/src/test/regress/sql/create_distributed_table_concurrently.sql index 6820d782c..e55386172 100644 --- a/src/test/regress/sql/create_distributed_table_concurrently.sql +++ b/src/test/regress/sql/create_distributed_table_concurrently.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + create schema create_distributed_table_concurrently; set search_path to create_distributed_table_concurrently; set citus.shard_replication_factor to 1; @@ -154,3 +158,6 @@ select count(*) from test_columnar_2; set client_min_messages to warning; drop schema create_distributed_table_concurrently cascade; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/drop_column_partitioned_table.sql 
b/src/test/regress/sql/drop_column_partitioned_table.sql index 3fed6f4eb..83b15c522 100644 --- a/src/test/regress/sql/drop_column_partitioned_table.sql +++ b/src/test/regress/sql/drop_column_partitioned_table.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA drop_column_partitioned_table; SET search_path TO drop_column_partitioned_table; @@ -209,3 +213,6 @@ ORDER BY 1,2; \c - - - :master_port SET client_min_messages TO WARNING; DROP SCHEMA drop_column_partitioned_table CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/dropped_columns_create_load.sql b/src/test/regress/sql/dropped_columns_create_load.sql index d47c264ea..41f3454c5 100644 --- a/src/test/regress/sql/dropped_columns_create_load.sql +++ b/src/test/regress/sql/dropped_columns_create_load.sql @@ -59,4 +59,3 @@ col_to_drop_4 date, measureid integer NOT NULL, eventdatetime date NOT NULL, mea ALTER TABLE sensors ATTACH PARTITION sensors_2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01'); ALTER TABLE sensors DROP COLUMN col_to_drop_4; -SELECT alter_table_set_access_method('sensors_2004', 'columnar'); diff --git a/src/test/regress/sql/ensure_citus_columnar_not_exists.sql b/src/test/regress/sql/ensure_citus_columnar_not_exists.sql new file mode 100644 index 000000000..b1f1da604 --- /dev/null +++ b/src/test/regress/sql/ensure_citus_columnar_not_exists.sql @@ -0,0 +1,9 @@ +-- When there are no relations using the columnar access method, we don't automatically create +-- "citus_columnar" extension together with "citus" extension. +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + +-- Likewise, we should not have any columnar objects leftover from "old columnar", i.e., the +-- columnar access method that we had before Citus 11.1. 
+SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; diff --git a/src/test/regress/sql/follower_single_node.sql b/src/test/regress/sql/follower_single_node.sql index 71e1dd3bc..b3649cf51 100644 --- a/src/test/regress/sql/follower_single_node.sql +++ b/src/test/regress/sql/follower_single_node.sql @@ -1,4 +1,8 @@ \c - - - :master_port +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA single_node; SET search_path TO single_node; SET citus.shard_count TO 4; diff --git a/src/test/regress/sql/generated_identity.sql b/src/test/regress/sql/generated_identity.sql index 5de9ea692..8dcc3ed06 100644 --- a/src/test/regress/sql/generated_identity.sql +++ b/src/test/regress/sql/generated_identity.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA generated_identities; SET search_path TO generated_identities; SET client_min_messages to ERROR; @@ -375,3 +379,6 @@ ORDER BY table_name, id; SET client_min_messages TO WARNING; DROP SCHEMA generated_identities CASCADE; DROP USER identity_test_user; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/insert_select_into_local_table.sql b/src/test/regress/sql/insert_select_into_local_table.sql index 1b2b49a5d..733b1577a 100644 --- a/src/test/regress/sql/insert_select_into_local_table.sql +++ b/src/test/regress/sql/insert_select_into_local_table.sql @@ -1,3 +1,7 @@ +SET 
client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA insert_select_into_local_table; SET search_path TO insert_select_into_local_table; @@ -598,3 +602,6 @@ ROLLBACK; \set VERBOSITY terse DROP SCHEMA insert_select_into_local_table CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/issue_5248.sql b/src/test/regress/sql/issue_5248.sql index f58e5b1a8..893380338 100644 --- a/src/test/regress/sql/issue_5248.sql +++ b/src/test/regress/sql/issue_5248.sql @@ -2,6 +2,10 @@ -- ISSUE_5248 -- +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA issue_5248; SET search_path TO issue_5248; SET citus.shard_count TO 4; @@ -197,3 +201,6 @@ WHERE pg_catalog.pg_backup_stop() > cast(NULL AS record) limit 100; SET client_min_messages TO WARNING; DROP SCHEMA issue_5248 CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/merge.sql b/src/test/regress/sql/merge.sql index 14dd04e32..7bafa1da5 100644 --- a/src/test/regress/sql/merge.sql +++ b/src/test/regress/sql/merge.sql @@ -6,6 +6,10 @@ DROP SCHEMA IF EXISTS merge_schema CASCADE; --WHEN MATCHED AND --WHEN MATCHED +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA merge_schema; SET search_path TO merge_schema; SET citus.shard_count TO 4; @@ -2610,3 +2614,6 @@ RESET client_min_messages; DROP SERVER foreign_server CASCADE; DROP FUNCTION merge_when_and_write(); DROP SCHEMA merge_schema CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/merge_repartition1.sql b/src/test/regress/sql/merge_repartition1.sql index d0f4b6e56..1a1ead85f 100644 --- a/src/test/regress/sql/merge_repartition1.sql +++ 
b/src/test/regress/sql/merge_repartition1.sql @@ -4,6 +4,10 @@ -- The results should match. This process is repeated for various combinations -- of MERGE SQL. +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + DROP SCHEMA IF EXISTS merge_repartition1_schema CASCADE; CREATE SCHEMA merge_repartition1_schema; SET search_path TO merge_repartition1_schema; @@ -570,3 +574,6 @@ WHEN NOT MATCHED THEN SELECT compare_data(); DROP SCHEMA merge_repartition1_schema CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/multi_drop_extension.sql b/src/test/regress/sql/multi_drop_extension.sql index 0bb3c3ecd..f80b2fe27 100644 --- a/src/test/regress/sql/multi_drop_extension.sql +++ b/src/test/regress/sql/multi_drop_extension.sql @@ -3,6 +3,10 @@ -- -- Tests around dropping and recreating the extension +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + SET citus.next_shard_id TO 550000; @@ -143,3 +147,6 @@ SELECT create_distributed_table('testtableddl', 'distributecol', 'append'); SELECT 1 FROM master_create_empty_shard('testtableddl'); SELECT * FROM testtableddl; DROP TABLE testtableddl; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/multi_extension.sql b/src/test/regress/sql/multi_extension.sql index f2779d65e..8877133f9 100644 --- a/src/test/regress/sql/multi_extension.sql +++ b/src/test/regress/sql/multi_extension.sql @@ -120,7 +120,26 @@ ORDER BY 1, 2; -- DROP EXTENSION pre-created by the regression suite DROP EXTENSION citus; -DROP EXTENSION citus_columnar; + +SET client_min_messages TO WARNING; +DROP EXTENSION IF EXISTS citus_columnar; +RESET client_min_messages; + +CREATE EXTENSION citus; + +-- When there are no relations using the columnar access method, we don't automatically create +-- "citus_columnar" extension 
together with "citus" extension anymore. And as this will always +-- be the case for a fresh "CREATE EXTENSION citus", we know that we should definitely not have +-- "citus_columnar" extension created. +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + +-- Likewise, we should not have any columnar objects leftover from "old columnar", i.e., the +-- columnar access method that we had before Citus 11.1. +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + +DROP EXTENSION citus; \c -- these tests switch between citus versions and call ddl's that require pg_dist_object to be created @@ -327,6 +346,52 @@ ALTER EXTENSION citus UPDATE TO '9.5-1'; ALTER EXTENSION citus UPDATE TO '10.0-4'; SELECT * FROM multi_extension.print_extension_changes(); +-- Update Citus to 13.2-1 and make sure that we don't automatically create +-- citus_columnar extension as we don't have any relations created using columnar. 
+ALTER EXTENSION citus UPDATE TO '13.2-1'; + +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + +-- Unfortunately, our downgrade scripts seem to assume that citus_columnar exists. +-- Seems this has always been the case since the introduction of citus_columnar, +-- so we need to create citus_columnar before the downgrade. +CREATE EXTENSION citus_columnar; +ALTER EXTENSION citus UPDATE TO '11.1-1'; + +-- Update Citus to 13.2-1 and make sure that already having citus_columnar extension +-- doesn't cause any issues. +ALTER EXTENSION citus UPDATE TO '13.2-1'; + +SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists; + +SELECT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_exists; +SELECT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_exists; +SELECT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_exists; + +ALTER EXTENSION citus UPDATE TO '11.1-1'; + +DROP EXTENSION citus_columnar; + +-- Update Citus to 13.2-1 and make sure that NOT having citus_columnar extension +-- doesn't cause any issues. 
+ALTER EXTENSION citus UPDATE TO '13.2-1'; + +SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists; + +SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists; +SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists; + +-- Downgrade back to 10.0-4 for the rest of the tests. +-- +-- same here - to downgrade Citus, first we need to create citus_columnar +CREATE EXTENSION citus_columnar; +ALTER EXTENSION citus UPDATE TO '10.0-4'; + -- not print "HINT: " to hide current lib version \set VERBOSITY terse CREATE TABLE columnar_table(a INT, b INT) USING columnar; @@ -547,6 +612,10 @@ CREATE EXTENSION citus; ALTER EXTENSION citus UPDATE TO '11.1-1'; SELECT * FROM multi_extension.print_extension_changes(); +-- Make sure that citus_columnar is automatically created while updating Citus to 11.1-1 +-- as we created columnar tables using the columnar access method before. 
+SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists; + -- Test downgrade to 11.1-1 from 11.2-1 ALTER EXTENSION citus UPDATE TO '11.2-1'; ALTER EXTENSION citus UPDATE TO '11.1-1'; @@ -772,7 +841,6 @@ ALTER EXTENSION citus UPDATE; -- re-create in newest version DROP EXTENSION citus; -DROP EXTENSION citus_columnar; \c CREATE EXTENSION citus; @@ -780,7 +848,6 @@ CREATE EXTENSION citus; \c - - - :worker_1_port DROP EXTENSION citus; -DROP EXTENSION citus_columnar; SET citus.enable_version_checks TO 'false'; SET columnar.enable_version_checks TO 'false'; CREATE EXTENSION citus VERSION '8.0-1'; @@ -1048,8 +1115,7 @@ SELECT citus_add_local_table_to_metadata('test'); DROP TABLE test; -- Verify that we don't consider the schemas created by extensions as tenant schemas. --- Easiest way of verifying this is to drop and re-create columnar extension. -DROP EXTENSION citus_columnar; +-- Easiest way of verifying this is to create columnar extension. SET citus.enable_schema_based_sharding TO ON; diff --git a/src/test/regress/sql/multi_fix_partition_shard_index_names.sql b/src/test/regress/sql/multi_fix_partition_shard_index_names.sql index d0f789cd9..984614756 100644 --- a/src/test/regress/sql/multi_fix_partition_shard_index_names.sql +++ b/src/test/regress/sql/multi_fix_partition_shard_index_names.sql @@ -301,10 +301,12 @@ SELECT tablename, indexname FROM pg_indexes WHERE schemaname = 'fix_idx_names' O -- create only one shard & one partition so that the output easier to check SET citus.next_shard_id TO 915000; +ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1370100; + SET citus.shard_count TO 1; SET citus.shard_replication_factor TO 1; CREATE TABLE parent_table (dist_col int, another_col int, partition_col timestamp, name text) PARTITION BY RANGE (partition_col); -SELECT create_distributed_table('parent_table', 'dist_col'); +SELECT create_distributed_table('parent_table', 'dist_col', colocate_with=>'none'); CREATE TABLE 
p1 PARTITION OF parent_table FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'); CREATE INDEX i1 ON parent_table(dist_col); @@ -329,6 +331,14 @@ ALTER INDEX p1_pkey RENAME TO p1_pkey_renamed; ALTER INDEX p1_dist_col_partition_col_key RENAME TO p1_dist_col_partition_col_key_renamed; ALTER INDEX p1_dist_col_idx RENAME TO p1_dist_col_idx_renamed; +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + +SET search_path TO fix_idx_names, public; + +SET columnar.compression TO 'zstd'; + -- should be able to create a new partition that is columnar SET citus.log_remote_commands TO ON; CREATE TABLE p2(dist_col int NOT NULL, another_col int, partition_col timestamp NOT NULL, name text) USING columnar; @@ -341,3 +351,6 @@ ALTER TABLE parent_table DROP CONSTRAINT unique_cst CASCADE; SET client_min_messages TO WARNING; DROP SCHEMA fix_idx_names CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/multi_multiuser.sql b/src/test/regress/sql/multi_multiuser.sql index 150abe307..5abf5c5c4 100644 --- a/src/test/regress/sql/multi_multiuser.sql +++ b/src/test/regress/sql/multi_multiuser.sql @@ -4,6 +4,10 @@ -- Test user permissions. 
-- +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + SET citus.next_shard_id TO 1420000; SET citus.shard_replication_factor TO 1; @@ -332,3 +336,6 @@ DROP USER full_access; DROP USER read_access; DROP USER no_access; DROP ROLE some_role; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/multi_tenant_isolation.sql b/src/test/regress/sql/multi_tenant_isolation.sql index c3e51b6cc..f6d98366b 100644 --- a/src/test/regress/sql/multi_tenant_isolation.sql +++ b/src/test/regress/sql/multi_tenant_isolation.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + -- -- MULTI_TENANT_ISOLATION -- @@ -610,3 +614,6 @@ TRUNCATE TABLE pg_catalog.pg_dist_colocation; ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1; ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placement_id; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/multi_tenant_isolation_nonblocking.sql b/src/test/regress/sql/multi_tenant_isolation_nonblocking.sql index 994f29f0a..dc8727548 100644 --- a/src/test/regress/sql/multi_tenant_isolation_nonblocking.sql +++ b/src/test/regress/sql/multi_tenant_isolation_nonblocking.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + -- -- MULTI_TENANT_ISOLATION -- @@ -610,3 +614,6 @@ ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placem -- make sure we don't have any replication objects leftover on the nodes SELECT public.wait_for_resource_cleanup(); + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/mx_regular_user.sql b/src/test/regress/sql/mx_regular_user.sql index 2dbd85c28..3a5e2f115 100644 --- 
a/src/test/regress/sql/mx_regular_user.sql +++ b/src/test/regress/sql/mx_regular_user.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA "Mx Regular User"; SET search_path TO "Mx Regular User"; @@ -345,3 +349,6 @@ SELECT start_metadata_sync_to_node('localhost', :worker_1_port); SELECT start_metadata_sync_to_node('localhost', :worker_2_port); DROP SCHEMA "Mx Regular User" CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/pg12.sql b/src/test/regress/sql/pg12.sql index 831ce40bb..ebaf0b592 100644 --- a/src/test/regress/sql/pg12.sql +++ b/src/test/regress/sql/pg12.sql @@ -2,6 +2,10 @@ SET citus.shard_replication_factor to 1; SET citus.next_shard_id TO 60000; SET citus.next_placement_id TO 60000; +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + create schema test_pg12; set search_path to test_pg12; @@ -394,3 +398,5 @@ drop schema test_pg12 cascade; SET citus.shard_replication_factor to 2; +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/pg14.sql b/src/test/regress/sql/pg14.sql index 47eb67930..c12a8c4fa 100644 --- a/src/test/regress/sql/pg14.sql +++ b/src/test/regress/sql/pg14.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + create schema pg14; set search_path to pg14; SET citus.shard_replication_factor TO 1; @@ -760,3 +764,6 @@ drop extension postgres_fdw cascade; drop schema pg14 cascade; DROP ROLE role_1, r1; reset client_min_messages; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/pg17.sql b/src/test/regress/sql/pg17.sql index 713b58952..8d4c2097b 100644 --- a/src/test/regress/sql/pg17.sql +++ b/src/test/regress/sql/pg17.sql 
@@ -5,6 +5,10 @@ SHOW server_version \gset SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17 \gset +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + -- PG17 has the capabilty to pull up a correlated ANY subquery to a join if -- the subquery only refers to its immediate parent query. Previously, the -- subquery needed to be implemented as a SubPlan node, typically as a @@ -1018,12 +1022,14 @@ CREATE TABLE test_partition_2 PARTITION OF test_partitioned_alter SELECT create_distributed_table('test_partitioned_alter', 'id'); -- Step 4: Verify that the table and partitions are created and distributed correctly on the coordinator -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partitioned_alter'; -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname IN ('test_partition_1', 'test_partition_2') ORDER BY relname; @@ -1032,13 +1038,15 @@ ORDER BY relname; SET search_path TO pg17; -- Verify the table's access method on the worker node -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partitioned_alter'; -- Verify the partitions' access methods on the worker node -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname IN ('test_partition_1', 'test_partition_2') ORDER BY relname; @@ -1055,13 +1063,15 @@ ALTER TABLE test_partitioned_alter SET ACCESS METHOD columnar; -- Reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a2290429eac3217b0c7b0b485db9c2bcc72 -- Verify the parent table's access method -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partitioned_alter'; -- Verify the partitions' access methods -SELECT relname, relam +SELECT relname, 
amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname IN ('test_partition_1', 'test_partition_2') ORDER BY relname; @@ -1069,8 +1079,9 @@ ORDER BY relname; CREATE TABLE test_partition_3 PARTITION OF test_partitioned_alter FOR VALUES FROM (200) TO (300); -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partition_3'; -- Step 6 (Repeat on a Worker Node): Verify that the new partition is created correctly @@ -1078,8 +1089,9 @@ WHERE relname = 'test_partition_3'; SET search_path TO pg17; -- Verify the new partition's access method on the worker node -SELECT relname, relam +SELECT relname, amname FROM pg_class +JOIN pg_am ON (relam = pg_am.oid) WHERE relname = 'test_partition_3'; \c - - - :master_port @@ -1764,3 +1776,6 @@ RESET client_min_messages; DROP ROLE regress_maintain; DROP ROLE regress_no_maintain; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/pg_dump.sql b/src/test/regress/sql/pg_dump.sql index 7604238f0..2cc39a05b 100644 --- a/src/test/regress/sql/pg_dump.sql +++ b/src/test/regress/sql/pg_dump.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE TEMPORARY TABLE output (line text); CREATE SCHEMA dumper; @@ -109,3 +113,6 @@ DROP SCHEMA dumper CASCADE; SELECT tablename FROM pg_tables WHERE schemaname = 'dumper' ORDER BY 1; DROP SCHEMA dumper CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/recurring_outer_join.sql b/src/test/regress/sql/recurring_outer_join.sql index d33309817..014a7e536 100644 --- a/src/test/regress/sql/recurring_outer_join.sql +++ b/src/test/regress/sql/recurring_outer_join.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + CREATE SCHEMA 
recurring_outer_join; SET search_path TO recurring_outer_join; @@ -1020,3 +1024,6 @@ ROLLBACK; SET client_min_messages TO ERROR; DROP SCHEMA recurring_outer_join CASCADE; + +SET client_min_messages TO WARNING; +DROP EXTENSION citus_columnar CASCADE; diff --git a/src/test/regress/sql/rename_public_to_citus_schema_and_recreate.sql b/src/test/regress/sql/rename_public_to_citus_schema_and_recreate.sql new file mode 100644 index 000000000..39c497ef4 --- /dev/null +++ b/src/test/regress/sql/rename_public_to_citus_schema_and_recreate.sql @@ -0,0 +1,2 @@ +ALTER SCHEMA public RENAME TO citus_schema; +CREATE SCHEMA public; diff --git a/src/test/regress/sql/upgrade_columnar_before.sql b/src/test/regress/sql/upgrade_columnar_before.sql index 6f39f4234..c2570aa55 100644 --- a/src/test/regress/sql/upgrade_columnar_before.sql +++ b/src/test/regress/sql/upgrade_columnar_before.sql @@ -1,3 +1,7 @@ +SET client_min_messages TO WARNING; +CREATE EXTENSION IF NOT EXISTS citus_columnar; +RESET client_min_messages; + -- Test if relying on topological sort of the objects, not their names, works -- fine when re-creating objects during pg_upgrade. diff --git a/src/test/regress/sql/upgrade_list_citus_objects.sql b/src/test/regress/sql/upgrade_list_citus_objects.sql index fb761e852..131c00785 100644 --- a/src/test/regress/sql/upgrade_list_citus_objects.sql +++ b/src/test/regress/sql/upgrade_list_citus_objects.sql @@ -5,6 +5,7 @@ -- Here we create a table with only the basic extension types -- in order to avoid printing extra ones for now -- This can be removed when we drop PG16 support. +SET search_path TO citus_schema; CREATE TABLE extension_basic_types (description text); INSERT INTO extension_basic_types VALUES ('type citus.distribution_type'), ('type citus.shard_transfer_mode'),