Not automatically create citus_columnar when creating citus extension (#8081)

DESCRIPTION: Not automatically create citus_columnar when there are no
relations using it.

Previously, we were always creating citus_columnar when creating citus
with version >= 11.1. And how we were doing was as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from
citus, but not actually drop them from the database
* "old columnar" is the one that we had before Citus 11.1 as part of
citus, i.e., before splitting the access method ands its catalog to
citus_columnar.
* Create citus_columnar and attach the SQL objects leftover from old
columnar to it so that we can continue supporting the columnar tables
that user had before Citus 11.1 with citus_columnar.

First part is unchanged, however, now we don't create citus_columnar
automatically anymore if the user didn't have any relations using
columnar. For this reason, as of Citus 13.2, when these SQL objects are
not owned by an extension and there are no relations using columnar
access method, we drop these SQL objects when updating Citus to 13.2.

The net effect is still the same as if we automatically created
citus_columnar and user dropped citus_columnar later, so we should not
have any issues with dropping them.

(**Update:** Seems we've made some assumptions in citus, e.g.,
citus_finish_pg_upgrade() still assumes columnar metadata exists and
tries to apply some fixes for it, so this PR fixes them as well. See the
last section of this PR description.)

Also, ideally I was hoping to just remove some lines of code from
extension.c, where we decide automatically creating citus_columnar when
creating citus, however, this didn't happen to be the case for two
reasons:
* We still need to automatically create it for the servers using
columnar access method.
* We need to clean-up the leftover SQL objects from old columnar when
the above is not case otherwise we would have leftover SQL objects from
old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we
should clean them up and let the user decide whether they want to create
citus_columnar when they really need it later.

---

Also made several changes in the test suite because similarly, we don't
always want to have citus_columnar created in citus tests anymore:
* Now, columnar specific test targets, which cover **41** test sql
files, always install columnar by default, by using
"--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus specific test
targets because by default we don't want to have citus_columnar created
during citus tests.
* Excluding citus_columnar specific tests, we have **601** sql files
that we have as citus tests and in **27** of them we manually create
citus_columnar at the very beginning of the test because these tests do
test some functionalities of citus together with columnar tables.

Also, before and after schedules for PG upgrade tests are now duplicated
so we have two versions of each: one with columnar tests and one
without. To choose between them, check-pg-upgrade now supports a
"test-with-columnar" option, which can be set to "true" or anything else
to logically indicate "false". In CI, we run the check-pg-upgrade test
target with both options. The purpose is to ensure we can test PG
upgrades where citus_columnar is not created in the cluster before the
upgrade as well.

Finally, added more tests to multi_extension.sql to test Citus upgrade
scenarios with / without columnar tables / citus_columnar extension.

---

Also, seems citus_finish_pg_upgrade was assuming that citus_columnar is
always created but actually we should have never made such an
assumption. To fix that, moved columnar specific post-PG-upgrade work
from citus to a new columnar UDF, which is columnar_finish_pg_upgrade.
But to avoid breaking existing customer / managed service scripts, we
continue to automatically perform post PG-upgrade work for columnar
within citus_finish_pg_upgrade, but only if columnar access method
exists this time.
pull/7973/head
Onur Tirtir 2025-08-18 10:29:27 +03:00 committed by GitHub
parent 649050c676
commit 87a1b631e8
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
105 changed files with 1486 additions and 264 deletions

View File

@ -331,7 +331,15 @@ jobs:
make -C src/test/regress \ make -C src/test/regress \
check-pg-upgrade \ check-pg-upgrade \
old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \ old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin \
test-with-columnar=false
gosu circleci \
make -C src/test/regress \
check-pg-upgrade \
old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin \
test-with-columnar=true
- name: Copy pg_upgrade logs for newData dir - name: Copy pg_upgrade logs for newData dir
run: |- run: |-
mkdir -p /tmp/pg_upgrade_newData_logs mkdir -p /tmp/pg_upgrade_newData_logs

View File

@ -1,6 +1,6 @@
# Columnar extension # Columnar extension
comment = 'Citus Columnar extension' comment = 'Citus Columnar extension'
default_version = '12.2-1' default_version = '13.2-1'
module_pathname = '$libdir/citus_columnar' module_pathname = '$libdir/citus_columnar'
relocatable = false relocatable = false
schema = pg_catalog schema = pg_catalog

View File

@ -0,0 +1,3 @@
-- citus_columnar--12.2-1--13.2-1.sql
#include "udfs/columnar_finish_pg_upgrade/13.2-1.sql"

View File

@ -0,0 +1,3 @@
-- citus_columnar--13.2-1--12.2-1.sql
DROP FUNCTION IF EXISTS pg_catalog.columnar_finish_pg_upgrade();

View File

@ -0,0 +1,13 @@
CREATE OR REPLACE FUNCTION pg_catalog.columnar_finish_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
BEGIN
-- set dependencies for columnar table access method
PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.columnar_finish_pg_upgrade()
IS 'perform tasks to properly complete a Postgres upgrade for columnar extension';

View File

@ -0,0 +1,13 @@
CREATE OR REPLACE FUNCTION pg_catalog.columnar_finish_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
BEGIN
-- set dependencies for columnar table access method
PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.columnar_finish_pg_upgrade()
IS 'perform tasks to properly complete a Postgres upgrade for columnar extension';

View File

@ -25,6 +25,12 @@
#include "utils/lsyscache.h" #include "utils/lsyscache.h"
#include "utils/syscache.h" #include "utils/syscache.h"
#include "pg_version_constants.h"
#if PG_VERSION_NUM < PG_VERSION_17
#include "catalog/pg_am_d.h"
#endif
#include "citus_version.h" #include "citus_version.h"
#include "columnar/columnar.h" #include "columnar/columnar.h"
@ -52,6 +58,10 @@ static void MarkExistingObjectDependenciesDistributedIfSupported(void);
static List * GetAllViews(void); static List * GetAllViews(void);
static bool ShouldPropagateExtensionCommand(Node *parseTree); static bool ShouldPropagateExtensionCommand(Node *parseTree);
static bool IsAlterExtensionSetSchemaCitus(Node *parseTree); static bool IsAlterExtensionSetSchemaCitus(Node *parseTree);
static bool HasAnyRelationsUsingOldColumnar(void);
static Oid GetOldColumnarAMIdIfExists(void);
static bool AccessMethodDependsOnAnyExtensions(Oid accessMethodId);
static bool HasAnyRelationsUsingAccessMethod(Oid accessMethodId);
static Node * RecreateExtensionStmt(Oid extensionOid); static Node * RecreateExtensionStmt(Oid extensionOid);
static List * GenerateGrantCommandsOnExtensionDependentFDWs(Oid extensionId); static List * GenerateGrantCommandsOnExtensionDependentFDWs(Oid extensionId);
@ -783,7 +793,8 @@ PreprocessCreateExtensionStmtForCitusColumnar(Node *parsetree)
/*citus version >= 11.1 requires install citus_columnar first*/ /*citus version >= 11.1 requires install citus_columnar first*/
if (versionNumber >= 1110 && !CitusHasBeenLoaded()) if (versionNumber >= 1110 && !CitusHasBeenLoaded())
{ {
if (get_extension_oid("citus_columnar", true) == InvalidOid) if (get_extension_oid("citus_columnar", true) == InvalidOid &&
(versionNumber < 1320 || HasAnyRelationsUsingOldColumnar()))
{ {
CreateExtensionWithVersion("citus_columnar", NULL); CreateExtensionWithVersion("citus_columnar", NULL);
} }
@ -894,9 +905,10 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion)); double newVersionNumber = GetExtensionVersionNumber(pstrdup(newVersion));
/*alter extension citus update to version >= 11.1-1, and no citus_columnar installed */ /*alter extension citus update to version >= 11.1-1, and no citus_columnar installed */
if (newVersionNumber >= 1110 && citusColumnarOid == InvalidOid) if (newVersionNumber >= 1110 && citusColumnarOid == InvalidOid &&
(newVersionNumber < 1320 || HasAnyRelationsUsingOldColumnar()))
{ {
/*it's upgrade citus to 11.1-1 or further version */ /*it's upgrade citus to 11.1-1 or further version and there are relations using old columnar */
CreateExtensionWithVersion("citus_columnar", CITUS_COLUMNAR_INTERNAL_VERSION); CreateExtensionWithVersion("citus_columnar", CITUS_COLUMNAR_INTERNAL_VERSION);
} }
else if (newVersionNumber < 1110 && citusColumnarOid != InvalidOid) else if (newVersionNumber < 1110 && citusColumnarOid != InvalidOid)
@ -911,7 +923,8 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
int versionNumber = (int) (100 * strtod(CITUS_MAJORVERSION, NULL)); int versionNumber = (int) (100 * strtod(CITUS_MAJORVERSION, NULL));
if (versionNumber >= 1110) if (versionNumber >= 1110)
{ {
if (citusColumnarOid == InvalidOid) if (citusColumnarOid == InvalidOid &&
(versionNumber < 1320 || HasAnyRelationsUsingOldColumnar()))
{ {
CreateExtensionWithVersion("citus_columnar", CreateExtensionWithVersion("citus_columnar",
CITUS_COLUMNAR_INTERNAL_VERSION); CITUS_COLUMNAR_INTERNAL_VERSION);
@ -921,6 +934,117 @@ PreprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
} }
/*
* HasAnyRelationsUsingOldColumnar returns true if there are any relations
* using the old columnar access method.
*/
static bool
HasAnyRelationsUsingOldColumnar(void)
{
Oid oldColumnarAMId = GetOldColumnarAMIdIfExists();
return OidIsValid(oldColumnarAMId) &&
HasAnyRelationsUsingAccessMethod(oldColumnarAMId);
}
/*
* GetOldColumnarAMIdIfExists returns the oid of the old columnar access
* method, i.e., the columnar access method that we had as part of "citus"
* extension before we split it into "citus_columnar" at version 11.1, if
* it exists. Otherwise, it returns InvalidOid.
*
* We know that it's "old columnar" only if the access method doesn't depend
* on any extensions. This is because, in citus--11.0-4--11.1-1.sql, we
* detach the columnar objects (including the access method) from citus
* in preparation for splitting of the columnar into a separate extension.
*/
static Oid
GetOldColumnarAMIdIfExists(void)
{
Oid columnarAMId = get_am_oid("columnar", true);
if (OidIsValid(columnarAMId) && !AccessMethodDependsOnAnyExtensions(columnarAMId))
{
return columnarAMId;
}
return InvalidOid;
}
/*
* AccessMethodDependsOnAnyExtensions returns true if the access method
* with the given accessMethodId depends on any extensions.
*/
static bool
AccessMethodDependsOnAnyExtensions(Oid accessMethodId)
{
ScanKeyData key[3];
Relation pgDepend = table_open(DependRelationId, AccessShareLock);
ScanKeyInit(&key[0],
Anum_pg_depend_classid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(AccessMethodRelationId));
ScanKeyInit(&key[1],
Anum_pg_depend_objid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(accessMethodId));
ScanKeyInit(&key[2],
Anum_pg_depend_objsubid,
BTEqualStrategyNumber, F_INT4EQ,
Int32GetDatum(0));
SysScanDesc scan = systable_beginscan(pgDepend, DependDependerIndexId, true,
NULL, 3, key);
bool result = false;
HeapTuple heapTuple = NULL;
while (HeapTupleIsValid(heapTuple = systable_getnext(scan)))
{
Form_pg_depend dependForm = (Form_pg_depend) GETSTRUCT(heapTuple);
if (dependForm->refclassid == ExtensionRelationId)
{
result = true;
break;
}
}
systable_endscan(scan);
table_close(pgDepend, AccessShareLock);
return result;
}
/*
* HasAnyRelationsUsingAccessMethod returns true if there are any relations
* using the access method with the given accessMethodId.
*/
static bool
HasAnyRelationsUsingAccessMethod(Oid accessMethodId)
{
ScanKeyData key[1];
Relation pgClass = table_open(RelationRelationId, AccessShareLock);
ScanKeyInit(&key[0],
Anum_pg_class_relam,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(accessMethodId));
SysScanDesc scan = systable_beginscan(pgClass, InvalidOid, false, NULL, 1, key);
bool result = HeapTupleIsValid(systable_getnext(scan));
systable_endscan(scan);
table_close(pgClass, AccessShareLock);
return result;
}
/* /*
* PostprocessAlterExtensionCitusStmtForCitusColumnar process the case when upgrade citus * PostprocessAlterExtensionCitusStmtForCitusColumnar process the case when upgrade citus
* to version that support citus_columnar, or downgrade citus to lower version that * to version that support citus_columnar, or downgrade citus to lower version that
@ -959,7 +1083,7 @@ PostprocessAlterExtensionCitusStmtForCitusColumnar(Node *parseTree)
{ {
/*alter extension citus update, need upgrade citus_columnar from Y to Z*/ /*alter extension citus update, need upgrade citus_columnar from Y to Z*/
int versionNumber = (int) (100 * strtod(CITUS_MAJORVERSION, NULL)); int versionNumber = (int) (100 * strtod(CITUS_MAJORVERSION, NULL));
if (versionNumber >= 1110) if (versionNumber >= 1110 && citusColumnarOid != InvalidOid)
{ {
char *curColumnarVersion = get_extension_version(citusColumnarOid); char *curColumnarVersion = get_extension_version(citusColumnarOid);
if (strcmp(curColumnarVersion, CITUS_COLUMNAR_INTERNAL_VERSION) == 0) if (strcmp(curColumnarVersion, CITUS_COLUMNAR_INTERNAL_VERSION) == 0)

View File

@ -1249,8 +1249,9 @@ IsObjectAddressOwnedByCitus(const ObjectAddress *objectAddress)
return false; return false;
} }
bool ownedByCitus = extObjectAddress.objectId == citusId; bool ownedByCitus = OidIsValid(citusId) && extObjectAddress.objectId == citusId;
bool ownedByCitusColumnar = extObjectAddress.objectId == citusColumnarId; bool ownedByCitusColumnar = OidIsValid(citusColumnarId) &&
extObjectAddress.objectId == citusColumnarId;
return ownedByCitus || ownedByCitusColumnar; return ownedByCitus || ownedByCitusColumnar;
} }

View File

@ -1,3 +1,72 @@
-- citus--13.1-1--13.2-1 -- citus--13.1-1--13.2-1
-- bump version to 13.2-1 -- bump version to 13.2-1
#include "udfs/worker_last_saved_explain_analyze/13.2-1.sql" #include "udfs/worker_last_saved_explain_analyze/13.2-1.sql"
#include "udfs/citus_finish_pg_upgrade/13.2-1.sql"
DO $drop_leftover_old_columnar_objects$
BEGIN
-- If old columnar exists, i.e., the columnar access method that we had before Citus 11.1,
-- and we don't have any relations using the old columnar, then we want to drop the columnar
-- objects. This is because, we don't want to automatically create the "citus_columnar"
-- extension together with the "citus" extension anymore. And for the cases where we don't
-- want to automatically create the "citus_columnar" extension, there is no point of keeping
-- the columnar objects that we had before Citus 11.1 around.
IF (
SELECT EXISTS (
SELECT 1 FROM pg_am
WHERE
-- looking for an access method whose name is "columnar" ..
pg_am.amname = 'columnar' AND
-- .. and there should *NOT* be such a dependency edge in pg_depend, where ..
NOT EXISTS (
SELECT 1 FROM pg_depend
WHERE
-- .. the depender is columnar access method (2601 = access method class) ..
pg_depend.classid = 2601 AND pg_depend.objid = pg_am.oid AND pg_depend.objsubid = 0 AND
-- .. and the dependee is an extension (3079 = extension class)
pg_depend.refclassid = 3079 AND pg_depend.refobjsubid = 0
LIMIT 1
) AND
-- .. and there should *NOT* be any relations using it
NOT EXISTS (
SELECT 1
FROM pg_class
WHERE pg_class.relam = pg_am.oid
LIMIT 1
)
)
)
THEN
-- Below we drop the columnar objects in such an order that the objects that depend on
-- other objects are dropped first.
DROP VIEW IF EXISTS columnar.options;
DROP VIEW IF EXISTS columnar.stripe;
DROP VIEW IF EXISTS columnar.chunk_group;
DROP VIEW IF EXISTS columnar.chunk;
DROP VIEW IF EXISTS columnar.storage;
DROP ACCESS METHOD IF EXISTS columnar;
DROP SEQUENCE IF EXISTS columnar_internal.storageid_seq;
DROP TABLE IF EXISTS columnar_internal.options;
DROP TABLE IF EXISTS columnar_internal.stripe;
DROP TABLE IF EXISTS columnar_internal.chunk_group;
DROP TABLE IF EXISTS columnar_internal.chunk;
DROP FUNCTION IF EXISTS columnar_internal.columnar_handler;
DROP FUNCTION IF EXISTS pg_catalog.alter_columnar_table_set;
DROP FUNCTION IF EXISTS pg_catalog.alter_columnar_table_reset;
DROP FUNCTION IF EXISTS columnar.get_storage_id;
DROP FUNCTION IF EXISTS citus_internal.upgrade_columnar_storage;
DROP FUNCTION IF EXISTS citus_internal.downgrade_columnar_storage;
DROP FUNCTION IF EXISTS citus_internal.columnar_ensure_am_depends_catalog;
DROP SCHEMA IF EXISTS columnar;
DROP SCHEMA IF EXISTS columnar_internal;
END IF;
END $drop_leftover_old_columnar_objects$;

View File

@ -3,3 +3,11 @@
DROP FUNCTION IF EXISTS pg_catalog.worker_last_saved_explain_analyze(); DROP FUNCTION IF EXISTS pg_catalog.worker_last_saved_explain_analyze();
#include "../udfs/worker_last_saved_explain_analyze/9.4-1.sql" #include "../udfs/worker_last_saved_explain_analyze/9.4-1.sql"
#include "../udfs/citus_finish_pg_upgrade/13.1-1.sql"
-- Note that we intentionally don't add the old columnar objects back to the "citus"
-- extension in this downgrade script, even if they were present in the older version.
--
-- If the user wants to create "citus_columnar" extension later, "citus_columnar"
-- will anyway properly create them at the scope of that extension.

View File

@ -0,0 +1,260 @@
CREATE OR REPLACE FUNCTION pg_catalog.citus_finish_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
DECLARE
table_name regclass;
command text;
trigger_name text;
BEGIN
IF substring(current_Setting('server_version'), '\d+')::int >= 14 THEN
EXECUTE $cmd$
-- disable propagation to prevent EnsureCoordinator errors
-- the aggregate created here does not depend on Citus extension (yet)
-- since we add the dependency with the next command
SET citus.enable_ddl_propagation TO OFF;
CREATE AGGREGATE array_cat_agg(anycompatiblearray) (SFUNC = array_cat, STYPE = anycompatiblearray);
COMMENT ON AGGREGATE array_cat_agg(anycompatiblearray)
IS 'concatenate input arrays into a single array';
RESET citus.enable_ddl_propagation;
$cmd$;
ELSE
EXECUTE $cmd$
SET citus.enable_ddl_propagation TO OFF;
CREATE AGGREGATE array_cat_agg(anyarray) (SFUNC = array_cat, STYPE = anyarray);
COMMENT ON AGGREGATE array_cat_agg(anyarray)
IS 'concatenate input arrays into a single array';
RESET citus.enable_ddl_propagation;
$cmd$;
END IF;
--
-- Citus creates the array_cat_agg but because of a compatibility
-- issue between pg13-pg14, we drop and create it during upgrade.
-- And as Citus creates it, there needs to be a dependency to the
-- Citus extension, so we create that dependency here.
-- We are not using:
-- ALTER EXENSION citus DROP/CREATE AGGREGATE array_cat_agg
-- because we don't have an easy way to check if the aggregate
-- exists with anyarray type or anycompatiblearray type.
INSERT INTO pg_depend
SELECT
'pg_proc'::regclass::oid as classid,
(SELECT oid FROM pg_proc WHERE proname = 'array_cat_agg') as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'e' as deptype;
-- PG16 has its own any_value, so only create it pre PG16.
-- We can remove this part when we drop support for PG16
IF substring(current_Setting('server_version'), '\d+')::int < 16 THEN
EXECUTE $cmd$
-- disable propagation to prevent EnsureCoordinator errors
-- the aggregate created here does not depend on Citus extension (yet)
-- since we add the dependency with the next command
SET citus.enable_ddl_propagation TO OFF;
CREATE OR REPLACE FUNCTION pg_catalog.any_value_agg ( anyelement, anyelement )
RETURNS anyelement AS $$
SELECT CASE WHEN $1 IS NULL THEN $2 ELSE $1 END;
$$ LANGUAGE SQL STABLE;
CREATE AGGREGATE pg_catalog.any_value (
sfunc = pg_catalog.any_value_agg,
combinefunc = pg_catalog.any_value_agg,
basetype = anyelement,
stype = anyelement
);
COMMENT ON AGGREGATE pg_catalog.any_value(anyelement) IS
'Returns the value of any row in the group. It is mostly useful when you know there will be only 1 element.';
RESET citus.enable_ddl_propagation;
--
-- Citus creates the any_value aggregate but because of a compatibility
-- issue between pg15-pg16 -- any_value is created in PG16, we drop
-- and create it during upgrade IF upgraded version is less than 16.
-- And as Citus creates it, there needs to be a dependency to the
-- Citus extension, so we create that dependency here.
INSERT INTO pg_depend
SELECT
'pg_proc'::regclass::oid as classid,
(SELECT oid FROM pg_proc WHERE proname = 'any_value_agg') as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'e' as deptype;
INSERT INTO pg_depend
SELECT
'pg_proc'::regclass::oid as classid,
(SELECT oid FROM pg_proc WHERE proname = 'any_value') as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'e' as deptype;
$cmd$;
END IF;
--
-- restore citus catalog tables
--
INSERT INTO pg_catalog.pg_dist_partition SELECT * FROM public.pg_dist_partition;
-- if we are upgrading from PG14/PG15 to PG16+,
-- we need to regenerate the partkeys because they will include varnullingrels as well.
UPDATE pg_catalog.pg_dist_partition
SET partkey = column_name_to_column(pg_dist_partkeys_pre_16_upgrade.logicalrelid, col_name)
FROM public.pg_dist_partkeys_pre_16_upgrade
WHERE pg_dist_partkeys_pre_16_upgrade.logicalrelid = pg_dist_partition.logicalrelid;
DROP TABLE public.pg_dist_partkeys_pre_16_upgrade;
INSERT INTO pg_catalog.pg_dist_shard SELECT * FROM public.pg_dist_shard;
INSERT INTO pg_catalog.pg_dist_placement SELECT * FROM public.pg_dist_placement;
INSERT INTO pg_catalog.pg_dist_node_metadata SELECT * FROM public.pg_dist_node_metadata;
INSERT INTO pg_catalog.pg_dist_node SELECT * FROM public.pg_dist_node;
INSERT INTO pg_catalog.pg_dist_local_group SELECT * FROM public.pg_dist_local_group;
INSERT INTO pg_catalog.pg_dist_transaction SELECT * FROM public.pg_dist_transaction;
INSERT INTO pg_catalog.pg_dist_colocation SELECT * FROM public.pg_dist_colocation;
INSERT INTO pg_catalog.pg_dist_cleanup SELECT * FROM public.pg_dist_cleanup;
INSERT INTO pg_catalog.pg_dist_schema SELECT schemaname::regnamespace, colocationid FROM public.pg_dist_schema;
-- enterprise catalog tables
INSERT INTO pg_catalog.pg_dist_authinfo SELECT * FROM public.pg_dist_authinfo;
INSERT INTO pg_catalog.pg_dist_poolinfo SELECT * FROM public.pg_dist_poolinfo;
-- Temporarily disable trigger to check for validity of functions while
-- inserting. The current contents of the table might be invalid if one of
-- the functions was removed by the user without also removing the
-- rebalance strategy. Obviously that's not great, but it should be no
-- reason to fail the upgrade.
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy DISABLE TRIGGER pg_dist_rebalance_strategy_validation_trigger;
INSERT INTO pg_catalog.pg_dist_rebalance_strategy SELECT
name,
default_strategy,
shard_cost_function::regprocedure::regproc,
node_capacity_function::regprocedure::regproc,
shard_allowed_on_node_function::regprocedure::regproc,
default_threshold,
minimum_threshold,
improvement_threshold
FROM public.pg_dist_rebalance_strategy;
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy ENABLE TRIGGER pg_dist_rebalance_strategy_validation_trigger;
--
-- drop backup tables
--
DROP TABLE public.pg_dist_authinfo;
DROP TABLE public.pg_dist_colocation;
DROP TABLE public.pg_dist_local_group;
DROP TABLE public.pg_dist_node;
DROP TABLE public.pg_dist_node_metadata;
DROP TABLE public.pg_dist_partition;
DROP TABLE public.pg_dist_placement;
DROP TABLE public.pg_dist_poolinfo;
DROP TABLE public.pg_dist_shard;
DROP TABLE public.pg_dist_transaction;
DROP TABLE public.pg_dist_rebalance_strategy;
DROP TABLE public.pg_dist_cleanup;
DROP TABLE public.pg_dist_schema;
--
-- reset sequences
--
PERFORM setval('pg_catalog.pg_dist_shardid_seq', (SELECT MAX(shardid)+1 AS max_shard_id FROM pg_dist_shard), false);
PERFORM setval('pg_catalog.pg_dist_placement_placementid_seq', (SELECT MAX(placementid)+1 AS max_placement_id FROM pg_dist_placement), false);
PERFORM setval('pg_catalog.pg_dist_groupid_seq', (SELECT MAX(groupid)+1 AS max_group_id FROM pg_dist_node), false);
PERFORM setval('pg_catalog.pg_dist_node_nodeid_seq', (SELECT MAX(nodeid)+1 AS max_node_id FROM pg_dist_node), false);
PERFORM setval('pg_catalog.pg_dist_colocationid_seq', (SELECT MAX(colocationid)+1 AS max_colocation_id FROM pg_dist_colocation), false);
PERFORM setval('pg_catalog.pg_dist_operationid_seq', (SELECT MAX(operation_id)+1 AS max_operation_id FROM pg_dist_cleanup), false);
PERFORM setval('pg_catalog.pg_dist_cleanup_recordid_seq', (SELECT MAX(record_id)+1 AS max_record_id FROM pg_dist_cleanup), false);
PERFORM setval('pg_catalog.pg_dist_clock_logical_seq', (SELECT last_value FROM public.pg_dist_clock_logical_seq), false);
DROP TABLE public.pg_dist_clock_logical_seq;
--
-- register triggers
--
FOR table_name IN SELECT logicalrelid FROM pg_catalog.pg_dist_partition JOIN pg_class ON (logicalrelid = oid) WHERE relkind <> 'f'
LOOP
trigger_name := 'truncate_trigger_' || table_name::oid;
command := 'create trigger ' || trigger_name || ' after truncate on ' || table_name || ' execute procedure pg_catalog.citus_truncate_trigger()';
EXECUTE command;
command := 'update pg_trigger set tgisinternal = true where tgname = ' || quote_literal(trigger_name);
EXECUTE command;
END LOOP;
--
-- set dependencies
--
INSERT INTO pg_depend
SELECT
'pg_class'::regclass::oid as classid,
p.logicalrelid::regclass::oid as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'n' as deptype
FROM pg_catalog.pg_dist_partition p;
-- If citus_columnar extension exists, then perform the post PG-upgrade work for columnar as well.
--
-- First look if pg_catalog.columnar_finish_pg_upgrade function exists as part of the citus_columnar
-- extension. (We check whether it's part of the extension just for security reasons). If it does, then
-- call it. If not, then look for columnar_internal.columnar_ensure_am_depends_catalog function and as
-- part of the citus_columnar extension. If so, then call it. We alternatively check for the latter UDF
-- just because pg_catalog.columnar_finish_pg_upgrade function is introduced in citus_columnar 13.2-1
-- and as of today all it does is to call columnar_internal.columnar_ensure_am_depends_catalog function.
IF EXISTS (
SELECT 1 FROM pg_depend
JOIN pg_proc ON (pg_depend.objid = pg_proc.oid)
JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid)
JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid)
WHERE
-- Looking if pg_catalog.columnar_finish_pg_upgrade function exists and
-- if there is a dependency record from it (proc class = 1255) ..
pg_depend.classid = 1255 AND pg_namespace.nspname = 'pg_catalog' AND pg_proc.proname = 'columnar_finish_pg_upgrade' AND
-- .. to citus_columnar extension (3079 = extension class), if it exists.
pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar'
)
THEN PERFORM pg_catalog.columnar_finish_pg_upgrade();
ELSIF EXISTS (
SELECT 1 FROM pg_depend
JOIN pg_proc ON (pg_depend.objid = pg_proc.oid)
JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid)
JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid)
WHERE
-- Looking if columnar_internal.columnar_ensure_am_depends_catalog function exists and
-- if there is a dependency record from it (proc class = 1255) ..
pg_depend.classid = 1255 AND pg_namespace.nspname = 'columnar_internal' AND pg_proc.proname = 'columnar_ensure_am_depends_catalog' AND
-- .. to citus_columnar extension (3079 = extension class), if it exists.
pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar'
)
THEN PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END IF;
-- restore pg_dist_object from the stable identifiers
TRUNCATE pg_catalog.pg_dist_object;
INSERT INTO pg_catalog.pg_dist_object (classid, objid, objsubid, distribution_argument_index, colocationid)
SELECT
address.classid,
address.objid,
address.objsubid,
naming.distribution_argument_index,
naming.colocationid
FROM
public.pg_dist_object naming,
pg_catalog.pg_get_object_address(naming.type, naming.object_names, naming.object_args) address;
DROP TABLE public.pg_dist_object;
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.citus_finish_pg_upgrade()
IS 'perform tasks to restore citus settings from a location that has been prepared before pg_upgrade';

View File

@ -203,8 +203,41 @@ BEGIN
'n' as deptype 'n' as deptype
FROM pg_catalog.pg_dist_partition p; FROM pg_catalog.pg_dist_partition p;
-- set dependencies for columnar table access method -- If citus_columnar extension exists, then perform the post PG-upgrade work for columnar as well.
PERFORM columnar_internal.columnar_ensure_am_depends_catalog(); --
-- First look if pg_catalog.columnar_finish_pg_upgrade function exists as part of the citus_columnar
-- extension. (We check whether it's part of the extension just for security reasons). If it does, then
-- call it. If not, then look for columnar_internal.columnar_ensure_am_depends_catalog function and as
-- part of the citus_columnar extension. If so, then call it. We alternatively check for the latter UDF
-- just because pg_catalog.columnar_finish_pg_upgrade function is introduced in citus_columnar 13.2-1
-- and as of today all it does is to call columnar_internal.columnar_ensure_am_depends_catalog function.
IF EXISTS (
SELECT 1 FROM pg_depend
JOIN pg_proc ON (pg_depend.objid = pg_proc.oid)
JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid)
JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid)
WHERE
-- Looking if pg_catalog.columnar_finish_pg_upgrade function exists and
-- if there is a dependency record from it (proc class = 1255) ..
pg_depend.classid = 1255 AND pg_namespace.nspname = 'pg_catalog' AND pg_proc.proname = 'columnar_finish_pg_upgrade' AND
-- .. to citus_columnar extension (3079 = extension class), if it exists.
pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar'
)
THEN PERFORM pg_catalog.columnar_finish_pg_upgrade();
ELSIF EXISTS (
SELECT 1 FROM pg_depend
JOIN pg_proc ON (pg_depend.objid = pg_proc.oid)
JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid)
JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid)
WHERE
-- Looking if columnar_internal.columnar_ensure_am_depends_catalog function exists and
-- if there is a dependency record from it (proc class = 1255) ..
pg_depend.classid = 1255 AND pg_namespace.nspname = 'columnar_internal' AND pg_proc.proname = 'columnar_ensure_am_depends_catalog' AND
-- .. to citus_columnar extension (3079 = extension class), if it exists.
pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar'
)
THEN PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END IF;
-- restore pg_dist_object from the stable identifiers -- restore pg_dist_object from the stable identifiers
TRUNCATE pg_catalog.pg_dist_object; TRUNCATE pg_catalog.pg_dist_object;

View File

@ -112,6 +112,5 @@ extern void UndistributeDisconnectedCitusLocalTables(void);
extern void NotifyUtilityHookConstraintDropped(void); extern void NotifyUtilityHookConstraintDropped(void);
extern void ResetConstraintDropped(void); extern void ResetConstraintDropped(void);
extern void ExecuteDistributedDDLJob(DDLJob *ddlJob); extern void ExecuteDistributedDDLJob(DDLJob *ddlJob);
extern void ColumnarTableSetOptionsHook(Oid relationId, ColumnarOptions options);
#endif /* MULTI_UTILITY_H */ #endif /* MULTI_UTILITY_H */

View File

@ -146,7 +146,6 @@ check-isolation-custom-schedule-vg: all $(isolation_test_files)
--valgrind --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ --valgrind --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \
-- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS)
check-empty: all check-empty: all
$(pg_regress_multi_check) --load-extension=citus \ $(pg_regress_multi_check) --load-extension=citus \
-- $(MULTI_REGRESS_OPTS) $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) $(EXTRA_TESTS)
@ -180,11 +179,6 @@ check-multi-1-vg: all
--pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \ --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/multi_1_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/multi_1_schedule $(EXTRA_TESTS)
check-columnar-vg: all
$(pg_regress_multi_check) --load-extension=citus --valgrind \
--pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule $(EXTRA_TESTS)
check-isolation: all $(isolation_test_files) check-isolation: all $(isolation_test_files)
$(pg_regress_multi_check) --load-extension=citus --isolationtester \ $(pg_regress_multi_check) --load-extension=citus --isolationtester \
-- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/isolation_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/isolation_schedule $(EXTRA_TESTS)
@ -227,14 +221,42 @@ check-operations: all
$(pg_regress_multi_check) --load-extension=citus --worker-count=6 \ $(pg_regress_multi_check) --load-extension=citus --worker-count=6 \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/operations_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/operations_schedule $(EXTRA_TESTS)
check-columnar-minimal:
$(pg_regress_multi_check) --load-extension=citus_columnar \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/minimal_columnar_schedule $(EXTRA_TESTS)
check-columnar: all check-columnar: all
$(pg_regress_multi_check) --load-extension=citus_columnar --load-extension=citus \ $(pg_regress_multi_check) --load-extension=citus_columnar \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule $(EXTRA_TESTS)
check-columnar-vg: all
$(pg_regress_multi_check) --load-extension=citus_columnar --valgrind \
--pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/columnar_schedule $(EXTRA_TESTS)
check-columnar-isolation: all $(isolation_test_files) check-columnar-isolation: all $(isolation_test_files)
$(pg_regress_multi_check) --load-extension=citus --isolationtester \ $(pg_regress_multi_check) --load-extension=citus_columnar --isolationtester \
-- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/columnar_isolation_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/columnar_isolation_schedule $(EXTRA_TESTS)
check-columnar-custom-schedule: all
$(pg_regress_multi_check) --load-extension=citus_columnar \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS)
check-columnar-custom-schedule-vg: all
$(pg_regress_multi_check) --load-extension=citus_columnar \
--valgrind --pg_ctl-timeout=360 --connection-timeout=500000 \
--valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS)
check-columnar-isolation-custom-schedule: all $(isolation_test_files)
$(pg_regress_multi_check) --load-extension=citus_columnar --isolationtester \
-- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS)
check-columnar-isolation-custom-schedule-vg: all $(isolation_test_files)
$(pg_regress_multi_check) --load-extension=citus_columnar --isolationtester \
--valgrind --pg_ctl-timeout=360 --connection-timeout=500000 --valgrind-path=valgrind --valgrind-log-file=$(CITUS_VALGRIND_LOG_FILE) \
-- $(MULTI_REGRESS_OPTS) --inputdir=$(citus_abs_srcdir)/build --schedule=$(citus_abs_srcdir)/$(SCHEDULE) $(EXTRA_TESTS)
check-split: all check-split: all
$(pg_regress_multi_check) --load-extension=citus \ $(pg_regress_multi_check) --load-extension=citus \
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/split_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/split_schedule $(EXTRA_TESTS)
@ -252,7 +274,7 @@ check-enterprise-failure: all
-- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/enterprise_failure_schedule $(EXTRA_TESTS) -- $(MULTI_REGRESS_OPTS) --schedule=$(citus_abs_srcdir)/enterprise_failure_schedule $(EXTRA_TESTS)
check-pg-upgrade: all check-pg-upgrade: all
$(pg_upgrade_check) --old-bindir=$(old-bindir) --new-bindir=$(new-bindir) --pgxsdir=$(pgxsdir) $(pg_upgrade_check) --old-bindir=$(old-bindir) --new-bindir=$(new-bindir) --pgxsdir=$(pgxsdir) $(if $(filter true,$(test-with-columnar)),--test-with-columnar)
check-arbitrary-configs: all check-arbitrary-configs: all
${arbitrary_config_check} --bindir=$(bindir) --pgxsdir=$(pgxsdir) --parallel=$(parallel) --configs=$(CONFIGS) --seed=$(seed) ${arbitrary_config_check} --bindir=$(bindir) --pgxsdir=$(pgxsdir) --parallel=$(parallel) --configs=$(CONFIGS) --seed=$(seed)
@ -301,4 +323,3 @@ clean distclean maintainer-clean:
rm -rf input/ output/ rm -rf input/ output/
rm -rf tmp_check/ rm -rf tmp_check/
rm -rf tmp_citus_test/ rm -rf tmp_citus_test/

View File

@ -54,6 +54,8 @@ of the following commands to do so:
```bash ```bash
# If your tests needs almost no setup you can use check-minimal # If your tests needs almost no setup you can use check-minimal
make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_utility_warnings' make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_utility_warnings'
# For columnar specific tests, use check-columnar-minimal instead of check-minimal
make install -j9 && make -C src/test/regress/ check-columnar-minimal
# Often tests need some testing data, if you get missing table errors using # Often tests need some testing data, if you get missing table errors using
# check-minimal you should try check-base # check-minimal you should try check-base
make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='with_prepare' make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='with_prepare'
@ -92,6 +94,13 @@ make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_u
cp src/test/regress/{results,expected}/multi_utility_warnings.out cp src/test/regress/{results,expected}/multi_utility_warnings.out
``` ```
Or if it's a columnar test, you can use:
```bash
make install -j9 && make -C src/test/regress/ check-columnar-minimal EXTRA_TESTS='multi_utility_warnings'
cp src/test/regress/{results,expected}/multi_utility_warnings.out
```
## Adding a new test file ## Adding a new test file
Adding a new test file is quite simple: Adding a new test file is quite simple:

View File

@ -0,0 +1,11 @@
test: upgrade_basic_after upgrade_ref2ref_after upgrade_type_after upgrade_distributed_function_after upgrade_rebalance_strategy_after upgrade_list_citus_objects upgrade_autoconverted_after upgrade_citus_stat_activity upgrade_citus_locks upgrade_single_shard_table_after upgrade_schema_based_sharding_after upgrade_basic_after_non_mixed
# This test cannot be run with run_test.py currently due to its dependence on
# the specific PG versions that we use to run upgrade tests. For now we leave
# it out of the parallel line, so that flaky test detection can at least work
# for the other tests.
test: upgrade_distributed_triggers_after
# The last test to ensure citus columnar not automatically created and upgrade
# went fine without automatically creating it.
test: ensure_citus_columnar_not_exists

View File

@ -1,7 +1,7 @@
# ---------- # ----------
# Only run few basic tests to set up a testing environment # Only run few basic tests to set up a testing environment
# ---------- # ----------
test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw columnar_test_helpers failure_test_helpers test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw failure_test_helpers
test: multi_cluster_management test: multi_cluster_management
test: multi_test_catalog_views test: multi_test_catalog_views
test: multi_create_table test: multi_create_table

View File

@ -0,0 +1,16 @@
# The basic tests runs analyze which depends on shard numbers
test: multi_test_helpers multi_test_helpers_superuser upgrade_basic_before_non_mixed
test: multi_test_catalog_views
test: upgrade_basic_before
test: upgrade_ref2ref_before
test: upgrade_type_before
test: upgrade_distributed_function_before upgrade_rebalance_strategy_before
test: upgrade_autoconverted_before upgrade_single_shard_table_before upgrade_schema_based_sharding_before
test: upgrade_citus_stat_activity
test: upgrade_citus_locks
test: upgrade_distributed_triggers_before
# The last test, i.e., upgrade_columnar_before, in before_pg_upgrade_with_columnar_schedule
# renames public schema to citus_schema and re-creates public schema, so we also do the same
# here to have compatible output in after schedule tests for both schedules.
test: rename_public_to_citus_schema_and_recreate

View File

@ -25,8 +25,14 @@ ARBITRARY_SCHEDULE_NAMES = [
"single_shard_table_prep_schedule", "single_shard_table_prep_schedule",
] ]
BEFORE_PG_UPGRADE_SCHEDULE = "./before_pg_upgrade_schedule" BEFORE_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE = "./before_pg_upgrade_with_columnar_schedule"
AFTER_PG_UPGRADE_SCHEDULE = "./after_pg_upgrade_schedule" BEFORE_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE = (
"./before_pg_upgrade_without_columnar_schedule"
)
AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE = "./after_pg_upgrade_with_columnar_schedule"
AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE = (
"./after_pg_upgrade_without_columnar_schedule"
)
CREATE_SCHEDULE = "./create_schedule" CREATE_SCHEDULE = "./create_schedule"
POSTGRES_SCHEDULE = "./postgres_schedule" POSTGRES_SCHEDULE = "./postgres_schedule"
@ -104,6 +110,7 @@ class CitusBaseClusterConfig(object, metaclass=NewInitCaller):
self.is_mx = True self.is_mx = True
self.is_citus = True self.is_citus = True
self.all_null_dist_key = False self.all_null_dist_key = False
self.test_with_columnar = False
self.name = type(self).__name__ self.name = type(self).__name__
self.settings = { self.settings = {
"shared_preload_libraries": "citus", "shared_preload_libraries": "citus",
@ -402,3 +409,4 @@ class PGUpgradeConfig(CitusBaseClusterConfig):
self.old_datadir = self.temp_dir + "/oldData" self.old_datadir = self.temp_dir + "/oldData"
self.new_datadir = self.temp_dir + "/newData" self.new_datadir = self.temp_dir + "/newData"
self.user = SUPER_USER_NAME self.user = SUPER_USER_NAME
self.test_with_columnar = arguments["--test-with-columnar"]

View File

@ -209,7 +209,7 @@ DEPS = {
), ),
"limit_intermediate_size": TestDeps("base_schedule"), "limit_intermediate_size": TestDeps("base_schedule"),
"columnar_drop": TestDeps( "columnar_drop": TestDeps(
"minimal_schedule", "minimal_columnar_schedule",
["columnar_create", "columnar_load"], ["columnar_create", "columnar_load"],
repeatable=False, repeatable=False,
), ),
@ -335,6 +335,15 @@ def run_schedule_with_multiregress(test_name, schedule, dependencies, args):
"failure" "failure"
): ):
make_recipe = "check-failure-custom-schedule" make_recipe = "check-failure-custom-schedule"
elif test_name.startswith("columnar"):
if dependencies.schedule is None:
# Columanar isolation tests don't depend on a base schedule,
# so this must be a columnar isolation test.
make_recipe = "check-columnar-isolation-custom-schedule"
elif dependencies.schedule == "minimal_columnar_schedule":
make_recipe = "check-columnar-custom-schedule"
else:
raise Exception("Columnar test could not be found in any schedule")
else: else:
make_recipe = "check-custom-schedule" make_recipe = "check-custom-schedule"
@ -352,6 +361,9 @@ def run_schedule_with_multiregress(test_name, schedule, dependencies, args):
def default_base_schedule(test_schedule, args): def default_base_schedule(test_schedule, args):
if "isolation" in test_schedule: if "isolation" in test_schedule:
if "columnar" in test_schedule:
# we don't have pre-requisites for columnar isolation tests
return None
return "base_isolation_schedule" return "base_isolation_schedule"
if "failure" in test_schedule: if "failure" in test_schedule:
@ -374,6 +386,9 @@ def default_base_schedule(test_schedule, args):
if "pg_upgrade" in test_schedule: if "pg_upgrade" in test_schedule:
return "minimal_pg_upgrade_schedule" return "minimal_pg_upgrade_schedule"
if "columnar" in test_schedule:
return "minimal_columnar_schedule"
if test_schedule in ARBITRARY_SCHEDULE_NAMES: if test_schedule in ARBITRARY_SCHEDULE_NAMES:
print(f"WARNING: Arbitrary config schedule ({test_schedule}) is not supported.") print(f"WARNING: Arbitrary config schedule ({test_schedule}) is not supported.")
sys.exit(0) sys.exit(0)

View File

@ -3,6 +3,8 @@ import pytest
def test_freezing(coord): def test_freezing(coord):
coord.sql("CREATE EXTENSION IF NOT EXISTS citus_columnar")
coord.configure("vacuum_freeze_min_age = 50000", "vacuum_freeze_table_age = 50000") coord.configure("vacuum_freeze_min_age = 50000", "vacuum_freeze_table_age = 50000")
coord.restart() coord.restart()
@ -38,8 +40,12 @@ def test_freezing(coord):
) )
assert frozen_age < 70_000, "columnar table was not frozen" assert frozen_age < 70_000, "columnar table was not frozen"
coord.sql("DROP EXTENSION citus_columnar CASCADE")
def test_recovery(coord): def test_recovery(coord):
coord.sql("CREATE EXTENSION IF NOT EXISTS citus_columnar")
# create columnar table and insert simple data to verify the data survives a crash # create columnar table and insert simple data to verify the data survives a crash
coord.sql("CREATE TABLE t1 (a int, b text) USING columnar") coord.sql("CREATE TABLE t1 (a int, b text) USING columnar")
coord.sql( coord.sql(
@ -115,3 +121,5 @@ def test_recovery(coord):
row_count = coord.sql_value("SELECT count(*) FROM t1") row_count = coord.sql_value("SELECT count(*) FROM t1")
assert row_count == 1007, "columnar didn't recover after copy" assert row_count == 1007, "columnar didn't recover after copy"
coord.sql("DROP EXTENSION citus_columnar CASCADE")

View File

@ -30,9 +30,14 @@ Before running the script, make sure that:
- Finally run upgrade test in `citus/src/test/regress`: - Finally run upgrade test in `citus/src/test/regress`:
```bash ```bash
pipenv run make check-pg-upgrade old-bindir=<old-bindir> new-bindir=<new-bindir> pipenv run make check-pg-upgrade old-bindir=<old-bindir> new-bindir=<new-bindir> test-with-columnar=<true|false>
``` ```
When test-with-columnar is provided as true, before_pg_upgrade_with_columnar_schedule /
after_pg_upgrade_with_columnar_schedule is used before / after upgrading Postgres during the
tests and before_pg_upgrade_without_columnar_schedule / after_pg_upgrade_without_columnar_schedule
is used otherwise.
To see full command list: To see full command list:
```bash ```bash
@ -43,9 +48,9 @@ How the postgres upgrade test works:
- Temporary folder `tmp_upgrade` is created in `src/test/regress/`, if one exists it is removed first. - Temporary folder `tmp_upgrade` is created in `src/test/regress/`, if one exists it is removed first.
- Database is initialized and citus cluster is created(1 coordinator + 2 workers) with old postgres. - Database is initialized and citus cluster is created(1 coordinator + 2 workers) with old postgres.
- `before_pg_upgrade_schedule` is run with `pg_regress`. This schedule sets up any - `before_pg_upgrade_with_columnar_schedule` / `before_pg_upgrade_without_columnar_schedule` is run with `pg_regress`. This schedule sets up any
objects and data that will be tested for preservation after the upgrade. It objects and data that will be tested for preservation after the upgrade. It
- `after_pg_upgrade_schedule` is run with `pg_regress` to verify that the output - `after_pg_upgrade_with_columnar_schedule` / `after_pg_upgrade_without_columnar_schedule` is run with `pg_regress` to verify that the output
of those tests is the same before the upgrade as after. of those tests is the same before the upgrade as after.
- `citus_prepare_pg_upgrade` is run in coordinators and workers. - `citus_prepare_pg_upgrade` is run in coordinators and workers.
- Old database is stopped. - Old database is stopped.
@ -53,7 +58,7 @@ How the postgres upgrade test works:
- Postgres upgrade is performed. - Postgres upgrade is performed.
- New database is started in both coordinators and workers. - New database is started in both coordinators and workers.
- `citus_finish_pg_upgrade` is run in coordinators and workers to finalize the upgrade step. - `citus_finish_pg_upgrade` is run in coordinators and workers to finalize the upgrade step.
- `after_pg_upgrade_schedule` is run with `pg_regress` to verify that the previously created tables, and data still exist. Router and realtime queries are used to verify this. - `after_pg_upgrade_with_columnar_schedule` / `after_pg_upgrade_without_columnar_schedule` is run with `pg_regress` to verify that the previously created tables, and data still exist. Router and realtime queries are used to verify this.
### Writing new PG upgrade tests ### Writing new PG upgrade tests

View File

@ -2,12 +2,13 @@
"""upgrade_test """upgrade_test
Usage: Usage:
upgrade_test --old-bindir=<old-bindir> --new-bindir=<new-bindir> --pgxsdir=<pgxsdir> upgrade_test --old-bindir=<old-bindir> --new-bindir=<new-bindir> --pgxsdir=<pgxsdir> [--test-with-columnar]
Options: Options:
--old-bindir=<old-bindir> The old PostgreSQL executable directory(ex: '~/.pgenv/pgsql-10.4/bin') --old-bindir=<old-bindir> The old PostgreSQL executable directory(ex: '~/.pgenv/pgsql-10.4/bin')
--new-bindir=<new-bindir> The new PostgreSQL executable directory(ex: '~/.pgenv/pgsql-11.3/bin') --new-bindir=<new-bindir> The new PostgreSQL executable directory(ex: '~/.pgenv/pgsql-11.3/bin')
--pgxsdir=<pgxsdir> Path to the PGXS directory(ex: ~/.pgenv/src/postgresql-11.3) --pgxsdir=<pgxsdir> Path to the PGXS directory(ex: ~/.pgenv/src/postgresql-11.3)
--test-with-columnar Enable automatically creating citus_columnar extension
""" """
import atexit import atexit
@ -26,8 +27,10 @@ import utils # noqa: E402
from utils import USER # noqa: E402 from utils import USER # noqa: E402
from config import ( # noqa: E402 from config import ( # noqa: E402
AFTER_PG_UPGRADE_SCHEDULE, AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE,
BEFORE_PG_UPGRADE_SCHEDULE, AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE,
BEFORE_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE,
BEFORE_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE,
PGUpgradeConfig, PGUpgradeConfig,
) )
@ -85,13 +88,21 @@ def main(config):
config.old_bindir, config.old_bindir,
config.pg_srcdir, config.pg_srcdir,
config.coordinator_port(), config.coordinator_port(),
BEFORE_PG_UPGRADE_SCHEDULE, (
BEFORE_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE
if config.test_with_columnar
else BEFORE_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE
),
) )
common.run_pg_regress( common.run_pg_regress(
config.old_bindir, config.old_bindir,
config.pg_srcdir, config.pg_srcdir,
config.coordinator_port(), config.coordinator_port(),
AFTER_PG_UPGRADE_SCHEDULE, (
AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE
if config.test_with_columnar
else AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE
),
) )
citus_prepare_pg_upgrade(config.old_bindir, config.node_name_to_ports.values()) citus_prepare_pg_upgrade(config.old_bindir, config.node_name_to_ports.values())
@ -127,7 +138,11 @@ def main(config):
config.new_bindir, config.new_bindir,
config.pg_srcdir, config.pg_srcdir,
config.coordinator_port(), config.coordinator_port(),
AFTER_PG_UPGRADE_SCHEDULE, (
AFTER_PG_UPGRADE_WITH_COLUMNAR_SCHEDULE
if config.test_with_columnar
else AFTER_PG_UPGRADE_WITHOUT_COLUMNAR_SCHEDULE
),
) )

View File

@ -1,8 +1,5 @@
test: multi_test_helpers multi_test_helpers_superuser columnar_test_helpers test: columnar_test_helpers
test: multi_cluster_management
test: multi_test_catalog_views
test: remove_coordinator_from_metadata
test: columnar_create test: columnar_create
test: columnar_load test: columnar_load
test: columnar_query test: columnar_query
@ -36,5 +33,3 @@ test: columnar_recursive
test: columnar_transactions test: columnar_transactions
test: columnar_matview test: columnar_matview
test: columnar_memory test: columnar_memory
test: columnar_citus_integration
test: check_mx

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA alter_distributed_table; CREATE SCHEMA alter_distributed_table;
SET search_path TO alter_distributed_table; SET search_path TO alter_distributed_table;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;
@ -1259,3 +1262,5 @@ RESET search_path;
DROP SCHEMA alter_distributed_table CASCADE; DROP SCHEMA alter_distributed_table CASCADE;
DROP SCHEMA schema_to_test_alter_dist_table CASCADE; DROP SCHEMA schema_to_test_alter_dist_table CASCADE;
DROP USER alter_dist_table_test_user; DROP USER alter_dist_table_test_user;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,6 +1,9 @@
-- --
-- ALTER_TABLE_SET_ACCESS_METHOD -- ALTER_TABLE_SET_ACCESS_METHOD
-- --
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE TABLE alter_am_pg_version_table (a INT); CREATE TABLE alter_am_pg_version_table (a INT);
SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar'); SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar');
NOTICE: creating a new table for public.alter_am_pg_version_table NOTICE: creating a new table for public.alter_am_pg_version_table
@ -810,3 +813,5 @@ select alter_table_set_access_method('view_test_view','columnar');
ERROR: you cannot alter access method of a view ERROR: you cannot alter access method of a view
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA alter_table_set_access_method CASCADE; DROP SCHEMA alter_table_set_access_method CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- create the udf is_citus_depended_object that is needed for the tests -- create the udf is_citus_depended_object that is needed for the tests
CREATE OR REPLACE FUNCTION CREATE OR REPLACE FUNCTION
pg_catalog.is_citus_depended_object(oid,oid) pg_catalog.is_citus_depended_object(oid,oid)
@ -193,3 +196,5 @@ drop cascades to table no_hide_pg_proc
drop cascades to table hide_pg_proc drop cascades to table hide_pg_proc
drop cascades to function citus_depended_proc(noderole) drop cascades to function citus_depended_proc(noderole)
drop cascades to function citus_independed_proc(integer) drop cascades to function citus_independed_proc(integer)
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,9 +1,20 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA "citus_split_non_blocking_schema_columnar_partitioned"; CREATE SCHEMA "citus_split_non_blocking_schema_columnar_partitioned";
SET search_path TO "citus_split_non_blocking_schema_columnar_partitioned"; SET search_path TO "citus_split_non_blocking_schema_columnar_partitioned";
SET citus.next_shard_id TO 8970000; SET citus.next_shard_id TO 8970000;
SET citus.next_placement_id TO 8770000; SET citus.next_placement_id TO 8770000;
SET citus.shard_count TO 1; SET citus.shard_count TO 1;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
-- remove coordinator if it is added to pg_dist_node
SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0
FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port;
?column?
---------------------------------------------------------------------
t
(1 row)
-- Disable Deferred drop auto cleanup to avoid flaky tests. -- Disable Deferred drop auto cleanup to avoid flaky tests.
ALTER SYSTEM SET citus.defer_shard_delete_interval TO -1; ALTER SYSTEM SET citus.defer_shard_delete_interval TO -1;
SELECT pg_reload_conf(); SELECT pg_reload_conf();
@ -841,3 +852,5 @@ drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.colo
drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.colocated_partitioned_table drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.colocated_partitioned_table
drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.reference_table drop cascades to table citus_split_non_blocking_schema_columnar_partitioned.reference_table
--END : Cleanup --END : Cleanup
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA "citus_split_test_schema_columnar_partitioned"; CREATE SCHEMA "citus_split_test_schema_columnar_partitioned";
SET search_path TO "citus_split_test_schema_columnar_partitioned"; SET search_path TO "citus_split_test_schema_columnar_partitioned";
SET citus.next_shard_id TO 8970000; SET citus.next_shard_id TO 8970000;
@ -841,3 +844,5 @@ drop cascades to table citus_split_test_schema_columnar_partitioned.colocated_di
drop cascades to table citus_split_test_schema_columnar_partitioned.colocated_partitioned_table drop cascades to table citus_split_test_schema_columnar_partitioned.colocated_partitioned_table
drop cascades to table citus_split_test_schema_columnar_partitioned.reference_table drop cascades to table citus_split_test_schema_columnar_partitioned.reference_table
--END : Cleanup --END : Cleanup
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -977,7 +977,7 @@ DETAIL: unparameterized; 1 clauses pushed down
(1 row) (1 row)
SET hash_mem_multiplier = 1.0; SET hash_mem_multiplier = 1.0;
SELECT public.explain_with_pg16_subplan_format($Q$ SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$
EXPLAIN (analyze on, costs off, timing off, summary off) EXPLAIN (analyze on, costs off, timing off, summary off)
SELECT sum(a) FROM pushdown_test where SELECT sum(a) FROM pushdown_test where
( (
@ -993,15 +993,15 @@ or
$Q$) as "QUERY PLAN"; $Q$) as "QUERY PLAN";
NOTICE: columnar planner: adding CustomScan path for pushdown_test NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 0 clauses pushed down DETAIL: unparameterized; 0 clauses pushed down
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var' NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel HINT: Var must only reference this rel, and Expr must not reference this rel
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
NOTICE: columnar planner: cannot push down clause: must not contain a subplan NOTICE: columnar planner: cannot push down clause: must not contain a subplan
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
NOTICE: columnar planner: adding CustomScan path for pushdown_test NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down DETAIL: unparameterized; 1 clauses pushed down
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
QUERY PLAN QUERY PLAN
--------------------------------------------------------------------- ---------------------------------------------------------------------
Aggregate (actual rows=1 loops=1) Aggregate (actual rows=1 loops=1)

View File

@ -977,7 +977,7 @@ DETAIL: unparameterized; 1 clauses pushed down
(1 row) (1 row)
SET hash_mem_multiplier = 1.0; SET hash_mem_multiplier = 1.0;
SELECT public.explain_with_pg16_subplan_format($Q$ SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$
EXPLAIN (analyze on, costs off, timing off, summary off) EXPLAIN (analyze on, costs off, timing off, summary off)
SELECT sum(a) FROM pushdown_test where SELECT sum(a) FROM pushdown_test where
( (
@ -993,15 +993,15 @@ or
$Q$) as "QUERY PLAN"; $Q$) as "QUERY PLAN";
NOTICE: columnar planner: adding CustomScan path for pushdown_test NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 0 clauses pushed down DETAIL: unparameterized; 0 clauses pushed down
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var' NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel HINT: Var must only reference this rel, and Expr must not reference this rel
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
NOTICE: columnar planner: cannot push down clause: must not contain a subplan NOTICE: columnar planner: cannot push down clause: must not contain a subplan
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
NOTICE: columnar planner: adding CustomScan path for pushdown_test NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down DETAIL: unparameterized; 1 clauses pushed down
CONTEXT: PL/pgSQL function explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_format(text) line XX at FOR over EXECUTE statement
QUERY PLAN QUERY PLAN
--------------------------------------------------------------------- ---------------------------------------------------------------------
Aggregate (actual rows=1 loops=1) Aggregate (actual rows=1 loops=1)

View File

@ -1,3 +1,14 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- remove coordinator if it is added to pg_dist_node
SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0
FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port;
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT success, result FROM run_command_on_all_nodes($cmd$ SELECT success, result FROM run_command_on_all_nodes($cmd$
ALTER SYSTEM SET columnar.compression TO 'none' ALTER SYSTEM SET columnar.compression TO 'none'
$cmd$); $cmd$);
@ -993,5 +1004,54 @@ SELECT COUNT(*) FROM weird_col_explain;
Columnar Projected Columns: <columnar optimized out all columns> Columnar Projected Columns: <columnar optimized out all columns>
(9 rows) (9 rows)
-- some tests with distributed & partitioned tables --
CREATE TABLE dist_part_table(
dist_col INT,
part_col TIMESTAMPTZ,
col1 TEXT
) PARTITION BY RANGE (part_col);
-- create an index before creating a columnar partition
CREATE INDEX dist_part_table_btree ON dist_part_table (col1);
-- columnar partition
CREATE TABLE p0 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-01-01') TO ('2020-02-01')
USING columnar;
SELECT create_distributed_table('dist_part_table', 'dist_col');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- columnar partition
CREATE TABLE p1 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-02-01') TO ('2020-03-01')
USING columnar;
-- row partition
CREATE TABLE p2 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1));
ERROR: INSERT has more expressions than target columns
-- insert into columnar partitions
INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2));
ERROR: INSERT has more expressions than target columns
INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3));
ERROR: INSERT has more expressions than target columns
-- create another index after creating a columnar partition
CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col);
-- verify that indexes are created on columnar partitions
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p0';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1';
?column?
---------------------------------------------------------------------
t
(1 row)
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA columnar_citus_integration CASCADE; DROP SCHEMA columnar_citus_integration CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -38,9 +38,6 @@ SELECT :columnar_stripes_before_drop - count(distinct storage_id) FROM columnar.
SELECT current_database() datname \gset SELECT current_database() datname \gset
CREATE DATABASE db_to_drop; CREATE DATABASE db_to_drop;
NOTICE: Citus partially supports CREATE DATABASE for distributed databases
DETAIL: Citus does not propagate CREATE DATABASE command to other nodes
HINT: You can manually create a database and its extensions on other nodes.
\c db_to_drop \c db_to_drop
CREATE EXTENSION citus_columnar; CREATE EXTENSION citus_columnar;
SELECT oid::text databaseoid FROM pg_database WHERE datname = current_database() \gset SELECT oid::text databaseoid FROM pg_database WHERE datname = current_database() \gset

View File

@ -395,53 +395,6 @@ SELECT b=980 FROM include_test WHERE a = 980;
t t
(1 row) (1 row)
-- some tests with distributed & partitioned tables --
CREATE TABLE dist_part_table(
dist_col INT,
part_col TIMESTAMPTZ,
col1 TEXT
) PARTITION BY RANGE (part_col);
-- create an index before creating a columnar partition
CREATE INDEX dist_part_table_btree ON dist_part_table (col1);
-- columnar partition
CREATE TABLE p0 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-01-01') TO ('2020-02-01')
USING columnar;
SELECT create_distributed_table('dist_part_table', 'dist_col');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- columnar partition
CREATE TABLE p1 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-02-01') TO ('2020-03-01')
USING columnar;
-- row partition
CREATE TABLE p2 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1));
ERROR: INSERT has more expressions than target columns
-- insert into columnar partitions
INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2));
ERROR: INSERT has more expressions than target columns
INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3));
ERROR: INSERT has more expressions than target columns
-- create another index after creating a columnar partition
CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col);
-- verify that indexes are created on columnar partitions
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p0';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1';
?column?
---------------------------------------------------------------------
t
(1 row)
-- unsupported index types -- -- unsupported index types --
-- gin -- -- gin --
CREATE TABLE testjsonb (j JSONB) USING columnar; CREATE TABLE testjsonb (j JSONB) USING columnar;

View File

@ -71,7 +71,9 @@ write_clear_outside_xact | t
INSERT INTO t INSERT INTO t
SELECT i, 'last batch', 0 /* no need to record memusage per row */ SELECT i, 'last batch', 0 /* no need to record memusage per row */
FROM generate_series(1, 50000) i; FROM generate_series(1, 50000) i;
SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.03 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth -- FIXME: Somehow, after we stopped creating citus extension for columnar tests,
-- we started observing %38 growth in TopMemoryContext here.
SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.40 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth
FROM columnar_test_helpers.columnar_store_memory_stats(); FROM columnar_test_helpers.columnar_store_memory_stats();
-[ RECORD 1 ]- -[ RECORD 1 ]-
top_growth | 1 top_growth | 1

View File

@ -54,36 +54,9 @@ SELECT count(*), sum(i), min(i), max(i) FROM parent;
(1 row) (1 row)
-- set older partitions as columnar -- set older partitions as columnar
SELECT alter_table_set_access_method('p0','columnar'); ALTER TABLE p0 SET ACCESS METHOD columnar;
NOTICE: creating a new table for public.p0 ALTER TABLE p1 SET ACCESS METHOD columnar;
NOTICE: moving the data of public.p0 ALTER TABLE p3 SET ACCESS METHOD columnar;
NOTICE: dropping the old public.p0
NOTICE: renaming the new table to public.p0
alter_table_set_access_method
---------------------------------------------------------------------
(1 row)
SELECT alter_table_set_access_method('p1','columnar');
NOTICE: creating a new table for public.p1
NOTICE: moving the data of public.p1
NOTICE: dropping the old public.p1
NOTICE: renaming the new table to public.p1
alter_table_set_access_method
---------------------------------------------------------------------
(1 row)
SELECT alter_table_set_access_method('p3','columnar');
NOTICE: creating a new table for public.p3
NOTICE: moving the data of public.p3
NOTICE: dropping the old public.p3
NOTICE: renaming the new table to public.p3
alter_table_set_access_method
---------------------------------------------------------------------
(1 row)
-- should also be parallel plan -- should also be parallel plan
EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent; EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent;
QUERY PLAN QUERY PLAN

View File

@ -126,3 +126,23 @@ BEGIN
PERFORM pg_sleep(0.001); PERFORM pg_sleep(0.001);
END LOOP; END LOOP;
END; $$ language plpgsql; END; $$ language plpgsql;
-- This function formats EXPLAIN output to conform to how pg <= 16 EXPLAIN
-- shows ANY <subquery> in an expression the pg version >= 17. When 17 is
-- the minimum supported pgversion this function can be retired. The commit
-- that changed how ANY <subquery> exrpressions appear in EXPLAIN is:
-- https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0398fcb
CREATE OR REPLACE FUNCTION explain_with_pg16_subplan_format(explain_command text, out query_plan text)
RETURNS SETOF TEXT AS $$
DECLARE
pgversion int = 0;
BEGIN
pgversion = substring(version(), '\d+')::int ;
FOR query_plan IN execute explain_command LOOP
IF pgversion >= 17 THEN
IF query_plan ~ 'SubPlan \d+\).col' THEN
query_plan = regexp_replace(query_plan, '\(ANY \(\w+ = \(SubPlan (\d+)\).col1\)\)', '(SubPlan \1)', 'g');
END IF;
END IF;
RETURN NEXT;
END LOOP;
END; $$ language plpgsql;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
create schema create_distributed_table_concurrently; create schema create_distributed_table_concurrently;
set search_path to create_distributed_table_concurrently; set search_path to create_distributed_table_concurrently;
set citus.shard_replication_factor to 1; set citus.shard_replication_factor to 1;
@ -313,3 +316,5 @@ select count(*) from test_columnar_2;
-- columnar tests -- -- columnar tests --
set client_min_messages to warning; set client_min_messages to warning;
drop schema create_distributed_table_concurrently cascade; drop schema create_distributed_table_concurrently cascade;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA drop_column_partitioned_table; CREATE SCHEMA drop_column_partitioned_table;
SET search_path TO drop_column_partitioned_table; SET search_path TO drop_column_partitioned_table;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
@ -394,3 +397,5 @@ ORDER BY 1,2;
\c - - - :master_port \c - - - :master_port
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA drop_column_partitioned_table CASCADE; DROP SCHEMA drop_column_partitioned_table CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -61,9 +61,3 @@ CREATE TABLE sensors_2004(
col_to_drop_4 date, measureid integer NOT NULL, eventdatetime date NOT NULL, measure_data jsonb NOT NULL); col_to_drop_4 date, measureid integer NOT NULL, eventdatetime date NOT NULL, measure_data jsonb NOT NULL);
ALTER TABLE sensors ATTACH PARTITION sensors_2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01'); ALTER TABLE sensors ATTACH PARTITION sensors_2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01');
ALTER TABLE sensors DROP COLUMN col_to_drop_4; ALTER TABLE sensors DROP COLUMN col_to_drop_4;
SELECT alter_table_set_access_method('sensors_2004', 'columnar');
alter_table_set_access_method
---------------------------------------------------------------------
(1 row)

View File

@ -0,0 +1,28 @@
-- When there are no relations using the columnar access method, we don't automatically create
-- "citus_columnar" extension together with "citus" extension.
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
citus_columnar_not_exists
---------------------------------------------------------------------
t
(1 row)
-- Likely, we should not have any columnar objects leftover from "old columnar", i.e., the
-- columnar access method that we had before Citus 11.1, around.
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
columnar_am_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
columnar_catalog_schemas_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
columnar_utilities_not_exists
---------------------------------------------------------------------
t
(1 row)

View File

@ -1,4 +1,7 @@
\c - - - :master_port \c - - - :master_port
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA single_node; CREATE SCHEMA single_node;
SET search_path TO single_node; SET search_path TO single_node;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA generated_identities; CREATE SCHEMA generated_identities;
SET search_path TO generated_identities; SET search_path TO generated_identities;
SET client_min_messages to ERROR; SET client_min_messages to ERROR;
@ -673,3 +676,5 @@ ORDER BY table_name, id;
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA generated_identities CASCADE; DROP SCHEMA generated_identities CASCADE;
DROP USER identity_test_user; DROP USER identity_test_user;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA insert_select_into_local_table; CREATE SCHEMA insert_select_into_local_table;
SET search_path TO insert_select_into_local_table; SET search_path TO insert_select_into_local_table;
SET citus.shard_count = 4; SET citus.shard_count = 4;
@ -1113,3 +1116,5 @@ ROLLBACK;
\set VERBOSITY terse \set VERBOSITY terse
DROP SCHEMA insert_select_into_local_table CASCADE; DROP SCHEMA insert_select_into_local_table CASCADE;
NOTICE: drop cascades to 13 other objects NOTICE: drop cascades to 13 other objects
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,6 +1,9 @@
-- --
-- ISSUE_5248 -- ISSUE_5248
-- --
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA issue_5248; CREATE SCHEMA issue_5248;
SET search_path TO issue_5248; SET search_path TO issue_5248;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;
@ -216,3 +219,5 @@ ERROR: cannot push down subquery on the target list
DETAIL: Subqueries in the SELECT part of the query can only be pushed down if they happen before aggregates and window functions DETAIL: Subqueries in the SELECT part of the query can only be pushed down if they happen before aggregates and window functions
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA issue_5248 CASCADE; DROP SCHEMA issue_5248 CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -6,6 +6,9 @@ NOTICE: schema "merge_schema" does not exist, skipping
--WHEN NOT MATCHED --WHEN NOT MATCHED
--WHEN MATCHED AND <condition> --WHEN MATCHED AND <condition>
--WHEN MATCHED --WHEN MATCHED
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA merge_schema; CREATE SCHEMA merge_schema;
SET search_path TO merge_schema; SET search_path TO merge_schema;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;
@ -4352,3 +4355,5 @@ drop cascades to table t1_4000190
drop cascades to table s1_4000191 drop cascades to table s1_4000191
drop cascades to table t1 drop cascades to table t1
and 7 other objects (see server log for list) and 7 other objects (see server log for list)
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -3,6 +3,9 @@
-- and compare the final results of the target tables in Postgres and Citus. -- and compare the final results of the target tables in Postgres and Citus.
-- The results should match. This process is repeated for various combinations -- The results should match. This process is repeated for various combinations
-- of MERGE SQL. -- of MERGE SQL.
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
DROP SCHEMA IF EXISTS merge_repartition1_schema CASCADE; DROP SCHEMA IF EXISTS merge_repartition1_schema CASCADE;
NOTICE: schema "merge_repartition1_schema" does not exist, skipping NOTICE: schema "merge_repartition1_schema" does not exist, skipping
CREATE SCHEMA merge_repartition1_schema; CREATE SCHEMA merge_repartition1_schema;
@ -1339,3 +1342,5 @@ drop cascades to function check_data(text,text,text,text)
drop cascades to function compare_data() drop cascades to function compare_data()
drop cascades to table citus_target drop cascades to table citus_target
drop cascades to table citus_source drop cascades to table citus_source
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -2,6 +2,9 @@
-- MULTI_DROP_EXTENSION -- MULTI_DROP_EXTENSION
-- --
-- Tests around dropping and recreating the extension -- Tests around dropping and recreating the extension
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
SET citus.next_shard_id TO 550000; SET citus.next_shard_id TO 550000;
CREATE TABLE testtableddl(somecol int, distributecol text NOT NULL); CREATE TABLE testtableddl(somecol int, distributecol text NOT NULL);
SELECT create_distributed_table('testtableddl', 'distributecol', 'append'); SELECT create_distributed_table('testtableddl', 'distributecol', 'append');
@ -168,3 +171,5 @@ SELECT * FROM testtableddl;
(0 rows) (0 rows)
DROP TABLE testtableddl; DROP TABLE testtableddl;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -120,7 +120,41 @@ ORDER BY 1, 2;
-- DROP EXTENSION pre-created by the regression suite -- DROP EXTENSION pre-created by the regression suite
DROP EXTENSION citus; DROP EXTENSION citus;
DROP EXTENSION citus_columnar; SET client_min_messages TO WARNING;
DROP EXTENSION IF EXISTS citus_columnar;
RESET client_min_messages;
CREATE EXTENSION citus;
-- When there are no relations using the columnar access method, we don't automatically create
-- "citus_columnar" extension together with "citus" extension anymore. And as this will always
-- be the case for a fresh "CREATE EXTENSION citus", we know that we should definitely not have
-- "citus_columnar" extension created.
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
citus_columnar_not_exists
---------------------------------------------------------------------
t
(1 row)
-- Likely, we should not have any columnar objects leftover from "old columnar", i.e., the
-- columnar access method that we had before Citus 11.1, around.
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
columnar_am_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
columnar_catalog_schemas_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
columnar_utilities_not_exists
---------------------------------------------------------------------
t
(1 row)
DROP EXTENSION citus;
\c \c
-- these tests switch between citus versions and call ddl's that require pg_dist_object to be created -- these tests switch between citus versions and call ddl's that require pg_dist_object to be created
SET citus.enable_metadata_sync TO 'false'; SET citus.enable_metadata_sync TO 'false';
@ -759,6 +793,99 @@ SELECT * FROM multi_extension.print_extension_changes();
| view public.citus_tables | view public.citus_tables
(2 rows) (2 rows)
-- Update Citus to 13.2-1 and make sure that we don't automatically create
-- citus_columnar extension as we don't have any relations created using columnar.
ALTER EXTENSION citus UPDATE TO '13.2-1';
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
citus_columnar_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
columnar_am_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
columnar_catalog_schemas_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
columnar_utilities_not_exists
---------------------------------------------------------------------
t
(1 row)
-- Unfortunately, our downgrade scripts seem to assume that citus_columnar exists.
-- Seems this has always been the case since the introduction of citus_columnar,
-- so we need to create citus_columnar before the downgrade.
CREATE EXTENSION citus_columnar;
ALTER EXTENSION citus UPDATE TO '11.1-1';
-- Update Citus to 13.2-1 and make sure that already having citus_columnar extension
-- doesn't cause any issues.
ALTER EXTENSION citus UPDATE TO '13.2-1';
SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists;
citus_columnar_exists
---------------------------------------------------------------------
t
(1 row)
SELECT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_exists;
columnar_am_exists
---------------------------------------------------------------------
t
(1 row)
SELECT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_exists;
columnar_catalog_schemas_exists
---------------------------------------------------------------------
t
(1 row)
SELECT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_exists;
columnar_utilities_exists
---------------------------------------------------------------------
t
(1 row)
ALTER EXTENSION citus UPDATE TO '11.1-1';
DROP EXTENSION citus_columnar;
-- Update Citus to 13.2-1 and make sure that NOT having citus_columnar extension
-- doesn't cause any issues.
ALTER EXTENSION citus UPDATE TO '13.2-1';
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
citus_columnar_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
columnar_am_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
columnar_catalog_schemas_not_exists
---------------------------------------------------------------------
t
(1 row)
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
columnar_utilities_not_exists
---------------------------------------------------------------------
t
(1 row)
-- Downgrade back to 10.0-4 for the rest of the tests.
--
-- same here - to downgrade Citus, first we need to create citus_columnar
CREATE EXTENSION citus_columnar;
ALTER EXTENSION citus UPDATE TO '10.0-4';
-- not print "HINT: " to hide current lib version -- not print "HINT: " to hide current lib version
\set VERBOSITY terse \set VERBOSITY terse
CREATE TABLE columnar_table(a INT, b INT) USING columnar; CREATE TABLE columnar_table(a INT, b INT) USING columnar;
@ -1010,7 +1137,7 @@ FROM
pg_dist_node_metadata; pg_dist_node_metadata;
partitioned_citus_table_exists_pre_11 | is_null partitioned_citus_table_exists_pre_11 | is_null
--------------------------------------------------------------------- ---------------------------------------------------------------------
| t f | f
(1 row) (1 row)
-- Test downgrade to 10.2-5 from 11.0-1 -- Test downgrade to 10.2-5 from 11.0-1
@ -1212,6 +1339,14 @@ SELECT * FROM multi_extension.print_extension_changes();
| view citus_locks | view citus_locks
(57 rows) (57 rows)
-- Make sure that citus_columnar is automatically created while updating Citus to 11.1-1
-- as we created columnar tables using the columnar access method before.
SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists;
citus_columnar_exists
---------------------------------------------------------------------
t
(1 row)
-- Test downgrade to 11.1-1 from 11.2-1 -- Test downgrade to 11.1-1 from 11.2-1
ALTER EXTENSION citus UPDATE TO '11.2-1'; ALTER EXTENSION citus UPDATE TO '11.2-1';
ALTER EXTENSION citus UPDATE TO '11.1-1'; ALTER EXTENSION citus UPDATE TO '11.1-1';
@ -1621,13 +1756,11 @@ NOTICE: version "9.1-1" of extension "citus" is already installed
ALTER EXTENSION citus UPDATE; ALTER EXTENSION citus UPDATE;
-- re-create in newest version -- re-create in newest version
DROP EXTENSION citus; DROP EXTENSION citus;
DROP EXTENSION citus_columnar;
\c \c
CREATE EXTENSION citus; CREATE EXTENSION citus;
-- test cache invalidation in workers -- test cache invalidation in workers
\c - - - :worker_1_port \c - - - :worker_1_port
DROP EXTENSION citus; DROP EXTENSION citus;
DROP EXTENSION citus_columnar;
SET citus.enable_version_checks TO 'false'; SET citus.enable_version_checks TO 'false';
SET columnar.enable_version_checks TO 'false'; SET columnar.enable_version_checks TO 'false';
CREATE EXTENSION citus VERSION '8.0-1'; CREATE EXTENSION citus VERSION '8.0-1';
@ -1999,8 +2132,7 @@ SELECT citus_add_local_table_to_metadata('test');
DROP TABLE test; DROP TABLE test;
-- Verify that we don't consider the schemas created by extensions as tenant schemas. -- Verify that we don't consider the schemas created by extensions as tenant schemas.
-- Easiest way of verifying this is to drop and re-create columnar extension. -- Easiest way of verifying this is to create columnar extension.
DROP EXTENSION citus_columnar;
SET citus.enable_schema_based_sharding TO ON; SET citus.enable_schema_based_sharding TO ON;
CREATE EXTENSION citus_columnar; CREATE EXTENSION citus_columnar;
SELECT COUNT(*)=0 FROM pg_dist_schema SELECT COUNT(*)=0 FROM pg_dist_schema

View File

@ -524,10 +524,11 @@ SELECT tablename, indexname FROM pg_indexes WHERE schemaname = 'fix_idx_names' O
-- index, only the new index should be fixed -- index, only the new index should be fixed
-- create only one shard & one partition so that the output easier to check -- create only one shard & one partition so that the output easier to check
SET citus.next_shard_id TO 915000; SET citus.next_shard_id TO 915000;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1370100;
SET citus.shard_count TO 1; SET citus.shard_count TO 1;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
CREATE TABLE parent_table (dist_col int, another_col int, partition_col timestamp, name text) PARTITION BY RANGE (partition_col); CREATE TABLE parent_table (dist_col int, another_col int, partition_col timestamp, name text) PARTITION BY RANGE (partition_col);
SELECT create_distributed_table('parent_table', 'dist_col'); SELECT create_distributed_table('parent_table', 'dist_col', colocate_with=>'none');
create_distributed_table create_distributed_table
--------------------------------------------------------------------- ---------------------------------------------------------------------
@ -634,40 +635,25 @@ ALTER INDEX p1_dist_col_idx3 RENAME TO p1_dist_col_idx3_renamed;
ALTER INDEX p1_pkey RENAME TO p1_pkey_renamed; ALTER INDEX p1_pkey RENAME TO p1_pkey_renamed;
ALTER INDEX p1_dist_col_partition_col_key RENAME TO p1_dist_col_partition_col_key_renamed; ALTER INDEX p1_dist_col_partition_col_key RENAME TO p1_dist_col_partition_col_key_renamed;
ALTER INDEX p1_dist_col_idx RENAME TO p1_dist_col_idx_renamed; ALTER INDEX p1_dist_col_idx RENAME TO p1_dist_col_idx_renamed;
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
SET search_path TO fix_idx_names, public;
SET columnar.compression TO 'zstd';
-- should be able to create a new partition that is columnar -- should be able to create a new partition that is columnar
SET citus.log_remote_commands TO ON; SET citus.log_remote_commands TO ON;
CREATE TABLE p2(dist_col int NOT NULL, another_col int, partition_col timestamp NOT NULL, name text) USING columnar; CREATE TABLE p2(dist_col int NOT NULL, another_col int, partition_col timestamp NOT NULL, name text) USING columnar;
ALTER TABLE parent_table ATTACH PARTITION p2 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'); ALTER TABLE parent_table ATTACH PARTITION p2 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx'); NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "x.y-z";
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "x.y-z";
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('extension', ARRAY['citus_columnar']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal.add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'CREATE TABLE fix_idx_names.p2 (dist_col integer NOT NULL, another_col integer, partition_col timestamp without time zone NOT NULL, name text) USING columnar') NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'CREATE TABLE fix_idx_names.p2 (dist_col integer NOT NULL, another_col integer, partition_col timestamp without time zone NOT NULL, name text) USING columnar')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing ALTER TABLE fix_idx_names.p2_915002 SET (columnar.chunk_group_row_limit = 10000, columnar.stripe_row_limit = 150000, columnar.compression_level = 3, columnar.compression = 'zstd'); NOTICE: issuing ALTER TABLE fix_idx_names.p2_915002 SET (columnar.chunk_group_row_limit = 10000, columnar.stripe_row_limit = 150000, columnar.compression_level = 3, columnar.compression = 'zstd');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'ALTER TABLE fix_idx_names.p2 OWNER TO postgres') NOTICE: issuing SELECT worker_apply_shard_ddl_command (915002, 'fix_idx_names', 'ALTER TABLE fix_idx_names.p2 OWNER TO postgres')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
@ -692,9 +678,9 @@ NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off' NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370100, 's')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370001, 's') NOTICE: issuing SELECT citus_internal.add_partition_metadata ('fix_idx_names.p2'::regclass, 'h', 'dist_col', 1370100, 's')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data; NOTICE: issuing WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('fix_idx_names.p2'::regclass, 915002, 't'::"char", '-2147483648', '2147483647')) SELECT citus_internal.add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -742,3 +728,5 @@ ALTER TABLE parent_table DROP CONSTRAINT pkey_cst CASCADE;
ALTER TABLE parent_table DROP CONSTRAINT unique_cst CASCADE; ALTER TABLE parent_table DROP CONSTRAINT unique_cst CASCADE;
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA fix_idx_names CASCADE; DROP SCHEMA fix_idx_names CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -3,6 +3,9 @@
-- --
-- Test user permissions. -- Test user permissions.
-- --
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
SET citus.next_shard_id TO 1420000; SET citus.next_shard_id TO 1420000;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
CREATE TABLE test (id integer, val integer); CREATE TABLE test (id integer, val integer);
@ -549,3 +552,5 @@ DROP USER full_access;
DROP USER read_access; DROP USER read_access;
DROP USER no_access; DROP USER no_access;
DROP ROLE some_role; DROP ROLE some_role;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- --
-- MULTI_TENANT_ISOLATION -- MULTI_TENANT_ISOLATION
-- --
@ -1228,3 +1231,5 @@ SELECT count(*) FROM pg_catalog.pg_dist_partition WHERE colocationid > 0;
TRUNCATE TABLE pg_catalog.pg_dist_colocation; TRUNCATE TABLE pg_catalog.pg_dist_colocation;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1; ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1;
ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placement_id; ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placement_id;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- --
-- MULTI_TENANT_ISOLATION -- MULTI_TENANT_ISOLATION
-- --
@ -1282,3 +1285,5 @@ SELECT public.wait_for_resource_cleanup();
(1 row) (1 row)
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA "Mx Regular User"; CREATE SCHEMA "Mx Regular User";
SET search_path TO "Mx Regular User"; SET search_path TO "Mx Regular User";
-- add coordinator in idempotent way -- add coordinator in idempotent way
@ -689,3 +692,5 @@ drop cascades to table "Mx Regular User".on_delete_fkey_table
drop cascades to table "Mx Regular User".local_table_in_the_metadata drop cascades to table "Mx Regular User".local_table_in_the_metadata
drop cascades to type "Mx Regular User".test_type drop cascades to type "Mx Regular User".test_type
drop cascades to table "Mx Regular User".composite_key drop cascades to table "Mx Regular User".composite_key
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,6 +1,9 @@
SET citus.shard_replication_factor to 1; SET citus.shard_replication_factor to 1;
SET citus.next_shard_id TO 60000; SET citus.next_shard_id TO 60000;
SET citus.next_placement_id TO 60000; SET citus.next_placement_id TO 60000;
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
create schema test_pg12; create schema test_pg12;
set search_path to test_pg12; set search_path to test_pg12;
-- Test generated columns -- Test generated columns
@ -651,3 +654,5 @@ drop schema test_pg12 cascade;
NOTICE: drop cascades to 16 other objects NOTICE: drop cascades to 16 other objects
\set VERBOSITY default \set VERBOSITY default
SET citus.shard_replication_factor to 2; SET citus.shard_replication_factor to 2;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
create schema pg14; create schema pg14;
set search_path to pg14; set search_path to pg14;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
@ -1468,3 +1471,5 @@ drop extension postgres_fdw cascade;
drop schema pg14 cascade; drop schema pg14 cascade;
DROP ROLE role_1, r1; DROP ROLE role_1, r1;
reset client_min_messages; reset client_min_messages;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -4,6 +4,9 @@
SHOW server_version \gset SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17 SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17
\gset \gset
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- PG17 has the capabilty to pull up a correlated ANY subquery to a join if -- PG17 has the capabilty to pull up a correlated ANY subquery to a join if
-- the subquery only refers to its immediate parent query. Previously, the -- the subquery only refers to its immediate parent query. Previously, the
-- subquery needed to be implemented as a SubPlan node, typically as a -- subquery needed to be implemented as a SubPlan node, typically as a
@ -1787,45 +1790,49 @@ SELECT create_distributed_table('test_partitioned_alter', 'id');
(1 row) (1 row)
-- Step 4: Verify that the table and partitions are created and distributed correctly on the coordinator -- Step 4: Verify that the table and partitions are created and distributed correctly on the coordinator
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partitioned_alter'; WHERE relname = 'test_partitioned_alter';
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partitioned_alter | 2 test_partitioned_alter | heap
(1 row) (1 row)
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname IN ('test_partition_1', 'test_partition_2') WHERE relname IN ('test_partition_1', 'test_partition_2')
ORDER BY relname; ORDER BY relname;
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partition_1 | 2 test_partition_1 | heap
test_partition_2 | 2 test_partition_2 | heap
(2 rows) (2 rows)
-- Step 4 (Repeat on a Worker Node): Verify that the table and partitions are created correctly -- Step 4 (Repeat on a Worker Node): Verify that the table and partitions are created correctly
\c - - - :worker_1_port \c - - - :worker_1_port
SET search_path TO pg17; SET search_path TO pg17;
-- Verify the table's access method on the worker node -- Verify the table's access method on the worker node
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partitioned_alter'; WHERE relname = 'test_partitioned_alter';
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partitioned_alter | 2 test_partitioned_alter | heap
(1 row) (1 row)
-- Verify the partitions' access methods on the worker node -- Verify the partitions' access methods on the worker node
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname IN ('test_partition_1', 'test_partition_2') WHERE relname IN ('test_partition_1', 'test_partition_2')
ORDER BY relname; ORDER BY relname;
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partition_1 | 2 test_partition_1 | heap
test_partition_2 | 2 test_partition_2 | heap
(2 rows) (2 rows)
\c - - - :master_port \c - - - :master_port
@ -1838,46 +1845,50 @@ ALTER TABLE test_partitioned_alter SET ACCESS METHOD columnar;
-- option for partitioned tables. Existing partitions are not modified. -- option for partitioned tables. Existing partitions are not modified.
-- Reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a2290429eac3217b0c7b0b485db9c2bcc72 -- Reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a2290429eac3217b0c7b0b485db9c2bcc72
-- Verify the parent table's access method -- Verify the parent table's access method
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partitioned_alter'; WHERE relname = 'test_partitioned_alter';
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partitioned_alter | 16413 test_partitioned_alter | columnar
(1 row) (1 row)
-- Verify the partitions' access methods -- Verify the partitions' access methods
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname IN ('test_partition_1', 'test_partition_2') WHERE relname IN ('test_partition_1', 'test_partition_2')
ORDER BY relname; ORDER BY relname;
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partition_1 | 2 test_partition_1 | heap
test_partition_2 | 2 test_partition_2 | heap
(2 rows) (2 rows)
-- Step 6: Verify the change is applied to future partitions -- Step 6: Verify the change is applied to future partitions
CREATE TABLE test_partition_3 PARTITION OF test_partitioned_alter CREATE TABLE test_partition_3 PARTITION OF test_partitioned_alter
FOR VALUES FROM (200) TO (300); FOR VALUES FROM (200) TO (300);
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partition_3'; WHERE relname = 'test_partition_3';
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partition_3 | 16413 test_partition_3 | columnar
(1 row) (1 row)
-- Step 6 (Repeat on a Worker Node): Verify that the new partition is created correctly -- Step 6 (Repeat on a Worker Node): Verify that the new partition is created correctly
\c - - - :worker_1_port \c - - - :worker_1_port
SET search_path TO pg17; SET search_path TO pg17;
-- Verify the new partition's access method on the worker node -- Verify the new partition's access method on the worker node
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partition_3'; WHERE relname = 'test_partition_3';
relname | relam relname | amname
--------------------------------------------------------------------- ---------------------------------------------------------------------
test_partition_3 | 16413 test_partition_3 | columnar
(1 row) (1 row)
\c - - - :master_port \c - - - :master_port
@ -3169,3 +3180,5 @@ DROP SCHEMA pg17 CASCADE;
RESET client_min_messages; RESET client_min_messages;
DROP ROLE regress_maintain; DROP ROLE regress_maintain;
DROP ROLE regress_no_maintain; DROP ROLE regress_no_maintain;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -4,6 +4,9 @@
SHOW server_version \gset SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17 SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17
\gset \gset
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- PG17 has the capabilty to pull up a correlated ANY subquery to a join if -- PG17 has the capabilty to pull up a correlated ANY subquery to a join if
-- the subquery only refers to its immediate parent query. Previously, the -- the subquery only refers to its immediate parent query. Previously, the
-- subquery needed to be implemented as a SubPlan node, typically as a -- subquery needed to be implemented as a SubPlan node, typically as a

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE TEMPORARY TABLE output (line text); CREATE TEMPORARY TABLE output (line text);
CREATE SCHEMA dumper; CREATE SCHEMA dumper;
SET search_path TO 'dumper'; SET search_path TO 'dumper';
@ -173,3 +176,5 @@ SELECT tablename FROM pg_tables WHERE schemaname = 'dumper' ORDER BY 1;
DROP SCHEMA dumper CASCADE; DROP SCHEMA dumper CASCADE;
NOTICE: drop cascades to table data NOTICE: drop cascades to table data
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA recurring_outer_join; CREATE SCHEMA recurring_outer_join;
SET search_path TO recurring_outer_join; SET search_path TO recurring_outer_join;
SET citus.next_shard_id TO 1520000; SET citus.next_shard_id TO 1520000;
@ -2003,3 +2006,5 @@ DEBUG: performing repartitioned INSERT ... SELECT
ROLLBACK; ROLLBACK;
SET client_min_messages TO ERROR; SET client_min_messages TO ERROR;
DROP SCHEMA recurring_outer_join CASCADE; DROP SCHEMA recurring_outer_join CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -0,0 +1,2 @@
ALTER SCHEMA public RENAME TO citus_schema;
CREATE SCHEMA public;

View File

@ -1,3 +1,6 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- Test if relying on topological sort of the objects, not their names, works -- Test if relying on topological sort of the objects, not their names, works
-- fine when re-creating objects during pg_upgrade. -- fine when re-creating objects during pg_upgrade.
DO DO

View File

@ -5,6 +5,7 @@
-- Here we create a table with only the basic extension types -- Here we create a table with only the basic extension types
-- in order to avoid printing extra ones for now -- in order to avoid printing extra ones for now
-- This can be removed when we drop PG16 support. -- This can be removed when we drop PG16 support.
SET search_path TO citus_schema;
CREATE TABLE extension_basic_types (description text); CREATE TABLE extension_basic_types (description text);
INSERT INTO extension_basic_types VALUES ('type citus.distribution_type'), INSERT INTO extension_basic_types VALUES ('type citus.distribution_type'),
('type citus.shard_transfer_mode'), ('type citus.shard_transfer_mode'),
@ -381,8 +382,7 @@ ORDER BY 1;
view citus_lock_waits view citus_lock_waits
view citus_locks view citus_locks
view citus_nodes view citus_nodes
view citus_schema.citus_schemas view citus_schemas
view citus_schema.citus_tables
view citus_shard_indexes_on_worker view citus_shard_indexes_on_worker
view citus_shards view citus_shards
view citus_shards_on_worker view citus_shards_on_worker
@ -391,6 +391,7 @@ ORDER BY 1;
view citus_stat_statements view citus_stat_statements
view citus_stat_tenants view citus_stat_tenants
view citus_stat_tenants_local view citus_stat_tenants_local
view citus_tables
view pg_dist_shard_placement view pg_dist_shard_placement
view time_partitions view time_partitions
(362 rows) (362 rows)

View File

@ -0,0 +1 @@
test: columnar_test_helpers

View File

@ -1,2 +1,2 @@
test: minimal_cluster_management test: minimal_cluster_management
test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw columnar_test_helpers multi_test_catalog_views tablespace test: multi_test_helpers multi_test_helpers_superuser multi_create_fdw multi_test_catalog_views tablespace

View File

@ -51,6 +51,8 @@ test: citus_schema_distribute_undistribute
# because it checks the value of stats_reset column before calling the function. # because it checks the value of stats_reset column before calling the function.
test: stat_counters test: stat_counters
test: columnar_citus_integration
test: multi_test_catalog_views test: multi_test_catalog_views
test: multi_table_ddl test: multi_table_ddl
test: multi_alias test: multi_alias

View File

@ -1,6 +1,6 @@
# Split Shard tests. # Split Shard tests.
# Include tests from 'minimal_schedule' for setup. # Include tests from 'minimal_schedule' for setup.
test: multi_test_helpers multi_test_helpers_superuser columnar_test_helpers test: multi_test_helpers multi_test_helpers_superuser
test: multi_cluster_management test: multi_cluster_management
test: remove_coordinator_from_metadata test: remove_coordinator_from_metadata
test: multi_test_catalog_views test: multi_test_catalog_views

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA alter_distributed_table; CREATE SCHEMA alter_distributed_table;
SET search_path TO alter_distributed_table; SET search_path TO alter_distributed_table;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;
@ -482,3 +486,6 @@ RESET search_path;
DROP SCHEMA alter_distributed_table CASCADE; DROP SCHEMA alter_distributed_table CASCADE;
DROP SCHEMA schema_to_test_alter_dist_table CASCADE; DROP SCHEMA schema_to_test_alter_dist_table CASCADE;
DROP USER alter_dist_table_test_user; DROP USER alter_dist_table_test_user;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -2,6 +2,10 @@
-- ALTER_TABLE_SET_ACCESS_METHOD -- ALTER_TABLE_SET_ACCESS_METHOD
-- --
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE TABLE alter_am_pg_version_table (a INT); CREATE TABLE alter_am_pg_version_table (a INT);
SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar'); SELECT alter_table_set_access_method('alter_am_pg_version_table', 'columnar');
DROP TABLE alter_am_pg_version_table; DROP TABLE alter_am_pg_version_table;
@ -288,3 +292,6 @@ select alter_table_set_access_method('view_test_view','columnar');
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA alter_table_set_access_method CASCADE; DROP SCHEMA alter_table_set_access_method CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- create the udf is_citus_depended_object that is needed for the tests -- create the udf is_citus_depended_object that is needed for the tests
CREATE OR REPLACE FUNCTION CREATE OR REPLACE FUNCTION
pg_catalog.is_citus_depended_object(oid,oid) pg_catalog.is_citus_depended_object(oid,oid)
@ -149,3 +153,6 @@ FROM (VALUES ('master_add_node'), ('format'),
-- drop the namespace with all its objects -- drop the namespace with all its objects
DROP SCHEMA citus_dependend_object CASCADE; DROP SCHEMA citus_dependend_object CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA "citus_split_non_blocking_schema_columnar_partitioned"; CREATE SCHEMA "citus_split_non_blocking_schema_columnar_partitioned";
SET search_path TO "citus_split_non_blocking_schema_columnar_partitioned"; SET search_path TO "citus_split_non_blocking_schema_columnar_partitioned";
SET citus.next_shard_id TO 8970000; SET citus.next_shard_id TO 8970000;
@ -5,6 +9,10 @@ SET citus.next_placement_id TO 8770000;
SET citus.shard_count TO 1; SET citus.shard_count TO 1;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
-- remove coordinator if it is added to pg_dist_node
SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0
FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port;
-- Disable Deferred drop auto cleanup to avoid flaky tests. -- Disable Deferred drop auto cleanup to avoid flaky tests.
ALTER SYSTEM SET citus.defer_shard_delete_interval TO -1; ALTER SYSTEM SET citus.defer_shard_delete_interval TO -1;
SELECT pg_reload_conf(); SELECT pg_reload_conf();
@ -306,3 +314,6 @@ SELECT public.wait_for_resource_cleanup();
SELECT pg_reload_conf(); SELECT pg_reload_conf();
DROP SCHEMA "citus_split_non_blocking_schema_columnar_partitioned" CASCADE; DROP SCHEMA "citus_split_non_blocking_schema_columnar_partitioned" CASCADE;
--END : Cleanup --END : Cleanup
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA "citus_split_test_schema_columnar_partitioned"; CREATE SCHEMA "citus_split_test_schema_columnar_partitioned";
SET search_path TO "citus_split_test_schema_columnar_partitioned"; SET search_path TO "citus_split_test_schema_columnar_partitioned";
SET citus.next_shard_id TO 8970000; SET citus.next_shard_id TO 8970000;
@ -306,3 +310,6 @@ SELECT public.wait_for_resource_cleanup();
SELECT pg_reload_conf(); SELECT pg_reload_conf();
DROP SCHEMA "citus_split_test_schema_columnar_partitioned" CASCADE; DROP SCHEMA "citus_split_test_schema_columnar_partitioned" CASCADE;
--END : Cleanup --END : Cleanup
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -415,7 +415,7 @@ SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 2000
SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 200000-1010); SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 200000-1010);
SET hash_mem_multiplier = 1.0; SET hash_mem_multiplier = 1.0;
SELECT public.explain_with_pg16_subplan_format($Q$ SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$
EXPLAIN (analyze on, costs off, timing off, summary off) EXPLAIN (analyze on, costs off, timing off, summary off)
SELECT sum(a) FROM pushdown_test where SELECT sum(a) FROM pushdown_test where
( (

View File

@ -1,4 +1,12 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- remove coordinator if it is added to pg_dist_node
SELECT COUNT(master_remove_node(nodename, nodeport)) >= 0
FROM pg_dist_node WHERE nodename='localhost' AND nodeport=:master_port;
SELECT success, result FROM run_command_on_all_nodes($cmd$ SELECT success, result FROM run_command_on_all_nodes($cmd$
ALTER SYSTEM SET columnar.compression TO 'none' ALTER SYSTEM SET columnar.compression TO 'none'
$cmd$); $cmd$);
@ -440,5 +448,48 @@ WHERE "bbbbbbbbbbbbbbbbbbbbbbbbb\!bbbb'bbbbbbbbbbbbbbbbbbbbb''bbbbbbbb" * 2 >
EXPLAIN (COSTS OFF, SUMMARY OFF) EXPLAIN (COSTS OFF, SUMMARY OFF)
SELECT COUNT(*) FROM weird_col_explain; SELECT COUNT(*) FROM weird_col_explain;
-- some tests with distributed & partitioned tables --
CREATE TABLE dist_part_table(
dist_col INT,
part_col TIMESTAMPTZ,
col1 TEXT
) PARTITION BY RANGE (part_col);
-- create an index before creating a columnar partition
CREATE INDEX dist_part_table_btree ON dist_part_table (col1);
-- columnar partition
CREATE TABLE p0 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-01-01') TO ('2020-02-01')
USING columnar;
SELECT create_distributed_table('dist_part_table', 'dist_col');
-- columnar partition
CREATE TABLE p1 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-02-01') TO ('2020-03-01')
USING columnar;
-- row partition
CREATE TABLE p2 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1));
-- insert into columnar partitions
INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2));
INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3));
-- create another index after creating a columnar partition
CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col);
-- verify that indexes are created on columnar partitions
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p0';
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1';
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA columnar_citus_integration CASCADE; DROP SCHEMA columnar_citus_integration CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -265,46 +265,6 @@ ROLLBACK;
-- make sure that we read the correct value for "b" when doing index only scan -- make sure that we read the correct value for "b" when doing index only scan
SELECT b=980 FROM include_test WHERE a = 980; SELECT b=980 FROM include_test WHERE a = 980;
-- some tests with distributed & partitioned tables --
CREATE TABLE dist_part_table(
dist_col INT,
part_col TIMESTAMPTZ,
col1 TEXT
) PARTITION BY RANGE (part_col);
-- create an index before creating a columnar partition
CREATE INDEX dist_part_table_btree ON dist_part_table (col1);
-- columnar partition
CREATE TABLE p0 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-01-01') TO ('2020-02-01')
USING columnar;
SELECT create_distributed_table('dist_part_table', 'dist_col');
-- columnar partition
CREATE TABLE p1 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-02-01') TO ('2020-03-01')
USING columnar;
-- row partition
CREATE TABLE p2 PARTITION OF dist_part_table
FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
INSERT INTO dist_part_table VALUES (1, '2020-03-15', 'str1', POINT(1, 1));
-- insert into columnar partitions
INSERT INTO dist_part_table VALUES (1, '2020-01-15', 'str2', POINT(2, 2));
INSERT INTO dist_part_table VALUES (1, '2020-02-15', 'str3', POINT(3, 3));
-- create another index after creating a columnar partition
CREATE UNIQUE INDEX dist_part_table_unique ON dist_part_table (dist_col, part_col);
-- verify that indexes are created on columnar partitions
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p0';
SELECT COUNT(*)=2 FROM pg_indexes WHERE tablename = 'p1';
-- unsupported index types -- -- unsupported index types --
-- gin -- -- gin --

View File

@ -73,7 +73,9 @@ INSERT INTO t
SELECT i, 'last batch', 0 /* no need to record memusage per row */ SELECT i, 'last batch', 0 /* no need to record memusage per row */
FROM generate_series(1, 50000) i; FROM generate_series(1, 50000) i;
SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.03 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth -- FIXME: Somehow, after we stopped creating citus extension for columnar tests,
-- we started observing %38 growth in TopMemoryContext here.
SELECT CASE WHEN 1.0 * TopMemoryContext / :top_post BETWEEN 0.98 AND 1.40 THEN 1 ELSE 1.0 * TopMemoryContext / :top_post END AS top_growth
FROM columnar_test_helpers.columnar_store_memory_stats(); FROM columnar_test_helpers.columnar_store_memory_stats();
-- before this change, max mem usage while executing inserts was 28MB and -- before this change, max mem usage while executing inserts was 28MB and

View File

@ -43,9 +43,9 @@ EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent;
SELECT count(*), sum(i), min(i), max(i) FROM parent; SELECT count(*), sum(i), min(i), max(i) FROM parent;
-- set older partitions as columnar -- set older partitions as columnar
SELECT alter_table_set_access_method('p0','columnar'); ALTER TABLE p0 SET ACCESS METHOD columnar;
SELECT alter_table_set_access_method('p1','columnar'); ALTER TABLE p1 SET ACCESS METHOD columnar;
SELECT alter_table_set_access_method('p3','columnar'); ALTER TABLE p3 SET ACCESS METHOD columnar;
-- should also be parallel plan -- should also be parallel plan
EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent; EXPLAIN (costs off) SELECT count(*), sum(i), min(i), max(i) FROM parent;

View File

@ -137,3 +137,24 @@ BEGIN
PERFORM pg_sleep(0.001); PERFORM pg_sleep(0.001);
END LOOP; END LOOP;
END; $$ language plpgsql; END; $$ language plpgsql;
-- This function formats EXPLAIN output to conform to how pg <= 16 EXPLAIN
-- shows ANY <subquery> in an expression the pg version >= 17. When 17 is
-- the minimum supported pgversion this function can be retired. The commit
-- that changed how ANY <subquery> exrpressions appear in EXPLAIN is:
-- https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0398fcb
CREATE OR REPLACE FUNCTION explain_with_pg16_subplan_format(explain_command text, out query_plan text)
RETURNS SETOF TEXT AS $$
DECLARE
pgversion int = 0;
BEGIN
pgversion = substring(version(), '\d+')::int ;
FOR query_plan IN execute explain_command LOOP
IF pgversion >= 17 THEN
IF query_plan ~ 'SubPlan \d+\).col' THEN
query_plan = regexp_replace(query_plan, '\(ANY \(\w+ = \(SubPlan (\d+)\).col1\)\)', '(SubPlan \1)', 'g');
END IF;
END IF;
RETURN NEXT;
END LOOP;
END; $$ language plpgsql;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
create schema create_distributed_table_concurrently; create schema create_distributed_table_concurrently;
set search_path to create_distributed_table_concurrently; set search_path to create_distributed_table_concurrently;
set citus.shard_replication_factor to 1; set citus.shard_replication_factor to 1;
@ -154,3 +158,6 @@ select count(*) from test_columnar_2;
set client_min_messages to warning; set client_min_messages to warning;
drop schema create_distributed_table_concurrently cascade; drop schema create_distributed_table_concurrently cascade;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA drop_column_partitioned_table; CREATE SCHEMA drop_column_partitioned_table;
SET search_path TO drop_column_partitioned_table; SET search_path TO drop_column_partitioned_table;
@ -209,3 +213,6 @@ ORDER BY 1,2;
\c - - - :master_port \c - - - :master_port
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA drop_column_partitioned_table CASCADE; DROP SCHEMA drop_column_partitioned_table CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -59,4 +59,3 @@ col_to_drop_4 date, measureid integer NOT NULL, eventdatetime date NOT NULL, mea
ALTER TABLE sensors ATTACH PARTITION sensors_2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01'); ALTER TABLE sensors ATTACH PARTITION sensors_2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01');
ALTER TABLE sensors DROP COLUMN col_to_drop_4; ALTER TABLE sensors DROP COLUMN col_to_drop_4;
SELECT alter_table_set_access_method('sensors_2004', 'columnar');

View File

@ -0,0 +1,9 @@
-- When there are no relations using the columnar access method, we don't automatically create
-- "citus_columnar" extension together with "citus" extension.
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
-- Likely, we should not have any columnar objects leftover from "old columnar", i.e., the
-- columnar access method that we had before Citus 11.1, around.
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;

View File

@ -1,4 +1,8 @@
\c - - - :master_port \c - - - :master_port
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA single_node; CREATE SCHEMA single_node;
SET search_path TO single_node; SET search_path TO single_node;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA generated_identities; CREATE SCHEMA generated_identities;
SET search_path TO generated_identities; SET search_path TO generated_identities;
SET client_min_messages to ERROR; SET client_min_messages to ERROR;
@ -375,3 +379,6 @@ ORDER BY table_name, id;
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA generated_identities CASCADE; DROP SCHEMA generated_identities CASCADE;
DROP USER identity_test_user; DROP USER identity_test_user;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA insert_select_into_local_table; CREATE SCHEMA insert_select_into_local_table;
SET search_path TO insert_select_into_local_table; SET search_path TO insert_select_into_local_table;
@ -598,3 +602,6 @@ ROLLBACK;
\set VERBOSITY terse \set VERBOSITY terse
DROP SCHEMA insert_select_into_local_table CASCADE; DROP SCHEMA insert_select_into_local_table CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -2,6 +2,10 @@
-- ISSUE_5248 -- ISSUE_5248
-- --
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA issue_5248; CREATE SCHEMA issue_5248;
SET search_path TO issue_5248; SET search_path TO issue_5248;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;
@ -197,3 +201,6 @@ WHERE pg_catalog.pg_backup_stop() > cast(NULL AS record) limit 100;
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA issue_5248 CASCADE; DROP SCHEMA issue_5248 CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -6,6 +6,10 @@ DROP SCHEMA IF EXISTS merge_schema CASCADE;
--WHEN MATCHED AND <condition> --WHEN MATCHED AND <condition>
--WHEN MATCHED --WHEN MATCHED
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA merge_schema; CREATE SCHEMA merge_schema;
SET search_path TO merge_schema; SET search_path TO merge_schema;
SET citus.shard_count TO 4; SET citus.shard_count TO 4;
@ -2610,3 +2614,6 @@ RESET client_min_messages;
DROP SERVER foreign_server CASCADE; DROP SERVER foreign_server CASCADE;
DROP FUNCTION merge_when_and_write(); DROP FUNCTION merge_when_and_write();
DROP SCHEMA merge_schema CASCADE; DROP SCHEMA merge_schema CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -4,6 +4,10 @@
-- The results should match. This process is repeated for various combinations -- The results should match. This process is repeated for various combinations
-- of MERGE SQL. -- of MERGE SQL.
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
DROP SCHEMA IF EXISTS merge_repartition1_schema CASCADE; DROP SCHEMA IF EXISTS merge_repartition1_schema CASCADE;
CREATE SCHEMA merge_repartition1_schema; CREATE SCHEMA merge_repartition1_schema;
SET search_path TO merge_repartition1_schema; SET search_path TO merge_repartition1_schema;
@ -570,3 +574,6 @@ WHEN NOT MATCHED THEN
SELECT compare_data(); SELECT compare_data();
DROP SCHEMA merge_repartition1_schema CASCADE; DROP SCHEMA merge_repartition1_schema CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -3,6 +3,10 @@
-- --
-- Tests around dropping and recreating the extension -- Tests around dropping and recreating the extension
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
SET citus.next_shard_id TO 550000; SET citus.next_shard_id TO 550000;
@ -143,3 +147,6 @@ SELECT create_distributed_table('testtableddl', 'distributecol', 'append');
SELECT 1 FROM master_create_empty_shard('testtableddl'); SELECT 1 FROM master_create_empty_shard('testtableddl');
SELECT * FROM testtableddl; SELECT * FROM testtableddl;
DROP TABLE testtableddl; DROP TABLE testtableddl;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -120,7 +120,26 @@ ORDER BY 1, 2;
-- DROP EXTENSION pre-created by the regression suite -- DROP EXTENSION pre-created by the regression suite
DROP EXTENSION citus; DROP EXTENSION citus;
DROP EXTENSION citus_columnar;
SET client_min_messages TO WARNING;
DROP EXTENSION IF EXISTS citus_columnar;
RESET client_min_messages;
CREATE EXTENSION citus;
-- When there are no relations using the columnar access method, we don't automatically create
-- "citus_columnar" extension together with "citus" extension anymore. And as this will always
-- be the case for a fresh "CREATE EXTENSION citus", we know that we should definitely not have
-- "citus_columnar" extension created.
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
-- Likely, we should not have any columnar objects leftover from "old columnar", i.e., the
-- columnar access method that we had before Citus 11.1, around.
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
DROP EXTENSION citus;
\c \c
-- these tests switch between citus versions and call ddl's that require pg_dist_object to be created -- these tests switch between citus versions and call ddl's that require pg_dist_object to be created
@ -327,6 +346,52 @@ ALTER EXTENSION citus UPDATE TO '9.5-1';
ALTER EXTENSION citus UPDATE TO '10.0-4'; ALTER EXTENSION citus UPDATE TO '10.0-4';
SELECT * FROM multi_extension.print_extension_changes(); SELECT * FROM multi_extension.print_extension_changes();
-- Update Citus to 13.2-1 and make sure that we don't automatically create
-- citus_columnar extension as we don't have any relations created using columnar.
ALTER EXTENSION citus UPDATE TO '13.2-1';
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
-- Unfortunately, our downgrade scripts seem to assume that citus_columnar exists.
-- Seems this has always been the case since the introduction of citus_columnar,
-- so we need to create citus_columnar before the downgrade.
CREATE EXTENSION citus_columnar;
ALTER EXTENSION citus UPDATE TO '11.1-1';
-- Update Citus to 13.2-1 and make sure that already having citus_columnar extension
-- doesn't cause any issues.
ALTER EXTENSION citus UPDATE TO '13.2-1';
SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists;
SELECT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_exists;
SELECT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_exists;
SELECT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_exists;
ALTER EXTENSION citus UPDATE TO '11.1-1';
DROP EXTENSION citus_columnar;
-- Update Citus to 13.2-1 and make sure that NOT having citus_columnar extension
-- doesn't cause any issues.
ALTER EXTENSION citus UPDATE TO '13.2-1';
SELECT NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_am WHERE pg_am.amname = 'columnar') as columnar_am_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_namespace WHERE nspname IN ('columnar', 'columnar_internal')) as columnar_catalog_schemas_not_exists;
SELECT NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname IN ('alter_columnar_table_set', 'alter_columnar_table_reset', 'upgrade_columnar_storage', 'downgrade_columnar_storage', 'columnar_ensure_am_depends_catalog')) as columnar_utilities_not_exists;
-- Downgrade back to 10.0-4 for the rest of the tests.
--
-- same here - to downgrade Citus, first we need to create citus_columnar
CREATE EXTENSION citus_columnar;
ALTER EXTENSION citus UPDATE TO '10.0-4';
-- not print "HINT: " to hide current lib version -- not print "HINT: " to hide current lib version
\set VERBOSITY terse \set VERBOSITY terse
CREATE TABLE columnar_table(a INT, b INT) USING columnar; CREATE TABLE columnar_table(a INT, b INT) USING columnar;
@ -547,6 +612,10 @@ CREATE EXTENSION citus;
ALTER EXTENSION citus UPDATE TO '11.1-1'; ALTER EXTENSION citus UPDATE TO '11.1-1';
SELECT * FROM multi_extension.print_extension_changes(); SELECT * FROM multi_extension.print_extension_changes();
-- Make sure that citus_columnar is automatically created while updating Citus to 11.1-1
-- as we created columnar tables using the columnar access method before.
SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'citus_columnar') as citus_columnar_exists;
-- Test downgrade to 11.1-1 from 11.2-1 -- Test downgrade to 11.1-1 from 11.2-1
ALTER EXTENSION citus UPDATE TO '11.2-1'; ALTER EXTENSION citus UPDATE TO '11.2-1';
ALTER EXTENSION citus UPDATE TO '11.1-1'; ALTER EXTENSION citus UPDATE TO '11.1-1';
@ -772,7 +841,6 @@ ALTER EXTENSION citus UPDATE;
-- re-create in newest version -- re-create in newest version
DROP EXTENSION citus; DROP EXTENSION citus;
DROP EXTENSION citus_columnar;
\c \c
CREATE EXTENSION citus; CREATE EXTENSION citus;
@ -780,7 +848,6 @@ CREATE EXTENSION citus;
\c - - - :worker_1_port \c - - - :worker_1_port
DROP EXTENSION citus; DROP EXTENSION citus;
DROP EXTENSION citus_columnar;
SET citus.enable_version_checks TO 'false'; SET citus.enable_version_checks TO 'false';
SET columnar.enable_version_checks TO 'false'; SET columnar.enable_version_checks TO 'false';
CREATE EXTENSION citus VERSION '8.0-1'; CREATE EXTENSION citus VERSION '8.0-1';
@ -1048,8 +1115,7 @@ SELECT citus_add_local_table_to_metadata('test');
DROP TABLE test; DROP TABLE test;
-- Verify that we don't consider the schemas created by extensions as tenant schemas. -- Verify that we don't consider the schemas created by extensions as tenant schemas.
-- Easiest way of verifying this is to drop and re-create columnar extension. -- Easiest way of verifying this is to create columnar extension.
DROP EXTENSION citus_columnar;
SET citus.enable_schema_based_sharding TO ON; SET citus.enable_schema_based_sharding TO ON;

View File

@ -301,10 +301,12 @@ SELECT tablename, indexname FROM pg_indexes WHERE schemaname = 'fix_idx_names' O
-- create only one shard & one partition so that the output easier to check -- create only one shard & one partition so that the output easier to check
SET citus.next_shard_id TO 915000; SET citus.next_shard_id TO 915000;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1370100;
SET citus.shard_count TO 1; SET citus.shard_count TO 1;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
CREATE TABLE parent_table (dist_col int, another_col int, partition_col timestamp, name text) PARTITION BY RANGE (partition_col); CREATE TABLE parent_table (dist_col int, another_col int, partition_col timestamp, name text) PARTITION BY RANGE (partition_col);
SELECT create_distributed_table('parent_table', 'dist_col'); SELECT create_distributed_table('parent_table', 'dist_col', colocate_with=>'none');
CREATE TABLE p1 PARTITION OF parent_table FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'); CREATE TABLE p1 PARTITION OF parent_table FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
CREATE INDEX i1 ON parent_table(dist_col); CREATE INDEX i1 ON parent_table(dist_col);
@ -329,6 +331,14 @@ ALTER INDEX p1_pkey RENAME TO p1_pkey_renamed;
ALTER INDEX p1_dist_col_partition_col_key RENAME TO p1_dist_col_partition_col_key_renamed; ALTER INDEX p1_dist_col_partition_col_key RENAME TO p1_dist_col_partition_col_key_renamed;
ALTER INDEX p1_dist_col_idx RENAME TO p1_dist_col_idx_renamed; ALTER INDEX p1_dist_col_idx RENAME TO p1_dist_col_idx_renamed;
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
SET search_path TO fix_idx_names, public;
SET columnar.compression TO 'zstd';
-- should be able to create a new partition that is columnar -- should be able to create a new partition that is columnar
SET citus.log_remote_commands TO ON; SET citus.log_remote_commands TO ON;
CREATE TABLE p2(dist_col int NOT NULL, another_col int, partition_col timestamp NOT NULL, name text) USING columnar; CREATE TABLE p2(dist_col int NOT NULL, another_col int, partition_col timestamp NOT NULL, name text) USING columnar;
@ -341,3 +351,6 @@ ALTER TABLE parent_table DROP CONSTRAINT unique_cst CASCADE;
SET client_min_messages TO WARNING; SET client_min_messages TO WARNING;
DROP SCHEMA fix_idx_names CASCADE; DROP SCHEMA fix_idx_names CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -4,6 +4,10 @@
-- Test user permissions. -- Test user permissions.
-- --
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
SET citus.next_shard_id TO 1420000; SET citus.next_shard_id TO 1420000;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
@ -332,3 +336,6 @@ DROP USER full_access;
DROP USER read_access; DROP USER read_access;
DROP USER no_access; DROP USER no_access;
DROP ROLE some_role; DROP ROLE some_role;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- --
-- MULTI_TENANT_ISOLATION -- MULTI_TENANT_ISOLATION
-- --
@ -610,3 +614,6 @@ TRUNCATE TABLE pg_catalog.pg_dist_colocation;
ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1; ALTER SEQUENCE pg_catalog.pg_dist_colocationid_seq RESTART 1;
ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placement_id; ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placement_id;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- --
-- MULTI_TENANT_ISOLATION -- MULTI_TENANT_ISOLATION
-- --
@ -610,3 +614,6 @@ ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART :last_placem
-- make sure we don't have any replication objects leftover on the nodes -- make sure we don't have any replication objects leftover on the nodes
SELECT public.wait_for_resource_cleanup(); SELECT public.wait_for_resource_cleanup();
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
CREATE SCHEMA "Mx Regular User"; CREATE SCHEMA "Mx Regular User";
SET search_path TO "Mx Regular User"; SET search_path TO "Mx Regular User";
@ -345,3 +349,6 @@ SELECT start_metadata_sync_to_node('localhost', :worker_1_port);
SELECT start_metadata_sync_to_node('localhost', :worker_2_port); SELECT start_metadata_sync_to_node('localhost', :worker_2_port);
DROP SCHEMA "Mx Regular User" CASCADE; DROP SCHEMA "Mx Regular User" CASCADE;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -2,6 +2,10 @@ SET citus.shard_replication_factor to 1;
SET citus.next_shard_id TO 60000; SET citus.next_shard_id TO 60000;
SET citus.next_placement_id TO 60000; SET citus.next_placement_id TO 60000;
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
create schema test_pg12; create schema test_pg12;
set search_path to test_pg12; set search_path to test_pg12;
@ -394,3 +398,5 @@ drop schema test_pg12 cascade;
SET citus.shard_replication_factor to 2; SET citus.shard_replication_factor to 2;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -1,3 +1,7 @@
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
create schema pg14; create schema pg14;
set search_path to pg14; set search_path to pg14;
SET citus.shard_replication_factor TO 1; SET citus.shard_replication_factor TO 1;
@ -760,3 +764,6 @@ drop extension postgres_fdw cascade;
drop schema pg14 cascade; drop schema pg14 cascade;
DROP ROLE role_1, r1; DROP ROLE role_1, r1;
reset client_min_messages; reset client_min_messages;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

View File

@ -5,6 +5,10 @@ SHOW server_version \gset
SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17 SELECT substring(:'server_version', '\d+')::int >= 17 AS server_version_ge_17
\gset \gset
SET client_min_messages TO WARNING;
CREATE EXTENSION IF NOT EXISTS citus_columnar;
RESET client_min_messages;
-- PG17 has the capabilty to pull up a correlated ANY subquery to a join if -- PG17 has the capabilty to pull up a correlated ANY subquery to a join if
-- the subquery only refers to its immediate parent query. Previously, the -- the subquery only refers to its immediate parent query. Previously, the
-- subquery needed to be implemented as a SubPlan node, typically as a -- subquery needed to be implemented as a SubPlan node, typically as a
@ -1018,12 +1022,14 @@ CREATE TABLE test_partition_2 PARTITION OF test_partitioned_alter
SELECT create_distributed_table('test_partitioned_alter', 'id'); SELECT create_distributed_table('test_partitioned_alter', 'id');
-- Step 4: Verify that the table and partitions are created and distributed correctly on the coordinator -- Step 4: Verify that the table and partitions are created and distributed correctly on the coordinator
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partitioned_alter'; WHERE relname = 'test_partitioned_alter';
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname IN ('test_partition_1', 'test_partition_2') WHERE relname IN ('test_partition_1', 'test_partition_2')
ORDER BY relname; ORDER BY relname;
@ -1032,13 +1038,15 @@ ORDER BY relname;
SET search_path TO pg17; SET search_path TO pg17;
-- Verify the table's access method on the worker node -- Verify the table's access method on the worker node
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partitioned_alter'; WHERE relname = 'test_partitioned_alter';
-- Verify the partitions' access methods on the worker node -- Verify the partitions' access methods on the worker node
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname IN ('test_partition_1', 'test_partition_2') WHERE relname IN ('test_partition_1', 'test_partition_2')
ORDER BY relname; ORDER BY relname;
@ -1055,13 +1063,15 @@ ALTER TABLE test_partitioned_alter SET ACCESS METHOD columnar;
-- Reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a2290429eac3217b0c7b0b485db9c2bcc72 -- Reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=374c7a2290429eac3217b0c7b0b485db9c2bcc72
-- Verify the parent table's access method -- Verify the parent table's access method
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partitioned_alter'; WHERE relname = 'test_partitioned_alter';
-- Verify the partitions' access methods -- Verify the partitions' access methods
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname IN ('test_partition_1', 'test_partition_2') WHERE relname IN ('test_partition_1', 'test_partition_2')
ORDER BY relname; ORDER BY relname;
@ -1069,8 +1079,9 @@ ORDER BY relname;
CREATE TABLE test_partition_3 PARTITION OF test_partitioned_alter CREATE TABLE test_partition_3 PARTITION OF test_partitioned_alter
FOR VALUES FROM (200) TO (300); FOR VALUES FROM (200) TO (300);
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partition_3'; WHERE relname = 'test_partition_3';
-- Step 6 (Repeat on a Worker Node): Verify that the new partition is created correctly -- Step 6 (Repeat on a Worker Node): Verify that the new partition is created correctly
@ -1078,8 +1089,9 @@ WHERE relname = 'test_partition_3';
SET search_path TO pg17; SET search_path TO pg17;
-- Verify the new partition's access method on the worker node -- Verify the new partition's access method on the worker node
SELECT relname, relam SELECT relname, amname
FROM pg_class FROM pg_class
JOIN pg_am ON (relam = pg_am.oid)
WHERE relname = 'test_partition_3'; WHERE relname = 'test_partition_3';
\c - - - :master_port \c - - - :master_port
@ -1764,3 +1776,6 @@ RESET client_min_messages;
DROP ROLE regress_maintain; DROP ROLE regress_maintain;
DROP ROLE regress_no_maintain; DROP ROLE regress_no_maintain;
SET client_min_messages TO WARNING;
DROP EXTENSION citus_columnar CASCADE;

Some files were not shown because too many files have changed in this diff Show More