Commit Graph

4442 Commits (fc09e1cfdcb4619544c6f356b14a39f766c8b718)

Author SHA1 Message Date
Onur Tirtir f3cdb6d1bf Deparse ALTER TABLE commands if ADD COLUMN is the only subcommand
And stabilize multi_alter_table_statements.sql.
2023-07-12 18:17:47 +03:00
Onur Tirtir 6365f47b57 Properly handle index storage options for ADD CONSTRAINT / COLUMN 2023-07-11 17:42:43 +03:00
Onur Tirtir ae142e1764 Properly handle IF NOT EXISTS for ADD COLUMN 2023-07-11 17:42:43 +03:00
Onur Tirtir d4789a2c3a Stabilize test helper sql files
multi_test_helpers runs in parallel with other tests, so the other test
helpers need to be stabilized too to make multi_test_helpers runnable
multiple times.
2023-07-06 10:47:41 +03:00
Onur Tirtir 001437bdfe Refactor AppendAlterTableCmdAddConstraint to reuse it for ADD COLUMN too 2023-07-06 10:47:41 +03:00
Onur Tirtir 56f1daa800 Refactor the code that extends constraint/index names on shards into a func 2023-07-06 10:47:41 +03:00
Onur Tirtir ba1ea9b5bd Refactor the code that prepares constraint objects in an alter table stmt into a func 2023-07-06 10:47:41 +03:00
Halil Ozan Akgül 613cced1ae
Use citus_shard_sizes in citus_tables (#7018)
Fixes #7019 

This PR updates the citus_tables view to use the citus_shard_sizes
function instead of citus_total_relation_size, to improve performance.
2023-07-05 11:40:34 +03:00
aykut-bozkurt 719d92c8b9
mat view should not be converted to tenant table (#7043)
We allow materialized views to exist in a distributed schema, but they
should not be converted to tenant tables since they cannot be
distributed.

Fixes https://github.com/citusdata/citus/issues/7041
2023-07-04 17:28:03 +03:00
Ahmet Gedemenli 5051be86ff
Skip distributed schema insertion into pg_dist_schema, if already exists (#7044)
Inserting into `pg_dist_schema` causes unexpected duplicate key errors
for distributed schemas that already exist. With this commit we skip the
insertion if the schema already exists in `pg_dist_schema`.

The error:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA sc2;
CREATE SCHEMA IF NOT EXISTS sc2;
NOTICE:  schema "sc2" already exists, skipping
ERROR:  duplicate key value violates unique constraint "pg_dist_schema_pkey"
DETAIL:  Key (schemaid)=(17294) already exists.
```

fixes: #7042
2023-07-04 15:19:07 +03:00
Gokhan Gulbiz e0d3476526
Add locking mechanism for tenant monitoring probabilistic approach (#7026)
This PR 
* Addresses a concurrency issue in the probabilistic approach of tenant
monitoring by acquiring a shared lock for tenant existence checks.
* Changes `citus.stat_tenants_sample_rate_for_new_tenants` type to
double
* Renames `citus.stat_tenants_sample_rate_for_new_tenants` to
`citus.stat_tenants_untracked_sample_rate`
2023-07-03 13:08:03 +03:00
Jelte Fennema ac24e11986
Change default rebalance strategy to by_disk_size (#7033)
DESCRIPTION: Change default rebalance strategy to by_disk_size

When introducing rebalancing by disk size we didn't make it the default
initially, mainly because we expected some problems with it. We have
indeed had some problems/bugs with it over the years, and have fixed all
of them. By now we're quite confident in its stability, and it pretty
much always gives better results than by_shard_count.

So this PR makes by_disk_size the new default. We don't change the
default when a strategy other than by_shard_count is currently the
default, in case someone defined their own rebalance strategy and marked
it as the default themselves.

Note: the migration explicitly does nothing during a downgrade, because
there's no way of knowing whether the rebalance strategy before the
upgrade was by_disk_size or by_shard_count. And even in previous
versions, by_disk_size has been considered superior for quite some time.
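
For illustration, a minimal sketch of checking and switching the default by hand (the function and catalog names below are existing Citus API; the calls are illustrative, not part of this change):

```sql
-- Inspect the available strategies and see which one is the default.
SELECT name, default_strategy FROM pg_dist_rebalance_strategy;

-- Explicitly make by_disk_size the default, which is what the upgrade now
-- does unless a custom strategy was already marked as the default.
SELECT citus_set_default_rebalance_strategy('by_disk_size');

-- Rebalance using the default strategy.
SELECT rebalance_table_shards();
```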
2023-07-03 11:08:24 +02:00
Jelte Fennema fd1427de2c
Change by_disk_size rebalance strategy to have a base size (#7035)
One problem with rebalancing by disk size is that shards in newly
created colocation groups are considered extremely small. This can
easily result in bad balances if some other colocation groups
do have data. An extremely bad example of this is:
1. You have 2 workers
2. Both contain about 100GB of data, but there's a 70MB difference.
3. You create 100 new distributed schemas with a few empty tables in
   them
4. You run the rebalancer
5. Now all new distributed schemas are placed on the node that had
   70MB less.
6. You start loading some data in these shards and quickly the balance
   is completely off

To address this edge case, this PR changes the by_disk_size rebalance
strategy to add a base size of 100MB to the actual size of each
shard group. This can still result in a bad balance when shard groups
are empty, but it solves some of the worst cases.
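
A minimal sketch of the effect, with made-up sizes: with the 100MB base, an empty shard group no longer looks negligibly small next to a loaded one.

```sql
-- Hypothetical sizes; the strategy now compares raw_size + 100MB rather than raw_size alone.
SELECT shard_group,
       pg_size_pretty(raw_size)                     AS raw_size,
       pg_size_pretty(raw_size + 100 * 1024 * 1024) AS size_used_for_balancing
FROM (VALUES ('empty_group',  16384::bigint),
             ('loaded_group', 10737418240::bigint)) AS s(shard_group, raw_size);
```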
2023-06-27 16:37:09 +02:00
Halil Ozan Akgül 03a4769c3a
Fix Reference Table Check for CDC (#7025)
Previously the reference table check only looked at `partition method =
'n'`. This PR adds `replication model = 't'` to that check.
2023-06-23 16:37:35 +03:00
Teja Mupparti 387b5f80f9 Fixes bug #6785 2023-06-22 10:44:45 -07:00
Ahmet Gedemenli 99edb2675f
Improve error/hint messages related to schema-based sharding (#7027)
Improve error/hint messages related to schema-based sharding
2023-06-22 18:10:12 +03:00
Ahmet Gedemenli 44e3c3b9c6
Improve error message for CREATE SCHEMA .. CREATE TABLE (#7024)
Improve error message for CREATE SCHEMA .. CREATE TABLE when
enable_schema_based_sharding is enabled.
2023-06-21 15:24:09 +03:00
aykut-bozkurt 565c5260fd
Properly handle error at owner check (#6984)
We did not properly handle errors in the ownership check method, which
caused `max stack depth for errors` as in
https://github.com/citusdata/citus/issues/6980.

**Fix:**
In case of an error, we should roll back the subtransaction and throw
the message at log level `LOG_SERVER_ONLY`.

Note: We suppress these logs on the client to prevent pg vanilla test
failures caused by Citus logs that differ from the actual Postgres logs.
(For context: https://github.com/citusdata/citus/pull/6130)

I also needed to fix a flaky test: `multi_schema_support`

DESCRIPTION: Fixes a bug related to non-existent objects in DDL
commands.

Fixes https://github.com/citusdata/citus/issues/6980
2023-06-21 14:50:01 +03:00
Naisila Puka 69af3e8509
Drop PG13 Support Phase 2 - Remove PG13 specific paths/tests (#7007)
This commit is the second and last phase of dropping PG13 support.

It consists of the following:

- Removes all PG_VERSION_13 & PG_VERSION_14 from codepaths
- Removes pg_version_compat entries and columnar_version_compat entries
specific for PG13
- Removes alternative pg13 test outputs 
- Removes PG13 normalize lines and fixes the test outputs based on that

It is a continuation of 5bf163a27d
2023-06-21 14:18:23 +03:00
aykut-bozkurt 1bb667ce6e
Fix create schema authorization bug (#7015)
Fixes a bug related to `CREATE SCHEMA AUTHORIZATION <rolename>` for single shard
tables. We now properly fetch the schema name from the role specification when the schema name is not given.
2023-06-20 22:05:17 +03:00
aykut-bozkurt f667f14029
Rewind tuple store to fix scrollable with hold cursor fetches (#7014)
We need to rewind the tuplestorestate's tuple index to get correct
results on fetching scrollable with hold cursors.


`PersistHoldablePortal` is responsible for persisting the
tuplestorestate inside a with hold cursor before committing a
transaction.

It rewinds the cursor like below (`ExecutorRewind` calls `rescan`):
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
  ExecutorRewind(queryDesc);
}
```

At the end, it adjusts tuple index for holdStore in the portal properly.
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
    if (!tuplestore_skiptuples(portal->holdStore,
                               portal->portalPos,
                               true))
        elog(ERROR, "unexpected end of tuple stream");
}
```

DESCRIPTION: Fixes incorrect results on fetching scrollable with hold
cursors.

Fixes https://github.com/citusdata/citus/issues/7010
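
A minimal reproduction sketch of the fixed scenario, assuming a distributed table `dist_table` (hypothetical name):

```sql
BEGIN;
DECLARE c SCROLL CURSOR WITH HOLD FOR SELECT key FROM dist_table ORDER BY key;
FETCH FORWARD 3 FROM c;
COMMIT;                    -- the held portal is persisted here (PersistHoldablePortal)
FETCH BACKWARD 2 FROM c;   -- previously could return incorrect rows; now the tuple index is rewound correctly
CLOSE c;
```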
2023-06-19 23:00:18 +03:00
Teja Mupparti 58da8771aa This pull request introduces support for nonroutable merge commands in the following scenarios:
1) For distributed tables that are not colocated.
2) When joining on a non-distribution column for colocated tables.
3) When merging into a distributed table using reference or citus-local tables as the data source.

This is accomplished primarily through the implementation of the following two strategies.

Repartition: Plan the source query independently,
execute the results into intermediate files, and repartition the files to
co-locate them with the merge-target table. Subsequently, compile a final
merge query on the target table using the intermediate results as the data
source.

Pull-to-coordinator: Execute the plan that requires evaluation at the coordinator,
run the query on the coordinator, and redistribute the resulting rows to ensure
colocation with the target shards. Direct the MERGE SQL operation to the worker
nodes' target shards, using the intermediate files colocated with the data as the
data source.
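
A sketch of a now-supported non-routable MERGE, assuming distributed tables `target_dist` and `source_dist` (hypothetical names) joined on a non-distribution column:

```sql
MERGE INTO target_dist t
USING source_dist s ON t.other_col = s.other_col  -- not the distribution column
WHEN MATCHED THEN
    UPDATE SET val = s.val
WHEN NOT MATCHED THEN
    INSERT (id, other_col, val) VALUES (s.id, s.other_col, s.val);
```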
2023-06-19 12:23:40 -07:00
Xin Li c10cb50aa9
Support custom cast from / to timestamptz in time partition management UDFs (#6923)
This implements custom casting of the table partition column
type from / to `timestamptz` in the time partition management UDFs, as
proposed in ticket #6454.

The general idea is that for a time partition column with a type other
than `date`, `timestamp`, or `timestamptz`, users can provide a custom
bidirectional cast between the column type and `timestamptz`; the UDFs
will then be able to create and drop time partitions for such tables.

Fixes #6454
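
A sketch of the intended usage, assuming a range-partitioned table `events` whose partition column is a bigint epoch value; the helper functions are hypothetical, and creating casts between built-in types requires superuser:

```sql
CREATE FUNCTION epoch_to_tstz(bigint) RETURNS timestamptz
    LANGUAGE sql IMMUTABLE AS $$ SELECT to_timestamp($1) $$;
CREATE FUNCTION tstz_to_epoch(timestamptz) RETURNS bigint
    LANGUAGE sql IMMUTABLE AS $$ SELECT extract(epoch FROM $1)::bigint $$;
CREATE CAST (bigint AS timestamptz) WITH FUNCTION epoch_to_tstz(bigint);
CREATE CAST (timestamptz AS bigint) WITH FUNCTION tstz_to_epoch(timestamptz);

-- The UDFs can now create daily partitions for `events`.
SELECT create_time_partitions('events', interval '1 day', now() + interval '7 days');
```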

---------

Signed-off-by: Xin Li <xin@swirldslabs.com>
Co-authored-by: Marco Slot <marco.slot@microsoft.com>
Co-authored-by: Ahmet Gedemenli <afgedemenli@gmail.com>
2023-06-19 17:49:05 +03:00
Halil Ozan Akgül d71ad4b65a
Add Publication Tests for Tenant Schema Tables (#7011)
This PR adds schema based sharding tests to the publication.sql file
2023-06-19 12:39:41 +03:00
aykut-bozkurt fba5c8dd30
ALTER TABLE <tblname> SET SCHEMA <schemaname> for single shard tables (#7004)
Adds support for altering the schema of single shard tables. We do that in 2
steps:
1. Undistribute the tenant table at the `preprocess` step.
2. Distribute it under the new schema, if that is a distributed schema,
after the DDLs are propagated.

DESCRIPTION: Adds support for altering a table's schema to/from
distributed schemas.
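
A sketch with hypothetical names, assuming `tenant_sc` is a distributed (tenant) schema:

```sql
-- Moving a tenant table out of a distributed schema undistributes it first.
ALTER TABLE tenant_sc.orders SET SCHEMA public;

-- Moving a table into a distributed schema converts it to a tenant table
-- once the DDL is propagated.
ALTER TABLE public.orders SET SCHEMA tenant_sc;
```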
2023-06-19 10:21:13 +03:00
Nils Dijk ce2ba1d07e
Optimize QueryPushdownSqlTaskList on memory and cpu (#6945)
While going over this piece of code (a long time ago) it bothered me
that we keep a bool array with the size of the shard count just to
iterate over the shards present in the list of non-pruned shards,
especially since we keep the min/max of the set shards to optimize
iteration.

Postgres has the bitmapset datastructure, which a) takes significantly
less space, b) has iterator functions to only iterate over set bits, c)
can efficiently skip long sequences of unset bits and d) stops quickly
once the last set bit has been reached.

I have been contemplating whether it is worth keeping the minShardOffset
for readability and the efficient skipping of unset bits; however, I
have decided to keep it -although less readable-, as there are known use
cases where 100k+ shards are pruned down to single digits. If those
ended up at the end of the shard range, a hot loop of zero checks on the
first iteration _could_ cause a theoretical performance regression.

All in all, this code is using less memory in all cases where it
matters, and less cpu in most cases, while using more idiomatic
datastructures for the task at hand.
2023-06-16 16:06:22 +02:00
Marco Slot 3adc1575d9
Fix DROP CONSTRAINT in command string with other commands (#7012)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-06-16 15:54:37 +02:00
Onur Tirtir 12a093b456
Allow using generated identity column based on int/smallint when creating a distributed table (#7008)
Allow using generated identity column based on int/smallint when
creating a distributed table so that applications that rely on
those data types don't break.

Inserting into / modifying such columns from workers is not allowed
but it's better than not allowing such columns altogether.
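
A sketch of the now-allowed pattern (table and column names hypothetical):

```sql
CREATE TABLE items (
    id smallint GENERATED BY DEFAULT AS IDENTITY,  -- int/smallint identity is now allowed
    key text NOT NULL
);
SELECT create_distributed_table('items', 'key');
-- Inserting into / modifying `id` from workers still errors out.
```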
2023-06-16 14:34:23 +03:00
Halil Ozan Akgül 04f6868ed2
Add citus_schemas view (#6979)
DESCRIPTION: Adds citus_schemas view

The citus_schemas view will be created in the public schema if it exists;
otherwise, the view will be created in pg_catalog.

Need to:
- [x] Add tests
- [x] Fix tests
2023-06-16 14:21:58 +03:00
Naisila Puka 5bf163a27d
Remove PG13 from CI and Configure (#7002)
DESCRIPTION: Drops PG13 Support

This commit is the first phase of dropping PG13 support.

It consists of the following:

- Removes pg13 from CI tests
Among other things, Citus upgrade tests should now use PG14.
The earliest Citus version supporting PG14 is 10.2.
We also pick version 11.3 for the upgrade_pg_dist_cleanup tests.
Therefore, we run the Citus upgrade tests with versions 10.2 and 11.3.

- Removes pg13 from configure script

- Removes upgrade_columnar_metapage upgrade tests
We populate the first_row_number column of the columnar.stripe table
during the Citus 10.1-10.2 upgrade. Given that we now start from Citus
10.2.0, which is the oldest version supporting PG14, we don't have that
upgrade path anymore. Hence we remove these tests.

- Removes upgrade_pg_dist_object_test and upgrade_partition_constraints tests
These upgrade tests require the old Citus version to be less than 10.0.
Given that we drop support for PG13, we run upgrade tests with PG14,
which starts from Citus 10.2.
So we remove these upgrade tests.

- Documents that upgrade_post_11 should upgrade from a version less than 11
In this way we make sure we run the
citus_finalize_upgrade_to_citus11 script

- Adds needed alternative output for upgrade_citus_finish_citus_upgrade
Given that we use 11.3 as the old Citus version as well, we add this
alternative output because pg_catalog.citus_finish_citus_upgrade()
only makes sense if last_upgrade_major_version < 11. See below for reference:
pg_catalog.citus_finish_citus_upgrade():
...
	IF last_upgrade_major_version < 11 THEN
		PERFORM citus_finalize_upgrade_to_citus11();
		performed_upgrade := true;
	END IF;

	IF NOT performed_upgrade THEN
		RAISE NOTICE 'already at the latest distributed
		schema version (%)', last_upgrade_version_string;
		RETURN;
	END IF;
...

And that's it :)

The second phase of dropping PG13 support will consist of removing
all the PG13 specific compilation paths/tests in the Citus repo.
That will be done soon.
2023-06-15 14:54:06 +03:00
Ahmet Gedemenli 002a88ae7f
Error for single shard table creation if replication factor > 1 (#7006)
Error for single shard table creation if replication factor > 1
2023-06-15 13:13:45 +03:00
Emel Şimşek 4f793abc4a
Turn on GUC_REPORT flag for search_path to enable reporting back the parameter value upon change. (#6983)
DESCRIPTION: Turns on the GUC_REPORT flag for search_path. This causes
Postgres to report the parameter status back in addition to the Command
Complete packet.

In response to the following command,

> SET search_path TO client1;

postgres sends back the following packets (shown in pseudo form):

C (Command Complete) SET + **S (Parameter Status) search_path =
client1**
2023-06-14 17:35:52 +03:00
Naisila Puka 3cc7a4aa42
Fix pg14-pg15 upgrade_distributed_triggers test (#6981)
This test is only relevant for the pg14-15 upgrade.
However, the check on `upgrade_distributed_triggers_after` didn't take
into consideration the case where we are doing a pg15-16 upgrade. Hence, I
added one more condition to the test: the existence of the
`upgrade_distributed_triggers` schema, which can only be created in pg14.
2023-06-14 15:32:38 +03:00
Onur Tirtir dbdf04e8ba
Rename pg_dist tenant_schema to pg_dist_schema (#7001) 2023-06-14 12:12:15 +03:00
Naisila Puka ba40eb363c
Fix some gucs' initial and boot values, and flag combinations (#6957)
PG16beta1 added some sanity checks for GUCs; find the relevant PG
commits below:

1- Add check on initial and boot values when loading GUCs

a73952b795

2- Extend check_GUC_init() with checks on flag combinations when loading GUCs

009f8d1714

I fixed our currently problematic GUCs; we can merge this directly into
main as these fixes make sense for any PG version.

There was a particular NodeConninfo issue:
Previously we relied on the fact that the NodeConninfo initial value
is an empty string. However, with PG16 enforcing the same initial and boot
values, we can't use an empty initial value for NodeConninfo anymore.
Therefore we add a new flag to indicate whether we are at the boot check.
2023-06-14 11:55:52 +03:00
Ahmet Gedemenli 7b0bc62173
Support CREATE TABLE .. AS SELECT .. commands for tenant tables (#6998)
Support CREATE TABLE .. AS SELECT .. commands for tenant tables
2023-06-13 17:54:09 +03:00
Halil Ozan Akgül 772d194357
Changes citus_shard_sizes view's Shard Name Column to Shard Id (#7003)
The citus_shard_sizes view had a shard name column that we used to
extract the shard id. This PR changes the column to shard id so we don't
do unnecessary string operations.
2023-06-13 16:36:35 +03:00
Gokhan Gulbiz e0ccd155ab
Make citus_stat_tenants work with schema-based tenants. (#6936)
DESCRIPTION: Enabling citus_stat_tenants to support schema-based
tenants.

This pull request modifies the existing logic to enable tenant
monitoring with schema-based tenants. The changes made are as follows:

- If a query has a partitionKeyValue (which serves as a tenant
key/identifier for distributed tables), Citus annotates the query with
both the partitionKeyValue and colocationId. This allows for accurate
tracking of the query.
- If a query does not have a partitionKeyValue, but its colocationId
belongs to a distributed schema, Citus annotates the query with only the
colocationId. The tenant monitor can then easily look up the schema to
determine if it's a distributed schema and make a decision on whether to
track the query.

---------

Co-authored-by: Jelte Fennema <jelte.fennema@microsoft.com>
2023-06-13 14:11:45 +03:00
aykut-bozkurt 5acbd735ca
Move 2 functions to correct files (#7000)
Followup item from
https://github.com/citusdata/citus/pull/6933#discussion_r1217896933
2023-06-13 11:43:48 +03:00
aykut-bozkurt 213d363bc3
Add citus_schema_distribute/undistribute udfs to convert a schema into a tenant schema / back to a regular schema (#6933)
* Currently we do not allow any Citus tables other than Citus local
tables inside a regular schema before executing
`citus_schema_distribute`.
* `citus_schema_undistribute` expects only single shard distributed
tables inside a tenant schema.

DESCRIPTION: Adds the udf `citus_schema_distribute` to convert a regular
schema into a tenant schema.
DESCRIPTION: Adds the udf `citus_schema_undistribute` to convert a
tenant schema back to a regular schema.
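
A minimal sketch of the new UDFs (schema and table names hypothetical):

```sql
CREATE SCHEMA tenant_a;
CREATE TABLE tenant_a.users (id bigint PRIMARY KEY);

-- Convert the regular schema into a tenant schema; its tables become
-- colocated single shard distributed tables.
SELECT citus_schema_distribute('tenant_a');

-- And convert it back to a regular schema.
SELECT citus_schema_undistribute('tenant_a');
```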

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-06-12 18:41:31 +03:00
Gokhan Gulbiz 2c509b712a
Tenant monitoring performance improvements (#6868)
- [x] Use spinlock instead of lwlock per tenant
[b437aa9](b437aa9e52)
- [x] Use hashtable to store tenant stats
[ccd464b](ccd464ba04)
- [x] Introduce a new GUC for specifying the sampling rate of new tenant
entries in the tenant monitor.
[a8d3805](a8d3805bd6)

Below are the pgbench metrics with select-only workloads from my local
machine. Here is the
[script](https://gist.github.com/gokhangulbiz/7a2308470597dc06734ff7c08f87c656)
I used for benchmarking.

| | Connection Count | Initial Implementation (TPS) | On/Off Diff | Final Implementation - Run#1 (TPS) | On/Off Diff | Final Implementation - Run#2 (TPS) | On/Off Diff | Final Implementation - Run#3 (TPS) | On/Off Diff | Avg On/Off Diff |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| On | 32 | 37488.69839 | -17% | 42859.94402 | -5% | 43379.63121 | -2% | 42636.2264 | -7% | -5% |
| Off | 32 | 43909.83121 | | 45139.63151 | | 44188.77425 | | 45451.9548 | | |
| On | 300 | 30463.03538 | -15% | 33265.19957 | -7% | 34685.87233 | -2% | 34682.5214 | -1% | -3% |
| Off | 300 | 35105.73594 | | 35637.45423 | | 35331.33447 | | 35113.3214 | | |
2023-06-11 12:17:31 +03:00
Ahmet Gedemenli 2f13b37ce4
Fix flaky multi_schema_support (#6991)
Drop a leftover table, delete an unnecessary command, and add some
ORDER BY clauses to avoid flakiness in `multi_schema_support`
2023-06-09 17:03:58 +03:00
Naisila Puka 50e6c50534
Remove flaky rebalance plan from test (#6990)
Looks like sometimes shards are a slightly different size than we
expect, 16k vs 8k, resulting in a different rebalance plan.
2023-06-09 15:59:30 +03:00
Ahmet Gedemenli e6ac9f2a68
Propagate ALTER SCHEMA .. OWNER TO .. (#6987)
Propagate `ALTER SCHEMA .. OWNER TO ..` commands to workers
2023-06-09 15:32:18 +03:00
Halil Ozan Akgül 3acadd7321
Citus Clock tests with Single Shard Tables (#6938)
This PR tests Citus clock with single shard tables.
2023-06-09 15:06:46 +03:00
Naisila Puka 2ba3bffe1e
Random warning fixes (#6974)
Citus build with PG16 fails because of the following warnings:
 - using char* instead of Datum
 - using pointer instead of oid
 - candidate function for format attribute
 - remove old definition from PG11 compatibility 62bf571ced

This commit fixes the above.
2023-06-09 14:36:43 +03:00
Emel Şimşek 8b2024b730
When Creating a FOREIGN KEY without a name, schema qualify referenced table name in deparser. (#6986)
DESCRIPTION: Fixes a bug which causes an error when creating a FOREIGN
KEY constraint without a name if the referenced table is schema
qualified.

When deparsing the `ALTER TABLE s1.t1 ADD FOREIGN KEY (key) REFERENCES
s2.t2;` command back from its cooked form, we should schema qualify
the REFERENCED table.

Fixes #6982.
2023-06-09 14:13:13 +03:00
Onur Tirtir fa8870217d
Enable logical planner for single-shard tables (#6950)
* Enable using logical planner for single-shard tables

* Improve non-colocated table error in physical planner

* Favor distributed tables over reference tables when choosing the anchor shard
2023-06-08 10:57:23 +03:00
Halil Ozan Akgül b569d53a0c
Single shard misc udfs (#6956)
This PR tests:
- shards_colocated
- citus_shard_cost_by_disk_size
- citus_update_shard_statistics
- citus_update_table_statistics
2023-06-07 13:30:50 +03:00
Emel Şimşek 6369645db4
Restore Test Coverage for Pushing Down Subqueries. (#6976)
When we add the coordinator to the metadata, reference tables get
replicated to the coordinator. As a result we lose some test coverage,
since some queries start to run locally instead of getting pushed down.

This PR adds new test cases involving distributed tables instead of
reference tables for covering distributed execution in related cases.
2023-06-07 12:14:34 +03:00
Ahmet Gedemenli 8d8968ae63
Disable ALTER TABLE .. SET SCHEMA for tenant tables (#6973)
Disables `ALTER TABLE .. SET SCHEMA` for tenant tables.
Disables `ALTER TABLE .. SET SCHEMA` for tenant schemas.
2023-06-07 11:02:53 +03:00
Halil Ozan Akgül 3f7bc0cbf5
Single Shard Partition Column UDFs (#6964)
This PR fixes and tests:
- debug_equality_expression
- partition_column_id
2023-06-06 17:55:40 +03:00
Halil Ozan Akgül 7e486345f1
Fix citus_table_type column in citus_tables and citus_shards views for single shard tables (#6971)
The `citus_table_type` column of `citus_tables` and `citus_shards` will show
"schema" for tenant schema tables and "distributed" for single shard
tables that are not in a tenant schema.
2023-06-06 16:20:11 +03:00
Naisila Puka c2f117c559
Citus Revise tree-walk APIs to include context (#6975)
Without this revision there are warnings in the PG16 build

Relevant PG commit

1c27d16e6e
1c27d16e6e5c1f463bbe1e9ece88dda811235165
2023-06-06 14:17:51 +03:00
Teja Mupparti f6a516dab5 Refactor repartitioning code into generic format 2023-06-05 09:06:05 -07:00
Naisila Puka 48f068d08e
Remove AssertArg and AssertState (#6970)
PG16 removed them. They were already identical to Assert. We can merge
this directly to main branch

Relevant PG commit:

b1099eca8f
b1099eca8f38ff5cfaf0901bb91cb6a22f909bc6

Co-authored-by: onderkalaci <onderkalaci@gmail.com>
2023-06-05 13:25:21 +03:00
Emel Şimşek 3fda2c3254
Change test files in multi and multi-1 schedules to accommodate coordinator in the metadata. (#6939)
Changes test files in the multi and multi-1 schedules such that they
accommodate the coordinator in metadata.

Changes fall into the following buckets:

1. When the coordinator is in metadata, reference table shards are present
on the coordinator too.
This changes test outputs that check the table size, shard numbers etc.
for reference tables.

2. When the coordinator is in metadata, postgres tables are converted to
citus local tables whenever a foreign key relationship to them is
created. This changes some test cases which test that it should not be
possible to create foreign keys to postgres tables.

3. Remove lines that add/remove coordinator for testing purposes.
2023-06-05 10:37:48 +03:00
ahmet gedemenli 2bd6ff0e93 Use schema name in the error msg 2023-06-02 15:25:14 +03:00
ahmet gedemenli fccfee08b6 Style 2023-06-02 14:48:07 +03:00
ahmet gedemenli f68ea20009 Disable alter_distributed_table for tenant tables 2023-06-02 14:48:07 +03:00
ahmet gedemenli 4b67e398b1 Disable undistribute_table for tenant tables 2023-06-02 14:48:07 +03:00
ahmet gedemenli f4b2494d0c Disable update_distributed_table_colocation for tenant tables 2023-06-02 14:48:07 +03:00
Halil Ozan Akgül 3e183746b7
Single Shard Misc UDFs 2 (#6963)
Creating a second PR to make reviewing easier.
This PR tests:
- replicate_reference_tables
- fix_partition_shard_index_names
- isolate_tenant_to_new_shard
- replicate_table_shards
2023-06-02 13:46:14 +03:00
Halil Ozan Akgül ac7f732be2
Add Single Shard Table Tests for Dependency UDFs (#6960)
This PR tests:
- citus_get_all_dependencies_for_object
- citus_get_dependencies_for_object
- is_citus_depended_object
2023-06-02 11:57:53 +03:00
Teja Mupparti ff2062e8c3 Rename insert-select redistribute code base to generic purpose 2023-06-01 09:43:43 -07:00
Halil Ozan Akgül 9961d39d97
Adds Single Shard Table Tests for Foreign Key UDFs (#6959)
This PR adds tests for:
- get_referencing_relation_id_list
- get_referenced_relation_id_list
- get_foreign_key_connected_relations
2023-06-01 12:56:06 +03:00
ahmet gedemenli 8ace5a7af5 Use citus_drain_node with single shard tables 2023-05-31 14:01:52 +03:00
ahmet gedemenli ee42af7ad2 Add test for rebalancer with single shard tables 2023-05-31 11:48:49 +03:00
Teja Mupparti f9dbe7784b This commit adds a safety-net to the issue seen in #6785. The fix for the underlying issue will be in the PR#6943 2023-05-30 10:53:05 -07:00
Halil Ozan Akgül d99a5e2f62
Single Shard Table Tests for Shard Lock UDFs (#6944)
This PR adds single shard table tests for shard lock UDFs,
`shard_lock_metadata`, `shard_lock_resources`
2023-05-30 12:23:41 +03:00
Halil Ozan Akgül 5b54700b93
Single Shard Table Tests for Time Partitions (#6941)
This PR adds tests for the time partition UDFs and view with single shard
tables.
2023-05-29 14:18:56 +03:00
Halil Ozan Akgül 9d9b3817c1
Single Shard Table Columnar UDFs Tests (#6937)
Adds columnar UDF tests for single shard tables.
2023-05-29 13:53:00 +03:00
Halil Ozan Akgül 321fcfcdb5
Add Support for Single Shard Tables in update_distributed_table_colocation (#6924)
Adds Support for Single Shard Tables in
`update_distributed_table_colocation`.

This PR changes the checks that required tables to be hash distributed
so that they now allow hash or single shard distributed tables.
2023-05-29 11:47:50 +03:00
Ahmet Gedemenli 1ca80813f6
Citus UDFs support for single shard tables (#6916)
Verify Citus UDFs work well with single shard tables

SUPPORTED
* citus_table_size
* citus_total_relation_size
* citus_relation_size
* citus_shard_sizes
* truncate_local_data_after_distributing_table
* create_distributed_function // test function colocated with a single
shard table
* undistribute_table
* alter_table_set_access_method

UNSUPPORTED - error out for single shard tables
* master_create_empty_shard
* create_distributed_table_concurrently
* create_distributed_table
* create_reference_table
* citus_add_local_table_to_metadata
* citus_split_shard_by_split_points
* alter_distributed_table
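
For example, a sketch with a hypothetical single shard table `t`:

```sql
SELECT citus_relation_size('t');        -- supported
SELECT master_create_empty_shard('t');  -- errors out for single shard tables
```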
2023-05-26 17:30:05 +03:00
Onur Tirtir 246b054a7d
Add support for schema-based-sharding via a GUC (#6866)
DESCRIPTION: Adds citus.enable_schema_based_sharding GUC that allows
sharding the database based on schemas when enabled.

* Refactor the logic that automatically creates Citus managed tables 

* Refactor CreateSingleShardTable() to allow specifying colocation id
instead

* Add support for schema-based-sharding via a GUC

### What this PR is about:
Add the **citus.enable_schema_based_sharding GUC** to enable schema-based
sharding. Each schema created while this GUC is ON will be considered
a tenant schema. Later on, regardless of whether the GUC is ON or
OFF, any table created in a tenant schema will be converted to a
single shard distributed table (without a shard key). All the tenant
tables that belong to a particular schema will be colocated with each
other and will have a shard count of 1.
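
For example (schema and table names hypothetical):

```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA tenant1;                    -- registered as a tenant schema
CREATE TABLE tenant1.orders (id bigint);  -- becomes a single shard distributed table
CREATE TABLE tenant1.events (id bigint);  -- colocated with tenant1.orders
```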

We introduce a new metadata table --pg_dist_tenant_schema-- to do the
bookkeeping for tenant schemas:
```sql
psql> \d pg_dist_tenant_schema
          Table "pg_catalog.pg_dist_tenant_schema"
┌───────────────┬─────────┬───────────┬──────────┬─────────┐
│    Column     │  Type   │ Collation │ Nullable │ Default │
├───────────────┼─────────┼───────────┼──────────┼─────────┤
│ schemaid      │ oid     │           │ not null │         │
│ colocationid  │ integer │           │ not null │         │
└───────────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "pg_dist_tenant_schema_pkey" PRIMARY KEY, btree (schemaid)
    "pg_dist_tenant_schema_unique_colocationid_index" UNIQUE, btree (colocationid)

psql> table pg_dist_tenant_schema;
┌───────────┬───────────────┐
│ schemaid  │ colocationid  │
├───────────┼───────────────┤
│     41963 │            91 │
│     41962 │            90 │
└───────────┴───────────────┘
(2 rows)
```

The colocation id column of pg_dist_tenant_schema can never be NULL, even
for tenant schemas that don't have a tenant table yet. This is
because we assign colocation ids to tenant schemas as soon as they
are created. That way, we can keep associating tenant schemas with
particular colocation groups even if all the tenant tables of a tenant
schema are dropped and recreated later on.

When a tenant schema is dropped, we delete the corresponding row from
pg_dist_tenant_schema. In that case, we delete the corresponding
colocation group from pg_dist_colocation as well.

### Future work for 12.0 release:
We're building schema-based sharding on top of the infrastructure that
adds support for creating distributed tables without a shard key
(https://github.com/citusdata/citus/pull/6867).
However, not all the operations that can be done on distributed tables
without a shard key necessarily make sense (in the same way) in the
context of schema-based sharding. For example, we need to think about
what happens if a user attempts to alter the schema of a tenant table. We
will tackle such scenarios in a future PR.

We will also add a new UDF --citus.schema_tenant_set() or such-- to
allow users to use an existing schema as a tenant schema, and another
one --citus.schema_tenant_unset() or such-- to stop using a schema as
a tenant schema in future PRs.
2023-05-26 10:49:58 +03:00
Halil Ozan Akgül 2c7beee562
Fix citus.tenant_stats_limit test by setting it to 2 (#6899)
citus.tenant_stats_limit was set to 2 when we were adding tests for it.
Then we changed it to 10, making the tests incorrect.
This PR fixes that without breaking other tests.
2023-05-23 17:44:07 +03:00
Jelte Fennema 350a0f6417
Support running Citus upgrade tests with run_test.py (#6832)
Citus upgrade tests require some additional logic to run, because we
have a before and after schedule and we need to swap the Citus
version in-between. This adds that logic to `run_test.py`.

In passing this makes running upgrade tests locally multiple times
faster by caching tarballs.
2023-05-23 14:38:54 +02:00
Emel Şimşek 02f815ce1f
Disable local execution when Explain Analyze is requested for a query. (#6892)
DESCRIPTION: Fixes a crash when explain analyze is requested for a query
that is normally locally executed.

When explain analyze is requested for a query, a task with two queries
is created. Those two queries are
    
1. Wrapped Query --> `SELECT ... FROM
worker_save_query_explain_analyze(<query>, <explain analyze options>)`
2. Fetch Query --> `SELECT explain_analyze_output, execution_duration
FROM worker_last_saved_explain_analyze();`

When the query is locally executed a task with multiple queries causes a
crash in production. See the Assert at
57455dc64d/src/backend/distributed/executor/tuple_destination.c#:~:text=Assert(task%2D%3EqueryCount%20%3D%3D%201)%3B

This becomes a critical issue when auto_explain extension is used. When
auto_explain extension is enabled, explain analyze is automatically
requested for every query.

One possible solution could be not to create two queries for a locally
executed query. The fetch part may not have to be a query since the
values are available in local variables.

Until we enable local execution for explain analyze, it is best to
disable local execution.

Fixes #6777.
2023-05-23 14:33:22 +03:00
Emel Şimşek f9a5be59b9
Run replicate_reference_tables background task as superuser. (#6930)
DESCRIPTION: Fixes a bug in background shard rebalancer where the
replicate reference tables task fails if the current user is not a
superuser.

This change is to be backported to earlier releases. We should fix the
permissions for replicate_reference_tables on main branch such that it
can be run by non-superuser roles.

Fixes #6925.
Fixes #6926.
2023-05-18 23:46:32 +03:00
Hanefi Onaldi 6a83290d91
Add ORDER BY clauses to some flaky tests (#6931)
I observed a flaky test output
[here](https://app.circleci.com/pipelines/github/citusdata/citus/32692/workflows/32464a22-7fd6-440a-9ff7-cfa62f9ff58a/jobs/1126144)
and added `ORDER BY` clauses to similar queries in the failing test
file.

```diff
 SELECT pg_identify_object_as_address(classid, objid, objsubid) from pg_catalog.pg_dist_object where objid IN('viewsc.prop_view3'::regclass::oid, 'viewsc.prop_view4'::regclass::oid);
   pg_identify_object_as_address  
 ---------------------------------
- (view,"{viewsc,prop_view3}",{})
  (view,"{viewsc,prop_view4}",{})
+ (view,"{viewsc,prop_view3}",{})
 (2 rows)
```
2023-05-18 12:45:39 +03:00
Onur Tirtir 8ff9dde4b3
Prevent pushing down INSERT .. SELECT queries that we shouldn't (and allow some more) (#6752)
Previously the INSERT .. SELECT planner was pushing down some queries that should not be pushed down, due to wrong colocation checks: it only checked whether one of the tables in the SELECT part was colocated with the target table. Now we check colocation between all tables in the SELECT part and the target table.

Another problem with the INSERT .. SELECT planner was that some queries that are valid to push down were not pushed down due to unnecessarily restrictive checks, e.g. the UNION check. As a solution, we reused the pushdown planner checks for the INSERT .. SELECT planner.


DESCRIPTION: Fixes a bug that causes incorrectly pushing down some
INSERT .. SELECT queries that we shouldn't
DESCRIPTION: Prevents unnecessarily pulling the data into coordinator
for some INSERT .. SELECT queries
DESCRIPTION: Drops support for pushing down INSERT .. SELECT with append
table as target

Fixes #6749.
Fixes #1428.
Fixes #6920.
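
As a sketch of the corrected colocation rule (table names hypothetical), pushdown now requires every table in the SELECT part to be colocated with the target:

```sql
-- Pushed down only if source_a, source_b and target are all colocated
-- and joined on their distribution columns.
INSERT INTO target (key, val)
SELECT a.key, b.val
FROM source_a a JOIN source_b b USING (key);
```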

---------

Co-authored-by: aykutbozkurt <aykut.bozkurt1995@gmail.com>
2023-05-17 15:05:08 +03:00
Onur Tirtir 56d217b108
Mark objects as distributed even when pg_dist_node is empty (#6900)
We mark objects as distributed in Citus metadata only if we need
to propagate the command that creates them to worker nodes. For this
reason, we were not doing this for objects created while
pg_dist_node is empty.

One implication of doing so is that we defer the schema propagation to
the time when the user creates the first distributed table in the schema.
However, this doesn't help for schema-based sharding (#6866), because we
want to sync pg_dist_tenant_schema to the worker nodes even for empty
schemas.

* Support test dependencies for isolation tests without a schedule

* Comment out a test due to a known issue (#6901)

* Also, reduce the verbosity for some log messages and make some
   tests compatible with run_test.py.
2023-05-16 11:45:42 +03:00
Onur Tirtir e7abde7e81
Prevent downgrades when there is a single-shard table in the cluster (#6908)
Also add a few tests for Citus/PG upgrade/downgrade scenarios.
2023-05-16 09:44:28 +02:00
Onur Tirtir 893ed416f1
Disable citus.enable_non_colocated_router_query_pushdown by default (#6909)
Fixes #6779.

DESCRIPTION: Disables citus.enable_non_colocated_router_query_pushdown
GUC by default to ensure generating a consistent distributed plan for
the queries that reference non-colocated distributed tables

We already have tests for the cases where this GUC is disabled,
so I'm not adding any more tests in this PR.

Also make multi_insert_select_window idempotent.

Related to: #6793
2023-05-15 12:07:50 +03:00
Jelte Fennema 07b8cd2634
Forward to existing emit_log_hook in our log hook (#6877)
DESCRIPTION: Forward to existing emit_log_hook in our log hook

This makes us work better with other extensions installed in Postgres.
Without this change we would overwrite their emit_log_hook, causing it
to never be called.

Fixes #6874
2023-05-09 16:55:56 +02:00
Ivan Kush e3c6b8a10e
Fix flaky columnar_permissions test (#6913)
As attr_num isn't ordered, the output order may be random and the
regression test may fail.
This MR adds attr_num to the ORDER BY.


```
--- /build/contrib/citus/src/test/regress/expected/columnar_permissions.out.modified    2023-05-05 11:13:44.926085432 +0000
+++ /build/contrib/citus/src/test/regress/results/columnar_permissions.out.modified 2023-05-05 11:13:44.934085414 +0000
@@ -124,24 +124,24 @@
   from columnar.chunk
   where relation in ('no_access'::regclass, 'columnar_permissions'::regclass)
   order by relation, stripe_num;
        relation       | stripe_num | attr_num | chunk_group_num | value_count
 ----------------------+------------+----------+-----------------+-------------
  no_access            |          1 |        1 |               0 |           1
  no_access            |          2 |        1 |               0 |           1
  no_access            |          3 |        1 |               0 |           1
  columnar_permissions |          1 |        1 |               0 |           1
  columnar_permissions |          1 |        2 |               0 |           1
- columnar_permissions |          2 |        1 |               0 |           1
  columnar_permissions |          2 |        2 |               0 |           1
- columnar_permissions |          3 |        1 |               0 |           1
+ columnar_permissions |          2 |        1 |               0 |           1
  columnar_permissions |          3 |        2 |               0 |           1
+ columnar_permissions |          3 |        1 |               0 |           1
  columnar_permissions |          4 |        1 |               0 |           1
  columnar_permissions |          4 |        2 |               0 |           1
 (11 rows)
```

Co-authored-by: Ivan Kush <ivan.kush@tantorlabs.ru>
2023-05-09 12:42:37 +02:00
Hanefi Onaldi 06e6f8e428
Normalize columnar version in tests (#6917)
When we bump the columnar version, some tests fail because of the output
change. Instead of changing those lines every time, I think it is better
to normalize it in the tests.
2023-05-08 16:10:55 +03:00
Naisila Puka 905fd46410
Fixes flakiness in background_rebalance_parallel test (#6910)
Fixes the following flaky outputs by decreasing the citus_task_wait loop
interval and changing the order of wait commands.

https://app.circleci.com/pipelines/github/citusdata/citus/32102/workflows/19958297-6c7e-49ef-9bc2-8efe8aacb96f/jobs/1089589

``` diff
SELECT job_id, task_id, status, nodes_involved
 FROM pg_dist_background_task WHERE job_id in (:job_id) ORDER BY task_id;
  job_id | task_id |  status  | nodes_involved 
 --------+---------+----------+----------------
   17779 |    1013 | done     | {50,56}
   17779 |    1014 | running  | {50,57}
-  17779 |    1015 | running  | {50,56}
-  17779 |    1016 | blocked  | {50,57}
+  17779 |    1015 | done     | {50,56}
+  17779 |    1016 | running  | {50,57}
   17779 |    1017 | runnable | {50,56}
   17779 |    1018 | blocked  | {50,57}
   17779 |    1019 | runnable | {50,56}
   17779 |    1020 | blocked  | {50,57}
 (8 rows)
```

https://github.com/citusdata/citus/pull/6893#issuecomment-1525661408
```diff
SELECT job_id, task_id, status, nodes_involved
 FROM pg_dist_background_task WHERE job_id in (:job_id) ORDER BY task_id;
  job_id | task_id |  status  | nodes_involved 
 --------+---------+----------+----------------
   17779 |    1013 | done     | {50,56}
-  17779 |    1014 | running  | {50,57}
+  17779 |    1014 | runnable | {50,57}
   17779 |    1015 | running  | {50,56}
   17779 |    1016 | blocked  | {50,57}
   17779 |    1017 | runnable | {50,56}
   17779 |    1018 | blocked  | {50,57}
   17779 |    1019 | runnable | {50,56}
   17779 |    1020 | blocked  | {50,57}
 (8 rows)
```
2023-05-05 16:47:01 +03:00
Hanefi Onaldi 3217e3f181
Fix flaky background rebalance parallel test (#6893)
A test in background_rebalance_parallel.sql was failing intermittently
because the order of tasks in the output was not deterministic. This
commit fixes the test by removing the id columns for the background
tasks from the output.

A sample failing diff before this patch is below:

```diff
 SELECT D.task_id,
        (SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.task_id),
        D.depends_on,
        (SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.depends_on)
 FROM pg_dist_background_task_depend D  WHERE job_id in (:job_id) ORDER BY D.task_id, D.depends_on ASC;
  task_id |                               command                               | depends_on |                               command
 ---------+---------------------------------------------------------------------+------------+---------------------------------------------------------------------
-    1014 | SELECT pg_catalog.citus_move_shard_placement(85674026,50,57,'auto') |       1013 | SELECT pg_catalog.citus_move_shard_placement(85674025,50,56,'auto')
-    1016 | SELECT pg_catalog.citus_move_shard_placement(85674032,50,57,'auto') |       1015 | SELECT pg_catalog.citus_move_shard_placement(85674031,50,56,'auto')
-    1018 | SELECT pg_catalog.citus_move_shard_placement(85674038,50,57,'auto') |       1017 | SELECT pg_catalog.citus_move_shard_placement(85674037,50,56,'auto')
-    1020 | SELECT pg_catalog.citus_move_shard_placement(85674044,50,57,'auto') |       1019 | SELECT pg_catalog.citus_move_shard_placement(85674043,50,56,'auto')
+    1014 | SELECT pg_catalog.citus_move_shard_placement(85674038,50,57,'auto') |       1013 | SELECT pg_catalog.citus_move_shard_placement(85674037,50,56,'auto')
+    1016 | SELECT pg_catalog.citus_move_shard_placement(85674044,50,57,'auto') |       1015 | SELECT pg_catalog.citus_move_shard_placement(85674043,50,56,'auto')
+    1018 | SELECT pg_catalog.citus_move_shard_placement(85674026,50,57,'auto') |       1017 | SELECT pg_catalog.citus_move_shard_placement(85674025,50,56,'auto')
+    1020 | SELECT pg_catalog.citus_move_shard_placement(85674032,50,57,'auto') |       1019 | SELECT pg_catalog.citus_move_shard_placement(85674031,50,56,'auto')
 (4 rows)
```

Notice that the dependent and depended-on tasks have the same commands,
but they have different task ids.
2023-05-05 12:07:46 +03:00
Teja Mupparti b58665773b Move all pre-15-defined routines to the bottom of the file 2023-05-04 10:07:08 -07:00
Naisila Puka 072ae44742
Adjusts query's CoerceViaIO & RelabelType nodes that are improper for deparsing (#6391)
Adjusts query's CoerceViaIO & RelabelType nodes that are
improper for deparsing

The standard planner converts some `::text` casts to `::cstring`, and
here we convert them back because `cstring` is a pseudotype and cannot be
cast to most types. This problem occurs in CoerceViaIO nodes.
There was another problem with RelabelType nodes, fixed in the following
PR:
https://github.com/citusdata/citus/pull/4580
We undo the changes in that PR, and fix both CoerceViaIO and RelabelType
nodes in the planning phase (not in the deparsing phase in ruleutils)

Fixes https://github.com/citusdata/citus/issues/5646
Fixes https://github.com/citusdata/citus/issues/5033
Fixes https://github.com/citusdata/citus/issues/6061
2023-05-04 16:46:02 +03:00
Ahmet Gedemenli 4321286005 Disable master_create_empty_shard udf for single shard tables (#6902) 2023-05-03 17:02:43 +03:00
Onur Tirtir db2514ef78 Call null-shard-key tables as single-shard distributed tables in code 2023-05-03 17:02:43 +03:00
Onur Tirtir 39b7711527 Add support for more pushable / non-pushable insert .. select queries with null-shard-key tables (#6823)
* Add support for dist insert select by selecting from a reference
table.
  
  This was the only pushable insert .. select case that
  #6773 didn't cover.

* For the cases where we insert into a Citus table but the INSERT ..
SELECT
  query cannot be pushed down, allow pull-to-coordinator when possible.

  Remove the checks that we had at the very beginning of
  CreateInsertSelectPlanInternal so that we can try insert .. select via
  pull-to-coordinator for the cases where we cannot push down the insert
  .. select query. What we support via pull-to-coordinator is still
  limited due to the lack of logical planner support for SELECT queries,
  but this commit at least allows using pull-to-coordinator for the cases
  where the select query can be planned via the router planner, without
  limiting ourselves to restrictive top-level checks.

  Also introduce some additional restrictions into
CreateDistributedInsertSelectPlan for the cases it was missing to check
  for null-shard-key tables. Indeed, it would make more sense to have
those checks for distributed tables in general, via separate PRs against
  main branch. See https://github.com/citusdata/citus/pull/6817.

* Add support for inserting into a Postgres table.
2023-05-03 16:24:20 +03:00
Onur Tirtir 85745b46d5 Add initial sql support for distributed tables that don't have a shard key (#6773/#6822)
Enable router planner and a limited version of INSERT .. SELECT planner
for the queries that reference colocated null shard key tables.

* SELECT / UPDATE / DELETE / MERGE is supported as long as it's a router
query.
* INSERT .. SELECT is supported as long as it only references colocated
  null shard key tables.

Note that this is not only limited to distributed INSERT .. SELECT but
also covers a limited set of query types that require pull-to-coordinator,
e.g., due to a LIMIT clause, generate_series() etc.
(Ideally distributed INSERT .. SELECT could handle such queries too,
e.g., when we're only referencing tables that don't have a shard key,
but today this is not the case. See
https://github.com/citusdata/citus/pull/6773#discussion_r1140130562.)
Onur Tirtir ac0ffc9839 Add a config for arbitrary config tests where all the tables are null-shard-key tables (#6783/#6788) 2023-05-03 16:18:27 +03:00
Ahmet Gedemenli cdf54ff4b1 Add DDL support for null-shard-key tables (#6778/#6784/#6787/#6859)
Add tests for ddl coverage:
* indexes
* partitioned tables + indexes with long names
* triggers
* foreign keys
* statistics
* grant & revoke statements
* truncate & vacuum
* create/test/drop view that depends on a dist table with no shard key
* policy & rls test

* alter table add/drop/alter_type column (using sequences/different data
  types/identity columns)
* alter table add constraint (not null, check, exclusion constraint)
* alter table add column with a default value / set default / drop
  default
* alter table set option (autovacuum)

* indexes / constraints without names
* multiple subcommands

Adds support for:
* Creating new partitions after distributing (with a null key) the parent
table
* Attaching partitions to a distributed table with a null distribution key
(and automatically distributing the new partition with a null key as well)
* Detaching partitions from it
2023-05-03 16:18:27 +03:00
Onur Tirtir fa467e05e7 Add support for creating distributed tables with a null shard key (#6745)
With this PR, we allow creating distributed tables without
specifying a shard key via create_distributed_table(). Here are the
important details about those tables:
* Specifying `shard_count` is not allowed because it is assumed to be 1.
* We mostly call such tables "null shard-key" tables in code / comments.
* To avoid a breaking layout change in create_distributed_table(),
instead of throwing an error it will inform the user that the
`distribution_type` param is ignored unless it's explicitly set to NULL
or 'h'.
* The `colocate_with` param allows colocating such null shard-key tables
  with each other.
* We define this table type, i.e., NULL_SHARD_KEY_TABLE, as a subclass
of
  DISTRIBUTED_TABLE because we mostly want to treat them as distributed
  tables in terms of SQL / DDL / operation support.
* Metadata for such tables look like:
  - distribution method => DISTRIBUTE_BY_NONE
  - replication model => REPLICATION_MODEL_STREAMING
- colocation id => **!=** INVALID_COLOCATION_ID (distinguishes from
Citus local tables)
* We assign colocation groups for such tables to different nodes in a
  round-robin fashion based on the modulo of "colocation id".

Note that this PR doesn't care about DDL (except CREATE TABLE) / SQL /
operation (i.e., Citus UDFs) support for such tables but adds a
preliminary
API.
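
A sketch of the API per the details above (table names hypothetical):

```sql
CREATE TABLE events (id bigint, payload jsonb);

-- No shard key: pass NULL as the distribution column.
SELECT create_distributed_table('events', NULL);

-- Colocate another null shard-key table with it.
CREATE TABLE event_metadata (id bigint, note text);
SELECT create_distributed_table('event_metadata', NULL, colocate_with => 'events');
```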
2023-05-03 16:18:27 +03:00
aykut-bozkurt 2d005ac777
Query Generator Seed (#6883)
- Give a seed number as an argument to the query generator to reproduce a
previous run.
- Expose the difference between results, if any, as an artifact on CI.
2023-05-03 15:54:11 +03:00
Teja Mupparti e444dd4f3f MERGE: Support reference table as source with local table as target 2023-05-02 11:37:29 -07:00
Hanefi Onaldi efd41e8ea5
Bump columnar to 11.3 (#6898)
When working on changelog, Marco suggested in
https://github.com/citusdata/citus/pull/6856#pullrequestreview-1386601215
that we should bump columnar version to 11.3 as well.

This PR aims to contain all the necessary changes to allow upgrades to
and downgrades from 11.3.0 for columnar. Note that updating citus
extension version does not affect columnar as the two extension versions
are not really coupled.

The same changes will also be applied to the release branch in
https://github.com/citusdata/citus/pull/6897
2023-05-02 11:58:32 +03:00
Ahmet Gedemenli 59ccf364df
Ignore nodes not allowed for shards, when planning rebalance steps (#6887)
We handle colocation groups whose shard group count is less than the
worker node count using a method different from the usual rebalancer.
See #6739
While deciding whether to use this method, we should've
ignored the nodes that are marked `shouldhaveshards = false`. This PR
excludes those nodes when making the decision.

Adds a test such that:
 coordinator: []
 worker 1: [1_1, 1_2]
 worker 2: [2_1, 2_2]
(rebalance)
 coordinator: []
 worker 1: [1_1, 2_1]
 worker 2: [1_2, 2_2]

If we take the coordinator into account, the rebalancer considers the
first state as balanced and does nothing (because shard_count <
worker_count).
But with this PR, we ignore the coordinator because it has
shouldhaveshards = false,
so the rebalancer distributes each colocation group to both workers
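
The node property referenced above is set via `citus_set_node_property`; a sketch (host and port hypothetical):

```sql
-- Exclude the coordinator from shard placement decisions.
SELECT citus_set_node_property('coordinator-host', 5432, 'shouldhaveshards', false);
SELECT rebalance_table_shards();
```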

Also, fixes an unrelated flaky test in the same file
2023-05-01 12:21:08 +02:00
aykut-bozkurt 8cb69cfd13
break sequence dependency during table creation (#6889)
We need to break sequence dependency for a table while creating the
table during non-transactional metadata sync to ensure idempotency of
the creation of the table.

**Problem:**
When we send `SELECT
pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text)
FROM pg_dist_partition` to workers during the non-transactional sync,
the table might not be in `pg_dist_partition` on the worker, so the
sequence dependency is not broken there.

**Solution:** 
We break sequence dependency via `SELECT
pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text)`
for each table while creating it at the workers. It is safe to send
since the udf is a no-op when there is no sequence dependency.

DESCRIPTION: Fixes a bug related to sequence idempotency at
non-transactional sync.

Fixes https://github.com/citusdata/citus/issues/6888.
2023-04-28 15:09:09 +03:00
aykut-bozkurt a7fa1db696
fix flaky test regex (#6890)
There was a bug related to a regex. We sometimes caught the wrong line
when the test name was also included in a comment.
Example: we caught the wrong line because multi_metadata_sync is included
in the comment before the test line.

```
# ----------
# multi_metadata_sync tests the propagation of mx-related metadata changes to metadata workers
# multi_unsupported_worker_operations tests that unsupported operations error out on metadata workers
# ----------
test: multi_metadata_sync
```

Solution: Restrict regex rule better.
2023-04-27 13:14:40 +03:00
Jelte Fennema a5f4fece13
Fix running PG upgrade tests with run_test.py (#6829)
In #6814 we started using the Python test runner for upgrade tests in
run_test.py, instead of the Perl based one. This had a problem though,
not all tests in minimal_schedule can be run with the Python runner.
This adds a separate minimal schedule for the pg_upgrade tests which
doesn't include the tests that break with the Python runner.

This PR also fixes various other issues that came up while testing
the upgrade tests.
2023-04-24 15:54:32 +02:00
aykut-bozkurt a6a7271e63
Query generator test tool (#6686)
- Query generator is used to create queries, allowed by the grammar which is documented at `query_generator/query_gen.py` (currently contains only joins). 
- This PR adds a CI test which utilizes the query generator to compare the results of generated queries that are executed on Citus tables and local (undistributed) tables. It fails if there is an unexpected error at results. The error can be related to Citus, the query generator, or even Postgres.
- The tool is configured by the file `query_generator/config/config.yaml`, which limits table counts at generated queries and sets many table related parameters (e.g. row count).
- Run time of the CI task can be configured from the config file. By default, we run 250 queries with maximum table count of 40 inside each query.
2023-04-23 20:28:26 +03:00
aykut-bozkurt 08e2820c67
skip restriction clause if it contains placeholdervar (#6857)
A `PlaceHolderVar` is not relevant to process inside a restriction
clause; otherwise, `pull_var_clause_default` would throw an error. PG
would create the restriction on the physical `Var` that the
`PlaceHolderVar` points to anyway, so it is safe to skip this restriction.

DESCRIPTION: Fixes a bug related to WHERE clause list which contains
placeholder.

Fixes https://github.com/citusdata/citus/issues/6758
2023-04-17 18:14:01 +03:00
Emel Şimşek 2675a68218
Make coordinator always in metadata by default in regression tests. (#6847)
DESCRIPTION: Changes the regression test setups adding the coordinator
to metadata by default.

When creating a Citus cluster, the coordinator can be added to metadata
explicitly by running the `citus_set_coordinator_host` function. Adding
the coordinator to metadata allows creating Citus managed local tables.
Other Citus functionality is expected to be unaffected.
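
A sketch of the call the test setups now issue (host and port hypothetical):

```sql
-- Add the coordinator to metadata; Citus managed local tables can then be created.
SELECT citus_set_coordinator_host('localhost', 57636);
CREATE TABLE local_tab (a int);
SELECT citus_add_local_table_to_metadata('local_tab');
```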

This change adds the coordinator to metadata by default when creating
test clusters in regression tests.

There are 3 ways to run commands in a sql file (or a schedule which is a
sequence of sql files) with Citus regression tests. Below is how this PR
adds the coordinator to metadata for each.

1. `make <schedule_name>`
Changed the sql files (sql/multi_cluster_management.sql and
sql/minimal_cluster_management.sql) which sets up the test clusters such
that they call `citus_set_coordinator_host`. This ensures any following
tests will have the coordinator in metadata by default.
 
2. `citus_tests/run_test.py <sql_file_name>`
Changed the python code that sets up the cluster to always call
`citus_set_coordinator_host`.
For the upgrade tests, a version check is included to make sure the
`citus_set_coordinator_host` function is available for a given version.

3. `make check-arbitrary-configs`
Changed the Python code that sets up the cluster to always call
`citus_set_coordinator_host`.

#6864 will be used to track the remaining work which is to change the
tests where coordinator is added/removed as a node.
2023-04-17 14:14:37 +03:00
Gokhan Gulbiz 8782ea1582
Ensure partitionKeyValue and colocationId are set for proper tenant stats gathering (#6834)
This PR updates the tenant stats implementation to set partitionKeyValue
and colocationId in ExecuteLocalTaskListExtended, in addition to
LocallyExecuteTaskPlan. This ensures that tenant stats can be properly
gathered regardless of the code path taken. The changes were initially
made while testing stored procedure calls for tenant stats.
2023-04-17 09:35:26 +03:00
Onur Tirtir f87a2d02b0
Move the common logic related to creating a Citus table down to CreateCitusTable (#6836)
.. rather than having it in user facing functions. That way, we
can use the same logic for creating Citus tables from other places
too.

This would be useful for creating tenant tables via a simple function
call in the utility hook, for schema-based sharding purposes.
2023-04-14 16:13:39 +03:00
aykut-bozkurt 3286ec59e9
fix 3 flaky tests in failure schedule (#6846)
Fixed 3 flaky failure tests that caused flakiness in other tests due to
node and group sequence ids changing during node addition and removal.
2023-04-13 13:13:28 +03:00
Halil Ozan Akgül 9ba70696f7
Add CPU usage to citus_stat_tenants (#6844)
This PR adds CPU usage to `citus_stat_tenants` monitor.
CPU usage is tracked in periods, similar to query counts.
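
A minimal sketch of reading the new column (the exact column names are assumptions based on the description above and may differ per version):

```sql
-- inspect per-tenant CPU usage alongside query counts
SELECT tenant_attribute,
       query_count_in_this_period,
       cpu_usage_in_this_period
FROM citus_stat_tenants;
```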
2023-04-12 16:23:00 +03:00
Emel Şimşek e7a25d82c9
When creating a HTAB we need to use the HASH_COMPARE flag in order to set a user-defined comparison function. (#6845)
DESCRIPTION: Fixes memory errors, caught by valgrind, of type
"conditional jump or move depends on uninitialized value"

When running Citus tests under Postgres with valgrind, the test cases
calling into `NonBlockingShardSplit` function produce valgrind errors of
type "conditional jump or move depends on uninitialized value".

The issue was caused by creating the HTAB incorrectly: the HASH_COMPARE
flag should have been used when creating a HTAB with a user-defined
comparison function. In the absence of the HASH_COMPARE flag, the HTAB
falls back to the built-in string comparison function. However, valgrind
somehow discovers that the match function is not assigned to the
user-defined function as intended.

Fixes #6835
2023-04-11 21:24:33 +03:00
Halil Ozan Akgül 8b50e95dc8
Fix citus_stat_tenants period updating bug (#6833)
Fixes the bug that causes updating the citus_stat_tenants periods
incorrectly.

`TimestampDifferenceExceeds` expects the difference in milliseconds, but
it was given microseconds; this is fixed.
`tenantStats->lastQueryTime` was also updated during monitoring; now
it's updated only when there are tenant queries.
2023-04-11 17:40:07 +03:00
aykut-bozkurt a20f7e1a55
fixes update propagation bug when `citus_set_coordinator_host` is called more than once (#6837)
DESCRIPTION: Fixes update propagation bug when
`citus_set_coordinator_host` is called more than once.

Fixes https://github.com/citusdata/citus/issues/6731.
2023-04-11 11:27:16 +03:00
Onur Tirtir 0194657c5d
Bump Citus to 12.0devel (#6840) 2023-04-10 12:05:18 +03:00
rajeshkt78 29c8d9633a
Makefile changes to build CDC in builddir for pgoutput and wal2json. (#6827)
DESCRIPTION: 

Makefile changes to build different versions of the CDC decoder for different base decoders (like pgoutput and wal2json) under the same name, and to copy them to the $packagelib/cdc_decoders dir. This lets the user use logical replication slots normally with pgoutput without being aware of the CDC decoder.

1) Changed src/backend/distributed/cdc/Makefile to set up a build directory
for CDC in the build-cdc-$(DECODER) dir, copy the source files (.c, .h and Makefile.decoder) to
the build dir, and build it for each base decoder.

2) Copy the pgoutput.so and wal2json.so into the above build dir and
install them in the PG packagelibdir/citus_decoders directory.

3) Added a testcase 016_cdc_wal2json.pl for testing the wal2json decoder
using the pg_recv_logical_changes function.
2023-04-06 17:03:12 +05:30
Naisila Puka 84f2d8685a
Adds control for background task executors involving a node (#6771)
DESCRIPTION: Adds control for background task executors involving a node

### Background and motivation

Nonblocking concurrent task execution via background workers was
introduced in [#6459](https://github.com/citusdata/citus/pull/6459), and
concurrent shard moves in the background rebalancer were introduced in
[#6756](https://github.com/citusdata/citus/pull/6756) - with a hard
dependency that limits to 1 shard move per node. As we know, a shard
move consists of a shard moving from a source node to a target node. The
hard dependency was used because the background task runner didn't have
an option to limit the parallel shard moves per node.

With the motivation of controlling the number of concurrent shard
moves that involve a particular node, either as source or target, this
PR introduces a general new GUC
citus.max_background_task_executors_per_node to be used in the
background task runner infrastructure. So, why do we even want to
control and limit the concurrency? Well, it's all about resource
availability: because the moves involve the same nodes, extra
parallelism won’t make the rebalance complete faster if some resource is
already maxed out (usually cpu or disk). Or, if the cluster is being
used in a production setting, the moves might compete for resources with
production queries much more than if they had been executed
sequentially.

### How does it work?

A new column named nodes_involved is added to the catalog table that
keeps track of the scheduled background tasks,
pg_dist_background_task. It is of type integer[] - to store a list
of node ids. It is NULL by default - the column will be filled by the
rebalancer, but we may not care about the nodes involved in other uses
of the background task runner.

Table "pg_catalog.pg_dist_background_task"

     Column     |           Type           
============================================
 job_id         | bigint
 task_id        | bigint
 owner          | regrole
 pid            | integer
 status         | citus_task_status
 command        | text
 retry_count    | integer
 not_before     | timestamp with time zone
 message        | text
+nodes_involved | integer[]

A hashtable named ParallelTasksPerNode keeps track of the number of
parallel running background tasks per node. An entry in the hashtable is
as follows:

ParallelTasksPerNodeEntry
{
	node_id // The node is used as the hash table key 
	counter // Number of concurrent background tasks that involve node node_id
                // The counter limit is citus.max_background_task_executors_per_node
}

When the background task runner assigns a runnable task to a new
executor, it increments the counter for each of the nodes involved with
that runnable task. The limit of each counter is
citus.max_background_task_executors_per_node. If the limit is reached
for any of the nodes involved, this runnable task is skipped. And then,
later, when the running task finishes, the background task runner
decrements the counter for each of the nodes involved with the done
task. The following functions take care of these increment-decrement
steps:

IncrementParallelTaskCountForNodesInvolved(task)
DecrementParallelTaskCountForNodesInvolved(task)

citus.max_background_task_executors_per_node can be changed on the
fly. In the background rebalancer, we simply give {source_node,
target_node} as the nodesInvolved input to the
ScheduleBackgroundTask function. The rest is taken care of by the
general background task runner infrastructure explained above. Check
background_task_queue_monitor.sql and
background_rebalance_parallel.sql tests for detailed examples.
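
A minimal sketch of tuning the limit at runtime (the value 2 is an arbitrary illustration):

```sql
-- cap the number of concurrent background task executors that involve
-- any single node; the GUC can be changed on the fly
ALTER SYSTEM SET citus.max_background_task_executors_per_node = 2;
SELECT pg_reload_conf();
```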

#### Note

This PR also adds a hard node dependency if a node is first being used
as a source for a move, and then later as a target. The reason this
should be a hard dependency is that the first move might make space for
the second move. So, we could run out of disk space (or at least
overload the node) if we move the second shard to it before the first
one is moved away.

Fixes https://github.com/citusdata/citus/issues/6716
2023-04-06 14:12:39 +03:00
Gokhan Gulbiz fa00fc6e3e
Add upgrade/downgrade paths between v11.2.2 and v11.3.1 (#6820)

---------

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
2023-04-06 12:46:09 +03:00
Ahmet Gedemenli 83a2cfbfcf
Move cleanup record test to upgrade schedule (#6794)
DESCRIPTION: Move cleanup record test to upgrade schedule
2023-04-06 11:42:49 +03:00
Naisila Puka fc479bfa49
Fixes flakiness in multi_metadata_sync test (#6824)
Fixes flakiness in multi_metadata_sync test


https://app.circleci.com/pipelines/github/citusdata/citus/31863/workflows/ea937480-a4cc-4646-815c-bb2634361d98/jobs/1074457
```diff
SELECT
 	logicalrelid, repmodel
 FROM
 	pg_dist_partition
 WHERE
 	logicalrelid = 'mx_test_schema_1.mx_table_1'::regclass
 	OR logicalrelid = 'mx_test_schema_2.mx_table_2'::regclass;
         logicalrelid         | repmodel 
 -----------------------------+----------
- mx_test_schema_1.mx_table_1 | s
  mx_test_schema_2.mx_table_2 | s
+ mx_test_schema_1.mx_table_1 | s
 (2 rows)
```
This is a simple issue of missing `ORDER BY` clauses. I went ahead and
added some other missing ones in the same file as well. Also, I replaced
existing `ORDER BY logicalrelid` with `ORDER BY logicalrelid::text`, in
order to compare names, not OIDs.
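
A minimal sketch of the stabilized query shape:

```sql
-- order by the relation name as text, not by OID, so the output stays
-- stable across runs that create the relations in different orders
SELECT logicalrelid, repmodel
FROM pg_dist_partition
ORDER BY logicalrelid::text;
```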
2023-04-06 11:19:32 +03:00
Halil Ozan Akgül 52ad2d08c7
Multi tenant monitoring (#6725)
DESCRIPTION: Adds views that monitor statistics on tenant usages

This PR adds `citus_stats_tenants` view that monitors the tenants on the
cluster.

`citus_stats_tenants` shows the node id, colocation id, tenant
attribute, read count in this period and last period, and query count in
this period and last period of the tenant.
Tenant attribute currently is the tenant's distribution column value,
later when schema based sharding is introduced, this meaning might
change.
A period is a time bucket the queries are counted by. Read and query
counts for this period can increase until the current period ends. After
that those counts are moved to last period's counts, which cannot
change. The period length can be set using 'citus.stats_tenants_period'.

`SELECT` queries are counted as _read_ queries, `INSERT`, `UPDATE` and
`DELETE` queries are counted as _write_ queries. So in the view read
counts are `SELECT` counts and query counts are `SELECT`, `INSERT`,
`UPDATE` and `DELETE` counts.

The data is stored in shared memory, in a struct named
`MultiTenantMonitor`.

`citus_stats_tenants` shows the data from local tenants.

`citus_stats_tenants` shows up to `citus.stats_tenant_limit` tenants.
The tenants are scored based on the number of queries they run and the
recency of those queries. Every query run increases the score of the
tenant by `ONE_QUERY_SCORE`, and after every period ends the scores are
halved. Halving is done lazily.
To retain information longer, the monitor keeps up to 3 times
`citus.stats_tenant_limit` tenants. When the tenant count hits `3 *
citus.stats_tenant_limit`, the last `citus.stats_tenant_limit` tenants
are removed. To see all stored tenants you can use
`citus_stats_tenants(return_all_tenants := true)`.
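
A minimal usage sketch, assuming the GUC and view names as described above (some were renamed in the follow-up items below):

```sql
-- set the period length used for bucketing the query counts
SET citus.stats_tenants_period = 60;
-- show the top tenants tracked on this node
SELECT * FROM citus_stats_tenants;
-- include all stored tenants, not just the top-scored ones
SELECT * FROM citus_stats_tenants(return_all_tenants := true);
```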

- [x] Create collector view that gets data from all nodes. #6761 
- [x] Add monitoring log #6762 
- [x] Create enable/disable GUC #6769 
- [x] Parse the annotation string correctly #6796 
- [x] Add local queries and prepared statements #6797
- [x] Rename to citus_stat_statements #6821 
- [x] Run pgbench
- [x] Fix role permissions #6812

---------

Co-authored-by: Gokhan Gulbiz <ggulbiz@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-04-05 17:44:17 +03:00
Jelte Fennema d04d32b314
In run_test.py actually return worker_count (#6830)
Fixes a small mistake that was missed in the refactor of run_test.py
that was done in #6816.
2023-04-05 16:38:57 +03:00
Naisila Puka eda3cc418a
Fixes flakiness in multi_cluster_management test (#6825)
Fixes flakiness in multi_cluster_management test

https://app.circleci.com/pipelines/github/citusdata/citus/31816/workflows/2f455a30-1c0b-4b21-9831-f7cf2169df5a/jobs/1071444
```diff
SELECT public.wait_until_metadata_sync();
+WARNING:  waiting for metadata sync timed out
  wait_until_metadata_sync 
 --------------------------
  
 (1 row)
```
Default timeout value is 15000. I increased it to 60000.
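
A minimal sketch, assuming the helper takes its timeout argument in milliseconds:

```sql
-- wait up to 60000 ms for metadata sync instead of the 15000 ms default
SELECT public.wait_until_metadata_sync(60000);
```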
2023-04-05 15:50:22 +03:00
Jelte Fennema e5e5eb35c7
Refactor run_test.py (#6816)
Over the last few months run_test.py got more and more complex. This
refactors the code in `run_test.py` to be better understandable. Mostly
this splits up separate pieces of logic into separate functions.
2023-04-05 11:11:30 +02:00
Onur Tirtir d4f9de7875
Explicitly disallow local rels when inserting into dist table (#6817) 2023-04-04 17:46:43 +02:00
Jelte Fennema dcee370270
Fix flakyness in citus_split_shard_by_split_points_deferred_drop (#6819)
In CI we would sometimes get this failure:
```diff
 -- The original shard is marked for deferred drop with policy_type = 2.
 -- The previous shard should be dropped at the beginning of the second split call
 SELECT * from pg_dist_cleanup;
  record_id | operation_id | object_type |                               object_name                                | node_group_id | policy_type
 -----------+--------------+-------------+--------------------------------------------------------------------------+---------------+-------------
+        60 |          778 |           3 | citus_shard_split_slot_18_21216_778                                      |            16 |           0
        512 |          778 |           1 | citus_split_shard_by_split_points_deferred_schema.table_to_split_8981001 |            16 |           2
-(1 row)
+(2 rows)
```

Replication slots sometimes cannot be deleted right away, which is hard
to resolve; luckily we can filter these cleanup records out easily
by filtering on policy_type.

While debugging this issue I learnt that we did not use
`GetNextCleanupRecordId` in all places where we created cleanup
records. This caused test failures when running tests multiple times,
when they set `citus.next_cleanup_record_id`. I tried fixing that by
calling GetNextCleanupRecordId in all places, but that caused many
other tests to fail due to deadlocks. So, instead this addresses
that issue by using `ALTER SEQUENCE ... RESTART` instead of
`citus.next_cleanup_record_id`. In a follow up PR we should
probably get rid of `citus.next_cleanup_record_id`, since it's
only used in one other file.
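
A minimal sketch of both workarounds (the sequence name below is an assumption for illustration):

```sql
-- keep only deferred-drop records; lingering replication-slot records
-- have a different policy_type and are filtered out
SELECT * FROM pg_dist_cleanup WHERE policy_type = 2;

-- restart the record id sequence rather than setting
-- citus.next_cleanup_record_id
ALTER SEQUENCE pg_dist_cleanup_recordid_seq RESTART 512;
```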
2023-04-04 09:45:48 +02:00
Marco Slot 7c0589abb8
Do not override combinefunc of custom aggregates with common names (#6805)
DESCRIPTION: Fix an issue that caused some queries with custom
aggregates to fail

While playing around with https://github.com/pgvector/pgvector I noticed
that the AVG query was broken. That's because we treat it like any other
AVG by breaking it down into SUM and COUNT; but there are no SUM/COUNT
functions in this case, while there is a perfectly usable combinefunc.
This PR changes our aggregate logic to prefer custom aggregates with a
combinefunc even if they have a common name.
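
A minimal sketch of such an aggregate; all support function names here are hypothetical stand-ins, not pgvector's actual functions:

```sql
-- an "avg" with a combinefunc but no SUM/COUNT pair; the aggregate
-- logic now prefers the combinefunc over SUM/COUNT rewriting
CREATE AGGREGATE avg(float8[]) (
    SFUNC       = my_avg_accum,    -- hypothetical transition function
    STYPE       = float8[],
    COMBINEFUNC = my_avg_combine,  -- hypothetical combine function
    FINALFUNC   = my_avg_final     -- hypothetical final function
);
```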

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-04-03 19:43:09 +02:00
rajeshkt78 d5df892394
Make CDC decoder an independent extension (#6810)
DESCRIPTION: 

- The CDC decoder is refactored into a separate extension that can be loaded dynamically without having to reload Citus.
- The CDC decoder code can be compiled using the DECODER flag to work with different decoders like pgoutput and wal2json;
   by default the base decoder is "pgoutput".
- The dynamic_library_path config is adjusted dynamically to prefer the decoders in the cdc_decoders directory during Citus init,
  so that users can use the replication subscription commands without having to make any config changes.
2023-04-03 21:32:15 +05:30
Ahmet Gedemenli 697bb55fc5
Refactor shard transfers (#6631)
DESCRIPTION: Refactor and unify shard move and copy functions

Shard move and copy functions share a lot of code in common. This PR
unifies these functions into one, along with some helper functions. To
preserve the current behavior, we'll introduce and use an enum
parameter, and hardcoded strings for producing error/warning messages.
2023-04-03 10:43:54 +03:00
Jelte Fennema 92b358fe0a
Make python-regress based tests runnable with run_test.py (#6814)
For some tests such as upgrade tests and arbitrary config tests we set
up the citus cluster using Python. This setup is slightly different from
the perl based setup script (`multi_regress.pl`). Most importantly it
uses replication factor 1 by default.

This changes our run_test.py script to be able to run a schedule using
python instead of `multi_regress.pl`, for the tests that require it.

For now arbitrary config tests are still not runnable with
`run_test.py`, but this brings us one step closer to being able to do
that.

Fixes #6804
2023-03-31 17:07:12 +02:00
Marco Slot 343d1c5072
Refactor executor utility functions into multiple files (#6593)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-03-31 13:07:48 +02:00
Jelte Fennema 085b59f586
Fix flaky multi_mx_schema_support test (#6813)
This happened sometimes:
```diff
 SELECT objid::oid::regnamespace as "Distributed Schemas"
     FROM pg_catalog.pg_dist_object
     WHERE objid::oid::regnamespace IN ('mx_old_schema', 'mx_new_schema');
  Distributed Schemas
 ---------------------
- mx_old_schema
  mx_new_schema
+ mx_old_schema
 (2 rows)
```

Source:
https://app.circleci.com/pipelines/github/citusdata/citus/31706/workflows/edc84a6a-dfef-42b3-ab5c-54daa64c2154/jobs/1065463

In passing make multi_mx_schema_support runnable with run_test.py
2023-03-31 12:36:53 +02:00
Jelte Fennema 7b60cdd13b
Convert columnar tap tests to pytest (#6720)
Having as little Perl as possible in our repo seems a worthy goal. Sadly,
Postgres's Perl-based TAP infrastructure was the only way in which we
could run tests that were hard to do using only SQL commands. This change
adds infrastructure to run such "application style tests" using Python
and converts all our existing Perl TAP tests to this new infrastructure.

Some of the helper functions that are added in this PR are currently
unused. Most of these will be used by the CDC PR that depends on this.
Some others are there because they were needed by the PgBouncer test
framework that this is based on, and the functions seemed useful enough
to citus testing to keep.

The main features of the test suite are:
1. Application style tests using a programming language that our
developers know how to write.
2. Caching of Citus clusters in-between tests using the ["fixture"
pattern][fixture] from `pytest` to achieve speedy tests. To make this
work in practice any changes made during a test are automatically
undone. Schemas, replication slots, subscriptions, publications are
dropped at the end of each test. And any changes made by `ALTER SYSTEM`
or manually editing of `pg_hba.conf` are undone too.
3. Automatic parallel execution of tests using the `-n auto` flag that's
added by `pytest-xdist`. This improved the speed of tests greatly with
the similar test framework I created for PgBouncer. Right now it doesn't
help much yet though, since this PR only adds two tests (one of which
takes ~10 times longer than the other).

Possible future improvements are:
1. Clean up even more things at the end of each test (e.g. users that
were created). These are fairly easy to add, but I have not done so yet
since they were not needed yet for this PR or the CDC PR. So I would not
be able to test the cleanup easily.
2. Support for query block detection similar to what we can now do using
isolation tests.

[fixture]: https://docs.pytest.org/en/6.2.x/fixture.html
2023-03-31 12:25:19 +02:00
Teja Mupparti 01ea5f58a9 Fix the incorrect value passed to a pointer-to-bool parameter; pass NULL, as the value is not used for this invocation. 2023-03-30 10:45:32 -07:00
aykutbozkurt dc57e4b2d8 PR #6728  / commit - 13
Add failure tests for nontransactional metadata sync mode.
2023-03-30 11:06:16 +03:00
aykutbozkurt f2f0ec9dda PR #6728  / commit - 12
Force activated bare connections to close at transaction end.
2023-03-30 11:06:16 +03:00
aykutbozkurt 35dbdae5a4 PR #6728  / commit - 11
Let AddNodeMetadata use the metadata sync API during node addition.
2023-03-30 11:06:16 +03:00
aykutbozkurt fe00b3263a PR #6728  / commit - 10
Do not acquire a strict lock over a separate transaction to localhost, as we already took the lock before.
But make sure that the caller holds the ExclusiveLock.
2023-03-30 11:06:16 +03:00
aykutbozkurt a74232bb39 PR #6728  / commit - 9
Do not enforce distributed transaction at `EnsureCoordinatorInitiatedOperation`.
2023-03-30 10:53:22 +03:00
aykutbozkurt cf4e93a332 PR #6728  / commit - 8
Drop table, if exists, during table dependency creation.
2023-03-30 10:53:22 +03:00
aykutbozkurt f8fb20cc95 PR #6728  / commit - 7
Remove unused old metadata sync methods.
2023-03-30 10:53:22 +03:00
aykutbozkurt 1fb3de14df PR #6728  / commit - 6
Let `activate_node_snapshot` use the new metadata sync API.
2023-03-30 10:53:22 +03:00
aykutbozkurt bc25ba51c3 PR #6728  / commit - 5
Let `ActivateNode` use the new metadata sync API.
2023-03-30 10:53:22 +03:00
aykutbozkurt 29ef9117e6 PR #6728  / commit - 4
Add new metadata sync methods which use the MetadataSyncContext API so that during the sync we can
- free memory to prevent OOM,
- use either transactional or nontransactional modes according to the GUC.
2023-03-30 10:53:22 +03:00
aykutbozkurt 8feb8c634a PR #6728  / commit - 3
Let nontransactional sync mode create a transaction per shell table while dropping the shell tables from the worker.
2023-03-30 10:53:20 +03:00
aykutbozkurt 85d50203d1 PR #6728  / commit - 2
- Create the MetadataSyncContext API to encapsulate
  both transactional and nontransactional modes,
- Add a GUC to switch between metadata sync transaction modes (a usage sketch follows).
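
A minimal usage sketch; the GUC name, its value spelling, and the sync UDF shown are assumptions based on this commit series:

```sql
-- switch metadata sync to the nontransactional mode, which frees
-- memory between batches to prevent OOM on large metadata
SET citus.metadata_sync_mode TO 'nontransactional';
SELECT start_metadata_sync_to_all_nodes();
```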
2023-03-30 10:52:46 +03:00
aykutbozkurt 98abd68178 PR #6728  / commit - 1
Add a method to send multiple commands to the worker list,
reusing the same bare connections. The change will be useful
for the metadata sync API.
2023-03-30 10:52:46 +03:00
Gokhan Gulbiz e71bfd6074
Identity column implementation refactorings (#6738)
This pull request proposes a change to the logic used for propagating
identity columns to worker nodes in Citus. Instead of creating a
dependent sequence for each identity column and changing its default
value to `nextval(seq)/worker_nextval(seq)`, this update will pass the
identity columns as-is to the worker nodes.

Please note that there are a few limitations to this change. 

1. Only bigint identity columns will be allowed in distributed tables to
ensure compatibility with the DDL from any node functionality. Our
current distributed sequence implementation only allows insert
statements from all nodes for bigint sequences.
2. `alter_distributed_table` and `undistribute_table` operations will
not be allowed for tables with identity columns. This is because we do
not have a proper way of keeping sequence states consistent across the
cluster.
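
A minimal sketch of the supported shape (the table and column names are illustrative):

```sql
-- bigint identity columns are now propagated to workers as-is
CREATE TABLE items (
    item_id   bigint GENERATED ALWAYS AS IDENTITY,
    tenant_id bigint NOT NULL
);
SELECT create_distributed_table('items', 'tenant_id');
```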

DESCRIPTION: Prevents using identity columns on data types other than
`bigint` on distributed tables
DESCRIPTION: Prevents using `alter_distributed_table` and
`undistribute_table` UDFs when a table has identity columns
DESCRIPTION: Fixes a bug that prevents enforcing identity column
restrictions on worker nodes

Depends on #6740
Fixes #6694
2023-03-30 10:41:01 +03:00
Emel Şimşek d3fb9288ab
Schedule parallel shard moves in background rebalancer by removing task dependencies between shard moves across colocation groups. (#6756)
DESCRIPTION: This PR removes the task dependencies between shard moves
for which the shards belong to different colocation groups. This change
results in scheduling multiple tasks in the RUNNABLE state. Therefore it
is possible that the background task monitor can run them concurrently.

Previously, all the shard moves planned in a rebalance operation took
dependency on each other sequentially.
For instance, given the following table and shards 

colocation group 1 colocation group 2
table1 table2 table3 table4 table 5
shard11 shard21 shard31 shard41 shard51
shard12 shard22 shard32 shard42 shard52
  
 if the rebalancer planner returned the below set of moves 
` {move(shard11), move(shard12), move(shard41), move(shard42)}`

the background rebalancer scheduled them such that they depend on each
other sequentially:
```
      {move(reftables) if there is any, none}
               |
      move( shard11)
               |
      move(shard12)
               |                {move(shard41)<--- move(shard12)} This is an artificial dependency  
      move(shard41)
               |
      move(shard42) 

```
This results in artificial dependencies between otherwise independent
moves.

Considering that the shards in different colocation groups can be moved
concurrently, this PR changes the dependency relationship between the
moves as follows:

```
      {move(reftables) if there is any, none}          {move(reftables) if there is any, none}     
               |                                                            |
      move(shard11)                                                  move(shard41)
               |                                                            |
      move(shard12)                                                   move(shard42) 
   
```
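
A minimal sketch of driving this from SQL; both UDFs exist in recent Citus releases and are shown only as an illustration, not as part of this PR:

```sql
-- start a background rebalance; moves in different colocation groups
-- can now be scheduled as independently runnable tasks
SELECT citus_rebalance_start();
-- watch the scheduled moves progress
SELECT * FROM citus_rebalance_status();
```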

---------

Co-authored-by: Jelte Fennema <jelte.fennema@microsoft.com>
2023-03-29 22:03:37 +03:00