Commit Graph

2565 Commits (1c05eebebe2bc0bf9851a37d5da454edaab24e16)

Author SHA1 Message Date
gindibay 1c05eebebe Updates udf name 2023-07-29 19:21:13 +03:00
gindibay 63311e546f Fixes some review notes 2023-07-29 17:35:15 +03:00
gindibay ed40dfe1a7 Give details for exception message 2023-07-28 23:35:14 +03:00
gindibay bb62b84ad7 Parameterizes node id in no node test 2023-07-28 22:57:10 +03:00
Gürkan İndibay 997a5d7217
Merge branch 'main' into citus_pause_node 2023-07-31 16:03:19 +03:00
gindibay 28cda815a3 Fixes coverage issue 2023-07-28 22:24:09 +03:00
gindibay 24380f80b9 Fixes unit tests 2023-07-28 20:34:42 +03:00
gindibay afa7bf640d Fixes upgrade_list_citus_objects diff 2023-07-28 17:21:11 +03:00
gindibay 6da0baada4 Adds citus_pause_node into test files 2023-07-28 17:15:55 +03:00
gindibay 8928c0ff88 Adds multi_extension.out output 2023-07-28 17:09:07 +03:00
Önder Kalacı cb5eb73048
Add support for router INSERT .. SELECT commands (#7077)
Traditionally, our planner works in the following order:
   router -> pushdown -> repartition -> pull to coordinator

However, for INSERT .. SELECT commands, we did not support "router".

In practice, that is not a big issue, because pushdown planning can
handle the router case as well.

However, with PG 16, certain outer joins are converted to JOIN without
any conditions (e.g., JOIN .. ON (true)) and the filters are pushed down
to the tables.

When the filters are pushed down to the tables, the router planner can
detect them, whereas the pushdown planner relies on JOIN conditions.

An example query:
```
INSERT INTO agg_events (user_id)
        SELECT raw_events_first.user_id
        FROM raw_events_first LEFT JOIN raw_events_second
        	ON raw_events_first.user_id = raw_events_second.user_id
        WHERE raw_events_first.user_id = 10;
```

As a side effect of this change, we can now also relax certain
limitations that the "pushdown" planner imposes but the "router" planner
does not. So, with this PR, we also allow those queries.

Closes https://github.com/citusdata/citus/pull/6772
DESCRIPTION: Prevents unnecessarily pulling the data into coordinator
for some INSERT .. SELECT queries that target a single-shard group
2023-07-28 15:07:20 +03:00
gindibay 7ccbb848b0 Adds unit tests 2023-07-28 14:25:56 +03:00
Teja Mupparti 846cbc3a39 In the MERGE join clause, there is a datatype mismatch between the target's distribution column
and the expression originating from the source. If the types are different, Citus uses
different hash functions for the two column types, which might lead to incorrect repartitioning
of the result data.
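A minimal sketch of the affected statement shape (table and column names are hypothetical): the target's int distribution column is compared against a bigint expression coming from the source, so the two sides hash differently:
```sql
-- Hypothetical illustration: t.id is the int distribution column, while the
-- source expression is bigint, so the join clause mixes two datatypes.
MERGE INTO target t
USING (SELECT s.id::bigint AS id, s.val FROM source s) src
    ON t.id = src.id
WHEN MATCHED THEN UPDATE SET val = src.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (src.id, src.val);
```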
2023-07-27 16:06:00 -07:00
Nils Dijk 186804c119
fix flakiness of shard_rebalancer operations test (#7083)
Fixes flakiness where the order of shards depended on the physical
layout in the heap. Failed here:
https://app.circleci.com/pipelines/github/citusdata/citus/33844/workflows/1651f8f5-6e6a-457e-9d35-34b8788ea6d1/jobs/1189836


```diff
--- /home/circleci/project/src/test/regress/expected/shard_rebalancer.out.modified	2023-07-24 12:51:27.126284675 +0000
+++ /home/circleci/project/src/test/regress/results/shard_rebalancer.out.modified	2023-07-24 12:51:27.170285079 +0000
@@ -2571,24 +2571,24 @@
 CREATE TABLE test_with_all_shards_excluded(a int PRIMARY KEY);
 SELECT create_distributed_table('test_with_all_shards_excluded', 'a', colocate_with:='none', shard_count:=4);
  create_distributed_table 
 --------------------------
  
 (1 row)
 
 SELECT shardid FROM pg_dist_shard;
  shardid 
 ---------
-  433504
   433505
   433506
   433507
+  433504
 (4 rows)
 
 SELECT rebalance_table_shards('test_with_all_shards_excluded', excluded_shard_list:='{102073, 102074, 102075, 102076}');
  rebalance_table_shards 
 ------------------------
  
 (1 row)
 
 DROP TABLE test_with_all_shards_excluded;
 SET citus.shard_count TO 2;
```
2023-07-27 16:24:35 +02:00
Önder Kalacı 862dae823e
Expand EnableNonColocatedRouterQueryPushdown to cover shard colocation (e.g., shard index) (#7076)
Previously, we only checked whether the relations are colocated, but we
ignored the shard indexes. That caused certain queries still to be
accidentally router-planned. We should enforce colocation checks for both the
shard index and the table colocation id to make the check restrictive enough.

For example, the following query should not be router-planned, and after this
patch, it won't be:
```SQL
SELECT
   user_id
 FROM
   ((SELECT user_id FROM raw_events_first WHERE user_id = 15) EXCEPT
    (SELECT user_id FROM raw_events_second where user_id = 17)) as foo;
```

DESCRIPTION: Enforce shard level colocation with
citus.enable_non_colocated_router_query_pushdown
2023-07-25 16:20:13 +03:00
ahmet gedemenli c968dc9c27 Do not rebalance if replication factor is greater than the node count 2023-07-25 13:38:33 +03:00
Naisila Puka 42d956888d
PG16 compatibility: Resolve compilation issues (#7005)
This PR provides successful compilation against PG16Beta2. It does some
necessary refactoring to prepare for full support of version 16, in
https://github.com/citusdata/citus/pull/6952 .

Change RelFileNode to RelFileNumber or RelFileLocator 
Relevant PG commit
b0a55e43299c4ea2a9a8c757f9c26352407d0ccc

new header for varatt.h 
Relevant PG commit:
d952373a987bad331c0e499463159dd142ced1ef

drop support for Abs, use fabs 
Relevant PG commit
357cfefb09115292cfb98d504199e6df8201c957

tuplesort changes
Relevant PG commit:
d37aa3d35832afde94e100c4d2a9618b3eb76472

Fix vacuum in columnar 
Relevant PG commit:
4ce3afb82ecfbf64d4f6247e725004e1da30f47c
older one:
b6074846cebc33d752f1d9a66e5a9932f21ad177

Add alloc_flags to pg_clean_ascii 
Relevant PG commit:
45b1a67a0fcb3f1588df596431871de4c93cb76f

Merge GetNumConfigOptions() into get_guc_variables() 
Relevant PG commit:
3057465acfbea2f3dd7a914a1478064022c6eecd

Minor PG refactor PG_FUNCNAME_MACRO __func__ 
Relevant PG commit
320f92b744b44f961e5d56f5f21de003e8027a7f

Pass NULL context to stringToQualifiedNameList, typeStringToTypeName 
The pre-PG16 error behaviour for the following
stringToQualifiedNameList & typeStringToTypeName
was ereport(ERROR, ...)
Now with PG16 we have this context input. We preserve the same behaviour
by passing a NULL context, because of the following:
(copy paste comment from PG16)
If "context" isn't an ErrorSaveContext node, this behaves as
errstart(ERROR, domain), and the errsave() macro ends up acting
exactly like ereport(ERROR, ...).
Relevant PG commit
858e776c84f48841e7e16fba7b690b76e54f3675

Use RangeVarCallbackMaintainsTable instead of RangeVarCallbackOwnsTable 
Relevant PG commit:
60684dd834a222fefedd49b19d1f0a6189c1632e

FIX THIS: Not implemented grant-level control of role inheritance 
see PG commit
e3ce2de09d814f8770b2e3b3c152b7671bcdb83f

Make Scan node abstract 
PG commit:
8c73c11a0d39049de2c1f400d8765a0eb21f5228

Change in Var representations, get_relids_in_jointree 
PG commit
2489d76c4906f4461a364ca8ad7e0751ead8aa0d

Deadlock detection changes because SHM_QUEUE is removed 
Relevant PG Commit:
d137cb52cb7fd44a3f24f3c750fbf7924a4e9532

TU_UpdateIndexes 
Relevant PG commit
19d8e2308bc51ec4ab993ce90077342c915dd116

Use object_ownercheck and object_aclcheck functions 
Relevant PG commits:
afbfc02983f86c4d71825efa6befd547fe81a926
c727f511bd7bf3c58063737bcf7a8f331346f253

Rework Permission Info for successful compilation 
Relevant PG commits:
postgres/postgres@a61b1f7
postgres/postgres@b803b7d
---------

Co-authored-by: onderkalaci <onderkalaci@gmail.com>
2023-07-21 14:32:37 +03:00
Teja Mupparti 87dc88f837 Isolate schema sharding/MERGE tests into a new file, and
use the new GUC parameter
2023-07-19 12:23:45 -07:00
Halil Ozan Akgül c99a93ffa7
Move SQL file changes for citus_shard_sizes fixes into the new 11.3-2 version (#7050)
This PR moves the `citus_shard_sizes` changes from #7003 and #7018 into
a new Citus version, 11.3-2
2023-07-14 17:19:54 +03:00
aykut-bozkurt 609a5465ea
Bump Citus version into 12.1devel (#7061) 2023-07-14 13:12:30 +03:00
Onur Tirtir f3cdb6d1bf Deparse ALTER TABLE commands if ADD COLUMN is the only subcommand
And stabilize multi_alter_table_statements.sql.
2023-07-12 18:17:47 +03:00
Onur Tirtir 6365f47b57 Properly handle index storage options for ADD CONSTRAINT / COLUMN 2023-07-11 17:42:43 +03:00
Onur Tirtir ae142e1764 Properly handle IF NOT EXISTS for ADD COLUMN 2023-07-11 17:42:43 +03:00
Onur Tirtir d4789a2c3a Stabilize test helper sql files
multi_test_helpers is run in parallel with others, so we need to stabilize
the other test helpers too to make multi_test_helpers runnable multiple
times.
2023-07-06 10:47:41 +03:00
Halil Ozan Akgül 613cced1ae
Use citus_shard_sizes in citus_tables (#7018)
Fixes #7019 

This PR updates the citus_tables view to use the citus_shard_sizes function
instead of citus_total_relation_size, to improve performance.
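A hedged example of the kind of query that benefits (it assumes the view's table_name and table_size columns, which may vary by Citus version):
```sql
-- Fetching per-table sizes through the view; computing these sizes is what
-- the switch to citus_shard_sizes() speeds up.
SELECT table_name, table_size FROM citus_tables;
```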
2023-07-05 11:40:34 +03:00
aykut-bozkurt 719d92c8b9
mat view should not be converted to tenant table (#7043)
We allow materialized views to exist in a distributed schema, but we should
not try to convert them to tenant tables since they cannot be
distributed.

Fixes https://github.com/citusdata/citus/issues/7041
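A minimal sketch of the scenario (schema and view names are hypothetical): the materialized view lives in the distributed schema but is left alone rather than being converted to a tenant table:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA tenant_1;
-- Allowed in the distributed schema, but not converted to a tenant table.
CREATE MATERIALIZED VIEW tenant_1.mv AS SELECT 1 AS a;
```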
2023-07-04 17:28:03 +03:00
Ahmet Gedemenli 5051be86ff
Skip distributed schema insertion into pg_dist_schema, if already exists (#7044)
Inserting into `pg_dist_schema` causes unexpected duplicate key errors,
for distributed schemas that already exist. With this commit we skip the
insertion if the schema already exists in `pg_dist_schema`.

The error:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA sc2;
CREATE SCHEMA IF NOT EXISTS sc2;
NOTICE:  schema "sc2" already exists, skipping
ERROR:  duplicate key value violates unique constraint "pg_dist_schema_pkey"
DETAIL:  Key (schemaid)=(17294) already exists.
```

fixes: #7042
2023-07-04 15:19:07 +03:00
Gokhan Gulbiz e0d3476526
Add locking mechanism for tenant monitoring probabilistic approach (#7026)
This PR 
* Addresses a concurrency issue in the probabilistic approach of tenant
monitoring by acquiring a shared lock for tenant existence checks.
* Changes `citus.stat_tenants_sample_rate_for_new_tenants` type to
double
* Renames `citus.stat_tenants_sample_rate_for_new_tenants` to
`citus.stat_tenants_untracked_sample_rate`
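A hedged usage sketch of the renamed setting (the value is illustrative):
```sql
-- After the rename, the sample rate for not-yet-tracked tenants is a double.
SET citus.stat_tenants_untracked_sample_rate = 0.5;
```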
2023-07-03 13:08:03 +03:00
Jelte Fennema ac24e11986
Change default rebalance strategy to by_disk_size (#7033)
DESCRIPTION: Change default rebalance strategy to by_disk_size

When introducing rebalancing by disk size we didn't make it the default
initially. The main reason was that we expected some problems with
it. We have indeed had some problems/bugs with it over the years, and
have fixed all of them. By now we're quite confident in its stability,
and that it pretty much always gives better results than by_shard_count.

So this PR makes by_disk_size the new default. We don't change the
default when some other strategy than by_shard_count is the current
default. This is in case someone defined their own rebalance strategy
and marked this as the default themselves.

Note: It explicitly does nothing during a downgrade, because there's no
way of knowing if the rebalance strategy before the upgrade was
by_disk_size or by_shard_count. And even in previous versions,
by_disk_size has been considered superior for quite some time.
2023-07-03 11:08:24 +02:00
Jelte Fennema fd1427de2c
Change by_disk_size rebalance strategy to have a base size (#7035)
One problem with rebalancing by disk size is that shards in newly
created colocation groups are considered extremely small. This can
easily result in bad balances if there are some other colocation groups
that do have some data. One extremely bad example of this is:
1. You have 2 workers
2. Both contain about 100GB of data, but there's a 70MB difference.
3. You create 100 new distributed schemas with a few empty tables in
   them
4. You run the rebalancer
5. Now all new distributed schemas are placed on the node that had
   70MB less.
6. You start loading some data in these shards and quickly the balance
   is completely off

To address this edge case, this PR changes the by_disk_size rebalance
strategy to add a base size of 100MB to the actual size of each
shard group. This can still result in a bad balance when shard groups
are empty, but it solves some of the worst cases.
2023-06-27 16:37:09 +02:00
Teja Mupparti 387b5f80f9 Fixes bug #6785 2023-06-22 10:44:45 -07:00
Ahmet Gedemenli 99edb2675f
Improve error/hint messages related to schema-based sharding (#7027)
Improve error/hint messages related to schema-based sharding
2023-06-22 18:10:12 +03:00
Ahmet Gedemenli 44e3c3b9c6
Improve error message for CREATE SCHEMA .. CREATE TABLE (#7024)
Improve error message for CREATE SCHEMA .. CREATE TABLE when
enable_schema_based_sharding is enabled.
2023-06-21 15:24:09 +03:00
aykut-bozkurt 565c5260fd
Properly handle error at owner check (#6984)
We did not properly handle the error in the ownership check method, which
caused `max stack depth for errors` as in
https://github.com/citusdata/citus/issues/6980.

**Fix:**
In case of an error, we should roll back the subtransaction and emit the
message at log level `LOG_SERVER_ONLY`.

Note: We suppress logs on the client side to prevent pg vanilla test
failures due to Citus logs, which differ from the actual Postgres logs.
(For context: https://github.com/citusdata/citus/pull/6130)

I also needed to fix a flaky test: `multi_schema_support`

DESCRIPTION: Fixes a bug related to non-existent objects in DDL
commands.

Fixes https://github.com/citusdata/citus/issues/6980
2023-06-21 14:50:01 +03:00
Naisila Puka 69af3e8509
Drop PG13 Support Phase 2 - Remove PG13 specific paths/tests (#7007)
This commit is the second and last phase of dropping PG13 support.

It consists of the following:

- Removes all PG_VERSION_13 & PG_VERSION_14 from codepaths
- Removes pg_version_compat entries and columnar_version_compat entries
specific for PG13
- Removes alternative pg13 test outputs 
- Removes PG13 normalize lines and fixes the test outputs based on that

It is a continuation of 5bf163a27d
2023-06-21 14:18:23 +03:00
aykut-bozkurt 1bb667ce6e
Fix create schema authorization bug (#7015)
Fixes a bug related to `CREATE SCHEMA AUTHORIZATION <rolename>` for single shard
tables. We should properly fetch the schema name from the role specification if the schema name is not given.
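A minimal sketch of the affected statement shape (the role name is hypothetical); with no explicit schema name, the schema is named after the role, and that name has to come from the role specification:
```sql
SET citus.enable_schema_based_sharding TO ON;
CREATE ROLE tenant_role;
-- No schema name given: the schema is implicitly named "tenant_role".
CREATE SCHEMA AUTHORIZATION tenant_role;
```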
2023-06-20 22:05:17 +03:00
aykut-bozkurt f667f14029
Rewind tuple store to fix scrollable with hold cursor fetches (#7014)
We need to rewind the tuplestorestate's tuple index to get correct
results when fetching from scrollable WITH HOLD cursors.


`PersistHoldablePortal` is responsible for persisting the
tuplestorestate inside a WITH HOLD cursor before committing a
transaction.

It rewinds the cursor like below (`ExecutorRewind` calls `rescan`):
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
  ExecutorRewind(queryDesc);
}
```

At the end, it properly adjusts the tuple index for the holdStore in the portal.
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
	if (!tuplestore_skiptuples(portal->holdStore,
	                           portal->portalPos,
	                           true))
		elog(ERROR, "unexpected end of tuple stream");
}
```

DESCRIPTION: Fixes incorrect results on fetching scrollable with hold
cursors.

Fixes https://github.com/citusdata/citus/issues/7010
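A hedged sketch of the fetch pattern this fixes (the table name is hypothetical): a scrollable WITH HOLD cursor that is read both before and after the transaction commits:
```sql
BEGIN;
DECLARE c SCROLL CURSOR WITH HOLD FOR SELECT * FROM dist_table ORDER BY 1;
FETCH 5 FROM c;
COMMIT;          -- PersistHoldablePortal materializes the remaining rows here
FETCH 5 FROM c;  -- must continue from row 6, not restart from the beginning
CLOSE c;
```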
2023-06-19 23:00:18 +03:00
Teja Mupparti 58da8771aa This pull request introduces support for nonroutable merge commands in the following scenarios:
1) For distributed tables that are not colocated.
2) When joining on a non-distribution column for colocated tables.
3) When merging into a distributed table using reference or citus-local tables as the data source.

This is accomplished primarily through the implementation of the following two strategies.

Repartition: Plan the source query independently,
execute the results into intermediate files, and repartition the files to
co-locate them with the merge-target table. Subsequently, compile a final
merge query on the target table using the intermediate results as the data
source.

Pull-to-coordinator: Execute the plan that requires evaluation at the coordinator,
run the query on the coordinator, and redistribute the resulting rows to ensure
colocation with the target shards. Direct the MERGE SQL operation to the worker
nodes' target shards, using the intermediate files colocated with the data as the
data source.
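A hedged example of a newly supported shape (table names are hypothetical): a MERGE whose distributed source is not colocated with the target, which can now go through the repartition path instead of erroring out:
```sql
-- target and source are distributed but not colocated; the source query is
-- planned separately, written to intermediate files, and repartitioned to
-- match the target shards before the final MERGE runs on the workers.
MERGE INTO target t
USING source s
    ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
```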
2023-06-19 12:23:40 -07:00
Xin Li c10cb50aa9
Support custom cast from / to timestamptz in time partition management UDFs (#6923)
This is to implement custom casting of the table partition column
type from / to `timestamptz` in time partition management UDFs, as
proposed in ticket #6454

The general idea is that, for a time partition column with a type other than
`date`, `timestamp`, or `timestamptz`, users can provide a custom
bidirectional cast between the column type and `timestamptz`; the UDFs
will then be able to create and drop time partitions for such tables.

Fixes #6454
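A hedged sketch under stated assumptions (the partition column stores Unix epoch seconds as bigint; the function and table names are hypothetical, and `events` is assumed to be range-partitioned on that column): once a bidirectional cast to timestamptz exists, create_time_partitions can manage the partitions:
```sql
-- Hypothetical cast functions between the bigint epoch column and timestamptz.
CREATE FUNCTION epoch_to_timestamptz(bigint) RETURNS timestamptz
    AS $$ SELECT to_timestamp($1) $$ LANGUAGE SQL IMMUTABLE;
CREATE FUNCTION timestamptz_to_epoch(timestamptz) RETURNS bigint
    AS $$ SELECT extract(epoch FROM $1)::bigint $$ LANGUAGE SQL IMMUTABLE;
CREATE CAST (bigint AS timestamptz) WITH FUNCTION epoch_to_timestamptz(bigint);
CREATE CAST (timestamptz AS bigint) WITH FUNCTION timestamptz_to_epoch(timestamptz);

-- The time partition management UDF can now create partitions for the table.
SELECT create_time_partitions('events', INTERVAL '1 day', now() + INTERVAL '7 days');
```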

---------

Signed-off-by: Xin Li <xin@swirldslabs.com>
Co-authored-by: Marco Slot <marco.slot@microsoft.com>
Co-authored-by: Ahmet Gedemenli <afgedemenli@gmail.com>
2023-06-19 17:49:05 +03:00
Halil Ozan Akgül d71ad4b65a
Add Publication Tests for Tenant Schema Tables (#7011)
This PR adds schema-based sharding tests to the publication.sql test file
2023-06-19 12:39:41 +03:00
aykut-bozkurt fba5c8dd30
ALTER TABLE <tblname> SET SCHEMA <schemaname> for single shard tables (#7004)
Adds support for altering the schema of single shard tables. We do that in 2
steps:
1. Undistribute the tenant table at the `preprocess` step,
2. Distribute the table into the new schema, if it is a distributed schema,
after the DDLs are propagated.

DESCRIPTION: Adds support for altering a table's schema to/from
distributed schemas.
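A hedged illustration of the now-supported command (schema and table names are hypothetical):
```sql
-- Moving a tenant table out of its distributed schema: the table is
-- undistributed at preprocess time, then distributed again if the
-- destination schema is also a distributed schema.
ALTER TABLE tenant_schema.orders SET SCHEMA regular_schema;
```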
2023-06-19 10:21:13 +03:00
Marco Slot 3adc1575d9
Fix DROP CONSTRAINT in command string with other commands (#7012)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-06-16 15:54:37 +02:00
Onur Tirtir 12a093b456
Allow using generated identity column based on int/smallint when creating a distributed table (#7008)
Allow using generated identity column based on int/smallint when
creating a distributed table so that applications that rely on
those data types don't break.

Inserting into / modifying such columns from workers is not allowed
but it's better than not allowing such columns altogether.
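A minimal sketch (table and column names are hypothetical) of a distributed table with an int identity column, which was previously rejected:
```sql
CREATE TABLE items (
    item_id int GENERATED ALWAYS AS IDENTITY,
    tenant_id int,
    payload text
);
-- Creating the distributed table now succeeds even though item_id is an int
-- identity column; writing item_id from worker nodes is still disallowed.
SELECT create_distributed_table('items', 'tenant_id');
```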
2023-06-16 14:34:23 +03:00
Halil Ozan Akgül 04f6868ed2
Add citus_schemas view (#6979)
DESCRIPTION: Adds citus_schemas view

The citus_schemas view will be created in the public schema if it exists;
if not, the view will be created in pg_catalog.

Need to:
- [x] Add tests
- [x] Fix tests
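A hedged example query against the new view (the exact column list may vary by version, so it selects everything):
```sql
-- Lists distributed (tenant) schemas, wherever the view was created
-- (public if it exists, otherwise pg_catalog).
SELECT * FROM citus_schemas;
```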
2023-06-16 14:21:58 +03:00
Naisila Puka 5bf163a27d
Remove PG13 from CI and Configure (#7002)
DESCRIPTION: Drops PG13 Support

This commit is the first phase of dropping PG13 support.

It consists of the following:

- Removes pg13 from CI tests
Among other things, Citus upgrade tests should now use PG14.
Earliest Citus version supporting PG14 is 10.2.
We also pick 11.3 version for upgrade_pg_dist_cleanup tests.
Therefore, we run the citus upgrade tests with versions 10.2 and 11.3.

- Removes pg13 from configure script

- Remove upgrade_columnar_metapage upgrade tests 
We populate first_row_number column of columnar.stripe table
during citus 10.1-10.2 upgrade. Given that we start from citus 10.2.0,
which is the oldest version supporting PG14, we don't have that
upgrade path anymore. Hence we remove these tests.

- Removes upgrade_pg_dist_object_test and upgrade_partition_constraints tests
These upgrade tests require the citus old version to be less than 10.0.
Given that we drop support for PG13, we run upgrade tests with PG14,
which starts with 10.2.
So we remove these upgrade tests.

- Documents that upgrade_post_11 should upgrade from version less than 11 
In this way we make sure we run
citus_finalize_upgrade_to_citus11 script

- Adds needed alternative output for upgrade_citus_finish_citus_upgrade 
Given that we use 11.3 as the citus old version as well,
we add this alternative output because pg_catalog.citus_finish_citus_upgrade()
makes sense if last_upgrade_major_version < 11. See below for reference:
```sql
pg_catalog.citus_finish_citus_upgrade():
...
	IF last_upgrade_major_version < 11 THEN
		PERFORM citus_finalize_upgrade_to_citus11();
		performed_upgrade := true;
	END IF;

	IF NOT performed_upgrade THEN
		RAISE NOTICE 'already at the latest distributed
		schema version (%)', last_upgrade_version_string;
		RETURN;
	END IF;
...
```

And that's it :)

The second phase of dropping PG13 support will consist in removing
all the PG13 specific compilation paths/tests in the Citus repo.
Will be done soon.
2023-06-15 14:54:06 +03:00
Ahmet Gedemenli 002a88ae7f
Error for single shard table creation if replication factor > 1 (#7006)
Error for single shard table creation if replication factor > 1
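A hedged sketch of the combination that now errors out (the schema and table names are hypothetical):
```sql
SET citus.shard_replication_factor TO 2;
SET citus.enable_schema_based_sharding TO ON;
CREATE SCHEMA tenant_1;
-- Creating a single shard (tenant) table now raises an error,
-- since single shard tables cannot have a replication factor > 1.
CREATE TABLE tenant_1.users (id int);
```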
2023-06-15 13:13:45 +03:00
Naisila Puka 3cc7a4aa42
Fix pg14-pg15 upgrade_distributed_triggers test (#6981)
This test is only relevant for pg14-15 upgrade.
However, the check on `upgrade_distributed_triggers_after` didn't take
into consideration the case when we are doing pg15-16 upgrade. Hence, I
added one more condition to the test: the existence of the
`upgrade_distributed_triggers` schema, which can only be created in pg14.
2023-06-14 15:32:38 +03:00
Onur Tirtir dbdf04e8ba
Rename pg_dist tenant_schema to pg_dist_schema (#7001) 2023-06-14 12:12:15 +03:00
Ahmet Gedemenli 7b0bc62173
Support CREATE TABLE .. AS SELECT .. commands for tenant tables (#6998)
Support CREATE TABLE .. AS SELECT .. commands for tenant tables
2023-06-13 17:54:09 +03:00
Halil Ozan Akgül 772d194357
Changes citus_shard_sizes view's Shard Name Column to Shard Id (#7003)
The citus_shard_sizes view had a shard name column that we used to extract the shard
id. This PR changes the column to shard id so we don't do unnecessary
string operations.
2023-06-13 16:36:35 +03:00