Commit Graph

3060 Commits (e6ac9f2a680126a9d2784b3396e6cf3ef0070dd9)

Author SHA1 Message Date
Naisila Puka 84f2d8685a
Adds control for background task executors involving a node (#6771)
DESCRIPTION: Adds control for background task executors involving a node

### Background and motivation

Nonblocking concurrent task execution via background workers was
introduced in [#6459](https://github.com/citusdata/citus/pull/6459), and
concurrent shard moves in the background rebalancer were introduced in
[#6756](https://github.com/citusdata/citus/pull/6756) - with a hard
dependency that limits shard moves to one per node. As we know, a shard
move consists of a shard moving from a source node to a target node. The
hard dependency was used because the background task runner didn't have
an option to limit the parallel shard moves per node.

With the motivation of controlling the number of concurrent shard
moves that involve a particular node, either as source or target, this
PR introduces a general new GUC
citus.max_background_task_executors_per_node to be used in the
background task runner infrastructure. So, why do we even want to
control and limit the concurrency? Well, it's all about resource
availability: because the moves involve the same nodes, extra
parallelism won’t make the rebalance complete faster if some resource is
already maxed out (usually CPU or disk). Or, if the cluster is being
used in a production setting, the moves might compete for resources with
production queries much more than if they had been executed
sequentially.

### How does it work?

A new column named nodes_involved is added to the catalog table that
keeps track of the scheduled background tasks,
pg_dist_background_task. It is of type integer[] - to store a list
of node ids. It is NULL by default - the column will be filled by the
rebalancer, but we may not care about the nodes involved in other uses
of the background task runner.

Table "pg_catalog.pg_dist_background_task"

     Column     |           Type           
============================================
 job_id         | bigint
 task_id        | bigint
 owner          | regrole
 pid            | integer
 status         | citus_task_status
 command        | text
 retry_count    | integer
 not_before     | timestamp with time zone
 message        | text
+nodes_involved | integer[]

A hashtable named ParallelTasksPerNode keeps track of the number of
parallel running background tasks per node. An entry in the hashtable is
as follows:

ParallelTasksPerNodeEntry
{
	node_id // The node is used as the hash table key 
	counter // Number of concurrent background tasks that involve node node_id
                // The counter limit is citus.max_background_task_executors_per_node
}

When the background task runner assigns a runnable task to a new
executor, it increments the counter for each of the nodes involved with
that runnable task. The limit of each counter is
citus.max_background_task_executors_per_node. If the limit is reached
for any of the nodes involved, the runnable task is skipped. Later, when
a running task finishes, the background task runner decrements the
counter for each of the nodes involved with the finished task. The
following functions take care of these increment-decrement steps:

IncrementParallelTaskCountForNodesInvolved(task)
DecrementParallelTaskCountForNodesInvolved(task)

citus.max_background_task_executors_per_node can be changed on the
fly. In the background rebalancer, we simply give {source_node,
target_node} as the nodesInvolved input to the
ScheduleBackgroundTask function. The rest is taken care of by the
general background task runner infrastructure explained above. Check
background_task_queue_monitor.sql and
background_rebalance_parallel.sql tests for detailed examples.
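
As a rough usage sketch (the value below is illustrative, and the `ALTER SYSTEM` route assumes the GUC can be changed at the system level, as implied by "can be changed on the fly"):

```sql
-- Illustrative only: allow up to 2 concurrent background task executors
-- that involve any given node.
ALTER SYSTEM SET citus.max_background_task_executors_per_node = 2;
SELECT pg_reload_conf();

-- Inspect which nodes each scheduled background task involves.
SELECT task_id, status, nodes_involved
FROM pg_dist_background_task;
```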

#### Note

This PR also adds a hard node dependency if a node is first being used
as a source for a move, and then later as a target. The reason this
should be a hard dependency is that the first move might make space for
the second move. So, we could run out of disk space (or at least
overload the node) if we move the second shard to it before the first
one is moved away.

Fixes https://github.com/citusdata/citus/issues/6716
2023-04-06 14:12:39 +03:00
Gokhan Gulbiz fa00fc6e3e
Add upgrade/downgrade paths between v11.2.2 and v11.3.1 (#6820)
DESCRIPTION: Add upgrade/downgrade paths between v11.2.2 and v11.3.1

---------

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
2023-04-06 12:46:09 +03:00
Halil Ozan Akgül 52ad2d08c7
Multi tenant monitoring (#6725)
DESCRIPTION: Adds views that monitor statistics on tenant usages

This PR adds `citus_stats_tenants` view that monitors the tenants on the
cluster.

`citus_stats_tenants` shows the node id, colocation id, tenant
attribute, read count in this and the last period, and query count in
this and the last period of the tenant.
The tenant attribute currently is the tenant's distribution column
value; later, when schema-based sharding is introduced, this meaning
might change.
A period is the time bucket that queries are counted by. Read and query
counts for this period can increase until the current period ends. After
that, those counts are moved to the last period's counts, which cannot
change. The period length can be set using `citus.stats_tenants_period`.

`SELECT` queries are counted as _read_ queries; `INSERT`, `UPDATE` and
`DELETE` queries are counted as _write_ queries. So in the view, read
counts are `SELECT` counts, and query counts are the combined `SELECT`,
`INSERT`, `UPDATE` and `DELETE` counts.

The data is stored in shared memory, in a struct named
`MultiTenantMonitor`.

`citus_stats_tenants` shows the data from local tenants.

`citus_stats_tenants` shows up to `citus.stats_tenant_limit` tenants.
The tenants are scored based on the number of queries they run and the
recency of those queries. Every query run increases the score of the
tenant by `ONE_QUERY_SCORE`, and after every period ends the scores are
halved. Halving is done lazily.
To retain information for longer, the monitor keeps up to 3 times
`citus.stats_tenant_limit` tenants. When the tenant count hits `3 *
citus.stats_tenant_limit`, the last `citus.stats_tenant_limit` tenants
are removed. To see all stored tenants you can use
`citus_stats_tenants(return_all_tenants := true)`.
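
For illustration, the view and the function mentioned above can be queried as follows (a sketch; exact column names may differ from this description):

```sql
-- Tenants currently reported on this node.
SELECT * FROM citus_stats_tenants;

-- All stored tenants, including the extra ones kept for scoring.
SELECT * FROM citus_stats_tenants(return_all_tenants := true);
```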

- [x] Create collector view that gets data from all nodes. #6761 
- [x] Add monitoring log #6762 
- [x] Create enable/disable GUC #6769 
- [x] Parse the annotation string correctly #6796 
- [x] Add local queries and prepared statements #6797
- [x] Rename to citus_stat_statements #6821 
- [x] Run pgbench
- [x] Fix role permissions #6812

---------

Co-authored-by: Gokhan Gulbiz <ggulbiz@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-04-05 17:44:17 +03:00
Onur Tirtir d4f9de7875
Explicitly disallow local rels when inserting into dist table (#6817) 2023-04-04 17:46:43 +02:00
Marco Slot 7c0589abb8
Do not override combinefunc of custom aggregates with common names (#6805)
DESCRIPTION: Fix an issue that caused some queries with custom
aggregates to fail

While playing around with https://github.com/pgvector/pgvector I noticed
that the AVG query was broken. That's because we treat it as any other
AVG by breaking it down into SUM and COUNT, but there are no SUM/COUNT
functions in this case; there is, however, a perfectly usable combinefunc.
This PR changes our aggregate logic to prefer custom aggregates with a
combinefunc even if they have a common name.

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-04-03 19:43:09 +02:00
rajeshkt78 d5df892394
Make CDC decoder an independent extension (#6810)
DESCRIPTION: 

- The CDC decoder is refactored into a separate extension that can be loaded dynamically without having to reload Citus.
- The CDC decoder code can be compiled using the DECODER flag to work with different decoders like pgoutput and wal2json.
  By default the base decoder is "pgoutput".
- The dynamic_library_path config is adjusted dynamically to prefer the decoders in the cdc_decoders directory during Citus init,
  so that users can use the replication subscription commands without having to make any config changes.
2023-04-03 21:32:15 +05:30
Ahmet Gedemenli 697bb55fc5
Refactor shard transfers (#6631)
DESCRIPTION: Refactor and unify shard move and copy functions

Shard move and copy functions share a lot of code in common. This PR
unifies these functions into one, along with some helper functions. To
preserve the current behavior, we'll introduce and use an enum
parameter, and hardcoded strings for producing error/warning messages.
2023-04-03 10:43:54 +03:00
Marco Slot 343d1c5072
Refactor executor utility functions into multiple files (#6593)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-03-31 13:07:48 +02:00
Teja Mupparti 01ea5f58a9 Fix the incorrect value passed to a pointer-to-bool parameter; pass NULL as the value is not used for this invocation. 2023-03-30 10:45:32 -07:00
aykutbozkurt f2f0ec9dda PR #6728  / commit - 12
Force activated bare connections to close at transaction end.
2023-03-30 11:06:16 +03:00
aykutbozkurt 35dbdae5a4 PR #6728  / commit - 11
Let AddNodeMetadata use the metadata sync api during node addition.
2023-03-30 11:06:16 +03:00
aykutbozkurt fe00b3263a PR #6728  / commit - 10
Do not acquire a strict lock in a separate transaction to localhost, as we already take the lock beforehand.
But make sure that the caller holds the ExclusiveLock.
2023-03-30 11:06:16 +03:00
aykutbozkurt a74232bb39 PR #6728  / commit - 9
Do not enforce distributed transaction at `EnsureCoordinatorInitiatedOperation`.
2023-03-30 10:53:22 +03:00
aykutbozkurt cf4e93a332 PR #6728  / commit - 8
Drop table, if exists, during table dependency creation.
2023-03-30 10:53:22 +03:00
aykutbozkurt f8fb20cc95 PR #6728  / commit - 7
Remove unused old metadata sync methods.
2023-03-30 10:53:22 +03:00
aykutbozkurt 1fb3de14df PR #6728  / commit - 6
Let `activate_node_snapshot` use new metadata sync api.
2023-03-30 10:53:22 +03:00
aykutbozkurt bc25ba51c3 PR #6728  / commit - 5
Let `ActivateNode` use new metadata sync api.
2023-03-30 10:53:22 +03:00
aykutbozkurt 29ef9117e6 PR #6728  / commit - 4
Add new metadata sync methods which use the MetadataSyncContext api so that during the sync we can
- free memory to prevent OOM,
- use either transactional or nontransactional mode according to the GUC.
2023-03-30 10:53:22 +03:00
aykutbozkurt 8feb8c634a PR #6728  / commit - 3
Let nontransactional sync mode create a transaction per shell table while dropping the shell tables from the worker.
2023-03-30 10:53:20 +03:00
aykutbozkurt 85d50203d1 PR #6728  / commit - 2
- Create MetadataSyncContext api to encapsulate
  both transactional and nontransactional modes,
- Add a GUC to switch between metadata sync transaction modes.
2023-03-30 10:52:46 +03:00
aykutbozkurt 98abd68178 PR #6728  / commit - 1
Add a method to send multiple commands to a worker list
reusing the same bare connections. The change will be useful
for the metadata sync api.
2023-03-30 10:52:46 +03:00
Gokhan Gulbiz e71bfd6074
Identity column implementation refactorings (#6738)
This pull request proposes a change to the logic used for propagating
identity columns to worker nodes in Citus. Instead of creating a
dependent sequence for each identity column and changing its default
value to `nextval(seq)/worker_nextval(seq)`, this update will pass the
identity columns as-is to the worker nodes.
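
A hypothetical example of the shape that remains supported after this change (a bigint identity column propagated as-is; the table and column names are made up):

```sql
CREATE TABLE orders (
    order_key bigint GENERATED BY DEFAULT AS IDENTITY,
    customer_id int
);
SELECT create_distributed_table('orders', 'customer_id');
```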

Please note that there are a few limitations to this change. 

1. Only bigint identity columns will be allowed in distributed tables to
ensure compatibility with the DDL from any node functionality. Our
current distributed sequence implementation only allows insert
statements from all nodes for bigint sequences.
2. `alter_distributed_table` and `undistribute_table` operations will
not be allowed for tables with identity columns. This is because we do
not have a proper way of keeping sequence states consistent across the
cluster.

DESCRIPTION: Prevents using identity columns on data types other than
`bigint` on distributed tables
DESCRIPTION: Prevents using `alter_distributed_table` and
`undistribute_table` UDFs when a table has identity columns
DESCRIPTION: Fixes a bug that prevents enforcing identity column
restrictions on worker nodes

Depends on #6740
Fixes #6694
2023-03-30 10:41:01 +03:00
Emel Şimşek d3fb9288ab
Schedule parallel shard moves in background rebalancer by removing task dependencies between shard moves across colocation groups. (#6756)
DESCRIPTION: This PR removes the task dependencies between shard moves
for which the shards belong to different colocation groups. This change
results in scheduling multiple tasks in the RUNNABLE state. Therefore it
is possible that the background task monitor can run them concurrently.

Previously, all the shard moves planned in a rebalance operation took
dependency on each other sequentially.
For instance, given the following table and shards 

colocation group 1             colocation group 2
table1   table2   table3       table4   table5
shard11  shard21  shard31      shard41  shard51
shard12  shard22  shard32      shard42  shard52
  
if the rebalancer planner returned the below set of moves
`{move(shard11), move(shard12), move(shard41), move(shard42)}`

the background rebalancer scheduled them such that they depended on each other
sequentially.
```
      {move(reftables) if there is any, none}
               |
      move( shard11)
               |
      move(shard12)
               |                {move(shard41)<--- move(shard12)} This is an artificial dependency  
      move(shard41)
               |
      move(shard42) 

```
This results in artificial dependencies between otherwise independent
moves.

Considering that the shards in different colocation groups can be moved
concurrently, this PR changes the dependency relationship between the
moves as follows:

```
      {move(reftables) if there is any, none}          {move(reftables) if there is any, none}     
               |                                                            |
      move(shard11)                                                  move(shard41)
               |                                                            |
      move(shard12)                                                   move(shard42) 
   
```

---------

Co-authored-by: Jelte Fennema <jelte.fennema@microsoft.com>
2023-03-29 22:03:37 +03:00
Marco Slot 8ad444f8ef Hide shards from CDC subscriptions 2023-03-29 00:59:12 +02:00
Marco Slot b09d239809 Propagate CREATE PUBLICATION statements 2023-03-29 00:59:12 +02:00
Gokhan Gulbiz e618345703
Handle identity columns properly in the router planner (#6802)
DESCRIPTION: Fixes a bug with insert..select queries with identity
columns
Fixes #6798
2023-03-29 15:50:12 +03:00
Teja Mupparti 37500806d6 Add appropriate locks for MERGE to run in parallel 2023-03-28 09:45:40 -07:00
rajeshkt78 85b8a2c7a1
CDC implementation for Citus using Logical Replication (#6623)
Description:
Implementing CDC changes using Logical Replication to avoid
re-publishing events multiple times by setting up replication origin
session, which will add "DoNotReplicateId" to every WAL entry.
   - shard splits
   - shard moves
   - create distributed table
   - undistribute table
   - alter distributed tables (for some cases)
   - reference table operations
   

The citus decoder, which decodes WAL events for CDC clients,
ignores any WAL entry with a replication origin that is not zero.
It also maps the shard names to distributed table names.
2023-03-28 16:00:21 +05:30
Onur Tirtir 616b5018a0
Add a GUC to disallow planning the queries that reference non-colocated tables via router planner (#6793)
Today we allow planning the queries that reference non-colocated tables
if the shards that the query targets are placed on the same node. However,
this may not be the case, e.g., after rebalancing shards, because it's
not guaranteed to have those shards on the same node anymore.
This commit adds the citus.enable_non_colocated_router_query_pushdown GUC
that can be used to disallow planning such queries via the router planner
when it's set to false. Note that the default value for this GUC will be
"true" for 11.3, but we will alter it to "false" in 12.0 so as not to
introduce a breaking change in a minor release.
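
For example, the stricter behavior can already be opted into on 11.3 by turning the GUC off (a sketch):

```sql
SET citus.enable_non_colocated_router_query_pushdown TO off;
```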

Closes #692.

Even more, allowing such queries to go through router planner also causes
generating an incorrect plan for the DML queries that reference distributed
tables that are sharded based on different replication factor settings. For
this reason, #6779 can be closed after altering the default value for this
GUC to "false", hence not now.

DESCRIPTION: Adds `citus.enable_non_colocated_router_query_pushdown` GUC
to ensure generating a consistent distributed plan for the queries that
reference non-colocated distributed tables (when set to "false", the
default is "true").
2023-03-28 13:10:29 +03:00
Teja Mupparti 9bab819f26 Disentangle MERGE planning code from the modify-planning code path 2023-03-27 10:41:46 -07:00
Teja Mupparti da7db53c87 Refactor some of the planning code to accommodate a new planning path for MERGE SQL 2023-03-22 11:29:24 -07:00
Onur Tirtir e1f1d63050
Rename AllRelations.. functions to AllDistributedRelations.. (#6789)
Because they're only interested in distributed tables. Even more,
this replaces HasDistributionKey() check with
IsCitusTableType(DISTRIBUTED_TABLE) because this doesn't make a
difference on main and sounds slightly more intuitive. Plus, this
would also allow safely using this function in
https://github.com/citusdata/citus/pull/6773.
2023-03-22 15:15:23 +03:00
Ahmet Gedemenli 2713e015d6
Check before logicalrep for rebalancer, error if needed (#6754)
DESCRIPTION: Check before logicalrep for rebalancer, error if needed

Check if we can use logical replication or not, in case of shard
transfer mode = auto, before executing the shard moves. If we can't,
error out. Before this PR, we used to error out in the middle of shard
moves:
```sql
set citus.shard_count = 4; -- just to get the error sooner
select citus_remove_node('localhost',9702);

create table t1 (a int primary key);
select create_distributed_table('t1','a');
create table t2 (a bigint);
select create_distributed_table('t2','a');

select citus_add_node('localhost',9702);
select rebalance_table_shards();
NOTICE:  Moving shard 102008 from localhost:9701 to localhost:9702 ...
NOTICE:  Moving shard 102009 from localhost:9701 to localhost:9702 ...
NOTICE:  Moving shard 102012 from localhost:9701 to localhost:9702 ...
ERROR:  cannot use logical replication to transfer shards of the relation t2 since it doesn't have a REPLICA IDENTITY or PRIMARY KEY
```

Now we check and error out in the beginning, without moving the shards.

fixes: #6727
2023-03-21 16:34:52 +03:00
Onur Tirtir aa465b6de1
Decide what to do with router planner error at one place (#6781) 2023-03-21 14:04:07 +03:00
Teja Mupparti cf55136281 1) Restrict MERGE command INSERT to the source's distribution column
Fixes #6672

2) Move all MERGE related routines to a new file merge_planner.c

3) Make ConjunctionContainsColumnFilter() static again, and rearrange the code in MergeQuerySupported()
4) Restore the original format in the comments section.
5) Add big serial test. Implement latest set of comments
2023-03-16 13:43:08 -07:00
Teja Mupparti 1e42cd3da0 Support MERGE on distributed tables with restrictions
This implements phase II of MERGE SQL support

Support routable query where all the tables in the merge-sql are distributed, co-located, and both the source and
target relations are joined on the distribution column with a constant qual. This should be a Citus single-task
query. Below is an example.

SELECT create_distributed_table('t1', 'id');
SELECT create_distributed_table('s1', 'id', colocate_with => 't1');

MERGE INTO t1
USING s1 ON t1.id = s1.id AND t1.id = 100
WHEN MATCHED THEN
UPDATE SET val = s1.val + 10
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED THEN
INSERT (id, val, src) VALUES (s1.id, s1.val, s1.src)

Basically, MERGE checks to see if:

- There are a minimum of two distributed tables (a source and a target).
- All the distributed tables are indeed colocated.
- MERGE relations are joined on the distribution column:
  MERGE .. USING .. ON target.dist_key = source.dist_key
- The query should touch only a single shard, i.e. the JOIN is ANDed with a constant qual:
  MERGE .. USING .. ON target.dist_key = source.dist_key AND target.dist_key = <>

If any of the conditions are not met, it raises an exception.

(cherry picked from commit 44c387b978)

This implements MERGE phase III

Support pushdown query where all the tables in the merge-sql are Citus-distributed, co-located, and both
the source and target relations are joined on the distribution column. This will generate multiple tasks
which execute independently after pushdown.

SELECT create_distributed_table('t1', 'id');
SELECT create_distributed_table('s1', 'id', colocate_with => 't1');

MERGE INTO t1
USING s1
ON t1.id = s1.id
        WHEN MATCHED THEN
                UPDATE SET val = s1.val + 10
        WHEN MATCHED THEN
                DELETE
        WHEN NOT MATCHED THEN
                INSERT (id, val, src) VALUES (s1.id, s1.val, s1.src)

*The only exception for both phases II and III is that UPDATEs and INSERTs must be done on the same shard-group
as the joined key; for example, the scenarios below are NOT supported, as the key value to be inserted/updated is not
guaranteed to be on the same node as the id distribution-column.

MERGE INTO target t
USING source s ON (t.customer_id = s.customer_id)
WHEN NOT MATCHED THEN - -
     INSERT(customer_id, …) VALUES (<non-local-constant-key-value>, ……);

OR this scenario where we update the distribution column itself

MERGE INTO target t
USING source s ON (t.customer_id = s.customer_id)
WHEN MATCHED THEN
     UPDATE SET customer_id = 100;

(cherry picked from commit fa7b8949a8)
2023-03-16 13:43:08 -07:00
Onur Tirtir f68fc9e69c
Decide core distribution params in CreateCitusTable (#6760)
Decide core distribution params in CreateCitusTable to reduce the
chances of creating Citus tables based on incorrect combinations of
distribution method and replication model params.

Also introduce DistributedTableParams struct to encapsulate the
parameters that are specific to distributed tables.
2023-03-14 14:24:52 +03:00
Onur Tirtir 20a5f3af2b
Replace CITUS_TABLE_WITH_NO_DIST_KEY checks with HasDistributionKey() (#6743)
Now that we will soon add another table type having DISTRIBUTE_BY_NONE
as distribution method and that we want the code to interpret such
tables mostly as distributed tables, let's make the definition of those
other two table types more strict by removing
CITUS_TABLE_WITH_NO_DIST_KEY
macro.

And instead, use HasDistributionKey() check in the places where the
logic applies to all table types that have / don't have a distribution
key. In future PRs, we might want to convert some of those
HasDistributionKey() checks if logic only applies to Citus local /
reference tables, not the others.

And adding HasDistributionKey() also allows us to consider having
DISTRIBUTE_BY_NONE as the distribution method as a "table attribute"
that can apply to distributed tables too, rather than something that
determines the table type.
2023-03-10 13:55:52 +03:00
Onur Tirtir e3cf7ace7c
Stabilize single_node.sql and others that report illegal node removal (#6751)
See
https://app.circleci.com/pipelines/github/citusdata/citus/30859/workflows/223d61db-8c1d-4909-9aea-d8e470f0368b/jobs/1009243.
2023-03-08 15:25:36 +03:00
Onur Tirtir d82c11f793
Refactor CreateDistributedTable() (#6742)
Split the main logic that allows creating a Citus table into the
internal function CreateCitusTable().

Old CreateDistributedTable() function was assuming that it's creating
a reference table when the distribution method is DISTRIBUTE_BY_NONE.
However, soon this won't be the case when adding support for creating
single-shard distributed tables because their distribution method would
also be the same.

Now the internal method CreateCitusTable() doesn't make any assumptions
about table's replication model or such. Instead, it expects callers to
properly set all such metadata bits.

Even more, some of the parameters the old CreateDistributedTable() takes
--such as the shard count-- were not meaningful for a reference table,
and the same would hold for the new table type.
2023-03-08 13:38:51 +03:00
Emel Şimşek 4043abd5aa
Exclude-Generated-Columns-In-Copy (#6721)
DESCRIPTION: Fixes a bug in shard copy operations.

For copying shards in both shard move and shard split operations, Citus
uses the COPY statement.

A COPY statement over all columns, in the following form,
`COPY target_shard FROM STDIN;`
throws an error when there is a GENERATED column in the shard table.

In order to fix this issue, we need to exclude the GENERATED columns in
the COPY and the matching SELECT statements. Hence this fix converts the
COPY and SELECT statements to the following form:
```
COPY target_shard (col1, col2, ..., coln) FROM STDIN;
SELECT (col1, col2, ..., coln) FROM source_shard;
```
where (col1, col2, ..., coln) does not include a GENERATED column. 
GENERATED column values are created in the target_shard as the values
are inserted.

Fixes #6705.

---------

Co-authored-by: Teja Mupparti <temuppar@microsoft.com>
Co-authored-by: aykut-bozkurt <51649454+aykut-bozkurt@users.noreply.github.com>
Co-authored-by: Jelte Fennema <jelte.fennema@microsoft.com>
Co-authored-by: Gürkan İndibay <gindibay@microsoft.com>
2023-03-07 18:15:50 +03:00
Ahmet Gedemenli 03f1bb70b7
Rebalance shard groups with placement count less than worker count (#6739)
DESCRIPTION: Adds logic to distribute unbalanced shards

If the number of shard placements (for a colocation group) is less than
the number of workers, it means that some of the workers will remain
empty. With this PR, we consider these shard groups as a colocation
group, in order to distribute them as evenly as possible across the
cluster.

Example:
```sql
create table t1 (a int primary key);
create table t2 (a int primary key);
create table t3 (a int primary key);
set citus.shard_count =1;
select create_distributed_table('t1','a');
select create_distributed_table('t2','a',colocate_with=>'t1');
select create_distributed_table('t3','a',colocate_with=>'t2');

create table tb1 (a bigint);
create table tb2 (a bigint);
select create_distributed_table('tb1','a');
select create_distributed_table('tb2','a',colocate_with=>'tb1');

select citus_add_node('localhost',9702);
select rebalance_table_shards();
```

Here we have two colocation groups, each with one shard group. Both
shard groups are placed on the first worker node. When we add a new
worker node and try to rebalance table shards, the rebalance planner
considers it well balanced and does nothing. With this PR, the
rebalancer tries to distribute these shard groups evenly across the
cluster as much as possible. For this example, with this PR, the
rebalancer moves one of the shard groups to the second worker node.

fixes: #6715
2023-03-06 14:14:27 +03:00
Emel Şimşek ed7cc8f460
Remove unused lock functions (#6747)
Code cleanup. This change removes two unused functions seemingly left
over after a previous refactoring of shard move code.
2023-03-06 13:59:45 +03:00
Jelte Fennema b489d763e1
Use pg_total_relation_size in citus_shards (#6748)
DESCRIPTION: Correctly report shard size in citus_shards view

When looking at citus_shards, people are interested in the actual size
that all the data related to the shard takes up on disk.
`pg_total_relation_size` is the function to use for that purpose. The
previously used `pg_relation_size` does not include indexes or TOAST.
Especially the missing toast can have enormous impact on the size of the
shown data.
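
For instance, the corrected sizes surface directly in the view (a sketch; column names are assumed from the citus_shards view):

```sql
SELECT table_name, shardid, nodename, nodeport, shard_size
FROM citus_shards
ORDER BY shard_size DESC;
```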
2023-03-06 10:53:12 +01:00
aykut-bozkurt e2654deeae
fix memory leak during altering distributed table with a lot of partitions and shards (#6726)
2 improvements to prevent memory leaks during altering or undistributing
distributed tables with a lot of partitions and shards:

1. Free memory for each call to ConvertTable so that colocated and partition tables at
`AlterDistributedTable`, `UndistributeTable`, or
`AlterTableSetAccessMethod` will not cause an increase
in memory usage,
2. Free memory while executing attach partition commands for each partition table at
`AlterDistributedTable` to prevent an increase in memory usage.

DESCRIPTION: Fixes memory leak issue during altering distributed table
with a lot of partitions and shards.

Fixes https://github.com/citusdata/citus/issues/6503.
2023-02-28 21:23:41 +03:00
Teja Mupparti 9cbfdc86dd MERGE: In deparser, add missing check for RETURNING clause. 2023-02-26 22:38:14 -08:00
Teja Mupparti d7b499929c Rearrange the common code into a new function to facilitate the multiple checks of the same conditions in a multi-modify MERGE statement 2023-02-24 12:55:11 -08:00
aykut-bozkurt a7689c3f8d
fix memory leak during distribution of a table with a lot of partitions (#6722)
We have a memory leak during distribution of a table with a lot of
partitions, as we do not release memory at the ExprContext until all
partitions are distributed. We improved 2 things to resolve the
issue:

1. We create and delete MemoryContext for each call to
`CreateDistributedTable` by partitions,
2. We rebuild the cache after we insert all the placements instead of
each placement for a shard.

DESCRIPTION: Fixes memory leak during distribution of a table with a lot
of partitions and shards.

Fixes https://github.com/citusdata/citus/issues/6572.
2023-02-17 18:12:49 +03:00
Emel Şimşek 756c1d3f5d
Remove auto_explain workaround in citus explain hook for ALTER TABLE (#6714)
When the auto_explain module is loaded and configured, EXPLAIN will be
implicitly run for all the supported commands. Postgres does not support
`EXPLAIN` for the `ALTER` command. However, auto_explain will try to
`EXPLAIN` other supported commands internally triggered by `ALTER`.

 For instance, 
`ALTER TABLE target_table ADD CONSTRAINT fkey_167 FOREIGN KEY (col_1)
REFERENCES ref_table(key) ... `

command may trigger a SELECT command in the following form for foreign
key validation purpose:
`SELECT fk.col_1 FROM ONLY target_table fk LEFT OUTER JOIN ONLY
ref_table pk ON ( pk.key OPERATOR(pg_catalog.=) fk.col_1) WHERE pk.key
IS NULL AND (fk.col_1 IS NOT NULL) `

For Citus tables, the Citus utility hook should ensure that constraint
validation is skipped for shell tables but they are done for shard
tables. The reason behind this design choice can be summed up as:

- An ALTER TABLE command via coordinator node is run in a distributed
transaction.
- Citus does not support nested distributed transactions. 
- A SELECT query on a distributed table (aka shell table) is also run in
a distributed transaction.
- Therefore, Citus does not support running a SELECT query on a shell
table while an ALTER TABLE command is running.

With
eadc88a800
a bug was introduced that broke the skip-constraint-validation behaviour of
Citus. With this change, we see that validation queries on distributed
tables are triggered within an `ALTER` command that adds constraints with a
validation check. This regression did not cause an issue for regular use
cases since the citus executor hook blocks those queries heuristically
when there is an ALTER TABLE command in progress.

The issue is surfaced as a crash (#6424 Workers, when configured to use
auto_explain, crash during distributed transactions.) when auto_explain
is enabled. This is due to auto_explain trying to execute the SELECT
queries in a nested distributed transaction.

Now since the regression with constraint validation is fixed in
https://github.com/citusdata/citus/issues/6543, we should be able to
remove the workaround.
2023-02-17 17:47:03 +03:00
aykut-bozkurt 9e69dd0e7f
fix single tuple result memory leak (#6724)
We should not forget to free the PGresult when we receive a single tuple
result from an internal backend.
Single tuple results are normally freed by our ReceiveResults for the
`tupleDescriptor != NULL` flow, but not for those with `tupleDescriptor
== NULL`. See PR #6722 for details.

DESCRIPTION: Fixes memory leak issue with query results that return a
single row.
2023-02-17 14:15:09 +03:00
aykut-bozkurt 273911ac7f
prevent memory leak during ConvertTable with a lot of partitions (#6693)
Prevents memory leak during ConvertTable call for a table with a lot of
partitions.

DESCRIPTION: Fixes memory leak during undistribution and alteration of a
table with a lot of partitions.
2023-02-13 15:22:13 +03:00
Teja Mupparti 0824d9c1fb Miscellaneous cleanup 2023-02-09 13:05:59 -08:00
Onur Tirtir 483b51392f
Bump Citus to 11.3devel (#6690) 2023-02-06 10:23:25 +00:00
Gokhan Gulbiz b6a4652849
Stop background daemon before dropping the database (#6688)
DESCRIPTION: Stop maintenance daemon when dropping a database even
without Citus extension

Fixes #6670
2023-02-03 15:15:44 +03:00
Jelte Fennema f061dbb253
Also reset transactions at connection shutdown (#6685)
In #6314 I refactored the connection cleanup to be simpler to
understand and use. However, by doing so I introduced a use-after-free
possibility (that valgrind luckily picked up):

In the `ShouldShutdownConnection` path of
`AfterXactHostConnectionHandling`
we free connections without removing the `transactionNode` from the
dlist that it might be part of. Before the refactoring this wasn't a
problem, because the dlist would be completely reset quickly after in
`ResetGlobalVariables` (without reading or writing the dlist entries).

The refactoring changed this by moving the `dlist_delete` call to
`ResetRemoteTransaction`, which in turn was called in the
`!ShouldShutdownConnection` path of `AfterXactHostConnectionHandling`.
Thus this `!ShouldShutdownConnection` path would now delete from the
`dlist`, but the `ShouldShutdownConnection` path would not. Thus to
remove itself the deleting path would sometimes update nodes in the list
that were freed right before.

There are two ways of fixing this:
1. Call `dlist_delete` from **both** of paths.
2. Call `dlist_delete` from **neither** of the paths.

This commit implements the second approach, and #6684 implements the
first. We need to choose which approach we prefer.

To make calling `dlist_delete` from both paths actually work, we also need
to use a slightly different check to determine if we need to call dlist_delete.
Various regression tests showed that there can be cases where the
`transactionState` is something other than `REMOTE_TRANS_NOT_STARTED`
but the connection was not added to the `InProgressTransactions` list.
One example of such a case is when running `TransactionStateMachine`
without calling `StartRemoteTransactionBegin` beforehand. In those
cases the connection won't be added to `InProgressTransactions`, but
the `transactionState` is changed to `REMOTE_TRANS_SENT_COMMAND`. 

Sidenote: This bug already existed in 11.1, but valgrind didn't catch it
back then. My guess is that this happened because #6314 was merged after
the initial release branch was cut.

Fixes #6638
2023-02-02 16:05:34 +01:00
Hanefi Onaldi 47ff03123b
Improve rebalance reporting for retried tasks (#6683)
If there is a problem with an ongoing rebalance, we did not show details
on background tasks that are stuck in runnable state. Similar to how we
show details for errored tasks, we now show details on tasks that are
being retried.

Earlier we showed the following output when a task was stuck:

```
┌────────────────────────────┐
│ {                         ↵│
│     "tasks": [            ↵│
│     ],                    ↵│
│     "task_state_counts": {↵│
│         "done": 13,       ↵│
│         "blocked": 2,     ↵│
│         "runnable": 1     ↵│
│     }                     ↵│
│ }                          │
└────────────────────────────┘
```

Now we show details like the following:

```
+-----------------------------------------------------------------------
| {
|     "tasks": [
|         {
|             "state": "runnable",
|             "command": "SELECT pg_catalog.citus_move_shard_placement(1
|             "message": "ERROR: Moving shards to a node that shouldn't
|             "retried": 2,
|             "task_id": 3
|         }
|     ],
|     "task_state_counts": {
|         "blocked": 1,
|         "runnable": 1
|     }
| }
+-----------------------------------------------------------------------
```
2023-01-31 15:26:52 +03:00
Jelte Fennema 14c31fbb07
Fix background rebalance when reference table has no PK (#6682)
DESCRIPTION: Fix background rebalance when reference table has no PK

For the background rebalance we would always fail if a reference table
that was not replicated to all nodes would not have a PK (or replica
identity). Even when we used force_logical or block_writes as the shard
transfer mode. This fixes that and adds some regression tests.

Fixes #6680
2023-01-31 12:18:29 +01:00
aykut-bozkurt 8a9bb272e4
fix dropping table_name option from foreign table (#6669)
We should disallow dropping the table_name option if the foreign table is
in metadata. Otherwise, we get a "table not found" error which contains
the shardid.

DESCRIPTION: Fixes an unexpected foreign table error by disallowing to drop the table_name option.

Fixes #6663
2023-01-30 17:24:30 +03:00
Marco Slot a482b36760
Revert "Support MERGE on distributed tables with restrictions" (#6675)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-01-30 15:01:59 +01:00
aykut-bozkurt ab71cd01ee
fix multi level recursive plan (#6650)
The recursive planner should handle the whole tree from bottom to top in a
single pass, i.e. it should have already recursively planned all
required parts in its first pass. Otherwise, this means we have a bug in the
recursive planner, which needs to be handled. We add a check here and
return an error.

DESCRIPTION: Fixes wrong results by throwing an error in case the recursive
planner passes over the query multiple times.

We found 3 different cases which cause the recursive planner to pass over
the query multiple times.
1. A sublink in the WHERE clause is planned at the second pass after we
recursively planned a distributed table at the first pass. Fixed by PR
#6657.
2. Local-distributed joins are recursively planned at both the first and
the second pass. Issue #6659.
3. Some parts of the query are considered to be noncolocated at the
second pass as we do not generate attribute equivalences between
nondistributed and distributed tables. Issue #6653
2023-01-27 21:25:04 +03:00
Gokhan Gulbiz 4e26464969
Allow plain pg foreign tables without a table_name option (#6652) 2023-01-27 16:34:11 +03:00
Jelte Fennema 81dcddd1ef
Actually skip constraint validation on shards after shard move (#6640)
DESCRIPTION: Fix foreign key validation skip at the end of shard move

In eadc88a we started completely skipping foreign key constraint
validation at the end of a non blocking shard move, instead of only for
foreign keys to reference tables. However, it turns out that this didn't
work at all because of a hard to notice bug: By resetting the
SkipConstraintValidation flag at the end of our utility hook, we
actually make the SET command that sets it a no-op.

This fixes that bug by removing the code that resets it. This is fine
because #6543 removed the only place where we set the flag in C code. So
the resetting of the flag has no purpose anymore. This PR also adds a
regression test, because it turned out we didn't have any otherwise we
would have caught that the feature was completely broken.

It also moves the constraint validation skipping to the utility hook.
The reason is that #6550 showed us that this is the better place to skip
it, because it will also skip the planning phase and not just the
execution.
2023-01-27 13:08:05 +01:00
aykut-bozkurt 8870f0f90b
fix order of recursive sublink planning (#6657)
We should do the sublink conversions at the end of the recursive
planning because earlier steps might have transformed the query into a
shape that requires recursively planning the sublinks.

DESCRIPTION: Fixes early sublink check at recursive planner.

Related to PR https://github.com/citusdata/citus/pull/6650
2023-01-27 14:35:16 +03:00
Onur Tirtir 97dba0ac00
Fix uninit mem acceess in UpdateFunctionDistributionInfo (#6658)
Fixes #6655.

heap_modify_tuple() fetches values[i] if replace[i] is set true,
regardless of the fact that whether isnull[i] is true or false. So
similar to replace[], let's init values[] & isnull[] too.

DESCRIPTION: Fixes an uninitialized memory access in
create_distributed_function()
2023-01-27 11:00:41 +03:00
Emel Şimşek 24f6136f72
Fixes ADD {PRIMARY KEY/UNIQUE} USING INDEX cmd (#6647)
This change allows creating a constraint without a name using an index.
The index name will be used as the constraint name the same way postgres
handles it.
Fixes issue #6644

This commit also cleans up some leftovers from nameless constraint checks.
With this commit, we now fully support adding all nameless constraints
directly to a table.
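
A hypothetical example of the now-working syntax (table and index names are made up):

```sql
CREATE UNIQUE INDEX items_id_idx ON items (id);
-- The index name, items_id_idx, becomes the constraint name.
ALTER TABLE items ADD PRIMARY KEY USING INDEX items_id_idx;
```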

Co-authored-by: naisila <nicypp@gmail.com>
2023-01-25 21:28:07 +03:00
Emel Şimşek 2169e0222b
Propagates NOT VALID option for FK&CHECK constraints w/out a name (#6649)
Adds NOT VALID option to deparser. When we need to deparse:
  "ALTER TABLE ADD FOREIGN KEY ... NOT VALID"
  "ALTER TABLE ADD CHECK ... NOT VALID"
NOT VALID option should be propagated to workers.
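
Hypothetical examples of commands whose NOT VALID option is now deparsed and propagated (table and column names are made up):

```sql
ALTER TABLE orders ADD FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;
ALTER TABLE orders ADD CHECK (amount >= 0) NOT VALID;
```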

Fixes issue #6646

This commit also uses AppendColumnNameList function
instead of repeated code blocks in two appropriate places
in the "ALTER TABLE" deparser.
2023-01-25 20:41:04 +03:00
Hanefi Onaldi 94b63f35a5
Prevent crashes on update with returning clauses (#6643)
If an update query on a reference table has a RETURNING clause with a
subquery that accesses some other local table, we end up with a crash.

This commit prevents the crash, but does not prevent other error
messages from happening due to Citus not being able to pushdown the
results of that subquery in a valid SQL command.

Related: #6634
2023-01-24 20:07:43 +03:00
Naisila Puka 3c96b2a0cd
Remove unused function RelationUsesIdentityColumns (#6645)
Cleanup from #6591
2023-01-24 17:10:05 +03:00
Jelte Fennema 7a7880aec9
Fix regression in allowed foreign keys on distributed tables (#6550)
DESCRIPTION: Fix regression in allowed foreign keys on distributed
tables

In commit eadc88a we changed how we skip foreign key validation. The
goal was to skip it in more cases. However, one change had the
unintended regression of introducing failures when trying to create
certain foreign keys. This reverts that part of the change.

The way of skipping validation of foreign keys that was introduced in
eadc88a was skipping validation during execution. The reason that
this caused this regression was because some foreign key validation
queries already fail during planning. In those cases it never gets to
the execution step where it would later be skipped.

Fixes #6543
2023-01-24 16:09:21 +03:00
Emel Şimşek 58368b7783
Enable adding FOREIGN KEY constraints on Citus tables without a name. (#6616)
DESCRIPTION: Enable adding FOREIGN KEY constraints on Citus tables
without a name

This PR enables adding a foreign key to a distributed/reference/Citus
local table without specifying the name of the constraint, e.g. `ALTER
TABLE items ADD FOREIGN KEY (user_id) REFERENCES users (id);`
2023-01-20 01:43:52 +03:00
Gokhan Gulbiz 2388fbea6e
Identity Column Support on Citus Managed Tables (#6591)
DESCRIPTION: Identity Column Support on Citus Managed Tables
2023-01-19 15:45:41 +03:00
Marco Slot 64e3fee89b
Remove shardstate leftovers (#6627)
Remove ShardState enum and associated logic.

Co-authored-by: Marco Slot <marco.slot@gmail.com>
Co-authored-by: Ahmet Gedemenli <afgedemenli@gmail.com>
2023-01-19 11:43:58 +03:00
Teja Mupparti 44c387b978 Support MERGE on distributed tables with restrictions
This implements phase II of MERGE SQL support

Support routable query where all the tables in the merge-sql are distributed, co-located, and both the source and
target relations are joined on the distribution column with a constant qual. This should be a Citus single-task
query. Below is an example.

SELECT create_distributed_table('t1', 'id');
SELECT create_distributed_table('s1', 'id', colocate_with => 't1');

MERGE INTO t1
USING s1 ON t1.id = s1.id AND t1.id = 100
WHEN MATCHED THEN
UPDATE SET val = s1.val + 10
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED THEN
INSERT (id, val, src) VALUES (s1.id, s1.val, s1.src)

Basically, MERGE checks to see if:

- There are a minimum of two distributed tables (a source and a target).
- All the distributed tables are indeed colocated.
- MERGE relations are joined on the distribution column:
  MERGE .. USING .. ON target.dist_key = source.dist_key
- The query should touch only a single shard, i.e. the JOIN is ANDed with a constant qual:
  MERGE .. USING .. ON target.dist_key = source.dist_key AND target.dist_key = <>

If any of the conditions are not met, it raises an exception.
2023-01-18 11:05:27 -08:00
Ahmet Gedemenli b3b135867e
Remove shardstate from placement insert functions (#6615) 2023-01-18 09:52:38 +01:00
Hanefi Onaldi f21dfd5fae
Rebalance Progress Reporting API (#6576)
citus_job_list() lists all background jobs by simply showing the records
in pg_dist_background_job.

citus_job_status(job_id bigint, raw boolean default false) shows the
status of a single background job by appending a jsonb details column to
the associated row from pg_dist_background_job. If the raw argument is
set, machine readable sizes are used instead of human readable
alternatives.

citus_rebalance_status(raw boolean default false) shows the status of
the last rebalance operation. If the raw argument is set, machine
readable sizes are used instead of human readable alternatives.
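
Illustrative usage of the three UDFs described above (the job id is made up):

```sql
SELECT * FROM citus_job_list();
SELECT * FROM citus_job_status(5);
SELECT * FROM citus_rebalance_status(raw := true);
```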
2023-01-16 16:17:31 +03:00
Jelte Fennema 92689a8362
Make GPIDs work with pg_dist_poolinfo (#6588)
The original implementation of GPIDs didn't work correctly when using
`pg_dist_poolinfo` together with PgBouncer. The reason is that it
assumed that once a connection was made to a worker, the originating
GPID should stay the same forever. But when pg_dist_poolinfo is used
this isn't the case, because the same connection on the worker might be
used by different backends of the coordinator.

This fixes that issue by updating the GPID whenever a new application
name is set on a connection. This is the only thing that's needed,
because PgBouncer already sets the application name correctly on the
server connection whenever a client is updated.
2023-01-13 14:39:19 +00:00
Marco Slot ad3407b5ff
Revert "Make the metadata syncing less resource invasive [Phase-1]" (#6618)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-01-13 13:56:55 +01:00
Ahmet Gedemenli ed6dd8b086
Mark 0/0 lsn results as null (#6617)
Marks `source_lsn` and `target_lsn` fields as null if the result is 0/0
2023-01-13 15:33:43 +03:00
Emel Şimşek 28ed013a91
Enable ALTER TABLE ... ADD CHECK (#6606)
DESCRIPTION: Enable adding CHECK constraints on distributed tables
without the client having to provide a constraint name.

This PR enables the following command syntax for adding check
constraints to distributed tables.
 ALTER TABLE ... ADD CHECK ... 

by creating a default constraint name and transforming the command into
the below syntax before sending it to workers.

ALTER TABLE ... ADD CONSTRAINT \<conname> CHECK ...
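
A hypothetical example of the enabled syntax (table and column are made up); Citus generates the constraint name before propagating the command:

```sql
ALTER TABLE products ADD CHECK (price > 0);
```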
2023-01-12 23:31:06 +03:00
Ahmet Gedemenli 9b9d8e7abd Use shard transfer UDFs with node ids for rebalancing 2023-01-12 16:57:51 +03:00
Ahmet Gedemenli e5fef40c06 Introduce citus_move_shard_placement UDF with nodeid 2023-01-12 16:57:51 +03:00
Ahmet Gedemenli e19c545fbf Introduce citus_copy_shard_placement UDF with nodeid 2023-01-12 16:57:51 +03:00
Emel Şimşek d322f9e382
Handle DEFERRABLE option for the relevant constraints at deparser. (#6613)
Table Constraints UNIQUE, PRIMARY KEY and EXCLUDE may have option
DEFERRABLE in their command syntax. This PR handles the option when
deparsing the relevant constraint statements.

NOT DEFERRABLE
and
INITIALLY IMMEDIATE (if DEFERRABLE)

are the default values for the option so we only append the non-default
values to the alter table statement.
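
A hypothetical example with a non-default option that now gets deparsed (names are made up):

```sql
ALTER TABLE bookings
    ADD CONSTRAINT bookings_ref_unique UNIQUE (ref_code) DEFERRABLE INITIALLY DEFERRED;
```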
2023-01-12 12:32:38 +03:00
Jelte Fennema 34df853bda
Fix bug introduced by #6412 (#6590)
In #6412 I made a change to not re-assign the global PID if it was
already set. This inadvertently introduced a regression where `userId`
and `databaseId` would not be set on the backend data when the global
PID was assigned in the authentication hook.

This fixes it by doing two things:
1. Removing `userId` from `BackendData`, since it's not used anywhere
   anyway.
2. Move assignment of `databaseId` to dedicated
   `SetBackendDataDatabaseId` function, that isn't a no-op when global
   pid is already set.

Since #6412 is not released yet this does not need a description.
2023-01-10 16:21:57 +01:00
Jelte Fennema c2b4087ff0
Quote all identifiers that we use for logical replication (#6604)
In #6598 it was noticed that Citus could generate syntactically invalid
statements during logical replication. With #6603 we resolved the direct
issue, by only generating valid subscription names. But there was also
the underlying problem that we did not escape certain identifier
strings. While in theory this should be okay since we should only
generate names that are valid, this issue reiterated that we should not
take this for granted. As an extra line of defense this quotes all
identifiers we use during logical replication setup.
2023-01-06 14:12:03 +00:00
Ahmet Gedemenli 26b170e1a8
Use %u instead of %i for naming subscriptions & roles (#6603)
DESCRIPTION: Fix the modifier for subscription and role creation

fixes: #6598 
Reported by @ivyazmitinov
2023-01-06 14:38:32 +01:00
Ahmet Gedemenli bc3383170e
Fix crash when trying to replicate a ref table that is actually dropped (#6595)
DESCRIPTION: Fix crash when trying to replicate a ref table that is actually dropped

see #6592 
We should have a real solution for it.
2023-01-06 14:52:08 +03:00
Emel Şimşek db7a70ef3e
Enable ALTER TABLE ... ADD UNIQUE and ADD EXCLUDE. (#6582)
DESCRIPTION: Adds support for creating table constraints UNIQUE and
EXCLUDE via the ALTER TABLE command without the client having to specify a name.

ALTER TABLE ... ADD CONSTRAINT <conname> UNIQUE ...
ALTER TABLE ... ADD CONSTRAINT <conname> EXCLUDE ...

commands require the client to provide an explicit constraint name.
However, in postgres it is possible for clients not to provide a name
and let postgres generate it using the following commands:

ALTER TABLE ... ADD UNIQUE ...
ALTER TABLE ... ADD EXCLUDE ...

This PR enables the same functionality for citus tables.
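
Hypothetical examples of the enabled syntax on Citus tables (names are made up; Postgres generates the constraint names):

```sql
ALTER TABLE items ADD UNIQUE (id);
ALTER TABLE reservations ADD EXCLUDE USING btree (room_id WITH =);
```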
2023-01-05 18:12:32 +03:00
Önder Kalacı eb75decbeb
Undo planner extended statistics override (#6492) 2023-01-04 13:25:57 +01:00
Önder Kalacı a1aa96b32c
Make the metadata syncing less resource invasive [Phase-1] (#6537) 2023-01-04 11:36:45 +01:00
Ahmet Gedemenli 235047670d
Drop SHARD_STATE_TO_DELETE (#6494)
DESCRIPTION: Drop `SHARD_STATE_TO_DELETE` and use the cleanup records
instead

Drops the shard state that is used to mark shards as orphaned. Now we
insert cleanup records into `pg_dist_cleanup` so "orphaned" shards will
be dropped either by maintenance daemon or internal cleanup calls. With
this PR, we make the "cleanup orphaned shards" functions no-ops, as
they would not be needed anymore.

This PR includes some naming changes about placement functions. We don't
need functions that filter orphaned shards, as there will be no orphaned
shards anymore.

We will also be introducing a small script with this PR, for users with
orphaned shards. We'll basically delete the orphaned shard entries from
`pg_dist_placement` and insert cleanup records into `pg_dist_cleanup`
for each one of them, during Citus upgrade.

We also have a lot of flakiness fixes in this PR.

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-01-03 14:38:16 +03:00
Ahmet Gedemenli 1b1e737e51
Drop cleanup on failure (#6584)
DESCRIPTION: Defers cleanup after a failure in shard move or split

We don't need to do a cleanup in case of failure on a shard transfer or
split anymore. Because,
* Maintenance daemon will clean them up anyway.
* We trigger a cleanup at the beginning of shard transfers/splits. 
* The cleanup on failure logic also can fail sometimes and instead of
the original error, we throw the error that is raised by the cleanup
procedure, and it causes confusion.
2022-12-28 15:48:44 +03:00
Ahmet Gedemenli eba9abeee2
Fix leftover shard copy on the target node when tx with move is aborted (#6583)
DESCRIPTION: Cleanup the shard on the target node in case of a
failed/aborted shard move

Inserts a cleanup record for the moved shard placement on the target
node. If the move operation succeeds, the record will be deleted. If
not, it will remain there to be cleaned up later.

fixes: #6580
2022-12-27 22:42:46 +03:00
Teja Mupparti 9a9989fc15 Support MERGE Phase – I
All the tables (target, source or any CTE present) in the SQL statement are local, i.e. a merge-sql with a combination of Citus local and
non-Citus tables (regular Postgres tables) should work and give the same result as Postgres MERGE on regular tables. Catch and throw an
exception (not-yet-supported) for all other scenarios during the Citus-planning phase.
2022-12-18 20:32:15 -08:00
Emel Şimşek 5268d0a6cb
Enable PRIMARY KEY generation via ALTER TABLE even if the constraint name is not provided (#6520)
DESCRIPTION: Support ALTER TABLE .. ADD PRIMARY KEY ... command

Before processing
	> **ALTER TABLE ... ADD PRIMARY KEY ...**
command

1. Create a primary key name to use as the constraint name.
2. Change the **ALTER TABLE ... ADD PRIMARY KEY ...** command into the
**ALTER TABLE ... ADD CONSTRAINT \<constraint name> PRIMARY KEY ...**
form (see the sketch below).
This is the only form in which we can specify a name for a primary key. If we run
ALTER TABLE .. ADD PRIMARY KEY, postgres
would create a constraint name internally in its own scheme. But the
problem is that we need to create constraint names
for shards in our own scheme, which is \<constraint name>_\<shardid>.
Hence we need to create a name and send it to workers so that the
workers can append the shardid.
3. Run the changed command on the coordinator to make sure we are using
the same constraint name across the board.
4. Send the changed command to workers such that it is executed for the
main table as well as for the shards.
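
A hypothetical sketch of the rewrite described in step 2 (table and constraint names are made up):

```sql
-- A command such as
--   ALTER TABLE items ADD PRIMARY KEY (id);
-- is transformed on the coordinator into the named form before propagation:
ALTER TABLE items ADD CONSTRAINT items_pkey PRIMARY KEY (id);
-- so that each shard can get items_pkey_<shardid> as its constraint name.
```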

Fixes #6515.
2022-12-16 20:34:00 +03:00
aykut-bozkurt 9c0073ba57
remove unused boundary type (#6563)
Removes unused job boundary tag `SUBQUERY_MAP_MERGE_JOB`.

Only usage is at `BuildMapMergeJob`, which is only called when the
boundary = `JOIN_MAP_MERGE_JOB`. Hence, it should be safe to remove.
2022-12-16 18:19:22 +03:00
Onder Kalaci feb5534c65 Do not create additional WaitEventSet for RemoteSocketClosed checks
Before this commit, we created an additional WaitEventSet for
checking whether the remote socket is closed per connection -
only once at the start of the execution.

However, for certain workloads, such as pgbench select-only
workloads, the creation/deletion of the additional WaitEventSet
adds ~7% CPU overhead, which is also reflected on the benchmark
results.

With this commit, we use the same WaitEventSet for the purposes
of checking the remote socket at the start of the execution.

We use "rebuildWaitEventSet" flag so that the executor can re-use
the existing WaitEventSet.

As a result, we see the following improvements on PG 15:

main			 : 120051 tps, 0.532 ms latency avg.
avoid_wes_rebuild: 127119 tps, 0.503 ms latency avg.

And, on PG 14, as expected, there is no difference

main			 : 129191 tps, 0.495 ms latency avg.
avoid_wes_rebuild: 129480 tps, 0.494 ms latency avg.

But, note that PG 15 is slightly (~1.5%) slower than PG 14.
That is probably the overhead of checking the remote socket.
2022-12-14 22:42:55 +01:00
Onder Kalaci d52da55ac0 Move WaitEvent to DistributedExecution
Prep. for caching WaitEventsSet/WaitEvents
2022-12-14 21:59:19 +01:00
Nils Dijk b5b73d78c3
add prepare and finish pg upgrade functions to 11.2-1 (#6560)
Fixes a missed include in #6315.

While adding the cluster clock we have added some extra steps to
`citus_prepare_pg_upgrade` and `citus_finish_pg_upgrade`. These changes
were not added to the citus upgrade and downgrade scripts, this allowed
for a syntax error to slip in.

This PR adds the new versions of both UDF's to the upgrade script while
adding the old version to the downgrade script. This exposed the syntax
error which is also solved.
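
As a sketch, these are the two UDFs whose new versions are now included, as typically invoked around a pg_upgrade run:

```sql
SELECT citus_prepare_pg_upgrade();  -- on the old server, before running pg_upgrade
-- ... run pg_upgrade ...
SELECT citus_finish_pg_upgrade();   -- on the new coordinator, after pg_upgrade
```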
2022-12-14 12:34:22 +01:00
aykut-bozkurt 8be4ce546e
fix vanilla test status on CI (#6555)
- Because of the make command used for vanilla tests, the test status is
always shown as success on CI. As a fix, I added `&& false` at the end
of the diff-file copy command to make the command fail when check-vanilla
fails.
```make
check-vanilla: all
	$(pg_regress_multi_check) --vanillatest || (cp $(vanilla_diffs_file) $(citus_abs_srcdir)/regression.diffs && false)
```

- I also fixed some vanilla tests that fail due to recently added
clock-related operators showing up in some queries.
2022-12-13 11:15:47 +03:00