Commit Graph

5827 Commits (dbfdaca0f077c815357b8958da4b7559de0975cc)

Author SHA1 Message Date
jeff-davis b34b1ce06b Columnar: fix wraparound bug. (#5962)
columnar_vacuum_rel() now advances relfrozenxid.

Fixes #5958.
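
A minimal sketch of how to observe the fix (table name and data are illustrative; `USING columnar` assumes the columnar access method is installed):
```sql
-- hypothetical columnar table
CREATE TABLE events_columnar (id bigint, payload jsonb) USING columnar;
INSERT INTO events_columnar SELECT i, '{}'::jsonb FROM generate_series(1, 1000) i;

SELECT relfrozenxid FROM pg_class WHERE relname = 'events_columnar';
VACUUM events_columnar;  -- with this fix, relfrozenxid advances
SELECT relfrozenxid FROM pg_class WHERE relname = 'events_columnar';
```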

(cherry picked from commit 74ce210f8b)
2022-05-31 07:46:12 -07:00
Onder Kalaci 0d0dd0af1c Show that no metadata is sent when disabled
(cherry picked from commit 89c1ccb7a5)
2022-05-30 17:01:49 +02:00
Onder Kalaci 3227d6551e Do not send metadata changes during add node if citus.enable_metadata_sync is set to false
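
A sketch of the scenario this commit guards against (node name and port are illustrative):
```sql
SET citus.enable_metadata_sync TO false;
-- with this commit, adding a node no longer sends metadata changes
SELECT citus_add_node('worker-2', 5432);
```
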
(cherry picked from commit 7157152f6c)
2022-05-30 17:01:44 +02:00
Onder Kalaci d147d5d0c5 Avoid assertion failure on citus_add_node
(cherry picked from commit 010a2a408e)
2022-05-30 17:01:38 +02:00
Ahmet Gedemenli 4b5f749c23 Propagate dependent views upon distribution (#5950)
(cherry picked from commit 26d927178c)
2022-05-26 18:58:04 +03:00
Burak Velioglu 29c67c660d Create views and materialized views with the right schema and owner while
altering the distributed table.

To be able to alter a view's owner without enforcing sequential mode, the
ALTER VIEW processing functions have been updated to use a metadata
connection.
2022-05-25 10:42:54 +03:00
Gledis Zeneli 6da2d41e00 Do not obtain AccessShareLock before actual lock (#5965)
Do not obtain AccessShareLock before acquiring the distributed locks.

Acquiring an AccessShareLock ensures that the relations on which we are trying to get a distributed lock will not be dropped between the time the LOCK command is issued and the time the LOCK commands are sent to the workers. However, this also leads to distributed deadlocks in scenarios such as the following:

```sql
-- distributed lock acquisition order: coor, w1, w2

-- on w2
LOCK t1 IN ACCESS EXCLUSIVE MODE;
-- acquire AccessShareLock locally on t1 to ensure it is not dropped while we get ready to distribute the lock

      -- concurrently on w1
      LOCK t1 IN ACCESS EXCLUSIVE MODE;
      -- acquire AccessShareLock locally on t1 to ensure it is not dropped while we get ready to distribute the lock
      -- acquire dist lock on coor, w1, gets blocked on local AccessShareLock on w2

-- on w2 continuation of the execution above
-- starts to acquire dist locks and gets blocked on the coor by the lock acquired by w1

-- distributed deadlock

```

We opt to avoid such deadlocks, at the cost of possibly running into errors when the relations we are trying to acquire locks on get dropped.

(cherry picked from commit 27ddb4fc8e)
2022-05-23 17:28:37 +03:00
Onder Kalaci 2d5560537b Due to new commits in the master branch, outputs diverged 2022-05-23 09:36:38 +02:00
Onder Kalaci 8b0499c91a Parallelize metadata syncing on node activate
It is often useful to be able to sync the metadata in parallel
across nodes.

Also citus_finalize_upgrade_to_citus11() uses
start_metadata_sync_to_primary_nodes() after this commit.

Note that this commit does not parallelize all pieces of node
activation or metadata syncing. Instead, it tries to parallelize
potentially large parts of the metadata, namely the objects and
distributed tables (in general, Citus tables).

In the future, it would be nice to sync the reference tables
in parallel across nodes.

Create ~720 distributed tables / ~23450 shards
```SQL
-- declaratively partitioned table
CREATE TABLE github_events_looooooooooooooong_name (
  event_id bigint,
  event_type text,
  event_public boolean,
  repo_id bigint,
  payload jsonb,
  repo jsonb,
  actor jsonb,
  org jsonb,
  created_at timestamp
) PARTITION BY RANGE (created_at);

SELECT create_time_partitions(
  table_name         := 'github_events_looooooooooooooong_name',
  partition_interval := '1 day',
  end_at             := now() + '24 months'
);

CREATE INDEX ON github_events_looooooooooooooong_name USING btree (event_id, event_type, event_public, repo_id);
SELECT create_distributed_table('github_events_looooooooooooooong_name', 'repo_id');

SET client_min_messages TO ERROR;

```

across 1 node: almost the same, as expected
```SQL

SELECT start_metadata_sync_to_primary_nodes();
Time: 15664.418 ms (00:15.664)

select start_metadata_sync_to_node(nodename,nodeport) from pg_dist_node;
Time: 14284.069 ms (00:14.284)
```

across 7 nodes: ~3.5x improvement
```SQL

SELECT start_metadata_sync_to_primary_nodes();
┌──────────────────────────────────────┐
│ start_metadata_sync_to_primary_nodes │
├──────────────────────────────────────┤
│ t                                    │
└──────────────────────────────────────┘
(1 row)

Time: 25711.192 ms (00:25.711)

-- across 7 nodes
select start_metadata_sync_to_node(nodename,nodeport) from pg_dist_node;
Time: 82126.075 ms (01:22.126)
```

(cherry picked from commit dd02e1755f)
2022-05-23 09:25:31 +02:00
Onder Kalaci 513e073206 Fixes a bug that prevents dropping/altering indexes
There are two problems in this area. First, when an index is defined
on expressions, we should call `transformIndexExpression()` before
generating the index name. That is what Postgres does.

Second, because of 40c24bfef9,
PG 13 and PG 14 generate different names for indexes with function calls, even for local PG tables.
Assume we have:
```SQL
create table t(id int);
select create_distributed_table('t', 'id');
create index ON t (my_very_boring_function(id));
```

On PG 13, the name of the index is `t_expr_idx`
```SQL
\d t
Table "public.t"
┌────────┬─────────┬───────────┬──────────┬─────────┐
│ Column │  Type   │ Collation │ Nullable │ Default │
├────────┼─────────┼───────────┼──────────┼─────────┤
│ id     │ integer │           │          │         │
└────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "t_expr_idx" btree (my_very_boring_function(id::bigint))
```

On PG 14, the name of the index is `t_my_very_boring_function_idx`
```SQL
\d t
 Table "public.t"
┌────────┬─────────┬───────────┬──────────┬─────────┐
│ Column │  Type   │ Collation │ Nullable │ Default │
├────────┼─────────┼───────────┼──────────┼─────────┤
│ id     │ integer │           │          │         │
└────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "t_my_very_boring_function_idx" btree (my_very_boring_function(id::bigint))

```

The second issue is not very critical. The important part is that
we adjust regression tests to drop all the indexes, which ensures
the index names are sane on any version.

(cherry picked from commit 2cc4053fc1)
2022-05-23 09:22:25 +02:00
Onder Kalaci 4b5cb7e2b9 Mark existing views as distributed when upgrading to 11.0+
We have a mechanism which ensures that newly distributed
objects are recorded during `alter extension citus update`.

However, the logic was lacking "view"s. With this commit, we make
sure that existing views are also marked as distributed during
upgrade.
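
A hedged way to verify the effect after upgrading (the catalog is assumed to be reachable as `pg_dist_object`; older versions expose it as `citus.pg_dist_object`):
```sql
ALTER EXTENSION citus UPDATE;

-- existing views should now appear among the distributed objects
SELECT pg_identify_object(classid, objid, objsubid)
FROM pg_dist_object
WHERE classid = 'pg_class'::regclass;
```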

(cherry picked from commit ee45e7bfbf)
2022-05-23 09:22:17 +02:00
Gledis Zeneli 97b453e679 Add TRUNCATE arbitrary config tests (#5848)
Adds TRUNCATE arbitrary config tests.
Also adds the ability to skip tests from particular configs.
2022-05-20 19:53:18 +02:00
Marco Slot 8c5035c0a5 Improve nested execution checks and add GUC to disable 2022-05-20 19:35:59 +02:00
Marco Slot 7c6784b1f4 Add caching for functions that check the backend type 2022-05-20 19:35:52 +02:00
Marco Slot 556f43f24a Fix prepared statement bug when switching from local to remote execution 2022-05-20 19:35:45 +02:00
gledis69 909b72b027 Add distributing lock command support
(cherry picked from commit 4731630741)
2022-05-20 18:02:34 +03:00
Gledis Zeneli 3f282c660b Switch to using LOCK instead of lock_relation_if_exists in TRUNCATE (#5930)
Breaking down #5899 into smaller PRs

This particular PR changes the way TRUNCATE acquires distributed locks on the relations it is truncating: it now uses the LOCK command instead of lock_relation_if_exists. This has the benefit of using pg's recursive locking logic for the LOCK command, instead of us having to resolve relation dependencies and lock them explicitly. While this does not directly affect TRUNCATE, it will allow us to generalize this locking logic to lock other relations where pg's recursive locking becomes useful (e.g. locking views).

This implementation is a bit more complex than it needs to be because pg does not support locking foreign tables. We can, however, still lock foreign tables with lock_relation_if_exists. So for a command:

TRUNCATE dist_table_1, dist_table_2, foreign_table_1, foreign_table_2, dist_table_3;

We generate and send the following commands to all the workers that have metadata:
```sql
SET citus.enable_ddl_propagation TO FALSE;
LOCK dist_table_1, dist_table_2 IN ACCESS EXCLUSIVE MODE;
SELECT lock_relation_if_exists('foreign_table_1', 'ACCESS EXCLUSIVE');
SELECT lock_relation_if_exists('foreign_table_2', 'ACCESS EXCLUSIVE');
LOCK dist_table_3 IN ACCESS EXCLUSIVE MODE;
SET citus.enable_ddl_propagation TO TRUE;
```

Note that we need to alternate between the LOCK command and lock_relation_if_exists in order to preserve the TRUNCATE order of relations.
When pg supports locking foreign tables, we will be able to massively simplify this logic and send a single LOCK command.

(cherry picked from commit 4c6f62efc6)
2022-05-20 17:24:44 +03:00
Marco Slot 73fd4f7ded Allow distributed execution from run_command_on_* functions 2022-05-20 15:42:50 +02:00
Burak Velioglu 8229d4b7ee Add ALTER VIEW support
Adds support for propagating ALTER VIEW commands to:
- Change the owner of a view
- SET/RESET options
- Rename a view or a view's column
- Change the schema of a view

Since PG also supports targeting views with ALTER TABLE
commands, related code was also added to redirect such ALTER TABLE
commands to ALTER VIEW commands when sending them to workers; a sketch
of the propagated forms follows below.
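
A sketch of the propagated forms (object, schema, and role names are illustrative):
```sql
CREATE TABLE orders_base (id bigint, status text);
CREATE VIEW my_view AS SELECT * FROM orders_base;
CREATE SCHEMA archive;

ALTER VIEW my_view SET (security_barrier = true);
ALTER VIEW my_view RENAME TO my_view_v2;
ALTER VIEW my_view_v2 SET SCHEMA archive;
-- PG also accepts ALTER TABLE against a view; such commands are
-- redirected to the ALTER VIEW path before being sent to the workers
ALTER TABLE archive.my_view_v2 RENAME COLUMN status TO order_state;
-- changing the owner is propagated too (role name illustrative)
ALTER VIEW archive.my_view_v2 OWNER TO app_owner;
```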
2022-05-20 12:18:14 +03:00
Burak Velioglu 0cf769c43a Introduce CREATE/DROP VIEW
Adds support for propagating CREATE/DROP VIEW commands, and for
propagating views to worker nodes while scaling out the cluster. Since
views are dropped while converting the table type, a metadata connection
is used while propagating view commands so as not to switch to
sequential mode.
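
A minimal sketch (names are illustrative):
```sql
CREATE TABLE orders (id bigint, status text);
SELECT create_distributed_table('orders', 'id');

-- with this commit, the view is propagated to the worker nodes as well
CREATE VIEW open_orders AS SELECT * FROM orders WHERE status = 'open';
DROP VIEW open_orders;
```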
2022-05-20 12:18:02 +03:00
Burak Velioglu 591f2565cc Use object address instead of relation id on DDLJob to decide on syncing metadata 2022-05-20 12:17:56 +03:00
Ahmet Gedemenli ddfcbfdca1 Add tests for materialized views 2022-05-20 12:17:48 +03:00
Ahmet Gedemenli 16071fac1d Add view tests to arbitrary configs 2022-05-20 12:17:41 +03:00
Onder Kalaci 9c4e3329f6 Rename metadata sync to node metadata sync where applicable 2022-05-19 11:00:51 +02:00
Onder Kalaci 36f641c586 Serialize reference table modifications with node changes & restore point
With Citus MX enabled, when a reference table is modified, Citus
performs some operations on the first worker node (e.g., acquiring locks).

If node metadata is locked (via add node or create restore point),
the changes to the reference tables should be blocked.
2022-05-19 11:00:51 +02:00
Onder Kalaci 5fe384329e Adds "sync" option to citus_disable_node() UDF 2022-05-19 11:00:51 +02:00
Marco Slot c20732142e Add a run_command_on_coordinator function 2022-05-19 10:41:10 +02:00
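
A hedged usage sketch, assuming the function takes a command string like its run_command_on_* siblings:
```sql
SELECT run_command_on_coordinator('SELECT count(*) FROM pg_dist_node');
```
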
Marco Slot 082a14656d Fix downgrade scripts and add new downgrade tests 2022-05-19 10:37:56 +02:00
Marco Slot 33dede5b75 Add a citus_is_coordinator function 2022-05-19 10:36:22 +02:00
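
Presumably usable as a simple boolean check, e.g.:
```sql
SELECT citus_is_coordinator();  -- true on the coordinator, false on workers
```
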
Nils Dijk 5e4c0e4bea
Merge pull request #5931 from citusdata/refactor/dedupe-object-propagation
Refactor: reduce complexity and code duplication for Object Propagation
2022-05-18 18:06:24 +02:00
Ahmet Gedemenli c2d9e88bf5
Fix schema name bug for sequences (#5937) 2022-05-18 17:29:30 +02:00
Ahmet Gedemenli 88369b6b23
Merge pull request #5934 from citusdata/fix-alter-statistics-nspname
Fix alter statistics namespace name
2022-05-18 17:29:30 +02:00
Onder Kalaci b7a39a232d Refrain from reading the metadata cache for all tables during upgrade
First, it is not needed. Second, in the past we had issues regarding
this: https://github.com/citusdata/citus/pull/4344

When I create 10k tables (~120K shards), this saves
~40MB of memory during ALTER EXTENSION citus UPDATE.

Before the change:  MetadataCacheMemoryContext: 41943040 ~ 40MB
After the change:  MetadataCacheMemoryContext: 8192
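
On PG 14+, one hedged way to take such a measurement from within the backend is pg_backend_memory_contexts:
```sql
-- run in the session that executed ALTER EXTENSION citus UPDATE
SELECT name, total_bytes
FROM pg_backend_memory_contexts
WHERE name = 'MetadataCacheMemoryContext';
```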

(cherry picked from commit f193e16a01)
2022-05-06 13:53:43 +02:00
Marco Slot e8b41d1e5b Convert citus.hide_shards_from_app_name_prefixes to citus.show_shards_for_app_name_prefixes 2022-05-05 13:24:23 +02:00
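
A hedged sketch of the renamed GUC in use (prefix list is illustrative):
```sql
-- shards stay hidden unless application_name matches a listed prefix
SET citus.show_shards_for_app_name_prefixes TO 'psql,my_admin_tool';
```
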
Onder Kalaci b4a65b9c45 Do not set coordinator's metadatasynced column to false
after a disable_node call.

(cherry picked from commit 5fc7661169)
2022-04-25 09:35:00 +02:00
Onder Kalaci 6ca3478c8d Do not assign distributed transaction ids for local execution
In the past, we enabled 2PC for all modifications executed via
local execution (with 6a7ed7b309).

This also required us to enable coordinated transactions
via https://github.com/citusdata/citus/pull/4831 .

However, it does have a very substantial impact on the
distributed deadlock detection. The distributed deadlock
detection is designed to avoid single-statement transactions
because they cannot lead to any actual deadlocks.

The implementation skips backends that have no distributed transaction
assigned. Now that single-statement local executions get distributed
transaction ids and show up in the lock graphs, we are conflicting
with the design of distributed deadlock detection.

In general, we should fix it. One might think it is not a big deal:
even if the processes show up in the lock graphs, the deadlock
detection should not cause any false positives. That is false
unless https://github.com/citusdata/citus/issues/1803
is fixed. Because local processes are all considered a single
distributed backend, the lock graphs might find:

    local execution 1 [tx id: 1] -> any local process [tx id: 0]
    any local process [tx id: 0] -> local execution 2 [tx id: 2]

And, decides that there is a distributed deadlock.

This commit:
   (a) is the right thing to do, as local execution should not need any
       distributed tx id
   (b) eliminates performance issues that might come up when deadlock
       detection does a lot of unnecessary checks
   (c) reflects that, after moving local execution after the remote
       execution via https://github.com/citusdata/citus/pull/4301, the
       vague requirement for assigning distributed tx ids is already gone.

(cherry picked from commit a2debe0f02)
2022-04-25 09:34:32 +02:00
Hanefi Onaldi 86df61cae8
Bump Citus to 11.0.1_beta 2022-04-11 16:09:11 +03:00
Hanefi Onaldi e20a6dcd78
Add changelog entries for 11.0.1_beta
(cherry picked from commit 3ec1fc48fc)
2022-04-11 16:08:16 +03:00
Burak Velioglu 6eed51b75c
Create function in transaction according to create object propagation guc
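
A hedged sketch, assuming the GUC is citus.create_object_propagation with an 'immediate' mode:
```sql
BEGIN;
SET LOCAL citus.create_object_propagation TO 'immediate';
-- the function is created and propagated within the transaction
CREATE FUNCTION add_one(i int) RETURNS int AS $$ SELECT i + 1 $$ LANGUAGE SQL;
COMMIT;
```
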
(cherry picked from commit 5d9599f964)
2022-04-11 13:01:14 +03:00
Nils Dijk 675ba65f22
Implement DOMAIN propagation for citus 2022-04-08 16:18:02 +02:00
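
A minimal sketch of a domain that would now be propagated (names are illustrative):
```sql
CREATE DOMAIN order_status AS text
  CHECK (VALUE IN ('open', 'shipped', 'done'));

-- the domain travels with the distributed table that uses it
CREATE TABLE orders_d (id bigint, status order_status);
SELECT create_distributed_table('orders_d', 'id');
```
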
Marco Slot d611a50a80 Allow adding a unique constraint with an index 2022-04-07 16:41:10 +02:00
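
This corresponds to the standard PG pattern (sketch; names are illustrative, and on a distributed table the unique index must include the distribution column):
```sql
CREATE UNIQUE INDEX t_id_idx ON t (id);
ALTER TABLE t ADD CONSTRAINT t_id_unique UNIQUE USING INDEX t_id_idx;
```
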
Marco Slot c5797030de Fix EXPLAIN ANALYZE JSON format for subplans 2022-04-07 16:00:12 +02:00
Marco Slot a74d991445 Handle user-defined type parameters in EXPLAIN ANALYZE 2022-04-07 11:37:43 +02:00
Marco Slot cb9e510e40 Add TABLESAMPLE support 2022-04-01 16:48:29 +02:00
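
Standard PG sampling syntax, now usable on distributed tables (sketch; table name is illustrative):
```sql
-- sample roughly 1% of rows from each shard
SELECT count(*) FROM dist_table TABLESAMPLE BERNOULLI (1);
```
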
Onder Kalaci e336b92552 Only hide shards from client backends and pg bg workers
The aim of hiding shards is to hide shards from client applications.

Certain bg workers (such as pg_cron or the Citus maintenance daemon)
should be treated like client applications because users can run
queries from such bg workers. These bg workers should follow
application_name checks similar to client backends.

Certain other bg workers, such as logical replication or postgres'
parallel workers, should never hide shards. They are internal
operations.

Similarly, other backend types like the walsender, checkpointer,
or autovacuum should never hide shards.

(cherry picked from commit 9043a1ed3f)
2022-03-30 17:44:03 +02:00
Hanefi Onaldi 4784d5579b
Bump Citus to 11.0.0_beta 2022-03-24 16:17:47 +03:00
Hanefi Onaldi 7dc0a94293
Merge pull request #5852 from citusdata/citus-11.0.0-changelog-1647961698 2022-03-24 16:15:54 +03:00
Hanefi Onaldi 36ca2639f0
Add changelog entry for 11.0.0 2022-03-24 13:48:09 +03:00
Ahmet Gedemenli 6300b86f8a
Merge pull request #5842 from citusdata/drop-pg12-support
Drop PG12 support
2022-03-24 02:31:28 +03:00
Ahmet Gedemenli 42c46a0824 Drop PG12 support 2022-03-23 18:16:04 +03:00