Commit Graph

3095 Commits (a1aa96b32c792280fc54838bb2bb4031aaa8ae12)

Author SHA1 Message Date
Onder Kalaci db998b3d66 Adds "sync" option to citus_disable_node() UDF
Before this commit, we had:
```SQL
SELECT citus_disable_node(nodename, nodeport, force boolean DEFAULT false)
```

Where we allow forcing the disabling of the first worker node with
`force:=true`. However, that entails the risk of losing
data / diverging placement data, etc.

With the `force` flag, we control disabling the first worker node,
and with the `sync` flag we control whether the changes are applied
via the background worker or immediately.

```SQL
SELECT citus_disable_node(nodename, nodeport, force boolean DEFAULT false, sync boolean DEFAULT false)
```

Where we can achieve all the following:

| Mode  | Data loss possibility | Can run in 2PC | Handle multiple node failures | Immediately effective |
| --- |--- |--- |--- |--- |
| force:false, sync: false  | false   | true  | true  | false |
| force:false, sync: true   | false  | false | false | true |
| force:true, sync: false   | true   | true  | true   | false |
| force:true, sync: true    | false  | false | false  | true |
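
For example, the following disables a failed worker and makes the change take effect immediately (hypothetical host/port):

```SQL
-- a minimal sketch of the new sync mode
SELECT citus_disable_node('worker-1', 5432, force := false, sync := true);
```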
2022-05-18 17:21:12 +02:00
Onder Kalaci 2cc4053fc1 Fixes a bug that prevents dropping/altering indexes
There are two problems in this area. First, when there are expressions
in the index, we should call `transformIndexExpression()` before
generating the index name. That is what Postgres does.

Second, because of 40c24bfef9,
PG 13 and PG 14 generate different names for indexes with function calls, even for local PG tables.
Assume we have:
```SQL
create table t(id int);
select create_distributed_table('t', 'id');
create index ON t (my_very_boring_function(id));
```

On PG 13, the name of the index is `t_expr_idx`
```SQL
\d t
Table "public.t"
┌────────┬─────────┬───────────┬──────────┬─────────┐
│ Column │  Type   │ Collation │ Nullable │ Default │
├────────┼─────────┼───────────┼──────────┼─────────┤
│ id     │ integer │           │          │         │
└────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "t_expr_idx" btree (my_very_boring_function(id::bigint))
```

On PG 14, the name of the index is `t_my_very_boring_function_idx`
```SQL
\d t
 Table "public.t"
┌────────┬─────────┬───────────┬──────────┬─────────┐
│ Column │  Type   │ Collation │ Nullable │ Default │
├────────┼─────────┼───────────┼──────────┼─────────┤
│ id     │ integer │           │          │         │
└────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "t_my_very_boring_function_idx" btree (my_very_boring_function(id::bigint))

```

The second issue is not very critical. The important part is that
we adjust regression tests to drop all the indexes, which ensures
the index names are sane on any version.
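
A version-agnostic way to do that is to look the index names up instead of hard-coding them; a minimal sketch for the table `t` above:

```SQL
-- drop every index on t, whatever name PG 13 or PG 14 generated for it
DO $$
DECLARE idx record;
BEGIN
  FOR idx IN SELECT schemaname, indexname FROM pg_indexes WHERE tablename = 't'
  LOOP
    EXECUTE format('DROP INDEX %I.%I', idx.schemaname, idx.indexname);
  END LOOP;
END $$;
```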
2022-05-18 16:35:17 +02:00
Nils Dijk b71a08955a
Refactor: reduce complexity and code duplication for Object Propagation
Over time we have significantly improved the support for objects to be propagated by Citus, so as to make scaling out the database more seamless. It became evident that a lot of code duplication had crept into the codebase to implement the propagation.

This PR tries to reduce the amount of repeated code that is at most only slightly different. To make things worse, most of the differences were actually oversights rather than intentional.

This patch introduces 3 reusable sets of pre/post-processing steps, respectively for
 - create
 - alter
 - drop

With the use of the common functionality we should have more coherent behaviour across the different objects supported by Citus.

Some commands omit the pre- or post-processing step where it would not make sense to include it.

All tests pass; only 1 test needed changing: foreign servers, as the dropping of foreign servers didn't implement support for dropping multiple foreign servers at once. Given that the common approach correctly supports dropping multiple objects, whether distributed or not, the test that assumed it wouldn't work is now obsolete.
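
For instance, a multi-server DROP of this shape (hypothetical server names) now goes through the common drop path:

```SQL
CREATE SERVER server_1 FOREIGN DATA WRAPPER postgres_fdw;
CREATE SERVER server_2 FOREIGN DATA WRAPPER postgres_fdw;
-- dropping multiple foreign servers at once is now supported
DROP SERVER server_1, server_2 CASCADE;
```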
2022-05-18 15:58:28 +02:00
Onder Kalaci ee45e7bfbf Mark existing views as distributed when upgrade to 11.0+
We have a mechanism which ensures that newly distributed
objects are recorded during `alter extension citus update`.

However, the logic was lacking "view"s. With this commit, we make
sure that existing views are also marked as distributed during
upgrade.
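
After the upgrade, the recorded views can be inspected in the catalog; a sketch, assuming `pg_dist_object` lives in `pg_catalog` as of 11.0:

```SQL
-- list views that are marked as distributed
SELECT objid::regclass
FROM pg_catalog.pg_dist_object
WHERE classid = 'pg_class'::regclass
  AND objid IN (SELECT oid FROM pg_class WHERE relkind = 'v');
```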
2022-05-18 15:43:17 +02:00
Halil Ozan Akgul d171a736ab Revert "Creates new colocation for colocate_with:='none' too"
This reverts commit f74447b3b7.
2022-05-17 15:32:22 +03:00
Ahmet Gedemenli aa8f46ead0
Fix schema name bug for sequences (#5937) 2022-05-16 18:11:57 +03:00
Halil Ozan Akgul f74447b3b7 Creates new colocation for colocate_with:='none' too 2022-05-16 13:39:05 +03:00
Teja Mupparti e56fc34404 Fixes: #5787 In prepared statements, map any unused parameters
to a generic type.
2022-05-13 19:31:05 -07:00
Burak Velioglu 1875516ae9 Add ALTER VIEW support
Adds support for propagating ALTER VIEW commands to
- Change owner of view
- SET/RESET option
- Rename view and view's column name
- Change schema of the view

Since PG also supports targeting views with ALTER TABLE
commands, related code was also added to redirect such ALTER TABLE
commands to ALTER VIEW commands while sending them to workers.
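
The now-propagated forms look like the following (hypothetical view/role/schema names):

```SQL
ALTER VIEW my_view OWNER TO new_owner;
ALTER VIEW my_view SET (security_barrier = true);
ALTER VIEW my_view RENAME COLUMN a TO b;
ALTER VIEW my_view RENAME TO my_view_renamed;
ALTER VIEW my_view_renamed SET SCHEMA other_schema;
-- PG also accepts ALTER TABLE on a view; this gets redirected to ALTER VIEW
ALTER TABLE other_schema.my_view_renamed RENAME TO my_view;
```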
2022-05-13 13:21:53 +03:00
Marco Slot 6fad5dc207 Add a citus_is_coordinator function 2022-05-13 10:02:52 +02:00
Ahmet Gedemenli 00e0f4d8e6 Fix alter statistics namespace name 2022-05-11 18:44:37 +03:00
Gledis Zeneli 4c6f62efc6
Switch to using LOCK instead of lock_relation_if_exists in TRUNCATE (#5930)
Breaking down #5899 into smaller PRs

This particular PR changes the way TRUNCATE acquires distributed locks on the relations it is truncating to use the LOCK command instead of lock_relation_if_exists. This has the benefit of using pg's recursive locking logic, which it implements for the LOCK command, instead of us having to resolve relation dependencies and lock them explicitly. While this does not directly affect truncate, it will allow us to generalize this locking logic to lock different relations where the pg recursive locking will become useful (e.g. locking views).

This implementation is a bit more complex than it needs to be due to pg not supporting locking foreign tables. We can, however, still lock foreign tables with lock_relation_if_exists. So for the command:

TRUNCATE dist_table_1, dist_table_2, foreign_table_1, foreign_table_2, dist_table_3;

We generate and send the following command to all the workers in metadata:
```sql
SET citus.enable_ddl_propagation TO FALSE;
LOCK dist_table_1, dist_table_2 IN ACCESS EXCLUSIVE MODE;
SELECT lock_relation_if_exists('foreign_table_1', 'ACCESS EXCLUSIVE');
SELECT lock_relation_if_exists('foreign_table_2', 'ACCESS EXCLUSIVE');
LOCK dist_table_3 IN ACCESS EXCLUSIVE MODE;
SET citus.enable_ddl_propagation TO TRUE;
```

Note that we need to alternate between the LOCK command and lock_relation_if_exists in order to preserve the TRUNCATE order of relations.
When pg supports locking foreign tables, we will be able to massively simplify this logic and send a single LOCK command.
2022-05-11 18:38:48 +03:00
Burak Velioglu 1460452442 Introduce CREATE/DROP VIEW
Adds support for propagating create/drop view commands and views to
worker nodes while scaling out the cluster. Since views are dropped while
converting the table type, the metadata connection is used while
propagating view commands so as not to switch to sequential mode.
2022-05-10 13:07:14 +03:00
Burak Velioglu 06a94d167e Use object address instead of relation id on DDLJob to decide on syncing metadata 2022-05-05 17:59:44 +03:00
Onder Kalaci f193e16a01 Refrain reading the metadata cache for all tables during upgrade
First, it is not needed. Second, in the past we had issues regarding
this: https://github.com/citusdata/citus/pull/4344

When I create 10k tables, ~120K shards, this saves
40Mb of memory during ALTER EXTENSION citus UPDATE.

Before the change:  MetadataCacheMemoryContext: 41943040 ~ 40MB
After the change:  MetadataCacheMemoryContext: 8192
2022-05-04 16:44:06 +02:00
Marco Slot ceb593c9da Convert citus.hide_shards_from_app_name_prefixes to citus.show_shards_for_app_name_prefixes 2022-05-03 14:22:13 +02:00
Jeff Davis 3e1180de78 PG15: handle extra argument to parse_analyze_varparams().
From PG commit 25751f54b8.
2022-05-02 10:12:03 -07:00
Jeff Davis b6a5617ea8 PG15: handle pg_analyze_and_rewrite_* renaming.
From PG commit 791b1b71da.
2022-05-02 10:12:03 -07:00
Jeff Davis 33ee4877d4 PG15: rename pgstat_initstats() -> pgstat_init_relation().
From PG commits bff258a273 and be902e2651.
2022-05-02 10:12:03 -07:00
Jeff Davis 033f9cfff7 PG15: update copied pg_get_object_address() code.
Account for PG commits 5a2832465fd8 and a0ffa885e478.
2022-05-02 10:12:03 -07:00
Jeff Davis bd455f42e3 PG15: handle change to SeqScan structure.
Account for PG commit 2226b4189b. The one site dependent on it can do
just as well with a Scan instead of a SeqScan.
2022-05-02 10:12:03 -07:00
Jeff Davis 3799f95742 PG15: Value -> String, Integer, Float.
Handle PG commit 639a86e36a.
2022-05-02 10:12:03 -07:00
Jeff Davis 26f5e20580 PG15: update integer parsing APIs.
Account for PG commits 3c6f8c011f and cfc7191dfe.
2022-05-02 10:12:03 -07:00
Jeff Davis 70c915a0f2 PG15: Handle data type changes in pg_collation.
Account for PG commit 54637508f8.
2022-05-02 10:12:03 -07:00
Jeff Davis 9915fe8a1a PG15: Handle different ways to get publication actions.
Account for PG commit 52e4f0cd47.
2022-05-02 10:12:03 -07:00
Jeff Davis 1c1ef7ab8d PG15: Handle extra argument to RelationCreateStorage.
Account for PG commit 9c08aea6a309. Introduce
RelationCreateStorage_compat.
2022-05-02 10:12:03 -07:00
Jeff Davis ac952b2cc2 PG15: Handle extra argument to ExecARDeleteTriggers.
Account for PG commit ba9a7e3921. Introduce
ExecARDeleteTriggers_compat.
2022-05-02 10:12:03 -07:00
Jeff Davis f944722c6a PG15: Use RelationGetSmgr() instead of RelationOpenSmgr().
Handle PG commit f10f0ae420.
2022-05-02 10:12:03 -07:00
Onder Kalaci 5fc7661169 Do not set coordinator's metadatasynced column to false
After a disable_node
2022-04-25 09:25:59 +02:00
Onder Kalaci a2debe0f02 Do not assign distributed transaction ids for local execution
In the past, we enabled 2PC for all modifications done via
local execution (with 6a7ed7b309).

This also required us to enable coordinated transactions
via https://github.com/citusdata/citus/pull/4831 .

However, it does have a very substantial impact on the
distributed deadlock detection. The distributed deadlock
detection is designed to avoid single-statement transactions
because they cannot lead to any actual deadlocks.

The implementation skips backends to which no distributed
transaction is assigned. Now that single-statement local
executions get distributed transaction ids and show up in the
lock graphs, we are conflicting with the design of the
distributed deadlock detection.

In general, we should fix it. However, one might
think that it is not a big deal: even if the processes
show up in the lock graphs, the deadlock detection
should not cause any false positives. That is
false, unless https://github.com/citusdata/citus/issues/1803
is fixed. Now that local processes are considered a single
distributed backend, the lock graphs might find:

    local execution 1 [tx id: 1] -> any local process [tx id: 0]
    any local process [tx id: 0] -> local execution 2 [tx id: 2]

And, decides that there is a distributed deadlock.

This commit:
   (a) is the right thing to do, as local execution should not need any
       distributed tx id
   (b) eliminates performance issues that might come up when
       deadlock detection does a lot of unnecessary checks
   (c) reflects that, after moving local execution after the remote
       execution via https://github.com/citusdata/citus/pull/4301, the
       vague requirement for assigning distributed tx ids is
       already gone.
2022-04-13 13:25:12 +02:00
Burak Velioglu 5d9599f964
Create function in transaction according to create object propagation guc 2022-04-08 17:15:31 +03:00
Nils Dijk 8897361f95
Implement DOMAIN propagation for citus 2022-04-08 15:25:39 +02:00
Onder Kalaci b0b91bab04 Rename metadata sync to node metadata sync where applicable 2022-04-07 17:51:31 +02:00
Marco Slot 2304815356 Allow adding a unique constraint with an index 2022-04-07 16:00:31 +02:00
Marco Slot c0827703ec Fix EXPLAIN ANALYZE JSON format for subplans 2022-04-07 11:38:20 +02:00
Marco Slot 544dce919a Handle user-defined type parameters in EXPLAIN ANALYZE 2022-04-07 11:14:32 +02:00
Marco Slot 9476f377b5 Remove old re-partitioning functions 2022-04-04 18:11:52 +02:00
Marco Slot 8c8c3b665d Add TABLESAMPLE support 2022-04-01 15:51:40 +02:00
jeff-davis c485a04139
Separate build of citus.so and citus_columnar.so. (#5805)
* Separate build of citus.so and citus_columnar.so.

Because columnar code is statically-linked to both modules, it doesn't
make sense to load them both at once.

A subsequent commit will make the modules entirely separate and allow
loading them both simultaneously.

Author: Yanwen Jin

* Separate citus and citus_columnar modules.

Now the modules are independent. Columnar can be loaded by itself, or
along with citus.

Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2022-03-31 19:47:17 -07:00
Onder Kalaci 9043a1ed3f Only hide shards from client backends and pg bg workers
The aim of hiding shards is to hide shards from client applications.

Certain bg workers (such as pg_cron or the Citus maintenance daemon)
should be treated like client applications because users can run
queries from such bg workers. And, these bg workers should follow
similar application_name checks as client backends.

Certain other bg workers, such as logical replication or postgres'
parallel workers, should never hide shards. They are internal
operations.

Similarly, other backend types like the walsender, checkpointer,
or autovacuum should never hide shards.
2022-03-30 16:56:12 +02:00
Ahmet Gedemenli b52823f8b4
Fix typo in error message for truncating foreign tables (#5864) 2022-03-29 13:14:16 +03:00
Jelte Fennema 3a44fa827a
Add versions of forboth that don't need ListCell (#5856)
We've had custom versions of Postgres's `foreach` macro with a
hidden ListCell for quite some time now. People like these custom
macros because they are easier to use and require less boilerplate.
This adds similar custom versions of Postgres's `forboth` macro. Now
you don't need ListCells anymore when looping over two lists at the same
time.
2022-03-23 14:50:36 +03:00
Ahmet Gedemenli b5448e43e3
Fix aggregate signature bug (#5854) 2022-03-23 13:42:03 +03:00
Burak Velioglu db9f0d926c
Add support for deparsing ALTER FUNCTION ... SUPPORT ... commands 2022-03-22 21:55:55 +03:00
Onder Kalaci af4ba3eb1f Remove citus.enable_cte_inlining GUC
In Postgres 12+, users can adjust whether to inline CTEs
using the [NOT] MATERIALIZED keywords. So this GUC is already useless.
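
The per-query replacement is the standard PG 12+ syntax:

```SQL
WITH c AS MATERIALIZED (SELECT id FROM t) SELECT * FROM c;      -- keep the CTE as an optimization fence
WITH c AS NOT MATERIALIZED (SELECT id FROM t) SELECT * FROM c;  -- allow inlining
```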
2022-03-22 17:14:44 +01:00
Halil Ozan Akgul 4690c42121 Fixes ALTER COLLATION encoding does not exist bug 2022-03-22 17:42:20 +03:00
Marco Slot 32c23c2775 Disallow re-partition joins when no hash function defined 2022-03-22 13:42:53 +01:00
Onur Tirtir 11433ed357 Create DDL job for create enum command in postprocess as we do for composite types
Since we no longer throw an error for enums that the user attempts to
create in a temp schema, the preprocess / DDL job that contains the prepared
statement (to idempotently create the enum type) gets executed. As a
result, we were emitting the following warnings because of the error the
underlying worker connection throws:

```sql
WARNING:  cannot PREPARE a transaction that has operated on temporary objects
CONTEXT:  while executing command on localhost:xxxxx
WARNING:  connection to the remote node localhost:xxxxx failed with the following error: another command is already in progress
ERROR:  cannot PREPARE a transaction that has operated on temporary objects
CONTEXT:  while executing command on localhost:xxxxx
```
2022-03-22 15:09:23 +03:00
Onur Tirtir dc31102630 Locally create objects having a dependency that we cannot distribute
We were already doing so for functions & types, believing that
this could not be the case for other object types.

However, as in #5830, we cannot distribute an object that the user
attempts to create in a temp schema. Moreover, this doesn't only
apply to functions and types but also to many other object types.

So with this commit, we teach preprocess/postprocess functions
(that need to create dependencies on worker nodes) how to skip
trying to distribute such objects.

We also start identifying temp schemas as objects that we
don't know how to propagate to worker nodes, so that we can
simply create objects locally if the user attempts to create them
in a temp schema.
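
So, for example, the following (hypothetical type) is now created locally instead of erroring out:

```SQL
-- a temp-schema object has no sensible propagation target, so it stays local
CREATE TYPE pg_temp.visibility AS ENUM ('visible', 'hidden');
```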

There are 36 callers of `EnsureDependenciesExistOnAllNodes` in
the codebase at the moment, and for most of them we still need to
throw a hard error (i.e.: not use `DeferErrorIfHasUnsupportedDependency`
beforehand), such as:

i) user explicitly wants to create a distributed object
* CreateCitusLocalTable
* CreateDistributedTable
* master_create_worker_shards
* master_create_empty_shard
* create_distributed_function
* EnsureExtensionFunctionCanBeDistributed

ii) we don't want to skip altering distributed table on worker nodes
* PostprocessIndexStmt
* PostprocessCreateTriggerStmt
* PostprocessCreateStatisticsStmt

iii) the object is already distributed / handled by Citus, so we
aren't okay with not propagating the ALTER command
* PostprocessAlterTableSchemaStmt
* PostprocessAlterCollationOwnerStmt
* PostprocessAlterCollationSchemaStmt
* PostprocessAlterDatabaseOwnerStmt
* PostprocessAlterExtensionSchemaStmt
* PostprocessAlterFunctionOwnerStmt
* PostprocessAlterFunctionSchemaStmt
* PostprocessAlterSequenceOwnerStmt
* PostprocessAlterSequenceSchemaStmt
* PostprocessAlterStatisticsSchemaStmt
* PostprocessAlterStatisticsOwnerStmt
* PostprocessAlterTextSearchConfigurationSchemaStmt
* PostprocessAlterTextSearchDictionarySchemaStmt
* PostprocessAlterTextSearchConfigurationOwnerStmt
* PostprocessAlterTextSearchDictionaryOwnerStmt
* PostprocessAlterTypeSchemaStmt
* PostprocessAlterForeignServerOwnerStmt

iv) we already cannot create those objects in temp schemas, so skipping
for now
* PostprocessCreateExtensionStmt
* PostprocessCreateForeignServerStmt

Also note that there are 3 more callers of
`EnsureDependenciesExistOnAllNodes` in enterprise in addition to those
36 but we don't need to do anything specific about them due to the same
reasoning given in iii).
2022-03-22 15:09:23 +03:00
Halil Ozan Akgul 50bace9cfb Fixes the type names that start with underscore bug 2022-03-22 14:24:30 +03:00
Halil Ozan Akgul 4dbc760603 Introduces citus_coordinator_node_id 2022-03-22 10:34:22 +03:00
Hanefi Onaldi 9f204600af
Allow all possible option types for text search objects (#5838) 2022-03-21 20:01:53 +01:00
Burak Velioglu d4625ec6a1
Add support for zero-argument polymorphic aggregates 2022-03-21 16:10:40 +03:00
Ahmet Gedemenli 46c6630328
Qualify CREATE AGGREGATE stmts in Preprocess (#5834) 2022-03-21 13:55:09 +03:00
Burak Velioglu 2c2064bf36
Create type locally if it has undistributable dependency 2022-03-18 18:23:32 +03:00
Marco Slot 055bbd6212 Use coordinated transaction when there are multiple queries per task 2022-03-18 15:04:27 +01:00
Marco Slot cab243218d Avoid locks in relation_is_a_known_shard 2022-03-18 14:37:39 +01:00
Marco Slot 5bb5359da0 Fix worker node version check 2022-03-17 13:23:02 +01:00
Marco Slot 22a18fc1f2 Fix typo in upgrade function 2022-03-17 13:23:02 +01:00
Burak Velioglu 333c73a53c
Drop distributed table on worker with ProcessUtilityParseTree 2022-03-15 17:42:01 +03:00
Gledis Zeneli 56ab64b747
Patches #5758 with some more error checks (#5804)
Add error checks to detect failed connections, and don't ping secondary nodes when detecting self reference.
2022-03-15 15:02:47 +03:00
Marco Slot e42a798707 Always use RowShareLock in pg_dist_node when syncing metadata 2022-03-15 10:28:51 +01:00
Onder Kalaci 338752d96e Guard against hard wait event set errors
Similar to https://github.com/citusdata/citus/pull/5158, but this
time, instead of only the executor, use this in all the remaining places.
2022-03-14 14:35:56 +01:00
Onder Kalaci 953951007c Move wait event error checks to connection manager 2022-03-14 14:35:56 +01:00
Onur Tirtir 216b9b5b7a
Fix an incorrect error message related with fkeys between replicated dist tables (#5796)
This is not supported in enterprise either.
2022-03-14 14:34:09 +01:00
Hanefi Onaldi b24e1dfccc
Propagate text search commands to all worker nodes (#5797)
Here is a list of some functions, and the `TargetWorkerSet` parameters
they supply to `NodeDDLTaskList`:

PostprocessCreateTextSearchConfigurationStmt - NON_COORDINATOR_NODES
PreprocessDropTextSearchConfigurationStmt - NON_COORDINATOR_METADATA_NODES
PreprocessAlterTextSearchConfigurationSchemaStmt - NON_COORDINATOR_METADATA_NODES

I guess this means that, if metadata syncing is disabled on a node,
we may have some issues. Consider the following:

Let's assume the user has metadata syncing disabled, and 2 workers.

`CREATE TEXT SEARCH CONFIGURATION ...` will get propagated to all
workers. `ALTER ... CONFIGURATION ...` will not get propagated to
workers.

After adding a new non-metadata node, the new node will get the altered
configuration as it reads from the catalog. At this point, CONFIGURATION
definitions have diverged in the cluster.

I suggest that we always use `NON_COORDINATOR_METADATA_NODES` in all the
TEXT SEARCH operations here.
2022-03-14 14:44:34 +03:00
Onder Kalaci db529facab Only change the sequence types if the target column type is a supported sequence type
Before this commit, we erroneously converted the sequence
type to the type of the column in which it is used. However, it is possible
that the sequence is used in an expression which is then converted
to a type that cannot be a sequence type, such as text.

With this commit, we only try this conversion if the column
type is a supported sequence type (e.g., smallint, int and bigint).

Note that we do this conversion because if the column type is a
bigint and the sequence is NOT a bigint, users would be in trouble
because the sequence would generate values that are out of the range
of the column. (The other way around is already not supported: an int
column with a bigint sequence would fail on the worker.)

In other words, with this commit, we scope this optimization only
to cases where the target column type is a supported sequence type. Otherwise,
we let users use sequences more freely.
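
A sketch of the two cases under this rule (hypothetical names):

```SQL
CREATE SEQUENCE seq;
-- int column: seq's type is aligned with the column type
CREATE TABLE t1 (key int, a int DEFAULT nextval('seq'));
-- text column: nextval() only feeds an expression, so no conversion is attempted
CREATE TABLE t2 (key int, a text DEFAULT 'id-' || nextval('seq')::text);
SELECT create_distributed_table('t1', 'key');
SELECT create_distributed_table('t2', 'key');
```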
2022-03-11 16:06:00 +01:00
Ahmet Gedemenli d06146360d
Support GRANT ON SCHEMA commands in CREATE SCHEMA statements (#5789)
* Support GRANT ON SCHEMA commands in CREATE SCHEMA statements

* Add test

* add comment

* Rename to GetGrantCommandsFromCreateSchemaStmt
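
A statement of the now-supported shape (hypothetical schema/role names):

```SQL
CREATE SCHEMA analytics
    GRANT USAGE ON SCHEMA analytics TO reporting_role;
```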
2022-03-11 14:47:45 +03:00
Jelte Fennema e5d5c7be93
Start erroring out for unsupported lateral subqueries (#5753)
With the introduction of #4385 we inadvertently started allowing and
pushing down certain lateral subqueries that were unsafe to push down.
To be precise the type of LATERAL subqueries that is unsafe to push down
has all of the following properties:
1. The lateral subquery contains some non recurring tuples
2. The lateral subquery references a recurring tuple from
   outside of the subquery (recurringRelids)
3. The lateral subquery requires a merge step (e.g. a LIMIT)
4. The reference to the recurring tuple should be something other than an
   equality check on the distribution column, e.g. equality on a
   non-distribution column.


Property number four is considered both hard to detect and probably not
used very often. Thus this PR ignores property number four and causes
query planning to error out if the first three properties hold.
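
A sketch of a query that now errors out, assuming `ref_table` is a reference table and `dist_table` is hash-distributed:

```SQL
SELECT *
FROM ref_table r,
     LATERAL (
       SELECT *
       FROM dist_table d
       WHERE d.other_col = r.a  -- references the recurring tuple, not via the distribution column
       LIMIT 1                  -- requires a merge step
     ) s;
```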

Fixes #5327
2022-03-11 11:59:18 +01:00
Hanefi Onaldi b0eb685101
Add support for TEXT SEARCH DICTIONARY objects
TEXT SEARCH DICTIONARY objects depend on TEXT SEARCH TEMPLATE objects.
Since we do not yet support distributed TS TEMPLATE objects, we skip
dependency checks for text search templates, similar to what we do for
roles.

The user is expected to manually create the TEXT SEARCH TEMPLATE objects
before a) adding new nodes, b) creating TEXT SEARCH DICTIONARY objects.
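
A sketch of those manual steps, with hypothetical template/dictionary names built on the built-in simple-dictionary functions:

```SQL
-- run on the coordinator and on every worker before creating dictionaries
CREATE TEXT SEARCH TEMPLATE my_template (INIT = dsimple_init, LEXIZE = dsimple_lexize);
-- the dictionary itself is then propagated by Citus
CREATE TEXT SEARCH DICTIONARY my_dict (TEMPLATE = my_template, STOPWORDS = english);
```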
2022-03-11 03:40:20 +03:00
Marco Slot 49467e27e6
Ensure worker_save_query_explain_analyze always fully qualifies types (#5776)
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2022-03-10 07:30:11 -08:00
Gledis Zeneli 2cb02bfb56
Fix node adding itself with citus_add_node leading to deadlock (Fix #5720) (#5758)
If a worker node is being added, a command is sent to get the server_id of the worker from the pg_dist_node_metadata table. If the worker's id is the same as the node executing the code, we will know the node is trying to add itself. If the node tries to add itself without specifying `groupid:=0` the operation will result in an error.
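
So adding the coordinator itself now has to be explicit about the group (hypothetical hostname):

```SQL
-- without groupid := 0, this errors out instead of deadlocking
SELECT citus_add_node('coordinator-host', 5432, groupid := 0);
```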
2022-03-10 17:46:33 +03:00
Burak Velioglu 547f6b18ef
Ensure dependencies exists for all alter owner commands 2022-03-10 16:37:55 +03:00
Ahmet Gedemenli 4312486141
Remove unnecessary schema name from CREATE SCHEMA stmts (#5785) 2022-03-10 15:19:14 +03:00
Hanefi Onaldi d153c2de0d Fix some typos in comments 2022-03-10 15:03:26 +03:00
Ahmet Gedemenli 551a7d1383
Support CREATE SCHEMA without name (#5782) 2022-03-10 13:38:00 +03:00
Marco Slot 8e43c8094d Fix CREATE EXTENSION propagation with custom version 2022-03-09 17:40:50 +01:00
Marco Slot 7559ad12ba Change create_object_propagation default to immediate 2022-03-09 17:40:50 +01:00
Burak Velioglu bbe1b16125
Check whether the object has unsupported or circular dependency 2022-03-09 16:37:53 +03:00
Jelte Fennema c8839de68b
Don't use cascading deletes in Citus 11 migration script (#5767)
Using CASCADE in a DELETE can inadvertently delete things we don't
intend to. It's safer to fail hard and make the user delete dependent
things manually.
2022-03-09 14:35:23 +01:00
Halil Ozan Akgül 333bcc7948
Global PID Helper Functions (#5768)
* Introduces citus_nodename_for_nodeid and citus_nodeport_for_nodeid functions

* Introduces citus_nodeid_for_gpid and citus_pid_for_gpid functions

* Add tests
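
Example invocations; the GPID encoding (nodeid * 10000000000 + pid) is assumed from the Citus 11 global PID scheme:

```SQL
SELECT citus_nodename_for_nodeid(1), citus_nodeport_for_nodeid(1);
-- gpid 10000000001 decodes to node 1, pid 1
SELECT citus_nodeid_for_gpid(10000000001), citus_pid_for_gpid(10000000001);
```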
2022-03-09 13:15:59 +03:00
Onder Kalaci c32b2de1a7 Improve citus_lock_waits
1) Remove useless columns
2) Show backends that are blocked on a DDL even before
   gpid is assigned
3) One minor bugfix, where we clear distributedCommandOriginator
   properly.
2022-03-07 11:10:44 +01:00
Ahmet Gedemenli 2a3c0c1914
Revert upgrade script changes (#5757) 2022-03-07 13:04:58 +03:00
Onder Kalaci 24fcd2a88c Handle dropping the partitioned tables properly
Before this commit, we might be leaving some metadata on the workers.
Now, we handle DROP SCHEMA .. CASCADE properly to avoid any metadata
leakage.
2022-03-07 10:02:54 +01:00
Nils Dijk 3801576dfb
Move pg_dist_object to pg_catalog (#5765)
DESCRIPTION: Move pg_dist_object to pg_catalog

Historically `pg_dist_object` had been created in the `citus` schema as an experiment to understand if we could move our catalog tables to a branded schema. We quickly realised that this interfered with the UX on our managed services and other environments, where users connected via a user with the name of `citus`.

By default postgres puts the username on the search_path. To be able to read the catalog in the `citus` schema, we would need to grant access permissions to the schema. This caused newly created objects like tables etc. to default to this schema for creation, which then failed due to missing write permissions on that schema.

With this change we move the `pg_dist_object` catalog table to the `pg_catalog` schema, where our other catalog tables are also located. This makes the catalog table visible and readable by any user, like our other catalog tables, for debugging purposes.

Note: due to the change of schema, we had to disable 1 test that was running into a discrepancy between the schema and binary. Secondly, we needed to make the lookup functions for the `pg_dist_object` relation and its indexes less strict on the naming fallback, due to another test that, because of an unfortunate cache invalidation, needed to look up the relation again. As a result, we won't default to _only_ resolving from `pg_catalog` outside of upgrades.
2022-03-04 17:40:38 +00:00
Halil Ozan Akgul 0500a62515 Updates citus_dist_stat_activity to use citus_stat_activity 2022-03-04 17:28:17 +03:00
Ahmet Gedemenli b8eedcd261
Notice when create_distributed_function called without params (#5752)
* Notice when create_distributed_function called without params

* Move variable comments to top

* Add valid check for cache entry

* add objtype to notice msg

* update test outputs

* Add more tests

* Address feedback
2022-03-04 17:26:39 +03:00
Önder Kalacı bd6a6563ff
Merge branch 'master' into calculate_gpid 2022-03-04 11:34:12 +01:00
Burak Velioglu cb6d67a9a9
Make sure that all dependencies of citus tables can be distributed 2022-03-03 20:08:09 +03:00
Onder Kalaci c7b67ba0ea Add citus_backend_gpid()
And also citus_calculate_gpid(nodeId,pid). These UDFs are just
wrappers for the existing functions. Useful for testing and simple
manipulation of citus_stat_activity.
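
For example:

```SQL
SELECT citus_backend_gpid();           -- GPID of the current backend
SELECT citus_calculate_gpid(1, 12345); -- GPID for pid 12345 on node 1
```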
2022-03-03 15:29:40 +01:00
Halil Ozan Akgul 06a0509b1a Introduces citus_stat_activity view 2022-03-03 16:19:20 +03:00
Marco Slot ddf7cf29f3 Sync pg_dist_colocation as a batch 2022-03-03 12:48:48 +01:00
Marco Slot 3ba61244b8 Synchronize pg_dist_colocation metadata 2022-03-03 11:01:59 +01:00
Marco Slot 43e4dd3808 Add a citus.internal_reserved_connections setting 2022-03-02 19:13:53 +01:00
Onder Kalaci e80a36c4b6 Improve visibility rules for non-privileged roles
It seems like our approach was way too restrictive and some places
were wrong. Now, we follow a very similar approach to pg_stat_activity.

Some of the changes are prerequisites for implementing citus_dist_stat_activity
via citus_stat_activity.
2022-03-02 18:04:01 +01:00
Onder Kalaci 35ec9721b4 Add a new API for enabling Citus MX for clusters upgrading from earlier versions
Clusters created pre-Citus 11 mostly didn't have metadata sync enabled.
For those clusters, we add a utility UDF which fixes some minor issues
and syncs the necessary objects to the workers.
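
A sketch of the intended usage, assuming the UDF is the `citus_finalize_upgrade_to_citus11()` function that shipped with 11.0:

```SQL
-- run once on the coordinator after ALTER EXTENSION citus UPDATE
SELECT citus_finalize_upgrade_to_citus11();
```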
2022-03-02 17:02:55 +01:00
Marco Slot dcfbb51b6b Revert "Build Columnar.so and make Citus depends on it (#5661)"
This reverts commit a4133c69e8.
2022-03-02 11:33:15 +01:00
Ahmet Gedemenli e1809af376 Propagate CREATE AGGREGATE commands 2022-03-02 10:52:43 +03:00
ywj a4133c69e8
Build Columnar.so and make Citus depends on it (#5661)
* [Columnar] Build columnar.so and let citus depends on it


Co-authored-by: Yanwen Jin <yanwjin@microsoft.com>
Co-authored-by: Ying Xu <32597660+yxu2162@users.noreply.github.com>
Co-authored-by: jeff-davis <Jeffrey.Davis@microsoft.com>
2022-03-01 23:31:14 +03:00
Nils Dijk 65bd540943
Feature: configure object propagation behaviour in transactions (#5724)
DESCRIPTION: Add GUC to control ddl creation behaviour in transactions

Historically we would _not_ propagate objects when we were in a transaction block. Creation of distributed tables would not always work in sequential mode, hence creating an object and, in the same transaction, distributing a table that uses the just-created object wouldn't work. The benefit was that the user could still benefit from parallelism.

Now that the creation of distributed tables is supported in sequential mode it would make sense for users to force transactional consistency of ddl commands for distributed tables. A transaction could switch more aggressively to sequential mode when creating new objects in a transaction.

We don't change the default behaviour just yet.

Also, many objects would not even propagate their creation when the transaction was already set to sequential, leaving the possibility of a self deadlock. The new policy checks solve this discrepancy between objects as well.
2022-03-01 17:29:31 +03:00
Burak Velioglu f17872aed4
Expand functions while resolving dependencies 2022-03-01 17:08:46 +03:00
Gledis Zeneli b825232ecb
Handle rebalance / replication when a node is disabled (Fix #5664) (#5729)
The issue in question is caused when rebalance / replication calls `FullShardPlacementList`, which returns all shard placements (including those on nodes disabled with `citus_disable_node`). Eventually, `FindFillStateForPlacement` looks for the state across active workers and fails to find a state for the placements on the disabled workers, causing a seg fault shortly after.

Approach:
* `ActivePlacementHash` was not using the status of the shard placement's node to determine if the node is active. Initially, I just fixed that.
* Additionally, I refactored the code which handles active shards in replication / rebalance to:
	* use a single function to determine if a shard placement is active. 
	* do the shard active shard filtering before calling `RebalancePlacementUpdates` and `ReplicationPlacementUpdates`, so test methods like `shard_placement_rebalance_array` and `shard_placement_replication_array` which have different shard placement active requirements can do their own filtering while using the same rebalance / replicate logic that `rebalance_table_shards` and `replicate_table_shards` use. 

Fix #5664
2022-02-25 19:54:30 +03:00
Hanefi Onaldi 6c25eea62f Fix some typos in comments 2022-02-24 19:48:52 +03:00
Onder Kalaci df95d59e33 Drop support for CitusInitiatedBackend
CitusInitiatedBackend was a premature implementation of the whole
GlobalPID infrastructure. We used it to track whether any individual
query was triggered by Citus or not.

Now that GlobalPID is in place, we don't need
CitusInitiatedBackend; in fact, it could even be wrong.
2022-02-24 12:12:43 +01:00
Marco Slot 0c4e3cb69c Drop worker_partition_query_result on downgrade 2022-02-24 10:18:56 +01:00
Hanefi Onaldi f4e8af2c22
Do not acquire locks on node metadata explicitly 2022-02-24 03:19:56 +03:00
Hanefi Onaldi b70949ae8c
Lock nodes when building ddl task lists 2022-02-24 03:19:56 +03:00
Marco Slot ef1ceb3953 Only use a single placement for map tasks 2022-02-23 19:40:21 +01:00
Marco Slot 490765a754 Enable re-partition joins after local execution 2022-02-23 19:40:21 +01:00
Marco Slot 3cd9aa655a Stop using citus.binary_worker_copy_format 2022-02-23 19:40:21 +01:00
Marco Slot 5ac0d31e8b Fix re-partition hash range generation 2022-02-23 19:40:21 +01:00
Marco Slot 72d8fde28b Use intermediate results for re-partition joins 2022-02-23 19:40:21 +01:00
Nils Dijk 1fb970224e
Fix: partitioned index dependencies (#5741)
#5685 introduced the resolution of dependencies for indices. This missed support for indices on partitioned tables. This change adds support for partitioned indices to the dependency resolution code.
2022-02-23 17:53:26 +03:00
Teja Mupparti a62901396b Allow unsafe triggers via a GUC 2022-02-21 22:45:17 -08:00
Onder Kalaci 95d5918967 Properly set worker_query and use 2022-02-21 18:22:33 +01:00
Onder Kalaci dffcafc096 Use global pids in citus_lock_waits 2022-02-21 17:46:34 +01:00
Onder Kalaci 331af3dce8 Dumping wait edges optionally scans all backends
Before this commit, dumping wait edges could only be used for
distributed deadlock detection purposes. With this commit,
we open up the possibility of using it for any backend.
2022-02-21 17:37:07 +01:00
Halil Ozan Akgul f6cd4d0f07 Overrides pg_cancel_backend and pg_terminate_backend to accept global pid 2022-02-21 16:41:35 +03:00
Ahmet Gedemenli c1d5ca9896 Do distributed check first, for DropSchema stmts 2022-02-21 14:43:04 +03:00
Ahmet Gedemenli 2bc6a00408 Refactor CreateDistributedTable to take column name 2022-02-21 12:07:17 +03:00
yxu2162 8974b2de66 Copied CheckCitusVersion over to Columnar to handle a dependency issue. If we split columnar into two extensions, this will later be changed to CheckColumnarVersion. 2022-02-18 09:47:39 -08:00
Burak Velioglu fa6866ed36
Start to propagate functions to worker nodes with the
CREATE FUNCTION command together with their dependencies.

If the function depends on any non-distributable object,
the function will be created only locally. The parameterless
version of create_distributed_function becomes obsolete
with this change; it will be deprecated in a subsequent PR.
2022-02-18 13:56:51 +03:00
gledis69 a14fada153 Prevent Deadlocks When a Worker Tries to Create Collation (Fix #5583)
* When a worker tried to create a collation which had a dependency on the same worker node,
it would cause a deadlock; now it throws the correct "not a coordinator" error.
2022-02-18 12:28:02 +03:00
Teja Mupparti 46fa47beea Force-delegated functions' distribution argument must be reset as soon as the routine completes execution,
and not wait until the top level Executor ends. This fixes issue #5687
2022-02-17 10:48:30 -08:00
Nils Dijk 768b320470
reuse GetRoleSpecObjectForUser 2022-02-17 13:16:10 +01:00
Nils Dijk ea86f9f94e
Add support for TEXT SEARCH CONFIGURATION objects (#5685)
DESCRIPTION: Implement TEXT SEARCH CONFIGURATION propagation

The change adds support to Citus for propagating TEXT SEARCH CONFIGURATION objects. TSConfig objects cannot always be created in one create statement, and instead require a create statement followed by many alter statements to get turned into the object they should represent.

To support this we add functionality to the worker to create or replace objects based on a list of statements. When the lists of the local object and the remote object correspond 1:1 we skip the creation of the object and simply mark it distributed. This is especially important for TSConfig objects as initdb pre-populates databases with a dozen configurations (for many different languages).

When the user creates a new TSConfig based on the copy of an existing configuration there is no direct link to the object copied from. Since there is no link we can't simply rely on propagating the dependencies to the worker and send a qualified
2022-02-17 13:12:46 +01:00
Ahmet Gedemenli a1c3580c64 Support TRUNCATE for foreign tables 2022-02-17 09:59:53 +03:00
Onder Kalaci abd5b1c506 Prevent any monitoring view/udf from showing already exited backends
The low-level StoreAllActiveTransactions() function filters out
backends that have exited.

Before this commit, if you ran a pgbench, afterwards you'd still
see the exited backends show up:
```SQL
 select count(*) from get_global_active_transactions();
┌───────┐
│ count │
├───────┤
│   538 │
└───────┘
```

After this patch, only active backends show-up:

```SQL
 select count(*) from get_global_active_transactions();
┌───────┐
│ count │
├───────┤
│    72 │
└───────┘
```
2022-02-14 17:34:32 +01:00
Ahmet Gedemenli 0411a98c99
Refactor EnsureSequentialMode functions (#5704) 2022-02-14 18:38:21 +03:00
Gledis Zeneli badfd561b2
Prevent Citus table functions from being called on shards (Fix #5610) (#5694)
DESCRIPTION: Prevent Citus table functions from being called on shards

The operations that guard against using shards are:
* Create Local Table
* Create distributed table (which affects reference table creation as well).

* I used `ErrorIfRelationIsAKnownShard` instead of `ErrorIfIllegallyChangingKnownShard`.
`ErrorIfIllegallyChangingKnownShard` allows the operation if `citus.enable_manual_changes_to_shards` is set,
but I am not sure it ever makes sense to create a distributed, reference, or citus local table out of a shard.

I tried to go over the code to identify other UDFs where shards could be illegally changed, but I could not find any others.
My knowledge of the codebase is not solid enough for me to say for sure.

Fixes #5610
2022-02-14 16:06:48 +03:00
Önder Kalacı dc6c194916
Show IDLE backends in citus_dist_stat_activity (#5700)
* Break the dependency on the CitusInitiatedBackend infrastructure

With this change, we start to show non-distributed backends as well
in citus_dist_stat_activity. I think that
  (a) it is essential for making citus_lock_waits work for backends
      blocked on DDL commands.
  (b) it is more expected from the user's perspective. The name of
      the view is a little inconsistent now (e.g., citus_dist_stat_activity)
      but we are already planning to improve the names with follow-up
      PRs.

Also, now that we have global pids assigned, CitusInitiatedBackend
becomes obsolete.
2022-02-10 08:59:28 -08:00
Ahmet Gedemenli 76b63a307b Propagate create/drop schema commands 2022-02-10 14:58:09 +03:00
Marco Slot d0711ea9b4 Delegate function calls in FROM outside of transaction block 2022-02-09 20:56:25 +01:00
Onder Kalaci 1c30f61a70 Prevent citus.node_conninfo to use "application_name"
With https://github.com/citusdata/citus/pull/5657, Citus uses
a fixed application_name while connecting to remote nodes
for internal purposes.

It means that we cannot allow users to override it via
citus.node_conninfo.
2022-02-09 13:22:04 +01:00
Teja Mupparti 1e3c8e34c0 Allow create_distributed_function() on a function owned by an extension
Implement #5649
Allow create_distributed_function() on functions owned by extensions

1) Only update pg_dist_object, and do not propagate CREATE FUNCTION.
2) Ensure the corresponding extension is in pg_dist_object.
3) Verify that if dependencies exist on the function, they resolve to the extension.
4) Impact on node-scaling: We build a list of ddl commands based on all objects in
   pg_dist_object. We need to omit the ddls for the extension function, as it
   will get propagated by virtue of the extension creation.
5) Extra checks for functions coming from extensions, to not propagate changes
   via ddl commands, even though the function is marked as distributed in pg_dist_object
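
A sketch of the newly allowed call, using the `seg` extension as a stand-in example:

```SQL
CREATE EXTENSION seg;
-- the function is owned by the extension: only pg_dist_object is updated,
-- and no CREATE FUNCTION command is propagated
SELECT create_distributed_function('seg_in(cstring)');
```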
2022-02-08 11:52:56 -08:00
Halil Ozan Akgul 8ee02b29d0 Introduce global PID 2022-02-08 16:49:38 +03:00
Burak Velioglu ab248c1785
Check object ownership while creating pg_dist_object entries on remote 2022-02-07 17:50:48 +03:00
Burak Velioglu 8ae7577581
Use superuser connection while syncing dependent objects' pg_dist_object tuples 2022-02-07 17:50:45 +03:00
Marco Slot 872f0a79db Remove random shard placement policy 2022-02-06 21:55:58 +01:00
Marco Slot 0cae8e7d6b Remove local-node-first shard placement 2022-02-06 21:36:34 +01:00
Teja Mupparti c8e504dd69 Fix the issue #5673
If the expression is simple, such as SELECT function() or PERFORM function()
in PL/pgSQL code, the PL engine does a simple expression evaluation, which can't
interpret the Citus CustomScan node. The code checks for simple expressions when
executing a UDF but missed the DO-block scenario; this commit fixes it.
2022-02-04 15:44:53 -08:00
Ying Xu b5c116449b
Removed dependency from EnsureTableOwner (#5676)
Removed the dependency on EnsureTableOwner. Also removed pg_fini() and columnar_tableam_finish(). Still need to remove the CheckCitusVersion dependency to make columnar_tableam.h dependency-free from Citus.
2022-02-04 12:45:07 -08:00
Onur Tirtir 79442df1b7
Fix coordinator/worker query targetlists for agg. that we cannot push-down (#5679)
Previously, we were wrapping targetlist nodes with Vars that reference
the result of the worker query if the node itself was not a `Const` or
a `Param`. Indeed, we should not do that unless the node itself is
a `Var` node or contains a `Var` within it (e.g.: `OpExpr(Var(column_a) > 2)`).
Otherwise, when the worker query returns an empty result set, the combine
query execution would crash, since the `Var` would be pointing to an empty
tuple slot, which is not desirable for the node-executor methods.
2022-02-04 05:37:25 -08:00
Onder Kalaci 72d7d92611 Apply code review feedback 2022-02-04 10:52:57 +01:00
Onder Kalaci ff234fbfd2 Unify old GUCs into a single one
Replaces citus.enable_object_propagation with citus.enable_metadata_sync

Also, within the Citus 11 release cycle, we added citus.enable_metadata_sync_by_default,
which is also replaced with citus.enable_metadata_sync.

In essence, when citus.enable_metadata_sync is set to true, all the objects
and the metadata are sent to the remote node.

We strongly advise that users never change the value of
this GUC.
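
For completeness, the consolidated GUC is toggled as below, though changing it is strongly discouraged as stated above:

```SQL
SET citus.enable_metadata_sync TO off;  -- discouraged
SET citus.enable_metadata_sync TO on;   -- the expected setting
```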
2022-02-04 10:52:56 +01:00
Teja Mupparti f31bce5b48 Fixes the issue seen in https://github.com/citusdata/citus-enterprise/issues/745
With this commit, rebalancer backends are identified by application_name = citus_rebalancer
and the regular internal backends are identified by application_name = citus_internal
2022-02-03 09:40:46 -08:00
jeff-davis b072b9235e
Columnar: fix checksums, broken in a4067913. (#5669)
Checksums must be set directly before writing the page. log_newpage()
sets the page LSN, and therefore invalidates the checksum.
2022-02-02 13:22:11 -08:00
Onder Kalaci 650243927c Relax some transactional limitations on activate node
We already enforce EnsureSequentialModeMetadataOperations(), and given that all of node activation runs in a transaction, we should be fine
2022-02-01 15:56:55 +01:00
Onder Kalaci 34d91009ed Update outdated comment
As of the current HEAD, we support sequences as first class objects
2022-02-01 15:37:10 +01:00
Marco Slot 63c6896716 Enable function call pushdown from workers 2022-02-01 14:13:25 +01:00
Burak Velioglu f88cc230bf
Handle tables and objects as metadata. Update UDFs accordingly
With this commit we've started to propagate sequences and shell
tables within the object dependency resolution. So, ensuring the
dependencies of any object will consider shell tables and sequences
as well. The separate logic for shell tables and sequences has
been removed.

Since the logic for both shell tables and sequences was implemented
as part of the metadata handling before this change, we were propagating
them while syncing table metadata. With this commit we've divided the
metadata (which hereafter means anything except shards) syncing
logic into multiple parts and implemented it as part of
ActivateNode. You can check the functions called in ActivateNode
for the definitions of the different kinds of metadata.

The definitions of start_metadata_sync_to_node and citus_activate_node
have also been updated. citus_activate_node will basically create
an active node with all metadata and reference table shards.
start_metadata_sync_to_node will be the same as citus_activate_node
except for replicating reference tables. stop_metadata_sync_to_node
will remove all the metadata. All of those UDFs need to be called
by a superuser.
2022-01-31 16:20:15 +03:00
Önder Kalacı f68ac4a7cf
Consider foreign keys between reference tables (#5659)
In #5071, we avoided edge cases, but there are foreign key constraints involved as well.

This commit makes sure we cover those as well.
2022-01-28 13:38:14 +01:00
Heikki Linnakangas a40679139b
Use smgrextend() when extending relation, and WAL-log first. (#5654)
When creating a new table, we bypass the buffer cache and write the
initial pages directly with smgrwrite(). However, you're supposed to
use smgrextend() when extending a relation, rather than smgrwrite().
There isn't much difference between them, but smgrextend() updates the
relation size cache, which seems important, although I haven't seen
any real bugs caused by that.

Also, write the block to disk only after WAL-logging it, so that we
can include the LSN of the WAL record in the version that we write
out. Currently, the page as written to disk has LSN 0. That doesn't
cause any user-visible issues either, at worst it could make us
WAL-log a full page image of the page earlier than necessary, but that
doesn't matter currently because we WAL-log full page images of all
changes anyway.

I bumped into that issue with LSN 0 in the page header when testing
Citus with Zenith (https://github.com/zenithdb/zenith/issues/1176).
Zenith contains a check that PANICs if you write a block to disk
without WAL-logging it, and it works by checking the LSN of the page
that's written out. In this case, we are WAL-logging the page even
though the LSN on the page is 0, so it was a false alarm, but I'd love
to get this changed in Citus to keep the check in Zenith simple.

A downside of WAL-logging the page first is that if you run out of
disk space, you have already created the WAL record. So if you then
crash and restart, WAL recovery will likely run out of disk space,
too, which is bad. In practice, we have the same problem in other
places, like rewriteheap.c. Also, if you are on the brink of running
out of disk space, you will probably run out at WAL replay anyway,
regardless of which order we write these few pages. But if we wanted
to fix that, we could first extend the relation with zeros, and then
WAL-log the pages. That's how heap extension works.

It would be even nicer to use the buffer cache for this, and skip the
smgrimmedsync() on the relation. However, that would require more
work, because we don't have the Relation struct for the relation here.
We could use ReadBufferWithoutRelcache(), but that doesn't work for
unlogged tables. Unlogged tables are currently not supported
(https://github.com/citusdata/citus/issues/4742), but that would
become a problem if we want to support them in the future.
CreateFakeRelcacheEntry() also doesn't work with unlogged tables. We
could do things differently for logged and unlogged tables, but that
complicates the code further.

Co-authored-by: jeff-davis <Jeffrey.Davis@microsoft.com>
2022-01-27 12:04:08 -08:00
Onder Kalaci b26eeaecd3 Use a fixed application_name while connecting to remote nodes
Citus heavily relies on application_name, see
`IsCitusInitiatedRemoteBackend()`.

But if the user sets the application name, such as export PGAPPNAME=test_name,
Citus uses that name while connecting to the remote node.

With this commit, we ensure that Citus always connects with
the "citus" application_name to the remote nodes.
2022-01-27 10:46:25 +01:00
Onder Kalaci b9b419ef16 Allow creating distributed tables in sequential mode
With https://github.com/citusdata/citus/pull/2780, we allow
COPY to use any number of connections that the executor used
in a tx block.

This means that, while COPYing data to the shards, create_distributed_table
can allow sequential mode.
2022-01-26 12:58:18 +01:00
Onur Tirtir 8c8d696621
Not fail over to local execution when it's not supported (#5625)
We fall back to local execution if we cannot establish any more
connections to the local node. However, we should not do that for
commands that we don't know how to execute locally (or that we know we
shouldn't execute locally). To fix that, we take localExecutionSupported
into account in CanFailoverPlacementExecutionToLocalExecution too.

Moreover, we also emit a more accurate hint message to inform the user
whether the execution failed because local execution was
disabled by them, or because local execution wasn't possible for the given
command.
2022-01-25 16:43:21 +01:00
Onur Tirtir ff3913ad99
Copy errmsg for distributed deadlock error into heap (#5641)
The multi_log_hook() hook is called by EmitErrorReport() when emitting the
ereport either to the frontend or to the server logs. And some callers of
EmitErrorReport() (e.g.: errfinish()) seem to assume that the string fields
of the given ErrorData object need to be freed. For this reason, we copy the
message into the heap here.

I don't think we have faced such a problem before, but it seems worth
fixing as it is theoretically possible due to the reasoning above.
2022-01-24 06:27:41 -08:00
Ahmet Gedemenli c838fb428f Refactor GenerateGrantOnSchemaStmtForRights 2022-01-24 11:31:59 +03:00
Onur Tirtir 4dc38e9e3d
Use EnsureCompatibleLocalExecutionState instead (#5640) 2022-01-21 15:37:59 +01:00
Ahmet Gedemenli 8647682c11 Fix typo: taget/target 2022-01-21 10:35:56 +03:00
Onur Tirtir 181111b84f Drop ruleutils copied for statistics 2022-01-20 17:28:19 +03:00
Onur Tirtir 7b59295af2 Drop ruleutils copied for triggers 2022-01-20 17:28:19 +03:00
Teja Mupparti 54862f8c22 (1) Functions will be delegated even when present in the scope of an explicit
BEGIN/COMMIT transaction block or in a UDF calling another UDF.
(2) Prohibit/limit the delegated function from doing a 2PC (or any work on a
remote connection).
(3) Have a safety net to ensure (2), i.e. we should block the connections
from the delegated procedure or make sure that no 2PC happens on the node.
(4) Such delegated functions are restricted to using only the distributed argument
value.

Note: To limit the scope of the project we are considering only functions (not
procedures) for the initial work.

DESCRIPTION: Introduce a new flag "force_delegation" in create_distributed_function(),
which will allow a function to be delegated in an explicit transaction block.

Fixes #3265

Once the function is delegated to the worker, the following happens on that node during planning:

distributed_planner()
TryToDelegateFunctionCall()
CheckDelegatedFunctionExecution()
EnableInForceDelegatedFuncExecution()
Save the distribution argument (Constant)
ExecutorStart()
CitusBeginScan()
IsShardKeyValueAllowed()
Ensure to not use non-distribution argument.

ExecutorRun()
AdaptiveExecutor()
StartDistributedExecution()
EnsureNoRemoteExecutionFromWorkers()
Ensure all the shards are local to the node in the remoteTaskList.
NonPushableInsertSelectExecScan()
InitializeCopyShardState()
EnsureNoRemoteExecutionFromWorkers()
Ensure all the shards are local to the node in the placementList.

This also fixes a minor issue: Properly handle expressions+parameters in distribution arguments
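
A sketch of the new flag in use (hypothetical function/table names):

```SQL
SELECT create_distributed_function('process_order(bigint)', '$1',
                                   colocate_with := 'orders',
                                   force_delegation := true);
BEGIN;
SELECT process_order(42);  -- now delegated even inside the transaction block
COMMIT;
```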
2022-01-19 16:43:33 -08:00
Onur Tirtir 4a53967bdd
Remove an outdated comment from RelationIsAKnownShard (#5629) 2022-01-19 11:24:10 +01:00
Marco Slot 33bfa0b191 Hide shards from application_name's with a specific prefix 2022-01-18 15:20:55 +04:00
Ahmet Gedemenli e564220dd5
Fix typo: GetRelationTriggerFunctionDependencyList (#5626) 2022-01-17 18:17:07 +03:00
Ahmet Gedemenli 8936543b80
Create wrapper function CreateObjectAddressDependencyDefList (#5623) 2022-01-17 15:35:40 +03:00
Ying Xu 4dca662e97
Making Columnar Dependency Free from Citus (#5622)
* Removed distributed dependency in columnar_metadata.c

* Changed columnar_debug.c so that it no longer needed distributed/tuplestore and made it return a record instead of a tuplestore

* removed distributed/commands.h dependency

* Made columnar_tableam.c dependency-free

* Fixed spacing for columnar_store_memory_stats function

* indentation fix

* fixed test failures
2022-01-14 09:43:05 -08:00
Onur Tirtir 70d8e1fe97
Assert that we will create indexes on shards via local execution (#5620) 2022-01-13 17:09:57 +01:00
Halil Ozan Akgul 63cd90e5dd Add missing library to dependencies.c 2022-01-11 18:36:43 +03:00
Önder Kalacı 885601c02c
Require superuser while activating a node (#5609)
* Require superuser while activating a node

With this change, ActivateNode() (hence citus_add_node() and
citus_activate_node()) explicitly requires a superuser.

Before this commit, these functions were designed to work with
non-superuser roles with the relevant GRANTs given.

However, that is not a widely used way for calling the functions
above.

Due to the possibility of a non-superuser calling the UDFs, they were
designed in a way that some commands used additional
short-lived superuser connections. That was:
	(a) breaking transactional behavior (e.g., ROLLBACK
	    wouldn't fully roll back the whole transaction)
	(b) making it very complicated to reason about which
	    parts of the node activation go over which connections,
	    and becoming vulnerable to deadlocks / visibility issues.
2022-01-10 08:30:13 -08:00
Onur Tirtir 3cc44ed8b3
Tell other backends it's safe to ignore the backend that concurrently built the shell table index (#5520)
In addition to starting a new transaction, we also need to tell other
backends --including the ones spawned for connections opened to
localhost to build indexes on shards of this relation-- that concurrent
index builds can safely ignore us.

Normally, DefineIndex() only does that if index doesn't have any
predicates (i.e.: where clause) and no index expressions at all.
However, now that we already called standard process utility, index
build on the shell table is finished anyway.

The reason behind doing so is that we cannot guarantee not grabbing any
snapshots via adaptive executor, and the backends creating indexes on
local shards (if any) might block on waiting for current xact of the
current backend to finish, which would cause self deadlocks that are not
detectable.
2022-01-10 10:23:09 +03:00
Marco Slot ee3b50b026 Disallow remote execution from queries on shards 2022-01-07 17:46:21 +01:00
Ahmet Gedemenli 3c834e6693
Disable foreign distributed tables (#5605)
* Disable foreign distributed tables
* Add warning for existing distributed foreign tables
2022-01-07 18:12:23 +03:00
Onder Kalaci 7cb1d6ae06 Improve metadata connections
With https://github.com/citusdata/citus/pull/5493 we introduced
metadata specific connections.

With this connection we guarantee that there is a single metadata connection.
But note that this connection can be used for any other operation.
In other words, this connection is not only reserved for metadata
operations.

However, https://github.com/citusdata/citus-enterprise/issues/715 showed
us that the logic has a flaw. We allowed ineligible connections to be
picked as metadata connections: such as exclusively claimed connections
or not fully initialized connections.

With this commit, we make sure that we only consider eligible connections
for metadata operations.
2022-01-07 10:36:32 +01:00
Onder Kalaci 9f2d9e1487 Move placement deletion from disable node to activate node
We prefer the background daemon to only sync node metadata. That's
why we move placement metadata changes from disable node to
activate node. With that, we can make sure that disable node
only changes node metadata, whereas activate node syncs all
the metadata changes. In essence, we already expect all
nodes to be up when a node is activated. So, this does not change
the behavior much.
2022-01-07 09:56:03 +01:00
Hanefi Onaldi 9edfbe7718 Fix the default value for DeferShardDeleteOnMove
The default for the GUC citus.defer_drop_after_shard_move is true. However,
we initialize the corresponding global variable with a false value.
2022-01-07 11:01:49 +03:00
Ahmet Gedemenli 45e423136c
Support foreign tables in MX (#5461) 2022-01-06 18:50:34 +03:00
Önder Kalacı 5305aa4246
Do not drop sequences when dropping metadata (#5584)
Dropping sequences means we would need to recreate them,
and hence lose the sequence state.

With this commit, we keep the existing sequences
such that resyncing wouldn't drop the sequence.

We do that by breaking the dependency of the sequence
from the table.
2022-01-06 09:48:34 +01:00
jeff-davis 2e03efd91e
Columnar: move DDL hooks to citus to remove dependency. (#5547)
Add a new hook ColumnarTableSetOptions_hook so that citus can get
control when the columnar table options change.
2022-01-04 23:26:46 -08:00
jeff-davis c9292cfad1
Make pg_version_compat.h and listutils.c dependency-free. (#5548)
Split distributed/version_compat.h into dependency-free
pg_version_compat.h, and the original which still has
dependencies. The original doesn't have much purpose, but until other
files have better discipline about including the correct header files,
then it's still needed.

Also make distributed/listutils.h dependency-free. Should be moved
outside of 'distributed' subdirectory, but that will cause significant
code churn, so leave for another cleanup patch.

Now both files can be included in columnar without creating a
dependency on citus.
2022-01-04 23:02:08 -08:00
jeff-davis 1546aa0d9f
Columnar: use proper generic WAL interface. (#5543)
Previously, we cheated by using the RM_GENERIC_ID record type, but not
actually using the generic WAL API. This worked because we always took
a full page image, and saved the extra work of allocating and copying
to a temporary page.

But it introduced complexity, and perhaps fragility, so better to just
use the API properly. The performance penalty for a serial data load
seems to be less than 1%.
2022-01-04 22:42:21 -08:00
Önder Kalacı 0a8b0b06c6
Do not allow distributed functions on non-metadata synced nodes (#5586)
Before this commit, Citus was triggering metadata syncing
in the background when a function is distributed. However,
with Citus 11, we expect all clusters to have metadata synced
enabled. So, we do not expect any nodes not to have the metadata.

This change:
	(a) pro: simplifies the code and opens up possibilities
	    to simplify further by reducing the scope of
	    the bg worker to only syncing node metadata
	(b) pro: explicitly asks users to sync the metadata such that
	    any unforeseen impact can be easily detected
	(c) con: for distributed functions without a distribution
	    argument, we do not necessarily require the metadata
	    to be synced. However, for completeness and simplicity, we
	    do so.
2022-01-04 13:12:57 +01:00
Önder Kalacı d33650d1c1
Record if any partitioned Citus tables during upgrade (#5555)
With Citus 11, the default behavior is to sync the metadata.
However, partitioned tables created pre-Citus 11 might have
index names that are not compatible with metadata syncing.

See https://github.com/citusdata/citus/issues/4962 for the
details.

With this commit, we record the existence of partitioned tables
such that we can fix it later if any exists.
2021-12-27 03:33:34 -08:00
Önder Kalacı c9127f921f
Avoid round trips while fixing index names (#5549)
With this commit, fix_partition_shard_index_names()
works significantly faster.

For example,

32 shards, 365 partitions, 5 indexes: runtime drops from ~120 seconds to ~44 seconds
32 shards, 1095 partitions, 5 indexes: runtime drops from ~600 seconds to ~265 seconds

`queryStringList` can be really long, because it may contain #partitions * #indexes entries.

Before this change, we were actually going through the executor where each command
in the query string triggers 1 round trip per entry in queryStringList.

The aim of this commit is to avoid the round-trips by creating a single query string.

I first simply tried sending `q1;q2;..;qn`. However, the executor is designed to
handle `q1;q2;..;qn` type of query executions via the infrastructure mentioned
above (e.g., by tracking the query indexes in the list and doing 1 statement
per round trip).

Another option could have been to change the executor such that it only
tracks the query index when `queryStringList` is provided, not when the
queryString includes multiple `;`s. That is (a) more work, (b) could cause weird edge
cases with failure handling, and (c) felt like coding a special case into the executor.
2021-12-27 10:29:37 +01:00
Ahmet Gedemenli 042d45b263 Propagate foreign server ops 2021-12-23 17:54:04 +03:00
Talha Nisanci e196d23854
Refactor AttributeEquivalenceId (#5006) 2021-12-23 13:19:02 +03:00
Hanefi Onaldi 76176caea7 Fix typo s/exlusive/exclusive/ 2021-12-23 01:35:01 +03:00
Hanefi Onaldi 1af8ca8f7c
Fix statical analysis findings (#5550) 2021-12-22 18:16:11 +03:00
Ahmet Gedemenli 8e4ff34a2e Do not include return table params in the function arg list
(cherry picked from commit 90928cfd74)

Fix function signature generation

Fix comment typo

Add test for worker_create_or_replace_object

Add test for recreating distributed functions with OUT/TABLE params

Add test for recreating distributed function that returns setof int

Fix test output

Fix comment
2021-12-21 19:01:42 +03:00
Marco Slot 2eef71ccab Propagate SET TRANSACTION commands 2021-12-18 11:31:39 +01:00
Onder Kalaci fc98f83af2 Add citus.grep_remote_commands
Simply applies

```SQL
SELECT textlike(command, citus.grep_remote_commands)
```
And, if it returns true, the command is logged. Otherwise, it is not logged.

When citus.grep_remote_commands is the empty string, all commands are
logged.
2021-12-17 11:47:40 +01:00
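For the commit above, a minimal usage sketch (the table name is a placeholder; the GUC filters what citus.log_remote_commands logs):

```SQL
SET citus.log_remote_commands TO on;
SET citus.grep_remote_commands TO '%INSERT%';

-- only the remote INSERT commands sent to workers are logged now
INSERT INTO dist_table VALUES (1);
```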
Onur Tirtir cc4c83b1e5
HAVE_LZ4 -> HAVE_CITUS_LZ4 (#5541) 2021-12-16 16:21:52 +03:00
Hanefi Onaldi 9d4d73898a
Move healthcheck logic into new file (#5531)
and add a missing `CheckCitusVersion(ERROR)` call
2021-12-15 15:58:20 -08:00
Hanefi Onaldi 29e4516642 Introduce citus_check_cluster_node_health UDF
This UDF coordinates connectivity checks across the whole cluster.

This UDF gets the list of active readable nodes in the cluster, and
coordinates all connectivity checks in sequential order.

The algorithm is:

for sourceNode in activeReadableWorkerList:
    c = connectToNode(sourceNode)
    for targetNode in activeReadableWorkerList:
        result = c.execute(
            "SELECT citus_check_connection_to_node(targetNode.name,
                                                   targetNode.port")
        emit sourceNode.name,
             sourceNode.port,
             targetNode.name,
             targetNode.port,
             result

- result -> true  ->  connection attempt from source to target succeeded
- result -> false -> connection attempt from source to target failed
- result -> NULL  -> connection attempt from the current node to source node failed

I suggest you use the following query to get an overview of the connectivity:

SELECT bool_and(COALESCE(result, false))
FROM citus_check_cluster_node_health();

Whenever this query returns false, there is a connectivity issue; check in detail.
2021-12-15 01:41:51 +03:00
Hanefi Onaldi 13fff9c37a Remove NOOP tuplestore_donestoring calls
PostgreSQL has not needed calls to this function since the 7.4 release; it
is a NOOP.

For more details, check the PostgreSQL commit below:

commit dd04e958c8b03c0f0512497651678c7816af3198
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Sun Mar 9 03:34:10 2003 +0000

    tuplestore_donestoring() isn't needed anymore, but provide a no-op
    macro definition so as not to create compatibility problems.

diff --git a/src/include/utils/tuplestore.h b/src/include/utils/tuplestore.h
index b46babacd1..76fe9fb428 100644
--- a/src/include/utils/tuplestore.h
+++ b/src/include/utils/tuplestore.h
@@ -17,7 +17,7 @@
  * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $Id: tuplestore.h,v 1.8 2003/03/09 02:19:13 tgl Exp $
+ * $Id: tuplestore.h,v 1.9 2003/03/09 03:34:10 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -41,6 +41,9 @@ extern Tuplestorestate *tuplestore_begin_heap(bool randomAccess,

 extern void tuplestore_puttuple(Tuplestorestate *state, void *tuple);

+/* tuplestore_donestoring() used to be required, but is no longer used */
+#define tuplestore_donestoring(state)  ((void) 0)
+
 /* backwards scan is only allowed if randomAccess was specified 'true' */
 extern void *tuplestore_gettuple(Tuplestorestate *state, bool forward,
                                        bool *should_free);
2021-12-14 18:55:02 +03:00
Halil Ozan Akgul a951e52ce8 Fix drop index trying to drop coordinator local indexes on metadata worker nodes 2021-12-14 11:28:08 +03:00
Burak Velioglu e8534c1dd5
Drop sequence metadata from workers explicitly 2021-12-06 19:25:51 +03:00
Burak Velioglu 21194c3b9d
Mark sequence distributed explicitly while syncing metadata
Since sequences are not marked as distributed at table creation time when no
metadata worker node exists, we now explicitly mark all sequences as
distributed while syncing metadata.
2021-12-06 19:25:51 +03:00
Burak Velioglu 6d849cf394
Allow delegating function from worker nodes
We've both allowed delegating functions and procedures from worker nodes
and also prevented delegation if a function/procedure has already been
propagated from another node.
2021-12-06 19:25:51 +03:00
Burak Velioglu a8b1ee87f7
Increment command counter after altering the sequence type 2021-12-06 19:25:51 +03:00
Burak Velioglu ed8e32de5e
Sync pg_dist_object on an update and propagate while syncing to a new node
Before that PR we were updating citus.pg_dist_object, which keeps
the metadata related to objects on Citus, only on the coordinator node. In
order to allow using those objects from worker nodes (or erroring out with a
proper error message), we've started to propagate that metadata to worker
nodes as well.
2021-12-06 19:25:50 +03:00
Hanefi Onaldi 56e9b1b968 Introduce UDF to check worker connectivity
citus_check_connection_to_node runs a simple query on a remote node and
reports whether this attempt was successful.

This UDF will be used to make sure each worker node can connect to all
the worker nodes in the cluster.

parameters:
nodename: required
nodeport: optional (default: 5432)

return value:
boolean success
2021-12-03 02:30:28 +03:00
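A minimal usage sketch for the UDF above (the hostnames are placeholders):

```SQL
-- returns true when a simple query succeeds on the target node
SELECT citus_check_connection_to_node('worker-1', 5432);

-- nodeport defaults to 5432
SELECT citus_check_connection_to_node('worker-2');
```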
Onder Kalaci 549edcabb6 Allow disabling node(s) when multiple failures happen
As of master branch, Citus does all the modifications to replicated tables
(e.g., reference tables and distributed tables with replication factor > 1),
via 2PC and avoids any shardstate=3. As a side-effect of those changes,
handling node failures for replicated tables changes.

With this PR, when one (or multiple) node failures happen, the users would
see query errors on modifications. If the problem is intermittent, that's OK;
once the node failure(s) recover by themselves, the modification queries would
succeed. If the node failure(s) are permanent, the users should call
`SELECT citus_disable_node(...)` to disable the node. As soon as the node is
disabled, modifications would start to succeed. However, the old node now falls
behind. It means that, when the node is up again, the placements should be
re-created on the node. First, use `SELECT citus_activate_node()`. Then, use
`SELECT replicate_table_shards(...)` to replicate the missing placements on
the re-activated node.
2021-12-01 10:19:48 +01:00
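The recovery flow described above as a sketch (node name, port, and table name are placeholders):

```SQL
-- the node failed permanently: disable it so modifications succeed again
SELECT citus_disable_node('worker-1', 5432);

-- later, once the node is back up, re-activate it and
-- re-create the placements it is missing
SELECT citus_activate_node('worker-1', 5432);
SELECT replicate_table_shards('replicated_table'::regclass);
```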
Onder Kalaci d405993b57 Make sure to use a dedicated metadata connection
With this commit, we make sure to use a dedicated connection per
node for all the metadata operations within the same transaction.

This is needed because the same metadata (e.g., metadata includes
the distributed table on the workers) can be modified across
multiple connections.

With this connection we guarantee that there is a single metadata connection.
But note that this connection can be used for any other operation.
In other words, this connection is not only reserved for metadata
operations.
2021-11-26 14:36:28 +01:00
Onder Kalaci 38b08ebde9 Generalize the error checks while removing node
The checks for preventing node removal are very much reference-table
centric. We are soon going to add the same checks for replicated
tables. So, make the checks generic such that:
	 (a) replicated tables fit naturally
	 (b) we can use the same checks in `citus_disable_node`.
2021-11-26 14:25:29 +01:00
Onder Kalaci 121f5c4271 Active placements can only be on active nodes
We re-define the meaning of active shard placement. It used
to only be defined via shardstate == SHARD_STATE_ACTIVE.

Now, we also add one more check. The worker node that the
placement is on should be active as well.

This is a preparation for supporting citus_disable_node()
for MX with multiple failures at the same time.

With this change, the maintenance daemon only needs to
sync the "node metadata" (e.g., pg_dist_node), not the
shard metadata.
2021-11-26 09:14:33 +01:00
Onder Kalaci b4931f7345 Do not acquire locks on reference tables when a node is removed/disabled
Before this commit, we acquire the metadata locks on the reference
tables while removing/disabling a node on all the MX nodes.

Although it has some marginal benefits (a concurrent modification
during remove/disable node blocks instead of erroring out), the
drawbacks seem worse. Both citus_remove_node and citus_disable_node
are not tolerant to multiple node failures.

With this commit, we relax the locks. The implication is that while
a node is removed/disabled, users might see query errors. On the
other hand, this change makes removing/disabling nodes more
tolerant to multiple node failures.
2021-11-26 09:08:25 +01:00
Onur Tirtir 76b8006a9e
Allow overwriting columnar storage pages written by aborted xacts (#5484)
When refactoring storage layer in #4907, we deleted the code that allows
overwriting a disk page previously written but not known by metadata.

Readers can see the change that introduced the code allowing this in
commit a8da9acc63.

The reasoning was that, as of 10.2, we started aligning page
reservations (`AlignReservation`) for subsequent writes right after
allocating pages from disk. That means, even if writer transaction
fails, subsequent writes are guaranteed to allocate a new page and write
there. For this reason, attempting to write to a previously allocated
page is not possible for a columnar table that the user created when using
v10.2.x.

However, since older versions of columnar don't do that, the following
example scenario can still result in writing to such a disk page, even if the
user has now upgraded to v10.2.x. This is because, when upgrading storage to
2.0 (`ColumnarStorageUpdateIfNeeded`), we calculate `reservedOffset` of
the metapage based on the highest used address known by stripe
metadata (`GetHighestUsedAddressAndId`). However, stripe metadata
doesn't have entries for aborted writes. As a result, highest used
address would be computed by ignoring pages that are allocated but not
used.

- User attempts writing to columnar table on Citus v10.0x/v10.1x.
- Write operation fails for some reason.
- User upgrades Citus to v10.2.x.
- When attempting to write to the same columnar table, they hit the "attempt
  to write columnar data .." error since a write operation done in the
  older version of columnar already allocated that page, and now we are
  overwriting it.

For this reason, with this commit, we re-do the change done in
a8da9acc63.

And for the reasons given above, it wasn't possible to add a test for
this commit via usual code-paths. For this reason, added a UDF only for
testing purposes so that we can reproduce the exact scenario in our
regression test suite.
2021-11-26 07:51:13 +01:00
Onur Tirtir 85da4fc2e0
Merge branch 'master' into col/pg-upgrade-dependency 2021-11-26 09:34:43 +03:00
Onur Tirtir 81af605e07
Fix typo: "no sharding pruning constraints" -> "no shard pruning constraints" (#5490) 2021-11-25 21:00:44 +01:00
Onur Tirtir 73f06323d8 Introduce dependencies from columnarAM to columnar metadata objects
During pg upgrades, we have seen that it is not guaranteed that a
columnar table will be created after metadata objects got created.
Prior to changes done in this commit, we had such a dependency
relationship in `pg_depend`:

```
columnar_table ----> columnarAM ----> citus extension
                                           ^  ^
                                           |  |
columnar.storage_id_seq --------------------  |
                                              |
columnar.stripe -------------------------------
```

Since `pg_upgrade` just follows a topological sort of the objects
when creating the database dump, the above dependency graph doesn't imply that
`columnar_table` should be created before metadata objects such as
`columnar.storage_id_seq` and `columnar.stripe` are created.

For this reason, with this commit we add new records to `pg_depend` to
make columnarAM depend on all rel objects living in the `columnar`
schema. That way, `pg_upgrade` will know it needs to create those before
creating `columnarAM`, and similarly, before creating any tables using
`columnarAM`.

Note that in addition to inserting those records via installation script,
we also do the same in `citus_finish_pg_upgrade()`. This is because,
`pg_upgrade` rebuilds catalog tables in the new cluster and that means,
we must insert them in the new cluster too.
2021-11-23 13:14:00 +03:00
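A sketch of how the new records could be inspected (assumes the access method is named `columnar`; the exact catalog contents may differ):

```SQL
-- list the relations in the columnar schema that columnarAM now depends on
SELECT refobjid::regclass
FROM pg_depend
WHERE classid = 'pg_am'::regclass
  AND objid = (SELECT oid FROM pg_am WHERE amname = 'columnar')
  AND refclassid = 'pg_class'::regclass;
```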
Burak Velioglu 6590f12de4
Merge branch 'master' into velioglu/make_object_lock_explicit 2021-11-22 13:55:36 +03:00
Burak Velioglu 12e05ad196
Sorted addresses before getting lock 2021-11-22 11:43:32 +03:00
Marco Slot 56eae48daf Stop updating shard range in citus_update_shard_statistics 2021-11-19 10:51:15 +01:00
Burak Velioglu 3a68263cc7
Change lock type 2021-11-19 12:03:17 +03:00
Burak Velioglu baeaca7bc5
Update comment 2021-11-19 10:51:56 +03:00
Hanefi Onaldi c0d43d4905
Prevent cache usage on citus_drop_trigger codepaths 2021-11-18 20:24:51 +03:00
Burak Velioglu 77dd12c09d
Merge branch 'master' into velioglu/make_object_lock_explicit 2021-11-18 20:18:07 +03:00
Hanefi Onaldi a3cc9b4e53
Remove case block that is identical to its neighbor (#5472) 2021-11-18 19:41:39 +03:00
Burak Velioglu b484d9b234
Make object locking explicit while adding dependencies 2021-11-18 19:34:00 +03:00
Marco Slot 9e6ca23286 Remove cstore_fdw-related logic 2021-11-16 13:59:03 +01:00
Önder Kalacı 8c0bc94b51
Enable replication factor > 1 in metadata syncing (#5392)
- [x] Add some more regression test coverage
- [x] Make sure returning works fine in case of
     local execution + remote execution
     (task->partiallyLocalOrRemote works as expected, already added tests)
- [x] Implement locking properly (and add isolation tests)
     - [x] We do #shardcount round-trips on `SerializeNonCommutativeWrites`.
           We made it a single round-trip.
- [x] Acquire locks for subselects on the workers & add isolation tests
- [x] Add a GUC to prevent modification from the workers, hence increase the
      coordinator-only throughput
       - The performance slightly drops (~15%), unless
         `citus.allow_modifications_from_workers_to_replicated_tables`
         is set to false
2021-11-15 15:10:18 +03:00
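A configuration sketch for the coordinator-only throughput trade-off in the last item (the GUC name is taken from the message above; applying it cluster-wide this way is an assumption):

```SQL
-- trade worker-side writes for higher coordinator-only throughput
ALTER SYSTEM SET citus.allow_modifications_from_workers_to_replicated_tables TO off;
SELECT pg_reload_conf();
```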
Onur Tirtir 25024b776e
Skip deleting options if columnar.options is already dropped (#5458)
Drop extension might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the below error when opening
columnar.options to delete records for the columnar table that we are
about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg; that is why I
added the test to after_pg_upgrade_schedule.
2021-11-12 12:30:09 +03:00
Ahmet Gedemenli 14a33d4e8e Introduce GUC citus.use_citus_managed_tables 2021-11-11 14:09:06 +03:00
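A minimal sketch of the GUC above in action (the table name is a placeholder):

```SQL
-- when enabled, plain CREATE TABLE on the coordinator adds the table
-- to Citus metadata as a citus managed (local) table
SET citus.use_citus_managed_tables TO on;
CREATE TABLE app_settings (key text PRIMARY KEY, value text);
```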
Hanefi Onaldi 3d9cec70fd
Update migration paths from 10.2 to 11.0 (#5459)
We recently introduced a set of patches to 10.2, and introduced 10.2-4
migration version. This migration version only resides on `release-10.2`
branch, and is missing on our default branch. This creates a problem
because we do not have a valid migration path from 10.2 to latest 11.0.

To remedy this issue, I copied the relevant migration files from
`release-10.2` branch, and renamed some of our migration files on
default branch to make sure we have a linear upgrade path.
2021-11-11 13:55:28 +03:00
Önder Kalacı 98ca6ba6ca
Allow lock_shard_resources to be called by the users with privileges (#5441)
Before this commit, we required the user to be the owner of the shard/table
in order to call lock_shard_resources.

However, that is too restrictive. We can have users with GRANTs
on the table who are not owners of the tables/shards.

With this commit, we allow such patterns.
2021-11-08 15:36:51 +01:00
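A hedged sketch of the now-permitted call (the shard id is a placeholder; the signature and the PostgreSQL lock-mode numbering, where 7 is ExclusiveLock, should be verified against the installed version):

```SQL
-- a user with GRANTs on the table, but not ownership, can now do:
SELECT lock_shard_resources(7, ARRAY[102008]::bigint[]);
```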
Onder Kalaci d5e89b1132 Unify distributed execution logic for single replicated tables
Citus does not acquire any executor locks for shard replication == 1.
With this commit, we unify this decision and exit early.
2021-11-08 13:52:20 +01:00
Önder Kalacı d5b371b2e0
Merge branch 'master' into naisila/fix-partitioned-index 2021-11-08 10:53:16 +01:00
naisila 385ba94d15 Run fix_partition_shard_index_names after each wrong naming command 2021-11-08 10:43:34 +01:00
Marco Slot 78866df13c Remove master_append_table_to_shard UDF 2021-11-08 10:43:24 +01:00
Marco Slot fba93df4b0 Remove copy into new append shard logic 2021-11-07 21:01:40 +01:00
Nils Dijk 3fcb456381
Refactor/partitioned result destreceiver (#5432)
This change creates a slightly higher abstraction of the `PartitionedResultDestReceiver` where it decouples the partitioning from writing it to a file. This allows for easier reuse for other `DestReceiver`'s that would like to route different tuples to different `DestReceiver`'s.

Originally there was a lot of state kept in `PartitionedResultDestReceiver` to be able to lazily create `FileDestReceivers` when the first tuple arrived for that target. This entangled the processing of tuples with the decision of where they should go.

This refactor changes that where it makes the `PartitionedResultDestReceiver` completely agnostic of what kind of Receivers it is writing to. When constructed you pass it a list of `DestReceiver` compatible pointers with the length of `partitionCount`. Internally the `PartitionedResultDestReceiver` keeps track of which `DestReceiver`'s have been started or not, and start them when they first receive a tuple.

Alternatively, if the instantiating code of the `PartitionedResultDestReceiver` wants, the startup can be turned from lazily to eagerly. When the startup is eager (not lazy) all `rStartup` functions on the list of `DestReceiver`'s are called during the startup of the `PartitionedResultDestReceiver` and marked as such.

A downside of this approach is the following. On highly partitioned destinations we now need to allocate a `FileDestReceiver` for every target, _always_. When the data passed into the `PartitionedResultDestReceiver` is highly skewed to a small set of `FileDestReceiver`'s this will waste some memory. Given the small size of a `FileDestReceiver`, and the fact that actual file handles are only created during the processing of the startup of the `FileDestReceiver` I think this memory waste is not a problem. If this would become a problem we could refactor the source list into some kind of generator object which can generate the `DestReceiver`'s on the fly.
2021-11-05 13:31:18 +01:00
Nils Dijk 0e7cf9f0ca
reinstate optimization that got unintentionally broken in 366461ccdb (#5418)
DESCRIPTION: Reinstate optimisation for uniform shard interval ranges

During a refactor introduced in #4132 the following change was made, which made the optimisation in `CalculateUniformHashRangeIndex` unreachable: 
366461ccdb (diff-565a339ed3c78bc5a0d4ffeb4e91032150b1dffbeeff59cd3e65981d20b998c7L319-R319)

This PR reinstates the path to the optimisation!
2021-11-05 13:07:51 +01:00
Önder Kalacı 763176a4d9
Some minor improvements on top of 5314 (#5428)
* Refactor some checks in citus local tables

* all existing citus local tables are auto converted after upgrade

* Update warning messages in CreateCitusLocalTable

* Hide notice msg for auto converting local tables

* Hide hint msg

Co-authored-by: Ahmet Gedemenli <afgedemenli@gmail.com>
2021-11-05 13:59:13 +03:00
Sait Talha Nisanci ab29c25658 Fix missing from entry 2021-11-04 18:54:52 +03:00
Ahmet Gedemenli b30ed46068
Fixes ALTER STATISTICS IF EXISTS bug (#5435)
* Fix ALTER STATISTICS IF EXISTS bug
2021-11-04 16:14:05 +03:00
Halil Ozan Akgul 91b377490b Fix multi_cluster_management fails for metadata syncing 2021-11-04 11:09:21 +03:00
Halil Ozan Akgul c0785d570c Remove EnsureSuperUser from start and stop metadata sync to node 2021-11-01 18:01:49 +03:00
Halil Ozan Akgul c0eb67b24f Skip forceCloseAtTransactionEnd connections only if BEGIN was not sent on them 2021-11-01 17:43:04 +03:00
Jelte Fennema 57a0228c52
Fix string-concatenation warning on Clang 13 (#5425)
Clang 13 complains about a suspicious string concatenation. It thinks we
might have missed a comma. This adds parentheses to make it clear that
concatenation is indeed what we meant.
2021-11-01 13:55:43 +03:00
naisila 796d56a7b1 Rename ddlJob->commandString to ddlJob->metadataSyncCommand 2021-10-29 23:45:43 +03:00
Ahmet Gedemenli 67dca4363d
Dont auto-undistribute user-added citus local tables (#5314)
* Disable auto-undistribute for user-added citus local tables
2021-10-28 12:10:26 +03:00
Philip Dubé cc50682158 Fix typos. Spurred by spotting "connectios" in logs 2021-10-25 13:54:09 +00:00
Jelte Fennema 3bdbfc3edf
Fix duplicate typedef which can cause compile failures (#5406)
ColumnarScanDesc is already defined in columnar_tableam.h. Redefining it
again causes a compiler error on some C compilers.

Useful reference: https://bugzilla.redhat.com/show_bug.cgi?id=767538

Fixes #5404
2021-10-25 12:20:13 +00:00
Onder Kalaci ce4c4540c5 Simplify 2PC decision in the executor
It seems like the decision for 2PC is more complicated than
it should be.

With this change, we make one behavioral change. In essence,
before this commit, when a SELECT task with replication factor > 1
was executed, the executor was triggering 2PC. And, in fact,
the transaction manager (`ConnectionModifiedPlacement()`) was
able to understand not to trigger 2PC when no modification happens.

However, for transaction blocks like:
BEGIN;
-- a command that triggers 2PC
-- A SELECT command on replication > 1
..
COMMIT;

The SELECT used to be qualified as requiring 2PC. And, as a side-effect,
the executor was setting `xactProperties.errorOnAnyFailure = true;`

So, the commands were failing at the time of execution. Now, they fail at
the end of the transaction.
2021-10-23 09:06:28 +02:00
Onder Kalaci 575bb6dde9 Drop support for Inactive Shard placements
Given that we do all operations via 2PC, there is no way
for any placement to be marked as INACTIVE.
2021-10-22 18:03:35 +02:00
Önder Kalacı b3299de81c
Drop support for citus.multi_shard_commit_protocol (#5380)
In the past, we allowed users to manually switch to 1PC
(e.g., one phase commit). However, with this commit, we
don't. All multi-shard modifications are done via 2PC.
2021-10-21 14:01:28 +02:00
Marco Slot dafba6c242 Deprecate master_get_table_metadata UDF 2021-10-21 12:08:05 +02:00
Marco Slot defb97b7f5 Support operator class parameters in indexes 2021-10-20 17:03:59 +02:00
Önder Kalacı 3f726c72e0
When replication factor > 1, all modifications are done via 2PC (#5379)
With Citus 9.0, we introduced `citus.single_shard_commit_protocol` which
defaults to 2PC.

With this commit, we prevent any user from setting it to 1PC and drop support
for `citus.single_shard_commit_protocol`.

Although this might add some overhead for users, 2PC is already the default
behaviour (so the overhead is unlikely to be new), and marking placements as
INVALID is much worse.
2021-10-20 01:39:03 -07:00
Marco Slot 096660d61d Remove master_apply_delete_command 2021-10-18 22:29:37 +02:00
Marco Slot 93e79b9262 Never allow co-located joins of append-distributed tables 2021-10-18 21:11:16 +02:00
Marco Slot b97e5081c7 Disable co-located joins for append-distributed tables 2021-10-18 21:11:16 +02:00
Marco Slot dfad73d918 Disable implicit single re-partition joins for append tables 2021-10-18 21:11:16 +02:00
Marco Slot 2206e64e42 Disable single-repartition joins for append tables 2021-10-18 21:11:16 +02:00
Önder Kalacı 31c8f279ac
Add helper UDFs to inspect object dependencies (#5293)
- citus_get_all_dependencies_for_object: emulate what Citus would qualify
  as a dependency when adding a new node
- citus_get_dependencies_for_object: emulate what Citus would qualify
  as a dependency when creating an object

Example use:
```SQL
-- find all the dependencies of table test
SELECT
	pg_identify_object(t.classid, t.objid, t.objsubid)
FROM
	(SELECT * FROM pg_get_object_address('table', '{test}', '{}')) as addr
JOIN LATERAL
	citus_get_all_dependencies_for_object(addr.classid, addr.objid, addr.objsubid) as t(classid oid, objid oid, objsubid int)
ON TRUE
	ORDER BY 1;
```
2021-10-18 14:46:49 +03:00
Halil Ozan Akgul e3446692f3 Fix the bug by adding comma before the values 2021-10-15 18:42:23 +03:00
Ahmet Gedemenli 35f6fe5f9f
Refactor/Improve PreprocessAlterTableStmtAttachPartition (#5366)
* Refactor/Improve PreprocessAlterTableStmtAttachPartition
2021-10-14 11:39:39 +03:00
Hanefi Onaldi 3e64dc44c8
Fix some typos in comments (#5369) 2021-10-13 13:00:39 +03:00
Teja Mupparti a8348047c5
Pushdown procedures with OUT parameters (#5348) 2021-10-11 23:14:36 -07:00
Onur Tirtir f7f4a93073 Remove get_relation_trigger_oid_compat 2021-10-11 11:53:00 +03:00
Onur Tirtir a1e0511583 Remove get_relation_constraint_oid_compat 2021-10-11 11:53:00 +03:00
Ahmet Gedemenli d19793c174 Add partitioning support for citus local tables
Add/fix tests

Fix creating partitions

Add test for mx - partition creating case

Enable cascading to partitioned tables

Fix mx partition adding test

Fix cascading through fkeys

Style

Disable converting with non-inherited fkeys

Fix detach bug

Early return in case of cascade & Add tests

Style

Fix undistribute_table bug & Fix test outputs

Remove RemovePartitionRelationIds

Test with undistribute_table

Add test for mx+convert+undistribute

Remove redundant usage of CreatePartitionedCitusLocalTable

Add some comments

Introduce bulk functions for generating attach/detach partition commands

Fix: Convert partitioned tables after adding fkey

Change the error message for partitions

Introduce function ErrorIfPartitionTableAddedToMetadata

Polish attach/detach command generation functions

Use time_partitions for testing

Move mx tests to citus_local_tables_mx

Add new partitioned table to cascade test

Add test with time series management UDFs

Fix test output

Fix: Assertion fail on relation access tracking

Style

Refactor creating partitioned citus local tables

Remove CreatePartitionedCitusLocalTable

Style

Error out if converting multi-level table

Revert some old tests

Error out adding partitioned partition

Polish

Polish/address

Fix create table partition of case

Use CascadeOperationForRelationIdList if no cascade needed

Fix create partition bug

Revert / Add new tests to mx

Style

Fix dropping fkey bug

Add test with IF NOT EXISTS

Convert to CLT when doing ATTACH PARTITION

Add comments

Add more tests with time series management

Edit the error message for converting the child

Use OR instead of AND in ErrorIfUnsupportedAlterTableStmt

Edit/improve tests

Disable ddl prop when dropping default column definitions

Disable/enable ddl prop just before/after the command

Add comment

Add sequence test

Add trigger test

Remove NeedCascadeViaForeignKeys

Add one more insert to sequence test

Add comment

Style

Fix test output shard ids

Update comments

Disable creating fkey on partitions

Move partition check to CreateCitusLocalTable

Add comment

Add check for attaching multi-level partition

Add test for pg_constraint

Check pg_dist_partition in tests

Add test inserting on the worker
2021-10-11 10:45:07 +03:00
Halil Ozan Akgul 9c9d4b5eeb Turn MX on by default 2021-10-08 18:17:21 +03:00
Naisila Puka d0390af72d
Add fix_partition_shard_index_names udf to fix currently broken names (#5291)
* Add udf to include shardId in broken partition shard index names

* Address reviews: rename index such that operations can be done on it

* More comprehensive index tests

* Final touches and formatting
2021-10-07 19:34:52 +03:00
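A usage sketch for the new UDF (the partitioned table name is a placeholder):

```SQL
-- rename the indexes on shards of all partitions of 'events'
-- so that their names include the shard id
SELECT fix_partition_shard_index_names('events'::regclass);
```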
Marco Slot 91b647024a Fixes CREATE INDEX deparsing issue 2021-10-06 13:08:16 +02:00
Onur Tirtir 5d8f74bd0b
(Share) Lock buffer page when reading from columnar storage (#5338)
Under high write concurrency, we were sometimes reading the columnar
metapage as all zeros.

In `WriteToBlock()`, if `clear == true`, then it will clear the page before
writing the new one, rather than just adding data to the page. That
means any concurrent connection that is holding only a pin will be
able to see the all-zero state between the `InitPage()` and the
`memcpy_s()`.

Moreover, postgres/storage/buffer/README states that:

> Buffer access rules:
>
> 1. To scan a page for tuples, one must hold a pin and either shared or
> exclusive content lock.  To examine the commit status (XIDs and status bits)
> of a tuple in a shared buffer, one must likewise hold a pin and either shared
> or exclusive lock.

For those reasons, we have to make sure to never keep a pin on the
page without (at least) the shared lock, to avoid having such problems.
2021-10-06 11:57:02 +03:00
Halil Ozan Akgul 43d5853b6d Fixes function names in comments 2021-10-06 09:24:43 +03:00
Hanefi Onaldi a74409f24c
Bump Citus to 11.0devel 2021-10-01 22:21:22 +03:00
Onur Tirtir fe72e8bb48
Discard index deletion requests made to columnarAM (#5331)
A write operation might trigger index deletion if the index already had
dead entries for the key we are about to insert.
There are two ways of index deletion:
  a) simple deletion
  b) bottom-up deletion (>= pg14)

Since columnar_index_fetch_tuple never sets all_dead to true,
columnarAM doesn't ever expect to receive simple deletion requests
(columnar_index_delete_tuples) as we don't mark any index entries
as dead.

However, since columnarAM doesn't delete any dead entries via simple
deletion, postgres might ask for a more comprehensive deletion
(i.e.: bottom-up) at some point when pg >= 14.

So with this commit, we start gracefully ignoring bottom-up deletion
requests made to columnar_index_delete_tuples.

Given that users can anyway "VACUUM FULL" their columnar tables,
we don't see any problem in ignoring deletion requests.
2021-10-01 14:32:47 +03:00
Önder Kalacı c2311b4c0c
Make (columnar.stripe) first_row_number index a unique constraint (#5324)
* Make (columnar.stripe) first_row_number index a unique constraint

Since stripe_first_row_number_idx is required to scan a columnar
table, we need to make sure that it is created before doing anything
with columnar tables during pg upgrades.

However, a plain btree index is not a dependency of a table, so
pg_upgrade cannot guarantee that stripe_first_row_number_idx gets
created when creating columnar.stripe, unless we make it a unique
"constraint".

To do that, drop stripe_first_row_number_idx and create a unique
constraint with the same name to keep the code change at minimum.

* Add more pg upgrade tests for columnar

* Fix a logic error in upgrade_columnar_after test

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2021-09-30 10:51:56 +03:00
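A sketch of the catalog change (the column list is an assumption based on the index name):

```SQL
-- replace the plain btree index with a unique constraint of the same name,
-- so that pg_upgrade treats it as part of columnar.stripe
DROP INDEX columnar.stripe_first_row_number_idx;
ALTER TABLE columnar.stripe
    ADD CONSTRAINT stripe_first_row_number_idx
    UNIQUE (storage_id, first_row_number);
```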
Jeff Davis d49d321eac Columnar: only call BuildStripeMetadata() with heap tuple.
BuildStripeMetadata() calls HeapTupleHeaderGetXmin(), which must only
be called on a proper heap tuple with MVCC information. Make sure the
caller passes the heap tuple, and not a datum tuple.

Fixes #5318.
2021-09-23 15:51:01 -07:00
tejeswarm a1604a87e6 Partition shards to be colocated with the parent shards 2021-09-22 14:47:04 -07:00
Onur Tirtir 77a2dd68da
Revoke read access to columnar.chunk from unprivileged user (#5313)
Since this could expose chunk min/max values to unprivileged users.
2021-09-22 16:23:02 +03:00
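A plausible form of the change (the exact migration statement is an assumption):

```SQL
-- chunk min/max values should not be readable by unprivileged users
REVOKE SELECT ON columnar.chunk FROM PUBLIC;
```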
Onur Tirtir 68335285b4 Columnar CustomScan: Pushdown BoolExpr's as we do before 2021-09-22 10:51:34 +03:00
Onur Tirtir e6ed764f63
Check if xact id is in progress before checking if aborted (#5312) 2021-09-21 21:20:31 +03:00
Onur Tirtir f8b1ff7214
Add CheckCitusVersion() calls to columnarAM (#5308)
Considering all code-paths that we might interact with a columnar table,
add `CheckCitusVersion` calls to tableAM callbacks:
- initializing table scan (`columnar_beginscan` & `columnar_index_fetch_begin`)
- setting a new filenode for a relation (storage initialization or a table rewrite)
- truncating the storage
- inserting tuple (single and multi)

Also add `CheckCitusVersion` call to:
- drop hook (`ColumnarTableDropHook`)
- `alter_columnar_table_set` & `alter_columnar_table_reset` UDFs
2021-09-20 17:26:41 +03:00
Onder Kalaci cea937f52f Add missing version checks for citus_internal_XXX functions 2021-09-20 09:54:35 +02:00
SaitTalhaNisanci 35ff513dfe
Give proper error while distributing a temp table (#5269) 2021-09-17 14:34:40 +03:00
jeff-davis 6e8b19984e
Columnar: separate plan and runtime quals. (#5261)
* Columnar: separate plain and exec quals.

Make a clear separation between plain quals, which contain constants
or extern params; and exec quals, which contain exec params and can't
be evaluated until a rescan.

Fixes #5258.

* more vanilla tests

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2021-09-13 10:54:53 -07:00
jeff-davis d48ceee238
Columnar: add method ReparameterizeCustomPathByChild. (#5275)
When performing a partition-wise join, the planner will adjust paths
parameterized by the parent rel to instead parameterize by the child
rel directly. When this reparameterization happens, we also need to
adjust the join quals to reference the child rather than the parent.

Fixes #5257.
2021-09-13 10:33:48 -07:00
Onur Tirtir ea61efb63a
Not flush writes until need to read them when doing index-scan on columnar (#5247)
Do not flush pending writes if the given tid belongs to a "flushed" or
"aborted" stripe write, or to an "in-progress" stripe write of
another backend.

That way, we would reduce the cases where we flush single-tuple
stripes during index scan.

To do that, we follow below steps for index look-up's:

- Do not flush any pending writes and do stripe metadata look-up for
  given tid.
  If tuple with tid is found, then no need to do another look-up
  since we already found the tuple without needing to flush pending
  writes.

- If tuple is not found without flushing pending writes, then we have two
  scenarios:

  -  If given tid belongs to a pending write of my backend, then do stripe
     metadata look-up for given tid. But this time first **flush any pending
     writes**.
     
  -  Otherwise, just return false from `index_fetch_tuple` since flushing
      pending writes wouldn't help.
2021-09-13 18:41:20 +02:00
Onur Tirtir 4ee0fb2758
Make sure to skip aborted writes when reading the first tuple (#5274)
With 5825c44d5f, we made the changes to
skip aborted writes when scanning a columnar table.

However, looks like we forgot to handle such cases for the very first
call made to columnar_getnextslot. That means, that commit only
considered the intermediate stripe read operations.

However, functions called by columnar_getnextslot to find first stripe
to read (ColumnarBeginRead & ColumnarRescan) were not caring about
those aborted writes.

To fix that, we teach AdvanceStripeRead to find the very first stripe
to read, and then start using it where were blindly calling
FindNextStripeByRowNumber.
2021-09-13 11:50:53 +03:00
Burak Velioglu ceec5d72e3
Swallow errors while aborting remote transactions 2021-09-10 11:06:16 +03:00
Naisila Puka a69abe3be0
Fixes bug about int and smallint sequences on MX (#5254)
* Introduce worker_nextval udf for int&smallint column defaults

* Fix current tests and add new ones for worker_nextval
2021-09-09 23:41:07 +03:00
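A sketch of the affected scenario (table and column names are placeholders):

```SQL
-- smallint/int sequence defaults on a distributed table could overflow
-- when inserts ran on MX worker nodes; on workers such column defaults
-- now go through worker_nextval() so the generated values fit the type
CREATE TABLE items (key bigint, id smallserial);
SELECT create_distributed_table('items', 'key');
```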
Onur Tirtir be74518965
Improve memset calls made to reset bool arrays (#5262) 2021-09-09 17:56:03 +03:00
Halil Ozan Akgul 19af1cef2f Errors for CTEs with search clause
Relevant PG commit:
3696a600e2292d43c00949ddf0352e4ebb487e5b
2021-09-09 13:48:24 +03:00
Marco Slot f84164a000 Avoid switch to superuser in worker_merge_files_into_table 2021-09-09 11:00:29 +02:00
Marco Slot 4faa49775b Perform copy command as regular user in worker_append_table_to_shard 2021-09-09 11:00:29 +02:00
Hanefi Onaldi 9ae912a8c8
Prevent C-style comments in all directories (#5250) 2021-09-09 11:54:58 +03:00
SaitTalhaNisanci e3e0a028c7
return early in case we want to skip outer vars (#5259) 2021-09-09 10:53:36 +03:00
Onur Tirtir 32e3e51ed4
Fix a compiler warning that we get on debian (#5260) 2021-09-08 20:03:59 +03:00
Onur Tirtir be3914ae28 Prevent generating index-only "Path"s for columnar tables
Previously, even when the `EXPLAIN` output said that we would do an
index-only scan, it was never the case, since columnar tables
don't have the visibility fork that postgres is looking for.

For this reason, visibility check done in
`IndexOnlyNext->VM_ALL_VISIBLE`
code-path was always returning false and postgres was reading
the tuple from the columnar relation itself.
2021-09-08 14:14:24 +03:00
Onur Tirtir cc49e63222
Not read heaptuple after closing pg_rewrite (#5255) 2021-09-08 13:03:17 +02:00
Onur Tirtir 3340f17c4e
Prevent planner from choosing parallel scan for columnar tables (#5245)
Previously, for regular table scans, we were setting `RelOptInfo->partial_pathlist`
to `NIL` via `set_rel_pathlist_hook` to discard scan `Path`s that need to use any
parallel workers; this was working nicely.

However, when building indexes, this hook doesn't get called so we were not
able to prevent spawning parallel workers when building an index. For this
reason, 9b4dc2f804 added basic
implementation for `columnar_parallelscan_*` callbacks but also made some
changes to skip using those workers when building the index.

However, now that we are doing stripe reservation in two stages, we call 
`heap_inplace_update` at some point to complete stripe reservation.
However, postgres throws an error if we call `heap_inplace_update` during
a parallel operation, even if we don't actually make use of those workers.

For this reason, with this pr, we make sure to not generate scan `Path`s that
need to use any parallel workers by using `get_relation_info_hook`.

This is indeed useful to prevent spawning parallel workers during index builds.
2021-09-08 13:53:43 +03:00
Onur Tirtir 5825c44d5f
Handle aborted writes properly when scanning a columnar table (#5244)
If it is certain that we will not use any `parallel_worker`s for a columnar table,
then stripe entries inserted by aborted transactions become visible to
`SnapshotAny` and that causes `REINDEX` to fail by throwing a duplicate key
error.

To fix that:
* consider three states for a stripe write operation:
   "flushed", "aborted", or "in-progress",
* make sure to have a clear separation between them, and
* act according to those three states when reading from a columnar table
2021-09-08 13:26:11 +03:00
Jelte Fennema bb5c494104 Enable binary encoding by default on PG14
Since PG14 we can now use binary encoding for arrays and composite types
that contain user defined types. This was fixed in this commit in
Postgres: 670c0a1d47

This change starts using that knowledge, by not necessarily falling back
to text encoding anymore for those types.

While doing this and testing a bit more I found various cases where
binary encoding would fail that our checks didn't cover. This fixes
those cases and adds tests for those. It also fixes EXPLAIN ANALYZE
never using binary encoding, which was a leftover of a workaround that
was not necessary anymore.

Finally, it changes the default for both `citus.enable_binary_protocol`
and `citus.binary_worker_copy_format` to `true` for PG14 and up. In our
cloud offering `binary_worker_copy_format` was already true by default.
`enable_binary_protocol` had a bug with MX and user defined types;
this bug was fixed by the above-mentioned fixes.
2021-09-06 10:27:29 +02:00
Burak Velioglu c3895f35cd
Add helper UDFs for easy time partition management
- get_missing_time_partition_ranges: Gets the ranges of missing partitions for the given table, interval and range unless any existing partition conflicts with calculated missing ranges.

- create_time_partitions: Creates partitions by getting range values from get_missing_time_partition_ranges.

- drop_old_time_partitions: Drops partitions of the table older than given threshold.
2021-09-03 23:03:13 +03:00
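A usage sketch for the UDFs above (the table is a placeholder and must be time-partitioned; parameter names follow the documented signatures and are assumptions here):

```SQL
-- create any missing daily partitions covering the next week
SELECT create_time_partitions(
    table_name         := 'events',
    partition_interval := '1 day',
    end_at             := now() + interval '7 days'
);

-- drop partitions whose range lies entirely before the threshold
CALL drop_old_time_partitions('events', now() - interval '90 days');
```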
Onur Tirtir 2b71263e40
Align columnar path costing functions (#5239)
* Rename RecostColumnarPaths to CostColumnarPaths

* Rename RecostColumnarIndexPath to CostColumnarIndexPath

* Reorder args of CostColumnarScan to align with other two costing functions

* Not adjust index scan start-up cost

* Rename ColumnarIndexScanAddTotalCost to ColumnarIndexScanAdditionalCost

* Reflect that index scan will at least read one stripe in totalCost calculation

* Organize declarations in columnar_customscan.c
2021-09-03 19:37:42 +03:00