Commit Graph

2688 Commits (a9f8a600071b4e3d89c58782b923fb8839f473e6)

Author SHA1 Message Date
Naisila Puka dbb88f6f8b
Fix insert query with CTEs/sublinks/subqueries etc (#4700)
* Fix insert query with CTE

* Add more cases with deferred pruning but false fast path

* Add more tests

* Better readability with if statements
2021-02-23 18:00:47 +03:00
SaitTalhaNisanci dcf54eaf2a Use PROCESS_UTILITY_QUERY in utility calls
When we use PROCESS_UTILITY_TOPLEVEL, it causes problems when
combined with other extensions such as pg_audit. With this commit we use
PROCESS_UTILITY_QUERY in the codebase to fix those problems.
2021-02-19 13:55:59 +03:00
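As a sketch of what the entry above entails, an internally issued utility statement can be dispatched with the PROCESS_UTILITY_QUERY context roughly as below (assuming the PostgreSQL 13 ProcessUtility signature; the wrapper name is illustrative, not the actual Citus function):

```c
#include "postgres.h"
#include "tcop/dest.h"
#include "tcop/utility.h"

/*
 * Illustrative wrapper for running an internally generated utility
 * statement. Passing PROCESS_UTILITY_QUERY (rather than
 * PROCESS_UTILITY_TOPLEVEL) tells utility hooks such as pg_audit that
 * this is not a brand-new top-level client command.
 */
static void
RunInternalUtilityCommand(PlannedStmt *pstmt, const char *queryString)
{
	QueryCompletion queryCompletion;

	ProcessUtility(pstmt, queryString,
				   PROCESS_UTILITY_QUERY, /* not PROCESS_UTILITY_TOPLEVEL */
				   NULL,                  /* params */
				   NULL,                  /* queryEnv */
				   None_Receiver,         /* discard any result rows */
				   &queryCompletion);
}
```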
Sait Talha Nisanci bbf6132226 Revert "wip (#4730)"
This reverts commit 62e6d54a4e.
2021-02-19 13:55:59 +03:00
SaitTalhaNisanci 62e6d54a4e
wip (#4730) 2021-02-19 13:42:19 +03:00
Marco Slot 972a8bc0b7 Rewrite time_partitions join clause to avoid smallint[] operator 2021-02-18 12:01:18 +01:00
Ahmet Gedemenli 1f345f65b4 Support dropping local table indexes along with a distributed index 2021-02-18 13:30:12 +03:00
Onur Tirtir 676d9a9726 Bump Citus to 10.1devel 2021-02-17 11:54:33 +03:00
Onur Tirtir d61fd6e478
Decide changing sequence dependencies on MX nodes according to resulting relation (#4713)
When executing alter_table / undistribute_table udf's, we should not try
to change sequence dependencies on MX workers if the new table wouldn't
require syncing metadata.

Previously, we were checking that for the input table. But in some cases, the
fact that the input table requires syncing metadata doesn't imply the same
for the resulting table (e.g., when undistributing a Citus table).

Moreover, doing that gave an unexpected error when undistributing
a Citus table, so this commit actually fixes that.
2021-02-15 19:20:26 +03:00
SaitTalhaNisanci bcbd24f8de
Only consider pseudo constants for shortcuts (#4712)
It seems that we need to consider only pseudo constants while doing some
shortcuts in planning. For example, there could be a false clause that
still contributes to the result, in which case it is not a pseudo
constant.
2021-02-15 18:39:37 +03:00
SaitTalhaNisanci 0f1ce7a913
Not skip relation in conversion if it doesn't have RelationRestriction (#4685)
We would exclude tables without relationRestriction from conversion
candidates in local-distributed table joins. This could leave a leftover
local table which should have been converted to a subquery.

Ideally I would expect that in each call to CreateDistributedPlan we
would pass a new plan id, but that seems like a bigger change.
2021-02-12 12:33:55 +03:00
Onder Kalaci f297c96ec5 Add regression tests for COPY into colocated intermediate results
To add the tests without too much data, make the copy switchover
configurable.
2021-02-11 15:41:06 +01:00
Onder Kalaci 5d5a357487 Do not re-use connections for intermediate results
/*
 * Colocated intermediate results are just files and are not required to
 * use the same connections as their co-located shards. So, we are free
 * to use any connection we can get.
 *
 * Also, the current connection re-use logic does not know how to handle
 * intermediate results, as the intermediate results always truncate the
 * existing files. That's why we use one connection per intermediate
 * result.
 */
2021-02-11 15:41:06 +01:00
Ahmet Gedemenli c8e83d1f26 Fix dropping fkey when distributing table 2021-02-11 15:48:35 +03:00
SaitTalhaNisanci 847b79078f
Not consider subplans in restriction list (#4679)
* Not consider subplans in restriction list

* Not consider sublink, alternative subplan in restrictions
2021-02-11 15:04:07 +03:00
Hadi Moshayedi c3dcd6b9f8 Columnar: don't include stripe reservation locks in lock graph. 2021-02-10 10:20:20 -08:00
Onur Tirtir 9f619a85d6
Fix EXPLAIN ANALYZE exec when query returns no cols (#4672)
We do not include the dummy column if the original task didn't return
any columns.
Otherwise, the number of columns that the original task returned wouldn't
match the number of columns returned by worker_save_query_explain_analyze.
2021-02-10 17:59:47 +03:00
Onder Kalaci c804c9aa21 Allow local execution for intermediate results in COPY
When COPY is used for copying into co-located files, it was
not allowed to use local execution. The primary reason was
Citus treating co-located intermediate results as co-located
shards, and COPY into the distributed table was done via
"format result". Local execution of such COPY commands
was not implemented.

With this change, we implement support for local execution with
"format result". To do that, we use the buffer for every file
on shardState->copyOutState, similar to how local copy on
shards is implemented. In fact, the logic is similar to
local copy on shards, but instead of writing to the shards,
Citus writes the results to a file.

The logic relies on LOCAL_COPY_FLUSH_THRESHOLD, and flushes
only when the size exceeds the threshold. But, unlike local
copy on shards, in this case we write the headers and footers
just once.
2021-02-09 15:00:06 +01:00
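A minimal sketch of the flush behavior described above, assuming an illustrative threshold value and file handle (the helper and constant below are not the exact Citus symbols):

```c
#include "postgres.h"
#include "lib/stringinfo.h"
#include <stdio.h>

/* illustrative value; Citus defines its own LOCAL_COPY_FLUSH_THRESHOLD */
#define LOCAL_COPY_FLUSH_THRESHOLD (512 * 1024)

/*
 * Buffer rows destined for an intermediate-result file and flush only
 * when the buffer exceeds the threshold. Unlike local copy on shards,
 * the COPY headers and footers are written once per file, not per flush.
 */
static void
AppendRowAndMaybeFlush(StringInfo copyBuffer, const char *rowData,
					   int rowLength, FILE *resultFile)
{
	appendBinaryStringInfo(copyBuffer, rowData, rowLength);

	if (copyBuffer->len >= LOCAL_COPY_FLUSH_THRESHOLD)
	{
		fwrite(copyBuffer->data, 1, copyBuffer->len, resultFile);
		resetStringInfo(copyBuffer);
	}
}
```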
Hanefi Onaldi 353b080474
Fix Semmle errors (#4636)
Co-authored-by: Halil Ozan Akgül <hozanakgul@gmail.com>
2021-02-08 18:37:44 +03:00
SaitTalhaNisanci e96da4886f
Sort results in citus_shards and give raw size (#4649)
* Sort results in citus_shards and give raw size

Sort results so that it is consistent and also similar to citus_tables.

Use raw size in the output so that doing operations on the size is
easier.

* Change column ordering
2021-02-08 15:29:42 +03:00
Ahmet Gedemenli 5dd2a3da03 Convert RelabelTypes into CollateExprs in get_rule_expr function 2021-02-05 12:06:46 +03:00
Ahmet Gedemenli 503171d2f2
Merge branch 'master' into rename-master-parameter-for-dist-stat-activity 2021-02-04 15:37:13 +03:00
Ahmet Gedemenli 2443b20b2c Rename master to distributed for worker stat activity 2021-02-04 12:20:06 +03:00
Onder Kalaci fc9a23792c COPY uses adaptive connection management on local node
With #4338, the executor is smart enough to fail over to the
local node if there is not enough space in max_connections
for remote connections.

For COPY, the logic is different. With #4034, we made COPY
work with the adaptive connection management slightly
differently. The cause of the difference is that COPY doesn't
know which placements are going to be accessed, hence it
requires getting connections up-front.

Similarly, COPY decides to use local execution up-front.

With this commit, we change the logic for COPY on local nodes:

Try to reserve a connection to the local host. This follows
the same logic (e.g., citus.local_shared_pool_size) as the
executor because COPY also relies on TryToIncrementSharedConnectionCounter().
If reservation to the local node fails, switch to local execution.
Apart from this, if local execution is disabled, we follow the
exact same logic as multi-node Citus. It means that if we are
out of connections, we'd give an error.
2021-02-04 09:45:07 +01:00
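The reservation-then-fallback decision could look roughly like the sketch below; only TryToIncrementSharedConnectionCounter() is named in the commit, so the other identifiers (including the exact signature) are assumptions:

```c
/* assumed declarations; the real Citus signatures may differ */
extern bool EnableLocalExecution;   /* citus.enable_local_execution */
extern bool TryToIncrementSharedConnectionCounter(const char *hostname,
												  int port);

/*
 * Illustrative: COPY first tries to reserve a local connection under
 * citus.local_shared_pool_size; if that fails, it falls back to local
 * execution (unless local execution is disabled, in which case the
 * regular multi-node logic applies and may error out).
 */
static bool
CopyShouldFallBackToLocalExecution(const char *localHostName, int localPort)
{
	if (!EnableLocalExecution)
	{
		return false;   /* behave exactly like multi-node Citus */
	}

	if (TryToIncrementSharedConnectionCounter(localHostName, localPort))
	{
		return false;   /* reservation succeeded, use a connection */
	}

	return true;        /* reservation failed, switch to local execution */
}
```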
Ahmet Gedemenli 34840ddc5c Rename master to citus for dist stat activity cols 2021-02-04 11:12:23 +03:00
Sait Talha Nisanci ff82e85ea2 Replace workerNodeCount -> nodeCount 2021-02-03 20:02:03 +03:00
Sait Talha Nisanci eb5be579e3 Set previous cell inside a for loop 2021-02-03 20:02:03 +03:00
Sait Talha Nisanci 9ba3f70420 Remove unused method 2021-02-03 20:02:03 +03:00
Sait Talha Nisanci 24e60b44a1 Consider coordinator in intermediate result optimization
It seems that we were not considering the case where the coordinator was
added to the cluster as a worker in the optimization of intermediate
results.

This could lead to errors when the coordinator was added as a worker.
2021-02-03 20:02:03 +03:00
Onur Tirtir c0f2817b70
Disallow using alter_table udfs with tables having any identity cols (#4635)
pg_get_tableschemadef_string doesn't know how to deparse identity
columns so we cannot reflect those columns when creating table
from scratch. For this reason, we don't allow using alter_table udfs
with tables having any identity cols.
2021-02-03 19:33:54 +03:00
Onur Tirtir 3a403090fd
Disallow adding local table with identity column to metadata (#4633)
pg_get_tableschemadef_string doesn't know how to deparse identity
columns so we cannot reflect those columns when creating shell
relation.
For this reason, we don't allow adding local tables having identity
columns to metadata.
2021-02-03 19:05:17 +03:00
Onur Tirtir 5efb742f8a
Skip copying GENERATED ALWAYS AS STORED cols in ReplaceTable (#4616)
Postgres doesn't allow inserting into columns having GENERATED ALWAYS
AS (...) STORED expressions.
For this reason, when executing undistribute_table or an alter_* udf,
we should skip copying such columns.
This is not bad since Postgres would already generate such columns.
2021-02-03 17:55:16 +03:00
Onur Tirtir 53b1888cac Rename DropAndMoveDefaultSequenceOwnerships 2021-02-02 18:17:42 +03:00
Onur Tirtir 93c3f30024 Rename ExtractColumnsOwningSequences 2021-02-02 18:17:42 +03:00
Onur Tirtir 912d829757 Skip GENERATED AS ALWAYS STORED cols when processing cols owning sequences
When finding columns owning sequences, we shouldn't rely on atthasdef
since it might be true when the column has a GENERATED ALWAYS AS (...)
STORED expression.
2021-02-02 18:17:42 +03:00
Onur Tirtir c8a48c6eee
Not try to sync metadata for local tables (#4625) 2021-02-02 15:12:12 +03:00
Onur Tirtir c5d4e7081b
Fix invalid read issue in deprecated create_citus_local_table udf (#4611)
Since create_citus_local_table doesn't specify the cascadeViaForeignKeys
option, we can't directly call citus_add_local_table_to_metadata
from create_citus_local_table.
Instead, implement an internal method and call it from the deprecated udf
too.
2021-02-02 12:53:27 +03:00
Brian Bergeron 1253eeb9ff
Don't propagate ALTER ROLE SET when scoped to a different database (#4471)
Co-authored-by: brberger <brberger@microsoft.com>
2021-02-01 15:49:26 +03:00
Hanefi Önaldı cab17afce9 Introduce UDFs for fixing partitioned table constraint names 2021-01-29 17:32:20 +03:00
Hanefi Önaldı 92cf49b7e9 Limit shardId in partitioned table constraint names to only CHECK 2021-01-29 17:29:53 +03:00
SaitTalhaNisanci 738825cc38
Fix partition column index issue (#4591)
* Fix partition column index issue

We send column names to the worker_hash/range_partition_table methods, and
in these methods we check the column name index from the tuple descriptor.
Then this index is used to decide the bucket to which the current row will
be sent for repartitioning.

This becomes a problem when there are duplicate column names in the
tupleDescriptor. Then we can choose the wrong index, so the
partitioned data will be put on the wrong workers, and the result could
miss some data because workers might contain different ranges of data.

An example:
The TupleDescriptor contains "trip_id", "car_id", "car_id" for one table.
It contains only "car_id" for the other table. Assuming that the
tables will be partitioned by car_id, it is not certain what should be
used for deciding the bucket number for the first table. Assuming value
2 goes to bucket 2 and value 3 goes to bucket 3, it is not certain which
bucket the row "1 2 3" (trip_id, car_id, car_id) will go to.

As a solution we send the index of the partition column in the targetList
instead of the column name.

The old API is kept so that worker upgrades still work
(though it will have the same bug)

* Use the same method so that backporting is easier
2021-01-29 14:40:40 +03:00
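A hedged sketch of the fix: with a position sent from the coordinator, the worker can fetch the partition value by attribute number instead of looking the column up by its (possibly duplicated) name. The function below is illustrative, not the actual Citus code:

```c
#include "postgres.h"
#include "executor/tuptable.h"

/*
 * Illustrative: resolve the partition column by its target-list
 * position. Name-based lookup is ambiguous when the tuple descriptor
 * contains duplicate column names (e.g. trip_id, car_id, car_id) and
 * can route rows to the wrong bucket.
 */
static Datum
GetPartitionColumnValue(TupleTableSlot *slot, int partitionColumnIndex,
						bool *isNull)
{
	/* attribute numbers are 1-based, the target-list index is 0-based */
	return slot_getattr(slot, partitionColumnIndex + 1, isNull);
}
```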
Onder Kalaci 04fcd73eb6 When reaching the shared pool size, COPY sets the placement access
It looks like we forgot to set the placement accesses, and
this could lead to self-deadlocks on complex transaction blocks.
2021-01-28 12:45:57 +01:00
Onder Kalaci 36bdeef1bb When reaching the executor pool size, COPY sets the placement access
It looks like we forgot to set the placement accesses, and
this could lead to self-deadlocks on complex transaction blocks.
2021-01-28 12:45:57 +01:00
Onur Tirtir bb5962ee79
Early error out when creating citus local from a temp table (#4592) 2021-01-28 14:18:06 +03:00
Halil Ozan Akgul 913aa91449 Adds error message to AlterTableSetAccessMethod for PG versions below 12 2021-01-28 11:32:02 +03:00
Onur Tirtir b20615cbbe
Advise dropping foreign key in addition to create_reference_table hint (#4590) 2021-01-27 17:59:06 +03:00
Onur Tirtir 8151c4b443 Merge remote-tracking branch 'origin/master' into rename-create_citus_local_table 2021-01-27 17:08:58 +03:00
Ahmet Gedemenli b2c1bbddd4
Merge branch 'master' into fix-dropping-mat-views-when-alter-table 2021-01-27 16:33:10 +03:00
Ahmet Gedemenli 35043c56f1 Fix dropping materialized views while doing alter table 2021-01-27 16:32:09 +03:00
Onur Tirtir 93a83d5472 Rename create_citus_local_table.c to citus_add_local_table_to_metadata.c 2021-01-27 15:52:37 +03:00
Onur Tirtir 1a4482a37c Get rid of the sql dir for new udf 2021-01-27 15:52:37 +03:00
Onur Tirtir 2f30be823e Rename create_citus_local_table to citus_add_local_table_to_metadata
For simplicity in the downgrade test in multi_extension, we didn't
actually remove the create_citus_local_table udf.
2021-01-27 15:52:36 +03:00
Onur Tirtir c06fcc26e5 Hide notice messages when implicitly undistributing citus local tables 2021-01-27 13:42:06 +03:00
Onur Tirtir 458a81f93d Add suppressNoticeMessages to TableConversionState 2021-01-27 12:53:58 +03:00
Onur Tirtir cacb76d2c6
Not mention citus local tables in error messages (#4579) 2021-01-27 12:36:53 +03:00
Naisila Puka 94bc2703bc
Make undistribute_table() and citus_create_local_table() work with columnar (#4563)
* Make undistribute_table() and citus_create_local_table() work with columnar

* Rename and use LocallyExecuteUtilityTask for UDF check

* Remove 'local' references in ExecuteUtilityCommand
2021-01-27 01:17:20 +03:00
Halil Ozan Akgul bafa692fc1 Adds error messages with names of indexes that will be dropped 2021-01-26 18:18:26 +03:00
Ahmet Gedemenli e99f052904 Fix index renaming when creating citus local tables 2021-01-26 15:52:48 +03:00
Ahmet Gedemenli 14bf9d85d6
Merge branch 'master' into fix-maintenance-daemon-crash 2021-01-26 12:52:28 +03:00
Onur Tirtir 6a28f62239 Remove stale comment 2021-01-25 18:55:57 +03:00
Onur Tirtir 9e0150e9e2 Drop notify_constraint_dropped beforehand when downgrading 2021-01-25 18:55:57 +03:00
Nils Dijk d127516dc8
Mitigate segfault in connection statemachine (#4551)
As described in the comment, we have observed crashes in production
due to a segfault caused by the dereference of a NULL pointer in our
connection statemachine.

As a mitigation that prevents system crashes, we provide an error with
a small explanation of the issue. Unfortunately the case is not
reliably reproducible yet, hence the inability to add tests.

DESCRIPTION: Prevent segfaults when SAVEPOINT handling cannot recover from connection failures
2021-01-25 15:55:04 +01:00
Onur Tirtir b5ea033a0b Convert postgres tables to citus local when creating reference table having fkeys 2021-01-25 11:02:50 +03:00
Onur Tirtir 8e02375aa3 Some refactor as a preparation 2021-01-25 11:01:33 +03:00
Onur Tirtir 253c19062a
Rename IsCitusInitiatedBackend to IsCitusInitiatedRemoteBackend (#4562) 2021-01-23 01:07:43 +03:00
Jeff Davis 53f7b019d5 Columnar: clean up old references to cstore. 2021-01-22 11:08:36 -08:00
Onur Tirtir 941c8fbf32
Automatically undistribute citus local tables when no more fkeys with reference tables (#4538) 2021-01-22 18:15:41 +03:00
Ahmet Gedemenli 5022fc8301 Remove failing assertions 2021-01-22 17:09:24 +03:00
Marco Slot 03328e9679 Rename citus_tables column names to be query-friendly 2021-01-21 18:58:30 +01:00
Ahmet Gedemenli 63fab1b7d9
Merge branch 'master' into remove-deprecated-gucs-udfs 2021-01-22 13:29:07 +03:00
SaitTalhaNisanci 3d69ab5576
Choose the smallest colocation id among all matches (#4559)
Currently we choose an arbitrary colocation id from all the matches for
a colocation configuration. This could mean that two distributed tables
with the same schema could go into different colocation groups. This fix
makes sure that the same match always goes to the same colocation group.
2021-01-22 13:28:43 +03:00
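A minimal sketch of the deterministic choice (names and the sentinel value are illustrative, not the actual Citus symbols):

```c
#include "postgres.h"
#include "nodes/pg_list.h"

#define INVALID_COLOCATION_ID 0   /* illustrative sentinel */

/*
 * Among all colocation ids whose configuration matches, pick the
 * smallest so that tables with the same configuration always land in
 * the same colocation group, regardless of match order.
 */
static uint32
ChooseColocationId(List *matchingColocationIdList)
{
	uint32 chosenColocationId = INVALID_COLOCATION_ID;
	ListCell *cell = NULL;

	foreach(cell, matchingColocationIdList)
	{
		uint32 colocationId = (uint32) lfirst_int(cell);

		if (chosenColocationId == INVALID_COLOCATION_ID ||
			colocationId < chosenColocationId)
		{
			chosenColocationId = colocationId;
		}
	}

	return chosenColocationId;
}
```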
Ahmet Gedemenli 3ac30ef9d8
Merge branch 'master' into remove-deprecated-gucs-udfs 2021-01-22 13:06:13 +03:00
Ahmet Gedemenli 76354ff563
Merge branch 'master' into remove-deprecated-gucs-udfs 2021-01-22 12:47:06 +03:00
Ahmet Gedemenli 887b67953b
Merge branch 'master' into fix-bug-create-citus-local-table-with-stats 2021-01-22 12:46:47 +03:00
Hadi Moshayedi 222fb4d589 Don't use 'cstore' in function names 2021-01-21 18:32:21 -08:00
Önder Kalacı 9b39b25390
Prevent citus local table creation via remote execution (#4540)
/*
 * Creating Citus local tables relies on functions that access
 * shards locally (e.g., ExecuteAndLogDDLCommand()). As long as
 * we don't teach those functions to access shards remotely, we
 * cannot relax this check.
 */
2021-01-21 11:26:45 +03:00
Ahmet Gedemenli 89a6fe83f7 Replace with update_distributed_table_colocation for tests 2021-01-20 17:30:06 +03:00
Ahmet Gedemenli ceb6b503c0 Remove unused UDF mark_tables_colocated 2021-01-20 17:29:23 +03:00
Ahmet Gedemenli 2fa060a32d Fix bug creating citus local table with stats 2021-01-20 17:17:13 +03:00
Onder Kalaci 8129ce472f Refactor Utility Hook
We want to be able to find the "top-level" DDL commands
(not internal/cascading ones). To achieve that, we do
some refactoring.
2021-01-20 15:54:00 +03:00
Onder Kalaci 8df58926c5 Rename CitusProcessUtility -> ProcessUtilityForNode 2021-01-20 15:54:00 +03:00
Halil Ozan Akgul 434f5af030 Adds same access method check 2021-01-20 15:18:03 +03:00
Hadi Moshayedi bc01c795a2 Reland #4419 2021-01-19 07:48:47 -08:00
Halil Ozan Akgul 27c2bd1599 Moves creation of ALTER INDEX STATISTICS commands next to index commands 2021-01-18 16:55:53 +03:00
Naisila Puka 7124a7715d
Skip 'already exists' in CREATE TABLE IF NOT EXISTS PARTITION OF (#4507)
* Just skip 'already exists' in CT IF NOT EXISTS PARTITION OF

* Generalize to tables that are not already distributed partitions
2021-01-18 15:56:02 +03:00
Onur Tirtir f1ecbc3a53
Fix segfault when adding/dropping fkey from ref to citus local via remote exec (#4528) 2021-01-17 20:43:33 +03:00
Onur Tirtir 5a3e8a6e24
Skip postgres tables for UndistributeTable(cascadeViaFKeys) (#4530)
The reason behind skipping postgres tables is that we support
foreign keys between postgres tables and reference tables
(without converting postgres tables to citus local tables)
when enable_local_reference_table_foreign_keys is false or
when the coordinator is not added to metadata.
2021-01-17 20:32:30 +03:00
Ahmet Gedemenli 107097ee28 Fix assert failure when creating statistics 2021-01-15 19:36:58 +03:00
Onur Tirtir 7dddfa2d0b
Not invalidate fkey cache if citus not installed (#4521) 2021-01-15 18:31:43 +03:00
Onder Kalaci c35e22d75d Skip validation for foreign key creation commands
For certain purposes, we drop and recreate the foreign
keys. As we acquire exclusive locks on the tables between
drop and re-create, we can safely skip the validation phase of
the foreign keys. The reason is purely performance, as
foreign key validation could take a long time.
2021-01-15 18:04:52 +03:00
Onder Kalaci ae0b92233d Rename function 2021-01-15 18:04:52 +03:00
Onder Kalaci 30d0a65f40 Adds citus.enable_local_reference_table_foreign_keys
When enabled, foreign keys between local tables and reference
tables are supported by converting the local table to a citus local
table.

When the coordinator is not in the metadata, the logic is disabled
as foreign keys are not allowed in this configuration.
2021-01-15 18:04:52 +03:00
Onder Kalaci ed58a404d5 Release lock on CoordinatorAddedAsWorkerNode()
Because master_add_node (or others) might acquire ExclusiveLock,
and the sessions they initiate may call CoordinatorAddedAsWorkerNode().

With this we prevent potential deadlocks.
2021-01-15 18:04:42 +03:00
Onur Tirtir e718d24868 Add support for CREATE TABLE commands defining foreign keys 2021-01-15 17:46:06 +03:00
Ahmet Gedemenli 9a100bcdb9 Remove unused GUCs
Remove deprecated variables

Remove GUC citus.sslmode

Remove GUC citus.expire_cached_shards

Remove GUC citus.task_tracker_delay

Remove GUC citus.max_assign_task_batch_size

Remove GUC citus.max_tracked_tasks_per_node

Remove GUC citus.max_running_tasks_per_node

Remove GUC citus.large_table_shard_count

Remove GUC citus.max_task_string_size

Remove GUC citus.binary_master_copy_format
2021-01-15 13:30:45 +03:00
Onur Tirtir 787ed643dd
Undistribute table when cascade_via_foreign_keys=true even if rel has no fkeys (#4516)
If a relation is not involved in any foreign key relationships,
the foreign key graph would not return any relations for the given
relationId, as expected.

But even in that case, we should still undistribute the table
itself.
2021-01-15 12:45:44 +03:00
Halil Ozan Akgul 9407965817 Moves struct to the header 2021-01-15 11:50:11 +03:00
Onur Tirtir 36b418982f Add support for ALTER TABLE commands defining foreign keys 2021-01-14 17:12:00 +03:00
Onur Tirtir 05931b8fe2 Pass ProcessUtilityContext to .preprocess 2021-01-14 17:12:00 +03:00
Onur Tirtir ac7bccd847 Skip citus tables for CreateCitusLocalTable(cascadeViaFKeys) 2021-01-14 17:12:00 +03:00
Marco Slot b840e97cd6 Add an alter_old_partitions_set_access_method UDF 2021-01-14 10:44:14 +01:00
Ahmet Gedemenli 9b56ad48cb Recreate invalidation functions for Citus10
Fix multi_create_table

Add schema name to altered functions

Recreate invalidation functions when downgrading
2021-01-13 23:18:07 +03:00
Marco Slot de6aaaa648 Expand support for subqueries in target list through recursive planning 2021-01-13 17:26:09 +01:00
Onur Tirtir ccbc3de535 Enable reference/distributed table creation from citus local tables 2021-01-13 17:14:26 +03:00
Onur Tirtir 7180ef5df1 Increment command counter in UndistributeTable 2021-01-13 16:54:35 +03:00
Onur Tirtir 00da1eed20 Some refactor as a preparation 2021-01-13 16:50:09 +03:00
Halil Ozan Akgul 2be14cce2e Adds alter_distributed_table and alter_table_set_access_method UDFs 2021-01-13 16:02:39 +03:00
Onur Tirtir 1299895e71
Give hint to use ref table for unsupported fkeys between citus local & ref (#4501) 2021-01-13 15:33:46 +03:00
SaitTalhaNisanci 724d56f949
Add citus shard helper view (#4361)
With the citus shard helper view, we can easily see:
- where each shard is: which node, which port
- what kind of table it belongs to
- its size

With such a view, we can see shards that have a size bigger than some
value, which could be useful. Also, debugging can be easier in production
with this view.

Fetch shards in one go per node

The previous implementation was slow because it would do a lot of round
trips, one per shard to be exact. Hence it is improved so that we fetch
all the shard_name, shard_size pairs per node in one go.

Construct shard_names, sizes query on coordinator
2021-01-13 13:58:47 +03:00
Ahmet Gedemenli 436c9d9d79
Remove the word 'master' from Citus UDFs (#4472)
* Replace master_add_node with citus_add_node

* Replace master_activate_node with citus_activate_node

* Replace master_add_inactive_node with citus_add_inactive_node

* Use master udfs in old scripts

* Replace master_add_secondary_node with citus_add_secondary_node

* Replace master_disable_node with citus_disable_node

* Replace master_drain_node with citus_drain_node

* Replace master_remove_node with citus_remove_node

* Replace master_set_node_property with citus_set_node_property

* Replace master_unmark_object_distributed with citus_unmark_object_distributed

* Replace master_update_node with citus_update_node

* Replace master_update_shard_statistics with citus_update_shard_statistics

* Replace master_update_table_statistics with citus_update_table_statistics

* Rename master_conninfo_cache_invalidate to citus_conninfo_cache_invalidate

Rename master_dist_local_group_cache_invalidate to citus_dist_local_group_cache_invalidate

* Replace master_copy_shard_placement with citus_copy_shard_placement

* Replace master_move_shard_placement with citus_move_shard_placement

* Rename master_dist_node_cache_invalidate to citus_dist_node_cache_invalidate

* Rename master_dist_object_cache_invalidate to citus_dist_object_cache_invalidate

* Rename master_dist_partition_cache_invalidate to citus_dist_partition_cache_invalidate

* Rename master_dist_placement_cache_invalidate to citus_dist_placement_cache_invalidate

* Rename master_dist_shard_cache_invalidate to citus_dist_shard_cache_invalidate

* Drop master_modify_multiple_shards

* Rename master_drop_all_shards to citus_drop_all_shards

* Drop master_create_distributed_table

* Drop master_create_worker_shards

* Revert old function definitions

* Add missing revoke statement for citus_disable_node
2021-01-13 12:10:43 +03:00
Onur Tirtir 2ef5879bcc
Fix error thrown for foreign keys from citus local to dist tables (#4490) 2021-01-13 10:15:12 +03:00
Onur Tirtir dd55ab394e
Disallow cascade_via_foreign_keys if any partition rel has non-inherited fkeys (#4487) 2021-01-11 21:50:09 +03:00
Naisila Puka 7b05777682
Add ALTER TABLE .. SET LOGGED/UNLOGGED support (#4486) 2021-01-11 20:39:06 +03:00
Marco Slot d900a7336e Automatically add placeholder record for coordinator 2021-01-08 15:09:53 +01:00
Marco Slot 597533b1ff Add citus_set_coordinator_host 2021-01-08 13:36:26 +01:00
Marco Slot e7f13978b5 Add a view for simple (time) partitions and their access methods 2021-01-08 11:28:15 +01:00
Onur Tirtir 5289785da4
Add cascade_via_foreign_keys option to create_citus_local_table (#4462) 2021-01-08 15:13:26 +03:00
Marco Slot 011283122b Add the shard rebalancer implementation 2021-01-07 16:51:55 +01:00
Onur Tirtir f3801143fb Add cascade option to undistribute_table 2021-01-07 15:41:49 +03:00
Onur Tirtir 2e3e680ba9 Add infra to cascade citus table functions 2021-01-07 15:41:48 +03:00
Marco Slot 47c1b19174 Revert "Do metadata sync in a separate background worker."
This reverts commit 4df723cf9b.
2021-01-07 10:30:04 +01:00
Marco Slot d9f175532b Revert "Trigger metadata sync at transaction commit"
This reverts commit a2c73bef27.
2021-01-07 10:30:00 +01:00
Marco Slot 5de3337b2f Support local execution for INSERT..SELECT with re-partitioning 2021-01-06 16:15:53 +01:00
Onder Kalaci 2fe158961b Remove "WarnAboutLeakedPreparedTransaction" function
We used to need WarnAboutLeakedPreparedTransaction()
as we didn't have auto 2PC recovery. But we have long had
2PC recovery, via https://github.com/citusdata/citus/pull/1574

So, we don't need it anymore.
2021-01-06 15:48:58 +03:00
Naisila Puka bcfc0aa4e9
Rethrow original concurrent index creation failure message (#4469)
* Rethrow original concurrent index creation failure message

* Alter test outputs for concurrent index creation

* Detect duplicate table failure in concurrent index creation

* Add test for conc. index creation w/out duplicates
2021-01-06 15:27:13 +03:00
Onur Tirtir 0d7aea3a22
Move pre undistribute_table checks into C API (#4456) 2021-01-06 10:49:35 +03:00
Ahmet Gedemenli 1f36ff7c17
Prevent deadlock for long named partitioned index creation on single node (#4461)
* Prevent deadlock for long named partitioned index creation on single node

* Create IsSingleNodeCluster function

* Use both local and sequential execution
2021-01-05 13:39:13 +03:00
Ahmet Gedemenli f27649754b
Add alter index set statistics support (#4455)
* Add alter index set statistics support

* Use attNum instead of attName
2021-01-05 13:23:11 +03:00
Onur Tirtir e91e745dbc
Implement ConstraintWithNameIsOfType (#4451) 2020-12-29 11:53:06 +03:00
Onur Tirtir feda8bdd37 Now that we use tuples after closing pg_depend, don't release lock 2020-12-25 18:03:28 +03:00
Onur Tirtir 04a4167a8a Implement GetPgDependTuplesForDependingObjects 2020-12-25 18:03:28 +03:00
Halil Ozan Akgül a8626d1944
Fixes the table used in the error message (#4449) 2020-12-25 16:48:50 +03:00
Naisila Puka 04aeb6938b
Merge branch 'master' into issue4237 2020-12-25 12:36:40 +03:00
Hadi Moshayedi a2c73bef27 Trigger metadata sync at transaction commit 2020-12-24 08:28:38 -08:00
Hadi Moshayedi 4df723cf9b Do metadata sync in a separate background worker. 2020-12-24 08:25:55 -08:00
Naisila Puka 0bb2c991f9
Merge branch 'master' into issue4237 2020-12-24 18:05:27 +03:00
Ahmet Gedemenli 5af585269a Add separate pg13 test for stats targets 2020-12-24 18:01:25 +03:00
naisila 59a81491e8 Add test for master_create_empty_shard on coordinator 2020-12-24 17:59:40 +03:00
Ahmet Gedemenli d4bc17f6f0 Propagate statistics with altered targets 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli 48ca1637a4 Propagate alter stats owner 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli f7c70f9a63 Propagate alter stats target 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli 5a1607b6c0 Propagate alter stats schema 2020-12-24 17:10:12 +03:00
Ahmet Gedemenli bdce4a7e67 Propagate rename statistics 2020-12-24 17:10:12 +03:00
Onur Tirtir 5ed9197041
Implement infra to get foreign key connected relations (#4439)
On top of our foreign key graph, implement the infrastructure to get the
list of relations that are connected to the input relation via the
foreign key graph.
We need this to support cascading create_citus_local_table &
undistribute_table operations.

Also add regression tests to see what our foreign key graph is able to
capture currently.
2020-12-24 16:42:40 +03:00
Onur Tirtir 0db21bbe14
Remove fkey graph visited flags & rework GetConnectedListHelper (#4446)
With this commit, we remove the visited flags from the
ForeignConstraintRelationshipNode struct, since keeping local state in a
global object is both dangerous and meaningless.

Also, to improve readability, this commit converts needless recursion into
an iterative DFS to avoid passing a local hash map as another parameter to
the GetConnectedListHelper function.
2020-12-24 12:38:48 +03:00
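The recursion-to-iteration rewrite could look roughly like this stack-based DFS (a sketch; the node type and the two helper functions are hypothetical, and the visited set stays local to the traversal rather than living in the global graph):

```c
#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/hsearch.h"

/* hypothetical helper: checks and marks the node as visited */
static bool NodeAlreadyVisited(HTAB *visitedNodeSet, void *node);
/* hypothetical helper: returns the node's foreign-key neighbors */
static List * GetNeighborList(void *node);

/*
 * Iterative DFS over the foreign-key graph: an explicit stack replaces
 * the call stack of the former recursive helper, and no per-node flag
 * is stored in the shared graph object.
 */
static List *
GetConnectedRelationList(void *startNode, HTAB *visitedNodeSet)
{
	List *connectedList = NIL;
	List *nodeStack = list_make1(startNode);

	while (nodeStack != NIL)
	{
		void *node = linitial(nodeStack);
		nodeStack = list_delete_first(nodeStack);

		if (NodeAlreadyVisited(visitedNodeSet, node))
		{
			continue;
		}

		connectedList = lappend(connectedList, node);
		nodeStack = list_concat(nodeStack, GetNeighborList(node));
	}

	return connectedList;
}
```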
Onur Tirtir 57e7defa3c
Support CREATE INDEX commands without index name on citus tables (#4273) 2020-12-23 23:15:39 +03:00
Marco Slot e3dcc278e0 Remove upgrade_to_reference_table UDF 2020-12-23 00:40:14 +01:00
Halil Ozan Akgül 9fd3f62cb6
Refactor foreign key functions to use table types (#4424)
* Reuses extractReferencing/Referenced variables

* Refactors GetForeignKeyOids function to check table types

* Converts flags to inclusive
2020-12-23 17:05:09 +03:00
Onur Tirtir d1b3eaf767
Refactor ColumnAppearsInForeignKeyToReferenceTable (#4441) 2020-12-23 11:44:02 +03:00
naisila 5234caecca Prevent empty placement creation in the coordinator 2020-12-22 19:39:05 +03:00
Ahmet Gedemenli 874fa1fc09 Propagate Drop Statistics 2020-12-22 18:34:46 +03:00
Onur Tirtir 3f60b08b11
Refactor foreign_key_relationship.c (#4438) 2020-12-22 18:12:02 +03:00
Marco Slot 321cc784c7 Collapse Citus 7.* scripts into Citus 8.0-1 2020-12-21 22:55:51 +01:00
Marco Slot f2056e553f
Expose partition column of subqueries in optimizer (#4355)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-12-18 20:32:52 +01:00
SaitTalhaNisanci 145112f3a0
Fix attribute numbers in subquery conversions (#4426)
Attribute numbers in a subquery RTE and in a relation RTE mean different
things. In a relation, the attribute number points to the column number
in the table definition, including the dropped columns; in a
subquery, it means the index in the target list. When we convert a
relation RTE to a subquery RTE we should either correct all the relevant
attribute numbers or just add a dummy column for the dropped
columns. We choose the latter in this commit because it is practically
too error-prone to update all the vars in a query.

Another thing this commit fixes is that in case a join restriction
clause list contains a false clause, we should just return a false
clause instead of the whole list, because the whole list will contain
restrictions from other RTEs as well and this breaks the query, which
can be seen from the output changes; now it is much simpler.

Also, instead of adding single tests for dropped columns, we chose to
run all the mixed queries with tables with dropped columns; this
already revealed some bugs, which are fixed in this commit.
2020-12-18 20:25:41 +03:00
Ahmet Gedemenli 770d3da1ca Add dependencies for stat schemas 2020-12-18 17:04:13 +03:00
Ahmet Gedemenli 6c0465566a Propagate create statistics 2020-12-17 20:38:36 +03:00
Marco Slot 100e5d3196 Address review feedback 2020-12-15 15:23:38 +01:00
Marco Slot 707a6554b1 Support co-located/recurring correlated subqueries 2020-12-15 14:17:16 +01:00
Sait Talha Nisanci 181a7e1d36 Skip dropped columns 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 7951273f74 Refactor WrapRteRelationIntoSubquery 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 0e53aa5d3b Add more tests 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci d5b0f02a64 Decide what group to convert, then convert them all in one go 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci c4d3927956 Not allow local table updates with remote citus local tables 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci f5dd5379b2 Add more tests 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci f7c1509fed Not check if the query is routable for converting
It seems that there are only very few cases where that is useful, and
for now we prefer not having that check. This means that we might
perform some unnecessary checks, but that should be rare and not
performance critical.
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 1d82972ff4 Increase the performance with a trick
Instead of sending NULLs over the network, we now convert the subqueries
into the form:

SELECT t.a, NULL, NULL FROM (SELECT a FROM table)t;

And we recursively plan the inner part so that we don't send the NULLs
over the network. We still need the NULLs in the outer subquery because we
currently don't have an easy way of updating all the necessary places in
the query.

Add some documentation for how the conversion is done
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 3aed6c3ad0 Rename containsOnlyLocalTable as isLocalTableModification
Update error message in Modify View
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 13c43d5744 Improve table conversion logic in dist-local joins 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 5618f3a3fc Use BaseRestrictInfo for finding equality columns
Baseinfo also has pushed down filters etc, so it makes more sense to use
BaseRestrictInfo to determine what columns have constant equality
filters.

Also RteIdentity is used for removing conversion candidates instead of
rteIndex.
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 28c5b6a425 Convert some hard coded errors to deferred errors in router planner 2020-12-15 18:18:36 +03:00
Sait Talha Nisanci 69992d58f9 Add broken local-dist table modifications tests
It seems that most of the updates were broken; we weren't aware of it
because there wasn't any data in the tables. They are broken mostly
because local tables do not have a shard id and some code paths should
be updated with that information; currently, when there is an invalid
shard id, it is assumed to be pruned.

Consider local tables in router planner

In case there is a local table, the shard id will not be valid, and
some checks rely on the shard id; we should skip these in case of
local tables, which is handled with a dummy placement.

Add citus local table dist table join tests

add local-dist table mixed joins tests
2020-12-15 18:18:36 +03:00
Sait Talha Nisanci a34504d7bf Move recursive planning related function to recursive_planning 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 2a44029aaf Simplify ContainsTableToBeConvertedToSubquery
AllDataLocallyAccessible and ContainsLocalTableSubqueryJoin are removed.
We can possibly remove ModifiesLocalTableWithRemoteCitusLocalTable as
well. This removal has a side effect, though: now, when all the data
is locally available, we could still wrap a relation into a subquery; I
guess that should be resolved in the router planner itself.

Add more tests
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 26d9f0b457 Use auto mode in tests and fix debug message 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 3bd53a24a3 Support update on postgres table from citus local table 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 4b6611460a Support foreign table joins as well 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 7e9204eba9 Update vars in quals while wrapping RTE to subquery
When we wrap an RTE into a subquery, we update the varnos of the
variables to 1; however, we should also update the varnos of the vars
in quals.

Also, some other small code quality improvements are done.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 0689f2ac1a Recursively plan distributed tables only if all have unique filters
The previous algorithm was not consistent and could convert different
RTEs based on the table order in the query. Now we convert local tables
if there is a distributed table which doesn't have a unique index. So if
there are 4 tables, local1, local2, dist1, dist2_with_pkey, then we will
convert local1 and local2 in `auto` mode. Converting a distributed table
is not that logical, because as long as there is a distributed table
without a unique index, we will need to convert the local tables anyway.
So converting the distributed table with a pkey is redundant.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci a008fc611c Support materialized view joins as well 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci ff4f3b2f3c Use PlannerRestrictionContext instead of RecursivePlannerContext 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 3fe3c55023 Use ShouldConvertLocalTableJoinsToSubqueries
Remove FillLocalAndDistributedRTECandidates and use
ShouldConvertLocalTableJoinsToSubqueries, which simplifies things as we
rely on a single function to decide whether we should continue
converting RTE to subquery.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci eebcd995b3 Add some more tests 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 5693cabc41 Not convert an already routable plannable query
We should not recursively plan an already router plannable query. An
example of this is (SELECT * FROM local JOIN (SELECT * FROM dist) d1
USING(a));

So we let the recursive planner do all of its work and at the end we
convert the final query to handle unsupported joins. While doing each
conversion, we check if it is router plannable; if so, we stop.

Only consider range table entries that are in the jointree

If a range table is not in the jointree then there is no point in
considering it, because we are trying to convert range table entries to
subqueries for the join use case.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 2ff65f3630 Enable partitioned distributed tables in local-dist table joins 2020-12-15 18:17:10 +03:00
Sait Talha Nisanci 44953579cf Enable citus-local distributed table joins
Check equality in quals

We want to recursively plan distributed tables only if they have an
equality filter on a unique column. So '>' and '<' operators will not
trigger recursive planning of distributed tables in local-distributed
table joins.

Recursively plan distributed table only if the filter is constant

If the filter is not a constant, then the join might return multiple rows
and there is a chance that the distributed table will return a huge
amount of data. Hence, if the filter is not constant we choose to
recursively plan the local table.
2020-12-15 18:17:10 +03:00
Sait Talha Nisanci f3d55448b3 Choose distributed table if it has a unique index in filter
When doing local-distributed table joins we convert one of them to a
subquery. The current policy is that we convert the distributed table to
a subquery if it has a unique index on a filtered column (a primary key
also has a unique index).
2020-12-15 18:17:10 +03:00
Onder Kalaci 3f4952cc2b Pushdown projections when relations are recursively planned
This is important to limit the data transfer size.
2020-12-15 18:17:10 +03:00
Onder Kalaci 594e001f3b Add filter pushdown regression tests
Also handle WHERE false
2020-12-15 18:17:10 +03:00
Onder Kalaci 82a4830c7d Adjust the existing regression tests 2020-12-15 18:17:10 +03:00
Onder Kalaci 7a4d6b2984 Handle modifications as well 2020-12-15 18:17:10 +03:00
Onder Kalaci 8f8390ed6e Recursively plan local table joins
The logical planner cannot handle joins between local and distributed
tables. Instead, we can recursively plan one side of the join and let
the logical planner handle the rest.

Our algorithm is a little smart: it tries not to recursively plan
distributed tables, favoring local tables instead.
2020-12-15 18:17:10 +03:00
Onder Kalaci 7cc25c9125 Add ability to fetch the restrictions per relation
With this commit, we add the ability to fetch the restrictions
per relation. We simply rely on the restrictions that Postgres
keeps per relation.
2020-12-15 18:17:10 +03:00
Onur Tirtir 0eb5701658
Not consider single shard hash dist. tables as replicated (#4413) 2020-12-15 14:33:01 +03:00
Marco Slot f2538a456f Support co-located/recurring sublinks in the target list 2020-12-13 15:45:24 +01:00
Marco Slot 8e8adcd92a Harden citus_tables against node failure 2020-12-13 15:10:40 +01:00
Jeff Davis 5b3c32eb38 fixup tests 2020-12-07 13:18:22 -08:00
Ahmet Gedemenli 7577821920 Fix transaction name length calculation 2020-12-07 12:34:15 +03:00
Ahmet Gedemenli 936775e8e3 Delete transactions when removing node
With this commit, we delete entries in pg_dist_transaction
for the primary nodes that are removed by `master_remove_node`.
2020-12-07 11:35:20 +03:00
Marco Slot c9b658daea Add a public.citus_tables view 2020-12-03 17:31:40 +01:00
Marco Slot 4098d33acb Allow citus size functions on replicated tables 2020-12-03 16:33:24 +01:00
SaitTalhaNisanci f164575524
Add a utility to process each table index (#4382)
A utility function is added so that each caller can implement a handler
for each index on a given table. This means that the caller doesn't need
to worry about how to access each index; the only thing it needs
to do is implement a function to which each index on the table is
passed iteratively.
2020-12-03 16:33:13 +03:00
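The callback pattern described above might look like the following sketch (names are illustrative, not the actual Citus symbols; the PostgreSQL 12+ table_open API is assumed):

```c
#include "postgres.h"
#include "access/table.h"
#include "storage/lockdefs.h"
#include "utils/relcache.h"

/* handler the caller implements; invoked once per index on the table */
typedef void (*IndexProcessor)(Oid indexOid, void *context);

/*
 * Iterate over every index on the given table and pass each index oid
 * to the caller-supplied handler, so callers never deal with index
 * access details themselves.
 */
static void
ForEachIndexOnTable(Oid relationId, IndexProcessor processor, void *context)
{
	Relation relation = table_open(relationId, AccessShareLock);
	List *indexOidList = RelationGetIndexList(relation);
	ListCell *indexCell = NULL;

	foreach(indexCell, indexOidList)
	{
		processor(lfirst_oid(indexCell), context);
	}

	table_close(relation, NoLock);
}
```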
Onder Kalaci c546ec5e78 Local node connection management
When Citus needs to parallelize queries on the local node (e.g., the node
executing the distributed query and the shards are the same), we need to
be mindful about the connection management. The reason is that the client
backends that are running distributed queries are competing with the client
backends that Citus initiates to parallelize the queries in order to get
a slot on the max_connections.

In that regard, we implemented a "failover" mechanism where, if the distributed
queries cannot get a connection, the execution fails over the tasks to local
execution.

The failover logic is as follows:

- Ask the connection manager if it is OK to get a connection
	- If yes, we are good.
	- If no, we fail the workerPool and the failure triggers
	  the failover of the tasks to the local execution queue

The decision of getting a connection is as follows:

/*
 * For local nodes, solely relying on citus.max_shared_pool_size or
 * max_connections might not be sufficient. The former gives us
 * a preview of the future (e.g., we let the new connections to establish,
 * but they are not established yet). The latter gives us the close to
 * precise view of the past (e.g., the active number of client backends).
 *
 * Overall, we want to limit both of the metrics. The former limit typically
 * kicks in under regular loads, where the load of the database increases at
 * a reasonable pace. The latter limit typically kicks in when the database
 * is issued lots of concurrent sessions at the same time, such as benchmarks.
 */
2020-12-03 14:16:13 +03:00
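Condensing the quoted comment into code, the two limits might combine as below (a sketch; GetMaxSharedPoolSize() is a hypothetical stand-in for however the citus.max_shared_pool_size GUC is read, while MaxConnections is PostgreSQL's own global):

```c
#include "postgres.h"
#include "miscadmin.h"   /* MaxConnections */

static int GetMaxSharedPoolSize(void);   /* hypothetical GUC accessor */

/*
 * The "future" view: Citus-tracked connections we have already let
 * start establishing. The "past" view: client backends that currently
 * occupy slots in max_connections. Both must stay under their limits.
 */
static bool
CanGetLocalConnection(int citusTrackedConnections, int activeClientBackends)
{
	if (citusTrackedConnections >= GetMaxSharedPoolSize())
	{
		return false;
	}

	if (activeClientBackends >= MaxConnections)
	{
		return false;
	}

	return true;
}
```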
Ahmet Gedemenli 5242dcfe99 Add tests for propagating alter schema rename 2020-12-02 15:18:26 +03:00
Ahmet Gedemenli 514c6a76ac Propagate alter schema rename 2020-12-02 15:18:26 +03:00
Nils Dijk 6f9c040f76
DESCRIPTION: Propagate columnar table settings for distributed tables
When distributing a columnar table, as well as changing options on a distributed columnar table, this patch will forward the settings from the coordinator to the workers.

For propagating options changes on an already distributed table this change is pretty straightforward. Before applying the change in options locally we will create a `DDLJob` that contains a call to `alter_columnar_table_set(...)` for every shard placement with all settings of the current table. This goes for setting an option as well as resetting one, which will reset the values to the defaults configured on the coordinator. This has the effect that the coordinator is authoritative on the settings and makes sure the shards have the same settings set as the table on the coordinator.

When a columnar table is distributed it is using the `TableDDLCommand` infrastructure to create a new kind of `TableDDLCommand`. This new type, called a `TableDDLCommandFunction`, contains a context and two function pointers to execute. One function returns the command as applied on the table, the second function will return the sql command to apply to a shard with a given shard id. The schema name is ignored as it will use the fully qualified name of the shard in the same schema as the base table.
2020-12-02 13:02:42 +01:00
Onder Kalaci f7e1aa3f22 Multi-row INSERTs use local execution when placements are local
Multi-row execution already uses sequential execution. When shards
are local, using local execution is profitable as it avoids
an extra connection establishment to the local node.
2020-12-01 21:37:59 +03:00
Onur Tirtir 03bcccdee0
Fix hostname length check in StartNodeUserDatabaseConnection (#4363)
Copying the string before the hostname length check makes the check useless
2020-11-30 20:00:35 +03:00
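The ordering bug lends itself to a one-function illustration (a sketch with an assumed length limit; Citus uses its own constant):

```c
#include "postgres.h"

#define MAX_HOSTNAME_LENGTH 255   /* illustrative; not the Citus constant */

/*
 * Validate the caller-supplied hostname *before* copying it into the
 * fixed-size connection key. Checking the copy is useless, because
 * strlcpy has already truncated it to fit the destination buffer.
 */
static void
SetConnectionKeyHostname(char *keyHostname, const char *hostname)
{
	if (strlen(hostname) > MAX_HOSTNAME_LENGTH)
	{
		ereport(ERROR, (errmsg("hostname \"%s\" is too long", hostname)));
	}

	strlcpy(keyHostname, hostname, MAX_HOSTNAME_LENGTH + 1);
}
```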
Onur Tirtir 7f3d1182ed
Handle invalid connection hash entries (#4362)
If MemoryContextAlloc errors out (e.g., during an OOM), ConnectionHashEntry->connections
stays NULL.

With this commit, we add an isValid flag to ConnectionHashEntry that is set to true
right after we allocate & initialize the ConnectionHashEntry->connections list properly, and we
check it before accessing ConnectionHashEntry->connections.
2020-11-30 19:44:03 +03:00
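A minimal sketch of the isValid protocol (the entry struct is simplified; the field names follow the commit message but the rest is illustrative):

```c
#include "postgres.h"
#include "lib/ilist.h"
#include "utils/memutils.h"

typedef struct ConnectionHashEntry   /* simplified for illustration */
{
	bool isValid;
	dlist_head *connections;
} ConnectionHashEntry;

/*
 * Mark the entry valid only after its connection list is allocated and
 * initialized. If MemoryContextAlloc errors out (e.g. under OOM), the
 * entry stays invalid, so readers checking isValid never dereference a
 * NULL connections pointer.
 */
static void
InitConnectionHashEntry(ConnectionHashEntry *entry, MemoryContext context)
{
	entry->isValid = false;

	entry->connections = (dlist_head *)
		MemoryContextAlloc(context, sizeof(dlist_head));
	dlist_init(entry->connections);

	entry->isValid = true;
}
```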
SaitTalhaNisanci af02ac6cf5
Refactor MultiRouterPlannableQuery (#4350)
The name of the function differs from the implementation, because
the function is designed to only consider SELECT queries. This also
changes the assert into an error.
2020-11-27 18:44:38 +03:00
Nils Dijk 326e6afa53
refactor table ddl events scoped for shards (#4342)
Refactor internals on how Citus creates the SQL commands it sends to recreate shards.

Before, Citus collected solely ddl commands as `char *`'s to recreate a table. If they were used to create a shard they were wrapped with `worker_apply_shard_ddl_command` and sent to the workers. On the workers, the UDF wrapping the ddl command would rewrite the parsetree to replace table names with their shard name equivalents.

This worked well, but poses an issue when adding columnar. Due to limitations in Postgres on creating custom options on table access methods, we need to fall back on a UDF to set columnar-specific options. Now, to recreate the table, we can no longer rely on having solely DDL statements.

A prototype was made to run this UDF wrapped in `worker_apply_shard_ddl_command`. This became pretty messy, hard to understand and subsequently hard to maintain.

This PR proposes a refactor of the internal representation of table ddl commands into a `TableDDLCommand` structure. The current implementation only supports a `char *` as its contents. Based on the use of the DDL statement (eg. creating the table -mx- or creating a shard) one of two different functions can be called to get the statement to send to the worker:
 - `GetTableDDLCommand(TableDDLCommand *command)`: This function returns the ddl command to create the table. In this implementation it will just return the `char *`. This has the same functionality as getting the old list and not wrapping it.
 - `GetShardedTableDDLCommand(TableDDLCommand *command, uint64 shardId, char *schemaName)`: This function returns the ddl command wrapped in `worker_apply_shard_ddl_command` with the `shardId` as an argument. Due to backwards compatibility it also accepts a `schemaName`. The exact purpose is not directly clear. Ideally new implementations would work with fully qualified statements and ignore the `schemaName`.

A future implementation could accept two function pointers and a `void *` context for the two pointers to work on. This gives greater flexibility in controlling what commands get sent in which situations. Also, in the future, we could implement the intermediate step of creating the `parsetree` data structure of statements based on the contents in the catalog with a corresponding deparser. For sharded queries a mutator could be run over the parsetree to rewrite the table names to the names with the shard identifier. This will completely omit the requirement for `worker_apply_shard_ddl_command`.
2020-11-26 13:31:59 +01:00
SaitTalhaNisanci 83020f444e
Initialize fast planner restriction context (#4349)
We initialize the fast planner restriction context so that code paths
that rely on it being non-NULL will operate without a problem.
2020-11-26 13:45:27 +03:00
Onder Kalaci 629ecc3dee Add the infrastructure to count the number of client backends
Considering the adaptive connection management
improvements that we plan to roll out soon, it is
very helpful to know the number of active client
backends.

We are doing this addition to simplify the adaptive connection
management for single node Citus. In single node Citus, both the
client backends and Citus parallel queries would compete to get
slots on Postgres' `max_connections` on the same Citus database.

With adaptive connection management, we have the counters for
Citus parallel queries. That helps us to adaptively decide
on the remote executions pool size (e.g., throttle connections
if necessary).

However, we do not have any counters for the total number of
client backends on the database. For single node Citus, we
should consider all the client backends, not only the remote
connections that Citus does.

Of course Postgres internally knows how many client
backends are active. However, to get that number Postgres
iterates over all the backends. For example, see [pg_stat_get_db_numbackends](8e90ec5580/src/backend/utils/adt/pgstatfuncs.c (L1240))
where Postgres iterates over all the backends.

For our purposes, we need this information on every connection
establishment. That's why we cannot afford to do this kind of
iteration.
2020-11-25 19:19:24 +01:00
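A sketch of such an O(1) counter kept in shared memory (illustrative names; the shared-memory setup itself is omitted):

```c
#include "postgres.h"
#include "port/atomics.h"

typedef struct BackendStatsShmem
{
	pg_atomic_uint32 clientBackendCount;
} BackendStatsShmem;

static BackendStatsShmem *backendStats = NULL;   /* set at shmem init */

/* bump on client backend start, decrement on exit */
static void
ClientBackendStarted(void)
{
	pg_atomic_fetch_add_u32(&backendStats->clientBackendCount, 1);
}

static void
ClientBackendExited(void)
{
	pg_atomic_fetch_sub_u32(&backendStats->clientBackendCount, 1);
}

/*
 * Read the counter in O(1) on every connection establishment, instead
 * of iterating over all backends the way pg_stat_get_db_numbackends()
 * effectively does.
 */
static uint32
GetClientBackendCount(void)
{
	return pg_atomic_read_u32(&backendStats->clientBackendCount);
}
```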
SaitTalhaNisanci 180195b445
Remove unused parameter from VarConstOpExprClause (#4348) 2020-11-25 21:00:22 +03:00
Ahmet Gedemenli a64dc8a72b Fixes a bug preventing INSERT SELECT .. ON CONFLICT with a constraint name on local shards
Separate search relation shard function

Add tests
2020-11-25 15:10:46 +03:00
Onur Tirtir 46be63d76b
Refactor PreprocessIndexStmt (#4272) 2020-11-25 12:19:37 +03:00
Onder Kalaci 7accbff3f6 Do not cache all the distributed table metadata during CitusTableTypeIdList()
The CitusTableTypeIdList() function iterates over all the entries of pg_dist_partition
and loads all the metadata into the cache. This can be quite memory intensive,
especially when there are lots of distributed tables.

When partitioned tables are used, it is common to have many distributed tables
given that each partition also becomes a distributed table.

CitusTableTypeIdList() is used on every CREATE TABLE .. PARTITION OF.. command
as well. It means that, anytime a partition is created, Citus loads all the
metadata to the cache. Note that Citus typically only loads the accessed table's
metadata to the cache.
2020-11-24 17:44:06 +01:00
Önder Kalacı c760cd3470
Move local execution after remote execution (#4301)
* Move local execution after the remote execution

Before this commit, when both local and remote tasks
exist, the executor was starting the execution with
local execution. There are no strict requirements on
this.

Especially considering the adaptive connection management
improvements that we plan to roll out soon, moving the local
execution after the remote execution makes more sense.

The adaptive connection management for single node Citus
would look roughly as follows:

   - Try to connect back to the coordinator for running
     parallel queries.
        - If it succeeds, go on and execute tasks in parallel
        - If it fails, fall back to local execution

So, we'll use local execution as a fallback mechanism. And
moving it after the remote execution allows us to implement
such further scenarios.
2020-11-24 13:43:38 +01:00
Önder Kalacı 532b457554
Solidify the slow-start algorithm (#4318)
The adaptive executor emulates the TCP's slow start algorithm.
Whenever the executor needs new connections, it doubles the number
of connections established in the previous iteration.

This approach is powerful. When the remote queries are very short
(like an index lookup with < 1ms), even a single connection is sufficient
most of the time. When the remote queries are long, the executor
can quickly establish the necessary number of connections.

One missing piece in our implementation seems to be that the executor
keeps doubling the number of connections even if the previous
connection attempts have not been finalized. Instead, we should
wait until all the attempts are finalized. This is how TCP's
slow start works. Plus, it decreases the unnecessary pressure
on the remote nodes.
2020-11-23 19:20:13 +01:00
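The corrected growth rule could be sketched as follows (field and function names are illustrative, not Citus' actual WorkerPool):

```c
/* simplified stand-in for the executor's per-worker pool state */
typedef struct PoolState
{
	int establishedConnectionCount;   /* connections fully established */
	int pendingConnectionCount;       /* attempts not yet finalized */
} PoolState;

/*
 * TCP-like slow start: grow only once every attempt from the previous
 * round has finalized; otherwise hold steady. Doubling is expressed as
 * "open as many new connections as are already established".
 */
static int
NewConnectionCountForIteration(PoolState *pool)
{
	if (pool->pendingConnectionCount > 0)
	{
		return 0;   /* previous round still in flight: do not grow */
	}

	if (pool->establishedConnectionCount == 0)
	{
		return 1;   /* slow start begins with a single connection */
	}

	return pool->establishedConnectionCount;   /* double the pool */
}
```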
Onder Kalaci c433c66f2b Do not execute subplans multiple times with cursors
Before this commit, we let AdaptiveExecutorPreExecutorRun()
take effect multiple times, on every FETCH on a cursor.
That does not affect the correctness of the query results,
but adds significant overhead.
2020-11-20 10:43:56 +01:00
Önder Kalacı b0ddbbd33a
Enable parallel query on EXPLAIN ANALYZE (#4325)
It seems that we forgot to pass the relevant
flag to enable Postgres' parallel query
capabilities on the shards when the user does
EXPLAIN ANALYZE on a distributed table.
2020-11-20 09:54:04 +01:00
SaitTalhaNisanci 9c44911226
Improve error messages in shard pruning (#4324) 2020-11-18 17:16:06 +03:00
Nils Dijk 7c891a01a9 create missing objects during upgrade path 2020-11-17 19:01:51 +01:00
Nils Dijk d065bb495d
Prepare downgrade script and bump development version to 10.0-1 2020-11-17 18:55:35 +01:00
Nils Dijk 213eb93e6d
make columnar compile and functionally working 2020-11-17 18:55:34 +01:00
Önder Kalacı 0c0fc69f2a
Remove unused field (#4275) 2020-11-17 11:41:57 +01:00
Nils Dijk 7d14800071
add placeholder for enterprise modules 2020-11-11 15:43:04 +01:00
Onur Tirtir 4bf754b245
Fix location of citus--10.0-1--9.5-1.sql downgrade script (#4306) 2020-11-09 16:43:56 +03:00
Onur Tirtir 5e3dc9d707 Bump citus version to 10.0devel 2020-11-09 13:16:54 +03:00
Hanefi Onaldi d3019f1b6d
Introduce foreach_ptr_modify macro (#4303)
If one wishes to iterate through a List and insert list elements in
PG13, it is not safe to use for_each_ptr, as the List representation
in PostgreSQL is no longer a linked list, but an array, and it is
possible that the whole array is repalloc'ed if there is not sufficient
space available.

See postgres commit 1cff1b95ab6ddae32faa3efe0d95a820dbfdc164 for more
information
2020-11-09 12:03:59 +03:00
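To illustrate the hazard and the safe shape the macro packages (a sketch; foreach_ptr_modify's exact definition isn't shown here, but indexed iteration that re-reads the list header each step is the PG13-safe pattern):

```c
#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * In PG13, lappend() may repalloc the List's backing array, so any
 * ListCell pointer held by a plain foreach() can dangle. Indexed
 * access through the list header stays valid across appends.
 */
static List *
AppendMatchesWhileIterating(List *itemList, void *itemToMatch)
{
	int originalLength = list_length(itemList);

	for (int index = 0; index < originalLength; index++)
	{
		void *item = list_nth(itemList, index);

		if (item == itemToMatch)
		{
			/* safe: no ListCell pointer survives the possible repalloc */
			itemList = lappend(itemList, item);
		}
	}

	return itemList;
}
```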
Onder Kalaci e0d2ac7620 Do not rely on set_rel_pathlist_hook for finding local relations
When a relation is used on an OUTER JOIN with FALSE filters,
set_rel_pathlist_hook may not be called for the table.

There might be other cases as well, so do not rely on the hook
for classification of the tables.
2020-11-06 11:14:30 +01:00
Onur Tirtir cc8be422ce
Fix relkind checks in planner for relkinds other than RELKIND_RELATION (#4294)
We were qualifying relations with relkind != RELKIND_RELATION as
non-relations due to the strict checks around RangeTblEntry->relkind
in planner.
2020-11-05 14:21:02 +03:00
SaitTalhaNisanci 25de5b1290
Fix uninitilized variable (#4293)
Valgrind found that we were doing an if check on an uninitialized variable,
and it seems that this is on context.appendparents.

ac22929a26/src/backend/utils/adt/ruleutils.c (L1054)
2020-11-04 12:08:15 +03:00
Hanefi Önaldı d6f19e2298
Honor error message conventions 2020-11-03 18:11:18 +03:00
Hanefi Önaldı 85a4b61a0e
Prevent undistribute_table calls for partitions 2020-11-03 18:10:20 +03:00
Hanefi Önaldı 5db380f33a
Prevent undistribute_table calls for foreign tables 2020-11-03 17:33:29 +03:00
Halil Ozan Akgul 77b3be8b6d Turn RelOptInfos into their only used field, relids, to be able to copy them 2020-10-22 13:42:28 +03:00
Onur Tirtir ef49b75cd6
Fix memory issues around deparsing index commands (#4270) 2020-10-22 13:17:13 +03:00
Onder Kalaci 5c4c9304ba Remove RemoveDuplicateJoinRestrictions() function
RemoveDuplicateJoinRestrictions() function was introduced with the aim of decreasing the overall planning times by eliminating the duplicate JOIN restriction entries (#1989). However, it turns out that the function itself is so CPU intensive, with a very high algorithmic complexity, that it hurts a lot more than it helps. The function is a clear example of premature optimization.

The table below shows the difference clearly:

| Scenario | Distributed query planning time (master) | RemoveDuplicateJoinRestrictions() execution time (master) | Planning time with RemoveDuplicateJoinRestrictions() removed (this PR) |
| --- | --- | --- | --- |
| 5 table INNER JOIN | 9 msec | 2 msec | 7 msec |
| 10 table INNER JOIN | 227 msec | 194 msec | 29 msec |
| 20 table INNER JOIN | 1 sec 235 msec | 1 sec 139 msec | 90 msec |
| 50 table INNER JOIN | 24 seconds | 21 seconds | 1.5 seconds |
| 100 table INNER JOIN | 2 minutes 16 seconds | 1 minute 53 seconds | 23 seconds |
| 250 table INNER JOIN | Bottleneck on JoinClauseList | 18 minutes 52 seconds | Bottleneck on JoinClauseList |
| 5 table INNER JOIN in subquery | 9 msec | 0 msec | 6 msec |
| 10 table INNER JOIN in subquery | 33 msec | 10 msec | 32 msec |
| 20 table INNER JOIN in subquery | 132 msec | 67 msec | 123 msec |
| 50 table INNER JOIN in subquery | 1.2 seconds | 900 msec | 500 msec |
| 100 table INNER JOIN in subquery | 6 seconds | 5 seconds | 2 seconds |
| 250 table INNER JOIN in subquery | 54 seconds | 37 seconds | 20 seconds |
| 5 table LEFT JOIN | 5 msec | 0 msec | 5 msec |
| 10 table LEFT JOIN | 11 msec | 0 msec | 13 msec |
| 20 table LEFT JOIN | 26 msec | 2 msec | 30 msec |
| 50 table LEFT JOIN | 150 msec | 15 msec | 193 msec |
| 100 table LEFT JOIN | 757 msec | 71 msec | 722 msec |
| 250 table LEFT JOIN | 8 seconds | 600 msec | 8 seconds |
| 5 JOINs among 2 table JOINs | 37 msec | 11 msec | 25 msec |
| 10 JOINs among 2 table JOINs | 536 msec | 306 msec | 352 msec |
| 20 JOINs among 2 table JOINs | 794 msec | 181 msec | 640 msec |
| 50 JOINs among 2 table JOINs | 25 seconds | 2 seconds | 22 seconds |
| 100 JOINs among 2 table JOINs | Bottleneck on JoinClauseList | 9 seconds | Bottleneck on JoinClauseList |
| 150 JOINs among 2 table JOINs | Bottleneck on JoinClauseList | 46 seconds | Bottleneck on JoinClauseList |

On top of the performance penalty, the function had a critical bug (#4255), and with #4254 we hit one more important bug. That one could be fixed by adding the following check to ContextCoversJoinRestriction():
```
/* Return true if both the inner and the outer relids of the two join restrictions match */
static bool
JoinRelIdsSame(JoinRestriction *leftRestriction, JoinRestriction *rightRestriction)
{
	Relids leftInnerRelIds = leftRestriction->innerrel->relids;
	Relids rightInnerRelIds = rightRestriction->innerrel->relids;
	if (!bms_equal(leftInnerRelIds, rightInnerRelIds))
	{
		return false;
	}

	Relids leftOuterRelIds = leftRestriction->outerrel->relids;
	Relids rightOuterRelIds = rightRestriction->outerrel->relids;
	if (!bms_equal(leftOuterRelIds, rightOuterRelIds))
	{
		return false;
	}

	return true;
}
```

However, adding this eliminates all the benefits that RemoveDuplicateJoinRestrictions() brings.

I've used the commands here to generate the JOINs mentioned in the PR: https://gist.github.com/onderkalaci/fe8654f9df5916c7af4c7c5eb892561e#file-gistfile1-txt

Inner and outer JOINs behave roughly the same; to keep the table simple, only INNER JOINs are included.
2020-10-21 10:29:39 +02:00
SaitTalhaNisanci 0f209377c4
Fix incorrect join related fields (#4242)
* Fix incorrect join related fields

Ruleutils expects the original index of join columns, hence we
should consider the dropped columns while setting the fields in
SetJoinRelatedFieldsCompat.

* add some more tests for joins

* Move tests to join.sql and create a utility function
2020-10-19 18:28:39 +03:00
Onur Tirtir c49077d594
Disallow outer joins `ON TRUE` with ref & dist tables when ref table is outer relation (#4255)
Disallow `ON TRUE` outer joins with reference & distributed tables
when the reference table is the outer relation, by fixing a logic bug
in the call to the `LeftListIsSubset` function.

Also, be more defensive when removing duplicate join restrictions
when join clause is empty for non-inner joins as they might still
contain useful information for non-inner joins.
2020-10-19 16:58:11 +03:00
Onur Tirtir f80f4839ad Remove unused functions that cppcheck found 2020-10-19 13:50:52 +03:00
Onder Kalaci bbedfca761 Improve the relation restriction counters
It seems like Postgres could call set_rel_pathlist() for
the same relation multiple times. This breaks the logic
where we assume relationCount equals the number of
entries in relationRestrictionList.

In summary, relationRestrictionList may contain duplicate
entries.
2020-10-19 08:51:16 +02:00
Nils Dijk caabbf4b84 Table access method support for distributed tables 2020-10-16 12:02:25 -07:00
Onur Tirtir 7cb07c70fa
Move hasSemiJoin to JoinRestrictionContext (#4256) 2020-10-16 18:37:39 +03:00
Marco Slot 8976f245ab Support reference table view in reference table modification 2020-10-16 11:31:24 +02:00
Onur Tirtir de6f2d3f42
Refactor JoinRestrictionListExistsInContext to improve readability (#4249) 2020-10-16 12:24:56 +03:00
Onder Kalaci fe3caf3bc8 Local execution considers intermediate result size limit
With this commit, we make sure that local execution accounts for the
intermediate result size the same way distributed execution does. Plus,
it enforces the citus.max_intermediate_result_size value.
2020-10-15 17:18:55 +02:00
Marco Slot 31858c8a29 Check table existence in EnsureRelationKindSupported 2020-10-15 17:05:06 +02:00
Sait Talha Nisanci ecde6c6eef Introduce GetCurrentLocalExecutionStatus wrapper
We should not access CurrentLocalExecutionStatus directly, because that
would mean that we could also set it directly. We shouldn't do that,
because we have checks verifying that the new state is possible;
otherwise we error. 2020-10-15 15:38:19 +03:00
Simon Kelly 4f94e544b7 create 9.5-1 udfs and update citus--9.4-1--9.5-1.sql 2020-10-15 13:50:36 +02:00
Simon Kelly 2a6c867cb0 Make citus_prepare_pg_upgrade idempotent
https://github.com/citusdata/citus/issues/3527
2020-10-15 13:49:50 +02:00
Onder Kalaci 15e724c073 Add regression tests for outer/cross JOINs 2020-10-14 15:17:30 +02:00
Onder Kalaci de33079065 Improve outer join checks
Before this commit, the logic was:
    - As long as the outer side of the JOIN is not a JOIN (e.g., relation
      or subquery etc.), we check for the existence of any recurring
      tuples. There were two implications of this decision.

      First, even if a subquery which is on the outer side contains
      distributed table JOIN reference table, Citus would unnecessarily throw
      an error. Note that the JOIN inside the subquery is already going
      to be tested recursively. But as long as that check passes, there
      is no reason for the upper JOIN to fail. An example, which
      used to fail and now works:

	SELECT * FROM (SELECT * FROM dist JOIN ref) as foo LEFT JOIN dist;

      Second, certain JOINs, especially those with ON (true) conditions, were
      not represented in the format that Citus expects in
      DeferredErrorIfUnsupportedRecurringTuplesJoin().
2020-10-14 15:17:30 +02:00
Onur Tirtir 1a28858c47
Disallow field indirection in INSERT/UPDATE queries (#4241) 2020-10-14 14:11:59 +03:00
Onur Tirtir 8efca3b60a
Fix a crash with inserting domain composite types in coord. evaluation (#4231)
Use short lived per-tuple context in citus_evaluate_expr like
(pg) evaluate_expr does.

We should not use planState->ExprContext when evaluating expressions,
as it might lead to freeing the same executor state twice (the first
time happens in citus_evaluate_expr itself and the second happens when
postgres does clean-up for the top level executor state), which in turn
might cause segfaults.

However, now as we don't have necessary planState info to evaluate
prepared statements, we also add planState->es_param_list_info to
per-tuple ExprContext.
2020-10-13 14:19:59 +03:00
Halil Ozan Akgul e2736c25bd Adds support for WITH TIES option 2020-10-12 19:34:18 +03:00
Onder Kalaci e29aa51a87 Do not copy bms 2020-10-09 16:41:36 +02:00
Sait Talha Nisanci dc40758355 Return early if there is no citus table in VACUUM 2020-10-09 11:10:00 +03:00
Sait Talha Nisanci 99bb79745a Commit transaction for VACUUM on shell table
With postgres 13, there is a global lock that prevents multiple VACUUMs
happening in the current database. This global lock is taken for a short
time but this creates a problem because of the following:

- We execute the VACUUM for the shell table through the standard process
utility. In this step the global lock is taken for the current database.
- If the current node has shard placements then it tries to execute
VACUUM over a connection to localhost with ExecuteUtilityTaskList.
- the VACUUM on shard placements cannot proceed because it is waiting
for the global lock for the current database to be released.
- The acquired lock from the VACUUM for shell table will not be released
until the transaction is committed.
- So there is a deadlock.

As a solution, we commit the current transaction in case of VACUUM after
the VACUUM is executed for the shell table. Executing the VACUUM on a
shell table is not important because the data there will probably be
truncated. PostprocessVacuumStmt takes the necessary locks on the shell
table so we don't need to take any extra locks after we commit the
current transaction.
2020-10-09 10:57:44 +03:00
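A rough sketch of the workaround described in the commit above, using standard PostgreSQL transaction APIs (the helper name is hypothetical):

```
#include "postgres.h"
#include "access/xact.h"
#include "utils/snapmgr.h"

/*
 * Hypothetical helper: after VACUUM has run for the shell table, commit
 * the current transaction so the per-database VACUUM lock is released
 * before VACUUM is sent to shard placements over localhost.
 */
static void
CommitAfterShellTableVacuum(void)
{
	if (ActiveSnapshotSet())
	{
		PopActiveSnapshot();
	}

	CommitTransactionCommand();
	StartTransactionCommand();
}
```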
Marco Slot 881e5df780 Fix a bug that could lead to multiple maintenance daemons 2020-10-08 16:18:14 +02:00
Simon Kelly 50fa4af7e4 update migration script 2020-10-08 12:52:27 +02:00
Simon Kelly 6fffee7616
Drop backup table after upgrade
The prepare-for-upgrade script creates the `public.pg_dist_rebalance_strategy` table, which is not dropped when the upgrade is finished. This may block future upgrades.
2020-10-08 09:48:04 +02:00
Marco Slot 73fc054c27 Rename DDL command functions 2020-10-06 11:30:56 +02:00
Marco Slot 4f69298d90 Fix RLS and replica identity propagation on shard move 2020-10-06 11:30:03 +02:00
Marco Slot dbc348b7e0 Create sequence dependency during metadata syncing 2020-10-06 10:57:39 +02:00
Marco Slot 9bba8bb4e8 Remove master_drop_sequences 2020-10-06 10:57:33 +02:00
Ahmet Gedemenli 81db4dca5c Degrade gracefully when no background workers available 2020-10-05 16:55:00 +03:00
Onur Tirtir 2cd0a69dfb
Fix multi-row & router INSERT crash with local exec. when def. cols not specified (#4197)
Multi-row & router INSERT's were crashing with local execution if at
least one of the DEFAULT columns were not specified in VALUES list.

This was because the changes we make on query->values_lists and
query->targetList were sufficient for deparsing the given INSERT for
remote execution, but not sufficient for local execution.

With this commit, DEFAULT value normalization for multi-row & router
INSERT's is fixed by adding dummy column references for unspecified
DEFAULT columns.
2020-10-05 10:45:17 +03:00
Hanefi Önaldı 6d8e83d24f
Replace worker_hash calls with partkey IS NOT NULL filters 2020-10-02 18:16:24 +03:00
Önder Kalacı df5aa0f0cc
Switch to sequential execution if the index name is long (#4209)
Citus has the logic to truncate the long shard names to prevent
various issues, including self-deadlocks. However, for partitioned
tables, when index is created on the parent table, the index names
on the partitions are auto-generated by Postgres. We use the same
Postgres function to generate the index names on the shards of the
partitions. If the length exceeds the limit, we switch to sequential
execution mode.
2020-10-02 13:39:34 +03:00
SaitTalhaNisanci 45bb0fb587
Do initial cleanup only once in pg_init (#4213)
In the postmaster's execution of _PG_init, IsUnderPostmaster will be false,
and we want to do the cleanup only at that time; otherwise there is a chance
that there will be parallel queries and we might do a cleanup for things that
are already in use.
2020-10-02 09:12:39 +03:00
Ahmet Gedemenli d268aa7bc8 Support EXPLAIN(ANALYZE, WAL) 2020-10-01 13:52:42 +03:00
Onder Kalaci 56ca256374 Forcefully terminate connections after citus.node_connection_timeout
After the connection timeout, we fail the session/pool. However, the
underlying connection can still be trying to connect. That is dangerous
because new placement executions have already been put in place. The
executor cannot handle the situation where multiple
EXECUTION_ORDER_ANY task executions succeed.

Adding a regression test doesn't seem easily doable. To reproduce the issue:
- Add 2 worker nodes
- Create a reference table
- Set citus.node_connection_timeout to 1ms (requires a code change)
- Continuously execute `SELECT count(*) FROM ref_table`
- Sometime later, you hit an out-of-bounds array access in
  `ScheduleNextPlacementExecution()`, hence crashing.
- The reason for that is that sometimes the first connection is
  successfully established while the executor is already
  trying to execute the query on the second node.
2020-09-30 18:24:24 +02:00
Hanefi Önaldı b0a2c1ee5c
Disallow volatile functions on single shard update queries
We currently do not support volatile functions in update/delete statements
because the function evaluation logic does not know how to distinguish
volatile functions (that need to be evaluated per row) from stable functions
(that need to be evaluated per query), and it is also not safe to push the
volatile functions down on replicated tables.
2020-09-29 15:40:21 +03:00
Marco Slot b905c8043d Fix create index concurrently crash with local execution 2020-09-25 11:49:09 +02:00
Ahmet Gedemenli abfb79bda6 Sort explain analyze output by task time
Add sort method parameter for regression tests

Fix check-style

Change sorting method parameters to enum

Polish

Add task fields to OutTask

Add test into multi_explain

Fix isolation test
2020-09-24 11:38:40 +03:00
Onur Tirtir 64d5ac6a10
Do not downgrade if a citus local table exists (#4174)
As previous versions of Citus don't know how to handle citus local
tables, we should prevent downgrading from 9.5 to older versions if any
citus local tables exist.
2020-09-22 14:19:50 +03:00
Onder Kalaci 5d017cd123 Improve node metadata when coordinator is added
The coordinator should always be active, hasmetadata and
metadatasynced. Prevent changing those fields.
2020-09-21 14:53:41 +02:00
Onder Kalaci 6fc1dea85c Improve the robustness of function call delegation
Pushing down CALLs to the node on which the CALL is executed is
dangerous and could lead to infinite recursion.

When the coordinator is added as a worker, Citus was by chance preventing
this. The coordinator was marked as a "not metadatasynced" node
in pg_dist_node, which prevented CALL/function delegation from happening.

With this commit, we do the following:

  - Fix metadatasynced column for the coordinator on pg_dist_node
  - Prevent pushdown of a function/procedure to the same node that
    the function/procedure is being executed on. Today, we do not sync
    pg_dist_object (e.g., distributed functions metadata) to the
    worker nodes. But even if we did, the function call delegation
    check would still prevent the infinite recursion.
2020-09-21 14:53:30 +02:00
SaitTalhaNisanci e7cd1ed0ee
Not take ShareUpdateExclusiveLock on pg_dist_transaction (#4184)
* Not take ShareUpdateExclusiveLock on pg_dist_transaction

We were taking ShareUpdateExclusiveLock on pg_dist_transaction during
recovery to prevent multiple recoveries happening concurrently. VACUUM
(not FULL) also takes ShareUpdateExclusiveLock, and they can conflict.
It seems that VACUUM will skip the table if there is a conflicting lock
already taken, unless it is doing the vacuum to prevent id wraparound,
in which case there can be a deadlock. I guess the deadlock happens if:

- VACUUM takes a lock on pg_dist_transaction and is done for the id
wraparound problem
- The transaction in the maintenance daemon tries to take a lock but
cannot, as that conflicts with the lock acquired by VACUUM
- The transaction in the maintenance daemon has a very old xid, hence
VACUUM cannot proceed.

If we take a row exclusive lock in transaction recovery instead, it
doesn't conflict with VACUUM, hence VACUUM can proceed and the deadlock
is resolved. To prevent concurrent transaction recoveries from
happening, an advisory lock is taken with ShareUpdateExclusiveLock as
before.

* Use CITUS_OPERATIONS tag
2020-09-21 15:20:38 +03:00
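A hedged sketch of the locking scheme the commit above describes; the helper name and the advisory lock key below are assumptions for illustration:

```
#include "postgres.h"
#include "miscadmin.h"
#include "storage/lmgr.h"
#include "storage/lock.h"

static void
LockTransactionRecovery(Oid pgDistTransactionId)
{
	LOCKTAG tag;

	/*
	 * RowExclusiveLock does not conflict with VACUUM's
	 * ShareUpdateExclusiveLock, so an anti-wraparound VACUUM of
	 * pg_dist_transaction can proceed while recovery runs.
	 */
	LockRelationOid(pgDistTransactionId, RowExclusiveLock);

	/*
	 * Concurrent recoveries still serialize among themselves on an
	 * advisory lock taken in ShareUpdateExclusiveLock mode.
	 */
	SET_LOCKTAG_ADVISORY(tag, MyDatabaseId, 0, 0, 2);
	(void) LockAcquire(&tag, ShareUpdateExclusiveLock, false, false);
}
```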
Onur Tirtir 1b31b22635 Refactor the functions that return OID lists for citus tables 2020-09-18 16:42:46 +03:00
SaitTalhaNisanci dae2c69fd7
Not allow removing a single node with ref tables (#4127)
* Not allow removing a single node with ref tables

We should not allow removing a node if it is the only node in the
cluster and there is data on it. We have this check for distributed
tables, but we didn't have it for reference tables.

* Update src/test/regress/expected/single_node.out

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>

* Update src/test/regress/sql/single_node.sql

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2020-09-18 15:35:59 +03:00
SaitTalhaNisanci 6e316d46a2
Remove unused variable (#4172) 2020-09-18 11:25:07 +03:00
Marco Slot c9d46c618b Fix EXPLAIN ANALYZE truncation 2020-09-17 14:42:21 +02:00
Onur Tirtir d81559b7f8
Use "table" instead of "reference table" in sequential truncate log (#4164)
We might get this debug message for citus local tables as well
2020-09-17 14:37:36 +03:00
SaitTalhaNisanci 5723038f74
Comment user provided input memory allocation (#4163) 2020-09-17 13:18:13 +03:00
Onur Tirtir 4118560b75
Prevent citus local table creation from a catalog table (#4158) 2020-09-15 14:30:48 +03:00
Marco Slot bd12555b16 Fix distributing tables owned by extensions 2020-09-10 04:46:11 +02:00
Onur Tirtir 3a73fba810 Apply planner changes for citus local tables 2020-09-09 11:51:18 +03:00
Onur Tirtir 0b1cc118a9 Adapt other cache entry changes for citus local tables 2020-09-09 11:50:55 +03:00
Onur Tirtir a58a4395ab Extend citus local table utility command support
This commit brings the following features:

* Foreign key support from citus local tables to reference tables
* Foreign key support from reference tables to citus local tables
  (only with RESTRICT & NO ACTION behavior)
* ALTER TABLE ENABLE/DISABLE trigger command support
* CREATE/DROP/ALTER trigger command support

and disallows:
* ALTER TABLE ATTACH/DETACH PARTITION commands
* CREATE TABLE <postgres table> ATTACH PARTITION <citus local table>
  commands
* Foreign keys from postgres tables to citus local tables
  (the other way was already disallowed)

for citus local tables.
2020-09-09 11:50:55 +03:00
Onur Tirtir 17cc810372 Implement "citus local table" creation logic 2020-09-09 11:50:48 +03:00
Onur Tirtir ba208eae4d
Record non-distributed table accesses in local executor (#4139) 2020-09-07 18:19:08 +03:00
Nils Dijk bbf42063a7
export LookupShardTransferMode 2020-09-03 16:06:38 +02:00
Nils Dijk 6e4862c57f
expose transfermode for ensure reference table existance 2020-09-03 16:06:37 +02:00
SaitTalhaNisanci 366461ccdb
Introduce cache entry/table utilities (#4132)
Introduce table entry utility functions

Citus table cache entry utilities are introduced so that we can easily
extend existing functionality with minimum changes, specifically changes
to these functions. For example IsNonDistributedTableCacheEntry can be
extended for citus local tables without the need to scan the whole
codebase and update each relevant part.

* Introduce utility functions to find the type of tables

A table type can be a reference table, a hash/range/append distributed
table. Utility methods are created so that we don't have to worry about
how a table is considered as a reference table etc. This also makes it
easy to extend the table types.

* Add IsCitusTableType utilities

* Rename IsCacheEntryCitusTableType -> IsCitusTableTypeCacheEntry

* Change citus table types in some checks
2020-09-02 22:26:05 +03:00
Jelte Fennema 451ea04508 Rename ForceXxx functions to XxxOrError
This clearer naming was suggested in https://github.com/citusdata/citus/pull/4001
2020-09-01 11:19:17 +02:00
Hanefi Önaldı 024d398cd7
Allow distribution of functions that read from reference tables
create_distributed_function(function_name,
                            distribution_arg_name,
                            colocate_with text)

This UDF did not allow the colocate_with parameter when no
distribution_arg_name was supplied. This commit changes the behaviour to
allow a missing distribution_arg_name parameter when the function should
be colocated with a reference table.
2020-09-01 07:28:34 +03:00
Önder Kalacı 983206c5e1
Hide `citus.subquery_pushdown` flag and NOTICE when enabled (#4124)
* Hide citus.subquery_pushdown flag

This flag is dangerous and could easily let queries
return wrong results.

The flag has a very specific purpose for a very specific
data distribution and query structure. In those cases, when
the flag is set, the user can skip recursive planning altogether
*at their own risk*.

The meaning of the flag is that "I know what I'm doing such that
the query structure/data distribution is under my control, so Citus
can skip many correctness checks".

For regular users, enabling this flag is discouraged. We have to
keep the support only for backward compatibility for some users.

In addition to that, give a NOTICE to discourage new users
from using it.
2020-08-28 14:53:09 +02:00
SaitTalhaNisanci f7c2af0411
Rename RemoveCoordinatorPlacement (#4125)
RemoveCoordinatorPlacement does not do what it says. It removes the
coordinator placement only if there are other placements, so it is not a
single node, and only if the coordinator has a placement.
2020-08-26 13:12:10 +03:00
Hanefi Onaldi f47b3a7e7d
Remove unused parameters from round robin reordering and friends (#4120) 2020-08-20 12:45:01 +03:00
SaitTalhaNisanci 20c39fae9a
Loosen the requirement to pushdown a subquery with ref tables (#4110)
AllTargetExpressionsAreColumnReferences would return false if a query
had an entry referencing the outer query. It seems safe to skip this
check for non-distributed tables, such as reference tables. We
already have separate checks for other cases such as having limits.
2020-08-14 12:11:15 +03:00
SaitTalhaNisanci 679bf0d2b2
Create CanPushdownSubquery wrapper for better readability (#4108) 2020-08-12 17:28:20 +03:00
SaitTalhaNisanci 73ef40886b
Rename FindNodeCheckXXX functions (#4106)
FindNodeCheck is not clear about what the function is doing. The
functions are renamed to FindNodeMatchingCheckFunctionXXX. Also, the
CheckNodeFunc type is introduced for choosing elements in these functions.
2020-08-11 15:01:23 +03:00
Hadi Moshayedi 7b74eca22d Support EXPLAIN EXECUTE ANALYZE. 2020-08-10 13:44:30 -07:00
Marco Slot 768d8b232c Do not take multi-shard locks on workers 2020-08-06 21:48:25 +02:00
Hanefi Onaldi 5be8287989
Fix comments of helper functions that set local config values (#4100) 2020-08-07 11:20:38 +03:00
Halil Ozan Akgul 375310b7f1 Adds support for table undistribution 2020-08-05 14:36:03 +03:00
Sait Talha Nisanci fe4ac51d8c Normalize Output:.. since it changes with pg13
Fix indentation for better readability
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 33406598e3 Add ruleutils changes from 3977 and 4011 2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 63ed126ad4 Set buffer usage with explain
It seems that currently we process even postgres tables in explain
commands. This is because we register a hook for explain and we don't
have any check to see if the query has any citus table.

With this commit, we now send the buffer usage as well to the relevant
API. There is some duplication in the code, but it is because of the
existing structure; we can refactor this separately.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci fe1e1c9b68 Rename Set_ptr_value to SetListCellPtr to be more explicit
Move header to right place and fix comment style
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 8e9b52971c Use new var field names in the codebase
The codebase is updated to use varattnosyn and varnosyn and we defined
the macros for older versions. This way we can just remove the macros
when we drop an older version.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci b641f63bfd Use CMDTAG_SELECT_COMPAT
CMDTAG_SELECT exists in PG12 hence defining a MACRO such as
CMDTAG_SELECT -> "SELECT" is not possible. I chose CMDTAG_SELECT_COMPAT
because with the COMPAT suffix it is explicit that it maps to different
things in different versions and also has less chance of mapping to
something irrelevant. For example, if we used SELECT as a macro, it
would map every SELECT to whatever it is mapping to, which might have
unexpected/undesired behaviour.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci d68bfc5687 Improve error for index operator class parameters
The error message when an index has opclassopts is improved, and the
commit from the postgres side is also included for future reference.

Also some minor style related changes are applied.
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci d0b0c88920 Changelog: error out if index has opclassopts
Error out if index has opclassopts.

Changelog entry on PG13:
Allow CREATE INDEX to specify the GiST signature length and maximum number of integer ranges (Nikita Glukhov)
2020-08-04 15:38:13 +03:00
Sait Talha Nisanci 87088d92bc Changelog: handle VACUUM PARALLEL option
Postgres 13 added a new VACUUM option, PARALLEL. It is now supported
in our code as well.

Relevant changelog message on postgres:
Allow VACUUM to process indexes in parallel (Masahiko Sawada, Amit Kapila)
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 1070828465 update cte inline output for pg13
Make some macros in version_compat more robust
Remove commented code in ruleutils
Remove unnecessary variable assignments
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 1112b254a7 adapt recently added code for pg13
This commit mostly adds pg_get_triggerdef_command to our ruleutils_13.
This doesn't add anything extra for ruleutils_13, so it is basically a
copy of the change on ruleutils_12.
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 108a2972c2 Introduce a workaround for join aliases
When there is a join alias, var->varnosyn will point to the alias and
var->varno will point to the table itself, but we need to use the alias
when deparsing the query. Hence a workaround is introduced in ruleutils
to solve this problem. Normally this case can be detected with a
dpns->plan == NULL check, but in our case dpns->plan is always NULL. We
should sync our ruleutils with postgres ruleutils at some point. This
could be a wrong solution as well, but the tests pass.
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci c5c9ec288f fix multi_mx_create_table test 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 6ad708642e Fix rte index with pg >=13
Rte index is increased by range table index offset in pg >= 13. The
offset is removed with the pg >= 13.

Currently pushdown for union all is disabled because translatedVars is
set to nil on the postgres side, and we were using translatedVars to
figure out if the partition key has the same index on both sides of the
union all. This should be fixed.

Commit on postgres side:
6ef77cf46e81f45716ec981cb08781d426181378

fix union all pushdown logic for pg13

Before pg 13, there was a field, translatedVars, and we were using that
to understand if the partition key has the same index on both sides of
the union all. With pg13 there is a parent_colnos field in appendRelInfo
and we can use that to get the attribute numbers(varattnos) in union all
vars. We make use of parent_colnos instead of translatedVars in pg >=13.
2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 3cc7717e64 Fill new join fields for PG>=13
For joins, 3 new fields are added: joinleftcols, joinrightcols, and
joinmergedcols. We are not interested in joinmergedcols because we
always expand the columns used in joins. Therefore joinmergedcols is
always 0 in our case.

For filling joinleftcols and joinrightcols we basically construct the
lists with sequences so either list is of the form: [1 2 3 4 .... n]

Ruleutils is not completely synced with postgres ruleutils; the most
important part is the identify_join_columns function change, which now
uses joinleftcols and joinrightcols.

Commit on postgres side:
9ce77d75c5ab094637cc4a446296dc3be6e3c221

A useful email thread:
https://www.postgresql.org/message-id/flat/7115.1577986646%40sss.pgh.pa.us#0ae1d66feeb400013fbaa67a7cccd6ca
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci bc20920252 introduce SetJoinRelatedColumnsCompat
PG13 now uses joinmergedcols, joinleftcols and joinrightcols for
identifying join columns. The relevant fields are set on the citus side.

Postgres side commit:
9ce77d75c5ab094637cc4a446296dc3be6e3c221
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 135af84859 Update ruleutils for join related changes of postgres
Postgres changed some join-related fields and therefore also changed
ruleutils; this commit applies those changes to our copy of
ruleutils.

Related commit on postgres side:
9ce77d75c5ab094637cc4a446296dc3be6e3c221
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 38aaf1faba use QueryCompletion struct
Postgres introduced the QueryCompletion struct. Hence a compat utility
is added to finish query completion for both older versions and pg >= 13.

The commit on Postgres side:
2f9661311b83dc481fc19f6e3bda015392010a40
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 9f1ec792b3 add queryString to distributed_planner
distributed_planner now takes the query string as a parameter.

related commit on PG side:
6aba63ef3e606db71beb596210dd95fa73c44ce2
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 1a7ccac6ef Add RangeTableEntryFromNSItem macro
addRangeTableEntryXXX methods return a ParseNamespaceItem with pg >= 13.
The RangeTableEntryFromNSItem macro is added so that we return the range
table entry from the ParseNamespaceItem in pg >= 13; for pg < 13 the rte
is already returned by the addRangeTableEntryXXX methods.

Commit on Postgres side:
5815696bc66b3092f6361f53e0394909647042c8
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 4ed30a0824 create Set_ptr_value
Since PG13 changed the List implementation, a ListCell no longer
contains a data union. Therefore the Set_ptr_value macro is created, so
that depending on the version it uses either cell->data.ptr_value or
cell->ptr_value.

Commit on Postgres side:
1cff1b95ab6ddae32faa3efe0d95a820dbfdc164
2020-08-04 15:10:22 +03:00
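Based on that description, the macro presumably looks roughly like this (exact spelling in Citus may differ):

```
#if PG_VERSION_NUM >= 130000
#define Set_ptr_value(cell, ptrValue) ((cell)->ptr_value = (ptrValue))
#else
#define Set_ptr_value(cell, ptrValue) ((cell)->data.ptr_value = (ptrValue))
#endif
```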
Sait Talha Nisanci 688ab16bba Introduce ExplainOnePlanCompat
Since ExplainOnePlan expects BufferUsage as well with PG >= 13,
ExplainOnePlanCompat is added.

Commit on Postgres side:
ed7a5095716ee498ecc406e1b8d5ab92c7662d10
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 6314eba5df introduce standard_planner_compat
standard_planner now takes the query string as a parameter as well with
pg >= 13.

Commit on Postgres Side:
66888f7424f7d6c7cea2c26e181054d1455d4e7a
2020-08-04 15:10:22 +03:00
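One plausible shape for the compat macro described above, given the PG13 signature change (the exact definition in Citus may differ, e.g. in how the query string is passed through):

```
#if PG_VERSION_NUM >= 130000
#define standard_planner_compat(parse, cursorOptions, boundParams) \
	standard_planner(parse, NULL, cursorOptions, boundParams)
#else
#define standard_planner_compat(parse, cursorOptions, boundParams) \
	standard_planner(parse, cursorOptions, boundParams)
#endif
```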
Sait Talha Nisanci 991f49efc9 introduce getOwnedSequencesCompat macro
Commit on Postgres side:
19781729f789f3c6b2540e02b96f8aa500460322
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 01632c56a0 Change utils/hashutils.h to common/hashfn.h for PG >= 13
Commit on postgres side:
05d8449e73694585b59f8b03aaa087f04cc4679a

Command on postgres side:
git log --all --grep="hashutils"

include common/hashfn.h for pg >= 13

tag_hash was moved from hsearch.h to hashutils.h then to hashfn.h

Commits on Postgres side:
9341c783cc42ffae5860c86bdc713bd47d734ffd
2020-08-04 15:10:22 +03:00
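The include switch itself can be expressed with a version guard like the following sketch:

```
#if PG_VERSION_NUM >= 130000
#include "common/hashfn.h"
#else
#include "utils/hashutils.h"
#endif
```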
Sait Talha Nisanci 00e7386007 introduce PortalDefineQuerySelectCompat
PortalDefineQuery doesn't accept a char* for the command tag anymore with
PG >= 13. We are currently only using it with SELECT, therefore a
PortalDefineQuery compat for SELECT is created.

Commit on PG side:
2f9661311b83dc481fc19f6e3bda015392010a40
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 62879ee8c1 introduce planner_compat and pg_plan_query_compat macros
As the new planner and pg_plan_query_compat methods expect the query
string as well, macros are defined to be compatible in different
versions of postgres.

Relevant commit on Postgres:
6aba63ef3e606db71beb596210dd95fa73c44ce2

Command on Postgres:
git log --all --grep="pg_plan_query"
2020-08-04 15:10:22 +03:00
Sait Talha Nisanci bf831d2e59 Use table_openXXX methods in the codebase
With PG13 heap_* (heap_open, heap_close etc) are replaced with table_*
(table_open, table_close etc).

It is better to use the new table access methods in the codebase and
define the macros for the previous versions as we can easily remove the
macro without having to change the codebase when we drop the support for
the old version.

Commits that introduced this change on Postgres:
f25968c49697db673f6cd2a07b3f7626779f1827
e0c4ec07284db817e1f8d9adfb3fffc952252db0
4b21acf522d751ba5b6679df391d5121b6c4a35f

Command to see relevant commits on Postgres side:
git log --all --grep="heap_open"
2020-08-04 15:10:22 +03:00
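A sketch of such macros for versions predating the table_* API; since table_open/table_close were added in PG12, the guard below assumes PG11 as the old version:

```
#if PG_VERSION_NUM < 120000
#define table_open(relationId, lockMode) heap_open(relationId, lockMode)
#define table_openrv(relation, lockMode) heap_openrv(relation, lockMode)
#define table_close(table, lockMode) heap_close(table, lockMode)
#endif
```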
Sait Talha Nisanci 0819b79631 introduce list compat macros
Pass the list to lnext API
lnext API now expects the list as well.
The commit on Postgres that introduced the change: 1cff1b95ab6ddae32faa3efe0d95a820dbfdc164

lnext_compat and list_delete_cell_compat macros are introduced so that
we can use these macros without having to use #if
directives in the codebase.

Related commit on postgres:
1cff1b95ab6ddae32faa3efe0d95a820dbfdc164

Command to search in postgres:
git log --all --grep="list_delete_cell"

add ListCellAndListWrapper

When iterating a list in separate function calls, we need both the list
and the current cell starting from PG13, therefore
ListCellAndListWrapper is added to store both as a wrapper.

Use ListCellAndListWrapper in foreign key test udfs

As we iterate a list in these udfs using a functionContext, we need to
use the wrapper to be able to access both the list and the current cell.
2020-08-04 15:10:22 +03:00
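A sketch of the pieces named above, assuming they simply mirror the PG13 signature change:

```
#include "nodes/pg_list.h"

#if PG_VERSION_NUM >= 130000
#define lnext_compat(list, cell) lnext(list, cell)
#else
#define lnext_compat(list, cell) lnext(cell)
#endif

/* carries both pieces of iteration state across function calls */
typedef struct ListCellAndListWrapper
{
	List *list;
	ListCell *listCell;
} ListCellAndListWrapper;
```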
Sait Talha Nisanci 8ce8683ac4 Update ruleutils_13.c with postgres ruleutils
Some manual updates are done for ruleutils_13 based on the difference
between pg12 ruleutils and pg13 ruleutils.
2020-08-04 13:34:13 +03:00
Sait Talha Nisanci 30549dc0e2 add copy of ruleutils_12 as ruleutils_13 2020-08-04 13:34:13 +03:00
Onder Kalaci eeb8c81de2 Implement shared connection count reservation & enable `citus.max_shared_pool_size` for COPY
With this patch, we introduce `locally_reserved_shared_connections.c/h` files
which are responsible for reserving some space in shared memory counters
upfront.

We sometimes need to reserve connections, but not necessarily
establish them. For example:
-  COPY command should reserve connections as it cannot know which
   connections it needs in which order. COPY establishes connections
   as any input data hits the workers. For example, for router COPY
   command, it only establishes 1 connection.

   As discussed here (https://github.com/citusdata/citus/pull/3849#pullrequestreview-431792473),
   COPY needs to reserve connections up-front, otherwise we can end
   up with resource starvation/undetected deadlocks.
2020-08-03 18:51:40 +02:00
nukoyluoglu 38987431e7
propagation of CHECK statements to workers with parentheses (#4039)
* ensure propagation of CHECK statements to workers with parentheses & adjust regression test outputs

* add tests for distributing tables with simple CHECK constraints

* added test for CHECK on bool variable
2020-07-27 15:08:37 +03:00
Benjamin Satzger a35a15a513
Distribute custom aggregates with multiple arguments (#4047)
Enable custom aggregates with multiple parameters to be executed on workers.

#2921 introduces distributed execution of custom aggregates. One of the limitations of this feature is that only aggregate functions with a single aggregation parameter can be pushed to worker nodes. Aim of this change is to remove that limitation and support handling of multi-parameter aggregates.

Resolves: #3997
See also: #2921
2020-07-24 15:16:00 -07:00
Halil Ozan Akgul 38b72ddd66 Fixes create index concurrently bug 2020-07-24 12:14:14 +03:00
SaitTalhaNisanci ef841115de
Fix int32 overflow and use PG macros for INT32_XX (#4061)
* Use CalculateUniformHashRangeIndex in HashPartitionId

The INT32_MIN definition can change among different platforms, hence it
is possible to get an overflow; we would see crashes because of this on
Debian distros. We have already solved a similar problem by introducing
the CalculateUniformHashRangeIndex method, hence we can use the same
method to solve this; it also removes some duplication and gives a
single place to decide that.

* Use PG_INT32_XX instead of INT32_XX to be safer
2020-07-23 18:30:08 +03:00
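An illustrative version of the overflow-safe mapping the commit above refers to (not Citus's exact arithmetic): widen to 64 bits before shifting the hashed value by the minimum int32, so neither the subtraction nor the division can overflow.

```
#include <stdint.h>

static int
UniformHashRangeIndexExample(int32_t hashedValue, int shardCount)
{
	/* number of hash values per shard, computed in 64 bits;
	 * shardCount is assumed to be positive */
	uint64_t hashTokenIncrement = (UINT64_C(1) << 32) / shardCount;

	/* shift [INT32_MIN, INT32_MAX] onto [0, 2^32) without overflow */
	uint64_t shiftedValue =
		(uint64_t) ((int64_t) hashedValue - (int64_t) INT32_MIN);

	int shardIndex = (int) (shiftedValue / hashTokenIncrement);

	/* flooring hashTokenIncrement can push the largest values one
	 * past the end; clamp them into the last shard */
	if (shardIndex >= shardCount)
	{
		shardIndex = shardCount - 1;
	}

	return shardIndex;
}
```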
Halil Ozan Akgül e9f89ed651
Fixes the non existing table bug (#4058) 2020-07-23 18:01:21 +03:00
Onder Kalaci a2f53dff74 Make FindAvailableConnection() more strict
With adaptive connection management, we might have some connections
which are not fully initialized. Those connections should not be
qualified as available.
2020-07-23 15:59:50 +02:00
Onder Kalaci cfb633601d Minor refactorings in COPY command execution
1) Rename CONNECTION_PER_PLACEMENT to REQUIRE_CLEAN_CONNECTION. This is
mostly to make things clear as the new name reveals more.

2) We also make sure to mark all the copy connections critical,
even if they are accessed earlier in the transaction
2020-07-23 15:36:19 +02:00
SaitTalhaNisanci 64469708af
separate the logic in ManageWorkerPool (#3298) 2020-07-23 13:47:35 +03:00
Onder Kalaci 52c0fccb08 Move executor specific logic to a function
Because we're planning to use the same logic, it'd be nice to
use the exact same functions.
2020-07-22 15:09:47 +02:00
Onder Kalaci ff6555299c Unify node sort ordering
The executor relies on WorkerPool, and many other places rely on WorkerNode.
With this commit, we make sure that they are sorted via the same function/logic.
2020-07-22 11:03:25 +02:00
Sait Talha Nisanci 01c23b0df2 update test outputs with task-tracker removal 2020-07-21 16:25:08 +03:00
Sait Talha Nisanci 4308d867d9 remove task-tracker in comments, documentation 2020-07-21 16:21:01 +03:00
Sait Talha Nisanci a3dc8fe2b5 remove occurrences of task-tracker from gucs 2020-07-21 16:19:46 +03:00
Hanefi Önaldı e534dbae4a
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values which is indicated by GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
default for the affected users and databases is stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
2020-07-21 03:49:57 +03:00
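For illustration, PostgreSQL ships a helper for exactly this splitting; a hedged sketch of using it (the wrapper function is hypothetical, and note that SplitGUCList scribbles on its input string):

```
#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/varlena.h"

/* split a GUC_LIST_INPUT value such as "a, b, c" into its items */
static List *
SplitGucListValue(char *rawValue)
{
	List *gucItemList = NIL;

	if (!SplitGUCList(rawValue, ',', &gucItemList))
	{
		ereport(ERROR, (errmsg("invalid list syntax in GUC value")));
	}

	return gucItemList;
}
```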
Onder Kalaci c25de2cf22 Remove flag from
As it doesn't make any sense anymore
2020-07-20 12:45:05 +02:00
SaitTalhaNisanci b3af63c8ce
Remove task tracker executor (#3850)
* use adaptive executor even if task-tracker is set

* Update check-multi-mx tests for adaptive executor

Basically repartition joins are enabled where necessary. For parallel
tests, the max adaptive executor pool size is decreased to 2, otherwise
we would get a "too many clients" error.

* Update limit_intermediate_size test

It seems that when we use adaptive executor instead of task tracker, we
exceed the intermediate result size less in the test. Therefore we
updated the tests accordingly.

* Update multi_router_planner

It seems that there is one problem with multi_router_planner when we use
the adaptive executor; we should fix the following error:
+ERROR:  relation "authors_range_840010" does not exist
+CONTEXT:  while executing command on localhost:57637

* update repartition join tests for check-multi

* update isolation tests for repartitioning

* Error out if shard_replication_factor > 1 with repartitioning

As we are removing the task tracker, we cannot switch to it if
shard_replication_factor > 1. In that case, we simply error out.

* Remove MULTI_EXECUTOR_TASK_TRACKER

* Remove multi_task_tracker_executor

Some utility methods are moved to task_execution_utils.c.

* Remove task tracker protocol methods

* Remove task_tracker.c methods

* remove unused methods from multi_server_executor

* fix style

* remove task tracker specific tests from worker_schedule

* comment out task tracker udf calls in tests

We were using task tracker udfs to test permissions in
multi_multiuser.sql. We should find some other way to test them, then we
should remove the commented out task tracker calls.

* remove task tracker test from follower schedule

* remove task tracker tests from multi mx schedule

* Remove task-tracker specific functions from worker functions

* remove multi task tracker extra schedule

* Remove unused methods from multi physical planner

* remove task_executor_type related things in tests

* remove LoadTuplesIntoTupleStore

* Do initial cleanup for repartition leftovers

During startup, task tracker would call TrackerCleanupJobDirectories and
TrackerCleanupJobSchemas to clean up leftover directories and job
schemas. With adaptive executor, while doing repartitions it is possible
to leak these things as well. We don't retry cleanups, so it is possible
to have leftovers in case of errors.

TrackerCleanupJobDirectories is renamed as
RepartitionCleanupJobDirectories since it is repartition specific now,
however TrackerCleanupJobSchemas cannot be used currently because it is
task tracker specific. The thing is that this function is a no-op
currently.

We should add cleaning up intermediate schemas to DoInitialCleanup
method when that problem is solved(We might want to solve it in this PR
as well)

* Revert "remove task tracker tests from multi mx schedule"

This reverts commit 03ecc0a681.

* update multi mx repartition parallel tests

* not error with task_tracker_conninfo_cache_invalidate

* not run 4 repartition queries in parallel

It seems that when we run 4 repartition queries in parallel we get a "too
many clients" error on CI even though we don't get it locally. Our guess
is that it is because we open/close many connections without doing much
work, and postgres has some delay in closing connections. Hence even
though connections are removed from the pg_stat_activity, they might
still not be closed. If the above assumption is correct, it is unlikely
for it to happen in practice because:
- There is some network latency in clusters, so this leaves some times
for connections to be able to close
- Repartition joins return some data and that also leaves some time for
connections to be fully closed.

As we don't get this error in our local, we currently assume that it is
not a bug. Ideally this wouldn't happen when we get rid of the
task-tracker repartition methods because they don't do any pruning and
might be opening more connections than necessary.

If this still gives us "too many clients" error, we can try to increase
the max_connections in our test suite(which is 100 by default).

Also, there are different places where this error is given in postgres,
but after adding some backtraces it seems that we get this one from
ProcessStartupPacket. The backtraces can be found at this link:
https://circleci.com/gh/citusdata/citus/138702

* Set distributePlan->relationIdList when it is needed

It seems that we were setting the distributedPlan->relationIdList after
JobExecutorType is called, which would choose task-tracker if
replication factor > 1 and there is a repartition query. However, it
uses relationIdList to decide if the query has a repartition query, and
since it was not set yet, it would always think it is not a repartition
query and would choose adaptive executor when it should choose
task-tracker.

* use adaptive executor even with shard_replication_factor > 1

It seems that we were already using adaptive executor when
replication_factor > 1. So this commit removes the check.

* remove multi_resowner.c and deprecate some settings

* remove TaskExecution related leftovers

* change deprecated API error message

* not recursively plan single relation repartition subquery

* recursively plan single relation repartition subquery

* test deprecated task tracker functions

* fix overlapping shard intervals in range-distributed test

* fix error message for citus_metadata_container

* drop task-tracker deprecated functions

* put the implementation back into worker_cleanup_job_schema_cache since citus cloud uses it

* drop some functions, add downgrade script

Some deprecated functions are dropped.
Downgrade script is added.
Some gucs are deprecated.
A new guc for repartition joins bucket size is added.

* add ORDER BY to a test to fix flakiness
2020-07-18 13:11:36 +03:00
Hadi Moshayedi 13003d8d05 Use TupleDestination API for partitioning in insert/select. 2020-07-17 09:43:46 -07:00
Marco Slot b823f2127d Prevent integer overflow in FindShardIntervalIndex 2020-07-16 14:30:56 +02:00
Nils Dijk d0b6e62c9a
change wording to allowlist and the likes (#3906)
In the same line as #3904

Change wording to better reflect use and remove words that enforce/maintain bias.
2020-07-15 16:24:40 +02:00
Marco Slot 9cb8dc9d12 Improve error message when creating a foreign key to a local table 2020-07-13 13:57:22 +02:00
Marco Slot 5fbb925df1 Remove level asserts in abort handler 2020-07-12 22:54:35 +02:00
SaitTalhaNisanci bc011a6286
Add IsCitusTable check to citus table utilities (#4028) 2020-07-14 18:29:33 +03:00
Hanefi Önaldı 315b323d47
Introduce new make targets for downgrade scripts
Here are the updated make targets:
- install: install everything except downgrade scripts.
- install-downgrades: build and install only the downgrade migration scripts.
- install-all: install everything along with the downgrade migration scripts.
2020-07-14 13:10:18 +03:00
Sait Talha Nisanci 510535f558 address feedback 2020-07-13 19:45:02 +03:00
Sait Talha Nisanci 41ec76a6ad use ActiveReadableNodeList in JobExecutorType and task tracker
The reason we should use ActiveReadableNodeList instead of ActiveReadableNonCoordinatorNodeList is that if the coordinator is added to the cluster as a worker, it should be counted as well. Otherwise, if the coordinator is the only node in the cluster, the count will be 0, hence we get a warning.

In MultiTaskTrackerExecute, we should connect to coordinator if it is
added to the cluster because it will also be assigned tasks.
2020-07-13 19:45:02 +03:00
Sait Talha Nisanci d97d03ec65 use ActivePrimaryNodeList to include coordinator
ActiveReadableWorkerNodeList doesn't include the coordinator; however, if
the coordinator is added as a worker, we should also include it while
planning. The current methods are very easily misused, and this
requires a refactoring to make the distinction between methods that
include the coordinator and those that don't very explicit, as they can
introduce subtle/major bugs pretty easily.
2020-07-13 19:20:15 +03:00
Sait Talha Nisanci db1b78148c send schema creation/cleanup to coordinator in repartitions
We were using ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even if it is added as a worker to the cluster.

So to have some context here, in repartitions, for each jobId we create
(at least we were supposed to) a schema in each worker node in the cluster. Then we partition each shard table into some intermediate files, which is called the PARTITION step. So after this partition step each node has some intermediate files having tuples in those nodes. Then we fetch the partition files to necessary worker nodes, which is called the FETCH step. Then from the files we create intermediate tables in the temporarily created schemas, which is called a MERGE step. Then after evaluating the result, we remove the temporary schemas(one for each job ID in each node) and files.

If node 1 has file1, and node 2 has file2 after PARTITION step, it is
enough to either move file1 from node1 to node2 or vice versa. So we
prune one of them.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, a behavior
that was actually added for testing in the past.

So when we were not sending schema creation comands for each job ID to
the coordinator(because we were using ALL_WORKERS flag, and it doesn't
include the coordinator), we would basically not have any schemas for
repartitions in the coordinator. The PARTITION step would be executed on
the coordinator (because the tasks are generated in the planner part)
and it wouldn't give us any error because it doesn't have anything to do
with the temporary schemas(that we didn't create). But later two things
would happen:

- If by chance the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because it has all the information.
- If the fetch tasks are not pruned in the coordinator, in the MERGE
step, the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under public schema ( if it is a superuser). But then if we had the same
task ID with different jobID it would fail saying that the table already
exists, which is an error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, so we would leave the partitioned files from the
PARTITION step behind. Hence ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is named as ALL_NON_COORD_WORKERS. And a new flag to return
all the active nodes is added as ALL_DATA_NODES. For repartition case,
we don't use the reference-table-only nodes, but this version makes the
code simpler and there shouldn't be any significant performance issue
with that.
2020-07-13 19:20:15 +03:00
SaitTalhaNisanci 76ddb85545
improve error message in secondaries (#4025) 2020-07-13 19:18:57 +03:00
Nils Dijk 449d1f0e91
force aliases in deparsing for queries with anonymous column references (#4011)
DESCRIPTION: Force aliases in deparsing for queries with anonymous column references

Fixes: #3985  

The root cause has to do with discrepancies in the query tree we create. I think in the future we should spend some time on categorising all changes we made to ruleutils and see if we can change the data structure `query` we pass to the deparser to have an actual valid postgres query for the deparser to render.

For now the fix is to keep track, besides changing the names of the entries in the target list, also of whether we have a reference to an anonymous column. If there are anonymous columns, we set the `printaliases` flag to true, which forces the deparser to add the aliases.
2020-07-13 16:29:24 +02:00
SaitTalhaNisanci b8830d063f
remove no-op check in TaskListRequires2PC (#4018)
We already return true if the replication model is REPLICATION_MODEL_2PC at
the very beginning of the function, hence the later check is never reached.
2020-07-10 14:16:23 +03:00
SaitTalhaNisanci 15290bc43b
remove unused worker methods (#4017) 2020-07-10 13:45:55 +03:00
SaitTalhaNisanci 3f50165365
rename TargetWorkerSet enums (#4015)
Rename TargetWorkerSet enums to make them more explicit about what they
mean. Ideally it would be good to treat everything as a node without the
'worker' concept because it makes things complicated. Another
improvement could be to rename TargetWorkerSet as TargetNodeSet but it
goes to renaming many occurrences of Worker, which is probably too big
for this PR.
2020-07-10 11:21:27 +03:00
Hadi Moshayedi 3651fc64ee Fix Subtransaction memory leak 2020-07-09 12:33:39 -07:00
Jelte Fennema 4c68ed4c33
Make static analysis happier (#4008)
Some small non-functional changes to make static analysis happy.
2020-07-09 16:04:27 +02:00
Jelte Fennema 759e628dd5
Handle some NULL issues that static analysis found (#4001)
Static analysis found some issues where we used the result from
ExtractResultRelationRTE without checking that it wasn't NULL. It seems
like in all these cases it can never actually be NULL, since we have checked
before that it isn't a SELECT query. So, this PR is mostly to make static
analysis happy (and protect a bit against future changes of the code).
2020-07-09 15:46:42 +02:00
SaitTalhaNisanci 96adce77d6
rename node/worker utilities (#4003)
The names were not explicit about what they do, and we have many
misusages in the codebase, so they are renamed to be more explicit.
2020-07-09 15:30:35 +03:00
Jelte Fennema 16242d5264
Fix write queries with const expressions and COLLATE in various places (#3973) 2020-07-08 18:19:53 +02:00
Jelte Fennema ab01571c9e
Fix crash with single node dummy placement (#3993)
Static analysis found an issue where we could dereference `NULL`, because 
`CreateDummyPlacement` could return `NULL` when there were no workers. This
PR changes it so that it never returns `NULL`, which was intended by 
@marcocitus when doing this change: https://github.com/citusdata/citus/pull/3887/files#r438136433

While adding tests for citus on a single node I also added some more basic
tests and it turns out we error out on repartition joins. This has been
present since `shouldhaveshards` was introduced and is not trivial to fix.
So I created a separate issue for this: https://github.com/citusdata/citus/issues/3996
2020-07-08 17:11:25 +02:00
Jelte Fennema f6e2f1b1cb
Replace words that have bad associations (#3992)
We had a few words in our codebase that static analysis flagged as having bad
associations.
2020-07-08 14:57:48 +02:00
Onur Tirtir 844221bb9f
Refactor utility hook global state changes (#3990) 2020-07-08 10:44:00 +03:00
Hadi Moshayedi 23fa421639 Fix task->fetchedExplainAnalyzePlan memory issue. 2020-07-07 07:58:02 -07:00
Philip Dubé 444472ffc6 ruleutils: use get_rtable_name for deparsing resultRelation 2020-07-07 12:20:41 +00:00
citus bot f0693e2f75 Remove unused MaxMasterConnectionCount function 2020-07-07 10:37:57 +02:00
citus bot bdfeb380d3 Fix some more master->coordinator comments 2020-07-07 10:37:53 +02:00
Marco Slot b4fec63bc0 Rename master evaluation to coordinator evaluation 2020-07-07 10:37:41 +02:00
Sait Talha Nisanci 4d217819ff Fix explain subplan duration 2020-07-03 20:39:55 +03:00
Jelte Fennema 9311978487 Add README for CI scripts
We keep accumulating more and more scripts to flag issues in CI. This is
good, but we are currently missing consistent documentation for them.
This commit moves all these scripts to the `ci` directory and adds some
documentation for all of them in the README. It also makes sure that the
last line of output of a failed script points to this documentation.
2020-07-03 10:22:48 +02:00
Onder Kalaci aa8a2866f3 Fix default value of EnableBinaryProtocol 2020-07-02 13:44:56 +02:00
Onur Tirtir be17ebb334 Bump citus version to 9.5devel 2020-07-01 14:46:55 +03:00
Hanefi Önaldı ca2ececb3b
Downgrade path from 9.4 to 9.3 to 9.2 2020-07-01 10:38:11 +03:00
Marco Slot eeffbde8bd Fix pushdown of constants in aggregate queries 2020-06-30 11:41:16 -07:00
Jelte Fennema 392c5e2c34
Fix wrong cancellation message about distributed deadlocks (#3956) 2020-06-30 14:57:46 +02:00
Marco Slot 634d6cf9d7
Improve performance of metadata cache (#3924)
#3866 removed the shard ID hash in metadata_cache.c to simplify cache management,
but we observed a significant performance regression. In our benchmarks it was
masked by the performance improvement provided by #3654, but #3654 only
applies to specific workloads.

This PR brings back the shard ID cache as it existed before #3866 with some extra
 measures to handle invalidation. When we load a table entry, we overwrite 
ShardIdCacheEntry->tableEntry pointers for all the shards in that table, though 
it's possible that the table no longer contains the old shard ID or the table 
entry is never reloaded, which would leave a dangling pointer once the table 
entry is freed. To handle that case, we remove all shard ID cache entries that 
point exactly to that table entry when a table is freed (at the end of the 
transaction or any call to CitusTableCacheFlushInvalidatedEntries).

Co-authored-by: SaitTalhaNisanci <s.talhanisanci@gmail.com>
Co-authored-by: Marco Slot <marco.slot@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-30 12:10:10 +02:00
Jelte Fennema 02fa942be1
Fix assertion error when rolling back to savepoint (#3868)
It was possible to get an assertion error, if a DML command was
cancelled that opened a connection and then "ROLLBACK TO SAVEPOINT" was
used to continue the transaction. The reason for this was that canceling
the transaction might leave the `claimedExclusively` flag on for (some
of) its connections.

This caused an assertion failure because `CanUseExistingConnection`
would return false and a new connection would be opened, and then there
would be two connections doing DML for the same placement, which is
disallowed. That this situation caused an assertion failure instead of
an error means that without asserts this could possibly result in some
visibility bugs, similar to the ones described in
https://github.com/citusdata/citus/issues/3867
2020-06-30 11:31:46 +02:00
Hadi Moshayedi 4ed59d2db3 Move more from insert_select_executor to insert_select_planner 2020-06-26 08:08:26 -07:00
Hadi Moshayedi d34c21890f Rename CoordinatorInsertSelect... to NonPushableInsertSelect 2020-06-25 08:55:48 -07:00
Hadi Moshayedi cd25a27174 Fix crash caused by EXPLAIN EXECUTE INSERT ... SELECT 2020-06-25 08:55:48 -07:00
Hadi Moshayedi 4e8d79998e Save INSERT/SELECT method in DistributedPlan.
This is so we don't need to calculate it twice in
insert_select_executor.c and multi_explain.c, which could
cause a discrepancy if an update in one of them is not
reflected in the other.
2020-06-25 08:55:48 -07:00
SaitTalhaNisanci f458d1fd1c
Fix/task execution (#3941)
* Not set TaskExecution with adaptive executor

The adaptive executor uses a utility method from the task tracker for
repartition joins; however, the adaptive executor doesn't need taskExecution,
which is only used by the task tracker. This causes a problem when EXPLAIN
ANALYZE is used, because what taskExecution points to might be
random.

We solve this by not setting taskExecution from the adaptive executor, so it
will stay NULL as set by CreateTask.

* use same memory context as task for taskExecution

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-24 12:10:00 +03:00
Philip Dubé cd0b2ad5b5 citus_evaluate_expression: call expand_function_arguments beforehand to avoid segfaulting on implicit parameters 2020-06-23 18:06:46 +00:00
Jelte Fennema a98226842d
Use rename to make sure no files are inserted while deleting (#3912)
As suggested by @marcocitus in https://github.com/citusdata/citus/pull/3911#issuecomment-643978531, there was
a regression in #3893. If another backend would write a file during deletion of
the intermediate results directory, this file would not necessarily be deleted.

The approach used in `CitusRemoveDirectory` is to try recursive removal of the
directory again if it has failed. This does not work here, since when a file
cannot be removed for other reasons (e.g. `EPERM`) it will not throw an error
anymore, so we would get into an infinite removal loop. Instead I now
`rename` the directory before removing it. That way other backends will not
write files to it anymore.
2020-06-23 10:38:44 +02:00
Onder Kalaci 88c473e007 Sort WorkerPool in executions
We sort the workerList because adaptive connection management
(e.g., OPTIONAL_CONNECTION) requires any concurrent executions
to wait for the connections in the same order to prevent any
starvation. If we don't sort, we might end up with:
      Execution 1: Get connection for worker 1, wait for worker 2
      Execution 2: Get connection for worker 2, wait for worker 1

and neither could proceed. Instead, we enforce that every execution establishes
the required connections to workers in the same order.
2020-06-22 16:39:27 +02:00
Hanefi Önaldı 618453a2ba
Disallow C-style comments in migration files 2020-06-22 12:51:16 +03:00
Marco Slot 2a3234ca26 Rename masterQuery to combineQuery 2020-06-17 14:14:37 +02:00
Jelte Fennema 0259815d3a
Fix EXPLAIN ANALYZE received data counter issues (#3917)
In #3901 the "Data received from worker(s)" sections were added to EXPLAIN
ANALYZE. After merging, @pykello posted some review comments. This addresses
those comments, as well as fixing a few other issues that I found while
addressing them. The things this does:

1. Fix `EXPLAIN ANALYZE EXECUTE p1` to not increase received data on every
   execution
2. Fix `EXPLAIN ANALYZE EXECUTE p1(1)` to not always return 0 bytes as
   received data.
3. Move `EXPLAIN ANALYZE` specific logic to `multi_explain.c` from
   `adaptive_executor.c`
4. Change naming of new explain sections to `Tuple data received from node(s)`.
   Firstly because a task can reference the coordinator too, so "worker(s)" was
   incorrect. Secondly to indicate that this is tuple data and not all network
   traffic that was performed.
5. Rename `totalReceivedData` in our codebase to `totalReceivedTupleData` to
   make it clearer that it's a tuple data counter, not all network traffic.
6. Actually add `binary_protocol` test to `multi_schedule` (woops)
7. Fix a randomly failing test in `local_shard_execution.sql`.
2020-06-17 11:33:38 +02:00
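For illustration, a minimal sketch of where the renamed section shows up; the table name is an assumption and the output is abbreviated:

```sql
-- 'events' is an assumed distributed table; output abbreviated.
EXPLAIN (ANALYZE) SELECT count(*) FROM events;
-- The distributed plan now includes a line of the form:
--   Tuple data received from node(s): 8 bytes
```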
Marco Slot d1bab78d79 Remove master from file hierarchy 2020-06-16 17:49:09 +02:00
Philip Dubé 39400319e6 Defer freeing CitusTableCacheEntry, as there were memory safety issues before
Shard id to index mapping stored in cache entry as there may now be multiple entries alive for a given relation

insert_select_executor: revert copying cache entry, which was a hack added to avoid memory safety issues
2020-06-15 16:20:50 +00:00
Jelte Fennema 927de6d187
Show amount of data received in EXPLAIN ANALYZE (#3901)
Sadly this does not actually work yet for binary protocol data, because
when doing EXPLAIN ANALYZE we send two commands at the same time. This
means we cannot use `SendRemoteCommandParams`, and thus cannot use the
binary protocol. This can still be useful though when using the text
protocol, to find out that a lot of data is being sent.
2020-06-15 16:01:05 +02:00
SaitTalhaNisanci 077c784fe9
Create EnsureTableCanBeCreated for some checks (#3839) 2020-06-14 14:25:58 +03:00
Hadi Moshayedi ef778c1cd7 address feedback from Sait Talha & Hadi 2020-06-12 18:36:02 -07:00
Marco Slot 4f7989ad8e Rename WorkersContainingAllShards to PlacementsForWorkersContainingAllShards 2020-06-12 18:36:02 -07:00
Marco Slot 080f711e62 Remove useless debug message in router planner 2020-06-12 18:36:02 -07:00
Marco Slot d953f084db Rename FindRouterWorkerList to CreateTaskPlacementListForShardIntervals 2020-06-12 18:36:01 -07:00
Marco Slot 24feadc230 Handle joins between local/reference/cte via router planner 2020-06-12 18:36:01 -07:00
Halil Ozan Akgül 8c5eb6b7ea
Insert Select Into Local Table (#3870)
* Insert select with master query

* Use relid to set custom_scan_tlist varno

* Reviews

* Fixes null check

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-06-12 17:06:31 +03:00
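A hedged sketch of the newly supported shape, assuming `events` is a distributed table and `summary` is a plain local table:

```sql
CREATE TABLE summary (tenant_id int, total bigint);  -- local table

-- The SELECT runs distributed; the results land in the local table.
INSERT INTO summary
SELECT tenant_id, count(*) FROM events GROUP BY tenant_id;
```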
Jelte Fennema 0e12d045b1
Support use of binary protocol in between nodes (#3877)
This can save a lot of data being sent in some cases, thus improving
performance when inter-node bandwidth is the bottleneck.
There are some issues with enabling this by default, so that's currently not done.
2020-06-12 15:02:51 +02:00
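A minimal sketch of opting in, assuming the feature is controlled by the `citus.enable_binary_protocol` GUC referenced elsewhere in this log:

```sql
-- Not the default, per the commit message; opt in per session.
SET citus.enable_binary_protocol = true;
SELECT * FROM events WHERE tenant_id = 1;  -- 'events' is an assumed table
```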
Nils Dijk da8f2b0134
Feature: tdigest aggregate (#3897)
DESCRIPTION: Adds support to partially push down tdigest aggregates

tdigest extensions: https://github.com/tvondra/tdigest

This PR implements the partial pushdown of tdigest calculations when possible. The extension adds a tdigest type which can be combined into the same structure. There are several aggregate functions that can be used to get:
 - a quantile
 - a list of quantiles
 - the quantile of a hypothetical value
 - a list of quantiles for a list of hypothetical values

These functions can work on both values and tdigest types.

Since we can create tdigest values either by combining them or based on a group of values, we can rewrite the aggregates in such a way that most of the computation gets delegated to the shards. This speeds up the percentile calculations, because the values don't have to be sorted on the coordinator, while at the same time significantly reducing the transfer size from the shards to the coordinator.
2020-06-12 13:50:28 +02:00
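A hedged sketch of a partially pushed down percentile; the signature follows the tvondra/tdigest extension's tdigest_percentile(value, compression, quantile), and the table/column names are assumptions:

```sql
-- Per-shard tdigest states are computed on the workers and merged on the
-- coordinator, so raw values never have to be sorted or shipped centrally.
SELECT tdigest_percentile(response_time, 100, 0.99)
FROM requests;
```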
Philip Dubé 8faaaee6a5 IsReferenceTable, ShardIntervalCount: remove misleading isCitusTable check
GetCitusTableCacheEntry raises an error if relationId is not distributed
2020-06-11 15:35:02 +00:00
Philip Dubé 1722d8ac8b Allow routing modifying CTEs
We still recursively plan some cases, eg:
- INSERTs
- SELECT FOR UPDATE when reference tables in query
- Everything must be same single shard & replication model
2020-06-11 15:14:06 +00:00
Hadi Moshayedi 0e3140c14d Include execution duration in worker_last_saved_explain_analyze 2020-06-11 02:54:54 -07:00
Hadi Moshayedi 7c52c6edb0 CTE statistics in EXPLAIN ANALYZE 2020-06-11 02:39:59 -07:00
Hadi Moshayedi 1f6d6ee4a5 Show query text in EXPLAIN output 2020-06-11 02:19:55 -07:00
Hadi Moshayedi bb96ef5047 Do the EXPLAIN ANALYZE at the same time as execution, to avoid executing twice.
We wrap worker tasks in worker_save_query_explain_analyze() so we can fetch
their explain output later with a call to worker_last_saved_explain_analyze().

Fixes #3519
Fixes #2347
Fixes #2613
Fixes #621
2020-06-11 01:55:57 -07:00
Jelte Fennema 6f2eb4cdb6
Remove FlattenJoinVars (#3880)
This code is not needed anymore since #3668 was merged.
It's actually causing some issues when using the binary Postgres
protocol, because postgres thinks it gets a `bigint` from
the worker, but actually gets a normal `int`.
The query in question that fails is this:
```sql
CREATE TABLE test_table_1(id int, val1 int);
CREATE TABLE test_table_2(id int, val1 bigint);
SELECT create_distributed_table('test_table_1', 'id');
SELECT create_distributed_table('test_table_2', 'id');
INSERT INTO test_table_1 VALUES(1,1),(2,2),(3,3);
INSERT INTO test_table_2 VALUES(1,1),(3,3),(4,5);

SELECT val1
FROM test_table_1 LEFT JOIN test_table_2 USING(id, val1)
ORDER BY 1;
```

The difference in queries that is sent to the workers after this change is this, for this query:
```diff
--- query_old.sql	2020-06-09 09:51:21.460000000 +0200
+++ query_new.sql	2020-06-09 09:51:39.500000000 +0200
@@ -1 +1 @@
-SELECT worker_column_1 AS val1 FROM (SELECT test_table_1.val1 AS worker_column_1 FROM (public.test_table_1_102015 test_table_1(id, val1) LEFT JOIN public.test_table_2_102019 test_table_2(id, val1) USING (id, val1))) worker_subquery
+SELECT worker_column_1 AS val1 FROM (SELECT val1 AS worker_column_1 FROM (public.test_table_1_102015 test_table_1(id, val1) LEFT JOIN public.test_table_2_102019 test_table_2(id, val1) USING (id, val1))) worker_subquery
```
2020-06-10 17:24:53 +02:00
Jelte Fennema f4791fcb10
Remove SwallowErrors by using PathNameDeleteTemporaryDir (#3893)
This is a different version of #3634. It also removes SwallowErrors, but
instead of modifying our own functions to not throw errors, it uses the
postgres built in `PathNameDeleteTemporaryDir` function. This function
does not throw errors.

Since this change is for a bugfix, I tried to minimize the changes.

PRs with the following changes would be good to do separately from this
PR:
1. Use PathName(Create|Open|Delete)Temporary(File|Dir) to open and
   remove all files/dirs instead of our own custom file functions.
2. Prefix our outermost files/directories with `PG_TEMP_FILE_PREFIX` so
   that they are identified by Postgres as temporary files, which will be
   removed at postmaster start. This way we do not have to do this cleanup
   ourselves.
3. Store the files in the temporary table space if it exists.

Fixes #3634
Fixes #3618
2020-06-10 17:04:07 +02:00
Onder Kalaci 640717bea2
Copy doesn't use more than MaxAdaptiveExecutor
Co-authored-by: Hanefi Önaldı <Hanefi.Onaldi@Microsoft.com>
2020-06-10 16:46:21 +03:00
Jelte Fennema b87bae71bb
Error out when using different users in the same transaction (#3869)
Fixes #3867

As described in the issue above we return incorrect results when
changing user within a transaction. This causes us to error out instead.
2020-06-10 14:07:40 +02:00
Marco Slot 1243b6a948 Execute shard creation as utility tasks 2020-06-10 11:29:49 +02:00
Onder Kalaci 06461ca55f Coerce types properly for INSERT
Also, unify similar code-paths to rely on a more accurate function.
2020-06-10 10:40:28 +02:00
Hadi Moshayedi 5cdfa9f571 Implement EXPLAIN ANALYZE udfs.
Implements worker_save_query_explain_analyze and worker_last_saved_explain_analyze.

worker_save_query_explain_analyze executes and returns results of query while
saving its EXPLAIN ANALYZE to be fetched later.

worker_last_saved_explain_analyze returns the saved EXPLAIN ANALYZE result.
2020-06-09 10:02:05 -07:00
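A hedged sketch of how the two UDFs could fit together on a worker; the exact signatures and options payload are assumptions based on this message, not the authoritative definitions:

```sql
-- Run the query, returning its rows while saving its EXPLAIN ANALYZE
-- output for later retrieval (signature assumed).
SELECT * FROM worker_save_query_explain_analyze(
         $$SELECT count(*) FROM events_102008$$, '{}')
       AS (count bigint);

-- Fetch the EXPLAIN ANALYZE output saved by the call above.
SELECT * FROM worker_last_saved_explain_analyze();
```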
Onur Tirtir a4f1c41391
Implement GetQueryLockMode helper (#3860)
If we want to get the necessary lockmode for a relation RangeVar within
a query, we can get the lockmode easily from the RangeVar itself (if
pg version >= 12).

However, if we want to decide the lockmode appropriate for the
"query", we can derive this information by using GetQueryLockMode,
following the code comment on RangeTblEntry->rellockmode.
2020-06-09 13:08:44 +03:00
Hadi Moshayedi 02cff1a7c6 Test that EXPLAIN ANALYZE is not supported for some forms of INSERT/SELECT 2020-06-06 23:24:45 -07:00
Hadi Moshayedi f54a8e53c0 Remove unused consts from multi_explain.c 2020-06-06 23:24:45 -07:00
Hadi Moshayedi 0bfd39ea52 Implement TupleDestination interface.
Implements a new `TupleDestination` interface to allow custom tuple processing per task.

This can be especially useful if a task contains multiple queries. An example of this is EXPLAIN
ANALYZE, where we need to add some UDF calls to the query to fetch the explain output
from the worker after fetching the actual query results.
2020-06-05 17:47:40 -07:00
SaitTalhaNisanci d0f47eb338
Check the removeType in IsDropCitusStmt (#3859)
We should check the remove type in IsDropCitusStmt because if the remove
type is not OBJECT_EXTENSION then the stored objects in
dropStmt->objects may not be of type Value. This was crashing PG-13.

Also rename the method to IsDropCitusExtensionStmt.
2020-06-05 20:49:54 +03:00
Onur Tirtir f7224a12f2
Implement PushOverrideEmptySearchPath (#3874)
To reduce code duplication, implement function that pushes search_path
to be NIL and sets addCatalog to true so that all objects outside of
pg_catalog will be schema-prefixed.
2020-06-05 19:23:59 +03:00
Onur Tirtir 8b39d12846
Append IF NOT EXISTS to deparsed CREATE SERVER commands (#3875)
Append IF NOT EXISTS to CREATE SERVER commands generated by the
pg_get_serverdef_string function when deparsing an existing server
object that a foreign table depends on.
2020-06-05 18:04:33 +03:00
Onur Tirtir f3f711e097 Implement IndexIsImpliedByAConstraint 2020-06-05 15:33:54 +03:00
Philip Dubé 25f86bca3f multi_router_planner: Remove NULL check which would've segfaulted earlier 2020-06-02 13:08:38 +00:00
Philip Dubé 2623aefe38 multi_router_planner: replace GetUpdateOrDeleteRTE with ExtractResultRelationRTE 2020-06-02 00:22:30 +00:00
Onur Tirtir dfcc18468c Error out for unsupported trigger objects
Error out if creating a citus table from a table having triggers.
Error out for CREATE TRIGGER commands that are run on citus tables.
2020-05-31 23:10:01 +03:00
Onur Tirtir 6e6bc155a9 Implement methods to process & recreate triggers on citus tables 2020-05-31 15:28:17 +03:00
Onur Tirtir 5af64084ea Copy & paste pg_get_triggerdef_worker from Postgres 2020-05-31 15:25:07 +03:00
Sait Talha Nisanci dec2b28d49 use RelationGetPartitionDesc to be safer
For getting the partition desc, we should use the RelationGetPartitionDesc
method so that even if the cached descriptor is NULL, it will be created inside the method.
2020-05-29 10:55:52 +03:00
Philip Dubé c0515dcd67 This prepares for routing modifying CTEs, where modLevel should not be used to infer whether a plan is a select or not
SELECT_TASK is renamed to READ_TASK as a SELECT with modifying CTEs will be a MODIFYING_TASK

RouterInsertJob: Assert originalQuery->commandType == CMD_INSERT
CreateModifyPlan: Assert originalQuery->commandType != CMD_SELECT

Remove unused function IsModifyDistributedPlan

DistributedExecution, ExecutionParams, DistributedPlan: Rename hasReturning to expectResults
SELECTs set expectResults to true

Rename CreateSingleTaskRouterPlan to CreateSingleTaskRouterSelectPlan
2020-05-20 17:26:12 +00:00
Onur Tirtir 98a660d0b7 Don't release lock on pg_constraint until the xact ends
Do not release the AccessShareLock when closing pg_constraint, to prevent
concurrent modifications to pg_constraint and to make sure that the caller
processes valid foreign key constraints throughout the transaction.
2020-05-20 17:27:17 +03:00
Onur Tirtir 79a688ffe0 Refactor the methods accessing pg_constraint
Implement internal functions to access pg_constraint
and utilize them in existing foreign key checks.
2020-05-20 17:27:17 +03:00
SaitTalhaNisanci 80e34382cf
Rename AppropriateReplicationModel -> DecideReplicationModel (#3842) 2020-05-17 10:24:14 +03:00
Onur Tirtir 8f9ef63e8a
Implement get_relation_constraint_oid_compat helper (#3836) 2020-05-15 17:36:59 +03:00
MoYi 9e1f198155 Fix composite create type deparsing to preserve typmod 2020-05-15 13:12:54 +00:00
Onur Tirtir 249550b815
Refactor EnsureLocalTableEmptyIfNecessary (#3830) 2020-05-15 14:20:33 +03:00
Onur Tirtir 8f3373c702
Remove unused parameter from RecordDistributedRelationDependencies (#3831) 2020-05-15 10:34:35 +03:00
SaitTalhaNisanci 22c903b151
remove ExecuteUtilityTaskListWithoutResults (#3696)
This PR removes ExecuteUtilityTaskListWithoutResults and uses the same
path for local execution via ExecuteTaskListExtended.
ExecuteUtilityTaskList is added. ExecuteLocalTaskListExtended now has a
parameter for utility commands so that it can call the right method. In
order not to change the existing calls,
ExecuteTaskListExtendedInternal is added, which is the main method that
runs the execution, via local and remote execution.
2020-05-07 13:30:50 +03:00
Nils Dijk 105de7beb8
Fix for pruned target list entries (#3818)
DESCRIPTION: Ignore pruned target list entries in coordinator plan

The postgres planner has the ability to prune target list entries that are proven unused in the output relation. When this happens at the `CitusCustomScan` boundary we need to _not_ return these pruned columns, to avoid upsetting the rest of the planner.

By using the target list the planner asks us to return, we fix issues that lead to assertion failures and could potentially become runtime errors in a production build.

Fixes #3809
2020-05-06 13:56:02 +02:00
Marco Slot 6ce2803777 Make sure we don't wrap GROUP BY expressions in any_value 2020-05-05 05:12:45 +02:00
Hadi Moshayedi dbf509bbdd Don't error out when maintenanced cannot be created 2020-05-04 09:53:52 -07:00
Onder Kalaci f9d4a9cf38 Remove assertion for subqueries in WHERE clause ANDed with FALSE
In the code, we had the assumption that if restriction information
is NULL, it means that we cannot have any distributed tables in
the subquery.

However, for subqueries in the WHERE clause, that is not the case when
the subquery is ANDed with FALSE. In that case, Citus operates
on the originalQuery (which doesn't go through standard_planner())
and relies on the restriction information generated by standard_planner().
As Postgres is smart enough not to generate restriction information for
subqueries ANDed with FALSE, we hit the assertion.
2020-05-04 10:52:15 +02:00
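The shape of query that hit the assertion looks roughly like this (table names assumed):

```sql
-- Postgres skips generating restriction information for the subquery
-- because the WHERE clause is ANDed with FALSE.
SELECT * FROM dist_table
WHERE false AND id IN (SELECT id FROM other_dist_table);
```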
Onder Kalaci 77c397e9ae Rebuild wait event sets after PQconnectPoll() if socket changes
The reason is that PQconnectPoll() may change the underlying
socket. If we don't rebuild the wait event set, the low level
APIs (such as epoll_ctl()) may fail due to invalid sockets.
Instead, rebuilding ensures that we'll use accurate/active sockets.
2020-05-01 09:44:21 +02:00
Jelte Fennema c6f5d5fe88
Add some asserts to pass static analysis (#3805) 2020-04-29 11:19:11 +02:00
SaitTalhaNisanci cbda951395
Fix task copy and appending empty task in ExtractLocalAndRemoteTasks (#3802)
* Not append empty task in ExtractLocalAndRemoteTasks

ExtractLocalAndRemoteTasks extracts the local and remote tasks. If we do
not have a local task, the localTaskPlacementList will be NIL; in this
case we should not append anything to local tasks. Previously we would
first check whether a task contains a single placement; now we first
check if there is any local task before doing anything.

* fix copy of node task

The Task node has a task query, which might contain a list of strings in its
fields. We were using postgres' copyObject for these lists. Postgres
assumes that each element of a list will be a node type; if it is not a
node type, it will error.

As a solution to that, a new macro is introduced to copy a list of
strings.
2020-04-29 11:05:34 +03:00
Philip Dubé b6b3c1bc17 Fix COPY TO's COPY (SELECT) with distributed table having generated columns
It's necessary to omit generated columns from output
2020-04-28 14:40:47 +00:00
SaitTalhaNisanci 164c00cf08
Fix typo: longer visible -> no longer visible (#3803) 2020-04-27 16:32:46 +03:00
Onder Kalaci bc54c5125f Increase the default value of citus.node_connection_timeout
The previous default was 5 seconds, and we change it to 30 seconds.
The main motivation for this is that for busy clusters, 5 seconds
can be too aggressive. Especially with connection throttling, the servers
might be kept busy for a really long time, and users may see the
connection errors more frequently.

We've done some sanity checks: for really quick queries (like
`SELECT count(*) FROM table`), 30 seconds is a decent value even
if users execute 300 distributed queries on the coordinator. We've
verified this on Hyperscale (Citus).
2020-04-24 15:16:42 +02:00
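A quick sketch of inspecting and overriding the new default; treating the GUC as time-valued here is an assumption:

```sql
SHOW citus.node_connection_timeout;          -- now 30s by default
SET citus.node_connection_timeout = '5s';    -- restore the old, more aggressive value
```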
Onder Kalaci 0cb7ab2d05 Explicitly mark queries in physical planner for [not] having parameters
Physical planner doesn't support parameters. If the parameters have already
been resolved when the physical planner handling the queries, mark it.
The reason is that the executor is unaware of this, and sends the parameters
along with the worker queries, which fails for composite types.

(See `DissuadePlannerFromUsingPlan()` for the details of paramater resolving)
2020-04-24 12:49:43 +02:00
Onur Tirtir 2e927bd6b7
Bump Citus to 9.4devel (#3788) 2020-04-22 12:50:00 +03:00
Hanefi Önaldı e85b835065
Skip dependency setup on coordinator node 2020-04-21 12:06:31 +03:00
Philip Dubé 9093d51a22 maintenanced: handle before_shmem_exit, assert workerPid == 0 on start 2020-04-20 14:41:40 +00:00
Onder Kalaci e182215d96 Improve connection error message from the worker nodes
We currently put the actual error message in the detail part. However,
many drivers don't show the detail part.

As connection errors are fairly common and hard to trace back, we
added the detail to the message itself.

In addition to that, we changed the "connection error" message, as it
was confusing to users who thought that the error was happening
while connecting to the coordinator. In fact, this error shows
up when the coordinator fails to connect to remote nodes.
2020-04-20 13:32:55 +02:00
Hadi Moshayedi 1250d691d3 Replicate reference tables before master_create_empty_shard 2020-04-17 16:47:03 -07:00
Philip Dubé 8e79672839 Try copying shard intervals out of cache for long lived borrow 2020-04-17 22:00:41 +00:00
Philip Dubé c00d57a955 CreateDistributedInsertSelectPlan: avoid calling GetCitusTableCacheEntry in a way that would invalidate live ShardInterval pointers 2020-04-17 14:44:23 +00:00
SaitTalhaNisanci 1d0f4bdcd2
invalidate plan cache in master_update_node (#3758)
* invalidate plan cache in master_update_node

If a plan is cached by postgres but a user uses master_update_node, then
when the plan cache is used for the updated node, they will get the old
nodename/nodeport in the plan. This is because the plan cache doesn't
know about master_update_node. This could be a problem for prepared
statements or anything that goes into the plancache. As a solution, the plan
cache is invalidated inside master_update_node.

* add invalidate_inactive_shared_connections test function

We introduce invalidate_inactive_shared_connections udf to be used in
testing. It is possible that a connection count for an inactive node
will be greater than 0 and in that case it will not be removed at the
time of invalidation. However, later we don't have a mechanism to remove
it, which means that it will stay in the hash. For this not to cause a
problem, we use this udf in testing.

* move invalidate_inactive_shared_connections to udfs from test as it will be used in mx

* remove the test udf

* remove the IsInactive check
2020-04-17 17:43:48 +03:00
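For reference, a hedged sketch of the scenario; table and host names are assumptions:

```sql
PREPARE p AS SELECT count(*) FROM events WHERE tenant_id = $1;
EXECUTE p(1);  -- the plan, including shard placements, may now be cached

-- Point the node at its new address; with this commit, cached plans that
-- still reference the old nodename/nodeport are invalidated.
SELECT master_update_node(nodeid, 'new-hostname', 5432)
FROM pg_dist_node WHERE nodename = 'old-hostname';
```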
Philip Dubé c0a95a3adb Copy data from CitusTableCacheEntry more often
This copies over fixes from reference counting branch,
all CitusTableCacheEntry data may be freed when a GetCitusTableCacheEntry call occurs for its relationId

This fix is not complete, but reference counting is being deferred until 9.4

CopyShardInterval: remove dest parameter, always return newly allocated object
2020-04-17 14:17:18 +00:00
Önder Kalacı a919f09c96
Remove the entries from the shared connection counter hash when no connections remain (#3775)
We initially considered removing entries just before any change to
pg_dist_node. However, that ended up being very complex and making
MX even more complex.

Instead, we're switching to a simpler solution, where we remove entries
when the counter gets to 0.

With certain workloads, this may have some performance penalty. But, two
notes on that:
 - When counter == 0, it implies that the cluster is not busy
 - With cached connections, that's not possible
2020-04-17 17:14:58 +03:00
Philip Dubé e4a4707f4a Avoid setting hasWindowFuncs true after window functions have been optimized out of query 2020-04-17 12:22:48 +00:00
SaitTalhaNisanci a9a3be15cc
introduce TASK_QUERY_NULL task type (#3774)
When we call SetTaskQueryString, we set the task type to
TASK_QUERY_TEXT, and some parts of the codebase rely on the fact that if
TASK_QUERY_TEXT is set, the data can be read safely. However, if
SetTaskQueryString is called with a NULL taskQueryString, this can cause
crashes. In that case, taskQueryType will simply be set to
TASK_QUERY_NULL.
2020-04-17 14:59:22 +03:00
Hanefi Önaldı 0c5d0cfee9
Notice message to help truncate local data after distribution 2020-04-17 13:21:34 +03:00
Hanefi Önaldı d535121f8d
Introduce truncate_local_data_after_distributing_table() 2020-04-17 13:21:34 +03:00
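A sketch of the intended flow, with an assumed table name:

```sql
SELECT create_distributed_table('events', 'tenant_id');

-- The data now lives in the shards; drop the leftover local copy on the
-- coordinator with the UDF introduced here.
SELECT truncate_local_data_after_distributing_table('events');
```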
Hadi Moshayedi 61198251fd Use block_writes for replicate_reference_tables 2020-04-16 19:25:41 -07:00
Nils Dijk 1d6ba1d09e
Refactor alter role to work on distributed roles (#3739)
DESCRIPTION: Alter role only works for citus managed roles

Alter role was implemented before we implemented good role management that hooks into the object propagation framework. This is a refactor of all alter role commands that have been implemented to
 - be on by default
 - only work for supported roles
 - make the citus extension owner a supported role

Instead of distributing the alter role commands for roles at the beginning of node activation, it now _only_ executes the alter role commands for all users in all databases and in the current database.

In preparation of full role support small refactors have been done in the deparser.

Earlier tests targeting roles other than the citus extension owner have been either slightly changed or removed, to be put back once we have full role support.

Fixes #2549
2020-04-16 12:23:27 +02:00
Hadi Moshayedi 59b9a4e5a1 Detect deadlocks in replicate_reference_tables() 2020-04-15 11:06:18 -07:00
SaitTalhaNisanci df9048ebaa
update outdated comments related to local_execution (#3759) 2020-04-15 16:15:43 +03:00
Marco Slot 8b83306a27 Issue worker messages with the same log level 2020-04-14 21:08:25 +02:00
SaitTalhaNisanci 132efdbc56
add execution params struct (#3747)
We had 9+ parameters in some of the functions related to execution.
An ExecutionParams struct is created to simplify this a bit, so that we can set
only the fields that we are interested in, and it is easier to read.
2020-04-14 14:32:40 +03:00
Onder Kalaci aa6b641828 Throttle connections to the worker nodes
With this commit, we're introducing a new infrastructure to throttle
connections to the worker nodes. This infrastructure is useful for
multi-shard queries; router queries have not been affected by this.

The goal is to prevent establishing more than citus.max_shared_pool_size
number of connections per worker node in total, across sessions.

To do that, we've introduced a new connection flag OPTIONAL_CONNECTION.
The idea is that some connections are optional, such as the second
(and further) connections for the adaptive executor. A single connection
is enough to finish the distributed execution; the others are useful to
execute the query faster. Thus, they can be considered optional connections.
When an optional connection is not allowed to the adaptive executor, it
simply skips it and continues the execution with the already established
connections. However, it'll keep retrying to establish optional
connections, in case some slots open up again.
2020-04-14 10:27:48 +02:00
Onder Kalaci 38b8a9ad62 Add citus_remote_connection_stats() function
This function is intended to be used for monitoring
the remote connections.
2020-04-14 10:03:27 +02:00
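A hedged sketch of the knobs this series introduces; the GUC name follows the commit messages above:

```sql
-- Cap the total number of connections per worker node, across sessions.
ALTER SYSTEM SET citus.max_shared_pool_size = 100;
SELECT pg_reload_conf();

-- Monitor remote connection usage with the new function.
SELECT * FROM citus_remote_connection_stats();
```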
Onder Kalaci 0dbfbe0c37 Add the necessary shared memory infrastructure
- The hashmap in the shared memory
- The lock to access the hashmap
- The GUC to control the size
2020-04-14 10:03:26 +02:00
Hadi Moshayedi f9de734329 Ensure metadata is synced on ReplicateColocatedShardPlacement 2020-04-13 11:45:21 -07:00
Hadi Moshayedi 2218b7e38d Refactor ReplicateColocatedShardPlacement 2020-04-13 11:07:26 -07:00
SaitTalhaNisanci 2438e80a58
use CURSOR_OPT_PARALLEL_OK flag in local execution (#3745)
We currently don't use any cursor flags in local execution, but we can
use CURSOR_OPT_PARALLEL_OK flag to potentially benefit from parallelism
when possible.
2020-04-12 19:49:22 +03:00
Philip Dubé 30f10984e1 Defer get_agg_clause_costs, it happens later & avoids errors 2020-04-10 13:26:05 +00:00
Philip Dubé ab0b59ad3b GetConnParams: Set runtimeParamStart before setting keywords/values to avoid out of bounds access 2020-04-10 13:14:06 +00:00
SaitTalhaNisanci 07f9a442b0
Refactor CopyLocalDataIntoShards (#3693)
This PR:
- Declares variables when they are needed.
- Creates DoCopyFromLocalTableIntoShards for better readability.
- Doesn't use a hardcoded value, instead use a variable for better
readability.
2020-04-10 09:25:26 +03:00
Marco Slot a4b2197450 Correctly handle non-constant LIMIT/OFFSET clauses 2020-04-09 19:59:50 +00:00
SaitTalhaNisanci 3dc7cad754
use an enum for local execution status (#3733)
We have two variables that are related to local execution status:
TransactionAccessedLocalPlacement and
TransactionConnectedToLocalGroup. Only one of these fields should be
set, however we didn't have any check for this constraint and it was
error prone.

These two variables are used to understand whether we should use local
execution (the current session) or a connection to execute the current
query and its tasks. With the enum, it is now clearer what these
variables mean.

Also, now we have a method to change the local execution status. The
method will error if we try to transition to a wrong
state. This will help us avoid problems.
2020-04-09 19:11:04 +03:00
SaitTalhaNisanci 24dcb02bca
enable local table join with reference table (#3697)
* enable local table join with reference table

* test different cases with local table and reference join
2020-04-09 15:25:54 +03:00
SaitTalhaNisanci 233e4a24d1
use local execution within transaction block (#3714)
* use local execution when in a transaction block

When we are inside a transaction block, there could be other methods
that need local execution, therefore we will use local execution in a
transaction block.

* update test outputs with transaction block local execution

* add a test to verify we don't leak intermediate schemas
2020-04-09 12:41:58 +03:00
SaitTalhaNisanci fa88046ce1
test that we don't leak intermediate schemas (#3737)
* test that we don't leak intermediate schemas

We have tests to make sure that we don't leak any intermediate
files, tables, etc., but we don't test whether we are leaking schemas. It makes
sense to test this as well.

* remove all repartition schemas in case of error

This solution is not an ideal one but it seems to be doing the job.
We should have a more generic solution for the cleanup but it seems that
putting the cleanup in the abort handler is dangerous and it was
crashing.
2020-04-09 12:17:41 +03:00
SaitTalhaNisanci 362d72853c
return early in ExecuteTaskListExtended (#3738)
It is possible to return an error in ExecuteTaskListExtended after
performing local execution with the current structure. However, there is
no point in executing the local tasks if we are going to return an error
later, so the local execution is moved after the error check.
2020-04-09 10:10:49 +03:00
Hadi Moshayedi 9b8802ba2d Remove todo from reference_table_utils 2020-04-08 12:46:55 -07:00
Hadi Moshayedi dda53a0bba GUC for replicate reference tables on activate. 2020-04-08 12:42:45 -07:00
Hadi Moshayedi 0758a81287 Prevent reference tables being dropped when replicating reference tables 2020-04-08 12:41:36 -07:00
Marco Slot 924cd7343a Defer reference table replication to shard creation time 2020-04-08 12:41:36 -07:00
Philip Dubé 26797bfb94 Verify trigger relation before reading old/new tuples
master_dist_placement_cache_invalidate: bail when triggering on pg_dist_shard_placement
2020-04-07 15:39:31 +00:00
Önder Kalacı 70012dfd33 Do not error when an intermediate file does not exist (#3707)
When the file does not exist, it could mean two different things.
The first -- and a lot more common -- case is that a failure happened
in a concurrent backend on the same distributed transaction, and
one of the backends in that transaction has already been rolled
back, which has already removed the file. If we threw an error
here, the user might see this error instead of the actual error
message. Instead, we prefer to WARN the user and pretend that the
file has no data in it. In the end, the user will see the actual
error message for the failure.

Second, in case of any bugs in intermediate result broadcasts,
we could try to read a non-existing file. That is most likely
to happen during development. Thus, when asserts are enabled, we throw
an error instead of a WARNING so that developers cannot miss it.
2020-04-07 17:06:55 +02:00
Onder Kalaci 4f7c902c6c Move connection establishment for intermediate results after query execution
When we have a query like the following:

```SQL
WITH a AS (SELECT * FROM foo LIMIT 10) SELECT max(x) FROM a JOIN bar USING (y);
```


Citus currently opens side channels for doing the
	`COPY "1_1"` FROM STDIN (format 'result')

before starting the execution of
	`SELECT * FROM foo LIMIT 10`

Since we need at least 1 connection per worker to do
	`SELECT * FROM foo LIMIT 10`,
we need to have 2 connections per worker in order to broadcast the results.

However, we don't actually send a single row over the side channel until the
execution of `SELECT * FROM foo LIMIT 10` is completely done (and connections
unclaimed) and the results are written to a tuple store. We could actually
reuse the same connection for doing the `COPY "1_1"` FROM STDIN (format 'result').

This also fixes the issue that Citus doesn't obey `citus.max_adaptive_executor_pool_size`
when the query includes an intermediate result.
2020-04-07 17:06:55 +02:00
Onder Kalaci 721daec9a5 Move the logic that initilize connections/local files into a function 2020-04-07 17:06:55 +02:00
Onder Kalaci 9b29a32d7a Remove all references for side channel connections
We don't need any side channel connections. That is actually
problematic in the sense that it creates extra connections.
Say citus.max_adaptive_executor_pool_size equals 1; Citus
ends up using one extra connection for the intermediate results,
thus not obeying citus.max_adaptive_executor_pool_size.

In this PR, we remove the following entities from the codebase
to allow further commits to implement not requiring extra connection
for the intermediate results:

- The connection flag REQUIRE_SIDECHANNEL
- The function GivePurposeToConnection
- The ConnectionPurpose struct and related fields
2020-04-07 17:06:55 +02:00
Hanefi Onaldi 1d22d0c2ff
Remove metadata locks from size functions 2020-04-07 17:37:15 +03:00
SaitTalhaNisanci 0430b568be
explicitly return false if transaction connected to local node (#3715)
* explicitly return false if transaction connected to local node

* not set TransactionConnectedToLocalGroup if we are writing to a file

We use TransactionConnectedToLocalGroup to prevent local execution from
happening as that might cause visibility problems. As files are visible
to all transactions, we shouldn't set this variable if we are writing to
a file.
2020-04-07 17:30:34 +03:00
Marco Slot 2632343f64 Fix intermediate result pruning for INSERT..SELECT 2020-04-07 11:07:49 +02:00
Marco Slot 84672c3dbd Simplify intermediate result pruning logic 2020-04-07 10:53:29 +02:00
SaitTalhaNisanci a710b3cdc5
fix null tupleStoreState case in ExecuteLocalTaskListExtended (#3711)
In case we don't care about the tupleStoreState in
ExecuteLocalTaskListExtended, it can be passed as NULL. In that case
we would get a segfault. This changes it so that a dummy tuple store
is created when it is NULL.

Do not use local execution in ExecuteTaskListOutsideTransaction.
As we are going to run the tasks outside a transaction, we shouldn't use local execution.
However, there is a problem with local execution related to
repartition joins; when we solve that problem, we can execute the tasks
coming to this path with local execution.

Also, logging of the local command is simplified.

Normalize the job id in worker_hash_partition_table in test outputs.
2020-04-07 11:47:09 +03:00
SaitTalhaNisanci a369f9001d
fix incorrect groupid or nodeid (#3710)
For shard placements, we were setting nodeid, nodename, nodeport and
nodegroup manually. This makes it very error prone, and it seems that we
had already forgotten to set some of them. This would mean that they would have
their default values, e.g. group id would be 0 when the actual group id is not
0.

So the implication is that we would have inconsistent worker metadata.

A new method is introduced, and we call the method to set those fields
now, so that as long as we call this method, we won't be setting
inconsistent metadata.

It probably makes sense to have a struct for these fields. We already
have NodeMetadata, but it doesn't have nodename or nodeport. So that
could be done in another refactor to make things simpler.
2020-04-07 11:14:14 +03:00
Philip Dubé 4860e11561 Duplicate grouping on worker whenever possible
This is possible whenever we aren't pulling up intermediate rows.

We want to do this because it was done in 9.2, and
some queries rely on the performance benefit of grouping producing distinct values.

This change was introduced when implementing window functions on the coordinator.
2020-04-06 18:51:30 +00:00
Philip Dubé b01bae5937 Check connections from connection_placement before polling 2020-04-06 17:45:44 +00:00
Onur Tirtir 4c95ad1579 do not traverse parse tree in distributed planner one more time 2020-04-03 18:24:48 +03:00
Onur Tirtir abdabbedb2 refactor distributed_planner.c 2020-04-03 18:24:41 +03:00
Onur Tirtir 13a35c6813 implement GetOnlyShardOidOfReferenceTable and some refactoring in shard_utils 2020-04-03 18:24:13 +03:00
Hanefi Önaldı d1223bd6cc
Remove migration paths to 9.3-1, introduce 9.3-2 2020-04-03 12:50:45 +03:00
Marco Slot fd8cdb92f4 Evaluate nextval in the target list on the coordinator 2020-04-02 02:53:19 +02:00
SaitTalhaNisanci df88ab71b6 normalize assign_distributed_transaction_id in tests 2020-04-01 18:23:16 +03:00
SaitTalhaNisanci 0aebd78ea7 use localExecution in ExecuteTaskListExtended
ExecuteTaskListExtended is the common method for different codepaths,
and instead of writing separate local execution logic in different
codepaths, it makes more sense to have the logic here. We still need to
do some refactoring; this is an initial step.

After this commit, we can run create shard commands locally. There is a
special case with shard creation commands. A create shard command might
have a concatenated query string, however local execution did not know
how to execute a task with multiple query strings. This is also
implemented in this commit. We go over each query in the concatenated
query string and plan/execute them one by one.

A cleaner solution would be to make sure that each task has a
single query. We currently cannot do that because we need to ensure the
task dependencies. However, it would make sense to do that at some point,
and it would simplify the code a lot.
2020-04-01 18:23:16 +03:00
SaitTalhaNisanci ba01f3457a
use macros for pg versions instead of hardcoded values (#3694)
Three macros are defined to remove hardcoded pg version numbers:
PG_VERSION_11, PG_VERSION_12 and PG_VERSION_13.
2020-04-01 17:01:52 +03:00
Philip Dubé ddc3377026 Assert bounds checks on two array reads which rely on data not being out of bounds 2020-03-31 18:58:35 +00:00
Marco Slot 252abcce16 Allow table type to be used in target list 2020-03-31 11:11:01 -07:00
SaitTalhaNisanci 6cd32b0db1
refactor ExecuteLocalTaskList (#3617)
ExecuteLocalTaskList doesn't need scanState, as it only uses
paramListInfo, distributedPlan and tupleStoreState. It is better to pass
only the variables that the function needs, so that we can call this
function from other places when we don't have scanState.
2020-03-31 19:19:54 +03:00
SaitTalhaNisanci b5591b1b28 use taskQuery as a struct to simplify the code 2020-03-31 15:47:55 +03:00
SaitTalhaNisanci 8806c4d697 move queryStringList into taskQuery
Also allocate task query in the memory context of task.
2020-03-31 15:47:55 +03:00
SaitTalhaNisanci c796ac335d add TaskQuery struct to abstract query string related fields
We had many fields in task related to query strings. It was kind of
complex, and only one of them could be set at a time. Therefore it makes
more sense to abstract this and use a union, so that it is clear that
only one of them should be set.

We have three fields that could have query related strings:
- queryForLocation
- queryStringLazy
- perPlacementQueryStrings

Respectively, they can be set with:
- SetTaskQueryString
- SetTaskQueryIfShouldLazyDeparse
- SetTaskPerPlacementQueryStrings

The direct usage of the query related fields are also removed.

Rename queryForLocalExecution

Currently queryForLocalExecution is only used for deparsing purposes,
therefore it makes sense to rename it to what it is doing.
2020-03-31 15:47:55 +03:00
SaitTalhaNisanci 98f95e2a5e add TaskQueryStringForPlacement
TaskQueryStringForPlacement simplifies how the executor gets the query
string for a given placement. Task will use the necessary fields to
return the correct query placement string. Executor doesn't need to know
the details for this.

rename TaskQueryString as TaskQueryStringAllPlacements

TaskQueryString returns the query string that will be the same for all
the placements. In INSERT..SELECT the query string can be different for
each placement. Adaptive executor uses TaskQueryStringForPlacement,
which returns the query string for a placement. It makes sense to rename
TaskQueryString as TaskQueryStringAllPlacements as it is returning the
query string for all placements.

rename SetTaskQuery as SetTaskQueryIfShouldLazyDeparse

SetTaskQuery does not always set the task query; it can set the query
string as well. So it is clearer to name it
SetTaskQueryIfShouldLazyDeparse, since it will set the query (rather than the
query string) only when we should deparse the query in a lazy way.
2020-03-31 15:47:55 +03:00
SaitTalhaNisanci 982b5fbabf add SetTaskPerPlacementStrings
It is possible that a task will have different query string for each
placement. This is the case in INSERT..SELECT via repartitioning. When
we are setting task->perPlacementQueryString, we should set
queryStringLazy to NULL. Therefore a method for that purpose is created.
2020-03-31 15:47:55 +03:00
Marco Slot 331b45348c Fix error when using LEFT JOIN with GROUP BY on primary key 2020-03-30 16:42:22 +02:00
SaitTalhaNisanci e1802c5c00
extract local plan cache related methods into a file (#3667) 2020-03-31 11:11:34 +03:00
SaitTalhaNisanci 8dfc2cb122
do not append ';' at the end of the list in StringJoin (#3672) 2020-03-31 10:01:28 +03:00
Philip Dubé 4eb2c33f38 multi_copy.c: remove tableMetadata 2020-03-30 19:26:44 +00:00
Jelte Fennema 3be665269f
Reintroduce ForceSearchShardPlacementInList (#3664)
This was added to silence static analysis errors. It was removed
accidentally in #3591. This reintroduces it again.
2020-03-27 14:28:50 +01:00
Hanefi Onaldi 0e8103b101
Propagate ALTER ROLE .. SET statements
In PostgreSQL, user defaults for config parameters can be changed by
ALTER ROLE .. SET statements. We wish to propagate those defaults
across the Citus cluster so that the behaviour will be similar on
different workers.

The defaults can either be set in a specific database, or the whole
cluster, similarly they can be set for a single role or all roles.

We propagate the ALTER ROLE .. SET if all the conditions below are met:
- The query affects the current database, or all databases
- The user is already created in worker nodes
2020-03-27 13:02:48 +03:00
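Examples of statements that would now propagate, assuming the role already exists on the workers:

```sql
-- For a single role, or for all roles; both affect all databases.
ALTER ROLE app_user SET statement_timeout = '30s';
ALTER ROLE ALL SET work_mem = '64MB';
```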
Marco Slot a65ffee266 Fixes a bug that causes some DML queries containing aggregates to fail 2020-03-26 16:08:34 +00:00
SaitTalhaNisanci d3fdade2e8
add missing perPlacementQueryStrings to copy and out funcs (#3657) 2020-03-26 17:16:29 +03:00
Marco Slot b89e9dc158 Fix a bug which caused queries with SRFs and function evalution to fail 2020-03-25 06:55:53 +01:00
SaitTalhaNisanci dd1a456407
store query command list in task (#3649)
Sometimes we have concatenated query strings for a task. However,
when we want to find each query string, it is not a trivial task.
Therefore, it makes sense to store this in task so that when we need
each query string we can easily get it.
2020-03-26 12:04:08 +03:00
Philip Dubé 917cb6ae93 Don't segfault on queries using GROUPING
GROUPING will always return 0 outside of GROUPING SETS, CUBE, or ROLLUP
Since we don't support those, it makes sense to reject GROUPING in queries
2020-03-25 15:46:43 +00:00
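For context, a sketch of the query shape that is now rejected instead of crashing (table name assumed):

```sql
-- GROUPING() is only meaningful with GROUPING SETS / CUBE / ROLLUP,
-- which aren't supported here, so this now errors cleanly.
SELECT GROUPING(tenant_id), count(*)
FROM events GROUP BY tenant_id;
```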
Philip Dubé 720525cfda Add support for window functions on coordinator
Some refactoring:
Consolidate expression which decides whether GROUP BY/HAVING are pushed down
Rename early pullUpIntermediateRows to hasNonDistributableAggregates
Create WorkerColumnName to handle formatting WORKER_COLUMN_FORMAT
Ignore NULL StringInfo pointers to SafeToPushdownWindowFunction
Fix bug where SubqueryPushdownMultiNodeTree mutates supplied Query,
	SafeToPushdownWindowFunction requires the original query as it relies on rtable
2020-03-25 15:31:20 +00:00
Nils Dijk 4e611cfc25
Refactor dependency resolution and resolve from pg_shdepend (#3633)
DESCRIPTION: Refactor dependency resolution and resolve from pg_shdepend

This PR refactors how dependencies are resolved by not assuming solely a `pg_depend` record describing the dependency. Instead we keep a definition of the dependency around which records how the dependency is resolved. This can be one of the following ways
 - `pg_depend`, data will contain a copy of the `pg_depend` record
 - `pg_shdepend`, data will contain a copy of the `pg_shdepend` record
 - `ObjectAddress`, data will contain only an `ObjectAddress` describing a dependency

Regardless of the way the dependency was found, we will always be able to get to the address of the dependency, as that is the most important property.

For some checks we can inspect the source where the dependency was found and perform a deep inspection to decide if we want to follow the dependency. This is important to not distribute dependencies coming from extensions for example.
2020-03-25 13:38:25 +01:00
Onur Tirtir 52fd58d51f move MakeNameListFromRangeVar function to a more appropriate file 2020-03-25 11:01:50 +03:00
Onur Tirtir 2396b66ac5 remove an outdated comment in local executor 2020-03-25 11:01:40 +03:00
Onur Tirtir 8ebb8ef31d use PG_USED_FOR_ASSERTS_ONLY 2020-03-25 11:01:33 +03:00
Onur Tirtir 81d48d3466 fix some typos 2020-03-25 11:01:26 +03:00
Jelte Fennema 149f0b2122
Use Microsoft approved cipher string (#3639)
This cipher string is approved by the Microsoft security team and only enables
TLSv1.2 ciphers.
2020-03-24 15:51:44 +01:00