Commit Graph

2049 Commits (2a7afd62d8067c49da2e378506e7a650f19c6a0f)

Philip Dubé 261a9de42d Fix typos:
VAR_SET_VALUE_KIND -> VAR_SET_VALUE kind
beginnig -> beginning
plannig -> planning
the the -> the
er then -> er than
2019-11-25 23:24:13 +00:00
Marco Slot 4b0ac4b0dd Properly escape ALTER FUNCTION .. SET deparsing. Also test 2019-11-25 23:01:30 +00:00
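For context, a hypothetical statement of the kind whose deparsed form needs the added escaping (the function name and SET value are made up):
```sql
-- the SET value must be quoted correctly when the statement is
-- deparsed for propagation to the workers
ALTER FUNCTION add(int, int)
    SET search_path TO "schema name with spaces";
```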
Philip Dubé 3c10c27b13 GetFunctionAlterOwnerCommand: use format_procedure_qualified
distributed_functions: test a function with a quote in name
AppendDefElemSet: quote variable names
2019-11-25 23:01:30 +00:00
Philip Dubé a81e6a81ab Fix distributed aggregation for non superuser roles
Moves support functions to pg_catalog for now. We'd prefer a different solution
for when we're creating these support functions dynamically
2019-11-25 20:46:25 +00:00
Khashayar Fereidani f81785ad14 Fix underflow in initialization of default values
Initialization of queryWindowClause and queryOrderByLimit with memset underflowed these variables.
Due to invalid sizeof usage, this part of the program could cause a buffer overflow and corrupt function return data in future changes.
2019-11-25 19:25:51 +00:00
Onur TIRTIR bef32624c3
Escape extension name in extension command propagation (#3218) 2019-11-24 12:16:10 +03:00
Philip Dubé 99164398bf Fix potential segfault from standard_planner inlining functions 2019-11-21 18:47:36 +00:00
Philip Dubé c563e0825c Strip trailing whitespace and add final newline (#3186)
This brings files in line with our editorconfig file
2019-11-21 14:25:37 +01:00
Jelte Fennema 1d8dde232f
Automatically convert useless declarations using regex replace (#3181)
* Add declaration removal to CI

* Convert declarations
2019-11-21 13:47:29 +01:00
Onur TIRTIR 9961297d7b Improve extension command propagation logic and tests
* Improve extension command propagation tests

* patch for hardcoded citus extension name

(cherry picked from commit 0bb3dbac0afabda10e8928f9c17eda048dc4361a)
2019-11-21 11:24:39 +03:00
Marco Slot e0cccf7f9a Move C files into the appropriate directory 2019-11-16 11:36:17 +01:00
Hanefi Onaldi d82f3e9406
Introduce intermediate result broadcasting
In plain words, each distributed plan pulls the necessary intermediate
results to the worker nodes that the plan hits. This is primarily useful
in three ways.

(i) If the distributed plan that uses intermediate
result(s) is a router query, then the intermediate results are only
broadcast to a single node.

(ii) If a distributed plan consists of only intermediate results, which
is not uncommon, the intermediate results are broadcast to a single
node only.

(iii) If a distributed query hits a subset of the shards in multiple
workers, the intermediate results will be broadcast to the relevant
node(s).

The final item (iii) becomes crucial for append/range distributed
tables where typically the distributed queries hit a small subset of
shards/workers.

To do this, for each query for which Citus creates a distributed plan, we keep
track of the subPlans used in the queryTree, and save them in the distributed
plan. Just before Citus executes each subPlan, Citus first keeps track of
every worker node that the distributed plan hits, and marks every subPlan
to be broadcast to these nodes. Later, for each subPlan which is itself a
distributed plan, Citus does this operation recursively, since these
distributed plans may access different subPlans, and those have to be
recorded as well.
2019-11-20 15:26:36 +03:00
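For illustration, a minimal sketch of case (i), with a hypothetical table (OFFSET 0 is just a way to force recursive planning of the CTE):
```sql
CREATE TABLE events (user_id int, payload text);
SELECT create_distributed_table('events', 'user_id');

-- the CTE is planned recursively, so its result becomes an
-- intermediate result
WITH counted AS (
    SELECT user_id, count(*) AS cnt
    FROM events
    GROUP BY user_id
    OFFSET 0
)
-- the outer query is a router query for user_id = 1, so the
-- intermediate result is only broadcast to the worker holding
-- that shard
SELECT cnt FROM counted WHERE user_id = 1;
```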
Philip Dubé b7fef5c31a Miscellaneous cleanup in prep for collation propagation 2019-11-19 17:28:59 +00:00
Jelte Fennema 1ed05be82c
Flaky test: Fix recover_prepared_transactions (#3205)
Failed test: https://app.circleci.com/jobs/github/citusdata/citus/35994

We now always take a new connection
2019-11-19 17:49:13 +01:00
Jelte Fennema 1ac96f228b
Flaky test: Force correct plan (#3203)
Failing test: https://app.circleci.com/jobs/github/citusdata/citus/23148
2019-11-19 17:11:05 +01:00
Onur TIRTIR 26c306d188
Add extensions to distributed object propagation infrastructure (#3185) 2019-11-19 17:56:28 +03:00
SaitTalhaNisanci 2cb82ae9bd
create a utility method to mark tasks as failed (#3150) 2019-11-19 16:35:56 +03:00
SaitTalhaNisanci 306d159072
refactor AfterXacthodtConnectionHandling (#3202) 2019-11-19 14:50:23 +03:00
Jelte Fennema 87f57eb92b
Fix verify_metadata not returning consistent results (#3199)
Failing test: https://app.circleci.com/jobs/github/citusdata/citus/58827
2019-11-19 11:02:35 +01:00
Hanefi Onaldi e3ad4aba94
Bump 9.1devel
* Add Changelog entry for 9.0.1
* Bump citus version to 9.1devel
2019-11-19 10:35:57 +03:00
Marco Slot 622462cad7 Return early in CitusHasBeenLoaded when creating a different extension 2019-11-15 03:00:20 +01:00
Önder Kalacı 40fa3862ce
Prevent Citus extension becoming distributed object (#3197)
Prevent the Citus extension from being distributed

Because that could prevent rolling upgrades, where users may
prefer to upgrade the version on the coordinator but not the workers.

There could be some other edge cases, so I'd prefer to keep the Citus
extension outside the picture for now.
2019-11-18 16:57:10 +01:00
Halil Ozan Akgul 5ae7b219ff Create the ALTER ROLE propagation 2019-11-18 18:31:28 +03:00
Nils Dijk 217890af5f
Feature: Expression in reference join (#3180)
DESCRIPTION: Expression in reference join

Fixed: #2582

This patch allows arbitrary expressions in the join clause when joining to a reference table. Examples of such joins can be found in CHbenCHmark queries 7, 8, 9 and 11; `mod((s_w_id * s_i_id),10000) = su_suppkey` and `ascii(substr(c_state,1,1)) = n2.n_nationkey`. Since the join is on a reference table, these queries can be pushed down to the workers.

To implement these queries we widen the `IsJoinClause` predicate to not check whether the expressions are of type `Var` after stripping the implicit coercions. Instead we define a join clause as one whose `Var`s come from more than one table.

This allows more clauses to pass into the logical planner's `MultiNodeTree(...)` planning function. To compensate for this we tighten `LocalJoin`, `SinglePartitionJoin` and `DualPartitionJoin` to check for direct column references when planning. This allows the planner to work with arbitrary join expressions on reference tables.
2019-11-18 16:25:46 +01:00
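A sketch of the feature with trimmed-down versions of the CHbenCHmark tables mentioned above:
```sql
CREATE TABLE stock (s_w_id int, s_i_id int, s_quantity int);
SELECT create_distributed_table('stock', 's_w_id');

CREATE TABLE supplier (su_suppkey int, su_name text);
SELECT create_reference_table('supplier');

-- the join clause is an arbitrary expression rather than a plain
-- column equality; allowed because supplier is a reference table,
-- so the query can still be pushed down to the workers
SELECT su_name, sum(s_quantity)
FROM stock
JOIN supplier ON mod((s_w_id * s_i_id), 10000) = su_suppkey
GROUP BY su_name;
```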
Önder Kalacı a4c90b6ee1
Make distributed object dependency logic follow up to extensions (#3195)
With this commit, we're slightly changing the dependency traversal
logic to enable extension propagation.

The main idea is to "follow" the extension dependencies, but do not
"apply" them.

Since some extension dependencies are base types, and base types
could have circular dependencies, we implement a logic to prevent
revisiting an already visited object.
2019-11-17 17:21:21 +01:00
Hadi Moshayedi d9dcba25e3 Plan reference/local table joins locally 2019-11-15 07:36:50 -08:00
Onder Kalaci 90943a6ce6 Do not include coordinator shards when round-robin is selected
When the user picks "round-robin" policy, the aim is that the load
is distributed across nodes. However, for reference tables on the
coordinator, since local execution kicks in immediately, round-robin
is ignored.

With this change, we're excluding the placement on the coordinator.
Although the approach seems a little bit invasive because of
modifications in the placement list, that sounds acceptable.

We could have done this in some other ways such as:

1) Add a field to "Task->roundRobinPlacement" (or such), which is
updated as the first element after RoundRobinPolicy is applied.
During the execution, if that placement is local to the coordinator,
skip it and try the other remote placements.

2) In TaskAccessesLocalNode()@local_execution.c, check
task_assignment_policy; if round-robin is selected and there is a local
placement on the coordinator, skip it. However, task assignment is done
during planning, while this decision would happen during execution, which
could create weird edge cases.
2019-11-15 06:03:32 -08:00
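For reference, the GUC involved (the table name below is hypothetical):
```sql
-- spread reads across placements; with this change, a reference
-- table placement on the coordinator is excluded so the load
-- actually rotates across the worker nodes
SET citus.task_assignment_policy TO 'round-robin';
SELECT count(*) FROM some_reference_table;
```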
Hadi Moshayedi 15af1637aa Replicate reference tables to coordinator. 2019-11-15 05:50:19 -08:00
Hadi Moshayedi cb011bb30f Propagate isactive to metadata nodes. 2019-11-15 05:48:42 -08:00
SaitTalhaNisanci b9b7fd7660
add IsLoggableLevel utility function (#3149)
* add IsLoggableLevel utility function

* add function comment for IsLoggableLevel

* put ApplyLogRedaction to logutils
2019-11-15 14:59:13 +03:00
Jelte Fennema 1b2c438e69
Rename variables to not shadow globals in RHEL6 (#3194)
Fixes #2839
2019-11-15 12:12:24 +01:00
Jelte Fennema a8bd2d58f5
Update SQL definitions to prepare for drain node functionality (#3179) 2019-11-15 10:11:56 +01:00
Jelte Fennema 4b9b4b0995
Don't warn for declaration-after-statement since we only support GNU99 (#3132)
This change was actually already intended in #3124. However, the
postgres Makefile manually enables this warning too. This way we undo
that.

To confirm that it works two functions were changed to make use of not
having the warning anymore.
2019-11-15 09:46:06 +01:00
Philip Dubé 495c0f5117 Phase 1 implementation of custom aggregates
Phase 1 seeks to implement minimal infrastructure, so does not include:
	- dynamic generation of support aggregates to handle multiple arguments
	- configuration methods to direct aggregation strategy,
		or mark an aggregate's serialize/deserialize as safe to operate across nodes

Aggregates can be distributed when:
	- they have a single argument
	- they have a combinefunc
	- their transition type is not a pseudotype
2019-11-14 19:01:24 +00:00
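A sketch of an aggregate meeting all three conditions, built from postgres' built-in float8 support functions (the aggregate name is made up); combined with the "create_distributed_function: accept aggregates" commit below, it can then be distributed:
```sql
-- single argument, has a combinefunc, and the transition type
-- float8[] is not a pseudotype: eligible for distributed execution
CREATE AGGREGATE avg2 (double precision) (
    sfunc = float8_accum,
    stype = float8[],
    combinefunc = float8_combine,
    finalfunc = float8_avg,
    initcond = '{0,0,0}'
);

SELECT create_distributed_function('avg2(double precision)');
```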
Philip Dubé edc7a2ee38 Improve RECORD support 2019-11-14 18:32:22 +00:00
Philip Dubé eb35743c3f Remove citus.worker_list_file & master_initialize_node_metadata 2019-11-13 00:49:58 +00:00
Philip Dubé 48552bfffe Call DestReceiver rDestroy before it goes out of scope
CitusCopyDestReceiverDestroy: call hash_destroy on shardStateHash & connectionStateHash
2019-11-12 15:03:07 +00:00
Jelte Fennema adc6ca6100
Make simple IN queries on unique columns work with repartition join (#3171)
This is necessary to support Q20 of the CHbenCHmark: #2582.

To summarize the fix: The subquery is converted into an INNER JOIN on a
table. This fixes the issue, since an INNER JOIN on a table is already
supported by the repartition planner.

The way this replacement happens:
1. Postgres replaces `col in (subquery)` with a SEMI JOIN (subquery) on col = subquery_result
2. If this subquery is simple enough Postgres will replace it with a
   regular read from a table
3. If the subquery returns unique results (e.g. a primary key) Postgres
   will convert the SEMI JOIN into an INNER JOIN during the planning. It
   will not change this in the rewritten query though.
4. We check if Postgres sends us any SEMI JOINs during its join order
   planning, if it doesn't we replace all SEMI JOINs in the rewritten
   query with INNER JOIN (which we already support).
2019-11-11 13:44:28 +01:00
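A sketch of the now-supported shape, with hypothetical tables:
```sql
CREATE TABLE orders (o_id int PRIMARY KEY, o_custkey int);
SELECT create_distributed_table('orders', 'o_custkey');

CREATE TABLE lineitem (l_orderkey int, l_quantity int);
SELECT create_distributed_table('lineitem', 'l_orderkey');

SET citus.enable_repartition_joins TO on;

-- o_id is unique, so the SEMI JOIN postgres generates for IN (...)
-- becomes an INNER JOIN, which the repartition planner supports
SELECT count(*)
FROM lineitem
WHERE l_orderkey IN (SELECT o_id FROM orders);
```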
SaitTalhaNisanci 57380fd668
remove duplicated method in multi_logical_optimizer (#3166) 2019-11-11 13:51:21 +03:00
Önder Kalacı 460f000218
Remove failure tests related to real-time executor (#3174)
Since we've removed the real-time executor, we don't need its specific tests.
Since the tests were already using the adaptive executor, they were passing.
But we have plenty of extra tests for the adaptive executor, so it seems safe
to remove them.
2019-11-11 10:18:37 +01:00
Philip Dubé ad86c1b866 AcquireDistributedLockOnRelations: escape relation names 2019-11-08 21:23:01 +00:00
Philip Dubé e8ecbbfcb3 Escape transaction names 2019-11-08 21:23:01 +00:00
Jelte Fennema 9fb897a074
Fix queries with repartition joins and group by unique column (#3157)
Postgres doesn't require you to add all columns that are in the target list to
the GROUP BY when you group by a unique column (or columns). It even actively
removes these group by clauses when you do.

This is normally fine, but for repartition joins it is not. The reason for this
is that the temporary tables don't have these primary key columns. So when the
worker executes the query it will complain that it is missing columns in the
group by.

This PR fixes that by adding an ANY_VALUE aggregate around each variable in
the target list that is not contained in the group by or in an aggregate.
This is done only for repartition joins.

The ANY_VALUE aggregate chooses the value from an undefined row in the
group.
2019-11-08 15:36:18 +01:00
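Conceptually, with hypothetical tables, the rewrite looks like this:
```sql
-- grouping by the unique column o_id lets postgres drop o_custkey
-- from the GROUP BY, which breaks on the workers' temporary tables;
-- the worker query is therefore rewritten along the lines of:
SELECT o_id, any_value(o_custkey), count(*)
FROM orders JOIN lineitem ON (o_id = l_orderkey)
GROUP BY o_id;
```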
SaitTalhaNisanci 02b359623f
remove duplicate code in citus_dist_stat_activity (#3165) 2019-11-08 15:41:32 +03:00
Önder Kalacı 0b3d4e55d9
Local execution should not change hasReturning for distributed tables (#3160)
It looks like the logic to prevent RETURNING for reference tables from
having duplicate entries, which come from local and remote executions,
led to missing some tuples for distributed tables.

With this PR, we ensure the logic kicks in for reference tables
only.
2019-11-08 12:49:56 +01:00
Philip Dubé 9a31837647 isolation_create_restore_point: test reference tables too 2019-11-07 17:50:22 +00:00
Philip Dubé 72c3d64ead Rename OpenConnectionsToAllNodes to OpenConnectionsToAllWorkerNodes 2019-11-07 17:50:22 +00:00
Philip Dubé 2fc45e5897 create_distributed_function: accept aggregates
Adds support for OCLASS_PROC to worker_create_or_replace_object
2019-11-06 18:23:37 +00:00
Hadi Moshayedi e00d1546f3 Don't maintain replicationfactor of reference tables 2019-11-05 07:23:14 -08:00
Onder Kalaci 471703bfaf DEBUG only when the function is distributed
Otherwise, we're seeing this message way too often.
2019-11-05 15:08:35 +00:00
Önder Kalacı 960cd02c67
Remove real time router executors (#3142)
* Remove unused executor code

All of the code of the real-time executor. Some functions
of the router executor still remain because they
are common functions. We'll move them to the right places
in the follow-up commits.

* Move GUCs to transaction management and remove unused struct

* Update test output

* Get rid of references of real-time executor from code

* Warn if real-time executor is picked

* Remove lots of unused connection codes

* Removed unused code for connection restrictions

Real-time and router executors cannot handle re-use of existing
connections within a transaction block.

Adaptive executor and COPY can re-use the connections. So, there is no
reason to keep the code around for applying the restrictions in the
placement connection logic.
2019-11-05 12:48:10 +01:00
Jelte Fennema f0c35ad134 Include fmgr.h, don't duplicate FunctionCallInfo typedef 2019-11-04 17:10:33 +00:00
SaitTalhaNisanci 7c410e3cd7
pass CitusCustomState directly to adaptive executor (#3151) 2019-11-01 19:57:32 +03:00
Önder Kalacı ffd89e4e01
Include all relevant relations in the ExtractRangeTableRelationWalker (#3135)
We changed the logic for pulling RTE_RELATIONs in #3109, which broke
non-colocated subquery joins and partitioned tables.
@onurctirtir found the failing steps, which I traced back to find the issues.

While looking into it in more detail, we decided to expand the list in a
way that the callers get all the relevant RTE_RELATIONs: RELKIND_RELATION,
RELKIND_PARTITIONED_TABLE, RELKIND_FOREIGN_TABLE and RELKIND_MATVIEW.
These are all relation kinds that the Citus planner is aware of.
2019-11-01 16:06:58 +01:00
Onur TIRTIR d3f68bf44f
Fix 'view is not distributed' error when a view is used in modify statements (#3104) 2019-11-01 16:34:01 +03:00
SaitTalhaNisanci c7ceca3216
update outdated comment in JobExecutorType (#3148) 2019-11-01 11:36:56 +03:00
SaitTalhaNisanci 70e46703aa
Fix debug1 message in JobExecutorType (#3147)
When the citus.enable_repartition_joins GUC is set to on and we have the
adaptive executor, there was a typo in the debug message, which said
real-time executor instead of adaptive executor.
2019-11-01 11:14:19 +03:00
Marco Slot 51c64c70c9 Do not try to sync metadata on standby coordinator 2019-10-30 05:15:45 +01:00
SaitTalhaNisanci dadbe86af1
refactor some of hard coded values in citus gucs (#3137)
* refactor some of hard coded values in citus gucs

* rename GUC_ALLOW_ALL to GUC_STANDARD
2019-10-30 10:35:39 +03:00
Marco Slot 03cae27782 Add tests for distributing functions with replication_model statement 2019-10-26 23:57:59 +02:00
Marco Slot 067657af26 Disallow distributed functions with distribution arguments unless replication_model is streaming 2019-10-26 23:57:59 +02:00
SaitTalhaNisanci 29d45bd1b9
Do not assign InvalidOid for local execution while extracting parameters (#3131)
* do not assign InvalidOid for local execution while extracting parameters

* rename functions

* rename parameter and replace function
2019-10-28 14:28:22 +03:00
Önder Kalacı dceaddbe4d
Remove real-time/router executors (step 1) (#3125)
See #3125 for details on each item.

* Remove real-time/router executor tests-1

These are the ones which don't have '_%d' in the test
output files.

* Remove real-time/router executor tests-2

These are the ones which do have '_%d' in the test
output files.

* Move the tests outputs to correct place

* Make sure that single shard commits use 2PC on adaptive executor

It looks like we messed up the tests in #2891. Fixing them back.

* Use adaptive executor for all router queries

This becomes important because when task-tracker is picked, we
used to pick router executor, which doesn't make sense.

* Remove explicit references to real-time/router executors in the tests

* JobExecutorType never picks real-time/router executors

* Make sure to go incremental in test output numbers

* Even users cannot pick real-time anymore

* Do not use real-time/router custom scans

* Get rid of unnecessary normalizations

* Reflect unneeded normalizations

* Get rid of unnecessary test output file
2019-10-25 10:54:54 +02:00
Marco Slot b8c8fd4612 Fix run_command_on_colocated_placements tests 2019-10-23 00:08:17 +02:00
Marco Slot a1162b2023 Rename 9.1 upgrade script to upgrade from 9.0-2 2019-10-23 00:08:17 +02:00
Marco Slot 04040e0a37 Revoke usage from the citus schema 2019-10-23 00:08:17 +02:00
Jelte Fennema a5010e5b17
Add extra foreach convenience macros (#3117)
This completely hides `ListCell` from the user of the loop

Example usage:
```c
WorkerNode *workerNode = NULL;

foreach_ptr(workerNode, workerNodeList) {
	// Do stuff with workerNode
}
```

Instead of:
```c
ListCell *workerNodeCell = NULL;

foreach(workerNodeCell, workerNodeList) {
	WorkerNode *workerNode = lfirst(workerNodeCell);
	// Do stuff with workerNode
}
```
2019-10-23 16:49:12 +02:00
Onder Kalaci c2460a1c31 Add upgrade test for distributed functions
Simply make sure that Citus can pushdown functions after pg upgrade.
2019-10-23 12:07:51 +02:00
Philip Dubé b2f084d7f5 UnsetMetadataSyncedForAll: use CatalogTupleUpdateWithInfo 2019-10-23 00:45:11 +00:00
Philip Dubé 2a969fe4bb ssl_by_default: remove stray PG10 check 2019-10-23 00:27:54 +00:00
Philip Dubé 2204a17dbd isolation_multiuser_locking: reorder GRANT to avoid deadlock on enterprise 2019-10-22 21:10:55 +00:00
Onder Kalaci a208f8b151 Fix memory leak on ReceiveResults
It turns out that TupleDescGetAttInMetadata() allocates quite a lot
of memory. And, if the target list is long and too many rows are
returned, the leak becomes apparent.

You can reproduce the issue without the fix with the following commands:

```SQL

CREATE TABLE users_table (user_id int, time timestamp, value_1 int, value_2 int, value_3 float, value_4 bigint);
SELECT create_distributed_table('users_table', 'user_id');

insert into users_table SELECT i, now(), i, i, i, i FROM generate_series(0,99999)i;

-- load faster

-- 200,000
INSERT INTO users_table SELECT * FROM users_table;

-- 400,000
INSERT INTO users_table SELECT * FROM users_table;

-- 800,000
INSERT INTO users_table SELECT * FROM users_table;

-- 1,600,000
INSERT INTO users_table SELECT * FROM users_table;

-- 3,200,000
INSERT INTO users_table SELECT * FROM users_table;

-- 6,400,000
INSERT INTO users_table SELECT * FROM users_table;

-- 12,800,000
INSERT INTO users_table SELECT * FROM users_table;

-- making the target list entries wider makes the leak show up faster
 select *,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,* FROM users_table ;

 ```
2019-10-22 17:22:26 +02:00
Jelte Fennema 78e495e030
Add shouldhaveshards to pg_dist_node (#2960)
This is an improvement over #2512.

This adds the boolean shouldhaveshards column to pg_dist_node. When it's false, create_distributed_table for new colocation groups will not create shards on that node. Reference tables will still be created on nodes where it is false.
2019-10-22 16:47:16 +02:00
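A minimal usage sketch, assuming the master_set_node_property() UDF from the related drain-node work (see "Update SQL definitions to prepare for drain node functionality" above):
```sql
-- stop placing shards of new colocation groups on this node;
-- existing shards and reference tables are unaffected
SELECT master_set_node_property('worker-1.example.com', 5432,
                                'shouldhaveshards', false);
```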
Hanefi Onaldi 7ebda04494
Update all c-style comments in migration files 2019-10-21 16:05:53 +03:00
Halil Ozan Akgul 5f04ac774f Adds the tests for refresh materialized views 2019-10-17 16:00:56 +03:00
Jelte Fennema 7abedc38b0
Support subqueries in HAVING (#3098)
Areas for further optimization:
- Don't save subquery results to a local file on the coordinator when the subquery is not in the having clause
- Push the HAVING with subquery to the workers if there's a group by on the distribution column
- Don't push down the results to the workers when we don't push down the HAVING clause, only the coordinator needs it

Fixes #520
Fixes #756
Closes #2047
2019-10-16 16:40:14 +02:00
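A sketch of a now-supported query shape (hypothetical table):
```sql
-- the subquery result is computed once and used while evaluating
-- HAVING on the coordinator
SELECT l_orderkey, sum(l_quantity) AS total
FROM lineitem
GROUP BY l_orderkey
HAVING sum(l_quantity) > (SELECT avg(l_quantity) * 10 FROM lineitem);
```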
Onur TIRTIR 3bfb2a078b
Make changes to if-statement in ExtractRangeTableList for further walker types (#3110) 2019-10-16 15:50:09 +03:00
Onur TIRTIR d5f83dc110
Refactor range table walkers (#3109) 2019-10-16 01:20:49 +03:00
SaitTalhaNisanci 94a7e6475c
Remove copyright years (#2918)
* Update year as 2012-2019

* Remove copyright years
2019-10-15 17:44:30 +03:00
Jelte Fennema 9b2f4d71ac
Make sure some MX tests use defined shard_ids (#3103) 2019-10-12 22:46:14 +02:00
Philip Dubé 74cb168205 Remove Postgres 10 support 2019-10-11 21:56:56 +00:00
Hadi Moshayedi b50d216536 Fix a typo 2019-10-10 10:44:41 -07:00
Philip Dubé 4063e7ca67 CALL delegation: apply strip_implicit_coercions to distribution argument 2019-10-10 17:42:43 +00:00
Philip Dubé 7ffd78b6e0 isolation_multiuser_locking
Introduce a test which checks that locks are only acquired when a user has necessary permissions
Currently tests REINDEX, CREATE INDEX, TRUNCATE
2019-10-10 16:58:41 +00:00
Philip Dubé dd490b6376 Cache whether an object is in pg_dist_object. Avoids redundant lookups for non-distributed objects 2019-10-10 14:50:38 +00:00
SaitTalhaNisanci 83667436f7
add support to run citus upgrade tests locally (#3083)
* add support to run citus upgrade tests locally

* don't build tars if they already exist

* use current code instead of master for upgrade

* always build the current code

* copy the current citus code to have isolated citus upgrade tests

* fix configure and simplify copy
2019-10-08 15:32:44 +03:00
Nils Dijk 4a4a220945
Fix enum add value order and pg12 (#3082)
DESCRIPTION: Fix order for enum values and correctly support pg12

PG 12 introduces support for `ALTER TYPE ... ADD VALUE ...` inside transactions. Earlier versions would error out when it was called in a transaction, hence we connect to the workers outside of the transaction; this could cause inconsistencies on pg12 now that postgres doesn't error out with this syntax anymore.

During the implementation of this fix it became apparent there was an error with the ordering of enum labels when the type was recreated. A patch and test have been included.
2019-10-07 17:16:19 +02:00
Jelte Fennema 01da11f264
Change citus truncate trigger to AFTER and add more upgrade tests (#3070)
* Add more upgrade tests

* Fix citus trigger generation after upgrade

citus_truncate_trigger runs before truncate when created by create_distributed_table:
492d1b2cba/src/backend/distributed/commands/create_distributed_table.c (L1163)

* Remove pg_dist_jobid_seq
2019-10-07 16:43:04 +02:00
Onder Kalaci 3be72ce42f Make sure that distributed functions always have the correct user
Objectives:

(a) both super user and regular user should have the correct owner for the function on the worker
(b) The transactional semantics would work fine for both super user and regular user
(c) non-super-user and non-function owner would get a reasonable error message if they try to distribute the function

Co-authored-by: @serprex
2019-10-04 21:38:49 +00:00
SaitTalhaNisanci c547664fae
Add Citus upgrade tests with its job (#3003)
* Add initial citus upgrade test

* Add restart databases and run tests in all nodes

* Add output for citus versions 8.0 8.1 8.2 and 8.3

* Add verify step for citus upgrade

* Add target for citus upgrade test in makefile

* Add check citus upgrade job

* Fix installation file path and add missing tar

* Run citus upgrade for v8.0 v8.1 v8.2 and v8.3

* Create upgrade_common file and rename upgrade check

* Add pg version to citus upgrade test

* Test with postgres 10 and 11 in citus upgrade tests

* Add readme for citus upgrade test

* Add some basic tests to citus upgrade tests

* Add citus upgrade mixed mode test

* Remove citus artifacts before installing another one

* Refactor citus upgrade test according to reviews

* quick and dirty rewrite of citus upgrade tests to support local execution.

I think we need to change the makefile in such a way that the tar files can be injected from the circle ci config file.

Also I removed some of the citus version checks so you don't have the requirement to pass that in separately from the pre tar file. I am not super happy with it, but two flags that need to be kept in sync are also not desirable. Instead I print out the citus version that is installed per node. This will not cause a failure if they are not what one would expect, but it lets us verify we are running the expected version.

* use latest citusupgradetester in circleci

* update readme and use common alias for upgrade_common import
2019-10-04 17:44:49 +03:00
Marco Slot 1a3a174f67 Grant usage on schema citus to public 2019-10-04 12:26:08 +02:00
Marco Slot 89377ee578 Move RowExclusiveLock to start in SyncMetadataToNodes 2019-10-04 12:07:41 +02:00
Hadi Moshayedi 217db2a03e Don't block for locks in SyncMetadataToNodes() 2019-10-03 16:53:36 -07:00
Hadi Moshayedi ae915493e6 Don't send metadata commands to not-synced workers.
Otherwise some of the dependencies might not exist yet and
commands will error out.
2019-10-03 16:52:25 -07:00
Marco Slot 0b4b63e647 Drop the rebalancer before creating new UDFs 2019-10-03 16:08:58 +02:00
Marco Slot 2e50306cf8 Check command type in TryToDelegateFunctionCall 2019-10-03 15:37:15 +02:00
Jelte Fennema 9833c07070 Improve upgrade test runner
- Update certifi in regress pipenv
- Use normal test ports for upgrade tests
- Make diff behave correctly for upgrade tests
2019-10-03 13:10:11 +02:00
Halil Ozan Akgul bda8f6f87b Created tests for distribution to reference table foreign keys on mx 2019-10-03 09:31:13 +03:00
SaitTalhaNisanci 19bdca14d8
Add jobs to run tests with pg 12 (#3033)
* Add PG12 test outputs

* Add jobs to run tests with pg 12

* use POSIX collate for compatibility between pg10/pg11/pg12

* do not override the new default value when running vanilla tests

* fix 2 problems with pg12 tests

* update pg12 images with pg12 rc1

* remove pg10 jobs

* Revert "Add PG12 test outputs"

This reverts commit f3545b92ef.

* change images to use latest instead of dev

* add missing coverage flags
2019-10-02 15:33:12 +03:00
Halil Ozan Akgul e5906bead2 Created isolation tests for update, delete, upsert on reference tables with MX. 2019-10-02 10:11:21 +03:00
Hanefi Onaldi bd416ef68f Fix empty FROM clauses in PG12 2019-10-01 19:54:11 +00:00
Halil Ozan Akgul 1d7030a651 Created isolation tests for select for update on reference tables with MX. 2019-10-01 16:29:15 +03:00
Jelte Fennema ec4a165eec Improve isolation test block detection (#3055) 2019-10-01 14:10:15 +02:00
Jelte Fennema 40f785e6d8 Move citus_isolation_test_session_is_blocked to separate udf sql file 2019-10-01 14:10:15 +02:00
Philip Dubé 89d35e9692 Attempt to force custom plans for prepared statements when trying to delegate function calls
We discern between PARAM_EXEC & PARAM_EXTERN:
d52eaa0948/src/include/nodes/primnodes.h (L211)
According to primnodes.h we should only run into PARAM_EXEC or PARAM_EXTERN
2019-09-30 23:49:14 +00:00
Philip Dubé 29f1ea079b PG_VERSION_NUM > 110000 should be PG_VERSION_NUM >= 110000
Also fix a > 12000 typo
2019-09-30 23:37:43 +00:00
Hadi Moshayedi 5e97e5c98e Don't push down queries when in subqueries/ctes 2019-09-30 14:22:05 -07:00
Marco Slot 35bef0f3db Avoid caching connections from backends that service internal connections 2019-09-28 08:32:10 +02:00
Nils Dijk 01b26cf91a
Disallow distributed functions for functions depending on an extension (#3049)
DESCRIPTION: Disallow distributed functions for functions depending on an extension

Functions depending on an extension cannot (yet) be distributed by citus. If we allowed this it would cause issues with our dependency following mechanism, as we stop following objects that depend on an extension.

By not allowing functions to be distributed when they depend on an extension, as well as not allowing distributed functions to be made dependent on an extension, we won't break the ability to add new nodes. Allowing functions depending on extensions to be distributed at the moment could cause problems in that area.
2019-09-30 15:19:47 +02:00
Nils Dijk 473cbc0115
Propagate CREATE OR REPLACE FUNCTION to workers for distributed functions (#3043)
DESCRIPTION: Propagate CREATE OR REPLACE FUNCTION

Distributed functions could be replaced, which should be propagated to the workers to keep the function in sync between all nodes.

Due to the complexity of deparsing the `CreateFunctionStmt` we actually produce the plan during the processing phase of our utility hook. Since the changes have already been made in the catalog tables we can reuse `pg_get_functiondef` to get the generated `CREATE OR REPLACE` sql.
2019-09-30 12:41:17 +02:00
Jelte Fennema 82ec918b29
Add explain summary support (#3046)
Fixes #2922 and also adds explain analyze regression tests
2019-09-30 10:58:49 +02:00
Nils Dijk 9c2c50d875
Hookup function/procedure deparsing to our utility hook (#3041)
DESCRIPTION: Propagate ALTER FUNCTION statements for distributed functions

Using the implemented deparser for function statements to propagate changes to both functions and procedures that were previously distributed.
2019-09-27 22:06:49 +02:00
Philip Dubé 363409a0c2 Propagate REINDEX TABLE & REINDEX INDEX 2019-09-27 18:14:53 +00:00
Hanefi Onaldi 66b9f2e887 Deparsing and qualifying for FUNCTION/PROCEDURE statements (#3014)
This PR aims to add all the necessary logic to qualify and deparse all possible `{ALTER|DROP} .. {FUNCTION|PROCEDURE}` queries.

As Procedures are introduced in PG11, the code contains many PG version checks. I tried my best to make it easy to clean up once we drop PG10 support.


Here are some caveats:
- I assumed that the parse tree is a valid one. There are some queries that are not allowed, but still are parsed successfully by the postgres planner. Such queries will result in errors at execution time. (e.g. `ALTER PROCEDURE p STRICT` -> the `STRICT` action is valid for functions but not procedures. Postgres decides to parse them nevertheless.)
2019-09-27 19:02:52 +02:00
Marco Slot 2868e02a3d Implement SELECT function call delegation.
When a function is marked as colocated with a distributed table,
we try delegating queries of kind "SELECT func(...)" to workers.

We currently only support this simple form, and don't delegate
forms like "SELECT f1(...), f2(...)", "SELECT f1(...) FROM ...",
or function calls inside transactions.

As a side effect, we also fix the transactional semantics of DO blocks.
Previously we didn't consider a DO block a multi-statement transaction.
Now we do.

Co-authored-by: Marco Slot <marco@citusdata.com>
Co-authored-by: serprex <serprex@users.noreply.github.com>
Co-authored-by: pykello <hadi.moshayedi@microsoft.com>
2019-09-27 09:13:25 -07:00
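A sketch of the supported form (table and function are hypothetical), using the create_distributed_function() API from the commits below:
```sql
CREATE TABLE accounts (id int PRIMARY KEY, balance int);
SELECT create_distributed_table('accounts', 'id');

CREATE FUNCTION deposit(account_id int, amount int) RETURNS void
LANGUAGE sql AS $$
    UPDATE accounts SET balance = balance + amount WHERE id = account_id;
$$;

-- distributed by its first argument and colocated with accounts
SELECT create_distributed_function('deposit(int,int)', '$1',
                                   colocate_with := 'accounts');

-- this bare form is delegated to the worker holding account 42
SELECT deposit(42, 100);
```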
Jelte Fennema dab16be283
Set default threshold on get_rebalance_table_shards_plan to 0, like rebalance_table_shards (#3039)
In this PR the default `threshold` of `rebalance_table_shards` was set to 0: https://github.com/citusdata/shard_rebalancer/pull/73
However, the default for get_rebalance_table_shards_plan was not updated. This
can cause the confusing situation where the actual steps run by
`rebalance_table_shards` are not the same as the ones returned by
`get_rebalance_table_shards_plan`.
2019-09-27 17:21:36 +02:00
Halil Ozan Akgul 824a69587c Created isolation tests for insert select on MX 2019-09-26 17:40:36 +03:00
Marco Slot 32a11bdf6c Return early for common commands in the utility hook (#3031)
We started copying parse trees by default further on in `multi_ProcessUtility`. That's not a problem for maintenance commands, but might be for things like `PREPARE` and `EXECUTE`, which might happen thousands of times per second. Add a few common commands to the check at the start.
2019-09-26 11:43:35 +02:00
Halil Ozan Akgul d56ab6274c Created isolation tests for drop, alter, index and select for update on MX. 2019-09-26 10:47:14 +03:00
Halil Ozan Akgul d426fb2159 Created isolation tests for truncate on MX. 2019-09-25 16:51:20 +03:00
Halil Ozan Akgul 62b6852923 Created isolation tests for copy on MX. 2019-09-25 15:36:05 +03:00
Onder Kalaci 219f3676a0 Improve some tests around local execution and CTE inlining on pg 12 2019-09-25 10:53:19 +02:00
Philip Dubé 4f60e3a149 Feedback 2019-09-24 17:31:09 +00:00
Marco Slot c1e43b25da Use the new create_distributed_function API in some call tests 2019-09-24 17:31:09 +00:00
Marco Slot ca478defeb Deparse CALL statement instead of using original query string 2019-09-24 17:31:09 +00:00
Philip Dubé 90e1f1442a Annotated tests for multi_mx_call.
Co-authored-by: pykello <hadi.moshayedi@microsoft.com>
2019-09-24 17:31:09 +00:00
Marco Slot e269d990c9 Cast the distribution argument value when possible 2019-09-24 17:31:09 +00:00
Philip Dubé c95d46b4f3 Extend multi_mx_call with some of Hadi's suggestions for better test coverage 2019-09-24 17:31:09 +00:00
Philip Dubé 432a8ef85b Hadi's feedback
Co-authored-by: pykello <hadi.moshayedi@microsoft.com>
Co-authored-by: serprex <serprex@users.noreply.github.com>
2019-09-24 17:31:09 +00:00
Philip Dubé 16b8d17aba Test: multi_mx_call 2019-09-24 17:31:09 +00:00
Philip Dubé bc1ad67eb5 Distribute CALL on distributed procedures to metadata workers
Lots taken from https://github.com/citusdata/citus/pull/2829
2019-09-24 17:31:09 +00:00
Onder Kalaci 18de78f386 Relax the colocation checks for distributed functions
As long as the types can be coerced, it is safe to push down
functions.
2019-09-24 16:31:08 +02:00
Jelte Fennema 0f90c2497e Use synchronous replication for follower tests 2019-09-24 15:51:49 +02:00
Jelte Fennema 78ccc323d1 Remove stuff needed only for PG 9.6 from test runner 2019-09-24 15:51:49 +02:00
Jelte Fennema bd2103e597 Remove flappy test 2019-09-24 14:15:33 +02:00
Jelte Fennema 897ec1bdeb Revert "Temporarily disable flappy test"
This reverts commit 4b4459ee62.
2019-09-24 14:15:33 +02:00
Marco Slot 42be8afd74 Swap pg_dist_node groupid and nodeid sequences 2019-09-24 12:03:44 +02:00
Marco Slot 0dea485c68 Fix misspelling in multi_colocation_utils 2019-09-24 11:27:30 +02:00
Marco Slot 4b4459ee62 Temporarily disable flappy test 2019-09-24 11:02:34 +02:00
Hadi Moshayedi 48078a30e6 Fix wait_until_metadata_sync() for postgres 12.
Postgres 12 now has an assertion that the calls to WaitLatchOrSocket
handle postmaster death.
2019-09-23 14:15:35 -07:00
Philip Dubé 06faba91c0 Include ifdefs for pg12 API changes, update local_shard_execution test to avoid CTE inlining 2019-09-23 20:22:35 +00:00
Onder Kalaci d37745bfc7 Sync metadata to worker nodes after create_distributed_function
Since the distributed functions are useful when the workers have
metadata, we automatically sync it.

Also, after master_add_node(). We do it lazily and let the daemon
sync it. That's mainly because the metadata syncing cannot be done
in transaction blocks, and we don't want to add lots of transactional
limitations to master_add_node() and create_distributed_function().
2019-09-23 18:30:53 +02:00
Marco Slot 5f23b951c7 Support serial and smallserial when syncing metadata 2019-09-23 17:39:21 +02:00
Marco Slot e58d76c5f6 Fix assert failure in bare SELECT FROM reference table FOR UPDATE in MX 2019-09-23 17:00:09 +02:00
SaitTalhaNisanci 71e7047e65
Enhance pg upgrade tests (#3002)
* Enhance pg upgrade tests

* Add a specific upgrade test for pg_dist_partition

We store the index of the distribution column, and when a column with an
index smaller than the distribution column's index is dropped before
an upgrade, the stored index should still match the distribution column
after the upgrade
2019-09-23 17:37:14 +03:00
Marco Slot d85d77634d Handle anonymous composite types on the target list 2019-09-23 14:53:02 +02:00
Halil Ozan Akgul b55b275a30 Created isolation tests for update, delete and upsert on MX 2019-09-23 14:13:29 +03:00
Onder Kalaci d7e2968120 Add parameters to create_distributed_function()
With this commit, we're changing the API for create_distributed_function()
such that users can provide the distribution argument and the colocation
information.
2019-09-22 21:53:33 +02:00
Onder Kalaci e1fe8d60b4 Make sure that functions are also listed in SupportedDependencyByCitus
We've recently merged two commits, db5d03931d
and eccba1d4c3, which operate
on very similar places.

It turns out that we have an integration issue, where master_add_node()
fails to replicate the functions to the newly added node.
2019-09-20 11:02:50 +02:00
Hadi Moshayedi d24cefd055 Set active snapshot before SyncMetadataToNodes(). 2019-09-19 09:00:25 -07:00
Nils Dijk 72015faeb2
fix disable_object_propagation test for pg12 2019-09-19 17:40:24 +02:00
Hanefi Onaldi ed11b9590c
Add distributed func creation queries in dependency replication logic 2019-09-18 20:07:45 +03:00
Hadi Moshayedi d2f2acc4b2 Make master_update_node citus-ha friendly. 2019-09-18 09:32:54 -07:00
Hadi Moshayedi 76f3933b05 Add metadatasynced, and sync on master_update_node()
Co-authored-by: pykello <hadi.moshayedi@microsoft.com>
Co-authored-by: serprex <serprex@users.noreply.github.com>
2019-09-18 09:32:54 -07:00
Nils Dijk db5d03931d
Feature disable object propagation (#2986)
DESCRIPTION: Provide a GUC to turn off the new dependency propagation functionality

In case the dependency propagation functionality introduced in 9.0 causes issues to a user's cluster, they can turn it off almost completely. The only dependency that will still be propagated and kept track of is the schema, to emulate the old behaviour.

The GUC to change is `citus.enable_object_propagation`. When set to `false` the functionality will be mostly turned off. Be aware that objects marked as distributed in `pg_dist_object` will still be kept in the catalog as distributed objects. Alter statements to these objects will not be propagated to workers and may cause desynchronisation.
2019-09-18 17:16:22 +02:00
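Usage is a single GUC:
```sql
-- fall back to (almost) the old behaviour; only schemas keep
-- being propagated
SET citus.enable_object_propagation TO off;
```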
Philip Dubé ac14f1dd49 pg12 doesn't support client_min_messages as 'fatal' 2019-09-17 20:37:06 +00:00
Nils Dijk 2b7f5552c8
Fix: rename remote type on conflict (#2983)
DESCRIPTION: Rename remote types during type propagation

To prevent data from being destroyed when a remote type differs from the type on the coordinator during type propagation, we wanted to rename the type instead of `DROP CASCADE`.

This patch removes the `DROP` logic and instead adds the creation of a statement renaming the type to a free name.
2019-09-17 18:54:10 +02:00
Nils Dijk 0a3152d09c
Add feature flag to turn off create type propagation (#2982)
DESCRIPTION: Add feature flag to turn off create type propagation

When `citus.enable_create_type_propagation` is set to `false` citus will not propagate `CREATE TYPE` statements to the workers. Types are still distributed when tables that depend on these types are distributed.
2019-09-17 15:50:06 +02:00
Halil Ozan Akgul 5333296a54 Created isolation tests for select on MX 2019-09-17 12:44:45 +03:00
Philip Dubé 964020097d Merge two conflicting pg_dist_object headers 2019-09-16 19:19:21 +00:00
Onder Kalaci cde6b02858 Add columns to pg_dist_object for distributed functions
This PR simply adds the columns to pg_dist_object and
implements the necessary metadata changes to keep track of
distribution argument of the functions/procedures.
2019-09-16 17:28:04 +02:00
Jelte Fennema af9fb9f785
Fix depend arguments for OSX clang cpp (#2978)
A better fix for #2975. Apparently for OSX cpp -MF and -MT shouldn't have a
space in between the flag and their value. Without the space it still works for
gcc as well.
2019-09-16 15:22:07 +02:00
Halil Ozan Akgul 7cde785031 Added the MX isolation tests for insert 2019-09-16 15:49:43 +03:00
Jelte Fennema 31fac3b90e
Don't generate SQL files twice by not making directories a target (#2977) 2019-09-16 12:53:17 +02:00
Önder Kalacı 13947a63ce Don't use flags that mac clang doesn't support as it does on other platforms (#2975) 2019-09-16 11:44:06 +02:00
Hanefi Onaldi 8f2a3a0604
Introduce create_distributed_function(regproc) UDF (#2961)
This PR aims to add the minimal set of changes required to start
distributing functions. You can use create_distributed_function(regproc)
UDF to distribute a function.

    SELECT create_distributed_function('add(int,int)');

The function definition should include the param types to properly
identify the correct function that we wish to distribute
2019-09-13 23:27:46 +03:00
Philip Dubé fb10edcb9d isolation_add_node_vs_reference_table_operations: test add in parallel with create_reference_table 2019-09-13 18:13:58 +00:00
Philip Dubé 492d1b2cba ActivePrimaryNodeList: add lockMode parameter 2019-09-13 17:44:56 +00:00
Philip Dubé 5e5f4628a0 Fix pg12 compile 2019-09-13 17:25:30 +00:00
Jelte Fennema 4bbf65d913
Change SQL migration build process for easier reviews (#2951)
@thanodnl told me it was a bit of a problem that it's impossible to see
the history of a UDF in git. The only way to do so is by reading all the
sql migration files from new to old. Another problem is that it's also
hard to review the changed UDF during code review, because to find out
what changed you have to do the same. I thought of a IMHO better (but
not perfect) way to handle this.

We keep the definition of a UDF in sql/udfs/{name_of_udf}/latest.sql.
We change that file whenever we need to make a change to the UDF. On
top of that you also make a snapshot of the file in
sql/udfs/{name_of_udf}/{migration-version}.sql (e.g. 9.0-1.sql) by
copying the contents. This way you can easily view what the actual
changes were by looking at the latest.sql file.

There's still the question on how to use these files then. Sadly
postgres doesn't allow inclusion of other sql files in the migration sql
file (it does in psql using \i). So instead I used the C preprocessor+
make to compile a sql/xxx.sql to a build/sql/xxx.sql file. This final
build/sql/xxx.sql file has every occurrence of #include "somefile.sql" in
sql/xxx.sql replaced by the contents of somefile.sql.
2019-09-13 18:44:27 +02:00
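A sketch of what a migration source looks like under this scheme (the exact file names are illustrative):
```sql
-- sql/citus--8.3-1--9.0-1.sql, before preprocessing; the build step
-- runs the C preprocessor so the copy under build/ has this include
-- replaced by the contents of the referenced file
#include "udfs/citus_prepare_pg_upgrade/9.0-1.sql"
```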
Nils Dijk 2879689441
Distribute Types to worker nodes (#2893)
DESCRIPTION: Distribute Types to worker nodes

When to propagate
==============

There are two logical moments at which types could be distributed to the worker nodes
 - When they get used ( just in time distribution )
 - When they get created ( proactive distribution )

The just in time distribution follows the model of how schemas get created right before we are going to create a table in that schema; for types this would be when a table uses the type as one of its columns.

The proactive distribution is suitable for situations where it is beneficial to have the type on the worker nodes directly. They can later on be used in queries where an intermediate result gets created with a cast to this type.

Just in time creation is always the last resort; you cannot create a distributed table before the type gets created. A good example use case is: you have an existing postgres server that needs to scale out. You add the citus extension, add some nodes to the cluster, and distribute the table. The type was created before citus existed; there was no moment where citus could have propagated the creation of the type.

Proactive is almost always a good option. Types are not resource intensive objects; there is no performance overhead to having hundreds of types. If you want to use them in a query to represent an intermediate result (which happens in our test suite) they just work.

There is however a moment when proactive type distribution is not beneficial; in transactions where the type is used in a distributed table.

Lets assume the following transaction:

```sql
BEGIN;
CREATE TYPE tt1 AS (a int, b int);
CREATE TABLE t1 (a int PRIMARY KEY, b tt1);
SELECT create_distributed_table('t1', 'a');
\copy t1 FROM bigdata.csv
```

Types are node scoped objects, meaning the type exists once per worker. Shards however have the best performance when they are created over their own connection. For the type to be visible on all connections it needs to be created and committed before we try to create the shards. Here the just in time situation is most beneficial and follows how we create schemas on the workers. Outside of a transaction block we will just use 1 connection to propagate the creation.

How propagation works
=================

Just in time
-----------

Just in time propagation hooks into the infrastructure introduced in #2882. It adds types as a supported object in `SupportedDependencyByCitus`. This will make sure that any object being distributed by citus that depends on types will now cascade into types. When types themselves depend on other objects, those will get created first.

Creation later works by getting the ddl commands to create the object by its `ObjectAddress` in `GetDependencyCreateDDLCommands` which will dispatch types to `CreateTypeDDLCommandsIdempotent`.

For the correct walking of the graph we follow array types; when later asked for the ddl commands for array types we return `NIL` (empty list), which means the object will not be recorded as distributed (it's an internal type, dependent on the user type).

Proactive distribution
---------------------

When the user creates a type (composite or enum) we will have a hook running in `multi_ProcessUtility` after the command has been applied locally. Running after the local application means we already have an `ObjectAddress` for the type. This is required to mark the type as being distributed.

Keeping the type up to date
====================

For types that are recorded in `pg_dist_object` (eg. `IsObjectDistributed` returns true for the `ObjectAddress`) we will intercept the utility commands that alter the type.
 - `AlterTableStmt` with `relkind` set to `OBJECT_TYPE` encapsulate changes to the fields of a composite type.
 - `DropStmt` with removeType set to `OBJECT_TYPE` encapsulate `DROP TYPE`.
 - `AlterEnumStmt` encapsulates changes to enum values.
    Enum types cannot be changed transactionally. When the execution on a worker fails, a warning will be shown to the user that the propagation was incomplete due to worker communication failure. An idempotent command is shown for the user to re-execute when the worker communication is fixed.

Keeping types up to date is done via the executor. Before the statement is executed locally we create a plan on how to apply it on the workers. This plan is executed after we have applied the statement locally.

All changes to types need to be done in the same transaction for types that have already been distributed and will fail with an error if parallel queries have already been executed in the same transaction. Much like foreign keys to reference tables.
2019-09-13 17:46:07 +02:00
Jelte Fennema e4cfea3751 Correctly add schema when distributing sequence definitions
Fixes #2958
2019-09-13 17:19:35 +02:00
Jelte Fennema 579a40dfa5 Add make check-base-mx 2019-09-13 17:19:35 +02:00
Jelte Fennema 389086102a
Refactor 9 argument function to use a struct (#2952)
For another PR I needed to add another column which would require to add
another argument to an already 9 argument function signature. In this
case it would be a boolean flag and there were already two boolean flags
in there. In my experience it becomes really easy to mess up the order
of these flags at that point. Especially because the type system doesn't
distinguish between the 3 different booleans with completely different
meanings.

So I refactored these signatures to receive a struct containing most of
these arguments. That way you don't mess up the ordering, because the
meaning of the boolean is not order dependent but field-name dependent.
It also makes it possible to set good shared defaults for this struct.
2019-09-13 15:49:53 +02:00
Halil Ozan Akgul 4d34b79b87 There were two multi insert - single insert tests but no multi insert - multi insert test. Fixed it. 2019-09-13 16:09:11 +03:00
Nils Dijk 05f0668cdc
Fix: schema leak onto create index statement cache (#2964)
DESCRIPTION: Fix schema leak on CREATE INDEX statement

When a CREATE INDEX is cached between executions we might leak the schema name onto the cached statement of an earlier execution, preventing the right index from being created.

Even though the cache is cleared when the search_path changes we can trigger this behaviour by having the schema already on the search path before a colliding table is created in a schema earlier on the `search_path`. When calling an unqualified create index via a function (used to trigger the caching behaviour) we see that the index is created on the wrong table after the schema leaked onto the statement.

By copying the complete `PlannedStmt` and `utilityStmt` during our planning phase for distributed ddls we make sure we are not leaking the schema name onto a cached data structure.

Caveat: COPY statements already have a lot of parse tree copying going on without directly putting it back on the `pstmt`. We should verify that copies modify the statement and potentially copy the complete `pstmt` there already.
2019-09-13 14:04:23 +02:00
Hadi Moshayedi 48ff4691a0 Return nodeid instead of record in some UDFs 2019-09-12 12:46:21 -07:00
Philip Dubé ae1171a373 Test invalid aggregate 2019-09-12 16:55:05 +00:00
Philip Dubé 2aa6852dea Begin searching AggregateNames from 1, not 0 2019-09-12 16:55:05 +00:00
Jelte Fennema d6deb062aa Add shard rebalancer stubs 2019-09-12 16:40:25 +02:00
Jelte Fennema 58012054c9 Add an extra advisory lock tag class 2019-09-12 16:40:25 +02:00
Jelte Fennema eb7e45d556 Make LookupNodeForGroup extern 2019-09-12 16:40:25 +02:00
Jelte Fennema 257406fda7 Fix ArrayObjectCount for zero sized arrays 2019-09-12 16:40:25 +02:00
Jelte Fennema de5174f763 include postgres.h into some of our .h files to silence warnings 2019-09-12 16:40:25 +02:00
Jelte Fennema 4ebdf5989b Add check-minimal to test Makefile 2019-09-12 16:40:25 +02:00
Onder Kalaci 0b0c779c77 Introduce the concept of Local Execution
/*
 * local_executor.c
 *
 * The scope of the local execution is locally executing the queries on the
 * shards. In other words, local execution does not deal with any local tables
 * that are not shards on the node that the query is being executed. In that sense,
 * the local executor is only triggered if the node has both the metadata and the
 * shards (e.g., only Citus MX worker nodes).
 *
 * The goal of the local execution is to skip the unnecessary network round-trip
 * happening on the node itself. Instead, identify the locally executable tasks and
 * simply call PostgreSQL's planner and executor.
 *
 * The local executor is an extension of the adaptive executor. So, the executor uses
 * adaptive executor's custom scan nodes.
 *
 * One thing to note is that Citus MX is only supported with replication factor = 1, so
 * keep that in mind while continuing the comments below.
 *
 * On the high level, there are 3 slightly different ways of utilizing local execution:
 *
 * (1) Execution of local single shard queries of a distributed table
 *
 *      This is the simplest case. The executor kicks at the start of the adaptive
 *      executor, and since the query is only a single task the execution finishes
 *      without going to the network at all.
 *
 *      Even if there is a transaction block (or recursively planned CTEs), as long
 *      as the queries hit the shards on the same node, the local execution will kick in.
 *
 * (2) Execution of local single queries and remote multi-shard queries
 *
 *      The rule is simple. If a transaction block starts with a local query execution,
 *      all the other queries in the same transaction block that touch any local shard
 *      have to use the local execution. Although this sounds restrictive, we prefer to
 *      implement it this way, otherwise we'd end up with scenarios as complex as the
 *      ones we have in connection management due to foreign keys.
 *
 *      See the following example:
 *      BEGIN;
 *          -- assume that the query is executed locally
 *          SELECT count(*) FROM test WHERE key = 1;
 *
 *          -- at this point, all the shards that reside on the
 *          -- node are executed locally one by one. After those finish,
 *          -- the remaining tasks are handled by adaptive executor
 *          SELECT count(*) FROM test;
 *
 *
 * (3) Modifications of reference tables
 *
 *		Modifications to reference tables have to be executed on all nodes. So, after the
 *		local execution, the adaptive executor keeps continuing the execution on the other
 *		nodes.
 *
 *		Note that for read-only queries, after the local execution, there is no need to
 *		kick in adaptive executor.
 *
 *  There are also a few limitations/trade-offs that are worth mentioning. First, the
 *  local execution on multiple shards might be slow because the execution has to
 *  happen one task at a time (e.g., no parallelism). Second, if a transaction
 *  block/CTE starts with a multi-shard command, we do not use local query execution
 *  since local execution is sequential. Basically, we do not want to lose parallelism
 *  across local tasks by switching to local execution. Third, the local execution
 *  currently only supports queries. In other words, any utility command like TRUNCATE
 *  fails if the command is executed after a local execution inside a transaction block.
 *  Fourth, the local execution cannot be mixed with executors other than adaptive,
 *  namely the task-tracker, real-time and router executors. Finally, related to the
 *  previous item, the COPY command cannot be mixed with local execution in a
 *  transaction. The implication of that is that no part of INSERT..SELECT via the
 *  coordinator can happen via the local execution.
 */
2019-09-12 11:51:25 +02:00
Marco Slot 810aca8d41 Drop foreign key from pg_dist_poolinfo to pg_dist_node 2019-09-10 09:52:19 +02:00
SaitTalhaNisanci e132d579f2
Change --new-bindir flag description to be consistent (#2950) 2019-09-11 15:36:39 +03:00
SaitTalhaNisanci 0f170cb75f
Use variables instead of hardcoded tmp dirs (#2944) 2019-09-11 13:25:18 +03:00
Onder Kalaci 485189c0b6 Make sure that lost connections are handled properly
Before this patch, when a connection is lost, we'd have the following
situation:

    - Pop a task execution from readyQueue
    - Lost connection
    - Fail the session/pool. -> This step was not acting properly
      because we'd popped the task, but had not set it to
      session->currentTask yet

After the patch:

    - Pop a task execution from readyQueue
    - Immediately set it to session->currentTask
    - Lost connection
    - Fail the session/pool. -> At this step, failing the
      session would trigger query failures (or failovers)
      properly.
2019-09-10 17:54:27 +02:00
SaitTalhaNisanci d99deab7d9
Add upgrade postgres version test (#2940)
* Add creating a citus cluster script

Creating a citus cluster is automated.
Before running this script:
- Citus should be installed and its control file should be added to postgres. (make install)
- Postgres should be installed.

* Initialize upgrade test table and fill

* Finalize the layout of upgrade tests

Postgres upgrade function is added.
The newly added UDFs(citus_prepare_pg_upgrade, citus_finish_pg_upgrade) are used to
perform upgrade.

* Refactor upgrade test and add config file

* Add schedules for upgrade testing

* Use pg_regress for upgrade tests

pg_regress is used for creating a simple distributed table in
upgrade tests. After upgrading another schedule is used to verify
that the distributed table exists. Router and realtime queries are
used for verifying.

* Run upgrade tests as a postgres user in a temp dir

The postgres user is used for psql to be consistent when running tests.
A temp dir is created and its permissions are changed so that the
postgres user can access it. All psql commands are now run as the
postgres user.

The "Select * from t" query is changed to "Select * from t order by a"
so that the result is always in the same order.

* Add docopt and arguments for the upgrade script

The docopt dependency is added to parse flags in the script.
Some variable names are refactored.

* Add readme for upgrade tests

* Refactor upgrade tests

Use relative data path instead of absolute assuming that this script will
always be run from 'src/test/regress'
Remove 'citus-path' flag
Use specific version for docopt instead of *
Use named args in string formatting

* Resolve a security problem

Instead of using string formatting in subprocess.call, an argument
list is used; otherwise users could perform shell injection.
shell=True is removed from the subprocess call as its use is not
recommended.

* Add how the test works to readme

* Refactor some variables to be consistent

* Update upgrade script based on the reviews

It was possible that the postgres server would stay running even when the script
crashed. The atexit library is used to ensure that we always do a teardown where
we stop the databases.

Some formatting is done in the code for better readability.

A Config class is used instead of a dictionary.

A target for upgrade test is added to makefile.

Unused flags/functions/variables are removed.

* Format commands and remove unnecessary flag from readme
2019-09-10 17:56:04 +03:00
Philip Dubé b301cf628a Test worker_cleanup_job_schema_cache actually drops schemas 2019-09-05 16:52:24 +00:00
Philip Dubé 8979fd038b worker_check_invalid_arguments: invalid task/job ids 2019-09-05 16:52:24 +00:00
Philip Dubé 5f9e88b260 multi_multiuser: test that worker_merge_files_and_query doesn't allow privilege escalation 2019-09-05 16:52:24 +00:00
Philip Dubé a28b82d67d get_catalog_object_by_oid requires an extra parameter in pg12 2019-09-05 16:38:07 +00:00
Nils Dijk 511e715ee3
Remove early escape in walking pg_depend (#2930)
This is a bug that got in when we inlined the body of a function into this loop. Earlier revisions had two loops, hence a function that would be reused.

With a return instead of a continue, the list of dependencies being walked depended on the order in which we found them in pg_depend. This became apparent during pg12 compatibility work. The order of entries in pg12 was luckily different, causing a random test to fail due to this return.

By changing it to a continue we only skip the entries that we don’t want to follow instead of skipping all entries that happen to be found later.

Side fix: more stable isolation tests around ensuring dependencies
2019-09-05 18:03:34 +02:00
Philip Dubé bdd30bb181 Don't allow distributing by a generated column 2019-09-04 14:50:17 +00:00
Philip Dubé 41dca121e2 Support GENERATED ALWAYS AS STORED 2019-09-04 14:50:17 +00:00
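Taken together, the two commits above amount to behaviour like the following sketch; the table and column names are illustrative:

```
CREATE TABLE items (
    id bigint,
    price numeric,
    price_with_tax numeric GENERATED ALWAYS AS (price * 1.07) STORED
);
SELECT create_distributed_table('items', 'id');             -- supported
SELECT create_distributed_table('items', 'price_with_tax'); -- rejected
```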
Nils Dijk 936d546a3c
Refactor Ensure Schema Exists to Ensure Dependencies Exist (#2882)
DESCRIPTION: Refactor ensure schema exists to ensure dependencies exist

Historically we only supported schemas as table dependencies to be created on the workers before a table gets distributed. This PR puts infrastructure in place to walk pg_depend to figure out which dependencies to create on the workers. Currently only schemas are supported as objects to create before creating a table.

We also keep track of dependencies that have been created in the cluster. When we add a new node to the cluster we use this catalog to know which objects need to be created on the worker.

A side effect of knowing which objects are already distributed is that we no longer emit debug messages when creating schemas that already exist on the workers.
2019-09-04 14:10:20 +02:00
Philip Dubé 28d964240f Remove CheckForUpdates
https://reports.citusdata.com/v1/releases/latest
We haven't updated the version CheckForUpdates sees since 7.1.0
2019-09-03 21:11:25 +00:00
Philip Dubé 4d26829d50 Remove normalized_tests.lst, don't normalize check-vanilla 2019-09-03 17:25:00 +00:00
Philip Dubé da00c62eea create_distributed_table: include COLLATE on columns 2019-08-29 14:22:54 +00:00
Philip Dubé 32ef459025 backend_data.c: include max_wal_senders in calculating maxBackend, matches changes in pg12's InitializeMaxBackends 2019-08-28 21:24:33 +00:00
Jelte Fennema cbecf97c84
Move tuplestore setup to a helper function (#2898)
* Add tuplestore helpers

* More detailed error messages in tuplestore

* Add CreateTupleDescCopy to SetupTuplestore

* Use new SetupTuplestore helper function

* Remove unnecessary copy

* Remove comment about undefined behaviour
2019-08-27 09:11:08 +02:00
Philip Dubé eba3828ef7 ColocatedShardIntervalList: sort 2019-08-26 17:42:41 +00:00
Matthias Kurz fc069dc611 Test SET LOCAL propagation when GUC is used in RLS policy 2019-08-22 20:29:52 +00:00
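For context, a hedged sketch of the scenario exercised here; the table, policy, and setting names are illustrative, not taken from the test:

```
CREATE POLICY tenant_isolation ON events
    USING (tenant_id = current_setting('app.current_tenant')::int);
ALTER TABLE events ENABLE ROW LEVEL SECURITY;

BEGIN;
SET LOCAL app.current_tenant TO '42';
-- the SET LOCAL must propagate to the workers for the policy to see it
SELECT count(*) FROM events;
COMMIT;
```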
Philip Dubé 6b0d8ed83d SortList in FinalizedShardPlacementList, makes 3 failure tests consistent between 11/12 2019-08-22 19:30:56 +00:00
Philip Dubé 693d4695d7 Create a test 'pg12' for pg12 features & error on unsupported new features
Unsupported new features: COPY FROM WHERE, GENERATED ALWAYS AS, non-heap table access methods
2019-08-22 19:30:56 +00:00
Philip Dubé e84fcc0b12 Modify tests to be consistent between versions
Normalize
UNION to prevent optimization
Remove WITH OIDS
Sort ddl events
client_min_messages no longer accepts FATAL
2019-08-22 19:30:50 +00:00
Philip Dubé e5cd298a98 pg12 revised layout of FunctionCallInfoData
See a9c35cf85c

clang raises a warning due to FunctionCall2InfoData technically being variable sized
This is fine, as the struct is the size we want it to be. So silence the warning
2019-08-22 19:02:35 +00:00
Philip Dubé bee779e7d4 planner/distributed_planner.c: get_func_cost replaced with add_function_cost in pg12 2019-08-22 19:02:10 +00:00
Philip Dubé be3285828f Collations matter for hashing strings in pg12
See https://www.postgresql.org/docs/12/collation.html#COLLATION-NONDETERMINISTIC
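
For illustration, a minimal sketch of the pg12 feature that drives this, following the linked docs:

```
CREATE COLLATION case_insensitive (
    provider = icu,
    locale = 'und-u-ks-level2',
    deterministic = false
);
-- two byte-wise different strings can now compare equal, so hashing
-- the raw bytes no longer agrees with equality
SELECT 'abc' = 'ABC' COLLATE case_insensitive;  -- true
```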
2019-08-22 18:58:37 +00:00
Philip Dubé fe10ca453d Implement FileCompat to abstract pg12 requiring API consumer to track file offsets 2019-08-22 18:57:47 +00:00
Philip Dubé 018ad1c58e pg12: version_compat.h, tuples, oids, misc 2019-08-22 18:57:23 +00:00
Philip Dubé 9643ff580e Update commands/vacuum.c with pg12 changes
Adds support for SKIP_LOCKED, INDEX_CLEANUP, TRUNCATE
Removes broken assert
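
A sketch of the newly supported options; the table name is illustrative:

```
VACUUM (SKIP_LOCKED, INDEX_CLEANUP false, TRUNCATE false) dist_table;
```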
2019-08-22 18:56:54 +00:00
Philip Dubé 68c4b71f93 Fix up includes with pg12 changes 2019-08-22 18:56:21 +00:00
Philip Dubé fbc3e346e8 ruleutils_12.c
Produced this file by copying ruleutils_11.c,
then comparing postgres ruleutils.c changes between REL_11_STABLE & REL_12_STABLE
2019-08-22 18:56:05 +00:00
Hadi Moshayedi 6be1bacddd Fix distributed deadlock for TRUNCATE 2019-08-22 11:03:53 -07:00
Hadi Moshayedi a5b087c89b Support FKs between reference tables 2019-08-21 16:11:27 -07:00
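A sketch of what this enables; the table names are illustrative:

```
CREATE TABLE countries (code text PRIMARY KEY);
CREATE TABLE cities (name text, country_code text);
SELECT create_reference_table('countries');
SELECT create_reference_table('cities');
-- a foreign key between two reference tables is now allowed
ALTER TABLE cities ADD CONSTRAINT cities_country_fkey
    FOREIGN KEY (country_code) REFERENCES countries (code);
```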
Hadi Moshayedi a3578a6e60 Sort load_shard_placement_array by worker name/port 2019-08-21 14:35:05 -07:00
Philip Dubé 7bf7e41594 commands/index.c: Fix assertion typo 2019-08-21 18:54:05 +00:00
Philip Dubé f4b90419ae Raise an error when REINDEX TABLE or INDEX is invoked on a distributed relation 2019-08-21 17:03:14 +00:00
Philip Dubé db5a7f49a7 Task Tracker: fix error being copy pasted from above block 2019-08-21 15:44:01 +00:00
Philip Dubé f62d4a6712 citus_rm_job_directory for multi_query_directory_cleanup 2019-08-19 17:04:42 +00:00
Philip Dubé 9777f22e1e Avoid invalid array accesses to partitionFileArray 2019-08-19 17:04:42 +00:00
Philip Dubé f4ca02664a single_shard_commit_protocol: GUC_NO_SHOW_ALL 2019-08-18 12:54:32 +00:00
Hadi Moshayedi c582eb89c8 Add some missing locks. 2019-08-15 12:34:31 -07:00
Philip Dubé f4e513b3d4 Introduce citus.single_shard_commit_protocol for if users want 1PC on writes to replicas 2019-08-15 18:49:40 +00:00
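Opting in would look like the following sketch, assuming the GUC accepts the same '1pc'/'2pc' values as citus.multi_shard_commit_protocol:

```
SET citus.single_shard_commit_protocol TO '1pc';
```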
Philip Dubé cd951fa9ca Avoid multiple pg_dist_colocation records being created for reference tables
master_deactivate_node is updated to decrement the replication factor
Otherwise deactivation could have create_reference_table produce a second record

UpdateColocationGroupReplicationFactor is renamed UpdateColocationGroupReplicationFactorForReferenceTables
& the implementation looks up the record based on distributioncolumntype == InvalidOid, rather than by id
Otherwise the record's replication factor fails to be maintained when there are no reference tables
2019-08-13 17:21:02 +00:00
Nils Dijk be6b7bec69
Add UDF citus_(prepare|finish)_pg_upgrade to aid with upgrading citus (#2877)
DESCRIPTION: Add functions to help with postgres upgrades

Currently there is [a list of manual steps](https://docs.citusdata.com/en/v8.2/admin_guide/upgrading_citus.html?highlight=upgrade#upgrading-postgresql-version-from-10-to-11) to perform during a postgres upgrade. These steps guarantee our catalog tables are kept and counter values are maintained across upgrades.

Having more than 1 command in our docs for users to manually execute during upgrades is error prone for both the user and our docs. There are already 2 catalog tables that have been introduced to citus but have not been added to our docs for backing up during upgrades (`pg_authinfo` and `pg_dist_poolinfo`).

As we add more functionality to citus we run into situations where there are more steps required either before or after the upgrade. At the same time, when we move catalog tables to a place where the contents will be maintained automatically during upgrades we could have fewer steps in our docs. This would grow into a hard-to-maintain matrix of citus versions and steps to be performed.

Instead we could take ownership of these steps within the extension itself. This PR introduces two new functions for the user to use instead of long lists of error prone instructions to follow.
 - `citus_prepare_pg_upgrade`
    This function should be called by the user right before shutting down the cluster. This will ensure all citus catalog tables are backed up in a location where the information will be retained during an upgrade.
- `citus_finish_pg_upgrade`
    This function should be called right after a pg_upgrade of the cluster. This will restore the catalog tables to the state before the upgrade happened.

Both functions need to be executed on the coordinator and on all the workers, in the same fashion as our current documentation instructs.

There are two known problems with this function in its current form, which are also problems with our docs. We should schedule time in the future to improve on this, but having it automated now is better as we are about to add extra steps to take after upgrades.
 - When you install citus in a clean cluster we do enable ssl for communication between the coordinator and the workers. If an upgrade to a clean cluster is performed we do not set up ssl on the new cluster, causing the communication to fail.
 - There are no automated tests added in this PR to execute an upgrade test during every build.
    Our current test infrastructure does not allow for 2 versions of postgres to exist in the same environment. We will need to invest time to create a new testing harness that could run the following scenario:
      1. Create cluster
      2. Run extensible scripts to execute arbitrary statements on this cluster
      3. Perform an upgrade by preparing, upgrading and finishing
      4. Run extensible scripts to verify all objects created by earlier scripts exists in correct form in the upgraded cluster

    Given the non-trivial amount of work involved for such a suite, I'd like to land this before we have automated testing.

On a side note: as the reviewer noticed, the tables created in the public namespace are not visible in `psql` with `\d`. The backup catalog tables have the same names as the tables in `pg_catalog`. Due to postgres internals, `pg_catalog` is first in the search path and therefore the non-qualified name would always resolve to `pg_catalog.pg_dist_*`. Internally this is called a non-visible table, as it would resolve to a different table without a qualified name. Only visible tables are shown with `\d`.
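
In short, the intended usage looks like this (both functions take no arguments):

```
-- on the coordinator and on every worker, right before stopping the old server:
SELECT citus_prepare_pg_upgrade();
-- ... run pg_upgrade and start the new server ...
-- then, again on the coordinator and on every worker:
SELECT citus_finish_pg_upgrade();
```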
2019-08-13 15:53:10 +02:00
Hadi Moshayedi 009d8b7401 Some cleanup 2019-08-12 15:38:52 -07:00
Philip Dubé 5459c01956 multi_partitioning_utils: version_above_ten 2019-08-09 15:25:59 +00:00
Philip Dubé e0f19fb58c multi_partitioning_1.out 2019-08-09 15:25:59 +00:00
Philip Dubé 5e835e7565 Fix multi_repair_shards. There's already a group/shardid entry, pg11 gives us back the inserted one, pg12 gives us the preexisting one 2019-08-09 15:25:59 +00:00
Philip Dubé 66ce2d2d2d Materialize c1 to keep subplan ids in sync 2019-08-09 15:25:59 +00:00
Philip Dubé 9065ef429c foreign_key_to_reference_table: terse to avoid differing order of drop cascade details 2019-08-09 15:25:59 +00:00
Philip Dubé 0d9e5bde9c window_functions: 'ORDER BY time' when using lag(time) & coordinator_plan 2019-08-09 15:25:59 +00:00
Philip Dubé 7992077fd9 multi_modifying_xacts: don't differ in output if reference table select tries broken worker first 2019-08-09 15:25:59 +00:00
Philip Dubé 546b71ac18 multi_router_planner: be terse for ctes with false wheres 2019-08-09 15:25:59 +00:00
Philip Dubé a523a5b773 multi_null_minmax_value_pruning: no versioning & coordinator_plan 2019-08-09 15:25:59 +00:00
Philip Dubé 871dabdc63 Force CTE materialization in pg12 2019-08-09 15:25:59 +00:00
Philip Dubé 667c67891e intermediate_results: COSTS OFF 2019-08-09 15:25:59 +00:00
Philip Dubé b2ea806d8a extra_float_digits=0 2019-08-09 15:25:59 +00:00
Philip Dubé 705d1bf0e0 Use PG_JOB_CACHE_DIR 2019-08-09 15:25:59 +00:00
Onder Kalaci 060ac11476 Do not record relation accesses unnecessarily
Before this commit, we recorded the relation accesses in 3 different
places
    - FindPlacementListConnection         -- applies to all executors in a tx block
    - StartPlacementExecutionOnSession()  -- adaptive executor only
    - StartPlacementListConnection()      -- router/real-time only

This is different from Citus 8.2, and could lead to query execution times
increasing considerably for multi-shard commands in transaction blocks
on partitioned tables.

Benchmarks:

```
1+8 c5.4xlarge cluster

Empty distributed partitioned table with 365 partitions: https://gist.github.com/onderkalaci/1edace4ed6bd6f061c8a15594865bb51#file-partitions_365-sql

./pgbench -f /tmp/multi_shard.sql -c10 -j10 -P 1 -T 120 postgres://citus:w3r6KLJpv3mxe9E-NIUeJw@c.fy5fkjcv45vcepaogqcaskmmkee.db.citusdata.com:5432/citus?sslmode=require

cat  /tmp/multi_shard.sql
BEGIN;
	DELETE FROM collections_list;
	DELETE FROM collections_list;
	DELETE FROM collections_list;
COMMIT;
cat  /tmp/single_shard.sql
BEGIN;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;
COMMIT;

cat  /tmp/mix.sql
BEGIN;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;

	DELETE FROM collections_list;
	DELETE FROM collections_list;
	DELETE FROM collections_list;
COMMIT;
```

The table shows `latency average` of pgbench runs explained above, so we have a pretty solid improvement even over 8.2.2.

| Test  | Citus 8.2.2  |  Citus 8.3.1   | Citus 8.3.2 (this branch)  | Citus 8.3.1 (FKEYs disabled via GUC)  |
| ------------- | ------------- | ------------- |------------- | ------------- |
|multi_shard |  2370.083 ms  |3605.040 ms |1324.094 ms |1247.255 ms  |
| single_shard  | 85.338 ms  |120.934 ms  |73.216 ms  | 78.765 ms |
| mix  | 2434.459 ms | 3727.080 ms  |1306.456 ms  | 1280.326 ms |
2019-08-08 18:42:08 +02:00
Onder Kalaci 35ee896f3d Get rid of an unnecessary parameter
targetPoolSize parameter for ExecuteUtilityTaskListWithoutResults
becomes obsolete, just remove it.
2019-08-07 19:35:56 +02:00
Onder Kalaci b2e01d0745 Refactor switching to sequential mode
We don't need to wait until the execution. As soon as we realize
that we need sequential execution, we should do it.
2019-08-07 19:35:56 +02:00
Hadi Moshayedi b1ab805ce2 Fix a typo in foreign_key_restriction_enforcement 2019-08-02 16:06:52 -07:00
Philip Dubé b77c52f95b PlanRouterQuery: don't store list of list of shard intervals in relationShardList 2019-08-02 14:08:57 +00:00
Philip Dubé fdc0ef6392 Adaptive executor: use 2PC when replication_factor > 1 2019-08-01 23:55:12 +00:00
Philip Dubé 19bcb1b4f7 multi_modifications: extend to demonstrate issue in adaptive executor 2019-08-01 23:55:04 +00:00
Philip Dubé 064bd66a20 Avoid segfault in logging queries 2019-07-31 15:28:46 +00:00
Philip Dubé 3982b4635f CompareShardIntervals: if intervals are equal, compare id. Works around sort being unstable 2019-07-26 16:13:36 +00:00
Philip Dubé 0e233c63a3 multi_colocation_utils: sort by nodeport, not placementid
multi_copy: replace smgr with aclitem, smgr is removed in pg12
2019-07-25 14:33:43 +00:00
Marco Slot e2bc09838e Use ereport instead of elog in adaptive executor 2019-07-23 20:40:32 +02:00
Marco Slot bd111366b0 Skip CheckConnectionTimeout when checkForPoolTimeout is false 2019-07-23 20:40:32 +02:00
Marco Slot a3811b1e55 Avoid FindWorkerNode calls in adaptive executor 2019-07-23 20:40:32 +02:00
Marco Slot 4444d92dbc Set initial pool size to cached connection count 2019-07-23 20:40:32 +02:00
Marco Slot 4c0c33365e Avoid creating a redundant event set at the start 2019-07-23 20:40:32 +02:00
Marco Slot 32e7a80960 Avoid unnecessary calls to PQconsumeInput 2019-07-23 20:40:32 +02:00
Marco Slot 71ad5c095b Use ModifyWaitEvent when only wait flags changed 2019-07-23 20:40:32 +02:00
Philip Dubé 50144b75d0 Add check-empty to testing Makefile
Don't create functions multiple times
Move ALTER TABLEs to their declaration
Remove DROP FUNCTIONS IF EXISTS, OR REPLACE
2019-07-24 11:03:54 -07:00
Philip Dubé acbaa38a62 Squash migrations for versions 5/6, don't use WITH OIDS 2019-07-24 11:03:29 -07:00
Hanefi Onaldi 8127297999 update workerNodeList after sorting 2019-07-23 20:57:07 +00:00
Philip Dubé 6598c68993 Fix multi_prune_shard_list & don't set next_shard_id unnecessarily in multi_null_minmax_value_pruning 2019-07-23 19:44:18 +00:00
Marco Slot efbe58eab2 Fix SQL schema version, we skipped 8.3 2019-07-17 16:05:25 +02:00
Philip Dubé 0915027389 DistributedPlan: replace operation with modLevel
This causes no behavioral changes; it only organizes the code better to implement modifying CTEs

Also rename ExtactInsertRangeTableEntry to ExtractResultRelationRTE,
as the source of this function didn't match the documentation

Remove Task's upsertQuery in favor of ROW_MODIFY_NONCOMMUTATIVE

Split up AcquireExecutorShardLock into more internal functions

Tests: Normalize multi_reference_table multi_create_table_constraints
2019-07-16 13:58:18 -07:00
Hanefi Onaldi 0bdec52761
Fix default_version in citus.control file (#2840) 2019-07-11 14:24:51 +03:00
Philip Dubé befd0caddd Tests: normalize sql_procedure and custom_aggregate_support
Also fix typo in multi_insert_select
2019-07-10 14:36:17 +00:00
Hanefi Onaldi 5a6eba6ba9
Bump Citus to 8.4devel 2019-07-10 15:26:10 +03:00
Nils Dijk 791cc26a86
Fix an issue with subquery map merge jobs as non-root
Also automated all manual tests around multi user isolation for internal citus udf's

automate upgrade_to_reference_table tests
add negative tests for lock_relation_if_exists
add tests for permissions on worker_cleanup_job_schema_cache
add tests for worker_fetch_partition_file
add tests for worker_merge_files_into_table
fix problem with worker_merge_files_and_run_query when run as non-super user and add tests for behaviour
2019-07-10 12:40:05 +02:00
Hadi Moshayedi 46608e42f9 Add hyperscale tutorial to the regression tests. 2019-07-10 10:47:55 +02:00
Hadi Moshayedi 91d8a41ecd Don't modify cache entry in RelationShardListForShardCreate() 2019-07-09 12:44:48 -07:00
Marco Slot 70434bc716 Increase slow start time in test to make valgrind tests pass 2019-07-08 06:04:13 +02:00
Hadi Moshayedi 032167c553 Fix Assert() in ProcessVariableSetStmt() 2019-07-05 14:11:22 -07:00
Marco Slot 07d2266e11 Fix RESET and other types of SET 2019-07-05 19:30:48 +02:00
Marco Slot 97334ff1ec Copy WorkerNode before returning in FindWorkerNode 2019-07-05 09:35:53 +02:00
Hadi Moshayedi 5d59aab38d Increase valgrind's max-stackframe 2019-07-04 14:19:41 +02:00
Hadi Moshayedi d233887d68 Fix multi_extension in check-multi-vg 2019-07-04 13:03:46 +02:00
Hadi Moshayedi 47aa95d00d Fix a NULL dereference. 2019-07-03 16:26:49 -07:00
Hadi Moshayedi 805a2ac602 Fix a use after free in adaptive executor 2019-07-02 10:12:13 -07:00
Marco Slot d6c667946c Fix citus_executor_name mapping by reimplementing it in C 2019-06-29 22:38:29 +02:00
Marco Slot 70c0d96507 Track partition key for adaptive executor in CitusEndScan 2019-06-29 21:37:15 +02:00
Önder Kalacı 40da78c6fd
Introduce the adaptive executor (#2798)
With this commit, we're introducing the Adaptive Executor. 


The commit message consists of two distinct sections. The first part explains
how the executor works. The second part consists of the commit messages of
the individual smaller commits that resulted in this commit. The readers
can search for the each of the smaller commit messages on 
https://github.com/citusdata/citus and can learn more about the history
of the change.

/*-------------------------------------------------------------------------
 *
 * adaptive_executor.c
 *
 * The adaptive executor executes a list of tasks (queries on shards) over
 * a connection pool per worker node. The results of the queries, if any,
 * are written to a tuple store.
 *
 * The concepts in the executor are modelled in a set of structs:
 *
 * - DistributedExecution:
 *     Execution of a Task list over a set of WorkerPools.
 * - WorkerPool
 *     Pool of WorkerSessions for the same worker which opportunistically
 *     executes "unassigned" tasks from a queue.
 * - WorkerSession:
 *     Connection to a worker that is used to execute "assigned" tasks
 *     from a queue and may execute unassigned tasks from the WorkerPool.
 * - ShardCommandExecution:
 *     Execution of a Task across a list of placements.
 * - TaskPlacementExecution:
 *     Execution of a Task on a specific placement.
 *     Used in the WorkerPool and WorkerSession queues.
 *
 * Every connection pool (WorkerPool) and every connection (WorkerSession)
 * have a queue of tasks that are ready to execute (readyTaskQueue) and a
 * queue/set of pending tasks that may become ready later in the execution
 * (pendingTaskQueue). The tasks are wrapped in a ShardCommandExecution,
 * which keeps track of the state of execution and is referenced from a
 * TaskPlacementExecution, which is the data structure that is actually
 * added to the queues and describes the state of the execution of a task
 * on a particular worker node.
 *
 * When the task list is part of a bigger distributed transaction, the
 * shards that are accessed or modified by the task may have already been
 * accessed earlier in the transaction. We need to make sure we use the
 * same connection since it may hold relevant locks or have uncommitted
 * writes. In that case we "assign" the task to a connection by adding
 * it to the task queue of specific connection (in
 * AssignTasksToConnections). Otherwise we consider the task unassigned
 * and add it to the task queue of a worker pool, which means that it
 * can be executed over any connection in the pool.
 *
 * A task may be executed on multiple placements in case of a reference
 * table or a replicated distributed table. Depending on the type of
 * task, it may not be ready to be executed on a worker node immediately.
 * For instance, INSERTs on a reference table are executed serially across
 * placements to avoid deadlocks when concurrent INSERTs take conflicting
 * locks. At the beginning, only the "first" placement is ready to execute
 * and therefore added to the readyTaskQueue in the pool or connection.
 * The remaining placements are added to the pendingTaskQueue. Once
 * execution on the first placement is done the second placement moves
 * from pendingTaskQueue to readyTaskQueue. The same approach is used to
 * fail over read-only tasks to another placement.
 *
 * Once all the tasks are added to a queue, the main loop in
 * RunDistributedExecution repeatedly does the following:
 *
 * For each pool:
 * - ManageWorkerPool evaluates whether to open additional connections
 *   based on the number of unassigned tasks that are ready to execute
 *   and the targetPoolSize of the execution.
 *
 * Poll all connections:
 * - We use a WaitEventSet that contains all (non-failed) connections
 *   and is rebuilt whenever the set of active connections or any of
 *   their wait flags change.
 *
 *   We almost always check for WL_SOCKET_READABLE because a session
 *   can emit notices at any time during execution, but it will only
 *   wake up WaitEventSetWait when there are actual bytes to read.
 *
 *   We check for WL_SOCKET_WRITEABLE just after sending bytes in case
 *   there is not enough space in the TCP buffer. Since a socket is
 *   almost always writable we also use WL_SOCKET_WRITEABLE as a
 *   mechanism to wake up WaitEventSetWait for non-I/O events, e.g.
 *   when a task moves from pending to ready.
 *
 * For each connection that is ready:
 * - ConnectionStateMachine handles connection establishment and failure
 *   as well as command execution via TransactionStateMachine.
 *
 * When a connection is ready to execute a new task, it first checks its
 * own readyTaskQueue and otherwise takes a task from the worker pool's
 * readyTaskQueue (on a first-come-first-serve basis).
 *
 * In cases where the tasks finish quickly (e.g. <1ms), a single
 * connection will often be sufficient to finish all tasks. It is
 * therefore not necessary that all connections are established
 * successfully or open a transaction (which may be blocked by an
 * intermediate pgbouncer in transaction pooling mode). It is therefore
 * essential that we take a task from the queue only after opening a
 * transaction block.
 *
 * When a command on a worker finishes or the connection is lost, we call
 * PlacementExecutionDone, which then updates the state of the task
 * based on whether we need to run it on other placements. When a
 * connection fails or all connections to a worker fail, we also call
 * PlacementExecutionDone for all queued tasks to try the next placement
 * and, if necessary, mark shard placements as inactive. If a task fails
 * to execute on all placements, the execution fails and the distributed
 * transaction rolls back.
 *
 * For multi-row INSERTs, tasks are executed sequentially by
 * SequentialRunDistributedExecution instead of in parallel, which allows
 * a high degree of concurrency without high risk of deadlocks.
 * Conversely, multi-row UPDATE/DELETE/DDL commands take aggressive locks,
 * which forbid concurrency but allow parallelism without high risk
 * of deadlocks. Note that this is unrelated to SEQUENTIAL_CONNECTION,
 * which indicates that we should use at most one connection per node, but
 * can run tasks in parallel across nodes. This is used when there are
 * writes to a reference table that has foreign keys from a distributed
 * table.
 *
 * Execution finishes when all tasks are done, the query errors out, or
 * the user cancels the query.
 *
 *-------------------------------------------------------------------------
 */



All the commits involved here:
* Initial unified executor prototype

* Latest changes

* Fix rebase conflicts to master branch

* Add missing variable for assertion

* Ensure that master_modify_multiple_shards() returns the affectedTupleCount

* Adjust intermediate result sizes

The real-time executor uses the COPY command to get the results
from the worker nodes. The unified executor avoids that, which
results in less data transfer. Simply adjust the tests to lower
sizes.

* Force one connection per placement (or co-located placements) when requested

The existing executors (real-time and router) always open 1 connection per
placement when parallel execution is requested.

That might be useful under certain circumstances:

(a) The user wants to utilize as many CPUs as possible on the workers per
distributed query
(b) The user has a transaction block which involves a COPY command

Also, lots of regression tests rely on these execution semantics.
So, we'd enable a few of the tests with this change as well.

* Force parameters to be resolved before using them

For the details, see PostgreSQL's copyParamList()

* Unified executor sorts the returning output

* Ensure that unified executor doesn't ignore sequential execution of DDLJob's

Certain DDL commands, mainly creating foreign keys to reference tables,
should be executed sequentially. Otherwise, we'd end up with a
self-distributed deadlock.

To overcome this situation, we set a flag `DDLJob->executeSequentially`
and execute it sequentially. Note that we have to do this because
the command might not be called within a transaction block, and
we cannot call `SetLocalMultiShardModifyModeToSequential()`.

This fixes at least two tests: multi_insert_select_on_conflit.sql and
multi_foreign_key.sql

Also, I wouldn't mind scattering local `targetPoolSize` variables within
the code. The reason is that we'll soon have a GUC (or a global
variable based on a GUC) that'd set the pool size. In that case, we'd
simply replace `targetPoolSize` with the global variables.

* Fix 2PC conditions for DDL tasks

* Improve closing connections that are not fully established in unified execution

* Support foreign keys to reference tables in unified executor

The idea for supporting foreign keys to reference tables is simple:
Keep track of the relation accesses within a transaction block.
    - If a parallel access happens on a distributed table which
      has a foreign key to a reference table, one cannot modify
      the reference table in the same transaction. Otherwise,
      we're very likely to end up with a self-distributed deadlock.
    - If an access to a reference table happens, and then a parallel
      access to a distributed table (which has a fkey to the reference
      table) happens, we switch to sequential mode.

Unified executor misses the function calls that marks the relation
accesses during the execution. Thus, simply add the necessary calls
and let the logic kick in.

* Make sure to close the failed connections after the execution

* Improve comments

* Fix savepoints in unified executor.

* Rebuild the WaitEventSet only when necessary

* Unclaim connections on all errors.

* Improve failure handling for unified executor

   - Implement the notion of errorOnAnyFailure. This is similar to
     Critical Connections that the connection management APIs provide
   - If the nodes inside a modifying transaction expand, activate 2PC
   - Fix few bugs related to wait event sets
   - Mark placements INACTIVE during the execution as much as possible,
     as opposed to doing it in the COMMIT handler
   - Fix few bugs related to scheduling next placement executions
   - Improve decision on when to use 2PC

Improve the logic to start a transaction block for distributed transactions

- Make sure that only reference table modifications are always
  executed with distributed transactions
- Make sure that stored procedures and functions are executed
  with distributed transactions

* Move waitEventSet to DistributedExecution

This could also be local to RunDistributedExecution(), but in that case
we had to mark it as "volatile" to avoid PG_TRY()/PG_CATCH() issues, and
cast it to non-volatile when doing WaitEventSetFree(). We thought that
would make code a bit harder to read than making this non-local, so we
move it here. See comments for PG_TRY() in postgres/src/include/elog.h
and "man 3 siglongjmp" for more context.

* Fix multi_insert_select test outputs

Two things:
   1) One complex transaction block is now supported. Simply update
      the test output
   2) Due to the dynamic nature of the unified executor, the order of
      the errors coming from the shards might change (e.g., all of
      the queries on the shards would fail, but which one appears
      in the error message?). To fix that, we simply added it to
      our shardId normalization tool which runs just before diff.

* Fix subquery_and_cte test

The error message is updated from:
	failed to execute task
To:
        more than one row returned by a subquery or an expression

which is a lot clearer to the user.

* Fix intermediate_results test outputs

Simply update the error message from:
	could not receive query results
to
	result "squares" does not exist

which makes a lot more sense.

* Fix multi_function_in_join test

The error messages update from:
     Failed to execute task XXX
To:
     function f(..) does not exist

* Fix multi_query_directory_cleanup test

The unified executor does not create any intermediate files.

* Fix with_transactions test

A test case that just started to work fine

* Fix multi_router_planner test outputs

The error message is updated from:
	Could not receive query results
To:
	Relation does not exists

which is a lot clearer for the users

* Fix multi_router_planner_fast_path test

The error message is updated from:
	Could not receive query results
To:
	Relation does not exists

which is a lot clearer for the users

* Fix isolation_copy_placement_vs_modification by disabling select_opens_transaction_block

* Fix ordering in isolation_multi_shard_modify_vs_all

* Add executor locks to unified executor

* Make sure to allocate enough WaitEvents

The previous code was missing the waitEvents for the latch and
postmaster death.

* Fix rebase conflicts for master rebase

* Make sure that TRUNCATE relies on unified executor

* Implement true sequential execution for multi-row INSERTS

Execute the individual tasks one by one. Note that this is different from the
MultiShardConnectionType == SEQUENTIAL_CONNECTION case (e.g., sequential execution
mode). In that case, running the tasks across the nodes in parallel is acceptable
and implemented in that way.

However, the executions that are qualified here would perform poorly if the
tasks across the workers are executed in parallel. We currently qualify only
one class of distributed queries here, multi-row INSERTs. If we do not enforce
true sequential execution, concurrent multi-row upserts could easily form
a distributed deadlock when the upserts touch the same rows.
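
A sketch of the class of statements qualified here; the table name and the assumption that `key` is the distribution column with a unique constraint are illustrative:

```
INSERT INTO dist_table (key, value)
VALUES (1, 'a'), (2, 'b'), (3, 'c')
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;
-- two of these running concurrently could deadlock if their per-shard
-- tasks ran in parallel; they now run one task at a time
```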

* Remove SESSION_LIFESPAN flag in unified_executor

* Apply failure test updates

We've changed the failure behaviour a bit, and also the error messages
that show up to the user. This PR covers the majority of the updates.

* Unified executor honors citus.node_connection_timeout

With this commit, unified executor errors out if even
a single connection cannot be established within
citus.node_connection_timeout.

And, as a side effect this fixes failure_connection_establishment
test.
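
For example (the timeout value is illustrative; the GUC is in milliseconds):

```
SET citus.node_connection_timeout TO 5000;  -- error out after 5 seconds
```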

* Properly increment/decrement pool size variables

Before this commit, the idle and active connection
counts were not properly calculated.

* insert_select_executor goes through unified executor.

* Add missing file for task tracker

* Modify ExecuteTaskListExtended()'s signature

* Sort output of INSERT ... SELECT ... RETURNING

* Take partition locks correctly in unified executor

* Alternative implementation for force_max_query_parallelization

* Fix compile warnings in unified executor

* Fix style issues

* Decrement idleConnectionCount when idle connection is lost

* Always rebuild the wait event sets

In the previous implementation, on waitFlag changes, we were only
modifying the wait events. However, we've realized that it might
be an over-optimization since (a) we couldn't see any performance
benefits and (b) we saw some errors on failures, so we prefer to
disable it now.

* Make sure to allocate enough sized waitEventSet

With multi-row INSERTs, we might have more sessions than
task*workerCount after a few calls of RunDistributedExecution()
because the previous sessions would also be alive.

Instead, re-allocate events when the connection set changes.

* Implement SELECT FOR UPDATE on reference tables

On the master branch, we do two extra things on SELECT FOR UPDATE
queries on reference tables:
   - Acquire executor locks
   - Execute the query on all replicas

With this commit, we're implementing the same logic on the
new executor.
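
A sketch, with an illustrative reference table name:

```
BEGIN;
-- acquires the executor locks and runs on all replicas of the reference table
SELECT * FROM ref_table WHERE id = 42 FOR UPDATE;
UPDATE ref_table SET value = 'updated' WHERE id = 42;
COMMIT;
```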

* SELECT FOR UPDATE opens transaction block even if SelectOpensTransactionBlock disabled

Otherwise, users would be very confused and their logic is very likely
to break.

* Fix build error

* Fix the newConnectionCount calculation in ManageWorkerPool

* Fix rebase conflicts

* Fix minor test output differences

* Fix citus indent

* Remove duplicate sorts that is added with rebase

* Create distributed table via executor

* Fix wait flags in CheckConnectionReady

* failure_savepoints output for unified executor.

* failure_vacuum output (pg 10) for unified executor.

* Fix WaitEventSetWait timeout in unified executor

* Stabilize failure_truncate test output

* Add an ORDER BY to multi_upsert

* Fix regression test outputs after rebase to master

* Add executor.c comment

* Rename executor.c to adaptive_executor.c

* Do not schedule tasks if the failed placement is not ready to execute

Before the commit, we were blindly scheduling the next placement executions
even if the failed placement was not on the ready queue. Now, we ensure
that if the failed placement execution is on a failed pool or session where
the execution is on the pendingQueue, we do not schedule the next task,
because the other placement execution should already be running.

* Implement a proper custom scan node for adaptive executor

- Switch between the executors, add GUC to set the pool size
- Add non-adaptive regression test suites
- Enable CIRCLE CI for non-adaptive tests
- Adjust test output files

* Add slow start interval to the executor

* Expose max_cached_connection_per_worker to user

* Do not start slow when there are cached connections

* Consider ExecutorSlowStartInterval in NextEventTimeout

* Fix memory issues with ReceiveResults().

* Disable executor via TaskExecutorType

* Make sure to execute the tests with the other executor

* Use task_executor_type to enable-disable adaptive executor

* Remove useless code

* Adjust the regression tests

* Add slow start regression test

* Rebase to master

* Fix test failures in adaptive executor.

* Rebase to master - 2

* Improve comments & debug messages

* Set force_max_query_parallelization in isolation_citus_dist_activity

* Force max parallelization for creating shards when asked to use exclusive connection.

* Adjust the default pool size

* Expand description of max_adaptive_executor_pool_size GUC

* Update warnings in FinishRemoteTransactionCommit()

* Improve session clean up at the end of execution

Explicitly list all the states in which the execution might end;
otherwise, warn.

* Remove MULTI_CONNECTION_WAIT_RETRY which is not used at all

* Add more ORDER BYs to multi_mx_partitioning
2019-06-28 14:04:40 +02:00
Philip Dubé 4e54c1525d Isolation tests: consistently name COMMIT '-commit' 2019-06-27 07:32:39 +02:00
Hanefi Onaldi 4e08477fed Add test case for issue 2575 2019-06-26 17:12:28 +02:00
Hanefi Onaldi 7e8fd49b94 Create Schemas as superuser on all shard/table creation UDFs
- All the schema creations on the workers will now be via superuser connections
- If a shard is being repaired or a shard is replicated, we will create the
  schema only in the relevant worker; and in all the other cases where a schema
  creation is needed, we will block operations until we ensure the schema exists
  in all the workers
2019-06-26 17:12:28 +02:00
Philip Dubé aa0c47848e subquery_and_cte: test rejecting volatile ctes
Also update isolation_citus_dist_activity from after merge
2019-06-26 16:27:07 +02:00
Philip Dubé db7fdb1854 Router planner: bail on volatile functions in CTEs 2019-06-26 10:32:01 +02:00
Philip Dubé 5c62f9935a Router planner: reject SELECT FOR UPDATE ctes 2019-06-26 10:32:01 +02:00
Philip Dubé 18575ccfd3 Add tests to subquery_and_cte, update check-multi-mx expected results 2019-06-26 10:32:01 +02:00
Philip Dubé 77efec04a0 Router Planner: accept SELECT_CMD ctes in modification queries 2019-06-26 10:32:01 +02:00
Philip Dubé 84fe626378 multi_router_planner: refactor error propagation 2019-06-26 10:32:01 +02:00
Philip Dubé 9ed6dd5570 Ignore compile_commands.json, fix typo 2019-06-26 10:32:01 +02:00
Hadi Moshayedi 25a984bab4 Normalize multi_name_lengths. 2019-06-25 14:18:33 +02:00
Hadi Moshayedi 3d0a521295 Show just coordinator plan in some test outputs. 2019-06-24 12:24:30 +02:00
Onder Kalaci ad93d6feea Change the order of placement access added to the list
This is to make sure that the error messages related to foreign keys
to reference tables shows the exact placement access name instead of
SELECT.
2019-06-23 11:32:58 +02:00
Nils Dijk eb98f2d13a
Fix null pointer caused by partial initialization of ConnParamsHashEntry (#2789)
It has been reported that a null pointer dereference could be triggered in FreeConnParamsHashEntryFields. The likely cause is an error in GetConnParams, which would leave the cached ConnParamsHashEntry in a state that causes the null pointer dereference in a subsequent connection establishment to the same server. This has been simulated by inserting ereport(ERROR, ...) at certain places in the code.

Not only would ConnParamsHashEntry be in a state that would cause a crash, it was also leaking memory in the ConnectionContext due to the loss of pointers as they are only stored on the ConnParamsHashEntry at the end of the function.

This patch rewrites both GetConnParams, so that it stores pointers durably at every point in the code and an error cannot lose them, and FreeConnParamsHashEntryFields, so that it can clear half-initialised ConnParamsHashEntry structs in a safer manner.
2019-06-21 18:16:43 +02:00
Hanefi Onaldi 7a6eb2aba0
Fix one regression test that fails on enterprise (#2786)
GRANT queries are propagated on Enterprise. If a user attempts to
create a user and run a GRANT query before creating it on workers, we
fail. This issue does not happen in community as the user needs to run
the GRANTs on the workers manually.
2019-06-21 15:46:28 +03:00
Nils Dijk 5df1b49bed
Feature: optionally force master_update_node during failover (#2773)
When `master_update_node` is called to update a node's location, it waits for appropriate locks to become available. This is useful during normal operation, as new operations will be blocked until after the metadata update while running operations have time to finish.

When `master_update_node` is called after a node failure, it is less useful to wait for running operations to finish, as they can't. The lock being held indicates an operation that, once it attempts to commit, will fail because the machine has already failed. The downside is that the failover is postponed until the termination point of the operation. Users have observed this to take a significant amount of time, causing the rest of the system to appear unavailable.

With this patch it is possible in such situations to invoke `master_update_node` with 2 optional arguments:
 - `force` (bool, defaults to `false`): When called with true, the update of the metadata will be forced to proceed by terminating conflicting backends. A cancel is not enough, as the backend might be idle (e.g. an interactive session, or going back and forth between an application), therefore the more intrusive solution of termination is used here.
 - `lock_cooldown` (int, defaults to `10000`): This is the time in milliseconds before conflicting backends are terminated. This is to allow the backends to finish cleanly before terminating them. This allows the user to set an upper bound on the expected time to complete the metadata update, e.g. performing the failover.

The functionality is implemented by spawning a background worker that has the task of helping a certain backend in acquiring its locks. The backend is either terminated on successful execution of the metadata update, or once the memory context of the expression gets reset, eg. on a cancel of the statement.
2019-06-21 12:03:15 +02:00