Commit Graph

1081 Commits (c563e0825c6e564160aa007056681c102ebf1740)

Author SHA1 Message Date
Philip Dubé 871dabdc63 Force CTE materialization in pg12 2019-08-09 15:25:59 +00:00
Philip Dubé 667c67891e intermediate_results: COSTS OFF 2019-08-09 15:25:59 +00:00
Philip Dubé b2ea806d8a extra_float_digits=0 2019-08-09 15:25:59 +00:00
Onder Kalaci 060ac11476 Do not record relation accesses unnecessarily
Before this commit, we recorded the relation accesses in 3 different
places:
    - FindPlacementListConnection()       -- applies to all executors in tx blocks
    - StartPlacementExecutionOnSession()  -- adaptive executor only
    - StartPlacementListConnection()      -- router/real-time only

This is different from Citus 8.2, and could increase query execution times
considerably for multi-shard commands in transaction blocks
on partitioned tables.

Benchmarks:

```
1+8 c5.4xlarge cluster

Empty distributed partitioned table with 365 partitions: https://gist.github.com/onderkalaci/1edace4ed6bd6f061c8a15594865bb51#file-partitions_365-sql

./pgbench -f /tmp/multi_shard.sql -c10 -j10 -P 1 -T 120 postgres://citus:w3r6KLJpv3mxe9E-NIUeJw@c.fy5fkjcv45vcepaogqcaskmmkee.db.citusdata.com:5432/citus?sslmode=require

cat  /tmp/multi_shard.sql
BEGIN;
	DELETE FROM collections_list;
	DELETE FROM collections_list;
	DELETE FROM collections_list;
COMMIT;
cat  /tmp/single_shard.sql
BEGIN;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;
COMMIT;

cat  /tmp/mix.sql
BEGIN;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;
	DELETE FROM collections_list WHERE key = :aid;

	DELETE FROM collections_list;
	DELETE FROM collections_list;
	DELETE FROM collections_list;
COMMIT;
```

The table shows the `latency average` of the pgbench runs explained above; we have a pretty solid improvement even over 8.2.2.

| Test  | Citus 8.2.2  |  Citus 8.3.1   | Citus 8.3.2 (this branch)  | Citus 8.3.1 (FKEYs disabled via GUC)  |
| ------------- | ------------- | ------------- |------------- | ------------- |
|multi_shard |  2370.083 ms  |3605.040 ms |1324.094 ms |1247.255 ms  |
| single_shard  | 85.338 ms  |120.934 ms  |73.216 ms  | 78.765 ms |
| mix  | 2434.459 ms | 3727.080 ms  |1306.456 ms  | 1280.326 ms |
2019-08-08 18:42:08 +02:00
Onder Kalaci b2e01d0745 Refactor switching to sequential mode
We don't need to wait until execution time. As soon as we realize
that we need sequential execution, we should switch to it.
2019-08-07 19:35:56 +02:00
Hadi Moshayedi b1ab805ce2 Fix a typo in foreign_key_restriction_enforcement 2019-08-02 16:06:52 -07:00
Philip Dubé 19bcb1b4f7 multi_modifications: extend to demonstrate issue in adaptive executor 2019-08-01 23:55:04 +00:00
Philip Dubé 3982b4635f CompareShardIntervals: if intervals are equal, compare id. Works around sort being unstable 2019-07-26 16:13:36 +00:00
Philip Dubé 0e233c63a3 multi_colocation_utils: sort by nodeport, not placementid
multi_copy: replace smgr with aclitem, smgr is removed in pg12
2019-07-25 14:33:43 +00:00
Philip Dubé 50144b75d0 Add check-empty to testing Makefile
Don't create functions multiple times
Move ALTER TABLEs to their declaration
Remove DROP FUNCTIONS IF EXISTS, OR REPLACE
2019-07-24 11:03:54 -07:00
Philip Dubé acbaa38a62 Squash migrations for versions 5/6, don't use WITH OIDS 2019-07-24 11:03:29 -07:00
Philip Dubé 6598c68993 Fix multi_prune_shard_list & don't set next_shard_id unnecessarily in multi_null_minmax_value_pruning 2019-07-23 19:44:18 +00:00
Marco Slot efbe58eab2 Fix SQL schema version, we skipped 8.3 2019-07-17 16:05:25 +02:00
Philip Dubé 0915027389 DistributedPlan: replace operation with modLevel
This causes no behavioral changes, only organizes the code better to implement modifying CTEs

Also rename ExtractInsertRangeTableEntry to ExtractResultRelationRTE,
as the function's implementation didn't match its documentation

Remove Task's upsertQuery in favor of ROW_MODIFY_NONCOMMUTATIVE

Split up AcquireExecutorShardLock into more internal functions

Tests: Normalize multi_reference_table multi_create_table_constraints
2019-07-16 13:58:18 -07:00
Philip Dubé befd0caddd Tests: normalize sql_procedure and custom_aggregate_support
Also fix typo in multi_insert_select
2019-07-10 14:36:17 +00:00
Hanefi Onaldi 5a6eba6ba9 Bump Citus to 8.4devel 2019-07-10 15:26:10 +03:00
Nils Dijk 791cc26a86 Fix an issue with subquery map merge jobs as non-root
Also automated all manual tests around multi-user isolation for internal Citus UDFs

automate upgrade_to_reference_table tests
add negative tests for lock_relation_if_exists
add tests for permissions on worker_cleanup_job_schema_cache
add tests for worker_fetch_partition_file
add tests for worker_merge_files_into_table
fix problem with worker_merge_files_and_run_query when run as non-super user and add tests for behaviour
2019-07-10 12:40:05 +02:00
Hadi Moshayedi 46608e42f9 Add hyperscale tutorial to the regression tests. 2019-07-10 10:47:55 +02:00
Marco Slot 70434bc716 Increase slow start time in test to make valgrind tests pass 2019-07-08 06:04:13 +02:00
Marco Slot 07d2266e11 Fix RESET and other types of SET 2019-07-05 19:30:48 +02:00
Hadi Moshayedi 5d59aab38d Increase valgrind's max-stackframe 2019-07-04 14:19:41 +02:00
Hadi Moshayedi d233887d68 Fix multi_extension in check-multi-vg 2019-07-04 13:03:46 +02:00
Marco Slot d6c667946c Fix citus_executor_name mapping by reimplementing it in C 2019-06-29 22:38:29 +02:00
Önder Kalacı 40da78c6fd Introduce the adaptive executor (#2798)
With this commit, we're introducing the Adaptive Executor. 


The commit message consists of two distinct sections. The first part explains
how the executor works. The second part consists of the commit messages of
the individual smaller commits that resulted in this commit. Readers
can search for each of the smaller commit messages on
https://github.com/citusdata/citus and can learn more about the history
of the change.

/*-------------------------------------------------------------------------
 *
 * adaptive_executor.c
 *
 * The adaptive executor executes a list of tasks (queries on shards) over
 * a connection pool per worker node. The results of the queries, if any,
 * are written to a tuple store.
 *
 * The concepts in the executor are modelled in a set of structs:
 *
 * - DistributedExecution:
 *     Execution of a Task list over a set of WorkerPools.
 * - WorkerPool
 *     Pool of WorkerSessions for the same worker which opportunistically
 *     executes "unassigned" tasks from a queue.
 * - WorkerSession:
 *     Connection to a worker that is used to execute "assigned" tasks
 *     from a queue and may execute unassigned tasks from the WorkerPool.
 * - ShardCommandExecution:
 *     Execution of a Task across a list of placements.
 * - TaskPlacementExecution:
 *     Execution of a Task on a specific placement.
 *     Used in the WorkerPool and WorkerSession queues.
 *
 * Every connection pool (WorkerPool) and every connection (WorkerSession)
 * have a queue of tasks that are ready to execute (readyTaskQueue) and a
 * queue/set of pending tasks that may become ready later in the execution
 * (pendingTaskQueue). The tasks are wrapped in a ShardCommandExecution,
 * which keeps track of the state of execution and is referenced from a
 * TaskPlacementExecution, which is the data structure that is actually
 * added to the queues and describes the state of the execution of a task
 * on a particular worker node.
 *
 * When the task list is part of a bigger distributed transaction, the
 * shards that are accessed or modified by the task may have already been
 * accessed earlier in the transaction. We need to make sure we use the
 * same connection since it may hold relevant locks or have uncommitted
 * writes. In that case we "assign" the task to a connection by adding
 * it to the task queue of a specific connection (in
 * AssignTasksToConnections). Otherwise we consider the task unassigned
 * and add it to the task queue of a worker pool, which means that it
 * can be executed over any connection in the pool.
 *
 * A task may be executed on multiple placements in case of a reference
 * table or a replicated distributed table. Depending on the type of
 * task, it may not be ready to be executed on a worker node immediately.
 * For instance, INSERTs on a reference table are executed serially across
 * placements to avoid deadlocks when concurrent INSERTs take conflicting
 * locks. At the beginning, only the "first" placement is ready to execute
 * and therefore added to the readyTaskQueue in the pool or connection.
 * The remaining placements are added to the pendingTaskQueue. Once
 * execution on the first placement is done the second placement moves
 * from pendingTaskQueue to readyTaskQueue. The same approach is used to
 * fail over read-only tasks to another placement.
 *
 * Once all the tasks are added to a queue, the main loop in
 * RunDistributedExecution repeatedly does the following:
 *
 * For each pool:
 * - ManageWorkerPool evaluates whether to open additional connections
 *   based on the number of unassigned tasks that are ready to execute
 *   and the targetPoolSize of the execution.
 *
 * Poll all connections:
 * - We use a WaitEventSet that contains all (non-failed) connections
 *   and is rebuilt whenever the set of active connections or any of
 *   their wait flags change.
 *
 *   We almost always check for WL_SOCKET_READABLE because a session
 *   can emit notices at any time during execution, but it will only
 *   wake up WaitEventSetWait when there are actual bytes to read.
 *
 *   We check for WL_SOCKET_WRITEABLE just after sending bytes in case
 *   there is not enough space in the TCP buffer. Since a socket is
 *   almost always writable we also use WL_SOCKET_WRITEABLE as a
 *   mechanism to wake up WaitEventSetWait for non-I/O events, e.g.
 *   when a task moves from pending to ready.
 *
 * For each connection that is ready:
 * - ConnectionStateMachine handles connection establishment and failure
 *   as well as command execution via TransactionStateMachine.
 *
 * When a connection is ready to execute a new task, it first checks its
 * own readyTaskQueue and otherwise takes a task from the worker pool's
 * readyTaskQueue (on a first-come-first-serve basis).
 *
 * In cases where the tasks finish quickly (e.g. <1ms), a single
 * connection will often be sufficient to finish all tasks. It is
 * therefore not necessary that all connections are established
 * successfully or open a transaction (which may be blocked by an
 * intermediate pgbouncer in transaction pooling mode). It is therefore
 * essential that we take a task from the queue only after opening a
 * transaction block.
 *
 * When a command on a worker finishes or the connection is lost, we call
 * PlacementExecutionDone, which then updates the state of the task
 * based on whether we need to run it on other placements. When a
 * connection fails or all connections to a worker fail, we also call
 * PlacementExecutionDone for all queued tasks to try the next placement
 * and, if necessary, mark shard placements as inactive. If a task fails
 * to execute on all placements, the execution fails and the distributed
 * transaction rolls back.
 *
 * For multi-row INSERTs, tasks are executed sequentially by
 * SequentialRunDistributedExecution instead of in parallel, which allows
 * a high degree of concurrency without high risk of deadlocks.
 * Conversely, multi-row UPDATE/DELETE/DDL commands take aggressive locks
 * which forbid concurrency, but allow parallelism without high risk
 * of deadlocks. Note that this is unrelated to SEQUENTIAL_CONNECTION,
 * which indicates that we should use at most one connection per node, but
 * can run tasks in parallel across nodes. This is used when there are
 * writes to a reference table that has foreign keys from a distributed
 * table.
 *
 * Execution finishes when all tasks are done, the query errors out, or
 * the user cancels the query.
 *
 *-------------------------------------------------------------------------
 */
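Several of the commits listed below introduce or touch user-visible knobs for this executor. A hedged usage sketch (GUC names as they appear in the commit messages below; exact defaults and accepted values may differ by version):

```
SET citus.task_executor_type TO 'adaptive';        -- switch between the executors
SET citus.max_adaptive_executor_pool_size TO 16;   -- cap the per-worker connection pool
SET citus.force_max_query_parallelization TO on;   -- one connection per shard placement
SET citus.node_connection_timeout TO 5000;         -- ms before a connection attempt errors out
```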



All the commits involved here:
* Initial unified executor prototype

* Latest changes

* Fix rebase conflicts to master branch

* Add missing variable for assertion

* Ensure that master_modify_multiple_shards() returns the affectedTupleCount

* Adjust intermediate result sizes

The real-time executor uses the COPY command to get the results
from the worker nodes. The unified executor avoids that, which
results in less data transfer. Simply adjust the tests to use lower
sizes.

* Force one connection per placement (or co-located placements) when requested

The existing executors (real-time and router) always open 1 connection per
placement when parallel execution is requested.

That might be useful under certain circumstances:

(a) The user wants to utilize as many CPUs as possible on the workers per
distributed query
(b) The user has a transaction block which involves a COPY command

Also, lots of regression tests rely on this execution semantics.
So, we'd enable a few of the tests with this change as well.

* Force parameters to be resolved before using them

For the details, see PostgreSQL's copyParamList()

* Unified executor sorts the returning output

* Ensure that unified executor doesn't ignore sequential execution of DDLJob's

Certain DDL commands, mainly creating foreign keys to reference tables,
should be executed sequentially. Otherwise, we'd end up with a
self-distributed deadlock.

To overcome this situation, we set a flag `DDLJob->executeSequentially`
and execute it sequentially. Note that we have to do this because
the command might not be called within a transaction block, and
we cannot call `SetLocalMultiShardModifyModeToSequential()`.

This fixes at least two tests: multi_insert_select_on_conflict.sql and
multi_foreign_key.sql

Also, I wouldn't mind scattering local `targetPoolSize` variables within
the code. The reason is that we'll soon have a GUC (or a global
variable based on a GUC) that'd set the pool size. In that case, we'd
simply replace `targetPoolSize` with the global variables.

* Fix 2PC conditions for DDL tasks

* Improve closing connections that are not fully established in unified execution

* Support foreign keys to reference tables in unified executor

The idea for supporting foreign keys to reference tables is simple:
Keep track of the relation accesses within a transaction block.
    - If a parallel access happens on a distributed table which
      has a foreign key to a reference table, one cannot modify
      the reference table in the same transaction. Otherwise,
      we're very likely to end up with a self-distributed deadlock.
    - If an access to a reference table happens, and then a parallel
      access to a distributed table (which has a fkey to the reference
      table) happens, we switch to sequential mode.

The unified executor misses the function calls that mark the relation
accesses during the execution. Thus, simply add the necessary calls
and let the logic kick in.

* Make sure to close the failed connections after the execution

* Improve comments

* Fix savepoints in unified executor.

* Rebuild the WaitEventSet only when necessary

* Unclaim connections on all errors.

* Improve failure handling for unified executor

   - Implement the notion of errorOnAnyFailure. This is similar to
     Critical Connections that the connection management APIs provide
   - If the nodes inside a modifying transaction expand, activate 2PC
   - Fix a few bugs related to wait event sets
   - Mark placements INACTIVE during the execution as much as possible
     as opposed to doing it in the COMMIT handler
   - Fix a few bugs related to scheduling next placement executions
   - Improve decision on when to use 2PC

Improve the logic to start a transaction block for distributed transactions

- Make sure that only reference table modifications are always
  executed with distributed transactions
- Make sure that stored procedures and functions are executed
  with distributed transactions

* Move waitEventSet to DistributedExecution

This could also be local to RunDistributedExecution(), but in that case
we would have to mark it as "volatile" to avoid PG_TRY()/PG_CATCH() issues, and
cast it to non-volatile when doing WaitEventSetFree(). We thought that
would make code a bit harder to read than making this non-local, so we
move it here. See comments for PG_TRY() in postgres/src/include/elog.h
and "man 3 siglongjmp" for more context.

* Fix multi_insert_select test outputs

Two things:
   1) One complex transaction block is now supported. Simply update
      the test output
   2) Due to the dynamic nature of the unified executor, the order of
      the errors coming from the shards might change (e.g., all of
      the queries on the shards would fail, but which one appears
      in the error message?). To fix that, we simply added it to
      our shardId normalization tool, which runs just before the diff.

* Fix subquery_and_cte test

The error message is updated from:
	failed to execute task
To:
        more than one row returned by a subquery or an expression

which is a lot clearer to the user.

* Fix intermediate_results test outputs

Simply update the error message from:
	could not receive query results
to
	result "squares" does not exist

which makes a lot more sense.

* Fix multi_function_in_join test

The error messages are updated from:
     Failed to execute task XXX
To:
     function f(..) does not exist

* Fix multi_query_directory_cleanup test

The unified executor does not create any intermediate files.

* Fix with_transactions test

A test case that just started to work fine

* Fix multi_router_planner test outputs

The error message is updated from:
	Could not receive query results
To:
	Relation does not exist

which is a lot clearer for the users

* Fix multi_router_planner_fast_path test

The error message is updated from:
	Could not receive query results
To:
	Relation does not exist

which is a lot clearer for the users

* Fix isolation_copy_placement_vs_modification by disabling select_opens_transaction_block

* Fix ordering in isolation_multi_shard_modify_vs_all

* Add executor locks to unified executor

* Make sure to allocate enough WaitEvents

The previous code was missing the waitEvents for the latch and
postmaster death.

* Fix rebase conflicts for master rebase

* Make sure that TRUNCATE relies on unified executor

* Implement true sequential execution for multi-row INSERTS

Execute the individual tasks one by one. Note that this is different from the
MultiShardConnectionType == SEQUENTIAL_CONNECTION case (e.g., sequential execution
mode). In that case, running the tasks across the nodes in parallel is acceptable
and implemented in that way.

However, the executions that are qualified here would perform poorly if the
tasks across the workers are executed in parallel. We currently qualify only
one class of distributed queries here, multi-row INSERTs. If we do not enforce
true sequential execution, concurrent multi-row upserts could easily form
a distributed deadlock when the upserts touch the same rows.
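An illustrative multi-row upsert of the kind that is forced into true sequential execution (table and column names are hypothetical):

```
-- Two concurrent runs of this statement can touch the same rows across
-- different shards; executing the tasks one by one avoids forming a
-- distributed deadlock.
INSERT INTO dist_table (key, value)
VALUES (1, 10), (2, 20), (3, 30)
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;
```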

* Remove SESSION_LIFESPAN flag in unified_executor

* Apply failure test updates

We've changed the failure behaviour a bit, and also the error messages
that show up to the user. This PR covers the majority of the updates.

* Unified executor honors citus.node_connection_timeout

With this commit, unified executor errors out if even
a single connection cannot be established within
citus.node_connection_timeout.

As a side effect, this fixes the failure_connection_establishment
test.

* Properly increment/decrement pool size variables

Before this commit, the idle and active connection
counts were not properly calculated.

* insert_select_executor goes through unified executor.

* Add missing file for task tracker

* Modify ExecuteTaskListExtended()'s signature

* Sort output of INSERT ... SELECT ... RETURNING

* Take partition locks correctly in unified executor

* Alternative implementation for force_max_query_parallelization

* Fix compile warnings in unified executor

* Fix style issues

* Decrement idleConnectionCount when idle connection is lost

* Always rebuild the wait event sets

In the previous implementation, on waitFlag changes, we were only
modifying the wait events. However, we've realized that this might
be an over-optimization since (a) we couldn't see any performance
benefits and (b) we saw some errors on failures; because of (a)
we prefer to disable it now.

* Make sure to allocate a large enough waitEventSet

With multi-row INSERTs, we might have more sessions than
task*workerCount after a few calls of RunDistributedExecution()
because the previous sessions would also be alive.

Instead, re-allocate events when the connection set changes.

* Implement SELECT FOR UPDATE on reference tables

On the master branch, we do two extra things on SELECT FOR UPDATE
queries on reference tables:
   - Acquire executor locks
   - Execute the query on all replicas

With this commit, we're implementing the same logic on the
new executor.
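For example (table name hypothetical):

```
BEGIN;
-- Acquires executor locks and runs on all replicas of the reference table:
SELECT * FROM ref_table WHERE key = 1 FOR UPDATE;
UPDATE ref_table SET value = value + 1 WHERE key = 1;
COMMIT;
```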

* SELECT FOR UPDATE opens transaction block even if SelectOpensTransactionBlock disabled

Otherwise, users would be very confused and their logic is very likely
to break.

* Fix build error

* Fix the newConnectionCount calculation in ManageWorkerPool

* Fix rebase conflicts

* Fix minor test output differences

* Fix citus indent

* Remove duplicate sorts that were added with rebase

* Create distributed table via executor

* Fix wait flags in CheckConnectionReady

* failure_savepoints output for unified executor.

* failure_vacuum output (pg 10) for unified executor.

* Fix WaitEventSetWait timeout in unified executor

* Stabilize failure_truncate test output

* Add an ORDER BY to multi_upsert

* Fix regression test outputs after rebase to master

* Add executor.c comment

* Rename executor.c to adaptive_executor.c

* Do not schedule tasks if the failed placement is not ready to execute

Before this commit, we were blindly scheduling the next placement executions
even if the failed placement was not on the ready queue. Now, we ensure
that if the failed placement execution is on a failed pool or session and
the execution is on the pendingQueue, we do not schedule the next task, because
the other placement execution should already be running.

* Implement a proper custom scan node for adaptive executor

- Switch between the executors, add GUC to set the pool size
- Add non-adaptive regression test suites
- Enable CIRCLE CI for non-adaptive tests
- Adjust test output files

* Add slow start interval to the executor

* Expose max_cached_connection_per_worker to user

* Do not start slow when there are cached connections

* Consider ExecutorSlowStartInterval in NextEventTimeout

* Fix memory issues with ReceiveResults().

* Disable executor via TaskExecutorType

* Make sure to execute the tests with the other executor

* Use task_executor_type to enable-disable adaptive executor

* Remove useless code

* Adjust the regression tests

* Add slow start regression test

* Rebase to master

* Fix test failures in adaptive executor.

* Rebase to master - 2

* Improve comments & debug messages

* Set force_max_query_parallelization in isolation_citus_dist_activity

* Force max parallelization for creating shards when asked to use exclusive connection.

* Adjust the default pool size

* Expand description of max_adaptive_executor_pool_size GUC

* Update warnings in FinishRemoteTransactionCommit()

* Improve session clean up at the end of execution

Explicitly list all the states that the execution might end in,
otherwise warn.

* Remove MULTI_CONNECTION_WAIT_RETRY which is not used at all

* Add more ORDER BYs to multi_mx_partitioning
2019-06-28 14:04:40 +02:00
Philip Dubé 4e54c1525d Isolation tests: consistently name COMMIT '-commit' 2019-06-27 07:32:39 +02:00
Hanefi Onaldi 4e08477fed Add test case for issue 2575 2019-06-26 17:12:28 +02:00
Hanefi Onaldi 7e8fd49b94 Create Schemas as superuser on all shard/table creation UDFs
- All the schema creations on the workers will now be via superuser connections
- If a shard is being repaired or a shard is replicated, we will create the
  schema only in the relevant worker; and in all the other cases where a schema
  creation is needed, we will block operations until we ensure the schema exists
  in all the workers
2019-06-26 17:12:28 +02:00
Philip Dubé aa0c47848e subquery_and_cte: test rejecting volatile ctes
Also update isolation_citus_dist_activity after the merge
2019-06-26 16:27:07 +02:00
Philip Dubé 5c62f9935a Router planner: reject SELECT FOR UPDATE ctes 2019-06-26 10:32:01 +02:00
Philip Dubé 18575ccfd3 Add tests to subquery_and_cte, update check-multi-mx expected results 2019-06-26 10:32:01 +02:00
Philip Dubé 77efec04a0 Router Planner: accept SELECT_CMD ctes in modification queries 2019-06-26 10:32:01 +02:00
Philip Dubé 84fe626378 multi_router_planner: refactor error propagation 2019-06-26 10:32:01 +02:00
Hadi Moshayedi 25a984bab4 Normalize multi_name_lengths. 2019-06-25 14:18:33 +02:00
Hadi Moshayedi 3d0a521295 Show just coordinator plan in some test outputs. 2019-06-24 12:24:30 +02:00
Onder Kalaci ad93d6feea Change the order of placement access added to the list
This is to make sure that the error messages related to foreign keys
to reference tables show the exact placement access name instead of
SELECT.
2019-06-23 11:32:58 +02:00
Hanefi Onaldi 7a6eb2aba0 Fix one regression test that fails on enterprise (#2786)
GRANT queries are propagated on Enterprise. If a user attempts to
create a user and run a GRANT query before creating it on workers, we
fail. This issue does not happen in community as the user needs to run
the GRANTs on the workers manually.
2019-06-21 15:46:28 +03:00
Nils Dijk 5df1b49bed Feature: optionally force master_update_node during failover (#2773)
When `master_update_node` is called to update a node's location, it waits for the appropriate locks to become available. This is useful during normal operation, as new operations will be blocked until after the metadata update while running operations have time to finish.

When `master_update_node` is called after a node failure, it is less useful to wait for running operations to finish, as they can't. The lock being held indicates an operation that will fail once it attempts to commit, as the machine has already failed. The downside is that the failover is postponed until the termination point of the operation. Users have observed this taking a significant amount of time, causing the rest of the system to appear unavailable.

With this patch it is possible in such situations to invoke `master_update_node` with 2 optional arguments:
 - `force` (bool, defaults to `false`): When called with true, the metadata update is forced to proceed by terminating conflicting backends. A cancel is not enough, as the backend might be idle (e.g. an interactive session, or going back and forth with an application), therefore the more intrusive solution of termination is used here.
 - `lock_cooldown` (int, defaults to `10000`): The time in milliseconds before conflicting backends are terminated. This allows the backends to finish cleanly before terminating them, and lets the user set an upper bound on the expected time to complete the metadata update, e.g. performing the failover.

The functionality is implemented by spawning a background worker that has the task of helping a certain backend in acquiring its locks. The backend is either terminated on successful execution of the metadata update, or once the memory context of the expression gets reset, e.g. on a cancel of the statement.
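A hedged invocation sketch (the node ID and hostnames are hypothetical; the argument names follow the description above):

```
-- Fail over node 2 to a new host, terminating conflicting backends
-- after 5 seconds instead of waiting indefinitely on their locks.
SELECT master_update_node(2, 'new-worker.example.com', 5432,
                          force := true, lock_cooldown := 5000);
```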
2019-06-21 12:03:15 +02:00
Jason Petersen d4e1172247 Implement propagation of SET LOCAL commands
Adds support for propagation of SET LOCAL commands to all workers
involved in a query. For now, SET SESSION (i.e. plain SET) is not
supported whatsoever, though this code is intended as somewhat of a
base for implementing such support in the future.

As SET LOCAL modifications are scoped to the body of a BEGIN/END xact
block, queries wishing to use SET LOCAL propagation must be within such
a block. In addition, subsequent modifications after e.g. any SAVEPOINT
or ROLLBACK statements will correspondingly push or pop variable
modifications onto an internal stack such that the behavior of changed
values across the cluster will be identical to such behavior on e.g.
single-node PostgreSQL (or equivalently, what values are visible to
the end user by running SHOW on such variables on the coordinator).

If nodes enter the set of participants at some point after SET LOCAL
modifications (or SAVEPOINT, ROLLBACK, etc.) have occurred, the SET
variable state is eagerly propagated to them upon their entrance (this
is identical to, and indeed just augments, the existing logic for the
propagation of the SAVEPOINT "stack").

A new GUC (citus.propagate_set_commands) has been added to control this
behavior. Though the code suggests the valid settings are 'none', 'local',
'session', and 'all', only 'none' (the default) and 'local' are presently
implemented: attempting to use other values will result in an error.
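A minimal usage sketch, assuming the semantics described above (the distributed table name is hypothetical):

```
SET citus.propagate_set_commands TO 'local';

BEGIN;
SET LOCAL enable_hashjoin TO off;  -- propagated to workers participating in the query
SELECT count(*) FROM dist_table;
SAVEPOINT s1;
SET LOCAL enable_hashjoin TO on;   -- pushed onto the variable stack
ROLLBACK TO SAVEPOINT s1;          -- popped again; workers see 'off'
COMMIT;
```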
2019-06-20 16:15:43 -07:00
Hadi Moshayedi 4bbae02778 Make COPY compatible with unified executor. 2019-06-20 19:53:40 +02:00
Hadi Moshayedi d4f3e2809d Use normalization for multi_subtransaction output 2019-06-19 17:54:33 +02:00
Hadi Moshayedi 83f6c7dab4 Fix subxact release crash 2019-06-19 17:43:10 +02:00
Hadi Moshayedi c42b22f8fd Fix test name detection in bin/diff 2019-06-17 11:31:42 +02:00
Philip Dubé 342d423725 Fix join alias resolution
A FROM (query) alias ignored renaming:
in nested subqueries the select list would rename columns, while the join alias would not respect that
2019-06-12 17:25:07 -07:00
Hadi Moshayedi 8e2d328530 Search all outer node levels for lateral join params. 2019-06-04 10:14:05 -07:00
Philip Dubé b5ced403d8 Also check rewrittenQuery jointree for outer join 2019-06-04 07:47:35 -07:00
Marco Slot c1566d464b Fix failure and isolation tests
On top of the citus.max_cached_conns_per_worker GUC, with this commit
we're updating the regression tests to comply with the new behaviour.
2019-05-29 14:42:31 +02:00
Onder Kalaci d46b92d79a Add order by to multi_mx_schema_support 2019-05-28 12:23:28 +02:00
Onder Kalaci fa2a6e4d8f Add order by to multi_mx_router_planner 2019-05-28 12:23:28 +02:00
Onder Kalaci 0a7a173eee Add order by to multi_mx_reference_table 2019-05-28 12:23:28 +02:00
Onder Kalaci 1553e12ee4 Add order by to multi_subquery_complex_reference_clause 2019-05-28 12:06:57 +02:00
Philip Dubé b8871d9ff4 Propagate more ALTER FOREIGN TABLE to workers 2019-05-24 12:54:05 -07:00
Marco Slot b3fcf2a48f Deprecate master_modify_multiple_shards 2019-05-24 15:22:06 +02:00
Marco Slot 7fa5d36057 Stop using master_modify_multiple_shards in TRUNCATE 2019-05-24 14:35:46 +02:00
Hanefi Onaldi 7443191397 Improve tests for round robin & router queries 2019-05-24 14:16:56 +03:00
Philip Dubé 16886b3c63 Fix misc typos 2019-05-23 17:23:27 -07:00
Onder Kalaci f1a80a609f Fix wrong test output
If the replication factor equals 2 and there are two worker nodes,
even if two modifications hit different shards, Citus doesn't use
2PC. The reason is that it doesn't fit the definition of
"expanding participating worker nodes".

Thus, we're simply fixing the test to fit in the comment
on top of it.
2019-05-21 19:12:37 +03:00
Onder Kalaci f76abfe470 Add ORDER BY to multi_router_planner 2019-05-21 15:54:33 +03:00
Onder Kalaci f06a79563d Add ORDER BY to multi_foreign_key 2019-05-21 15:54:03 +03:00
Hanefi Onaldi 4030d603eb Merge pull request #2691 from citusdata/update_changelog
Add 8.1.2 and 8.2.1 changelog entries
2019-05-15 09:18:58 +03:00
Onder Kalaci 5d68a13139 Add order by to multi_shard_update_delete 2019-05-02 20:09:33 +03:00
Onder Kalaci 2c76b4bc46 Add order by to multi_function_in_join test 2019-05-02 20:05:25 +03:00
Onder Kalaci 3d871c5334 Add some ORDER BYs to make the test output consistent 2019-05-02 18:00:46 +03:00
Hadi Moshayedi 32ecb6884c Test ROLLBACK TO SAVEPOINT with multi-shard CTE failures 2019-05-01 09:33:43 -07:00
Hadi Moshayedi aafd22dffa Fix savepoint rollback for INSERT INTO ... SELECT. 2019-05-01 09:33:43 -07:00
Hadi Moshayedi b69a762e0b Fix savepoint rollback after multi-shard update failure. 2019-05-01 09:33:43 -07:00
Hadi Moshayedi a9f7c1e8cb Normalize test result/expected files before doing diff. 2019-04-30 10:19:23 -07:00
Onder Kalaci 82813a8796 Add ORDER BYs to multi_subquery and subqueries_deep tests 2019-04-24 13:36:11 +03:00
Onder Kalaci 004f28e18c Sort output of RETURNING
The feature is only intended for getting consistent outputs for the regression tests.

RETURNING does not have any ordering guarantees, and with the unified executor
the order of query executions on the shards is also unpredictable. Thus, we
enforce ordering when a GUC is set.

We implicitly add an `ORDER BY` something equivalent of
	`
	  RETURNING expr1, expr2, .. ,exprN
	  ORDER BY expr1, expr2, .. ,exprN
	`

As described in the code comments as well, this is probably not the most
performant approach we could implement. However, since we're only
targeting regression tests, I don't see any issues with that. If we
decide to expand this into a user-facing feature, we should revisit the
implementation and improve the performance.
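An illustrative sketch of the rewrite (table name hypothetical):

```
-- With the ordering GUC enabled, a statement such as:
UPDATE dist_table SET value = value + 1 RETURNING id, value;
-- behaves as if it ended with:
--   RETURNING id, value ORDER BY id, value
```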
2019-04-24 11:51:19 +03:00
Onder Kalaci 64b323d9eb Add ORDER BY to set_operations 2019-04-23 11:51:58 +03:00
Onder Kalaci 913ffc9dcd Add ORDER BY to multi_subquery_in_where_clause 2019-04-23 11:46:00 +03:00
Onder Kalaci 753163b4d8 Be less verbose for printing worker ports in intermediate_results 2019-04-17 14:57:20 +03:00
Onder Kalaci b3af5b2cc4 Add order by multi_mx_modifications 2019-04-17 14:57:20 +03:00
Onder Kalaci a159bd9aed Add order by window_functions 2019-04-17 14:57:20 +03:00
Jason Petersen 4b9519e7d6 Check for non-extended constraint before extending
This will only apply to DROP and VALIDATE commands; see the lengthy
comment in multi_create_table_constraints.sql for more explanation.
2019-04-15 23:14:21 -06:00
Jason Petersen 5a017c684c Add repro case for #2484 2019-04-15 23:14:11 -06:00
Onder Kalaci 6d81fc518c Add order by subquery_complex_target_list 2019-04-10 19:55:41 +03:00
Onder Kalaci 58e90ad60d Add order by multi_outer_join 2019-04-09 12:53:57 +03:00
Onder Kalaci 298e95c441 Add order by multi_shard_update_delete 2019-04-09 12:41:46 +03:00
Onder Kalaci 6a8e2c260a Add order by multi_insert_select 2019-04-09 12:28:57 +03:00
Onder Kalaci af096a898c Add order by subquery_and_cte 2019-04-09 12:19:10 +03:00
Onder Kalaci 56a1a39fd4 Add order by multi_subquery_complex_queries 2019-04-09 12:12:26 +03:00
Onder Kalaci 4effa8c1f8 Add order by multi_schema_support 2019-04-09 11:52:08 +03:00
Onder Kalaci 92e87738dd Make sure that the regression test output is stable across different execution orders
Mostly add ORDER BYs and suppress worker node ports in the test
outputs.
2019-04-08 11:48:08 +03:00
Murat Tuncer 1424f75ec9 Support columns referencing aliased joins
We used to rely on the PG function flatten_join_alias_vars
to resolve the actual columns referenced in the target entry list.

The function goes deep and finds the actual relation. This logic
usually works fine. However, when joins are given an alias, inner
relation names are not visible to the target entry. Thus relation
resolution should stop when the target entry column refers to an
RTE of an aliased join.

We stopped using the PG function and provided our own flatten function.
Jason Petersen 4c7f78bd7e Code review feedback 2019-03-25 22:07:27 -05:00
Jason Petersen 69adb627c3 Add Assert that will crash before coercion fix is in 2019-03-22 20:32:19 -06:00
Hadi Moshayedi ff1d4f697a Ignore test_times.log (#2638) 2019-03-22 10:29:01 -07:00
Nils Dijk feaac69769 Implementation for async FinishConnectionListEstablishment (#2584) 2019-03-22 17:30:42 +01:00
Marco Slot e3b7e74f43 Allow rescan in DECLARE .. WITH HOLD 2019-03-22 11:25:55 +01:00
Jason Petersen a2c6f596f9 Address code review comments 2019-03-21 11:59:52 -06:00
Onder Kalaci 41d8c4030a Add some more regression tests for outer join pushdown 2019-03-19 11:49:38 +03:00
Onder Kalaci ad5ff1d01a Some queries lead to infinite recursion with recursive planning
The rule for infinite recursion is the following:

    - If the query contains a subquery which is recursively planned, and
      no other subqueries can be recursively planned due to correlation
      (e.g., LATERAL joins), the planner keeps recursing again and again.

One interesting thing here is that even if a subquery contains only intermediate
result(s), we re-recursively plan that. In the end, the logic in the code does the following:

  - Try recursive planning any of the subqueries in the query tree
     - If any subquery is recursively planned, call the planner again
        where the subquery is replaced with the intermediate result.
        - Try recursively planning any of the queries
          - If any subquery is recursively planned, call the planner again
            where the subquery (in this case it is already intermediate result)
            is replaced with the intermediate result.
              - Try recursively planning any of the queries
                - If any subquery is recursively planned, call the planner again
                  where the subquery (in this case it is already intermediate result)
                  is replaced with the intermediate result.
                  - Try recursively planning any of the queries
                    - If any subquery is recursively planned, call the planner again
                      where the subquery (in this case it is already intermediate result)
                      is replaced with the intermediate result.
                      ......
2019-03-18 10:35:00 +03:00
Marco Slot f2abf2b8e5 Functions are treated as transaction blocks 2019-03-15 16:34:08 -06:00
Marco Slot 4b9bd54ae0 Remove create_insert_proxy_for_table 2019-03-15 14:13:03 -06:00
Hadi Moshayedi a9e6d06a98 Skip execution of ALTER TABLE constraint checks on the coordinator 2019-03-14 15:40:56 -07:00
Hadi Moshayedi cdd3b15ac8 Fix distributed deadlock for ALTER TABLE ... ATTACH PARTITION.
Following scenario resulted in distributed deadlock before this commit:

CREATE TABLE partitioning_test(id int, time date) PARTITION BY RANGE (time);
CREATE TABLE partitioning_test_2009 (LIKE partitioning_test);
CREATE TABLE partitioning_test_reference(id int PRIMARY KEY, subid int);

SELECT create_distributed_table('partitioning_test_2009', 'id'),
       create_distributed_table('partitioning_test', 'id'),
       create_reference_table('partitioning_test_reference');

ALTER TABLE partitioning_test ADD CONSTRAINT partitioning_reference_fkey FOREIGN KEY (id) REFERENCES partitioning_test_reference(id) ON DELETE CASCADE;
ALTER TABLE partitioning_test_2009 ADD CONSTRAINT partitioning_reference_fkey_2009 FOREIGN KEY (id) REFERENCES partitioning_test_reference(id) ON DELETE CASCADE;

ALTER TABLE partitioning_test ATTACH PARTITION partitioning_test_2009 FOR VALUES FROM ('2009-01-01') TO ('2010-01-01');
2019-03-14 15:28:37 -07:00
Hanefi Onaldi 419f52884f Merge branch 'master' into improve-mitmproxy-documentation 2019-03-12 07:16:01 -07:00
Murat Tuncer 2681231c98 Create column aliases for shard tables in worker queries when requested 2019-03-07 12:54:42 +03:00
velioglu faf50849d7 Enhance pushdown planning logic to handle full outer joins with using clause
Since query flattening may flatten an outer join's columns into a coalesce expr that is
in the USING part, and that was not expected before this commit, these queries were
erroring out. This commit fixes that by considering coalesce expressions as well.
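An illustrative query shape (table names hypothetical):

```
-- "key" in the target list flattens to COALESCE(a.key, b.key)
-- because of the full outer join's USING clause:
SELECT key FROM dist_a a FULL OUTER JOIN dist_b b USING (key);
```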
2019-03-05 11:49:30 +03:00
Jason Petersen 5817bc3cce Add test-timing script
Through some clever stream redirections and options, we can get decent
timing data for each of our tests.
2019-02-26 23:01:40 -07:00
Jason Petersen 5db45bac45 Enable CircleCI
The configuration for the build is in the YAML file; the changes to the
regression runner are backward-compatible with Travis and just add the
logic to detect whether our custom (isolation- and vanilla-enabled) pkg
is present.
2019-02-26 22:17:26 -07:00
Onder Kalaci f706772b2f Round-robin task assignment policy relies on local transaction id
Before this commit, the round-robin task assignment policy was relying
on the taskId. Thus, even inside a transaction, the tasks were
assigned to different nodes. This was especially problematic
while reading from reference tables within transaction blocks,
because we had to expand the distributed transaction to many
nodes that were not necessarily already in the distributed transaction.
2019-02-22 19:26:38 +03:00
Onder Kalaci f144bb4911 Introduce fast path router planning
In this context, we define "Fast Path Planning for SELECT" as trivial
queries where Citus can skip relying on the standard_planner() and
handle all the planning.

For router planner, standard_planner() is mostly important to generate
the necessary restriction information. Later, the restriction information
generated by the standard_planner is used to decide whether all the shards
that a distributed query touches reside on a single worker node. However,
standard_planner() does a lot of extra things such as cost estimation and
execution path generations which are completely unnecessary in the context
of distributed planning.

There are certain types of queries where Citus could skip relying on
standard_planner() to generate the restriction information. For queries
in the following format, Citus does not need any information that the
standard_planner() generates:

  SELECT ... FROM single_table WHERE distribution_key = X;  or
  DELETE FROM single_table WHERE distribution_key = X; or
  UPDATE single_table SET value_1 = value_2 + 1 WHERE distribution_key = X;

Note that the queries might not be as simple as the above, such that
GROUP BY, WINDOW FUNCTIONS, ORDER BY or HAVING etc. are all acceptable. The
only rule is that the query is on a single distributed (or reference) table
and there is a "distribution_key = X;" in the WHERE clause. With that, we
can decide on which worker node the shard that the distributed query
touches resides.
2019-02-21 13:27:01 +03:00
Hanefi Onaldi 825666f912 Query samples in docs and better errors 2019-02-04 19:20:02 +03:00
Hanefi Onaldi 1106e14385 Wrap functions in subqueries
remove debug logs to fix travis tests

Support RowType functions in joins

Regression tests for a custom type function in join
2019-02-04 19:19:29 +03:00
Hanefi Onaldi c5c3d6d0a3 Update the mitmscripts readme
and migrate readme to markdown
and create contribution guidelines
2019-02-04 11:30:05 +03:00
Hanefi Onaldi 4dd1f5784b Failure&cancellation tests for mx metadata sync
Failure&Cancellation tests for initial start_metadata_sync() calls
to worker and DDL queries that send metadata syncing messages to an MX node

Also adds message type definitions for messages that are exchanged
during metadata syncing
2019-02-01 11:50:25 +03:00
Murat Tuncer b36b59dd4f Relax reference table restrictions in subquery union pushdowns
We used to error out if there is a reference table
in the query participating in a union. This has caused
pushdownable queries to be evaluated on the coordinator.

Now we allow reference tables inside union queries as long
as there is a distributed table in the FROM clause.

Existing join checks (reference table on the outer part) are
sufficient enough that we do not need to check the join relation
of reference tables.
2019-01-31 15:34:29 +03:00
Onder Kalaci ec67381ba2 Queries with only intermediate results do not rely on task assignment policy
Previously we allowed the task assignment policy to have an effect on router queries
with only intermediate results. However, that is erroneous since the code-path
that assigns placements relies on shardIds and placements, which don't exist
for intermediate results.

With this commit, we do not apply task assignment policies when a router query
hits only intermediate results.
2019-01-28 17:59:17 +03:00
Jason Petersen 339e6e661e Remove 9.6 (#2554)
Removes support and code for PostgreSQL 9.6

cr: @velioglu
2019-01-16 13:11:24 -07:00
Nils Dijk 3f2bac18df Add make target to run regression tests in isolation with vagrant
Also allow `multi_alter_table_add_constraints` to run in isolation
2019-01-16 11:41:09 +01:00
Marco Slot 1656b519c4 Plan outer joins through pushdown planning 2019-01-05 20:55:27 +01:00
Murat Tuncer a72d959735 Fix multi_view tests 2019-01-03 17:07:26 +03:00
Hanefi Onaldi fb497ddad1 Bump 8.2devel on master (#2567) 2018-12-24 13:49:50 +03:00
Onder Kalaci 9fff7d28a7 Revert 4925521 2018-12-21 15:36:40 -07:00
Marco Slot 2e4029973c Remove sequential create index concurrently test 2018-12-21 14:03:00 -07:00
Marco Slot 3ff2b47366 Restrict visibility of get_*_active_transactions functions to pg_monitor 2018-12-19 18:32:42 +01:00
Marco Slot 13f4a0ac9f Stabilize failure test shard IDs 2018-12-19 04:26:46 +01:00
Nils Dijk 694992e946 upgrade default ssl_ciphers to more restrictive on extension creation
Show ssl_ciphers in ssl_by_default_test
2018-12-12 15:33:15 +01:00
Nils Dijk 4af40eee76 Enable SSL by default during installation of citus 2018-12-07 11:23:19 -07:00
velioglu 8764a19464 Adds support for disabling hash agg with hll functions on coordinator query 2018-12-07 18:49:25 +03:00
Marco Slot 9cf91c438b Only allow transmit from pgsql_job_cache directory 2018-12-05 10:18:27 +01:00
Onder Kalaci b6ebd791a6 Sort task list for multi-task explain outputs
This is purely for ensuring that regression tests do not randomly fail.
2018-11-30 11:19:37 -07:00
Onder Kalaci 18c9badff5 Make sure the explain output for partition wise join is stable
We disable a bunch of planning options on the workers. This might be
risky if any concurrent test relies on EXPLAIN output as well. Still,
we want to keep this test, so we should try not to parallelize this
test with such tests.
2018-11-30 16:44:57 +03:00
Marco Slot 8893cc141d Support INSERT...SELECT with ON CONFLICT or RETURNING via coordinator
Before this commit, Citus supported INSERT...SELECT queries with
ON CONFLICT or RETURNING clauses only for pushdownable ones, since
queries supported via coordinator were utilizing COPY infrastructure
of PG to send selected tuples to the target worker nodes.

After this PR, INSERT...SELECT queries with ON CONFLICT or RETURNING
clauses will be performed in two phases via coordinator. In the first
phase, selected tuples will be saved to an intermediate table which
is colocated with the target table of the INSERT...SELECT query. Note that
a utility function to save results to the colocated intermediate result
was also implemented as part of this commit. In the second phase, the INSERT..
SELECT query is directly run on the worker node using the intermediate
table as the source table.
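An illustrative query of the kind that now runs via the coordinator in two phases (table names hypothetical):

```
INSERT INTO target_table (key, value)
SELECT key, max(value) FROM source_table GROUP BY key
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value
RETURNING key, value;
```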
2018-11-30 15:29:12 +03:00
Hanefi Onaldi 088a2ef66a throw an error when a subquery has grouping set clause 2018-11-30 13:11:32 +03:00
Onder Kalaci a15f168ce4 Ensure that citus_dist_activity test outputs do not change
Since there is no lock ordering between the query that is executed
and the select from the view, we prefer to add a timeout before
printing the activity.
2018-11-30 11:46:17 +03:00
Nils Dijk 9309e63156 create_distributed_table as user, change table ownership during create 2018-11-29 14:20:42 +01:00
Nils Dijk 6aa191f72c remove table_ddl_command_array and test master_get_table_ddl_events 2018-11-29 14:20:42 +01:00
Murat Tuncer fd868ec268 Fix citus_stat_statements view
The join between pg_stat_statements and citus_query_stats should
match on queryid, dbid, and userid instead of just queryid.
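A sketch of the corrected join condition (matching on all three key columns):

```
SELECT s.queryid
FROM pg_stat_statements s
JOIN citus_query_stats c
  ON s.queryid = c.queryid
 AND s.dbid = c.dbid
 AND s.userid = c.userid;
```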
2018-11-29 14:49:16 +03:00
Marco Slot 0393910c65 Shard IDs in isolation_citus_dist_stat_activity output changed 2018-11-28 02:59:50 +01:00
Marco Slot aff37cf1bc Control multi-shard modify locks with enable_deadlock_prevention 2018-11-28 02:59:50 +01:00
Marco Slot 5a63deab2e Clean up UDFs and remove unnecessary permissions 2018-11-26 14:40:37 +01:00
Hanefi Onaldi 448b241ab4 validate query isolation tests 2018-11-26 14:04:51 +03:00
Hanefi Onaldi 4edb193f25 make the tests parallelizable
helper view table_fkeys_in_workers now allows filtering by schema so that a test case can print out foreign keys in its schema only
2018-11-26 14:04:51 +03:00
Hanefi Onaldi b3d897039a constraint validation regression tests 2018-11-26 14:04:51 +03:00
Marco Slot e9a7295ead Add multi-user tests for task-tracker protocol functions 2018-11-23 11:05:09 +01:00
Marco Slot 4245032849 Add user ID suffixes to filenames in check-worker tests 2018-11-23 08:36:12 +01:00
Marco Slot 30bad7e66f Add worker_execute_sql_task UDF 2018-11-22 18:15:33 +01:00
Marco Slot e3521ce320 Test current user in task-tracker queries 2018-11-22 18:15:33 +01:00
Marco Slot e17025e1d4 Check table ownership in mark_tables_colocated 2018-11-18 00:11:38 +01:00
Marco Slot 18acd00553 Check permissions in lock_relation_if_exists 2018-11-18 00:11:38 +01:00
Marco Slot aab9f623eb Check table ownership in upgrade_to_reference_table 2018-11-16 23:27:34 +01:00
Onder Kalaci 052ba21b19 Make sure to prevent unauthorized users to drop sequences in Citus MX 2018-11-15 18:08:04 +03:00
Onder Kalaci 7f0a57a153 Make sure to prevent unauthorized users to drop tables in Citus MX 2018-11-15 18:07:03 +03:00
Nils Dijk f9520be011 Round robin queries to reference tables with task_assignment_policy set to `round-robin` (#2472)
Description: Support round-robin `task_assignment_policy` for queries to reference tables.

This PR allows users to query multiple placements of shards in a round robin fashion. When `citus.task_assignment_policy` is set to `'round-robin'` the planner will use a round robin scheduling feature when multiple shard placements are available.

The primary use-case is spreading the load of reference table queries to all the nodes in the cluster instead of hammering only the first placement of the reference table. Since reference tables share the same path for selecting the shards with single shard queries that have multiple placements (`citus.shard_replication_factor > 1`) this setting also allows users to spread the query load on these shards.

For modifying queries we do not apply a round-robin strategy. This would be negated by an extra reordering step in the executor for such queries where a `first-replica` strategy is enforced.
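A usage sketch, assuming a reference table named `ref_table`:

```
SET citus.task_assignment_policy TO 'round-robin';

-- Consecutive executions are routed to different shard placements,
-- spreading the read load across the nodes:
SELECT count(*) FROM ref_table;
SELECT count(*) FROM ref_table;
```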
2018-11-15 15:11:15 +01:00
Marco Slot 2de8ef29c3 Revoke function permissions for node metadata functions 2018-11-15 06:25:07 +01:00
Nils Dijk 97da44558b Description: Fix failures of tests on recent postgres builds
In recent postgres builds you cannot set client_min_messages to
values higher than ERROR; it will silently be set to ERROR if you try.

During some tests we would set it to FATAL to hide random values
(e.g. PIDs of processes) from the test output. This patch will use
different tactics for hiding these values.
2018-11-13 16:53:05 +01:00
Hadi Moshayedi d3e284dcd6 Use heap_deform_tuple() instead of calling heap_getattr(). (#2464)
After Fast ALTER TABLE ADD COLUMN with a non-NULL default in PG11, physical heaps might not contain all attributes after an ALTER TABLE ADD COLUMN happens. heap_getattr() returns NULL when the physical tuple doesn't contain an attribute. So we should use heap_deform_tuple() in these cases, which fills in the missing attributes.

Our catalog tables evolve over time, and an upgrade might involve some ALTER TABLE ADD COLUMN commands.

Note that we don't need to worry about postgres catalog tables and we can use heap_getattr() for them, because they only change between major versions.

This also fixes #2453.
2018-11-05 15:11:01 -05:00
Onder Kalaci 7aa2af8975 Add failure and cancellation tests for multi row inserts 2018-10-29 11:36:02 +03:00
Onder Kalaci 7b4d912904 Add cancellation tests for VACUUM/ANALYZE 2018-10-26 16:25:11 +03:00
Onder Kalaci 85d7d074c3 Add cancellation tests for multi shard modification queries 2018-10-26 15:07:52 +03:00
Onder Kalaci 18eee6d9c8 Add cancellation tests for router selects 2018-10-26 14:29:56 +03:00
Jason Petersen a37a809d49 Add savepoint failure tests
Tests at each significant point (i.e. SAVEPOINT, ROLLBACK, RELEASE)
that correct semantics are preserved (using both no replication and statement replication).
2018-10-26 11:12:40 +01:00
Onder Kalaci 6e05921736 Processes that are blocked on advisory locks show up in wait edges
Assign the distributed transaction id before trying to acquire the
executor advisory locks. This is useful to show this backend in citus
lock graphs (e.g., dump_global_wait_edges() and citus_lock_waits).
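For example, the blocked backend now shows up when inspecting the lock graphs named above:

```
-- Inspect blocked distributed queries and global wait edges:
SELECT * FROM citus_lock_waits;
SELECT * FROM dump_global_wait_edges();
```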
2018-10-24 13:32:13 +03:00
Jason Petersen 98c8267a37 Add single-shard modification failure tests
I'm pretty sure a lot of this test functionality may be covered in some
of our existing regression tests, but I've included them to ensure we
put all failure-based tests under our new testing method for that kind
of test.

Didn't include lower replication factor, as (for a single-shard mod.),
it's indistinguishable from modifying a reference table. So these all
test modifications which hit a single, replicated shard.
2018-10-23 23:31:40 +01:00
Hadi Moshayedi 3e00bf1c0d Don't throw error for DROP DATABASE IF EXISTS 2018-10-23 09:45:03 -04:00
Murat Tuncer 081594ad03 Don't allow PG11 travis failures anymore
We made PG11 builds optional when we had an issue
with an mx isolation test that we could not solve back then.

This commit solves the issue with a workaround by running
start_metadata_sync_to_node outside the transaction block.
2018-10-19 15:20:53 +03:00
Murat Tuncer c7efd8aff0 Add failure test for insert/select pushdown 2018-10-18 09:09:26 +03:00
Jason Petersen 9fb951c312 Fix user-facing typos
Lintian found these (presumably by looking in the text section and
running them through e.g. aspell).
2018-10-09 16:54:03 -07:00
velioglu 5713019058 Add failure tests for real time select queries 2018-10-09 14:12:02 -07:00
Onder Kalaci 73696a03e4 Make sure not to leak intermediate result folders on the workers 2018-10-09 22:47:56 +03:00
Hadi Moshayedi 7509c6c8fb Add tests which check we disallow writes to local tables. 2018-10-06 10:54:44 +02:00
Marco Slot d56baefe3d Allow simple DML commands from hot standby 2018-10-06 10:54:44 +02:00
Jason Petersen 1cb48416eb Add reference table failure tests
Fairly straightforward; verified that modifications fail atomically if
a worker is down or fails mid-transaction (i.e. all workers need to ack
modifications to reference tables in order to persist changes).
2018-10-09 09:39:30 -07:00
Jason Petersen 9bcf2873a7 Add single-shard router select failure tests
Including several examples from #1926. I couldn't understand why the
recover_prepared_transactions "should be an error", and EXPLAIN has
changed since the original bug (so that it runs EXPLAINs in txns, I
think for EXPLAIN ANALYZE to not have side effects); other than that,
most of the reported bugs now error out rather than crash or return
an empty result set.
2018-10-09 08:51:10 -07:00
Jason Petersen 8f2aa00951 Add failure tests for VACUUM/ANALYZE
VACUUM runs outside of a transaction, so the failure modes for it are
somewhat straightforward, though ANALYZE runs in a 1pc transaction and
multi-table VACUUM can fail between statements (PG 11 and higher).
2018-10-09 08:50:37 -07:00
Jason Petersen ee4114bc7a Failure tests for modifying multiple shards in txn
Tests various failure points during a multi-shard modification within
a transaction with multiple statements. Verifies three cases:

  * Reference tables (single shard, many placements)
  * Normal table with replication factor two
  * Multi-shard table with no replication

In the replication-factor case, we expect shard health to be affected
in some transactions; most others fail the transaction entirely and
all we need to verify is that no effects of the transaction are visible.

Had trouble testing the final PREPARE/COMMIT/ROLLBACK phase of the 2pc,
in particular because the error message produced includes the PID of
the backend, which is unpredictable.
2018-10-09 09:17:32 -06:00
Murat Tuncer 4f8042085c Fix drop schema in mx with partitioned tables
The DROP SCHEMA command fails in MX mode if there
is a partitioned table with active partitions.

This is due to the fact that the SQL drop trigger receives
all the dropped objects, including partitions. When
we call DROP TABLE on the parent partition, it also drops
the partitions on the MX node. This causes the DROP
TABLE command on the partitions to fail on the MX node because
they are already dropped when the partition parent was
dropped.

With this change we no longer require the table to exist in
worker_drop_distributed_table.
2018-10-08 17:01:54 -07:00
Murat Tuncer 71a910d2fa Add failure tests for insert/select via coordinator 2018-10-04 18:01:19 +03:00
Murat Tuncer 0a987e9c0e Fix cte subquery failure test 2018-10-03 15:43:48 +03:00
Murat Tuncer d26b312cad Add failure test for coordinator pull/push for cte 2018-10-03 15:43:48 +03:00
Murat Tuncer 6c66033455 Add failure tests for multi-shard update/delete
Failure tests for update/delete on hash-distributed tables
using 1PC and 2PC
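A sketch of how the protocol under test is selected (table name hypothetical):

```
-- the GUC picks the commit protocol for multi-shard modifications
SET citus.multi_shard_commit_protocol TO '2pc';  -- or '1pc'
UPDATE dist_events SET processed = true;
```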
2018-10-03 15:43:48 +03:00
velioglu 512d23934f Show router modify, select and real-time queries on MX views 2018-10-02 13:59:38 +03:00
Murat Tuncer 9bdef67bab Do not create inherited constraints on worker shards
PG now allows foreign keys on partitioned tables.
Each foreign key constraint on a partitioned table
is propagated down to its partitions.

We used to create all constraints on shards when creating
a new shard, or when simply moving a shard from one worker
to another. We also used the same logic when creating a copy of
a coordinator table on an MX node.

With this change, we create a constraint on the worker node only if
it is not an inherited constraint.
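One way to recognize an inherited constraint, as a sketch (the shard name
is hypothetical): constraints propagated from a partitioned parent carry a
non-zero inheritance count in pg_constraint.

```
-- coninhcount > 0 means the constraint came down from the parent,
-- so it must not be created separately on the shard
SELECT conname, coninhcount
FROM pg_constraint
WHERE conrelid = 'events_2018_102008'::regclass;
```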
2018-09-28 14:14:51 +03:00
Onder Kalaci cdc0d1491c Make sure to use correct execution mode for TRUNCATE
We used to set the execution mode in the truncate trigger. However,
when multiple tables are truncated with a single command, we could
set the execution mode very late. Instead, now set the execution mode
on the utility hook.
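A sketch of the case that motivated the change (table names hypothetical):

```
-- one command truncates several distributed tables, so the execution
-- mode must be decided once, up front, rather than per-table in a trigger
TRUNCATE orders, order_items;
```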
2018-09-25 15:35:27 +03:00
Jason Petersen d7f10b0896 Rewrite parallel ID test to avoid costly JITting
By setting the CPU tuple cost so high, we were triggering JIT. Instead,
we should use parallel_tuple_cost.

See: rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html
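A sketch of the settings that force a parallel plan without inflating the
total plan cost past the JIT threshold:

```
SET parallel_tuple_cost TO 0;
SET parallel_setup_cost TO 0;
```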
2018-09-24 09:29:53 +03:00
Jason Petersen e62a1ab43d Revert "Disable JIT during PostgreSQL 11 test runs"
This reverts commit a2fb5a84f1.

JIT wasn't actually interfering with the operation of Citus, a test was
just written in a way which caused JIT to run for a function on every
row in a 150k-row table.
2018-09-24 09:29:53 +03:00
Onder Kalaci abc443d7fa Make sure that shard repair considers replication factor 2018-09-21 15:24:49 +03:00
Onder Kalaci c1b5a04f6e Allow partitioned tables with replication factor > 1
With this commit, we allow partitioned distributed tables with
replication factor > 1. However, we also have many restrictions.

In summary, we disallow all kinds of modifications (including DDLs)
on the partition tables. Instead, the user is allowed to run the
modifications on the parent table (see the sketch after the list below).

The necessity for such a restriction has two aspects:
   - We need to acquire shard resource locks appropriately
   - We need to handle marking partitions INVALID in case
     of any failures. Note that, in theory, the parent table
     should also become INVALID, which is too aggressive.
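A sketch of the restriction (table names hypothetical):

```
ALTER TABLE events_2018 ADD COLUMN note text;  -- disallowed on the partition
ALTER TABLE events ADD COLUMN note text;       -- allowed on the parent
```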
2018-09-21 14:40:41 +03:00
velioglu d7f75e5b48 Add citus_lock_waits to show locked distributed queries 2018-09-20 14:13:51 +03:00
Murat Tuncer 0f6e514bfb Fix a bug preventing DROP INDEX on a partitioned table.
The reason for the failure is that PG11 introduced a new relation kind,
RELKIND_PARTITIONED_INDEX, to be used for partitioned indexes.

We expanded our check to cover that case.
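A sketch of the distinction (index name hypothetical):

```
-- PG11 marks partitioned indexes with relkind 'I'
-- (RELKIND_PARTITIONED_INDEX) rather than 'i'
SELECT relkind FROM pg_class WHERE relname = 'events_at_idx';
```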
2018-09-19 13:15:05 +03:00
Marco Slot f34ab55389 Fix bug preventing rollback in stored procedure 2018-08-31 20:49:20 +02:00
Onder Kalaci 41d606b575 Use tree walker instead of mutator in relation visibility
This commit uses *_walker instead of *_mutator for performance reasons.
Given that we're only updating a functionId in the tree, the approach
seems fine.
2018-09-18 09:33:01 +03:00
Marco Slot 55f46acedf Support TABLESAMPLE in router queries 2018-08-31 13:22:38 +02:00
Brian Cloutier 2fae06056a
Attempt to stabilize packet dumps and add them back in 2018-09-12 22:10:39 -06:00
Brian Cloutier 5bde8626c5
Travis uses Pipfile instead of re-specifying deps 2018-09-12 17:37:14 -06:00
Brian Cloutier e61e5d4980
Update mitmproxy version to remove vulnerability warnings 2018-09-12 17:17:22 -06:00
Murat Tuncer ae0032dff8 Add regression tests for procedure calls
PG11 introduced the PROCEDURE concept, similar to FUNCTION.
Procedures allow committing/rolling back within their body.

This commit adds regression tests for procedure calls.
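A minimal sketch of the feature under test (names hypothetical):

```
-- unlike a function, a procedure may commit inside its body
CREATE PROCEDURE bump_counter() LANGUAGE plpgsql AS $$
BEGIN
    UPDATE counters SET value = value + 1;
    COMMIT;
END;
$$;
CALL bump_counter();
```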
2018-09-12 10:28:50 +03:00
velioglu d1f005daac Adds UDFs for testing MX functionalities with isolation tests 2018-09-12 07:04:16 +03:00
Murat Tuncer 470ee0b4d9 Revert multi_partition test back to being required
The test was marked as optional (ignore) by a previous
commit. Reverting that change to make the test required again.
2018-09-11 12:39:44 -06:00
Onder Kalaci d657759c97 Views to provide insight into the distributed transactions on Citus MX
With this commit, we implement two views that are very similar
to pg_stat_activity, but showing queries that are involved in
distributed queries:

    - citus_dist_stat_activity: Shows all the distributed queries
    - citus_worker_stat_activity: Shows all the queries on the shards
                                  that are initiated by distributed queries.

Both views have the same output columns. In very basic terms, both views
are meant to provide useful insight into the distributed
transactions within the cluster. As the names reveal, both views are similar to pg_stat_activity.
Also note that these views can be especially useful on Citus MX clusters.

Note that when the views are queried from the worker nodes, they do not show the distributed
transactions that are initiated from the coordinator node. The reason is that the worker
nodes do not know the host/port of the coordinator. Thus, it is advisable to query the
views from the coordinator.

If we bucket the columns that the views return, we end up with the following:

- Hostnames and ports:
   - query_hostname, query_hostport: The node that the query is running on
   - master_query_host_name, master_query_host_port: The node in the cluster
                                                   that initiated the query.
    Note that for the citus_dist_stat_activity view, query_hostname-query_hostport
    is always the same as master_query_host_name-master_query_host_port. The
    distinction is mostly relevant for citus_worker_stat_activity. For example,
    on Citus MX, a user starts a transaction on Node-A, which starts worker
    transactions on Node-B and Node-C. In that case, the query hostnames would be
    Node-B and Node-C, whereas the master_query_host_name would be Node-A.

- Distributed transaction related things:
    These are mostly the process_id, the distributed transaction id, and the distributed
    transaction number.

- pg_stat_activity columns:
    These two views include all the columns from pg_stat_activity. We're basically joining
    pg_stat_activity with get_all_active_transactions on process_id.
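A sketch of how the views might be queried from the coordinator, using only
the columns described above:

```
SELECT query_hostname, query_hostport,
       master_query_host_name, master_query_host_port, query
FROM citus_worker_stat_activity;
```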
2018-09-10 21:33:27 +03:00
Onder Kalaci 7de5e30432 Change flaky explain test to non-explain
This test's output changes depending on which worker is
picked for explain (e.g., worker port in the output changes).

Given that the test is only aiming to ensure that CTEs inside
CTEs work fine in DML queries, it should be fine to get rid of
the EXPLAIN. The output is verified to be correct as well.
2018-09-10 16:01:30 +03:00
Onder Kalaci 5cf8fbe7b6 Add infrastructure to relation if exists 2018-09-07 14:49:36 +03:00
Onder Kalaci bf28dd0cff Do not recover wrong distributed transactions in MX 2018-09-07 09:52:46 +03:00
Murat Tuncer d8279569b8 Add support for INCLUDE option in index creation
INCLUDE is a new feature in index creation in PG11.
Included column/expression parameters are now forwarded to shards.
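A sketch of the syntax now forwarded to shards (table and column names
hypothetical):

```
-- covering index: order_date is stored in the index but not indexed
CREATE INDEX orders_customer_idx ON orders (customer_id) INCLUDE (order_date);
```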
2018-09-06 19:41:06 +03:00
Murat Tuncer 7d3f7c2bf4 Add regression tests related to new PG11 partitioning features 2018-09-06 19:06:28 +03:00
Murat Tuncer 55cf3e321c Add regression tests for new PG11 window functions
- <offset> preceding/following
- exclude
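A sketch of the new frame options (table name hypothetical):

```
SELECT value,
       sum(value) OVER (ORDER BY value
                        RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING
                        EXCLUDE CURRENT ROW)
FROM measurements;
```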
2018-09-04 10:48:04 +03:00
Onder Kalaci 1b3257816e Make sure that table is dropped before shards are dropped
This commit fixes a bug where a concurrent DROP TABLE deadlocks
with SELECT (or DML) when the SELECT is executed from the workers.

The problem was that Citus used to remove the metadata before
dropping the table on the workers. That creates a time window
in which the SELECT can start running on some of the nodes while the
DROP TABLE runs on some of the other nodes.
2018-09-04 08:57:20 +03:00
Onder Kalaci 2ab0e63b30 Fix flaky test 2018-09-03 14:06:32 +03:00