Commit Graph

79 Commits (33b77235302b4b3dcbfb579605f015a91d3e0d19)

Author SHA1 Message Date
Marco Slot 33b7723530 Use UpdateShardPlacementState where appropriate 2016-10-07 11:59:20 -07:00
Andres Freund 982ad66753 Introduce placement IDs.
So far placements were assigned an Oid, but that was just used to track
insertion order. It also did so incompletely, as it was not preserved
across changes of the shard state. The behaviour around oid wraparound
was also not entirely as intended.

The newly introduced, explicitly assigned, IDs are preserved across
shard-state changes.

The prime goal of this change is not to improve ordering of task
assignment policies, but to make it easier to reference shards.  The
newly introduced UpdateShardPlacementState() makes use of that, and so
will the in-progress connection and transaction management changes.
2016-10-07 11:59:20 -07:00
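
The new placement IDs make a placement addressable by one stable key. As a rough, hedged sketch only (not the actual Citus code path; the catalog name pg_dist_shard_placement and its placementid/shardstate columns are assumptions for illustration), an UpdateShardPlacementState() keyed on a placement ID could look like:

```c
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

/* Hypothetical sketch: update a placement's state by its placement ID.
 * Assumes a pg_dist_shard_placement catalog with a placementid column. */
static void
UpdateShardPlacementState(int64 placementId, int32 shardState)
{
	Oid argTypes[2] = { INT4OID, INT8OID };
	Datum argValues[2];

	argValues[0] = Int32GetDatum(shardState);
	argValues[1] = Int64GetDatum(placementId);

	SPI_connect();
	SPI_execute_with_args("UPDATE pg_dist_shard_placement "
						  "SET shardstate = $1 WHERE placementid = $2",
						  2, argTypes, argValues, NULL, false, 0);
	SPI_finish();
}
```
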
Andres Freund de32b7bbad Don't create hash-table of zero size in TaskHashCreate().
hash_create(), called by TaskHashCreate(), doesn't work correctly for a
zero-sized hash table. This triggers valgrind errors, and could
potentially cause crashes even without valgrind.

This currently happens for Jobs with 0 tasks. These probably should be
optimized away before reaching TaskHashCreate(), but that's a bigger
change.
2016-10-03 13:07:43 -07:00
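
A minimal illustration of the guard (function and variable names are illustrative, not the ones in TaskHashCreate()): clamp the requested size so hash_create() never sees zero elements.

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* Sketch: avoid handing hash_create() a zero element count. */
static HTAB *
CreateTaskHash(long taskCount)
{
	HASHCTL info;
	long hashSize = Max(taskCount, 1);	/* hash_create() misbehaves at 0 */

	memset(&info, 0, sizeof(info));
	info.keysize = sizeof(uint64);
	info.entrysize = sizeof(uint64);
	info.hash = tag_hash;

	return hash_create("Task Hash", hashSize, &info,
					   HASH_ELEM | HASH_FUNCTION);
}
```
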
Andres Freund a6150c2916 Lower "waiting for activity on tasks took longer than" log level.
It's perfectly normal to wait longer in several circumstances, and the
output can lead to spurious regression output changes.
2016-10-03 13:07:43 -07:00
Onder Kalaci a533b8e7c1 Differentiate worker and master job temporary folders
This commit enables creating separate worker and master temporary folders.
This change is important for citus-mx on task-tracker execution. In simple words,
on citus-mx, the worker could actually be responsible for the master tasks as well.
Prior to this change, both the master and worker logic in the task-tracker executor
accessed and used the same files for different purposes, which was dangerous in
certain cases (i.e., when task_tracker_delay is low).
2016-10-03 14:24:08 +03:00
Jason Petersen 5f6264105d
Directly register router xact callbacks in PG_init
Not entirely sure why we went with the shared memory hook approach, but
it causes problems (multiple registration) during crashes. Changing to
a simple direct registration call from PG_init.
2016-09-29 11:43:18 -06:00
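
For reference, a direct registration from _PG_init with the standard PostgreSQL API looks roughly like this (callback name and body are illustrative, not the module's actual callback):

```c
#include "postgres.h"
#include "fmgr.h"
#include "access/xact.h"

PG_MODULE_MAGIC;

static void RouterXactCallback(XactEvent event, void *arg);

void
_PG_init(void)
{
	/* Register the transaction callback directly at library load time,
	 * instead of indirectly via a shared-memory startup hook. */
	RegisterXactCallback(RouterXactCallback, NULL);
}

static void
RouterXactCallback(XactEvent event, void *arg)
{
	/* react to XACT_EVENT_COMMIT / XACT_EVENT_ABORT here */
}
```
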
Jason Petersen 0caf0d95f1
Fix unique-violation-in-xact segfault
An interaction between ReraiseRemoteError and DML transaction support
causes segfaults:

  * ReraiseRemoteError calls PurgeConnection, freeing a connection...
  * That connection is still in the xactParticipantHash

At transaction end, the memory in the freed connection might happen to
pass the "is this connection OK?" check, causing us to try to send an
ABORT over that connection. By removing it from the transaction hash
before calling ReraiseRemoteError, we avoid this possibility.
2016-09-27 16:44:03 -06:00
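
The ordering matters because ReraiseRemoteError does not return. A hedged sketch of the idea (the hash name comes from the commit message; the key type and the ReraiseRemoteError signature are assumptions):

```c
#include "postgres.h"
#include "utils/hsearch.h"
#include <libpq-fe.h>

/* Assumed signature for illustration only. */
extern void ReraiseRemoteError(PGconn *connection, PGresult *result);

/* Sketch: drop the participant entry before reraising, since the reraise
 * frees the connection (via PurgeConnection) and never returns, and the
 * stale hash entry could otherwise be touched at transaction end. */
static void
RemoveParticipantAndReraise(HTAB *xactParticipantHash, int64 shardId,
							PGconn *connection, PGresult *result)
{
	bool found = false;

	hash_search(xactParticipantHash, &shardId, HASH_REMOVE, &found);

	ReraiseRemoteError(connection, result);		/* raises an ERROR */
}
```
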
Metin Doslu c9dcad9b05 Pass text oid instead of invalid oid for null values
Passing invalid oids even for null values in PQsendQueryParams() causes worker
nodes to fail. Therefore, we pass the text oid for null values.
2016-09-27 08:15:46 +03:00
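
The fix is visible at the libpq level: each parameter slot needs a valid type OID even when its value pointer is NULL. A minimal client-side illustration (table and column names are invented; TEXTOID is 25 in pg_type.h):

```c
#include <stddef.h>
#include <libpq-fe.h>

#define TEXTOID 25				/* from catalog/pg_type.h */

/* Sketch: send a two-parameter query where the second value is NULL.
 * Passing InvalidOid (0) for the NULL parameter's type can make the
 * worker fail; passing TEXTOID keeps the bind message well-formed. */
static int
SendInsertWithNull(PGconn *connection)
{
	const Oid paramTypes[2] = { TEXTOID, TEXTOID };
	const char *paramValues[2] = { "hello", NULL };	/* NULL => SQL NULL */

	return PQsendQueryParams(connection,
							 "INSERT INTO events (name, payload) VALUES ($1, $2)",
							 2, paramTypes, paramValues, NULL, NULL, 0);
}
```
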
Andres Freund 776b3868b9
Support NoMovement direction in router executor
This is mainly interesting because it allows the use of RETURN QUERY/RETURN
QUERY EXECUTE and FOR ... IN ... LOOPs in plpgsql.
2016-09-26 18:28:36 -06:00
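
plpgsql's RETURN QUERY machinery issues fetches with direction NoMovementScanDirection, which a forward-only run routine would mishandle. The gist of the support, sketched with standard executor types (function shape is illustrative):

```c
#include "postgres.h"
#include "access/sdir.h"
#include "executor/execdesc.h"

/* Sketch: a run routine that tolerates NoMovement fetches. */
static void
RouterExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
{
	if (ScanDirectionIsNoMovement(direction))
	{
		/* nothing to fetch; return without touching the remote side */
		return;
	}

	/* ... normal forward execution path ... */
}
```
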
Murat Tuncer 2eec0167be
Add support for truncate statement 2016-09-26 18:23:42 -06:00
Jason Petersen 74f4e0003b
Permit multiple DDL commands in a transaction
Three changes here to get to true multi-statement, multi-relation DDL
transactions (same functionality pre-5.2, with benefits of atomicity):

    1. Changed the multi-shard utility hook to always run (consistency
       with router executor hook, removes ad-hoc "installed" boolean)

    2. Change the global connection list in multi_shard_transaction to
       instead be a hash; update related functions to operate on global
       hash instead of local hash/global list

    3. Remove check within DDL code to prevent subsequent DDL commands;
       place unset/reset guard around call to ConnectToNode to permit
       connecting to additional nodes after DDL transaction has begun

In addition, code has been added to raise an error if a ROLLBACK TO
SAVEPOINT is attempted (similar to router executor), and comprehensive
tests execute all multi-DDL scenarios (full success, user ROLLBACK, any
actual errors (say, duplicate index), partial failure (duplicate index
on one node but not others), partial COMMIT (one node fails), and 2PC
partial PREPARE (one node fails)). Interleavings with other commands
(DML, \copy) are similarly all covered.
2016-09-08 22:35:55 -05:00
Jason Petersen 850c51947a
Re-permit DDL in transactions, selectively
Recent changes to DDL and transaction logic resulted in a "regression"
from the viewpoint of users. Previously, DDL commands were allowed in
multi-command transaction blocks, though they were not processed in any
actual transactional manner. We improved the atomicity of our DDL code,
but added a restriction that DDL commands themselves must not occur in
any BEGIN/END transaction block.

To give users back the original functionality (and improved atomicity)
we now keep track of whether a multi-command transaction has modified
data (DML) or schema (DDL). Interleaving the two modification types in
a single transaction is disallowed.

This first step simply permits a single DDL command in such a block,
admittedly an incomplete solution, but one which will permit us to add
full multi-DDL command support in a subsequent commit.
2016-08-30 20:37:19 -06:00
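
One way to picture the bookkeeping (type, variable, and message names here are illustrative, not necessarily those in the source): a per-transaction flag records whether the block has modified data or schema, and the other kind of command errors out.

```c
#include "postgres.h"

typedef enum
{
	XACT_MODIFICATION_NONE = 0,
	XACT_MODIFICATION_DATA,			/* DML has run in this transaction */
	XACT_MODIFICATION_SCHEMA		/* DDL has run in this transaction */
} XactModificationType;

/* Reset to NONE at transaction start by a xact callback (not shown). */
static XactModificationType XactModificationLevel = XACT_MODIFICATION_NONE;

/* Sketch: called before propagating a distributed DDL command. */
static void
ErrorIfTransactionHasSeenDML(void)
{
	if (XactModificationLevel == XACT_MODIFICATION_DATA)
	{
		ereport(ERROR, (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),
						errmsg("distributed DDL commands must not appear in "
							   "transaction blocks that modified data")));
	}

	XactModificationLevel = XACT_MODIFICATION_SCHEMA;
}
```
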
Metin Doslu 75618fc3fb Return false in MultiClientQueryResult() on failing query 2016-08-29 17:05:35 +03:00
Andres Freund 7fdb5fbe29 Skip over unreferenced parameters when router executing prepared statement.
When an unreferenced prepared statement parameter does not explicitly
have a type assigned, we cannot deserialize it to send it to the remote
side.  That commonly happens inside plpgsql functions, where local
variables are passed in as unused prepared statement parameters.
2016-08-05 14:12:06 -07:00
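
In ParamListInfo terms, an unreferenced, untyped parameter shows up with ptype set to InvalidOid. A hedged sketch of the skip (function name and output arrays are illustrative; the conversion of typed parameters is omitted):

```c
#include "postgres.h"
#include "nodes/params.h"

/* Sketch: build the remote parameter arrays, leaving untyped, unreferenced
 * parameters out of serialization by binding them as untyped NULLs. */
static void
ExtractParametersForRemoteExecution(ParamListInfo paramListInfo,
									Oid *parameterTypes,
									const char **parameterValues)
{
	int parameterIndex = 0;

	for (parameterIndex = 0; parameterIndex < paramListInfo->numParams;
		 parameterIndex++)
	{
		ParamExternData *parameterData = &paramListInfo->params[parameterIndex];

		if (parameterData->ptype == InvalidOid)
		{
			/* unreferenced parameter without an assigned type: don't try
			 * to deserialize it, just bind an untyped NULL placeholder */
			parameterTypes[parameterIndex] = 0;
			parameterValues[parameterIndex] = NULL;
			continue;
		}

		/* ... convert parameterData->value to text using its type's
		 * output function (omitted) ... */
	}
}
```
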
Jason Petersen eba8396501
Avoid attempting to lock invalid shard identifier
A recent change generates a "dummy" shard placement with its identifier
set to INVALID_SHARD_ID for SELECT queries against distributed tables
with no shards. Normally, no lock is acquired for SELECT statements,
but if all_modifications_commutative is set to true, we will acquire a
shared lock, triggering an assertion failure within LockShardResource
in the above case.

The "dummy" shard placement is actually necessary to ensure such empty
queries have somewhere to execute, and INVALID_SHARD_ID seems the most
appropriate value for the dummy's shard identifier field, so the most
straightforward fix is to just avoid locking invalid shard identifiers.
2016-08-04 13:49:51 -07:00
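
The fix itself is a small guard; roughly like the sketch below (the surrounding function shape, the INVALID_SHARD_ID value, and the LockShardResource signature are assumptions taken from the commit message):

```c
#include "postgres.h"
#include "storage/lock.h"

#define INVALID_SHARD_ID 0		/* assumed sentinel value */

/* Citus lock helper; signature assumed for illustration. */
extern void LockShardResource(uint64 shardId, LOCKMODE lockmode);

/* Sketch: only lock real shard identifiers; the "dummy" placement used for
 * shard-less SELECTs carries INVALID_SHARD_ID and must not be locked. */
static void
AcquireExecutorShardLock(uint64 shardId, LOCKMODE lockMode)
{
	if (shardId == INVALID_SHARD_ID)
	{
		return;
	}

	LockShardResource(shardId, lockMode);
}
```
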
Marco Slot 9705cbcdf8 Rewrite WorkerShardStats to avoid invalid value bugs 2016-07-29 20:11:18 +02:00
Marco Slot 5e432449ba Add MultiClientExecute and MultiClientValueIsNull for simple remote query execution 2016-07-29 20:07:18 +02:00
Eren Başak bb3893d0d8 Set 1PC as the Default Commit Protocol for DDL Commands
Fixes #679

This change sets the default commit protocol for distributed DDL
commands to '1pc'. If the user issues a distributed DDL command with
this default setting, then once per session a NOTICE message is
shown suggesting that using '2pc' is the extra-safe option.
2016-07-29 16:42:55 +03:00
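
A hedged sketch of the once-per-session notice (the GUC variable, its encoding, and the message wording are illustrative):

```c
#include "postgres.h"

/* 0 = '1pc', 1 = '2pc'; assumed to be registered as a custom enum GUC. */
extern int MultiShardCommitProtocol;

static bool NotifiedAboutCommitProtocol = false;

/* Sketch: before running a distributed DDL command under the default '1pc',
 * tell the user once per session that '2pc' is the extra-safe option. */
static void
NotifyAboutCommitProtocolOnce(void)
{
	if (MultiShardCommitProtocol == 0 && !NotifiedAboutCommitProtocol)
	{
		ereport(NOTICE,
				(errmsg("using one-phase commit for distributed DDL commands"),
				 errhint("You can enable two-phase commit by setting "
						 "citus.multi_shard_commit_protocol to '2pc'.")));
		NotifiedAboutCommitProtocol = true;
	}
}
```
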
Jason Petersen abe7304898
Support SERIAL/BIGSERIAL non-partition columns
This adds support for SERIAL/BIGSERIAL column types. Because we now can
evaluate functions on the master (during execution), adding this is a
matter of ensuring the table creation step works properly.

To accomplish this, I've added some logic to detect sequences owned by
a table (i.e. those related to its columns). Simply creating a sequence
and using it in a default value is insufficient; users who do so must
ensure the sequence is owned by the column using it.

Fortunately, this is exactly what SERIAL and BIGSERIAL do, which is the
use case we're targeting with this feature. While testing this, I found
that worker_apply_shard_ddl_command actually adds shard identifiers to
sequence names, though I found no places that use or test this path. I
removed that code so that sequence names are not mutated and will match
those used by a SERIAL default value expression.

Our use of the new-to-9.5 CREATE SEQUENCE IF NOT EXISTS syntax means we
are dropping support for 9.4 (which is being done regardless, but makes
this change simpler). I've removed 9.4 from the Travis build matrix.

Some edge cases are possible in ALTER SEQUENCE, COPY FROM (on workers),
and CREATE SEQUENCE OWNED BY. I've added errors for each so that users
understand when and why certain operations are prohibited.
2016-07-28 23:55:40 -06:00
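
Sequence ownership for SERIAL/BIGSERIAL columns is recorded in pg_depend as an 'a' (auto) dependency from the sequence to the table column, which is one way to detect the owned sequences this commit cares about. A hedged SQL-through-SPI sketch, not the actual Citus helper:

```c
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

/* Sketch: count the sequences owned by columns of the given relation,
 * as SERIAL/BIGSERIAL defaults record them in pg_depend. */
static uint64
OwnedSequenceCount(Oid relationId)
{
	Oid argTypes[1] = { OIDOID };
	Datum argValues[1] = { ObjectIdGetDatum(relationId) };
	uint64 sequenceCount = 0;

	SPI_connect();
	SPI_execute_with_args("SELECT objid FROM pg_depend "
						  "WHERE classid = 'pg_class'::regclass "
						  "  AND refclassid = 'pg_class'::regclass "
						  "  AND refobjid = $1 AND refobjsubid != 0 "
						  "  AND deptype = 'a'",
						  1, argTypes, argValues, NULL, true, 0);
	sequenceCount = SPI_processed;
	SPI_finish();

	return sequenceCount;
}
```
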
Burak Yucesoy 6f20af9e38 Remove schema name parameter from API functions
We remove the schema name parameter from the worker_fetch_foreign_file and
worker_fetch_regular_table functions. We now send the schema name
concatenated with the table name.
2016-07-28 20:41:05 +03:00
Eren Başak 8a590c9e5b Remove AllFinalizedPlacementsAccessible Function
This change removes the AllFinalizedPlacementsAccessible function since
we open connections to all shard placements before any command is
sent, so we immediately error out if a shard placement is not accessible.
2016-07-28 17:24:37 +03:00
Eren Başak 40f8149320 Allow Cancellation During Distributed DDL Commands
This change allows users to interrupt long-running DDL commands.
Interrupt requests are handled after each DDL command is propagated
to a shard placement, which means the cancel request will generally
be processed right after execution of the DDL finishes on the
current placement.
2016-07-28 17:12:07 +03:00
Murat Tuncer c20080992d Remove PostgreSQL 9.4 support 2016-07-26 20:16:09 +03:00
Onder Kalaci 56c0b0825f Fix bug related to poll timeout
This commit fixes a bug in setting the polling timeout. The code is
updated to conform to the comment that is already in place.
2016-07-26 09:55:47 +03:00
Burak Yucesoy c1a3478c3b Remove warnings on schema creation
Since we now support schema-related operations, there is no need to warn the user about
schema usage.
2016-07-22 18:24:23 +03:00
Burak Yucesoy bdff72ed75 Fix ALTER TABLE SET SCHEMA
Fixes #132

We hook into ALTER ... SET SCHEMA and warn if the user tries to change the schema of a
distributed table.

We also hook into ALTER TABLE ALL IN TABLESPACE statements and warn if citus has
been loaded.
2016-07-22 17:52:40 +03:00
Burak Yucesoy b58872b441
Fix worker_fetch_regular_table with schema
Fixes #504
Fixes #646

We changed the signature of worker_fetch_regular_table to accept a schema name as a parameter
to make it work with schemas.
2016-07-22 00:44:02 -06:00
Jason Petersen 5d525fba24
Permit "single-shard" transactions
Allows the use of modification commands (INSERT/UPDATE/DELETE) within
transaction blocks (delimited by BEGIN and ROLLBACK/COMMIT), so long as
all modifications hit a subset of nodes involved in the first such command
in the transaction. This does not circumvent the requirement that
each individual modification command must still target a single shard.

For instance, after sending BEGIN, a user might INSERT some rows to a
shard replicated on two nodes. Subsequent modifications can hit other
shards, so long as they are on one or both of these nodes.

SAVEPOINTs are supported, though if the user actually attempts to send
a ROLLBACK command that specifies a SAVEPOINT they will receive an
ERROR at the end of the topmost transaction.

Placements are only marked inactive if at least one replica succeeds
in a transaction where others fail. Non-atomic behavior is possible if
the shard targeted by the initial modification within a transaction has
a higher replication factor than another shard within the same block
and a node with the latter shard has a failure during the COMMIT phase.

Other methods of denoting transaction blocks (multi-statement commands
sent all at once and functions written in e.g. PL/pgSQL or other such
languages) are not presently supported; their treatment remains the
same as before.
2016-07-21 15:57:22 -06:00
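
From the client's point of view the feature simply means ordinary transaction blocks work, as long as each statement stays single-shard and later statements stay on nodes touched by the first one. A libpq-level illustration (table and values are invented):

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: a BEGIN ... COMMIT block whose modifications each target a single
 * shard; later statements must hit nodes already involved in the first one. */
static void
RunSingleShardTransaction(PGconn *connection)
{
	const char *statements[] = {
		"BEGIN",
		"INSERT INTO orders (id, amount) VALUES (1, 10.0)",	/* shard A */
		"UPDATE orders SET amount = 12.5 WHERE id = 1",			/* same shard */
		"COMMIT"
	};
	int statementIndex = 0;

	for (statementIndex = 0; statementIndex < 4; statementIndex++)
	{
		PGresult *result = PQexec(connection, statements[statementIndex]);

		if (PQresultStatus(result) != PGRES_COMMAND_OK)
		{
			fprintf(stderr, "statement failed: %s\n",
					PQerrorMessage(connection));
		}
		PQclear(result);
	}
}
```
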
Burak Yucesoy 2f0158dde1 Change worker_apply_shard_ddl_command to accept schema name as parameter
Fixes #565
Fixes #626

To add schema support to citus, we need to schema-prefix all table names, object names, etc.
in the queries sent to worker nodes. However, query deparsing is not available for most
DDL commands, so it is not easy to generate the worker query on the master node.

As a solution, we send schema names to worker nodes along with the shard id and the query
to run, via worker_apply_shard_ddl_command.

To not break the \STAGE command, we pass the public schema as a parameter when calling
worker_apply_shard_ddl_command from there. This will not cause a problem if the user uses
\STAGE in a different schema, because the passed schema name is used only if no schema
name is given in the query.
2016-07-21 14:17:26 +03:00
Metin Doslu a811e09dd4 Add support for prepared statements with parameterized non-partition columns in router executor 2016-07-21 11:09:28 +03:00
Eren Başak 1063fbc80a Fix Unused Parameter isTopLevel in ExecuteDistributedDDLCommand
This change fixes the unused-parameter problem in the
`ExecuteDistributedDDLCommand` function (multi_utility.c). The
parameter is meant to be used in the PreventTransactionChain call.
2016-07-19 14:14:02 +03:00
Eren 3eaff48114 Propagate DDL Commands with 2PC
Fixes #513

This change modifies the DDL Propagation logic so that DDL queries
are propagated via 2-Phase Commit protocol. This way, failures during
the execution of distributed DDL commands will not leave the table in
an intermediate state and the pending prepared transactions can be
commited manually.

DDL commands are not allowed inside other transaction blocks or functions.

DDL commands are performed with 2PC regardless of the value of
`citus.multi_shard_commit_protocol` parameter.

The workflow of the successful case is this:
1. Open individual connections to all shard placements and send `BEGIN`
2. Send `SELECT worker_apply_shard_ddl_command(<shardId>, <DDL Command>)`
to all connections, one by one, in a serial manner.
3. Send `PREPARE TRANSACTION <transaction_id>` to all connections.
4. Send `COMMIT PREPARED <transaction_id>` to all connections.

Failure cases:
- If a worker problem occurs before the sending of all DDL commands is finished, then
all changes are rolled back.
- If a worker problem occurs after all DDL commands are sent but before the
`PREPARE TRANSACTION` commands are finished, then all changes are rolled back.
However, if a worker node has failed, then the prepared transactions on that worker
should be rolled back manually.
- If a worker problem occurs while `COMMIT PREPARED` statements are being sent,
then the prepared transactions on the failed workers should be committed manually.
- If the master fails before the first `PREPARE TRANSACTION` is sent, then nothing is
changed on the workers.
- If the master fails while `PREPARE TRANSACTION` commands are being sent, then the
prepared transactions on the workers should be rolled back manually.
- If the master fails while `COMMIT PREPARED` or `ROLLBACK PREPARED` commands are being
sent, then the remaining prepared transactions on the workers should be handled manually.

This change also helps with #480, since failed DDL changes no longer mark
failed placements as inactive.
2016-07-19 10:44:11 +03:00
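
The happy path above maps directly onto libpq calls. A condensed sketch (connection setup, error handling, and the real transaction-id scheme are omitted; the apply command follows the two-argument form quoted above, and the shard id in the comment is just an example):

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Sketch of the successful 2PC DDL flow described above, with one worker
 * connection per shard placement. */
static void
PropagateDDLWithTwoPhaseCommit(PGconn **connections, int connectionCount,
							   const char *applyDDLCommand,
							   const char *transactionId)
{
	char prepareCommand[256];
	char commitCommand[256];
	int i = 0;

	snprintf(prepareCommand, sizeof(prepareCommand),
			 "PREPARE TRANSACTION '%s'", transactionId);
	snprintf(commitCommand, sizeof(commitCommand),
			 "COMMIT PREPARED '%s'", transactionId);

	for (i = 0; i < connectionCount; i++)
	{
		PQclear(PQexec(connections[i], "BEGIN"));
	}

	/* e.g. "SELECT worker_apply_shard_ddl_command(102008, 'ALTER TABLE ...')" */
	for (i = 0; i < connectionCount; i++)
	{
		PQclear(PQexec(connections[i], applyDDLCommand));
	}

	for (i = 0; i < connectionCount; i++)
	{
		PQclear(PQexec(connections[i], prepareCommand));
	}

	for (i = 0; i < connectionCount; i++)
	{
		PQclear(PQexec(connections[i], commitCommand));
	}
}
```
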
Brian Cloutier 0cad3b22cc Simplify code and fix include guards in citus_clauses 2016-07-13 11:45:51 -07:00
Brian Cloutier af9515f669 Only reparse queries if the planner flags them for reparsing 2016-07-13 11:45:51 -07:00
Brian Cloutier 4820366a6f citus_indent and some renaming 2016-07-13 11:45:51 -07:00
Brian Cloutier ae91768c96 Evaluate functions on the master
- Enables using VOLATILE functions (like nextval()) in INSERT queries
- Enables using STABLE functions (like now()) in targetLists and joinTrees

UPDATE and INSERT can now contain non-immutable functions. INSERT can contain any kind of
expression, while UPDATE can contain any STABLE function, so long as a Var is not passed
into the STABLE function, even indirectly. UPDATE TargetEntrys can now also include Vars.

There's an exception: CASE/COALESCE statements may not contain mutable functions.

Function calls in master_modify_multiple_shards are also evaluated.
2016-07-13 11:45:51 -07:00
Andres Freund c38c1adce1 Combine router executor paths for select and modify commands.
The upcoming RETURNING support would otherwise require too much
duplication.  This contains most of the pieces required for RETURNING
support, except removing the planner checks and adjusting regression
test output.
2016-07-01 13:07:12 -07:00
Andres Freund e1282b6d70 Remember original targetlist in MultiQueryContainerNode().
The old targetlist wasn't used so far, but the upcoming RETURNING
support relies on it.

This also allows us to get rid of some crufty code in
multi_executor.c:multi_ExecutorStart(), which used the worker query's
targetlist instead of the main statement's (which didn't have one up to
now).
2016-07-01 12:50:12 -07:00
Jason Petersen e064cacea9
Minor formatting fix
Noticed that uncrustify doesn't like the array-of-struct literals, so
omitting them from formatting (at least here).
2016-06-28 13:09:57 -06:00
Jason Petersen 8b788eb899
Use literal instead of constant to fix 9.4 build
PG_UINT32_MAX doesn't exist before 9.5. Missed this because I removed
my assert-enabled builds during packaging work.

Fixes #619
2016-06-28 12:36:14 -06:00
Eren 57256b3476 Eliminate compile-time warnings in multi_logical_optimizer.c
This change fixes mixed-declarations-and-code warnings in the
TablePartitioningSupportsDistinct() and
WorkerExtendedOpNode() functions.
2016-06-10 12:27:12 +03:00
Murat Tuncer bb3eee63e7 Refactor task tracker cleanup to enable workers to receive cleanup jobs
The long sleep is replaced by multiple small sleeps. The maximum timeout
is also increased since we do not have to wait that long in most
cases.
2016-06-09 17:03:54 +03:00
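
The pattern is the usual "poll a condition in small slices under one overall deadline"; roughly like the sketch below (names and intervals are illustrative):

```c
#include "postgres.h"
#include "miscadmin.h"

/* Sketch: wait for a condition in short slices instead of one long sleep,
 * so other work (e.g. newly arriving cleanup jobs) is picked up promptly. */
static bool
WaitForConditionWithTimeout(bool (*conditionMet)(void),
							long timeoutMilliseconds)
{
	const long sleepIntervalMilliseconds = 50;
	long elapsedMilliseconds = 0;

	while (!conditionMet())
	{
		if (elapsedMilliseconds >= timeoutMilliseconds)
		{
			return false;
		}

		CHECK_FOR_INTERRUPTS();
		pg_usleep(sleepIntervalMilliseconds * 1000L);
		elapsedMilliseconds += sleepIntervalMilliseconds;
	}

	return true;
}
```
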
Jason Petersen d39508d8b3
Minor formatting/comment fixes 2016-06-08 10:34:07 -06:00
Amos Bird fd1b088208 Add overflow checks. 2016-06-08 10:30:03 +08:00
Amos Bird 107b926262 Eliminate the possibility of counter overflows.
This patch uses scanint8 instead of pg_atoi to make sure the affected-tuples
counter never overflows.
2016-06-08 10:30:03 +08:00
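
pg_atoi() is limited to 32-bit values, while scanint8() parses into an int64 and reports out-of-range input. A hedged sketch of the parsing side (the surrounding counter plumbing is omitted, and the function name is invented):

```c
#include "postgres.h"
#include "utils/int8.h"

/* Sketch: parse an affected-row count returned as text, using scanint8()
 * so values beyond the int32 range don't overflow the counter. */
static int64
ParseAffectedTupleCount(const char *countString)
{
	int64 affectedTupleCount = 0;

	/* errorOK = false: raise an ERROR on malformed or out-of-range input */
	scanint8(countString, false, &affectedTupleCount);

	return affectedTupleCount;
}
```
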
Jason Petersen 9ba02928ac
Refactor ReportRemoteError to remove boolean arg
Broke it into two explicitly-named functions instead: WarnRemoteError
and ReraiseRemoteError.
2016-06-07 12:38:32 -06:00
Metin Doslu 7d0c90b398 Fail fast on constraint violations in router executor 2016-06-07 18:11:17 +03:00
Murat Tuncer 360e884de1 Add enable_ddl_propagation flag to control automatic ddl propagation 2016-06-06 13:42:46 +03:00
Andres Freund 3dac0a4d14
Rely less on remote_task_check_interval.
When executing queries with citus.task_executor = 'real-time', query
execution could, so far, spend a significant amount of time
sleeping. That's because we were
a) sleeping after several phases of query execution, even if we're not
   waiting for network IO
b) sleeping for a fixed amount of time when waiting for network IO;
   often a lot longer than actually required.
Just reducing the amount of time slept isn't a real solution, because
that just increases CPU usage.

Instead, have the real-time executor's ManageTaskExecution return whether
a task is currently being processed, waiting for reads or writes, or
has failed. When all tasks are waiting for IO, use poll() to wait for IO
readiness.

That requires slightly redefining how connection timeouts are handled:
before we counted the number of times ManageTaskExecution() was called,
and compared that with the timeout divided by the task check
interval. That, if processing of tasks took a while, could significantly
increase the time until a timeout occurred. Because it was based on
ManageTaskExecution() being called at a constant interval, this approach
isn't feasible anymore. Instead, measure the actual time since
connection establishment was started. That could in theory, if task
processing takes a very long time, lead to few passes over
PQconnectPoll().

The problem of sleeping too much also exists for the 'task-tracker'
executor, but is generally less problematic there, as processing the
individual tasks usually will take longer. That said, for e.g. the
regression tests it'd be helpful to use a similar approach.
2016-06-02 12:11:16 -06:00
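
The IO-wait part boils down to collecting each in-flight connection's socket and the direction it is waiting on, then handing the set to poll() with the check interval as an upper bound. A simplified sketch (real code must also handle EINTR, allocation failure, and per-task bookkeeping):

```c
#include <poll.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Sketch: block until at least one connection is ready for the IO it is
 * waiting on, or until checkIntervalMilliseconds elapse. wantsWrite[i]
 * says whether connection i is waiting to send rather than to receive. */
static int
WaitForTaskIO(PGconn **connections, const int *wantsWrite, int connectionCount,
			  int checkIntervalMilliseconds)
{
	struct pollfd *pollfds = calloc(connectionCount, sizeof(struct pollfd));
	int readyCount = 0;
	int i = 0;

	for (i = 0; i < connectionCount; i++)
	{
		pollfds[i].fd = PQsocket(connections[i]);
		pollfds[i].events = wantsWrite[i] ? POLLOUT : POLLIN;
	}

	readyCount = poll(pollfds, connectionCount, checkIntervalMilliseconds);

	free(pollfds);
	return readyCount;
}
```
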
Jason Petersen e774f22ed4
Fix formatting
Checking in citus_indent output.
2016-05-27 15:13:28 -06:00