Commit Graph

249 Commits (601b17d544a400c43116efe2d2ef2545ed0cfa50)

Author SHA1 Message Date
Jason Petersen cf775c4773
Improve CONCURRENTLY-related error messages
Thought this looked slightly nicer than the default behavior.

Changed preventTransaction to concurrent to be clearer that this code
path presently affects CONCURRENTLY code only.
2017-04-03 11:19:15 -06:00
Jason Petersen dd9365433e
Update documentation
Ensure all functions have comments, etc.
2017-04-03 11:19:15 -06:00
Jason Petersen d904e96c59
Address MX CONCURRENTLY problems
Adds a non-transactional multi-command method to propagate DDLs to all
MX/metadata-synced nodes.
2017-04-03 11:19:15 -06:00
Jason Petersen dea6c44f75
Remove CONCURRENTLY checks, fix tests
Still pending failure testing, which broke with my recent changes.
2017-04-03 11:19:15 -06:00
Jason Petersen 95d8d27c4f
Change IndexStmt to generate worker DDL on master
Because we can't execute CREATE INDEX CONCURRENTLY during transactions,
worker_apply_shard_ddl_command is insufficient.
2017-04-03 11:19:14 -06:00
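A minimal sketch of the case this commit series addresses, using a hypothetical distributed table `events`: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so the coordinator must generate and send the worker DDL outside the transactional path that worker_apply_shard_ddl_command relies on.

```sql
-- Hypothetical table; assumes a running Citus cluster with workers.
CREATE TABLE events (event_id bigint, payload jsonb);
SELECT master_create_distributed_table('events', 'event_id', 'hash');
SELECT master_create_worker_shards('events', 16, 1);

-- Must not run inside BEGIN ... COMMIT, so the per-shard DDL cannot be
-- wrapped in worker_apply_shard_ddl_command's transaction.
CREATE INDEX CONCURRENTLY events_payload_idx ON events USING gin (payload);
```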
Marco Slot 0f355a4a48 Batch task_tracker_status calls to reduce task-tracker query times 2017-03-31 11:54:11 +02:00
Jason Petersen 34a62abb7d
Address code review comments 2017-03-22 17:29:17 -06:00
Jason Petersen d95b5bbad3
Rework ReplicateGrantStmt to use new flow
This was the impetus for the previous commit that changed from using a
DDLJob * to a List * of them.
2017-03-22 17:29:16 -06:00
Jason Petersen a02a2a90c7
Refactor ExecuteDistDDLCommand to expect struct
Will let us separate out the determination of what to execute from its
actual execution.
2017-03-22 17:21:49 -06:00
velioglu e32aff1a26 Size UDFs implemented
citus_table_size, citus_relation_size and citus_total_relation_size UDFs are implemented.
2017-03-16 13:50:30 +03:00
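A quick usage sketch of the new UDFs on a hypothetical distributed table `github_events`; they roughly mirror pg_table_size, pg_relation_size and pg_total_relation_size, but aggregate sizes across the table's shard placements.

```sql
-- Table name is hypothetical; results are in bytes.
SELECT citus_table_size('github_events');
SELECT citus_relation_size('github_events');
SELECT citus_total_relation_size('github_events');
```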
Metin Doslu 1f838199f8 Use CustomScan API for query execution
Custom Scan is a node in the planned statement which allows external providers
to abstract the data scan, not just for foreign data wrappers but also for regular
relations, so they can plug in their own caching or hardware optimizations.
This sounds like only an abstraction at the data scan layer, but we can use it
as an abstraction for our distributed queries. The only thing we need to do is
find the distributable parts of the query, plan them, and replace them with
a Citus Custom Scan. Then, whenever PostgreSQL hits this custom scan node in
its Volcano-style execution, it calls our callback functions, which run the
distributed plan and provide tuples to the upper node as if scanning a regular
relation. This means fewer code changes, fewer bugs and more supported features
for us!

First, in the distributed query planner phase, we create a Custom Scan which
wraps the distributed plan. For the real-time and task-tracker executors, we add
this custom plan under the master query plan. For the router executor, we directly
pass the custom plan because there is no master query. Then we simply let
the PostgreSQL executor run this plan. When it hits the custom scan node, we
call the related executor parts for the distributed plan, fill the tuple store in
the custom scan, and return results to the PostgreSQL executor in Volcano style,
one tuple per XXX_ExecScan() call.

* Modify planner to utilize Custom Scan node.
* Create different scan methods for different executors.
* Use native PostgreSQL Explain for master part of queries.
2017-03-14 12:17:51 +02:00
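The new node is visible from plain SQL: EXPLAIN on a distributed query now shows a Citus custom scan wrapping the distributed plan. A sketch on a hypothetical table `orders` (the exact node label depends on which executor is chosen):

```sql
EXPLAIN SELECT count(*) FROM orders;
-- Illustrative output shape:
--  Aggregate
--    ->  Custom Scan (Citus Real-Time)
--          ... distributed query plan for the shards ...
```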
Andres Freund 52358fe891 Initial temp table removal implementation 2017-03-14 12:09:49 +02:00
Jason Petersen 6f4886cd11
Revert "Remove unused SendCommandToWorker"
This reverts commit c8c308c109.
2017-03-13 15:48:51 -06:00
Brian Cloutier c8c308c109 Remove unused SendCommandToWorker 2017-03-08 16:30:23 +03:00
Brian Cloutier 95936ff481 Remove unused master_get_round_robin_candidate_nodes 2017-03-07 11:51:24 +03:00
Brian Cloutier 807beb7bc0 Remove master_get_local_first_candidate_nodes 2017-03-07 11:50:59 +03:00
Murat Tuncer 72027f2eba Remove default clause from shard DDL when sequences are used 2017-03-01 17:32:48 +03:00
Marco Slot db98c28354 Address review feedback in COPY refactoring 2017-02-28 17:39:45 +01:00
Marco Slot bf3541cb24 Add CitusCopyDestReceiver infrastructure 2017-02-28 17:24:45 +01:00
Eren Basak df9cf346ee Enforce statement based replication on old APIs and non-hash tables
This change ignores the `citus.replication_model` setting and uses
statement-based replication for

- Tables distributed via the old `master_create_distributed_table` function
- Append- and range-partitioned tables, even if created via the
`create_distributed_table` function

This seems like the easiest solution to #1191, without changing the existing
behavior and harming existing users with custom scripts.

This change also prevents RF>1 for streaming-replicated tables in `master_create_worker_shards`.

Prior to this change, the `master_create_worker_shards` command was not checking
the replication model of the target table, thus allowing RF>1 with
streaming-replicated tables. With this change, `master_create_worker_shards`
errors out in that case.
2017-02-16 10:37:53 -08:00
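A sketch of the pinned-down behavior, using a hypothetical table `t`: tables distributed through the old API keep statement-based replication even when `citus.replication_model` is set to streaming, and `master_create_worker_shards` now checks the target table's replication model before allowing RF>1.

```sql
-- Hypothetical table; the streaming setting is ignored for the old API.
SET citus.replication_model TO streaming;
CREATE TABLE t (a int);
SELECT master_create_distributed_table('t', 'a', 'hash');
-- Statement-based replication applies, so RF>1 is still allowed here;
-- for a streaming-replicated table this call would now error out.
SELECT master_create_worker_shards('t', 16, 2);
```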
Brian Cloutier 1173f3f225 Refactor CheckShardPlacements
- Break CheckShardPlacements into multiple functions (The most important
  is MarkFailedShardPlacements), so that we can get rid of the global
  CoordinatedTransactionUses2PC.
- Call MarkFailedShardPlacements in the router executor, so we mark
  shards as invalid and stop using them while inside transaction blocks.
2017-01-26 13:20:45 +02:00
Marco Slot ba940a1de9 Use coordinator instead of schema node in terminology 2017-01-25 11:07:23 +01:00
Burak Yucesoy 484cb12cd0 Add LoadShardPlacement UDF
This UDF returns a shard placement from the cache, given a shard id and placement id. At the
moment it iterates over all shard placements of the given shard via ShardPlacementList and
searches for the given placement id in that list, which is not a good solution performance-wise.
However, currently this function will only be used when there is a failed transaction.
If a need arises we can optimize it in the future.
2017-01-23 21:04:57 +03:00
Marco Slot 1585c02322 Use placement connection API for multi-shard transactions 2017-01-23 18:34:50 +01:00
Andres Freund 6939cb8c56 Hack up PREPARE/EXECUTE for nearly all distributed queries.
All router, real-time, task-tracker plannable queries should now have
full prepared statement support (and even use router when possible),
unless they don't go through the custom plan interface (which
basically just affects LANGUAGE SQL (not plpgsql) functions).

This is achieved by forcing postgres' planner to always choose a
custom plan, by assigning very low costs to plans with bound
parameters (i.e. ones where the postgres planner replanned the query
upon EXECUTE with all parameter values provided), instead of the
generic one.

This requires some trickery, because for custom plans to work the
costs for a non-custom plan have to be known, which means we can't
error out when planning the generic plan.  Instead we have to return a
"faux" plan, that'd trigger an error message if executed.  But due to
the custom plan logic that plan will likely (unless called by an SQL
function, or because we can't support that query for some reason) not
be executed; instead the custom plan will be chosen.
2017-01-23 09:23:50 -08:00
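A sketch of what now works, on a hypothetical distributed table `users`; each EXECUTE takes the custom plan path, so the query is planned with the bound parameter value and can be routed to a single shard.

```sql
-- Hypothetical table distributed by user_id.
PREPARE user_by_id(bigint) AS
  SELECT name FROM users WHERE user_id = $1;
EXECUTE user_by_id(42);  -- planned with 42 bound, router-executed
```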
Andres Freund c244b8ef4a Make router planner error handling more flexible.
So far the router planner had encapsulated different functionality in
MultiRouterPlanCreate. Modifications always go through the router, selects
sometimes. Modifications always error out if the query is unsupported;
selects return NULL. The error handling especially is a problem for
the upcoming extension of prepared statement support.

Split MultiRouterPlanCreate into CreateRouterPlan and
CreateModifyPlan, and change them to not throw errors.

Instead, errors are now reported by setting the new
MultiPlan->planningError.

Callers of router planner functionality now have to throw errors
themselves if desired, but also can skip doing so.

This is a pre-requisite for expanding prepared statement support.

While touching all those lines, improve a number of error messages by
getting them closer to the postgres error message guidelines.
2017-01-23 09:23:50 -08:00
Andres Freund 557ccc6fda Support for deferred error messages.
It can be useful, e.g. in the upcoming prepared statement support, to
be able to return an error from a function that is not raised
immediately, but can be thrown later.  That allows us e.g. to attempt to
plan a statement using different methods and to create good error
messages in each planner, but to only error out after all planners
have been run.

To enable that, create support for deferred error messages that can be
created (supporting errorcode, message, detail, hint) in one function,
and then thrown in a different place.
2017-01-23 09:23:50 -08:00
Jason Petersen 56197dbdba
Add replication_model GUC
This adds a replication_model GUC which is used as the replication
model for any new distributed table that is not a reference table.
With this change, tables with replication factor 1 are no longer
implicitly MX tables.

The GUC is similarly respected during empty shard creation for e.g.
existing append-partitioned tables. If the model is set to streaming
while replication factor is greater than one, table and shard creation
routines will error until this invalid combination is corrected.

Changing this parameter requires superuser permissions.
2017-01-23 09:05:14 -07:00
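A sketch of the new GUC (superuser-only, per the above); combining streaming replication with a replication factor above one now errors out at table/shard creation time.

```sql
-- Illustrative values; changing citus.replication_model requires superuser.
SET citus.replication_model TO streaming;
SET citus.shard_replication_factor TO 2;
-- Table and shard creation routines now error out until this
-- invalid combination is corrected.
```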
Brian Cloutier fe5465aa4e Port master_append_table_to_shard to new connection API (#1149)
If any placements fail it doesn't update shard statistics on those placements.

A minor enabling refactor: Make CoordinatedTransactionUses2PC public (it used to be CoordinatedTransactionUse2PC but that symbol already existed, so renamed it as well)
2017-01-23 15:57:44 +02:00
Marco Slot ea855ddf86 Add an enable_deadlock_prevention flag to allow router transactions to expand to multiple nodes 2017-01-22 17:31:24 +01:00
Andres Freund 78b085106a Remove connection_cache.[ch]. 2017-01-21 09:01:15 -08:00
Andres Freund 6ec34bed84 Remove remnants of commit_protocol.[ch]. 2017-01-21 09:01:15 -08:00
Andres Freund fd717d6da9 Consistently use the libpq forward declaration in remote_commands.h. 2017-01-21 09:01:14 -08:00
Murat Tuncer d76f781ae4 Convert multi copy to use new connection api
This enables proper transactional behaviour for copy and relaxes some
restrictions like combining COPY with single-row modifications. It
also provides the basis for relaxing restrictions further, and for
optionally allowing connection caching.
2017-01-20 19:15:19 -08:00
Andres Freund 3a36d32c43 Mark some now unnecessarily exposed multi_planner.c functions static. 2017-01-20 12:31:56 -08:00
Andres Freund 0f28a11970 Remove citus.explain_multi_logical/physical_plan.
They make fixing EXPLAIN for prepared statements harder, and they don't
really fit into EXPLAIN in the first place.  Additionally they're
currently not exercised in any tests.
2017-01-20 12:31:19 -08:00
Metin Doslu 2bd8f8f12e Add a function to delete shard metadata from MX nodes 2017-01-20 14:38:01 +02:00
Metin Doslu 93e626c896 Refactor get_shard_id_for_distribution_column() and other minor changes 2017-01-20 14:38:01 +02:00
Eren Basak e7c15ecc1f Make `upgrade_to_reference_table` function MX-compatible 2017-01-18 16:49:50 +03:00
Eren Basak be78769ae4 Propagate new reference table placement metadata on `master_add_node` 2017-01-18 15:59:06 +03:00
Eren Basak b686d9a025 Add Sequence Support for MX Tables
This change adds support for serial columns to be used with MX tables.
Prior to this change, sequences of serial columns were created on all
workers (so that shards could be created) but never used. With MX, we
need to configure the sequences so that the sequences on each worker
generate unique values. This is done by setting the MINVALUE, MAXVALUE
and START values of the sequence.
2017-01-18 09:43:38 +03:00
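The idea, sketched with hypothetical numbers: each worker's copy of a serial column's sequence is fenced into its own value range, so values generated on different nodes never collide. The real ranges are derived from the node's group ID.

```sql
-- Hypothetical range for one worker's copy of an MX table's sequence.
ALTER SEQUENCE mx_table_id_seq
  MINVALUE 281474976710657 MAXVALUE 562949953421313
  START WITH 281474976710657 RESTART WITH 281474976710657;
```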
Andres Freund bdef35ac14 Query placementId in RemoteFinalizedShardPlacementList().
Not having the id in the ShardPlacement struct causes issues while
making COPY use the placement-aware connection management.
2017-01-17 13:27:26 -08:00
Brian Cloutier b1b2b4fadf Create ExecuteOptionalRemoteCommand
A small refactor which pulls some code out of `RecoverWorkerTransactions`
and into `remote_commands.c`. This code block currently only occurs in
`RecoverWorkerTransactions` but will be useful to other functions
shortly.

Unfortunately we couldn't call it `ExecuteRemoteCommand`, that name was
already taken.
2017-01-17 17:04:37 +02:00
Andres Freund 6972186652 Add ShardPlacement fields required for colocated placement connection mapping. 2017-01-16 13:42:54 -08:00
Burak Yucesoy 3315ae6142 Remove placement metadata of reference tables after master_remove_node
With this change, we start to delete the placements of reference tables on a given worker
node after a master_remove_node UDF call. We remove the placement metadata on the master
node but do not drop the actual shard from the worker node. There are two reasons for that
decision: first, it is not critical to DROP the shards on the workers, because Citus will
ignore them as long as the node is removed from the cluster, and if we add that node back
to the cluster we will DROP and recreate all reference tables. Second, if the node is
unreachable, it becomes complicated to cover the failure cases and provide transaction support.
2017-01-16 11:24:56 +03:00
Murat Tuncer 77f8db6b14 Add view support
Enables the use of views within distributed queries.
Users can create and use views on distributed tables/queries
just as they would with regular queries.

After this change router queries have full support for views;
INSERT INTO ... SELECT queries support reading from views, but not
writing into them. Outer joins have limited support and
error out in certain cases, such as when a view is on the inner side
of the outer join.

Although PostgreSQL supports writing into views under certain circumstances,
we disallow that for distributed views.
2017-01-13 09:39:42 +03:00
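A sketch of what this enables, with hypothetical objects; the view is expanded into the distributed query for reads, while writes through distributed views stay disallowed.

```sql
-- Hypothetical distributed table 'events' and a view over it.
CREATE VIEW recent_events AS
  SELECT user_id, payload FROM events
  WHERE created_at > now() - interval '1 day';

SELECT count(*) FROM recent_events WHERE user_id = 42;  -- supported
INSERT INTO recent_events VALUES (42, '{}');            -- disallowed
```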
Onder Kalaci aed5f817fa Refactor CheckShardPlacements() and improve support for node removal
This commit refactors CheckShardPlacements() so that it only considers
modifyingConnection. Also, it skips nodes which are removed from the
cluster.
2017-01-12 20:10:10 +02:00
Andres Freund b813b39241 Cache ShardPlacements in metadata cache.
So far we've reloaded them frequently. Besides avoiding that cost -
noticeable for some workloads with large shard counts - it makes it
easier to add information to ShardPlacements that help us make
placement_connection.c colocation aware.
2017-01-10 18:14:18 -08:00
Andres Freund 7320c17f00 Convert router executor to placement connection management infrastructure.
Remove the router specific transaction and shard management, and
replace it with the new placement connection API.  This mostly leaves
behaviour alone, except that it is now, inside a transaction, legal to
select from a shard to which no pre-existing connection exists.

To simplify code the code handling task executions for select and
modify has been split into two - the previous coding was starting to
get confusing due to the amount of only conditionally applicable code.

Modification connections & transactions are now always established in
parallel, not just for reference tables.
2017-01-09 13:13:02 -08:00
Andres Freund bfa742d794 Centralized shard/placement connection and state management.
Currently there are several places in citus that map placements to
connections and that manage placement health. Centralize this
knowledge.  Because of the centralized knowledge about which
connection has previously been used for which shard/placement, this
also provides the basis for relaxing restrictions around combining
various forms of DDL/DML.

Connections for a placement can now be acquired using
GetPlacementConnection(). If the connection is used for DML or DDL the
FOR_DDL/DML flags should be used respectively.  If an individual
remote transaction fails (but the transaction on the master succeeds)
and FOR_DDL/DML have been specified, the placement is marked as
invalid, unless that'd mark all placements for a shard as invalid.
2017-01-09 13:13:02 -08:00
Andres Freund d256f3fca9 Remove unused LogPreparedTransactions() function.
This is unused since 92c7567008.
2017-01-06 09:15:01 -08:00
Burak Yucesoy 9c9f479e4b Replicate reference tables when new node is added
With this change, we start to replicate all reference tables to the new node when a new node
is added to the cluster with the master_add_node command. We also update the replication
factor of the reference tables' colocation group.
2017-01-05 14:30:41 +03:00
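The resulting behavior, sketched (hostname/port illustrative): adding a worker now also copies every reference table shard onto it.

```sql
SELECT master_add_node('worker-2', 5432);
-- All reference tables are replicated to worker-2 as part of this call,
-- and their colocation group's replication factor is updated.
```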
Onder Kalaci 6d050fd677 Use 2PC for reference table modification
With this commit, we ensure that the router executor always uses
2PC for reference table modifications and never marks their
placements as INVALID.
2017-01-04 12:46:35 +02:00
Burak Yucesoy 31cd2357fe Add upgrade_to_reference_table
With this change we introduce a new UDF, upgrade_to_reference_table, which can be used to
upgrade existing broadcast tables to reference tables. For upgrading, we require that the
given table contain only one shard.
2017-01-02 17:54:42 +02:00
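Typical usage, on a hypothetical single-shard broadcast table:

```sql
-- The given table must contain exactly one shard.
SELECT upgrade_to_reference_table('lookup_table');
```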
Eren Basak 7e09bd6836 Error on Unsupported Features on Workers
This change makes the metadata workers error out on unsupported commands.
2017-01-02 16:03:45 +03:00
Marco Slot 59bc5972fa
Use MultiConnection in multi-shard transactions 2016-12-30 14:43:21 -07:00
Metin Doslu 1ddc70ca55 Add binary search capability to ShardIndex()
Renamed FindShardIntervalIndex() to ShardIndex() and added binary search
capability. It used to assume that hash-partitioned tables are always
uniformly distributed, which is not true once the upcoming tenant isolation
feature is applied. This commit also reduces code duplication.
2016-12-30 18:55:34 +02:00
Marco Slot 6cbc1945f9 Enable transaction recovery in connection API 2016-12-23 16:14:29 +01:00
Marco Slot 92c7567008 Convert worker_transactions to new connection API 2016-12-23 16:14:29 +01:00
Marco Slot 00d55ad957 Add a wrapper for PQsendQuery 2016-12-23 16:14:29 +01:00
Marco Slot 87c62d598e Connectionapify SendCommandListToWorkerInSingleTransaction 2016-12-23 16:14:29 +01:00
Eren Basak 31af40cc26 Handle MX tables on workers during drop table commands 2016-12-23 15:43:32 +03:00
Eren Basak bed2e353db Propagate `mark_tables_colocated` changes in `pg_dist_partition` table to metadata workers. 2016-12-23 15:43:32 +03:00
Eren Basak 71d73ec5ff Propagate DDL commands to metadata workers for MX tables 2016-12-23 15:43:32 +03:00
Eren Basak 048fddf4da Propagate MX table and shard metadata on `create_distributed_table` call 2016-12-23 15:43:32 +03:00
Marco Slot 11031bcf55 Enable evaluation of stable functions in INSERT..SELECT 2016-12-23 12:47:21 +01:00
Marco Slot d745d7bf70 Add explicit RelationShards mapping to tasks 2016-12-23 10:23:43 +01:00
Onder Kalaci 9f0bd4cb36 Reference Table Support - Phase 1
With this commit, we implemented some basic features of reference tables.

To start with, a reference table is
  * a distributed table without a distribution column defined on it,
  * single sharded,
  * with its single shard replicated to all nodes.

Reference tables follow the same code path as single-sharded
tables; thus, broadcast JOINs are applicable to reference tables.
But since the table is replicated to all nodes, table fetching is
no longer required.

Reference tables support uniqueness constraints on any column.

Reference tables can be used in INSERT INTO .. SELECT queries with
the following rules:
  * If a reference table is in the SELECT part of the query, it is
    safe to join it with other reference tables and/or hash-partitioned
    tables.
  * If a reference table is in the INSERT part of the query, all
    other participating tables should be reference tables.

Reference tables follow the regular co-location structure. Since
all reference tables are single sharded and replicated to all nodes,
they are always co-located with each other.

Queries involving only reference tables always use the router planner
and executor.

Reference tables can have composite-typed columns, and there is no need
to create/define the necessary support functions.

All modification queries, master_* UDFs, EXPLAIN, DDLs, TRUNCATE,
sequences, transactions, COPY, and schema support work on reference
tables as expected. Plus, all the prerequisites associated with
distribution columns no longer apply.
2016-12-20 14:09:35 +02:00
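A sketch of the INSERT INTO .. SELECT rules above, with hypothetical tables, assuming a create_reference_table() entry point for creating the single-shard, replicate-everywhere table described in this commit.

```sql
-- Hypothetical tables; 'users' is hash-distributed on user_id.
CREATE TABLE countries (code text PRIMARY KEY, name text);
SELECT create_reference_table('countries');

-- Safe: the reference table is on the SELECT side, joined with a
-- hash-partitioned table.
INSERT INTO user_countries (user_id, country_name)
SELECT u.user_id, c.name
FROM users u JOIN countries c ON (u.country_code = c.code);
```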
Eren Basak 296e0bd33a Add citus.node_connection_timeout GUC 2016-12-20 14:11:37 +03:00
Murat Tuncer c3a60bff70 Make router planner active at all times
We used to disable the router planner and executor
when the task executor was set to task-tracker.

This change enables router planning and execution
at all times, regardless of the task execution mode.

We are introducing a hidden flag, enable_router_execution,
to enable/disable router execution. Its default value is
true. Users may disable router planning by setting it to false.
2016-12-20 11:24:01 +03:00
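The new hidden flag in use (illustrative):

```sql
SHOW citus.enable_router_execution;           -- true by default
SET citus.enable_router_execution TO false;   -- disables router planning
```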
Jason Petersen 6f95875191 Add targeted VACUUM/ANALYZE support
Adds support for VACUUM and ANALYZE commands which target a specific
distributed table. After grabbing the appropriate locks, this
implementation sends VACUUM commands to each placement (using one
connection per placement). These commands are sent in parallel, so users
with large tables will benefit from sharding. Except for VERBOSE, all
VACUUM and ANALYZE options are supported, including the explicit
column list used by ANALYZE.

As with many of our utility commands, the local command also runs. In
the VACUUM/ANALYZE case, the local command is executed before any
remote propagation. Because error handling is managed after local
processing, this can result in a VACUUM completing locally but erroring
out when distributed processing commences: a minor technicality in all
cases, as there isn't really much reason to ever roll back a VACUUM (an
impossibility in any case, as VACUUM cannot run within a transaction).

Remote propagation of targeted VACUUM/ANALYZE is controlled by the
enable_ddl_propagation setting; warnings are emitted if such a command
is attempted when DDL propagation is disabled. Unqualified VACUUM or
ANALYZE is not handled, but a warning message informs the user of this.

Implementation note: this commit adds a "BARE" value to
MultiShardCommitProtocol. When active, no BEGIN command is ever sent to
remote nodes, useful for commands such as VACUUM/ANALYZE which must not
run in a transaction block. This value is not user-facing and is reset
at transaction end.
2016-12-16 16:59:06 -07:00
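A sketch of the supported forms on a hypothetical distributed table; each command runs locally first and is then propagated, one connection per placement, in parallel.

```sql
-- Table and column names are hypothetical.
VACUUM (FREEZE) github_events;      -- propagated to all placements
ANALYZE github_events (repo_id);    -- explicit column list is supported
-- VACUUM VERBOSE github_events;    -- VERBOSE is the unsupported exception
-- VACUUM;                          -- unqualified: not propagated, warns
```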
Metin Doslu 20b8f1feeb Refactor distribution column type check for colocation 2016-12-16 15:24:45 +02:00
Metin Doslu e2d0bd38f2 Don't allow tables with different replication models to be colocated 2016-12-16 15:23:49 +02:00
Metin Doslu 86cca54857 Add colocate_with option to create_distributed_table()
With this commit, we support three modes of colocate_with: (i) default, (ii) none,
and (iii) a specific table name.
2016-12-16 14:53:35 +02:00
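The three modes, sketched with hypothetical tables:

```sql
SELECT create_distributed_table('events', 'user_id');          -- (i) default
SELECT create_distributed_table('standalone', 'id',
                                colocate_with => 'none');      -- (ii) none
SELECT create_distributed_table('clicks', 'user_id',
                                colocate_with => 'events');    -- (iii) table
```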
Metin Doslu edbedbd744 Move colocation related functions to colocation_utils.c 2016-12-16 14:52:40 +02:00
Eren Basak b94647c3bc Propagate CREATE SCHEMA commands with the correct AUTHORIZATION clause in start_metadata_sync_to_node 2016-12-14 10:53:12 +03:00
Eren Basak fb08093b00 Make start_metadata_sync_to_node UDF to propagate foreign-key constraints 2016-12-14 10:53:12 +03:00
Eren Basak 5e96e4f60e Make truncate triggers propagated on start_metadata_sync_to_node call 2016-12-14 10:53:10 +03:00
Eren Basak 9eff968d1f Add start_metadata_sync_to_node UDF
This change adds a `start_metadata_sync_to_node` UDF which copies the metadata about nodes and MX tables
from the master to the specified worker, sets the worker's local group ID, and sets its hasmetadata
flag to true so that it receives future DDL changes.
2016-12-13 10:48:03 +03:00
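Typical usage (hostname/port illustrative):

```sql
-- Copies node and MX table metadata to the worker and marks it
-- hasmetadata = true so it receives future DDL changes.
SELECT start_metadata_sync_to_node('worker-1', 5432);
```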
Andres Freund 80b34a5d6b Integrate router executor into transaction management framework.
One less place managing remote transactions. It also makes it fairly
easy to use 2PC for certain modifications (e.g. reference tables). Just
issue a CoordinatedTransactionUse2PC(). If every placement failure
should cause the whole transaction to abort, additionally mark the
relevant transactions as critical.
2016-12-12 15:18:12 -08:00
Andres Freund fa5e202403 Convert multi_shard_transaction.[ch] to new framework. 2016-12-12 15:18:12 -08:00
Andres Freund fc298ec095 Coordinated remote transaction management. 2016-12-12 15:18:12 -08:00
Andres Freund 6eeb43af15 Add PQgetResult() wrapper handling interrupts.
This makes it possible to implement cancelling queries blocked on
communication with remote nodes.
2016-12-12 15:18:12 -08:00
Andres Freund 2374905c89 Move multi_client_executor.[ch] on top of connection_management.[ch].
That way connections can be automatically closed after errors and such,
and the connection management infrastructure gets wider testing.  It
also fixes a few issues around connection string building.
2016-12-07 11:44:24 -08:00
Andres Freund a77cf36778 Use connection_management.c from within connection_cache.c.
This is a temporary step towards removing connection_cache.c.
2016-12-07 11:44:24 -08:00
Andres Freund 3505d431cd Add initial helpers to make interactions with MultiConnection et al. easier.
This includes basic infrastructure for logging of commands sent to
remote/worker nodes. Note that this has no effect as of yet, since no
callers are converted to the new infrastructure.
2016-12-07 11:44:24 -08:00
Andres Freund 3223b3c92d Centralized Connection Lifetime Management.
Connections are tracked and released by integrating into postgres'
transaction handling. That allows connections to be used without
having to disable interrupts or use PG_TRY/CATCH blocks
to avoid leaking connections.

This is intended to eventually replace multi_client_executor.c and
connection_cache.c, and to provide the basis of a centralized
transaction management.

The newly introduced transaction hook should, in the future, be the only
one in citus, to allow for proper ordering between operations.  For now
this central handler is responsible for releasing connections and
resetting XactModificationLevel after a transaction.
2016-12-07 11:43:18 -08:00
Andres Freund 883af02b54 Add some basic helpers to make use of dynahash hashtables easier. 2016-12-06 14:15:36 -08:00
Eren Basak fb88b167a7 Propagate node add/remove to the nodes with hasmetadata=true
This change propagates the changes done by `master_add_node` and `master_remove_node`
to the workers that contain metadata.
2016-12-02 14:43:32 +03:00
Brian Cloutier a4096c9f45 Remove dead code: ResponsiveWorkerNodeList 2016-12-02 13:14:11 +03:00
Marco Slot f6b3af7a49 Use co-located shard ID in multi_shard_transaction 2016-11-02 11:01:19 +01:00
Önder Kalacı 83e1719541 Always CASCADE while dropping a shard 2016-11-01 10:16:34 +01:00
Metin Doslu 4e555880b7 Add mark_tables_colocated() to update colocation groups
Added a new UDF, mark_tables_colocated(), to colocate tables with the same
configuration (shard count, shard replication count and distribution column type).
2016-10-26 17:29:03 +03:00
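A usage sketch with hypothetical tables; all tables must already share shard count, shard replication factor and distribution column type.

```sql
-- 'events' becomes the colocation reference for the listed tables.
SELECT mark_tables_colocated('events', ARRAY['clicks', 'page_views']);
```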
Marco Slot 275378aa45 Re-acquire metadata locks in RouterExecutorStart 2016-10-26 14:34:59 +02:00
Brian Cloutier 1e6d1ef67e Fix segfault during EXPLAIN EXECUTE
Fix citusdata/citus#886

The way postgres' explain hook is designed means that our hook is never
called during EXPLAIN EXECUTE. So, we special-case EXPLAIN EXECUTE by
catching it in the utility hook.  We then replace the EXECUTE with the
original query and pass it back to Citus.
2016-10-26 15:18:42 +03:00
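The previously crashing sequence, sketched on a hypothetical distributed table:

```sql
PREPARE user_by_id(bigint) AS
  SELECT * FROM users WHERE user_id = $1;
-- Used to segfault (citusdata/citus#886); the utility hook now swaps in
-- the original query so EXPLAIN works.
EXPLAIN EXECUTE user_by_id(42);
```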
Andres Freund fcd150c7c8 Invalidate relcache after pg_dist_shard_placement changes.
This forces prepared statements to be re-planned after changes to the
placement metadata. There are some locking issues remaining, but that's
a separate task.

Also add regression tests verifying that invalidations take effect on
prepared statements.
2016-10-26 03:36:35 -07:00
Onder Kalaci 1673ea937c Feature: INSERT INTO ... SELECT
This commit adds INSERT INTO ... SELECT feature for distributed tables.

We implement INSERT INTO ... SELECT by pushing down the SELECT to
each shard. To compute that we use the router planner, adding
an "uninstantiated" constraint that the partition column be equal to a
certain value. standard_planner() distributes that constraint to all
the tables where it knows how to push the restriction safely; an example
is tables that are connected via equi-joins.

The router planner then iterates over the target table's shards;
for each, we replace the "uninstantiated" restriction with one that
PruneShardList() handles. We do so by replacing the partitioning qual
parameter added in multi_planner() with the current shard's
actual boundary values. We also add the current shard's boundary values to the
top-level subquery to ensure that, even if the partitioning qual is
not distributed to all the tables, we never run the queries on the shards
that don't match the current shard boundaries. Finally, we perform the
normal shard pruning to decide whether to push the query to the
current shard or not.

We do not support certain SQL constructs in the subquery; these are
described/commented in ErrorIfInsertSelectQueryNotSupported().

We also added some locking in the router executor. When an INSERT/SELECT command
runs on a distributed table with replication factor >1, we need to ensure that
it sees the same result on each placement of a shard. So we added the ability
for the router executor to take exclusive locks on the shards from which the SELECT
in an INSERT/SELECT reads, in order to prevent concurrent changes. This is not an
optimal solution, but it is simple and correct. The
citus.all_modifications_commutative setting can be used to avoid this aggressive
locking: an INSERT/SELECT whose filters are known to exclude any ongoing writes can
be marked as commutative. See RequiresConsistentSnapshot() for the details.

We also moved the decision of whether the multiPlan should be executed by
the router executor or not to the planning phase. This allowed us to
integrate multi-task router executor tasks into the router executor smoothly.
2016-10-26 10:01:00 +03:00
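A sketch of a supported INSERT INTO ... SELECT on hypothetical co-located tables, both hash-distributed on user_id; the SELECT is pushed down shard by shard with the target shard's boundary values injected.

```sql
INSERT INTO daily_user_counts (user_id, day, event_count)
SELECT user_id, created_at::date, count(*)
FROM events
GROUP BY user_id, created_at::date;
```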
Onder Kalaci e0d83d65af Add ability to reorder target list for INSERT/SELECT queries
The necessity for this functionality comes from the fact that ruleutils.c is not supposed to be
used on "rewritten" queries (i.e. ones that have been passed through QueryRewrite()).
Query rewriting is the process in which views and such are expanded,
and INSERT/UPDATE target lists are reordered to match the physical order,
defaults, etc. For the details of reordering, see transformInsertRow().
2016-10-26 10:00:03 +03:00
Marco Slot 271b20a23e Parallelise DDL commands 2016-10-24 12:39:08 +02:00
Burak Yucesoy 5a03acf2bf Foreign Constraint Support for create_distributed_table and shard move
With this change, we now push down foreign key constraints created during CREATE TABLE
statements. We also start to send foreign key constraints during shard moves along with
other DDL statements.
2016-10-21 15:38:55 +03:00