Commit Graph

234 Commits (290f9a63019f8597c2d649962d873d04736e13d3)

Author SHA1 Message Date
Brian Cloutier 9f876986e2 Remove unused master_get_round_robin_candidate_nodes 2017-03-07 11:51:24 +03:00
Brian Cloutier c3e9bb880b Remove master_get_local_first_candidate_nodes 2017-03-07 11:50:59 +03:00
Murat Tuncer e718b10ce9 Remove default clause from shard DDL when sequences are used 2017-03-01 17:32:48 +03:00
Marco Slot 29b1fb97c5 Address review feedback in COPY refactoring 2017-02-28 17:39:45 +01:00
Marco Slot ae9d2be84e Add CitusCopyDestReceiver infrastructure 2017-02-28 17:24:45 +01:00
Eren Basak 99ebe06af5 Enforce statement based replication on old APIs and non-hash tables
This change ignores the `citus.replication_model` setting and uses
statement-based replication for

- Tables distributed via the old `master_create_distributed_table` function
- Append- and range-partitioned tables, even if created via the
`create_distributed_table` function

This seems like the easiest solution to #1191 that neither changes existing
behavior nor harms existing users with custom scripts.

This change also prevents replication factors above one for streaming-replicated
tables in `master_create_worker_shards`.

Prior to this change, the `master_create_worker_shards` command did not check
the replication model of the target table, thus allowing replication factors
above one with streaming-replicated tables. With this change,
`master_create_worker_shards` errors out in that case.
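
For illustration, a minimal sketch of the new behavior (table name and error
text hypothetical):

    -- a table whose replication model is streaming can no longer be
    -- combined with a replication factor above one
    SELECT master_create_worker_shards('items', 16, 2);
    -- ERROR (illustrative): replication factor above one is incompatible
    -- with streaming replication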
2017-02-16 10:37:53 -08:00
Brian Cloutier 24654ac7e1 Refactor CheckShardPlacements
- Break CheckShardPlacements into multiple functions (the most important
  is MarkFailedShardPlacements), so that we can get rid of the global
  CoordinatedTransactionUses2PC.
- Call MarkFailedShardPlacements in the router executor, so we mark
  shards as invalid and stop using them while inside transaction blocks.
2017-01-26 13:20:45 +02:00
Marco Slot 8adb9c3ec1 Use coordinator instead of schema node in terminology 2017-01-25 11:07:23 +01:00
Burak Yucesoy 02880a717b Add LoadShardPlacement UDF
This UDF returns a shard placement from the cache, given a shard ID and a
placement ID. At the moment it iterates over all shard placements of the given
shard via ShardPlacementList() and searches for the given placement ID in that
list, which is not a good solution performance-wise. However, this function is
currently used only when there is a failed transaction; if the need arises, we
can optimize it in the future.
2017-01-23 21:04:57 +03:00
Marco Slot cff95c310e Use placement connection API for multi-shard transactions 2017-01-23 18:34:50 +01:00
Andres Freund 970c81f589 Hack up PREPARE/EXECUTE for nearly all distributed queries.
All router-, real-time-, and task-tracker-plannable queries should now have
full prepared statement support (and even use the router when possible),
unless they don't go through the custom plan interface (which
basically just affects LANGUAGE SQL, as opposed to plpgsql, functions).

This is achieved by forcing postgres' planner to always choose a
custom plan, by assigning very low costs to plans with bound
parameters (i.e. ones where the postgres planner replanned the query
upon EXECUTE with all parameter values provided) instead of the
generic one.

This requires some trickery, because for custom plans to work the
costs of a non-custom plan have to be known, which means we can't
error out when planning the generic plan.  Instead we have to return a
"faux" plan that would trigger an error message if executed.  But due to
the custom plan logic, that plan will likely not be executed (unless it
is called from an SQL function, or we can't support the query for some
other reason); instead the custom plan will be chosen.
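
Illustrative usage, with a hypothetical distributed table:

    PREPARE count_by_key(int) AS
        SELECT count(*) FROM items WHERE key = $1;
    EXECUTE count_by_key(42);
    -- each EXECUTE is planned as a "custom" plan with the bound value,
    -- so the router path can be chosen whenever possible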
2017-01-23 09:23:50 -08:00
Andres Freund 67da5611f7 Make router planner error handling more flexible.
So far the router planner encapsulated different functionality in
MultiRouterPlanCreate. Modifications always go through the router, selects
only sometimes. Modifications always error out if the query is unsupported;
selects return NULL.  The error handling especially is a problem for
the upcoming extension of prepared statement support.

Split MultiRouterPlanCreate into CreateRouterPlan and
CreateModifyPlan, and change them to not throw errors.

Instead, errors are now reported by setting the new
MultiPlan->planningError field.

Callers of router planner functionality now have to throw errors
themselves if desired, but can also skip doing so.

This is a pre-requisite for expanding prepared statement support.

While touching all those lines, improve a number of error messages by
getting them closer to the postgres error message guidelines.
2017-01-23 09:23:50 -08:00
Andres Freund 83fe9bf489 Support for deferred error messages.
It can be useful, e.g. in the upcoming prepared statement support, to
be able to return an error from a function that is not raised
immediately but can be thrown later.  That allows us, for example, to
attempt to plan a statement using different methods and to create good
error messages in each planner, but to only error out after all planners
have been run.

To enable that, this adds support for deferred error messages that can
be created (with error code, message, detail, and hint) in one function
and then thrown in a different place.
2017-01-23 09:23:50 -08:00
Jason Petersen b5734eb11f Add replication_model GUC
This adds a replication_model GUC which is used as the replication
model for any new distributed table that is not a reference table.
With this change, tables with replication factor 1 are no longer
implicitly MX tables.

The GUC is similarly respected during empty shard creation for e.g.
existing append-partitioned tables. If the model is set to streaming
while replication factor is greater than one, table and shard creation
routines will error until this invalid combination is corrected.

Changing this parameter requires superuser permissions.
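
A sketch of the GUC in use (the two accepted models are 'statement' and
'streaming'):

    -- opt in to streaming replication for newly distributed tables
    SET citus.replication_model TO 'streaming';
    -- with the combination below, table and shard creation routines
    -- will error out until one of the two settings is corrected
    SET citus.shard_replication_factor TO 2;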
2017-01-23 09:05:14 -07:00
Brian Cloutier e4b65d03a2 Port master_append_table_to_shard to new connection API (#1149)
If any placements fail, shard statistics are not updated on those placements.

A minor enabling refactor: make CoordinatedTransactionUses2PC public (it used to be CoordinatedTransactionUse2PC, but that symbol already existed, so it was renamed as well)
2017-01-23 15:57:44 +02:00
Marco Slot ac919337f1 Add an enable_deadlock_prevention flag to allow router transactions to expand to multiple nodes 2017-01-22 17:31:24 +01:00
Andres Freund a596858463 Remove connection_cache.[ch]. 2017-01-21 09:01:15 -08:00
Andres Freund 0bdb22268f Remove remnants of commit_protocol.[ch]. 2017-01-21 09:01:15 -08:00
Andres Freund 9d3d6a2c22 Consistently use libpq forward declaration in remote_commands.h. 2017-01-21 09:01:14 -08:00
Murat Tuncer 299d002f1c Convert multi copy to use new connection api
This enables proper transactional behaviour for COPY and relaxes some
restrictions, such as combining COPY with single-row modifications. It
also provides the basis for relaxing restrictions further, and for
optionally allowing connection caching.
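
For example, a combination the old connection handling rejected (table
name hypothetical):

    BEGIN;
    INSERT INTO items VALUES (1, 'first');
    COPY items FROM STDIN WITH (FORMAT csv);  -- now legal in the same transaction
    COMMIT;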
2017-01-20 19:15:19 -08:00
Andres Freund 2b58ac24dc Mark some now unnecessarily exposed multi_planner.c functions static. 2017-01-20 12:31:56 -08:00
Andres Freund 2ec404a122 Remove citus.explain_multi_logical/physical_plan.
They make fixing EXPLAIN for prepared statements harder, and they don't
really fit into EXPLAIN in the first place.  Additionally, they're
currently not exercised by any tests.
2017-01-20 12:31:19 -08:00
Metin Doslu 099ef88238 Add a function to delete shard metadata from MX nodes 2017-01-20 14:38:01 +02:00
Metin Doslu 09ca6a464f Refactor get_shard_id_for_distribution_column() and other minor changes 2017-01-20 14:38:01 +02:00
Eren Basak ce8f9d0819 Make `upgrade_to_reference_table` function MX-compatible 2017-01-18 16:49:50 +03:00
Eren Basak fa0b36b28c Propagate new reference table placement metadata on `master_add_node` 2017-01-18 15:59:06 +03:00
Eren Basak 096be1dde5 Add Sequence Support for MX Tables
This change adds support for serial columns to be used with MX tables.
Prior to this change, sequences for serial columns were created on all
workers (so that shards could be created) but never used. With MX, we
need to configure the sequences so that the sequence on each worker
generates unique values. This is done by setting the MINVALUE, MAXVALUE,
and START values of the sequence.
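
A sketch of the resulting per-worker configuration, assuming a scheme
that reserves a disjoint range per node group (sequence name and exact
bounds hypothetical):

    ALTER SEQUENCE items_id_seq
        MINVALUE 562949953421312 MAXVALUE 844424930131967
        START WITH 562949953421312;  -- range reserved for one node group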
2017-01-18 09:43:38 +03:00
Andres Freund 5f4c85f1c4 Query placementId in RemoteFinalizedShardPlacementList().
Not having the ID in the ShardPlacement struct causes issues while
making COPY use the placement-aware connection management.
2017-01-17 13:27:26 -08:00
Brian Cloutier 213c24524d Create ExecuteOptionalRemoteCommand
A small refactor which pulls some code out of `RecoverWorkerTransactions`
and into `remote_commands.c`. This code block currently only occurs in
`RecoverWorkerTransactions` but will be useful to other functions
shortly.

Unfortunately we couldn't call it `ExecuteRemoteCommand`; that name was
already taken.
2017-01-17 17:04:37 +02:00
Andres Freund 9d43ad7b07 Add ShardPlacement fields required for colocated placement connection mapping. 2017-01-16 13:42:54 -08:00
Burak Yucesoy 96b55249a4 Remove placement metadata of reference tables after master_remove_node
With this change, we start to delete the placements of reference tables on a
given worker node after a master_remove_node UDF call. We remove the placement
metadata on the master node but do not drop the actual shard from the worker
node. There are two reasons for that decision: first, it is not critical to
DROP the shards on the workers, because Citus will ignore them as long as the
node is removed from the cluster, and if we add that node back we will DROP and
recreate all reference tables; second, if the node is unreachable, it becomes
complicated to cover failure cases and provide transaction support.
2017-01-16 11:24:56 +03:00
Murat Tuncer 3b9a579a63 Add view support
Enables the use of views within distributed queries.
Users can create and use a view on distributed tables/queries
just as they would with regular queries.

After this change, router queries have full support for views;
INSERT INTO ... SELECT queries support reading from views, but not
writing into them. Outer joins have limited support and error out
in certain cases, such as when a view is on the inner side of the
outer join.

Although PostgreSQL supports writing into views under certain
circumstances, we disallow that for distributed views.
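
For example, with a hypothetical distributed orders table:

    CREATE VIEW recent_orders AS
        SELECT * FROM orders WHERE placed_at > now() - interval '7 days';
    SELECT count(*) FROM recent_orders WHERE customer_id = 42;  -- routable
    INSERT INTO orders_archive SELECT * FROM recent_orders;     -- reading: allowed
    -- INSERT INTO recent_orders ... would error: writing into a view
    -- is disallowed for distributed tables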
2017-01-13 09:39:42 +03:00
Onder Kalaci 119b691eb4 Refactor CheckShardPlacements() and improve support for node removal
This commit refactors CheckShardPlacements() so that it only considers
modifyingConnection. It also skips nodes that have been removed from the
cluster.
2017-01-12 20:10:10 +02:00
Andres Freund ab0629ae4a Cache ShardPlacements in metadata cache.
So far we've reloaded them frequently. Besides avoiding that cost -
noticeable for some workloads with large shard counts - it makes it
easier to add information to ShardPlacements that help us make
placement_connection.c colocation aware.
2017-01-10 18:14:18 -08:00
Andres Freund 5e35f6a5dd Convert router executor to placement connection management infrastructure.
Remove the router specific transaction and shard management, and
replace it with the new placement connection API.  This mostly leaves
behaviour alone, except that it is now, inside a transaction, legal to
select from a shard to which no pre-existing connection exists.

To simplify the code, the handling of task execution for selects and
modifications has been split in two; the previous coding was starting
to get confusing due to the amount of only conditionally applicable code.

Modification connections & transactions are now always established in
parallel, not just for reference tables.
2017-01-09 13:13:02 -08:00
Andres Freund b8a1c0678c Centralized shard/placement connection and state management.
Currently there are several places in citus that map placements to
connections and that manage placement health. Centralize this
knowledge.  Because of the centralized knowledge about which
connection has previously been used for which shard/placement, this
also provides the basis for relaxing restrictions around combining
various forms of DDL/DML.

Connections for a placement can now be acquired using
GetPlacementConnection(). If the connection is used for DML or DDL the
FOR_DDL/DML flags should be used respectively.  If an individual
remote transaction fails (but the transaction on the master succeeds)
and FOR_DDL/DML have been specified, the placement is marked as
invalid, unless that'd mark all placements for a shard as invalid.
2017-01-09 13:13:02 -08:00
Andres Freund c6498cb04e Remove unused LogPreparedTransactions() function.
This is unused since 92c7567008.
2017-01-06 09:15:01 -08:00
Burak Yucesoy 9d756de3ae Replicate reference tables when new node is added
With this change, we start to replicate all reference tables to the new node
when a new node is added to the cluster with the master_add_node command. We
also update the replication factor of the reference tables' colocation group.
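
Usage stays the same; the replication now happens as a side effect of the
call (host name hypothetical):

    SELECT master_add_node('worker-102', 5432);
    -- all existing reference tables are copied to the new node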
2017-01-05 14:30:41 +03:00
Onder Kalaci 0a05f12475 Use 2PC for reference table modification
With this commit, we ensure that the router executor always uses
2PC for reference table modifications and never marks their placements
as INVALID.
2017-01-04 12:46:35 +02:00
Burak Yucesoy 541e45c26e Add upgrade_to_reference_table
With this change we introduce a new UDF, upgrade_to_reference_table, which can
be used to upgrade existing broadcast tables to reference tables. For
upgrading, we require that the given table contain only one shard.
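
Example usage, assuming a single-shard broadcast table named us_states:

    SELECT upgrade_to_reference_table('us_states');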
2017-01-02 17:54:42 +02:00
Eren Basak da3ce88091 Error on Unsupported Features on Workers
This change makes the metadata workers error out on unsupported commands.
2017-01-02 16:03:45 +03:00
Marco Slot 6b7404f59c Use MultiConnection in multi-shard transactions 2016-12-30 14:43:21 -07:00
Metin Doslu 8282fe4af0 Add binary search capability to ShardIndex()
Renamed FindShardIntervalIndex() to ShardIndex() and added binary search
capability. It used to assume that hash-partitioned tables are always
uniformly distributed, which is not true once the upcoming tenant isolation
feature is applied. This commit also reduces code duplication.
2016-12-30 18:55:34 +02:00
Marco Slot e55a27a487 Enable transaction recovery in connection API 2016-12-23 16:14:29 +01:00
Marco Slot 06e3eff3d2 Convert worker_transactions to new connection API 2016-12-23 16:14:29 +01:00
Marco Slot 6ea2cb7c8e Add a wrapper for PQsendQuery 2016-12-23 16:14:29 +01:00
Marco Slot b9cc1d4d2c Connectionapify SendCommandListToWorkerInSingleTransaction 2016-12-23 16:14:29 +01:00
Eren Basak 93bc2c6c12 Handle MX tables on workers during drop table commands 2016-12-23 15:43:32 +03:00
Eren Basak bdf732d115 Propagate `mark_tables_colocated` changes in `pg_dist_partition` table to metadata workers. 2016-12-23 15:43:32 +03:00
Eren Basak 9876e253b7 Propagate DDL commands to metadata workers for MX tables 2016-12-23 15:43:32 +03:00
Eren Basak 3d9540e500 Propagate MX table and shard metadata on `create_distributed_table` call 2016-12-23 15:43:32 +03:00
Marco Slot 9cdea04466 Enable evaluation of stable functions in INSERT..SELECT 2016-12-23 12:47:21 +01:00
Marco Slot f058ba3ec0 Add explicit RelationShards mapping to tasks 2016-12-23 10:23:43 +01:00
Onder Kalaci 807fc1cc28 Reference Table Support - Phase 1
With this commit, we implemented some basic features of reference tables.

To start with, a reference table is
  * a distributed table without a distribution column defined on it,
  * single sharded,
  * with that one shard replicated to all nodes

Reference tables follow the same code path as single-sharded
tables; thus, broadcast JOINs are applicable to reference tables.
But since the table is replicated to all nodes, table fetching is
no longer required.

Reference tables support uniqueness constraints on any column.

Reference tables can be used in INSERT INTO .. SELECT queries with
the following rules:
  * If a reference table is in the SELECT part of the query, it is
    safe to join with another reference table and/or hash-partitioned
    tables.
  * If a reference table is in the INSERT part of the query, all
    other participating tables should be reference tables.

Reference tables follow the regular co-location structure. Since
all reference tables are single sharded and replicated to all nodes,
they are always co-located with each other.

Queries involving only reference tables always follow the router
planner and executor.

Reference tables can have composite-typed columns, and there is no need
to create/define the necessary support functions.

All modification queries, master_* UDFs, EXPLAIN, DDL, TRUNCATE,
sequences, transactions, COPY, and schema support work on reference
tables as expected. Plus, all the prerequisites associated with
distribution columns are waived.
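
A minimal sketch, assuming the create_reference_table() UDF from this
release series (table name hypothetical):

    CREATE TABLE currencies (code char(3) PRIMARY KEY, name text);
    SELECT create_reference_table('currencies');
    -- one shard, placed on every node; the primary key is allowed even
    -- though there is no distribution column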
2016-12-20 14:09:35 +02:00
Eren Basak 5eb90d6d93 Add citus.node_connection_timeout GUC 2016-12-20 14:11:37 +03:00
Murat Tuncer 6c95f0352e Make router planner active at all times
We used to disable the router planner and executor
when the task executor was set to task-tracker.

This change enables router planning and execution
at all times, regardless of task execution mode.

We are introducing a hidden flag, enable_router_execution,
to enable/disable router execution. Its default value is
true. Users may disable router planning by setting it to false.
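
The flag can be toggled like any other GUC:

    SET citus.enable_router_execution TO off;  -- fall back to non-router execution
    SET citus.enable_router_execution TO on;   -- the default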
2016-12-20 11:24:01 +03:00
Jason Petersen 54c2efccc1 Add targeted VACUUM/ANALYZE support
Adds support for VACUUM and ANALYZE commands which target a specific
distributed table. After grabbing the appropriate locks, this
implementation sends VACUUM commands to each placement (using one
connection per placement). These commands are sent in parallel, so users
with large tables will benefit from sharding. Except for VERBOSE, all
VACUUM and ANALYZE options are supported, including the explicit
column list used by ANALYZE.

As with many of our utility commands, the local command also runs. In
the VACUUM/ANALYZE case, the local command is executed before any
remote propagation. Because error handling is managed after local
processing, this can result in a VACUUM completing locally but erroring
out when distributed processing commences: a minor technicality in all
cases, as there isn't really much reason to ever roll back a VACUUM (an
impossibility in any case, as VACUUM cannot run within a transaction).

Remote propagation of targeted VACUUM/ANALYZE is controlled by the
enable_ddl_propagation setting; warnings are emitted if such a command
is attempted when DDL propagation is disabled. Unqualified VACUUM or
ANALYZE is not handled, but a warning message informs the user of this.

Implementation note: this commit adds a "BARE" value to
MultiShardCommitProtocol. When active, no BEGIN command is ever sent to remote
nodes, useful for commands such as VACUUM/ANALYZE which must not run in
a transaction block. This value is not user-facing and is reset at
transaction end.
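
Example invocations against a hypothetical distributed table:

    VACUUM items;                -- propagated to every placement in parallel
    ANALYZE items (key, value);  -- explicit column lists are supported
    -- VACUUM VERBOSE items;     -- VERBOSE is the one unsupported option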
2016-12-16 16:59:06 -07:00
Metin Doslu fc908a3ab6 Refactor distribution column type check for colocation 2016-12-16 15:24:45 +02:00
Metin Doslu d43a01ebae Don't allow tables with different replication models to be colocated 2016-12-16 15:23:49 +02:00
Metin Doslu 4ff550429f Add colocate_with option to create_distributed_table()
With this commit, we support three variants of colocate_with: (i) 'default',
(ii) 'none', and (iii) a specific table name.
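
The three variants in use (table names hypothetical):

    SELECT create_distributed_table('companies', 'id');  -- 'default' group
    SELECT create_distributed_table('campaigns', 'company_id',
                                    colocate_with => 'companies');
    SELECT create_distributed_table('scratch', 'company_id',
                                    colocate_with => 'none');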
2016-12-16 14:53:35 +02:00
Metin Doslu 15aeac0502 Move colocation related functions to colocation_utils.c 2016-12-16 14:52:40 +02:00
Eren Basak 59b95a958d Propagate CREATE SCHEMA commands with the correct AUTHORIZATION clause in start_metadata_sync_to_node 2016-12-14 10:53:12 +03:00
Eren Basak 72e5e826b8 Make start_metadata_sync_to_node UDF to propagate foreign-key constraints 2016-12-14 10:53:12 +03:00
Eren Basak eea27619a0 Make truncate triggers propagated on start_metadata_sync_to_node call 2016-12-14 10:53:10 +03:00
Eren Basak 9c415d4f34 Add start_metadata_sync_to_node UDF
This change adds the `start_metadata_sync_to_node` UDF, which copies the
metadata about nodes and MX tables from the master to the specified worker,
sets its local group ID, and marks its hasmetadata column true so that it can
receive future DDL changes.
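
Example call (host name hypothetical):

    SELECT start_metadata_sync_to_node('worker-101', 5432);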
2016-12-13 10:48:03 +03:00
Andres Freund 90ad02f5fb Integrate router executor into transaction management framework.
One less place managing remote transactions. It also makes it fairly
easy to use 2PC for certain modifications (e.g. reference tables). Just
issue a CoordinatedTransactionUse2PC(). If every placement failure
should cause the whole transaction to abort, additionally mark the
relevant transactions as critical.
2016-12-12 15:18:12 -08:00
Andres Freund 21449ed0a0 Convert multi_shard_transaction.[ch] to new framework. 2016-12-12 15:18:12 -08:00
Andres Freund 7ac4a2bc94 Coordinated remote transaction management. 2016-12-12 15:18:12 -08:00
Andres Freund ccb2a35b6b Add PQgetResult() wrapper handling interrupts.
This makes it possible to implement cancelling queries blocked on
communication with remote nodes.
2016-12-12 15:18:12 -08:00
Andres Freund 3754e74acb Move multi_client_executor.[ch] ontop of connection_management.[ch].
That way connections can be automatically closed after errors and such,
and the connection management infrastructure gets wider testing.  It
also fixes a few issues around connection string building.
2016-12-07 11:44:24 -08:00
Andres Freund c7f19f6c83 Use connection_management.c from within connection_cache.c.
This is a temporary step towards removing connection_cache.c.
2016-12-07 11:44:24 -08:00
Andres Freund 7bc7284b61 Add initial helpers to make interactions with MultiConnection et al. easier.
This includes basic infrastructure for logging of commands sent to
remote/worker nodes. Note that this has no effect as of yet, since no
callers are converted to the new infrastructure.
2016-12-07 11:44:24 -08:00
Andres Freund 8e32951eb9 Centralized Connection Lifetime Management.
Connections are tracked and released by integrating into postgres'
transaction handling. That allows us to use connections without having
to disable interrupts or use PG_TRY/CATCH blocks to avoid leaking
connections.

This is intended to eventually replace multi_client_executor.c and
connection_cache.c, and to provide the basis of a centralized
transaction management.

The newly introduced transaction hook should, in the future, be the only
one in citus, to allow for proper ordering between operations.  For now
this central handler is responsible for releasing connections and
resetting XactModificationLevel after a transaction.
2016-12-07 11:43:18 -08:00
Andres Freund 9564e1e7fc Add some basic helpers to make use of dynahash hashtables easier. 2016-12-06 14:15:36 -08:00
Eren Basak 517db3648a Propagate node add/remove to the nodes with hasmetadata=true
This change propagates the changes done by `master_add_node` and `master_remove_node`
to the workers that contain metadata.
2016-12-02 14:43:32 +03:00
Brian Cloutier 14b47b769c Remove dead code: ResponsiveWorkerNodeList 2016-12-02 13:14:11 +03:00
Marco Slot 1d3caceda4 Use co-located shard ID in multi_shard_transaction 2016-11-02 11:01:19 +01:00
Önder Kalacı 34fa711b14 Always CASCADE while dropping a shard 2016-11-01 10:16:34 +01:00
Metin Doslu 520e7e3cb2 Add mark_tables_colocated() to update colocation groups
Added a new UDF, mark_tables_colocated(), to colocate tables with the same
configuration (shard count, shard replication count and distribution column type).
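
Example call (table names hypothetical):

    SELECT mark_tables_colocated('companies', ARRAY['campaigns', 'ads']);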
2016-10-26 17:29:03 +03:00
Marco Slot 7a606f1c1d Re-acquire metadata locks in RouterExecutorStart 2016-10-26 14:34:59 +02:00
Brian Cloutier c7e722e624 Fix segfault during EXPLAIN EXECUTE
Fix citusdata/citus#886

The way postgres' explain hook is designed means that our hook is never
called during EXPLAIN EXECUTE. So, we special-case EXPLAIN EXECUTE by
catching it in the utility hook.  We then replace the EXECUTE with the
original query and pass it back to Citus.
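
The previously crashing sequence, for illustration (table name hypothetical):

    PREPARE router_count(int) AS SELECT count(*) FROM items WHERE key = $1;
    EXPLAIN EXECUTE router_count(1);  -- now intercepted in the utility hook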
2016-10-26 15:18:42 +03:00
Andres Freund ebbef819f2 Invalidate relcache after pg_dist_shard_placement changes.
This forces prepared statements to be re-planned after changes to the
placement metadata. There are some locking issues remaining, but that's
a separate task.

Also add regression tests verifying that invalidations take effect on
prepared statements.
2016-10-26 03:36:35 -07:00
Onder Kalaci 9e82cd6d2d Feature: INSERT INTO ... SELECT
This commit adds INSERT INTO ... SELECT feature for distributed tables.

We implement INSERT INTO ... SELECT by pushing down the SELECT to
each shard. To compute that, we use the router planner, adding an
"uninstantiated" constraint that the partition column be equal to a
certain value. standard_planner() distributes that constraint to all
the tables to which it knows the restriction can safely be pushed;
one example is tables connected via equi-joins.

The router planner then iterates over the target table's shards; for
each one we replace the "uninstantiated" restriction with one that
PruneShardList() handles, by substituting the partitioning qual
parameter added in multi_planner() with the current shard's actual
boundary values. We also add the current shard's boundary values to the
top-level subquery, to ensure that even if the partitioning qual is
not distributed to all the tables, we never run the query on shards
that don't match the current shard's boundaries. Finally, we perform
normal shard pruning to decide whether to push the query to the
current shard or not.

We do not support certain SQLs on the subquery, which are described/commented
on ErrorIfInsertSelectQueryNotSupported().

We also added some locking in the router executor. When an INSERT/SELECT
command runs on a distributed table with replication factor >1, we need to
ensure that it sees the same result on each placement of a shard. So the
router executor now takes exclusive locks on the shards from which the SELECT
in an INSERT/SELECT reads, in order to prevent concurrent changes. This is
not an optimal solution, but it is simple and correct. The
citus.all_modifications_commutative setting can be used to avoid this
aggressive locking: an INSERT/SELECT whose filters are known to exclude any
ongoing writes can be marked as commutative. See RequiresConsistentSnapshot()
for the details.

We also moved the decision of whether the multiPlan should be executed by
the router executor or not to the planning phase. This allowed us to
integrate multi-task router executor tasks into the router executor smoothly.
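
A sketch of a query this enables, assuming both tables are hash
distributed on tenant_id (names hypothetical):

    INSERT INTO daily_totals (tenant_id, day, total)
    SELECT tenant_id, created_at::date, sum(amount)
    FROM events
    GROUP BY tenant_id, created_at::date;
    -- the SELECT is pushed down to each shard of events, with the
    -- partition qual instantiated per target shard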
2016-10-26 10:01:00 +03:00
Onder Kalaci 75dd6ee3ab Add ability to reorder target list for INSERT/SELECT queries
The necessity for this functionality comes from the fact that ruleutils.c is
not supposed to be used on "rewritten" queries (i.e. ones that have been
passed through QueryRewrite()). Query rewriting is the process in which views
and such are expanded, and INSERT/UPDATE target lists are reordered to match
the physical order, defaults are filled in, etc. For the details of
reordering, see transformInsertRow().
2016-10-26 10:00:03 +03:00
Marco Slot 914100cfbe Parallelise DDL commands 2016-10-24 12:39:08 +02:00
Burak Yucesoy c7414c3af2 Foreign Constraint Support for create_distributed_table and shard move
With this change, we now push down foreign key constraints created during
CREATE TABLE statements. We also start to send foreign key constraints during
shard moves, along with other DDL statements.
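
A sketch of a now-supported definition, assuming the two tables end up
co-located on the referencing key (names hypothetical):

    CREATE TABLE customers (id bigint PRIMARY KEY);
    SELECT create_distributed_table('customers', 'id');
    CREATE TABLE orders (id bigint,
                         customer_id bigint REFERENCES customers (id));
    SELECT create_distributed_table('orders', 'customer_id');
    -- the REFERENCES clause is pushed down to each pair of co-located shards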
2016-10-21 15:38:55 +03:00
Metin Doslu 7baf72cc86 Final refactoring 2016-10-20 11:29:11 +03:00
Metin Doslu 16feabec95 Change return type of BuildDistributionKeyFromColumnName() to Var *
BuildDistributionKeyFromColumnName() always returns a Var pointer, so there is
no reason to return a Node pointer instead of a Var pointer.
2016-10-20 10:59:31 +03:00
Metin Doslu cbf5f05c86 Convert colocationid to uint32 2016-10-20 10:59:31 +03:00
Metin Doslu 7586e567b7 Add local function GetNextShardId() 2016-10-20 10:59:31 +03:00
Metin Doslu 31f08f8377 Add create_distributed_table()
create_distributed_table() creates a hash distributed table with default values
of shard count and shard replication factor.
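
Those defaults come from GUCs (the shard count GUC is added in the
adjacent commit below; table name hypothetical):

    SET citus.shard_count TO 32;
    SET citus.shard_replication_factor TO 2;
    SELECT create_distributed_table('items', 'key');  -- 32 shards, 2 replicas each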
2016-10-20 10:58:25 +03:00
Metin Doslu 39ddf36084 Add guc variable for shard count 2016-10-19 10:44:50 +03:00
Marco Slot 06e790f420 Parallelise master_modify_multiple_shards 2016-10-19 08:33:08 +02:00
Marco Slot a72eed5aed Move requiresMasterEvaluation from Task to Job 2016-10-19 08:23:06 +02:00
Andres Freund 0e02b838a3 Support PostgreSQL 9.6
Adds support for PostgreSQL 9.6 by copying in the requisite ruleutils
file and refactoring the out/readfuncs code to flexibly support the
old-style copy/pasted out/readfuncs (prior to 9.6) or use extensible
node APIs (in 9.6 and higher).

Most version-specific code within this change is only needed to set new
fields in the AggRef nodes we build for aggregations. Version-specific
test output files were added in certain cases, though in most they were
not necessary. Each such file begins by e.g. printing the major version
in order to clarify its purpose.

The comment atop citus_nodes.h details how to add support for new nodes
for when that becomes necessary.
2016-10-18 16:23:55 -06:00
Eren Basak e31830f3fb Add worker transaction and transaction recovery infrastructure 2016-10-18 14:18:14 +03:00
Eren Basak cb1d9cba5e Add hasmetadata column to pg_dist_node 2016-10-17 11:52:18 +03:00
Eren Basak c3107b1315 Add metadata infrastructure for pg_dist_local_group table 2016-10-17 11:52:18 +03:00
Brian Cloutier 84c4ba1073 Drop shardalias 2016-10-14 11:03:26 +03:00
Metin Doslu 827d1ddb75 Add HAVING support
This commit completes HAVING support in Citus by adding it for the
real-time and task-tracker executors. Multiple regression tests are added
to cover newly supported queries with HAVING.
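
Queries like the following now work through the real-time and
task-tracker executors (schema hypothetical):

    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
    HAVING sum(amount) > 1000;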
2016-10-13 15:47:53 +03:00