Commit Graph

96 Commits (da3ce88091db81a208ec08a0788dd30b79925022)

Author SHA1 Message Date
Eren Basak da3ce88091 Error on Unsupported Features on Workers
This change makes the metadata workers error out on unsupported commands.
2017-01-02 16:03:45 +03:00
Metin Doslu 8282fe4af0 Add binary search capability to ShardIndex()
Renamed FindShardIntervalIndex() to ShardIndex() and added binary search
capability. It used to assume that hash partitioned tables are always
uniformly distributed, which is not true if the upcoming tenant isolation
feature is applied. This commit also reduces code duplication.
2016-12-30 18:55:34 +02:00
Marco Slot 06e3eff3d2 Convert worker_transactions to new connection API 2016-12-23 16:14:29 +01:00
Eren Basak bdf732d115 Propagate `mark_tables_colocated` changes in `pg_dist_partition` table to metadata workers. 2016-12-23 15:43:32 +03:00
Eren Basak 9876e253b7 Propagate DDL commands to metadata workers for MX tables 2016-12-23 15:43:32 +03:00
Marco Slot 9cdea04466 Enable evaluation of stable functions in INSERT..SELECT 2016-12-23 12:47:21 +01:00
Marco Slot f058ba3ec0 Add explicit RelationShards mapping to tasks 2016-12-23 10:23:43 +01:00
Marco Slot 483648a4a4 Add shard locking UDFs 2016-12-22 11:04:34 +01:00
Burak Yücesoy f0e9f132c8 Add get_distribution_value_shardid UDF (#1048)
* Add get_distribution_value_shardid UDF

With this UDF, users can now map a given distribution value to a shard id. We mostly hide
shard ids from users to prevent unnecessary complexity, but some power users might need
to know which entry/value is stored in which shard for maintenance purposes.

The signature of this UDF is as follows:

bigint get_distribution_value_shardid(table_name regclass, distribution_value anyelement)
2016-12-22 12:17:08 +03:00
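A hedged usage sketch of the UDF above (the table name and value are hypothetical,
and the distribution column is assumed to be an int):

    -- Find which shard stores the row(s) with distribution value 42.
    SELECT get_distribution_value_shardid('events'::regclass, 42);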
Onder Kalaci 807fc1cc28 Reference Table Support - Phase 1
With this commit, we implemented some basic features of reference tables.

To start with, a reference table is
  * a distributed table without a distribution column defined on it,
  * single sharded,
  * with its single shard replicated to all nodes.

Reference tables follow the same code path as single-sharded
tables. Thus, broadcast JOINs are applicable to reference tables.
But since the table is replicated to all nodes, table fetching is
no longer required.

Reference tables support uniqueness constraints on any column.

Reference tables can be used in INSERT INTO .. SELECT queries with
the following rules:
  * If a reference table is in the SELECT part of the query, it is
    safe to join it with other reference tables and/or hash partitioned
    tables.
  * If a reference table is in the INSERT part of the query, all
    other participating tables should be reference tables.

Reference tables follow the regular co-location structure. Since
all reference tables are single sharded and replicated to all nodes,
they are always co-located with each other.

Queries involving only reference tables always follow the router planner
and executor.

Reference tables can have composite typed columns and there is no need
to create/define the necessary support functions.

All modification queries, master_* UDFs, EXPLAIN, DDL commands, TRUNCATE,
sequences, transactions, COPY, and schema support work on reference
tables as expected. Plus, all the prerequisites associated with
distribution columns no longer apply.
2016-12-20 14:09:35 +02:00
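A minimal sketch of the INSERT INTO .. SELECT rules above (all table names are
hypothetical; 'nations' and 'nations_backup' are assumed to be reference tables,
'orders' and 'order_counts' co-located hash partitioned tables):

    -- Safe: the reference table appears only in the SELECT part.
    INSERT INTO order_counts
    SELECT o.customer_id, n.name, count(*)
    FROM orders o JOIN nations n ON o.nation_id = n.id
    GROUP BY o.customer_id, n.name;

    -- Safe: a reference table is the INSERT target, and all other
    -- participating tables are reference tables too.
    INSERT INTO nations_backup SELECT * FROM nations;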
Metin Doslu fc908a3ab6 Refactor distribution column type check for colocation 2016-12-16 15:24:45 +02:00
Metin Doslu d43a01ebae Don't allow tables with different replication models to be colocated 2016-12-16 15:23:49 +02:00
Metin Doslu 4ff550429f Add colocate_with option to create_distributed_table()
With this commit, we support three options for colocate_with: (i) default, (ii) none,
and (iii) a specific table name.
2016-12-16 14:53:35 +02:00
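Hedged usage sketches of the three options (table and column names are
hypothetical; the named-argument call form is an assumption):

    -- (i) default: join the default colocation group for this configuration.
    SELECT create_distributed_table('events', 'tenant_id');

    -- (ii) none: start a new colocation group.
    SELECT create_distributed_table('logs', 'tenant_id', colocate_with => 'none');

    -- (iii) a specific table name: co-locate with an existing distributed table.
    SELECT create_distributed_table('clicks', 'tenant_id', colocate_with => 'events');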
Metin Doslu 15aeac0502 Move colocation related functions to colocation_utils.c 2016-12-16 14:52:40 +02:00
Andres Freund c7f19f6c83 Use connection_management.c from within connection_cache.c.
This is a temporary step towards removing connection_cache.c.
2016-12-07 11:44:24 -08:00
Andres Freund 7bc7284b61 Add initial helpers to make interactions with MultiConnection et al. easier.
This includes basic infrastructure for logging of commands sent to
remote/worker nodes. Note that this has no effect as of yet, since no
callers are converted to the new infrastructure.
2016-12-07 11:44:24 -08:00
Andres Freund 8e32951eb9 Centralized Connection Lifetime Management.
Connections are tracked and released by integrating into postgres'
transaction handling. That allows using connections without having
to resort to disabling interrupts or using PG_TRY/CATCH blocks
to avoid leaking connections.

This is intended to eventually replace multi_client_executor.c and
connection_cache.c, and to provide the basis of a centralized
transaction management.

The newly introduced transaction hook should, in the future, be the only
one in Citus, to allow for proper ordering between operations. For now
this central handler is responsible for releasing connections and
resetting XactModificationLevel after a transaction.
2016-12-07 11:43:18 -08:00
Andres Freund 9564e1e7fc Add some basic helpers to make use of dynahash hashtables easier. 2016-12-06 14:15:36 -08:00
Marco Slot 6cff558896 Use READ_UINT64_FIELD for placement ID in ReadShardPlacement 2016-12-05 17:22:23 +01:00
Eren Basak 517db3648a Propagate node add/remove to the nodes with hasmetadata=true
This change propagates the changes done by `master_add_node` and `master_remove_node`
to the workers that contain metadata.
2016-12-02 14:43:32 +03:00
Murat Tuncer 9bc833f335 Fix failures during pg_upgrade
- fix error in CitusHasBeenLoaded()
- allow creation of pg_catalog tables during upgrade
2016-11-11 17:22:45 -08:00
Marco Slot 1d3caceda4 Use co-located shard ID in multi_shard_transaction 2016-11-02 11:01:19 +01:00
Metin Doslu 16218413d0 Error on different shard placement count
In ErrorIfShardPlacementsNotColocated(), while checking if shards are colocated,
error out if matching shard intervals have different numbers of shard placements.
2016-10-26 18:46:05 +03:00
Metin Doslu 520e7e3cb2 Add mark_tables_colocated() to update colocation groups
Added a new UDF, mark_tables_colocated(), to colocate tables with the same
configuration (shard count, shard replication count and distribution column type).
2016-10-26 17:29:03 +03:00
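A hedged usage sketch (table names hypothetical; the exact argument shape is an
assumption, since the message does not give the signature):

    -- Mark two tables as co-located with 'events'; all three must share
    -- shard count, shard replication count, and distribution column type.
    SELECT mark_tables_colocated('events', ARRAY['events_2016', 'events_2017']);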
Marco Slot 7a606f1c1d Re-acquire metadata locks in RouterExecutorStart 2016-10-26 14:34:59 +02:00
Andres Freund ebbef819f2 Invalidate relcache after pg_dist_shard_placement changes.
This forces prepared statements to be re-planned after changes of the
placement metadata. There are some locking issues remaining, but that's
a separate task.

Also add regression tests verifying that invalidations take effect on
prepared statements.
2016-10-26 03:36:35 -07:00
Onder Kalaci 9e82cd6d2d Feature: INSERT INTO ... SELECT
This commit adds INSERT INTO ... SELECT feature for distributed tables.

We implement INSERT INTO ... SELECT by pushing down the SELECT to
each shard. To compute that, we use the router planner, adding
an "uninstantiated" constraint that the partition column be equal to a
certain value. standard_planner() distributes that constraint to all
the tables to which it knows how to push the restriction safely; for
example, tables that are connected via equi-joins.

The router planner then iterates over the target table's shards;
for each one, we replace the "uninstantiated" restriction with one that
PruneShardList() can handle, by replacing the partitioning qual
parameter added in multi_planner() with the current shard's
actual boundary values. We also add the current shard's boundary values to the
top-level subquery to ensure that, even if the partitioning qual is
not distributed to all the tables, we never run the queries on shards
that don't match the current shard boundaries. Finally, we perform the
normal shard pruning to decide whether to push the query to the
current shard or not.

We do not support certain SQL constructs in the subquery; these are
described in the comments of ErrorIfInsertSelectQueryNotSupported().

We also added some locking to the router executor. When an INSERT/SELECT command
runs on a distributed table with replication factor >1, we need to ensure that
it sees the same result on each placement of a shard. So the router executor now
takes exclusive locks on the shards from which the SELECT in an INSERT/SELECT
reads, in order to prevent concurrent changes. This is not an optimal
solution, but it's simple and correct. The
citus.all_modifications_commutative setting can be used to avoid this aggressive locking:
an INSERT/SELECT whose filters are known to exclude any ongoing writes can be
marked as commutative. See RequiresConsistentSnapshot() for the details.

We also moved the decision of whether the multiPlan should be executed by
the router executor to the planning phase. This allowed us to
integrate multi-task router executor tasks into the router executor smoothly.
2016-10-26 10:01:00 +03:00
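A minimal sketch of a query this feature enables, assuming hypothetical tables
'events' and 'daily_totals' that are both hash partitioned on tenant_id:

    -- The SELECT is pushed down to each shard; grouping by the partition
    -- column keeps the computation local to co-located shards.
    INSERT INTO daily_totals (tenant_id, day, total)
    SELECT tenant_id, date_trunc('day', created_at), count(*)
    FROM events
    GROUP BY tenant_id, date_trunc('day', created_at);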
Brian Cloutier c9809da4d7 Fix crash when upgrading to Citus 6
Between a restart (running the new code) and ALTER EXTENSION citus
UPDATE there was an inconsistency where we assumed that
pg_dist_partition had the repmodel column set. Now we give it a default
value if the column doesn't exist yet.
2016-10-24 15:18:29 +03:00
Burak Yucesoy c7414c3af2 Foreign Constraint Support for create_distributed_table and shard move
With this change, we now push down foreign key constraints created during CREATE TABLE
statements. We also start sending foreign key constraints during shard moves along with
other DDL statements.
2016-10-21 15:38:55 +03:00
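A hedged sketch of the CREATE TABLE case (tables hypothetical; both tables are
assumed to be distributed and co-located on customer_id):

    CREATE TABLE customers (customer_id bigint PRIMARY KEY);
    CREATE TABLE orders (
        order_id bigint,
        -- foreign key defined during CREATE TABLE is pushed down to the shards
        customer_id bigint REFERENCES customers (customer_id)
    );
    SELECT create_distributed_table('customers', 'customer_id');
    SELECT create_distributed_table('orders', 'customer_id');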
Metin Doslu 16feabec95 Change return type of BuildDistributionKeyFromColumnName() to Var *
BuildDistributionKeyFromColumnName() always returns a Var pointer, so there is
no reason to return a Node pointer instead of a Var pointer.
2016-10-20 10:59:31 +03:00
Metin Doslu cbf5f05c86 Convert colocationid to uint32 2016-10-20 10:59:31 +03:00
Metin Doslu 31f08f8377 Add create_distributed_table()
create_distributed_table() creates a hash distributed table with default values
of shard count and shard replication factor.
2016-10-20 10:58:25 +03:00
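A hedged usage sketch (table and column hypothetical):

    CREATE TABLE events (tenant_id int, payload jsonb);
    -- Hash distributes 'events' on tenant_id using the default shard count
    -- and shard replication factor.
    SELECT create_distributed_table('events', 'tenant_id');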
Marco Slot 06e790f420 Parallelise master_modify_multiple_shards 2016-10-19 08:33:08 +02:00
Marco Slot a72eed5aed Move requiresMasterEvaluation from Task to Job 2016-10-19 08:23:06 +02:00
Andres Freund 0e02b838a3 Support PostgreSQL 9.6
Adds support for PostgreSQL 9.6 by copying in the requisite ruleutils
file and refactoring the out/readfuncs code to flexibly support the
old-style copy/pasted out/readfuncs (prior to 9.6) or use extensible
node APIs (in 9.6 and higher).

Most version-specific code within this change is only needed to set new
fields in the AggRef nodes we build for aggregations. Version-specific
test output files were added in certain cases, though in most they were
not necessary. Each such file begins by e.g. printing the major version
in order to clarify its purpose.

The comment atop citus_nodes.h details how to add support for new nodes
for when that becomes necessary.
2016-10-18 16:23:55 -06:00
Eren Basak e31830f3fb Add worker transaction and transaction recovery infrastructure 2016-10-18 14:18:14 +03:00
Eren Basak cb1d9cba5e Add hasmetadata column to pg_dist_node 2016-10-17 11:52:18 +03:00
Eren Basak c3107b1315 Add metadata infrastructure for pg_dist_local_group table 2016-10-17 11:52:18 +03:00
Metin Doslu 827d1ddb75 Add HAVING support
This commit completes HAVING support in Citus by adding HAVING support for the
real-time and task-tracker executors. Multiple tests are added to the regression
suite to cover newly supported queries.
2016-10-13 15:47:53 +03:00
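A hedged sketch of a query this enables (table hypothetical):

    -- HAVING over an aggregate on a distributed table.
    SELECT tenant_id, count(*)
    FROM events
    GROUP BY tenant_id
    HAVING count(*) > 100;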
Eren Basak 6cb3ae93ba Add Metadata Snapshot Infrastructure
This change adds the required metadata snapshot infrastructure from the MX
codebase into Citus, mainly the metadata_sync.c file and the master_metadata_snapshot UDF.
2016-10-13 10:40:14 +03:00
Andres Freund 5de52c3b04 Introduce placement IDs.
So far placements were assigned an Oid, but that was just used to track
insertion order. It also did so incompletely, as it was not preserved
across changes of the shard state. The behaviour around oid wraparound
was also not entirely as intended.

The newly introduced, explicitly assigned, IDs are preserved across
shard-state changes.

The prime goal of this change is not to improve ordering of task
assignment policies, but to make it easier to reference shards.  The
newly introduced UpdateShardPlacementState() makes use of that, and so
will the in-progress connection and transaction management changes.
2016-10-07 11:59:20 -07:00
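A hedged illustration via the placement metadata (the catalog and column names
are assumptions based on the surrounding commits):

    -- Placement IDs, unlike the old oids, survive shard-state changes.
    SELECT placementid, shardid, shardstate, nodename, nodeport
    FROM pg_dist_shard_placement
    ORDER BY placementid;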
Brian Cloutier 62e7bdbdd6 Switch from pg_worker_list.conf file to pg_dist_node metadata table.
Related to #786

This change adds the `pg_dist_node` table that contains the information
about the workers in the cluster, replacing the previously used
`pg_worker_list.conf` file (or the one specified with `citus.worker_list_file`).

Upon update, the `pg_worker_list.conf` file is read and the `pg_dist_node` table is
populated with the file's content. After that, the `pg_worker_list.conf` file
is renamed to `pg_worker_list.conf.obsolete`.

For adding and removing nodes, the change also includes two new UDFs:
`master_add_node` and `master_remove_node`, which require superuser
permissions.

The `citus.worker_list_file` GUC is kept for update purposes but is not used after the
update is finished.
2016-10-05 13:01:35 +03:00
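Hedged usage sketches of the new UDFs (host name and port hypothetical; both
calls require superuser permissions, per the message):

    SELECT master_add_node('worker-101', 5432);
    SELECT * FROM pg_dist_node;   -- the added worker now appears here
    SELECT master_remove_node('worker-101', 5432);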
Marco Slot 31ab616b31 Add replication model column to pg_dist_partition 2016-10-05 01:14:28 +02:00
Onder Kalaci 06217bade0 Update ColocatedShardPlacementList() function name to
ColocatedShardIntervalList(), which was the intended name.
2016-10-04 09:51:42 +03:00
Jason Petersen 44c7626f0c Update ruleutils_95 with latest PostgreSQL changes
Hand-applied changes from a diff I generated between 9.5.0 and 9.5.4.
2016-09-29 15:54:38 -06:00
Burak Yucesoy 1a618b9c43 Internal co-location API
With this commit we introduce an internal API for co-location related operations.
2016-09-29 11:56:53 +03:00
Jason Petersen f9e63097c9 Fix unique-violation-in-xact segfault
An interaction between ReraiseRemoteError and DML transaction support
causes segfaults:

  * ReraiseRemoteError calls PurgeConnection, freeing a connection...
  * That connection is still in the xactParticipantHash

At transaction end, the memory in the freed connection might happen to
pass the "is this connection OK?" check, causing us to try to send an
ABORT over that connection. By removing it from the transaction hash
before calling ReraiseRemoteError, we avoid this possibility.
2016-09-27 16:44:03 -06:00
Murat Tuncer a342cacfc4 Address feedback 2016-09-26 18:23:42 -06:00
Jason Petersen b00b15a718 Permit multiple DDL commands in a transaction
Three changes here to get to true multi-statement, multi-relation DDL
transactions (the same functionality as pre-5.2, with the benefit of atomicity):

    1. Changed the multi-shard utility hook to always run (consistency
       with the router executor hook; removed the ad-hoc "installed" boolean)

    2. Changed the global connection list in multi_shard_transaction to
       instead be a hash; updated related functions to operate on the global
       hash instead of a local hash/global list

    3. Removed the check within DDL code that prevented subsequent DDL commands;
       placed an unset/reset guard around the call to ConnectToNode to permit
       connecting to additional nodes after a DDL transaction has begun

In addition, code has been added to raise an error if a ROLLBACK TO
SAVEPOINT is attempted (similar to router executor), and comprehensive
tests execute all multi-DDL scenarios (full success, user ROLLBACK, any
actual errors (say, duplicate index), partial failure (duplicate index
on one node but not others), partial COMMIT (one node fails), and 2PC
partial PREPARE (one node fails)). Interleavings with other commands
(DML, \copy) are similarly all covered.
2016-09-08 22:35:55 -05:00
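A hedged sketch of a transaction this permits (objects hypothetical):

    BEGIN;
    ALTER TABLE events ADD COLUMN source text;
    CREATE INDEX events_source_idx ON events (source);
    ALTER TABLE clicks ADD COLUMN source text;  -- second relation, same transaction
    COMMIT;
    -- Note: ROLLBACK TO SAVEPOINT inside such a transaction raises an error.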
Eric B. Ridge 361e37f921 Add syscols in queries; extend relnames in indexes
To permit use with ZomboDB (https://github.com/zombodb/zombodb), two
changes were necessary:

  1. Permit use of `tableoid` system column in queries
  2. Extend relation names appearing in index expressions

The first is accomplished by simply changing the deparse logic to allow
system columns in queries destined for distributed tables. The latter
was slightly more complex, given that DDL extension currently occurs on
workers. But since indexes cannot reference tables other than the one
being indexed, it is safe to look for any relation reference ending in
a '*' character and extend its penultimate segment with a shard id.

This change also adds an error to prevent users from distributing any
relations using the WITH (OIDS) feature, which is unsupported.
2016-09-07 11:54:55 -05:00
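A hedged sketch of the first change (table and columns hypothetical):

    -- The tableoid system column may now appear in queries against
    -- distributed tables.
    SELECT tableoid, event_id FROM events;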