Commit Graph

194 Commits (5fcddfa2c63ae5d5c60ee7978b4aec63e9a63b80)

Author SHA1 Message Date
Metin Doslu 7cff8719c2 Add worker_hash() and a stub for isolate_tenant_to_new_shard() 2017-01-20 14:38:01 +02:00
Burak Yucesoy 3315ae6142 Remove placement metadata of reference tables after master_remove_node
With this change, we start to delete the placements of reference tables at the given worker node
after a master_remove_node UDF call. We remove the placement metadata at the master node but we do
not drop the actual shard from the worker node. There are two reasons for that decision:
first, it is not critical to DROP the shards on the workers because Citus will ignore them
as long as the node is removed from the cluster, and if we add that node back to the cluster we will
DROP and recreate all reference tables. Second, if the node is unreachable, it becomes
complicated to cover failure cases and provide transaction support.
2017-01-16 11:24:56 +03:00
Murat Tuncer 77f8db6b14 Add view support
Enables using views within distributed queries.
Users can create and use a view on distributed tables/queries
just as they would with regular queries.

After this change, router queries have full support for views, and
INSERT INTO ... SELECT queries support reading from views, but not
writing into them. Outer joins have limited support and error out in
certain cases, such as when a view is on the inner side of the
outer join.

Although PostgreSQL supports writing into views under certain circumstances,
we disallow that for distributed views.
2017-01-13 09:39:42 +03:00
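
A minimal sketch of the usage described above, assuming the create_distributed_table UDF is available; table, view, and column names are illustrative:
```
-- hypothetical hash-distributed table
CREATE TABLE events (tenant_id int, event_type text, created_at timestamptz);
SELECT create_distributed_table('events', 'tenant_id');

-- a view over the distributed table is created and used like a regular view
CREATE VIEW recent_events AS
    SELECT tenant_id, event_type
    FROM events
    WHERE created_at > now() - interval '7 days';

-- router queries can read from the view; writing into it is disallowed
SELECT count(*) FROM recent_events WHERE tenant_id = 42;
```
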
Burak Yucesoy 9c9f479e4b Replicate reference tables when new node is added
With this change, we start to replicate all reference tables to the new node when a node
is added to the cluster with the master_add_node command. We also update the replication factor
of the reference tables' colocation group.
2017-01-05 14:30:41 +03:00
Burak Yucesoy 1d18950860 Modify tests to create clean workspace
Since we now replicate reference tables each time we add a node, we need to ensure
that the test space is clean in terms of reference tables before any add-node operation.
For this purpose we had to change the order of the multi_drop_extension test, which caused
some of the colocation ids to change.
2017-01-05 12:22:44 +03:00
Burak Yucesoy 31cd2357fe Add upgrade_to_reference_table
With this change we introduce a new UDF, upgrade_to_reference_table, which can be used to
upgrade existing broadcast tables to reference tables. For upgrading, we require that the given
table contains only one shard.
2017-01-02 17:54:42 +02:00
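
A sketch of calling the new UDF, assuming 'rates' is an existing single-shard broadcast table (the name is a placeholder):
```
-- converts the single-shard table into a reference table
SELECT upgrade_to_reference_table('rates');
```
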
Eren Basak 7e09bd6836 Error on Unsupported Features on Workers
This change makes the metadata workers error out on unsupported commands.
2017-01-02 16:03:45 +03:00
Burak Yucesoy 0851fd2f0b GRANT SELECT access for metadata tables to public
Previously, we errored out if a non-superuser tried to run a SELECT query on some metadata tables. It
seems that we already GRANT SELECT access on some metadata tables but not others. With
this change, we GRANT SELECT access to all existing Citus metadata tables.
2016-12-23 16:32:47 +03:00
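
The grants described above take roughly this form (an abbreviated, illustrative list; the commit covers all existing metadata tables):
```
GRANT SELECT ON pg_catalog.pg_dist_partition TO public;
GRANT SELECT ON pg_catalog.pg_dist_shard TO public;
GRANT SELECT ON pg_catalog.pg_dist_shard_placement TO public;
```
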
Eren Basak efcb1f9dd9 Rename multi_metadata_snapshot to multi_metadata_sync to make it include future mx metadata syncing regression tests 2016-12-23 15:43:32 +03:00
Onder Kalaci 9f0bd4cb36 Reference Table Support - Phase 1
With this commit, we implemented some basic features of reference tables.

To start with, a reference table is
  * a distributed table without a distribution column defined on it
  * single sharded
  * and its shard is replicated to all nodes

Reference tables follow the same code path as single-sharded
tables. Thus, broadcast JOINs are applicable to reference tables.
But since the table is replicated to all nodes, table fetching is
no longer required.

Reference tables support uniqueness constraints on any column.

Reference tables can be used in INSERT INTO .. SELECT queries with
the following rules:
  * If a reference table is in the SELECT part of the query, it is
    safe to join it with another reference table and/or hash partitioned
    tables.
  * If a reference table is in the INSERT part of the query, all
    other participating tables should be reference tables.

Reference tables follow the regular co-location structure. Since
all reference tables are single sharded and replicated to all nodes,
they are always co-located with each other.

Queries involving only reference tables always follow the router planner
and executor.

Reference tables can have composite typed columns, and there is no need
to create/define the necessary support functions.

All modification queries, master_* UDFs, EXPLAIN, DDLs, TRUNCATE,
sequences, transactions, COPY, and schema support work on reference
tables as expected. Plus, all the prerequisites associated with
distribution columns are waived.
2016-12-20 14:09:35 +02:00
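
A sketch of the INSERT INTO ... SELECT rules above, assuming the create_reference_table and create_distributed_table UDFs are available; table and column names are illustrative:
```
-- reference table: one shard, replicated to every node
CREATE TABLE countries (code text PRIMARY KEY, name text);
SELECT create_reference_table('countries');

-- hash-distributed tables, co-located with each other
CREATE TABLE users (user_id bigint, country_code text);
SELECT create_distributed_table('users', 'user_id');
CREATE TABLE user_profiles (user_id bigint, country_name text);
SELECT create_distributed_table('user_profiles', 'user_id');

-- reference table in the SELECT part: safe to join with a hash partitioned table
INSERT INTO user_profiles (user_id, country_name)
SELECT u.user_id, c.name
FROM users u JOIN countries c ON (u.country_code = c.code);
```
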
Onder Kalaci 1673ea937c Feature: INSERT INTO ... SELECT
This commit adds INSERT INTO ... SELECT feature for distributed tables.

We implement INSERT INTO ... SELECT by pushing down the SELECT to
each shard. To compute that we use the router planner, by adding
an "uninstantiated" constraint that the partition column be equal to a
certain value. standard_planner() distributes that constraint to all
the tables to which it knows how to safely push the restriction. An example
is tables that are connected via equi-joins.

The router planner then iterates over the target table's shards; for each
shard we replace the "uninstantiated" restriction with one that
PruneShardList() can handle, by replacing the partitioning qual
parameter added in multi_planner() with the current shard's
actual boundary values. We also add the current shard's boundary values to the
top-level subquery to ensure that even if the partitioning qual is
not distributed to all the tables, we never run the queries on shards
that don't match the current shard boundaries. Finally, we perform the
normal shard pruning to decide whether or not to push the query to the
current shard.

We do not support certain SQLs on the subquery, which are described/commented
on ErrorIfInsertSelectQueryNotSupported().

We also added some locking to the router executor. When an INSERT/SELECT command
runs on a distributed table with replication factor > 1, we need to ensure that
it sees the same result on each placement of a shard. So the router executor now
takes exclusive locks on the shards from which the SELECT in an INSERT/SELECT
reads, in order to prevent concurrent changes. This is not an optimal solution,
but it is simple and correct. The
citus.all_modifications_commutative setting can be used to avoid aggressive locking:
an INSERT/SELECT whose filters are known to exclude any ongoing writes can be
marked as commutative. See RequiresConsistentSnapshot() for the details.

We also moved the decision of whether the multiPlan should be executed by
the router executor to the planning phase. This allowed us to
integrate multi-task router executor tasks into the router executor smoothly.
2016-10-26 10:01:00 +03:00
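
A sketch of the pushdown described above, assuming two co-located hash-distributed tables created with the create_distributed_table UDF; names are illustrative:
```
CREATE TABLE raw_events (user_id bigint, value int, created_at timestamptz);
SELECT create_distributed_table('raw_events', 'user_id');
CREATE TABLE daily_totals (user_id bigint, day date, total bigint);
SELECT create_distributed_table('daily_totals', 'user_id');

-- the SELECT is pushed down to each shard; the target's partition column
-- (user_id) is taken directly from the source's partition column
INSERT INTO daily_totals (user_id, day, total)
SELECT user_id, created_at::date, count(*)
FROM raw_events
GROUP BY user_id, created_at::date;
```
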
Onder Kalaci e0d83d65af Add ability to reorder target list for INSERT/SELECT queries
The necessity for this functionality comes from the fact that ruleutils.c is not supposed to be
used on "rewritten" queries (i.e. ones that have been passed through QueryRewrite()).
Query rewriting is the process in which views and the like are expanded,
INSERT/UPDATE target lists are reordered to match the physical column order,
defaults are filled in, etc. For the details of the reordering, see transformInsertRow().
2016-10-26 10:00:03 +03:00
Burak Yucesoy 5a03acf2bf Foreign Constraint Support for create_distributed_table and shard move
With this change, we now push down foreign key constraints created during CREATE TABLE
statements. We also start to send foreign key constraints during shard moves along with
other DDL statements.
2016-10-21 15:38:55 +03:00
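
A sketch of a foreign key that can be pushed down under this change, assuming both tables are hash-distributed on customer_id, co-located, and that the constraint must include the distribution column; names are illustrative:
```
CREATE TABLE orders (
    customer_id bigint,
    order_id bigint,
    PRIMARY KEY (customer_id, order_id)
);
CREATE TABLE order_items (
    customer_id bigint,
    order_id bigint,
    item text,
    FOREIGN KEY (customer_id, order_id) REFERENCES orders (customer_id, order_id)
);

-- distribute the referenced table first, then the referencing one
SELECT create_distributed_table('orders', 'customer_id');
SELECT create_distributed_table('order_items', 'customer_id');
```
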
Murat Tuncer b453f6c7ab Add master_run_on_worker UDF 2016-10-18 17:59:54 +03:00
Eren Basak cee7b54e7c Add worker transaction and transaction recovery infrastructure 2016-10-18 14:18:14 +03:00
Metin Doslu 27616cca52 Add regression tests for parameterized queries 2016-10-18 14:02:50 +03:00
Burak Yucesoy 6668d19a3b Make shard transfer functions co-location aware
With this change, the master_copy_shard_placement and master_move_shard_placement functions
start to copy/move the given shard along with its co-located shards.
2016-10-13 18:16:40 +03:00
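
A sketch of a co-location aware shard copy, assuming the usual master_copy_shard_placement arguments (shard id, source host/port, target host/port); the shard id, hosts, and ports are placeholders:
```
-- copies shard 102008 together with every shard co-located with it
SELECT master_copy_shard_placement(
    102008,
    'worker-1', 5432,   -- source node
    'worker-2', 5432    -- target node
);
```
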
Eren Basak ed3af403fd Add Metadata Snapshot Infrastructure
This change adds the required metadata snapshot infrastructure from the MX
codebase into Citus, mainly the metadata_sync.c file and the master_metadata_snapshot UDF.
2016-10-13 10:40:14 +03:00
Jason Petersen bcfc58a7c7
Fix tests and tell Travis to run them all
Two sets of tests are fixed by this change:
  * multi_agg_approximate_distinct
  * those in multi_task_tracker_extra_schedule

The first broke when we renamed stage to load in many files and was
never being run because the HyperLogLog extension wasn't easily
available in Debian. Now it's in our repo, so we install it and run
the test. I removed the distinct HLL target in favor of just always
running it and providing an output variant to handle when the extension
is absent. Basically, if PostgreSQL thinks HLL is available, the test
installs it and runs normally, otherwise the absent variant is used.

The second broke when I removed a test variant, erroneously believing
it to be related to an older Citus version. I've added a line in that
test to clarify why the variant is necessary (a practice we should
widely adopt).
2016-10-07 17:32:54 -06:00
Brian Cloutier 9d6699b07c Switch from pg_worker_list.conf file to pg_dist_node metadata table.
Related to #786

This change adds the `pg_dist_node` table that contains the information
about the workers in the cluster, replacing the previously used
`pg_worker_list.conf` file (or the one specified with `citus.worker_list_file`).

Upon update, the `pg_worker_list.conf` file is read and the `pg_dist_node` table is
populated with the file's content. After that, the `pg_worker_list.conf` file
is renamed to `pg_worker_list.conf.obsolete`.

For adding and removing nodes, the change also includes two new UDFs:
`master_add_node` and `master_remove_node`, which require superuser
permissions.

The 'citus.worker_list_file' GUC is kept for update purposes but is not used after the
update is finished.
2016-10-05 13:01:35 +03:00
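
A sketch of the new node management UDFs and the metadata table, run as a superuser; host names and ports are placeholders:
```
-- register a worker; a row for it appears in pg_dist_node
SELECT master_add_node('worker-3', 5432);

SELECT nodename, nodeport FROM pg_dist_node;

-- remove the worker again
SELECT master_remove_node('worker-3', 5432);
```
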
Robin Thomas f677fadbe6 Provides safe, idempotent shard-extended names to any object name
related to a table that might be distributed, allowing any name
that is within regular PostgreSQL length limits to be extended
with a shard ID for use in shards on workers. Handles multi-byte
character boundaries in identifiers when making prefixes for
shard-extended names. Includes tests.
Uses hash_any from PostgreSQL's access/hashfunc.c.
Removes AppendShardIdToStringInfo() as it's used only once
and arguably is best replaced there with a call to AppendShardIdToName().

Adds UDF shard_name(object_name, shard_id) to expose the shard-extended
name logic to other PL/pgSQL functions, UDFs, and scripts.

Bumps version to 6.0-2 to allow for UDF to be created in migration script.

Fixes citusdata/citus#781 and citusdata/citus#179.
2016-10-03 17:02:34 -04:00
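
A sketch of the new UDF, assuming the object name is passed as a regclass; the relation and shard id are placeholders:
```
-- returns the shard-extended name, e.g. 'events_102010', staying within
-- PostgreSQL's identifier length limits even for long or multi-byte names
SELECT shard_name('events'::regclass, 102010);
```
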
Robin Thomas c507a0df1c During repartitions, the partitionColumnType argument sent to workers
is now a `::regtype` using the qualified name of the column type,
not the column type OID which may differ between master/worker nodes.
Test coverage of a hash repartition using a UDT as the join column.

Note that the UDFs `worker_hash_partition_table` and `worker_range_partition_table`
are unchanged, and rightly expect an OID for the column type; but the
planner code building the commands now allows for `::regtype` casting
to do its magic.

Fixes citusdata/citus#111.
2016-10-03 13:41:20 -04:00
Burak Yucesoy 1ee39eb098 Internal co-location API
With this commit we introduce internal API for co-location related operations.
2016-09-29 11:56:53 +03:00
Murat Tuncer c16dec88c3 Add UDF master_expire_table_cache 2016-09-28 12:08:37 +03:00
Murat Tuncer 2eec0167be
Add support for truncate statement 2016-09-26 18:23:42 -06:00
Robin Thomas 614c858375 Forbid EXCLUDE constraints on distributed tables just as we forbid
UNIQUE or PRIMARY KEY constraints. Also, properly propagate valid
EXCLUDE constraints to worker shard tables.

If an EXCLUDE constraint includes the distribution column,
the operator must be an equality operator.
Tests in the regression suite cover exclusion constraints that include
the partition column, omit it, and include it with a non-equality
operator. Regression tests also verify that valid exclusion constraints
are propagated to the shard tables. And the tests work in different
timezones now.

Fixes citusdata/citus#748 and citusdata/citus#778.
2016-09-21 14:02:42 -04:00
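
A sketch of an exclusion constraint that is valid under the rule above, since the distribution column uses the equality operator; it assumes the master_create_distributed_table/master_create_worker_shards UDFs and that btree_gist is available on all nodes:
```
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE bookings (
    room_id int,
    during tsrange,
    -- the constraint includes the distribution column, so it must use equality on it
    EXCLUDE USING gist (room_id WITH =, during WITH &&)
);
SELECT master_create_distributed_table('bookings', 'room_id', 'hash');
SELECT master_create_worker_shards('bookings', 8, 1);
```
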
Jason Petersen e54d3f6d32
Rename test files with 'stage' in name
Ignored FDW files as those tests are being removed entirely, I believe.
2016-08-22 13:32:53 -06:00
Jason Petersen 35e9f51348
Replace verb 'stage' with 'load' in schedules
"Staging table" will be the only valid use of 'stage' from now on.
2016-08-22 11:48:41 -06:00
Robin Thomas 010cbf16fc Remove all usage of pg_dist_shard.shardalias in extension code. (#739)
Remove regression test of non-null shardalias.
2016-08-19 17:06:22 +03:00
Jason Petersen 5d525fba24
Permit "single-shard" transactions
Allows the use of modification commands (INSERT/UPDATE/DELETE) within
transaction blocks (delimited by BEGIN and ROLLBACK/COMMIT), so long as
all modifications hit a subset of nodes involved in the first such command
in the transaction. This does not circumvent the requirement that
each individual modification command must still target a single shard.

For instance, after sending BEGIN, a user might INSERT some rows to a
shard replicated on two nodes. Subsequent modifications can hit other
shards, so long as they are on one or both of these nodes.

SAVEPOINTs are supported, though if the user actually attempts to send
a ROLLBACK command that specifies a SAVEPOINT they will receive an
ERROR at the end of the topmost transaction.

Placements are only marked inactive if at least one replica succeeds
in a transaction where others fail. Non-atomic behavior is possible if
the shard targeted by the initial modification within a transaction has
a higher replication factor than another shard within the same block
and a node with the latter shard has a failure during the COMMIT phase.

Other methods of denoting transaction blocks (multi-statement commands
sent all at once and functions written in e.g. PL/pgSQL or other such
languages) are not presently supported; their treatment remains the
same as before.
2016-07-21 15:57:22 -06:00
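
A sketch of a transaction block permitted by this change, assuming 'orders' is a hash-distributed table on customer_id; each statement still targets a single shard:
```
BEGIN;
-- the first modification determines the set of participating nodes
INSERT INTO orders (customer_id, order_id) VALUES (1, 1001);
-- later modifications may hit other shards, provided their placements live
-- on a subset of the nodes involved above
INSERT INTO orders (customer_id, order_id) VALUES (7, 2001);
COMMIT;
```
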
Brian Cloutier ae91768c96 Evaluate functions on the master
- Enables using VOLATILE functions (like nextval()) in INSERT queries
- Enables using STABLE functions (like now()) in targetLists and joinTrees

UPDATE and INSERT can now contain non-immutable functions. INSERT can contain any kind of
expression, while UPDATE can contain any STABLE function, so long as a Var is not passed
into the STABLE function, even indirectly. UPDATE TargetEntrys can now also include Vars.

There's an exception: CASE/COALESCE expressions may not contain mutable functions.

Function calls in master_modify_multiple_shards are also evaluated.
2016-07-13 11:45:51 -07:00
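
A sketch of the newly allowed expressions, assuming the master_create_distributed_table/master_create_worker_shards UDFs; names are illustrative:
```
CREATE SEQUENCE event_id_seq;
CREATE TABLE events (event_id bigint, tenant_id int, created_at timestamptz);
SELECT master_create_distributed_table('events', 'tenant_id', 'hash');
SELECT master_create_worker_shards('events', 8, 1);

-- nextval() (VOLATILE) and now() (STABLE) are evaluated on the master
-- before the command is sent to the shards
INSERT INTO events VALUES (nextval('event_id_seq'), 42, now());
UPDATE events SET created_at = now() WHERE tenant_id = 42;
```
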
Burak Yucesoy 78aaad2738 Fix master_append_table_to_shard to work with schemas
Fixes #78

With this change, it is possible to append a table in any schema to a shard. The function
master_append_table_to_shard now supports schema names.
2016-06-17 04:35:00 +03:00
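
A sketch of an append with a schema-qualified source table, assuming the usual master_append_table_to_shard arguments (shard id, source table, source node, source port); the values are placeholders:
```
-- appends the rows of staging.line_items, read via the given node,
-- into shard 102040 of an append-distributed table
SELECT master_append_table_to_shard(
    102040,
    'staging.line_items',
    'worker-1', 5432
);
```
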
Murat Tuncer 9cb6a022a5 Reduce regression test runtime
- Added 2 more schedules for task-tracker and multi-binary instead of
  running multi_schedule 3 times
- Set task-tracker-delay for each long-running schedule
2016-06-15 16:35:07 +03:00
Murat Tuncer 2b0d6473b9 Add complex distinct count support for repartitioned subqueries
Single-table repartition subqueries now support count(distinct column)
and count(distinct (case when ...)) expressions. The repartition query
extracts the columns used in the aggregate expression and adds them to the target
list and group by list; the master query stays the same (count(distinct ...)),
but the attribute numbers inside the aggregate expression are modified to
reflect the changes in the repartition query.
2016-05-27 15:43:05 +03:00
eren 132d9212d0 ADD master_modify_multiple_shards UDF
Fixes #10

This change creates a new UDF: master_modify_multiple_shards
Parameters:
  modify_query: A simple DELETE or UPDATE query as a string.

The UDF is similar to the existing master_apply_delete_command UDF.
Basically, given the modify query, it prunes the shard list, re-constructs
the query for each shard and sends the query to the placements.

Depending on the value of citus.multi_shard_commit_protocol, the commit
can be done in one-phase or two-phase manner.

Limitations:
* It cannot be called inside a transaction block
* It can only be called with simple operator expressions (like single-shard modify)

Sample Usage:
```
SELECT master_modify_multiple_shards(
  'DELETE FROM customer_delete_protocol WHERE c_custkey > 500 AND c_custkey < 500');
```
2016-05-26 17:30:35 +03:00
Marco Slot fc4f23065a Add EXPLAIN for simple distributed queries 2016-04-30 00:11:02 +02:00
Andres Freund 25f919576f Add very basic infrastructure for schema upgrade scripts.
Citus' extension version now has a -$schemaversion appendix. When the
schema is changed, a new schema version has to be added; changes to the
same schema version across several commits inside a single pull request are ok.

Schema migration scripts between each schema version have to be
added. To ensure upgrade scripts work correctly a new regression test
ensures that all steps work.

The extension scripts to-be-used for CREATE EXTENSION (i.e. not
extension updates) are generated by concatenating citus.sql and the
relevant migration scripts.
2016-04-27 10:00:08 -07:00
Brian Cloutier 7a6d689259 Clear metadata_cache upon DROP EXTENSION
When we notice that pg_dist_partition is being invalidated we assume
that the citus extension is being dropped and drop state such as
extensionLoaded and the cached oids of all the metadata tables.

This frees the user from needing to reconnect after running DROP
EXTENSION, so we also no longer send a warning message.
2016-04-22 07:25:49 -07:00
Murat Tuncer 938546b938 Add router plannable check and router planning logic
for single shard select queries
2016-04-21 09:15:33 +03:00
Metin Doslu 1150ce6414 Send COPY rows in binary format 2016-04-12 20:22:31 +02:00
Marco Slot d25ee8fbd8 Support for COPY FROM, based on pg_shard PR by Postgres Pro 2016-04-12 20:22:31 +02:00
eren 8a284aab92 Fixes issue #313
Prior to this change, it was not possible to use UDFs in repartitioned
subqueries. The reason is that we were setting the search path explicitly
and omitting the public schema from that path.

This change adds the public schema to the explicitly set search path.
2016-03-30 15:39:12 +03:00
Murat Tuncer e74decf8b4 Allow users to create unique indexes
Users can now create unique indexes on partition columns
for hash and range distributed tables.
2016-03-24 09:31:06 +02:00
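
A sketch of the newly allowed index, assuming the master_create_distributed_table/master_create_worker_shards UDFs of this era; names are illustrative:
```
CREATE TABLE users (user_id bigint, email text);

-- allowed: the unique index is on the partition column
CREATE UNIQUE INDEX users_user_id_idx ON users (user_id);

SELECT master_create_distributed_table('users', 'user_id', 'hash');
SELECT master_create_worker_shards('users', 8, 2);
```
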
Onder Kalaci 136306a1fe Initial commit of Citus 5.0 2016-02-11 04:05:32 +02:00