Commit Graph

758 Commits (86a70515c54a217fe246f055b1e02b42f73a11a4)

Author SHA1 Message Date
velioglu a1ea29ec2b Use placement connection to drop shards instead of node connection 2017-06-14 14:14:59 +03:00
Marco Slot 70abfd29d2 Allow COPY after a multi-shard command
This change removes the XactModificationLevel check at the start of COPY
that was made redundant by consistently using GetPlacementConnection.
2017-06-09 13:54:58 +02:00
jmunsch 1647d17a14 Clarify error message for local and distributed query plans. 2017-06-01 11:52:49 -07:00
Marco Slot f1d804180b Don't take a table lock in ForeignConstraintGetReferencedTableId 2017-05-31 11:15:21 +02:00
Burak Yucesoy 8c1bbf1417 Register cache invalidation callback before version checks
With this commit we start to register the InvalidateDistRelationCacheCallback
function as a cache invalidation callback function before version checks,
because during version checks we use the cache to look up relation ids of some
relations, like pg_dist_relation or pg_dist_partition_logical_relid_index,
and we want to know about cache invalidations before accessing them.
2017-05-24 17:39:25 +03:00
Burak Yucesoy c7bfa06cb9 Fix incorrect call to CheckInstalledVersion
During version update, we indirectly called CheckInstalledVersion via
CheckCitusVersions. This obviously fails because during a version update a
mismatch between the installed version and the binary version is expected.
Thus, we remove that CheckCitusVersions call. We now only call CheckAvailableVersion.
2017-05-24 17:39:25 +03:00
Burak Yucesoy 9fb15c439c Add version checks to necessary UDFs 2017-05-22 09:53:29 +03:00
Burak Yucesoy eea8c51e1f Only error out on distributed queries when there is version mismatch
Before this commit, we were erroring out on almost all queries if there was a
version mismatch. With this commit, we only error out if the requested
operation touches distributed tables.

Normally we would need to use the distributed cache to understand whether a table
is distributed or not. However, it is not safe to read our metadata tables when
there is a version mismatch, thus it is not safe to create the distributed cache.
Therefore, for this specific case, we read directly from the pg_dist_partition
table. However, reading from the catalog is costly, so this method should be
avoided elsewhere as much as possible.
2017-05-22 09:53:29 +03:00
Burak Yucesoy acb0d23717 Fix crash during upgrade from 5.2 to 6.2
This commit fixes the problem where we incorrectly tried to reach the distributed table
cache when the extension was not loaded completely. We tried to reach the cache
because we wanted to get reference table information to activate the node. However,
it is actually not necessary to explicitly activate the nodes that come from
master_initialize_node_metadata, because it only runs during extension creation and
at that time there are no reference tables and all nodes are considered active.
2017-05-19 00:01:36 +03:00
Jason Petersen cc45712144 Bump extension and configure PACKAGE versions
Actually getting this done before the next dev cycle begins.
2017-05-17 15:25:30 -06:00
Jason Petersen 489aa73257
Add missing CCI call in metadata seq sync
Be explicit about the fact that we've made a modification: we need
subsequent commands to see this sequence.
2017-05-16 11:05:34 -06:00
Jason Petersen c9fa11b445
Use library and symbol name for bgw entry
PostgreSQL 10 takes away the ability to directly assign a function
pointer; the other approach (library and symbol name) is supported by
all versions.
2017-05-16 11:05:33 -06:00
Jason Petersen f86920f9d6
Add includes for missing standard headers
We use symbols from each of these and were relying on them being
included by other headers.
2017-05-16 11:05:33 -06:00
Jason Petersen 82b03d5cb6
Add explicit cast for argument to copyObject
PostgreSQL 10 adds a call to typeof, if supported.
2017-05-16 11:05:33 -06:00
Burak Yucesoy 5a3a32d6df Quote schema's owner name
When we propagate the schema creation command to data nodes we add the schema's
owner name too. Before this patch, we did not quote the owner's name, which
caused problems with names containing characters like '-'.
2017-05-15 16:26:32 +03:00
Burak Yucesoy 1b5560b2f7 Fix OwnerName function to work with schemas
We incorrectly tried to use the relation cache to find a particular schema's owner and,
when we could not find the schema in the relation cache (i.e., always), we automatically
used the current user as the schema's owner. This means we always created schemas on
the data nodes with the current user. With this patch we start to use the namespace
cache to find schemas.
2017-05-15 16:26:32 +03:00
Önder Kalacı e0257aecd9 Accept invalidation messages before accessing the metadata cache (#1406)
* Accept invalidation messages before accessing the metadata cache

This commit is crucial to prevent stale metadata reads from the
cache. Without this commit, some of the operations may use stale
metadata which could end up with various bugs such as crashes,
inconsistent/lost data etc.

As an example, consider that a COPY operation is blocked on shard
metadata lock. Another concurrent session updates the metadata and
invalidates the cache. However, since Citus doesn't accept invalidations,
COPY continues with the stale metadata once it acquires the lock.

With this commit, we make sure that invalidation messages are accepted
just before accessing the metadata cache, preventing any operation from
using stale metadata.

* Add isolation tests for placement changes and concurrent operations

   - add node with reference table vs COPY/insert/update/DDL
   - repair shard vs COPY/insert/update/DDL
   - repair shard vs repair shard
2017-05-12 12:32:35 +03:00
Marco Slot 6f9e18de24 Ensure all preceding writes are visible in data migration 2017-05-11 09:42:12 +02:00
Önder Kalacı 3ec502b286 Add support for parametrized execution for subquery pushdown (#1356)
Distributed query planning for subquery pushdown is done on the original
query. This prevents the use of external parameters during execution.
To overcome this, we manually replace the parameters on the original
query.
2017-05-10 09:38:48 +03:00
Marco Slot a8f368fced Fix locking in master_drop_all_shards / master_apply_delete_command 2017-05-08 17:26:55 +02:00
Marco Slot 853f07dd33 Don't change query tree of DDL commands 2017-05-04 21:34:28 +02:00
Jason Petersen f0c6c47c4e
Fix CREATE SEQUENCE generation bug
Apparently we've had a typo all this time causing us to pass the cache
value for the start value.
2017-05-03 21:47:06 -07:00
Önder Kalacı ef6d3587b6 Skip exhaustive test in CoPartitionedTables() if declared colocated (#1376)
That's considerably cheaper.
2017-05-02 03:33:21 +03:00
Önder Kalacı b74ed3c8e1 Subqueries in where -- updated (#1372)
* Support for subqueries in WHERE clause

This commit enables subqueries in WHERE clause to be pushed down
by the subquery pushdown logic.

The support covers:
  - Correlated subqueries with IN, NOT IN, EXISTS, NOT EXISTS,
    operator expressions such as (>, <, =, ALL, ANY etc.)
  - Non-correlated subqueries with (partition_key) IN (SELECT partition_key ..)
    (partition_key) =ANY (SELECT partition_key ...)

Note that this commit heavily utilizes the attribute equivalence logic introduced
in 1cb6a34ba8. In general, this commit mostly
adjusts the logical planner not to error out on subqueries in the WHERE clause.

* Improve error checks for subquery pushdown and INSERT ... SELECT

Since we allow subqueries in WHERE clause with the previous commit,
we should apply the same limitations to those subqueries.

With this commit, we do not iterate on each subquery one by one.
Instead, we extract all the subqueries and apply the checks directly
on those subqueries. The aim of this change is to (i) simplify the
code and (ii) bring it closer to the checks in the INSERT .. SELECT code base.

* Extend checks for unresolved parameters to include SubLinks

With the presence of subqueries in the WHERE clause (i.e., SubPlans on the
query), the existing way of checking for unresolved parameters fails. The
reason is that the parameters for SubPlans are kept on the parent plan, not
on the query itself (see primnodes.h for the details).

With this commit, instead of checking SubPlans on the modified plans,
we start to use originalQuery, where SubLinks represent the subqueries
in the WHERE clause. The unresolved parameters can be found on the SubLinks.

* Apply code-review feedback

* Remove unnecessary copying of shard interval list

This commit removes unnecessary copying of the shard interval list. Note
that there is no copyObject function implemented for shard intervals.
2017-05-01 17:20:21 +03:00
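For illustration, a hedged sketch of a WHERE-clause subquery of the non-correlated (partition_key) IN (SELECT partition_key ..) form described above that the planner can now push down; the events and users tables and their user_id distribution column are hypothetical:

    SELECT count(*)
    FROM events
    WHERE user_id IN
        (SELECT user_id FROM users WHERE country = 'TR');
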
Marco Slot 8edba5f309 Honour enable_ddl_propagation in truncate trigger 2017-04-29 03:32:52 +02:00
Brian Cloutier 22e7aa9a4f Fix crash in isolation tests
- There was a crash when the table a shardid belonged to changed during
  a session. Instead of crashing (a failed assert) we now throw an error
- Update the isolation test which was crashing to no longer exercise
  that code path
- Add a regression test to check that the error is thrown
2017-04-29 04:25:26 +03:00
Önder Kalacı ad5cd326a4 Subquery pushdown - main branch (#1323)
* Enabling physical planner for subquery pushdown changes

This commit applies the logic that exists in INSERT .. SELECT
planning to the subquery pushdown changes.

The main algorithm is as follows:
   - pick an anchor relation (i.e., target relation)
   - for each target shard interval
       - add the target shard interval's shard range
         as a restriction to the relations (if all relations
         are joined on the partition keys)
       - check whether the query is router plannable for the
         target shard interval
       - if router plannable, create a task

* Add union support within the JOINS

This commit adds support for UNION/UNION ALL subqueries that are
in the following form:

     .... (Q1 UNION Q2 UNION ...) as union_query JOIN (QN) ...

In other words, we currently do NOT support queries that are
in the following form, where the union query is not JOINed with
other relations/subqueries:

     .... (Q1 UNION Q2 UNION ...) as union_query ....

* Subquery pushdown planner uses original query

With this commit, we change the input to the logical planner for
subquery pushdown. Before this commit, the planner was relying
on the query tree that is transformed by the PostgreSQL planner.
After this commit, the planner uses the original query. The main
motivation behind this change is to simplify deparsing of
subqueries.

* Enable top level subquery join queries

This work enables
- Top level subquery joins
- Joins between subqueries and relations
- Joins involving more than 2 range table entries

A new regression test file is added to reflect enabled test cases

* Add top level union support

This commit adds support for UNION/UNION ALL subqueries that are
in the following form:

     .... (Q1 UNION Q2 UNION ...) as union_query ....

In other words, Citus now allows top level
unions to be wrapped in aggregation queries
and/or simple projection queries that only select
some fields from the lower level queries.

* Disallow subqueries without a relation in the range table list for subquery pushdown

This commit disallows subqueries without relation in the range table
list. This commit is only applied for subquery pushdown. In other words,
we do not add this limitation for single table re-partition subqueries.

The reasoning behind this limitation is that if we allowed pushing down
such queries, the result would include (shardCount * expectedResults),
whereas in a non-distributed world the result would be (expectedResult)
only.

* Disallow subqueries without a relation in the range table list for INSERT .. SELECT

This commit disallows subqueries without relation in the range table
list. This commit is only applied for INSERT.. SELECT queries.

The reasoning behind this limitation is that if we allowed pushing down
such queries, the result would include (shardCount * expectedResults),
whereas in a non-distributed world the result would be (expectedResult)
only.

* Change behaviour of subquery pushdown flag (#1315)

This commit changes the behaviour of the citus.subquery_pushdown flag.
Before this commit, the flag was used to enable the subquery pushdown logic. But,
with this commit, that behaviour is enabled by default. In other words, the
flag is now useless. We prefer to keep the flag since we don't want to break
backward compatibility. Also, we may consider using that flag for other
purposes in future commits.

* Require subquery_pushdown when limit is used in subquery

Using LIMIT in subqueries may cause incorrect
results to be returned. Therefore we allow limits in subqueries only
if the user explicitly sets the subquery_pushdown flag.

* Evaluate expressions on the LIMIT clause (#1333)

Subquery pushdown uses the original query, so the LIMIT and OFFSET clauses
are not evaluated. However, the logical optimizer expects these expressions
to already be evaluated by the standard planner. This commit manually
evaluates the functions on the logical planner for subquery pushdown.

* Better format subquery regression tests (#1340)

* Style fix for subquery pushdown regression tests

With this commit we intended a more consistent style for the
regression tests we've added in the
  - multi_subquery_union.sql
  - multi_subquery_complex_queries.sql
  - multi_subquery_behavioral_analytics.sql

* Enable the tests that are temporarily commented

This commit enables some of the regression tests that were commented
out until all the development is done.

* Fix merge conflicts (#1347)

 - Update regression tests to meet the changes in the regression
   test output.
 - Replace Ifs with Asserts given that the check is already done
 - Update shard pruning outputs

* Add view regression tests for increased subquery coverage (#1348)

- joins between views and tables
- joins between views
- union/union all queries involving views
- views with limit
- explain queries with view

* Improve btree operators for the subquery tests

This commit adds the missing comparison for the subquery composite key
btree comparator.
2017-04-29 04:09:48 +03:00
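As an illustration of the UNION-JOIN form enabled above, a minimal sketch; the clicks, views and users tables and the user_id distribution column are hypothetical, and the join is on the partition key as required:

    SELECT u.user_id, count(*)
    FROM (SELECT user_id FROM clicks
          UNION
          SELECT user_id FROM views) AS union_query
         JOIN users u ON (u.user_id = union_query.user_id)
    GROUP BY u.user_id;
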
Andres Freund 90b211267d Perform range based pruning if equality pruning has survivor.
We previously dismissed this as unimportant, but it turns out to be
very useful for the upcoming subquery pushdown, where a user might
specify an equality constraint in a subquery, and the subquery
pushdown machinery adds >= and <= restrictions on the shard boundary.
Previously the latter restriction was ignored.
2017-04-28 17:35:18 -07:00
Andres Freund 6c08fe72f9 Use stricter qual for pruning if both >/< and >=/<= are present.
Previously, if both <= and < (>= and >, respectively) were specified,
we always used the latter restriction.  Instead use the stricter one.
2017-04-28 17:35:18 -07:00
Marco Slot 6e58067962 Fix list length lookup in WorkerGetLiveNodeCount 2017-04-29 02:13:20 +02:00
Burak Yucesoy 6599677902 Fix check-vanilla tests
It seems that GEQO optimizations, when enabled, create their own memory context
and free it when it is no longer necessary. In multi_join_restriction_hook
we allocate our variables in the CurrentMemoryContext, which is GEQO's memory context
if it is active. To prevent deallocation of our variables when GEQO's memory context is
freed, we started to allocate memory for these variables in a separate MemoryContext.
2017-04-29 01:55:18 +02:00
Marco Slot 0b579d027a Check whether relation ID exists in citus_relation_size 2017-04-29 01:39:39 +02:00
Andres Freund d399f395f7 Faster shard pruning.
So far citus used postgres' predicate proofing logic for shard
pruning, except for INSERT and COPY which were already optimized for
speed.  That turns out to be too slow:
* Shard pruning for SELECTs is currently O(#shards), because
  PruneShardList calls predicate_refuted_by() for every
  shard. Obviously using an O(N) type algorithm for general pruning
  isn't good.
* predicate_refuted_by() is quite expensive in its own right. That's
  primarily because it's optimized for doing a single refutation
  proof, rather than performing the same proof over and over.
* predicate_refuted_by() does not keep persistent state (see 2.) for
  function calls, which means that a lot of syscache lookups will be
  performed. That's particularly bad if the partitioning key is a
  composite key, because without a persistent FunctionCallInfo
  record_cmp() has to repeatedly look-up the type definition of the
  composite key. That's quite expensive.

Thus replace this with custom-code that works in two phases:
1) Search restrictions for constraints that can be pruned upon
2) Use those restrictions to search for matching shards in the most
   efficient manner available:
   a) Binary search / Hash Lookup in case of hash partitioned tables
   b) Binary search for equal clauses in case of range or append
      tables without overlapping shards.
   c) Binary search for inequality clauses, searching for both lower
      and upper boundaries, again in case of range or append
      tables without overlapping shards.
   d) exhaustive search testing each ShardInterval

My measurements suggest that we are considerably, often orders of
magnitude, faster than the previous solution, even if we have to fall
back to exhaustive pruning.
2017-04-28 14:40:41 -07:00
Andres Freund 6bd2e3ed30 Add DistTableCacheEntry->hasOverlappingShardInterval.
This determines whether it's possible to perform binary search on
sortedShardIntervalArray or not.  If e.g. two shards have overlapping
ranges, that'd be prohibitive.

That'll be useful in a later commit introducing faster shard pruning.
2017-04-28 14:40:38 -07:00
Andres Freund 105483ec56 Add DistTableCacheEntry->shardValueCompareFunction.
That's useful when comparing values a hash-partitioned table is
filtered by.  The existing shardIntervalCompareFunction is about
comparing hashed values, not unhashed ones.

The added btree opclass function is so we can get a comparator
back. This should be changed much more widely, but is not necessary so
far.
2017-04-28 14:40:38 -07:00
Andres Freund 52571c00ad Build DistTableCacheEntry->shardIntervalCompareFunction even for 0 shards.
Previously we unnecessarily used the first shard's type
information to look up the comparison function.  But that
information is already available, so use it.  That's helpful because
we sometimes want to access the comparator function even if there are no
shards.
2017-04-28 14:40:38 -07:00
Andres Freund ba93d32c8a Fix: Make FindShardIntervalIndex robust against 0 shards. 2017-04-28 14:40:38 -07:00
Metin Doslu b6659bec22 Send explain queries with savepoints
With this commit, we started to send explain queries within a savepoint. After
running explain query, we rollback to savepoint. This saves us from side effects
of EXPLAIN ANALYZE on DML queries.
2017-04-28 12:13:48 -07:00
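The effect of the change above is roughly equivalent to the following SQL-level pattern; the savepoint name and the events table are illustrative only, and the actual statements Citus sends to workers may differ:

    BEGIN;
    SAVEPOINT citus_explain_savepoint;
    EXPLAIN ANALYZE DELETE FROM events WHERE user_id = 5;
    ROLLBACK TO SAVEPOINT citus_explain_savepoint;
    COMMIT;
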
Jason Petersen 93e3afc25c
Remove FastShardPruning method
With the other simplifications, it doesn't make sense to keep around.
2017-04-27 13:32:36 -06:00
Jason Petersen 42ee7c05f5
Refactor FindShardInterval to use cacheEntry
All callers fetch a cache entry and extract/compute arguments for the
eventual FindShardInterval call, so it makes more sense to refactor
into that function itself; this solves the use-after-free bug, too.
2017-04-27 13:32:36 -06:00
Andres Freund b7dfeb0bec Boring regression test output adjustments.
Soon shard pruning will be optimized not to generally work linearly
anymore.  Thus we can't print the pruned shard intervals as currently
done anymore.

The current printing of shard ids also prevents us from running tests
in parallel, as otherwise shard ids aren't linearly numbered.
2017-04-26 11:33:56 -07:00
Andres Freund 71a7f39b05 Skip exhaustive test in CoPartitionedTables() if declared colocated.
That's considerably cheaper.
2017-04-26 11:19:17 -07:00
Marco Slot 7f9e80db10 Only process error if not NULL in StoreErrorMessage 2017-04-21 17:01:01 +02:00
Marco Slot 7faf4657b7 Use right sizeof in UpdateRelationColocationGroup 2017-04-21 16:37:09 +02:00
Marco Slot 4ed093970a Support expressions in the partition column in INSERTs 2017-04-21 14:05:52 +02:00
velioglu 24d24db25c Implement ALTER TABLE ADD CONSTRAINT command 2017-04-20 15:02:33 +03:00
velioglu 8cbef819be Log message of across shard queries according to the log level 2017-04-20 12:24:46 +03:00
velioglu 2327b63291 Change native hash function with worker_hash 2017-04-19 22:16:55 +03:00
Jason Petersen 5272c2c44b
Enable distributed ALTER TABLE ... RENAME COLUMN
Pretty straightforward. Had some concerns about locking, but due to the
fact that all distributed operations use either some level of deparsing
or need to enumerate column names, they all block during any concurrent
column renames (due to the AccessExclusive lock).

In addition, I had some misgivings about permitting renames of the
distribution column, but nothing bad comes from just allowing them.

Finally, I tried to trigger any sort of error using prepared statements
and could not trigger any errors not also exhibited by plain PostgreSQL
tables.
2017-04-18 22:47:48 -06:00
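A minimal example of the newly supported command, assuming a hypothetical distributed table named orders; the rename is propagated to all shard placements:

    ALTER TABLE orders RENAME COLUMN o_custkey TO customer_key;
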
Marco Slot dfd7d86948 Stop using a sequence to generate unique job IDs 2017-04-18 11:31:51 +02:00
Burak Yucesoy 00747dc8c9 Set default value of isactive to true
With this change, we set the default value of the isactive column to true so that,
for upgrading users, all nodes will be marked as active and their environment will not break.
2017-04-18 09:40:44 +03:00
Burak Yucesoy 1a56b99f13 Fix node copy error
Instead of directly returning the heap tuple obtained from the heap scan,
we return a copied version of it.
2017-04-17 19:38:18 +03:00
Marco Slot af0e462409 Support UPDATE/DELETE with parameterised partition column qual 2017-04-17 16:17:30 +02:00
Marco Slot 5e58804d44 Support query parameters in combination with function evaluation 2017-04-17 15:40:55 +02:00
Marco Slot 0bcc227a62 Create indexes after worker_append_table_to_shard during shard repair 2017-04-17 15:17:21 +02:00
Burak Yucesoy e9095e62ec Decouple reference table replication
With this change we add an option to add a node without replicating all reference
tables to that node. If a node is added with this option, we mark the node as
inactive and no queries will be sent to that node.

We also added two new UDFs:
 - master_activate_node(host, port):
    - marks node as active and replicates all reference tables to that node
 - master_add_inactive_node(host, port):
    - only adds node to pg_dist_node
2017-04-17 13:33:31 +03:00
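A sketch of how the two UDFs introduced above are meant to be used together; the host and port are placeholders:

    -- add the node without replicating reference tables; it stays inactive
    SELECT master_add_inactive_node('10.0.0.3', 5432);

    -- later, replicate reference tables to it and mark it active
    SELECT master_activate_node('10.0.0.3', 5432);
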
Burak Yucesoy 7cfcb7d2f8 Error out on parameterized SQL functions
Before this commit, we were erroring out for queries containing parameterized SQL functions
like 'SELECT parameterized_sql_query(value)', as we should; however, we were returning wrong
results for queries like 'SELECT * FROM parameterized_sql_query(value)'. With this commit
we start to error out on such queries too.
2017-04-13 16:36:24 +03:00
Onder Kalaci 1cb6a34ba8 Remove uninstantiated qual logic, use attribute equivalences
In this PR, we aim to deduce whether each RTE_RELATION
is joined with at least one other RTE_RELATION on their partition keys. If each
RTE_RELATION follows the above rule, we can conclude that all RTE_RELATIONs are
joined on their partition keys.

In order to do that, we invented a new equivalence class, namely
AttributeEquivalenceClass. In very simple words, an AttributeEquivalenceClass is
identified by a unique id and consists of a list of AttributeEquivalenceMembers.

Each AttributeEquivalenceMember is designed to identify attributes uniquely within the
whole query. The necessity of this arises since varno attributes are defined within
a single level of a query. Instead, here we want to identify each RTE_RELATION uniquely
and try to find equality among each RTE_RELATION's partition key.

Whenever we find an equality clause A = B, where both A and B originate from
relation attributes (i.e., not random expressions), we create an
AttributeEquivalenceClass to record this knowledge. If we later find another
equivalence B = C, we create another AttributeEquivalenceClass. Finally, we can
apply transitivity rules and generate a new AttributeEquivalenceClass which includes
A, B and C.

Note that equality among the members are identified by the varattno and rteIdentity.

Each equality among RTE_RELATIONs is saved using an AttributeEquivalenceClass where
each member attribute is identified by an AttributeEquivalenceMember. In the final
step, we try to generate a common attribute equivalence class that holds as many
AttributeEquivalenceMembers as possible whose attributes are partition keys.
2017-04-13 11:51:26 +03:00
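As an illustration of the transitivity described above, a hedged sketch: with hypothetical distributed tables users, events and clicks all distributed on user_id, the two equality clauses below let the planner conclude that all three relations are joined on their partition keys:

    SELECT count(*)
    FROM users u, events e, clicks c
    WHERE u.user_id = e.user_id
      AND e.user_id = c.user_id;
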
velioglu 1fb11c738f Check binary output function of type. 2017-04-10 16:28:09 +03:00
Jason Petersen 7e46f41c12
Add comments, use strncmp, clean up GUC desc.
Good to go!
2017-04-04 16:16:49 -06:00
Jason Petersen 033fda9183
Clean up remaining error messages
Added details and hints, based off of similar PostgreSQL scenarios.
2017-04-04 16:11:59 -06:00
Jason Petersen ef81b21a49
Clean up ErrorIfUnstableCreateOrAlterExtensionStmt
Swaps an Assert in for an ereport, and adds details and hints to the
error message to help users with a possibly confusing scenario.
2017-04-04 15:58:57 -06:00
Jason Petersen ad3fbd9689
Refactor utility-skip/extn-check code
This was getting pretty long and complex in the context of the main
utility hook. Moved out the checks for what should skip Citus
processing and what should have version checks performed.
2017-04-04 15:07:22 -06:00
Burak Yucesoy a09614553f Add enable_version_checks GUC and address feedback 2017-04-04 19:11:13 +03:00
Burak Yucesoy 087d8427e3
Error out if binary citus version does not match installed extension
With this change, we start to error out if the loaded citus binary does not match
the available major version or the installed citus extension version. In this case
we force the user to restart the server or run ALTER EXTENSION, depending on the
situation.
2017-04-03 17:36:13 -06:00
Jason Petersen 4cdfc3a10f
Address review feedback
Should just about do it.
2017-04-03 11:44:57 -06:00
Jason Petersen cf775c4773
Improve CONCURRENTLY-related error messages
Thought this looked slightly nicer than the default behavior.

Changed preventTransaction to concurrent to be clearer that this code
path presently affects CONCURRENTLY code only.
2017-04-03 11:19:15 -06:00
Jason Petersen dd9365433e
Update documentation
Ensure all functions have comments, etc.
2017-04-03 11:19:15 -06:00
Jason Petersen d904e96c59
Address MX CONCURRENTLY problems
Adds a non-transactional multi-command method to propagate DDLs to all
MX/metadata-synced nodes.
2017-04-03 11:19:15 -06:00
Jason Petersen 32886e97a3
Add code to set index validity on failure
Coordinator code marks the index as invalid as a base, sets it as valid in a
transactional layer atop that base, then proceeds with worker commands.
If a worker command has problems, the rollback results in an index with
isvalid = false. If everything succeeds, the user sees a valid index.
2017-04-03 11:19:15 -06:00
Jason Petersen dea6c44f75
Remove CONCURRENTLY checks, fix tests
Still pending failure testing, which broke with my recent changes.
2017-04-03 11:19:15 -06:00
Jason Petersen 0b6c4e756e
Change DropStmt to generate worker DDL on master
Because we can't execute DROP INDEX CONCURRENTLY during transactions,
worker_apply_shard_ddl_command is insufficient.
2017-04-03 11:19:15 -06:00
Jason Petersen 95d8d27c4f
Change IndexStmt to generate worker DDL on master
Because we can't execute CREATE INDEX CONCURRENTLY during transactions,
worker_apply_shard_ddl_command is insufficient.
2017-04-03 11:19:14 -06:00
Marco Slot 0f355a4a48 Batch task_tracker_status calls to reduce task-tracker query times 2017-03-31 11:54:11 +02:00
Metin Doslu 54a277ff01 Add disable/enable trigger all support 2017-03-29 22:00:14 +03:00
Onder Kalaci 11665dbe3c Fix pushing down wrong queries for INSERT ... SELECT queries
Before this commit, in certain cases router planner allowed pushing
down JOINs that are not on the partition keys.

With @anarazel's suggestion, we change the logic to use an uninstantiated
parameter. Previously, the planner traversed the restriction
information and, once it found the parameter, replaced it with
the shard range. With this commit, instead of traversing the restrict
infos, the planner explicitly checks for the equivalence of the relation's
partition key with the uninstantiated parameter. If it finds an equivalence,
it adds the restrictions. In this way, we have more control over the
queries that are pushed down.
2017-03-24 11:37:35 +02:00
Jason Petersen 34a62abb7d
Address code review comments 2017-03-22 17:29:17 -06:00
Jason Petersen d95b5bbad3
Rework ReplicateGrantStmt to use new flow
This was the impetus for the previous commit that changed from using a
DDLJob * to a List * of them.
2017-03-22 17:29:16 -06:00
Jason Petersen 23f5e4282d
Change DDLJob usage to be wrapped in lists
To prepare for GRANT fixes.
2017-03-22 17:29:16 -06:00
Jason Petersen f181b24859
Move worker execution to after master, fix tests
Some tests relied on worker errors even though local commands were invalid.
Fixed those by ensuring preconditions were met to have commands work
correctly. Otherwise most test changes are related to slight changes
in local/remote error ordering.
2017-03-22 17:21:49 -06:00
Jason Petersen 419a4c3745
Remove execution from stmt-specific util functions
Now have a single Execute call in the main body.
2017-03-22 17:21:49 -06:00
Jason Petersen a64165767d
Rename Process*Stmt functions to Plan*Stmt
To reflect their new purpose planning a DDLJob rather than fully
processing a distributed DDL statement.
2017-03-22 17:21:49 -06:00
Jason Petersen a02a2a90c7
Refactor ExecuteDistDDLCommand to expect struct
Will let us separate out the determination of what to execute from its
actual execution.
2017-03-22 17:21:49 -06:00
Metin Doslu b1ee7ec93e
Fix access permission checks for distributed relations
With this commit, we add the range table list of the original query to our
custom plan. Therefore, PostgreSQL can check relations in the original query
for access permissions and error out if the proper access is not granted.
2017-03-22 15:25:00 -06:00
Murat Tuncer c4734d7d94 Rephrase router modify errors
The generic "distributed modifications must target exactly one shard"
message is replaced by more context-aware error messages.
2017-03-16 15:09:10 +03:00
velioglu e32aff1a26 Size UDFs implemented
citus_table_size, citus_relation_size and citus_total_relation_size UDFs are implemented.
2017-03-16 13:50:30 +03:00
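Example usage of the new size UDFs, assuming a hypothetical distributed table named events:

    SELECT citus_table_size('events'),
           citus_relation_size('events'),
           citus_total_relation_size('events');
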
Metin Doslu 1f838199f8 Use CustomScan API for query execution
Custom Scan is a node in the planned statement which helps external providers
abstract the data scan, not just for foreign data wrappers but also for regular
relations, so you can benefit from your own caching or hardware optimizations.
This sounds like only an abstraction on the data scan layer, but we can use it
as an abstraction for our distributed queries. The only thing we need to do is
to find the distributable parts of the query, plan for them and replace them with
a Citus Custom Scan. Then, whenever PostgreSQL hits this custom scan node in
its Volcano-style execution, it will call our callback functions which run the
distributed plan and provide tuples to the upper node as if it were scanning a regular
relation. This means fewer code changes, fewer bugs and more supported features
for us!

First, in the distributed query planner phase, we create a Custom Scan which
wraps the distributed plan. For the real-time and task-tracker executors, we add
this custom plan under the master query plan. For the router executor, we directly
pass the custom plan because there is no master query. Then, we simply let
the PostgreSQL executor run this plan. When it hits the custom scan node, we
call the related executor parts for the distributed plan, fill the tuple store in
the custom scan and return results to the PostgreSQL executor in Volcano style,
a tuple per XXX_ExecScan() call.

* Modify planner to utilize Custom Scan node.
* Create different scan methods for different executors.
* Use native PostgreSQL Explain for master part of queries.
2017-03-14 12:17:51 +02:00
Andres Freund 52358fe891 Initial temp table removal implementation 2017-03-14 12:09:49 +02:00
Jason Petersen 6f4886cd11
Revert "Remove unused SendCommandToWorker"
This reverts commit c8c308c109.
2017-03-13 15:48:51 -06:00
Murat Tuncer f657a744d5 Enable router planner for queries on range partitioned tables
Router planner now supports queries using range partitioned
tables. Queries on append partitioned tables are still not
supported.
2017-03-09 16:39:15 +03:00
Brian Cloutier c8c308c109 Remove unused SendCommandToWorker 2017-03-08 16:30:23 +03:00
Brian Cloutier a2ba565a9e Remove unused master_stage_shard_{placement_,}row 2017-03-07 11:59:26 +03:00
Brian Cloutier 95936ff481 Remove unused master_get_round_robin_candidate_nodes 2017-03-07 11:51:24 +03:00
Brian Cloutier 807beb7bc0 Remove master_get_local_first_candidate_nodes 2017-03-07 11:50:59 +03:00
Andres Freund fa5b8fb39f Fix SendRemoteCommandParams() handling of a NULL MultiConnection->pgConn. (#1271)
Previously we'd segfault in PQisnonblocking() which, contrary to other
libpq calls, doesn't handle a NULL PQconn (because there'd be no
appropriate return value for that).

cr: @jasonmp85
2017-03-03 12:02:15 -07:00
Murat Tuncer 72027f2eba Remove default clause from shard DDL when sequences are used 2017-03-01 17:32:48 +03:00
Marco Slot bab1b65491 Fix spelling in master_initialize_node_metadata comment 2017-03-01 12:27:50 +01:00
Jason Petersen 047825c6ca
Rename misleading allowEmpty parameter
Last bit of PR feedback.
2017-02-28 22:48:00 -07:00
Marco Slot 56d4d375c2 Address review feedback in create_distributed_table data loading 2017-02-28 17:39:45 +01:00
Marco Slot db98c28354 Address review feedback in COPY refactoring 2017-02-28 17:39:45 +01:00
Marco Slot d74fb764b1 Use CitusCopyDestReceiver for regular COPY 2017-02-28 17:24:45 +01:00
Marco Slot d11eca7d4a Load data into distributed table on creation 2017-02-28 17:24:45 +01:00
Marco Slot bf3541cb24 Add CitusCopyDestReceiver infrastructure 2017-02-28 17:24:45 +01:00
Burak Velioglu e158c7ae67 Merge branch 'master' into disallow_master_appy_delete_on_hash 2017-02-24 10:40:23 +02:00
velioglu 4dbb69cfc3 Fix error message of start_metadata_sync_to_node
Single quotation mark is added around nodename to make the
error code consistent with master_add_node usage.
2017-02-22 18:03:58 +03:00
Metin Doslu ee425871ee Get reproducible costs between different PostgreSQL versions 2017-02-22 15:40:02 +02:00
Burak Velioglu 49812ddfa0 Disallow master_apply_delete_command on hash distributed table
The delete operation is blocked for any table distributed by hash when using master_apply_delete_command. The master_modify_multiple_shards command is suggested as a hint.
2017-02-22 11:54:46 +03:00
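A sketch of the behaviour described above, with a hypothetical hash-distributed events table:

    -- now errors out for a hash distributed table
    SELECT master_apply_delete_command('DELETE FROM events WHERE created_at < ''2016-01-01''');

    -- the suggested alternative from the hint
    SELECT master_modify_multiple_shards('DELETE FROM events WHERE created_at < ''2016-01-01''');
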
Andres Freund 9721e80901 Use DEBUG2 instead of DEBUG4 in INSERT SELECT tests & debug message.
During later work the transaction debug output will change (as it will
in postgres 10), which makes it hard to see actual changes in the
INSERT ... SELECT ... test.  Reduce to DEBUG2 after changing a debug
message to that log level.
2017-02-20 12:56:16 +02:00
Eren Basak df9cf346ee Enforce statement based replication on old APIs and non-hash tables
This change ignores the `citus.replication_model` setting and uses
statement-based replication for

- Tables distributed via the old `master_create_distributed_table` function
- Append and range partitioned tables, even if created via
`create_distributed_table` function

This seems like the easiest solution to #1191, without changing the existing
behavior and harming existing users with custom scripts.

This change also prevents RF>1 on streaming replicated tables on `master_create_worker_shards`

Prior to this change, the `master_create_worker_shards` command was not checking
the replication model of the target table, thus allowing RF>1 with streaming
replicated tables. With this change, `master_create_worker_shards` errors
out in that case.
2017-02-16 10:37:53 -08:00
Onder Kalaci 95f8382ca2 Bugfix for creating foreign key
This commit fixes a crash: adding a foreign key without
specifying the referenced column crashed the backend.
2017-02-07 09:34:24 +02:00
Brian Cloutier e6e5f63d9d Utility hook does nothing if the extension is not loaded 2017-02-02 17:48:31 +02:00
Brian Cloutier a30b9b93a4 Set a memory context when throwing deferred errors 2017-02-02 15:14:21 +02:00
Brian Cloutier e3c763c3f7 Start remote transactions in master_append_table_to_shard
Add a call to RemoteTransactionBeginIfNecessary so that BEGIN is
actually sent to the remote connections. This means that ROLLBACK and
Ctrl-C are respected and don't leave the table in a partial state.
2017-02-01 18:12:19 +02:00
Eren Basak ae0bfb1394 Allow dropping sequences on mx workers
This change allows users to drop sequences on MX workers. Previously, Citus didn't allow dropping
sequences on MX workers because it could cause shards to be dropped if `DROP SEQUENCE ... CASCADE`
is used. We now allow that since allowing sequence creation but not dropping hurts user experience
and also may cause problems with custom Citus solutions.
2017-01-31 14:51:44 -08:00
Brian Cloutier 6843ad8e91 Fix bug where router executor sends query to failed connections 2017-01-27 09:40:30 +02:00
Brian Cloutier 1173f3f225 Refactor CheckShardPlacements
- Break CheckShardPlacements into multiple functions (The most important
  is MarkFailedShardPlacements), so that we can get rid of the global
  CoordinatedTransactionUses2PC.
- Call MarkFailedShardPlacements in the router executor, so we mark
  shards as invalid and stop using them while inside transaction blocks.
2017-01-26 13:20:45 +02:00
Marco Slot f56454360c Mark failed placements as inactive immediately after COPY 2017-01-25 19:19:39 +03:00
Marco Slot b1626887d5 Don't mark placements inactive in COPY after successful connection 2017-01-25 19:19:38 +03:00
Marco Slot d0c76407b8 Set placement to inactive on connection failure in COPY 2017-01-25 19:19:38 +03:00
Marco Slot 85c1a87999 Short circuit in multi_ProcessUtility on ABORT/COMMIT 2017-01-25 11:57:00 +01:00
Marco Slot 2748660b1c Always skip foreign key validation when enable_ddl_propagation is off 2017-01-25 11:56:59 +01:00
Marco Slot ba940a1de9 Use coordinator instead of schema node in terminology 2017-01-25 11:07:23 +01:00
Marco Slot 72725ba30c Use bigserial instead of BIGINT in sequence error 2017-01-25 11:07:23 +01:00
Burak Yucesoy d80e7849a4 Convert DropShards to use new connection API
With this change the DropShards function started to use the new connection API. DropShards
is used by DROP TABLE, master_drop_all_shards and master_apply_delete_command,
therefore all of these functions now support transactional operations. In DropShards,
if we cannot reach a node, we mark the shard state of the related placements as
FILE_TO_DELETE and continue to drop the remaining shards; however, if any error occurs after
establishing the connection, we ROLLBACK the whole operation.
2017-01-23 21:08:41 +03:00
Burak Yucesoy 2489c59c15 In case of failed transactions update shard state only if it is FILE_FINALIZED
Before this change, when a transaction failed, we updated the related placements' shard states
to FILE_INACTIVE during XACT_EVENT_PRE_COMMIT. However, that means if another code block
changed the shard state to something else (e.g. FILE_TO_DELETE) before XACT_EVENT_PRE_COMMIT,
we overwrote it. To prevent that problem, in case of failure we now change the
shard state only if its current shard state is FILE_FINALIZED.
2017-01-23 21:04:57 +03:00
Burak Yucesoy 484cb12cd0 Add LoadShardPlacement UDF
This UDF returns a shard placement from the cache given a shard id and placement id. At the
moment it iterates over all shard placements of the given shard via ShardPlacementList and
searches for the given placement id in that list, which is not a good solution performance-wise.
However, currently, this function will be used only when there is a failed transaction.
If a need arises we can optimize this function in the future.
2017-01-23 21:04:57 +03:00
Marco Slot 1585c02322 Use placement connection API for multi-shard transactions 2017-01-23 18:34:50 +01:00
Andres Freund 6939cb8c56 Hack up PREPARE/EXECUTE for nearly all distributed queries.
All router, real-time, task-tracker plannable queries should now have
full prepared statement support (and even use router when possible),
unless they don't go through the custom plan interface (which
basically just affects LANGUAGE SQL (not plpgsql) functions).

This is achieved by forcing postgres' planner to always choose a
custom plan, by assigning very low costs to plans with bound
parameters (i.e. ones where the postgres planner replanned the query
upon EXECUTE with all parameter values provided), instead of the
generic one.

This requires some trickery, because for custom plans to work the
costs for a non-custom plan have to be known, which means we can't
error out when planning the generic plan.  Instead we have to return a
"faux" plan, that'd trigger an error message if executed.  But due to
the custom plan logic that plan will likely (unless called by an SQL
function, or because we can't support that query for some reason) not
be executed; instead the custom plan will be chosen.
2017-01-23 09:23:50 -08:00
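A minimal sketch of the prepared statement flow that should now be handled, assuming a hypothetical events table distributed on user_id:

    PREPARE user_event_count(int) AS
        SELECT count(*) FROM events WHERE user_id = $1;

    EXECUTE user_event_count(42);
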
Andres Freund c244b8ef4a Make router planner error handling more flexible.
So far router planner had encapsulated different functionality in
MultiRouterPlanCreate. Modifications always go through router, selects
sometimes. Modifications always error out if the query is unsupported,
selects return NULL.  Especially the error handling is a problem for
the upcoming extension of prepared statement support.

Split MultiRouterPlanCreate into CreateRouterPlan and
CreateModifyPlan, and change them to not throw errors.

Instead, errors are now reported by setting the new
MultiPlan->planningError.

Callers of router planner functionality now have to throw errors
themselves if desired, but also can skip doing so.

This is a pre-requisite for expanding prepared statement support.

While touching all those lines, improve a number of error messages by
getting them closer to the postgres error message guidelines.
2017-01-23 09:23:50 -08:00
Andres Freund 7681f6ab9d Centralize more of distributed planning into CreateDistributedPlan().
The name CreatePhysicalPlan() hasn't been accurate for a while, and
the split of work between multi_planner() and CreatePhysicalPlan()
doesn't seem perfect.  So rename to CreateDistributedPlan() and move a
bit more logic in there.
2017-01-23 09:23:50 -08:00
Andres Freund 557ccc6fda Support for deferred error messages.
It can be useful, e.g. in the upcoming prepared statement support, to
be able to return an error from a function that is not raised
immediately, but can later be thrown.  That allows e.g. attempting to
plan a statement using different methods and creating good error
messages in each planner, but only erroring out after all planners
have been run.

To enable that, create support for deferred error messages that can be
created (supporting errorcode, message, detail, hint) in one function,
and then thrown in a different place.
2017-01-23 09:23:50 -08:00
Andres Freund 9a82e8f06b Make usage of static a bit more consistent in multi_planner.c. 2017-01-23 09:23:50 -08:00
Jason Petersen 56197dbdba
Add replication_model GUC
This adds a replication_model GUC which is used as the replication
model for any new distributed table that is not a reference table.
With this change, tables with replication factor 1 are no longer
implicitly MX tables.

The GUC is similarly respected during empty shard creation for e.g.
existing append-partitioned tables. If the model is set to streaming
while replication factor is greater than one, table and shard creation
routines will error until this invalid combination is corrected.

Changing this parameter requires superuser permissions.
2017-01-23 09:05:14 -07:00
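A hedged sketch of using the new GUC, assuming a hypothetical events table; per the commit, streaming replication is only accepted while the shard replication factor is 1:

    SET citus.shard_replication_factor = 1;
    SET citus.replication_model = 'streaming';
    SELECT create_distributed_table('events', 'user_id');
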
Brian Cloutier fe5465aa4e Port master_append_table_to_shard to new connection API (#1149)
If any placements fail it doesn't update shard statistics on those placements.

A minor enabling refactor: Make CoordinatedTransactionUses2PC public (it used to be CoordinatedTransactionUse2PC but that symbol already existed, so renamed it as well)
2017-01-23 15:57:44 +02:00
Burak Yucesoy 2e1df4c910 Reword error message for outer joins requiring repartition
We changed the error message which appears when the user tries to execute an outer join
that requires repartitioning. The old error message mentioned 1-to-1 shard
partitioning, which may not be clear to the user.
2017-01-23 10:42:36 +03:00
Marco Slot ea855ddf86 Add an enable_deadlock_prevention flag to allow router transactions to expand to multiple nodes 2017-01-22 17:31:24 +01:00
Marco Slot 87ae26aef3 Ensure job IDs are unique across workers 2017-01-22 16:55:14 +01:00
Andres Freund 78b085106a Remove connection_cache.[ch]. 2017-01-21 09:01:15 -08:00
Andres Freund 6ec34bed84 Remove remnants of commit_protocol.[ch]. 2017-01-21 09:01:15 -08:00
Andres Freund 52c3369f79 Minimal citus tools conversion to new connection API. 2017-01-21 09:01:14 -08:00
Önder Kalacı 594fa761e1 Merge branch 'master' into fix_command_counter_increment 2017-01-21 09:21:19 +02:00
Murat Tuncer d76f781ae4 Convert multi copy to use new connection api
This enables proper transactional behaviour for copy and relaxes some
restrictions like combining COPY with single-row modifications. It
also provides the basis for relaxing restrictions further, and for
optionally allowing connection caching.
2017-01-20 19:15:19 -08:00
Jason Petersen 4e7b23472c
Change default replication factor to one
Took the quick-and-dirty approach of changing it back to two during
test runs. Can update tests to expect one in due time.
2017-01-20 18:56:43 -07:00
Andres Freund 3a36d32c43 Mark some now unnecessarily exposed multi_planner.c functions static. 2017-01-20 12:31:56 -08:00
Andres Freund 608bed0387 Don't duplicate planning logic in citus' explain hook.
Instead use pg_plan_query() like the normal explain does, and use that
to explain the query.  That's important because it allows to remove
the duplicated planner logic from multi_explain - and that logic is
about to get more complicated.
2017-01-20 12:31:28 -08:00
Andres Freund 0f28a11970 Remove citus.explain_multi_logical/physical_plan.
They make fixing explain for prepared statements harder, and they don't
really fit into EXPLAIN in the first place.  Additionally they're
currently not exercised in any tests.
2017-01-20 12:31:19 -08:00
Onder Kalaci bd825be340 Improve heap access methods
This commit improves heap access methods for reference table
upgrade and colocation group modifications.
2017-01-20 14:53:29 +02:00
Metin Doslu 2bd8f8f12e Add a function to delete shard metadata from MX nodes 2017-01-20 14:38:01 +02:00
Metin Doslu 93e626c896 Refactor get_shard_id_for_distribution_column() and other minor changes 2017-01-20 14:38:01 +02:00
Metin Doslu ed77260aa1 Return a deep copy shard list from ColocatedShardIntervalList() 2017-01-20 14:38:01 +02:00
Metin Doslu 7cff8719c2 Add worker_hash() and a stub for isolate_tenant_to_new_shard() 2017-01-20 14:38:01 +02:00
Murat Tuncer c12bd7b75e
Remove hint message from master_remove_node UDF
Hint about master_disable_node  was giving wrong
impression to users. Removal is better than keeping it.
2017-01-18 22:33:00 -07:00
Eren Basak 4def1ca696 Prevent COPY to reference tables from worker nodes 2017-01-18 17:38:01 +03:00
Eren Basak e7c15ecc1f Make `upgrade_to_reference_table` function MX-compatible 2017-01-18 16:49:50 +03:00
Eren Basak 56ca590daa Propagate metadata changes for deleted reference table placements on master_remove_node call 2017-01-18 16:00:07 +03:00
Eren Basak be78769ae4 Propagate new reference table placement metadata on `master_add_node` 2017-01-18 15:59:06 +03:00
Eren Basak 23b2619412 Make reference table metadata synced to workers 2017-01-18 15:59:05 +03:00
Eren Basak e44d226221 Propagate Metadata to Workers on `create_reference_table` call. 2017-01-18 11:05:24 +03:00
Eren Basak b686d9a025 Add Sequence Support for MX Tables
This change adds support for serial columns to be used with MX tables.
Prior to this change, sequences of serial columns were created on all
workers (to be able to create shards) but never used. With MX, we
need to set up the sequences so that sequences on each worker generate
unique values. This is done by setting the MINVALUE, MAXVALUE and
START values of the sequence.
2017-01-18 09:43:38 +03:00
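Conceptually, the setup described above amounts to giving each worker's sequence a disjoint range; a purely illustrative sketch (the sequence name and the range are hypothetical, not the actual scheme Citus computes):

    -- on one worker, for a serial column backed by events_id_seq
    ALTER SEQUENCE events_id_seq
        MINVALUE 1000000 MAXVALUE 1999999 START WITH 1000000;
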
Eren Basak b1ce8d61c0 Create Invalidation Trigger for pg_dist_local_group Table Updates 2017-01-18 09:43:38 +03:00
Andres Freund bdef35ac14 Query placementId in RemoteFinalizedShardPlacementList().
Not having the id in the ShardPlacement struct causes issues while
making COPY use the placement-aware connection management.
2017-01-17 13:27:26 -08:00
Brian Cloutier 67ee357d7f Port WorkerShardStats to new connection API
Part of the work in citusdata/citus#1101, this is a pretty direct port
over to the new functions and shouldn't result in any behavior changes.
2017-01-17 17:04:37 +02:00
Brian Cloutier b1b2b4fadf Create ExecuteOptionalRemoteCommand
A small refactor which pulls some code out of `RecoverWorkerTransactions`
and into `remote_commands.c`. This code block currently only occurs in
`RecoverWorkerTransactions` but will be useful to other functions
shortly.

Unfortunately we couldn't call it `ExecuteRemoteCommand`, that name was
already taken.
2017-01-17 17:04:37 +02:00
Brian Cloutier 539a205462 Pass entire ShardPlacement into WorkerShardStats
A small refactor so we'll be able to call the new connection API (which
requires having a ShardPlacement) from within WorkerShardStats.
2017-01-17 17:04:37 +02:00
Andres Freund b9385700ee Make placement_connection.c colocation aware.
Because of foreign keys and similar concerns there should only be a
single modifying/DDL connection for a set of colocated placements to a
node.  To enforce this, placement_connection.c now has an additional
hash table keeping track of the connections to a set of colocated
placements.  In addition to enforcing per-placement restrictions on
connections, there are now very similar restrictions for sets of
colocated placements.
2017-01-16 13:47:01 -08:00
Andres Freund 6972186652 Add ShardPlacement fields required for colocated placement connection mapping. 2017-01-16 13:42:54 -08:00
Andres Freund 1d79820b74 Fix use of wrong constant.
This could potentially lead to spuriously shared connections if the
first 63 characters of a hostname are the same.
2017-01-16 13:42:53 -08:00
Andres Freund 4b1d37b7be Remove fields used in earlier revisions of placement_connection.c. 2017-01-16 13:42:53 -08:00
Onder Kalaci a7ed49c16e
Improve error messages for INSERT INTO .. SELECT
This commit is intended to improve the error messages while planning
INSERT INTO .. SELECT queries. The main motivation for this change is
that we used to map multiple cases into a single message. With this change,
we added explicit error messages for many cases.
2017-01-16 12:16:14 -07:00
Burak Yucesoy 3315ae6142 Remove placement metadata of reference tables after master_remove_node
With this change, we start to delete placements of reference tables at the given worker node
after a master_remove_node UDF call. We remove the placement metadata at the master node but we do
not drop the actual shard from the worker node. There are two reasons for that decision:
first, it is not critical to DROP the shards in the workers because Citus will ignore them
as long as the node is removed from the cluster, and if we add that node back to the cluster we will
DROP and recreate all reference tables. Second, if the node is unreachable, it becomes
complicated to cover failure cases and have transaction support.
2017-01-16 11:24:56 +03:00
Murat Tuncer e7935a3be4 Report error when original range table id is not found in NewTableId() 2017-01-13 09:39:43 +03:00
Murat Tuncer 77f8db6b14 Add view support
Enables use of views within distributed queries.
Users can create and use a view on distributed tables/queries
as they would with regular queries.

After this change router queries will have full support for views;
INSERT INTO ... SELECT queries will support reading from views, but not
writing into them. Outer joins have limited support, and
error out in certain cases, such as when a view is on the inner side
of the outer join.

Although PostgreSQL supports writing into views under certain circumstances,
we disallow that for distributed views.
2017-01-13 09:39:42 +03:00
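A small sketch of the view support described above, using hypothetical distributed tables:

    CREATE VIEW recent_events AS
        SELECT user_id, event_type
        FROM events
        WHERE created_at > now() - interval '7 days';

    -- router and real-time queries can now read from the view
    SELECT count(*) FROM recent_events WHERE user_id = 42;
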
Onder Kalaci aed5f817fa Refactor CheckShardPlacements() and improve support for node removal
This commit refactors CheckShardPlacements() so that it only considers
modifyingConnection. Also, it skips nodes which are removed from the
cluster.
2017-01-12 20:10:10 +02:00
Murat Tuncer cb1dfd0a17 Add hint to errored real time queries 2017-01-12 11:33:35 +03:00
Onder Kalaci 1efa301ada Copy on reference tables should never mark placements invalid
This commit ensures that COPY does not mark any placement
of a reference table as INVALID in case of an error.
2017-01-12 02:43:41 +02:00
Eren Basak 859b920ba9 Fix escaping of workerrack in NodeListInsertCommand
This change fixes a small bug about quoting of workerrack column in
NodeListInsertCommand:

Previous: `"..., '%s'", workerRack`
Now: `"..., %s", quote_literal_cstr(workerRack)`
2017-01-11 10:18:48 +03:00
Andres Freund b813b39241 Cache ShardPlacements in metadata cache.
So far we've reloaded them frequently. Besides avoiding that cost -
noticeable for some workloads with large shard counts - it makes it
easier to add information to ShardPlacements that help us make
placement_connection.c colocation aware.
2017-01-10 18:14:18 -08:00
Andres Freund 8cb47195ba Make LoadShardInterval() backed by the metadata cache.
Doing so requires adding a mapping from shardId to the cache
entries. For that metadata_cache.c now maintains an additional
hashtable. That hashtable only references shard intervals in the
dist table cache.
2017-01-10 17:00:19 -08:00
Andres Freund f6e8647337 Split DistTableCacheEntry() into separate functions.
Previously the function was getting too large. Thus this splits the
function into separate parts for looking up the cache entry and
building the cache contents.
2017-01-10 15:23:18 -08:00
Onder Kalaci cd8e41bb79 Fix CloseNodeConnections to actually close connections
CloseNodeConnections() is supposed to close connections to a given node.
However, before this commit it failed to actually call PQfinish() on the
connections. Using CloseConnection() handles closing and all other necessary
actions.
2017-01-11 01:13:58 +02:00
Murat Tuncer 95862632de Add citus tools to default configuration 2017-01-10 17:53:27 +03:00
Murat Tuncer b93185d800 Add master_disable_node UDF
We can now remove nodes from the cluster regardless of whether they
have an active shard placement.
2017-01-10 10:54:57 +03:00
Burak Yucesoy 59d3d05bc4 Error out on CTEs with data modifying statement
With this change we start to error out on router planner queries where a common table
expression with a data-modifying statement is present. We already do not support
a data-modifying statement using the result of a CTE; now we also error out
if the CTE itself is a data-modifying statement.
2017-01-10 10:30:09 +02:00
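An example of the kind of query that now errors out, with a hypothetical events table; the CTE itself is a data-modifying statement:

    WITH deleted AS (
        DELETE FROM events WHERE user_id = 42 RETURNING *
    )
    SELECT count(*) FROM deleted;
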
Marco Slot ef326b202a PQclear in ReportResultError to prevent memory leaks 2017-01-10 02:51:39 +01:00
Marco Slot 31231ce196 Use GetNodeConnection to establish a connection in transaction recovery 2017-01-10 02:44:34 +01:00
Andres Freund c390daed0f Use interrupt checking libpq wrappers in router executor. 2017-01-09 14:02:45 -08:00
Andres Freund 7320c17f00 Convert router executor to placement connection management infrastructure.
Remove the router specific transaction and shard management, and
replace it with the new placement connection API.  This mostly leaves
behaviour alone, except that it is now, inside a transaction, legal to
select from a shard to which no pre-existing connection exists.

To simplify code the code handling task executions for select and
modify has been split into two - the previous coding was starting to
get confusing due to the amount of only conditionally applicable code.

Modification connections & transactions are now always established in
parallel, not just for reference tables.
2017-01-09 13:13:02 -08:00
Andres Freund bfa742d794 Centralized shard/placement connection and state management.
Currently there are several places in citus that map placements to
connections and that manage placement health. Centralize this
knowledge.  Because of the centralized knowledge about which
connection has previously been used for which shard/placement, this
also provides the basis for relaxing restrictions around combining
various forms of DDL/DML.

Connections for a placement can now be acquired using
GetPlacementConnection(). If the connection is used for DDL or DML, the
FOR_DDL/FOR_DML flags should be used respectively.  If an individual
remote transaction fails (but the transaction on the master succeeds)
and FOR_DDL/FOR_DML was specified, the placement is marked as
invalid, unless that would mark all placements for a shard as invalid.
2017-01-09 13:13:02 -08:00
Andres Freund 3286b99ff1 Remove useless changing of CurrentMemoryContext. 2017-01-06 09:16:45 -08:00
Andres Freund 6291998ae1 Use FinishConnectionListEstablishment() instead of manually iterating. 2017-01-06 09:16:01 -08:00
Andres Freund d256f3fca9 Remove unused LogPreparedTransactions() function.
This is unused since 92c7567008.
2017-01-06 09:15:01 -08:00
Burak Yucesoy 9c9f479e4b Replicate reference tables when new node is added
With this change, we start to replicate all reference tables to the new node when a new node
is added to the cluster with the master_add_node command. We also update the replication
factor of the reference tables' colocation group.
2017-01-05 14:30:41 +03:00
Onder Kalaci 6d050fd677 Use 2PC for reference table modification
With this commit, we ensure that the router executor always uses
2PC for reference table modifications and never marks their placements
as INVALID.
2017-01-04 12:46:35 +02:00
Burak Yucesoy 31cd2357fe Add upgrade_to_reference_table
With this change we introduce a new UDF, upgrade_to_reference_table, which can be used to
upgrade existing broadcast tables to reference tables. For the upgrade, we require that the
given table contains only one shard.
2017-01-02 17:54:42 +02:00
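A minimal usage sketch, assuming an existing single-shard table named insurance_rates (the table name is hypothetical):

    SELECT upgrade_to_reference_table('insurance_rates');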
Eren Basak 7e09bd6836 Error on Unsupported Features on Workers
This change makes the metadata workers error out on unsupported commands.
2017-01-02 16:03:45 +03:00
Marco Slot 59bc5972fa
Use MultiConnection in multi-shard transactions 2016-12-30 14:43:21 -07:00
Metin Doslu 1ddc70ca55 Add binary search capability to ShardIndex()
Renamed FindShardIntervalIndex() to ShardIndex() and added binary search
capability. It used to assume that hash partitioned tables are always
uniformly distributed, which is not true if the upcoming tenant isolation
feature is applied. This commit also reduces code duplication.
2016-12-30 18:55:34 +02:00
Eren Basak e43eed0f7a Prevent Deadlock on Dropping MX Tables with Sequences
This change prevents a deadlock situation during DROP TABLE on an
mx table with sequences on workers with metadata.
2016-12-28 16:32:20 +03:00
Burak Yucesoy 88ee7802dd Address Onder's comments 2016-12-28 12:26:16 +03:00
Burak Yucesoy bb9e95e134 Error out on foreign keys with reference tables
We have one replica of each reference table on each node. Therefore all problems with
replication factor > 1 also apply to reference tables. As a solution we will not allow
foreign keys on reference tables. It is not possible to define a foreign key from, to, or
between reference tables.
2016-12-28 10:58:26 +03:00
Murat Tuncer 2f76b4be99 Add error hint to failing modify query 2016-12-23 19:43:55 +03:00
Marco Slot 6cbc1945f9 Enable transaction recovery in connection API 2016-12-23 16:14:29 +01:00
Marco Slot 92c7567008 Convert worker_transactions to new connection API 2016-12-23 16:14:29 +01:00
Marco Slot 00d55ad957 Add a wrapper for PQsendQuery 2016-12-23 16:14:29 +01:00
Marco Slot 87c62d598e Connectionapify SendCommandListToWorkerInSingleTransaction 2016-12-23 16:14:29 +01:00
Burak Yucesoy 0851fd2f0b GRANT SELECT access for metadata tables to public
Previously, we errored out if a non-privileged user tried to run a SELECT query on some
metadata tables. It seems that we already GRANT SELECT access to some metadata tables but
not others. With this change, we GRANT SELECT access to all existing Citus metadata tables.
2016-12-23 16:32:47 +03:00
Eren Basak 31af40cc26 Handle MX tables on workers during drop table commands 2016-12-23 15:43:32 +03:00
Eren Basak bed2e353db Propagate `mark_tables_colocated` changes in `pg_dist_partition` table to metadata workers. 2016-12-23 15:43:32 +03:00
Eren Basak 71d73ec5ff Propagate DDL commands to metadata workers for MX tables 2016-12-23 15:43:32 +03:00
Eren Basak 048fddf4da Propagate MX table and shard metadata on `create_distributed_table` call 2016-12-23 15:43:32 +03:00
Eren Basak 61a1e487d0 Mark hash distributed tables with replication factor = 1 as streaming replicated tables (repmodel=s).
This works only with the `create_distributed_table` call.
2016-12-23 15:43:31 +03:00
Marco Slot 11031bcf55 Enable evaluation of stable functions in INSERT..SELECT 2016-12-23 12:47:21 +01:00
Marco Slot d745d7bf70 Add explicit RelationShards mapping to tasks 2016-12-23 10:23:43 +01:00
Marco Slot 6852f8a951 Add shard locking UDFs 2016-12-22 11:04:34 +01:00
Burak Yücesoy 501a2ecead Add get_distribution_value_shardid UDF (#1048)
* Add get_distribution_value_shardid UDF

With this UDF users can now map a given distribution value to a shard id. We mostly hide
shard ids from users to prevent unnecessary complexity, but some power users might need
to know which entry/value is stored in which shard for maintenance purposes.

The signature of this UDF is as follows:

bigint get_distribution_value_shardid(table_name regclass, distribution_value anyelement)
2016-12-22 12:17:08 +03:00
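Based on the signature above, a usage sketch might look like this (table name and value are hypothetical):

    -- which shard would hold rows with distribution value 42?
    SELECT get_distribution_value_shardid('events', 42);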
Onder Kalaci 9f0bd4cb36 Reference Table Support - Phase 1
With this commit, we implemented some basic features of reference tables.

To start with, a reference table is
  * a distributed table without a distribution column defined on it
  * single sharded
  * and its shard is replicated to all nodes

Reference tables follow the same code path as single sharded
tables. Thus, broadcast JOINs are applicable to reference tables.
But, since the table is replicated to all nodes, table fetching is
not required any more.

Reference tables support uniqueness constraints on any column.

Reference tables can be used in INSERT INTO .. SELECT queries with
the following rules:
  * If a reference table is in the SELECT part of the query, it is
    safe to join it with another reference table and/or hash partitioned
    tables.
  * If a reference table is in the INSERT part of the query, all
    other participating tables should be reference tables.

Reference tables follow the regular co-location structure. Since
all reference tables are single sharded and replicated to all nodes,
they are always co-located with each other.

Queries involving only reference tables always follow the router planner
and executor.

Reference tables can have composite typed columns and there is no need
to create/define the necessary support functions.

All modification queries, master_* UDFs, EXPLAIN, DDLs, TRUNCATE,
sequences, transactions, COPY, and schema support work on reference
tables as expected. Plus, all the prerequisites associated with
distribution columns no longer apply.
2016-12-20 14:09:35 +02:00
Eren Basak 296e0bd33a Add citus.node_connection_timeout GUC 2016-12-20 14:11:37 +03:00
Marco Slot dd094bc372 Run copy commands in worker_merge_files_into_table as superuser 2016-12-20 10:15:42 +01:00
Marco Slot 42ff472721 Set user as pg_merge_job_* schema owner 2016-12-20 10:15:42 +01:00
Murat Tuncer c3a60bff70 Make router planner active at all times
We used to disable the router planner and executor
when the task executor is set to task-tracker.

This change enables router planning and execution
at all times regardless of the task execution mode.

We are introducing a hidden flag, enable_router_execution,
to enable/disable router execution. Its default value is
true. Users may disable router planning by setting it to false.
2016-12-20 11:24:01 +03:00
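For illustration, the flag can be toggled like any other setting; assuming it is exposed under the citus prefix, a session-level sketch is:

    SET citus.enable_router_execution TO off;   -- disable router execution
    RESET citus.enable_router_execution;        -- back to the default (true)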
Jason Petersen 6f95875191 Add targeted VACUUM/ANALYZE support
Adds support for VACUUM and ANALYZE commands which target a specific
distributed table. After grabbing the appropriate locks, this
implementation sends VACUUM commands to each placement (using one
connection per placement). These commands are sent in parallel, so
users with large tables will benefit from sharding. Except for VERBOSE,
all VACUUM and ANALYZE options are supported, including the explicit
column list used by ANALYZE.

As with many of our utility commands, the local command also runs. In
the VACUUM/ANALYZE case, the local command is executed before any
remote propagation. Because error handling is managed after local
processing, this can result in a VACUUM completing locally but erroring
out when distributed processing commences: a minor technicality in all
cases, as there isn't really much reason to ever roll back a VACUUM (an
impossibility in any case, as VACUUM cannot run within a transaction).

Remote propagation of targeted VACUUM/ANALYZE is controlled by the
enable_ddl_propagation setting; warnings are emitted if such a command
is attempted when DDL propagation is disabled. Unqualified VACUUM or
ANALYZE is not handled, but a warning message informs the user of this.

Implementation note: this commit adds a "BARE" value to
MultiShardCommitProtocol. When active, no BEGIN command is ever sent to
remote nodes, useful for commands such as VACUUM/ANALYZE which must not
run in a transaction block. This value is not user-facing and is reset
at transaction end.
2016-12-16 16:59:06 -07:00
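For illustration (hypothetical distributed table github_events), the targeted forms described above are plain PostgreSQL commands:

    VACUUM github_events;                        -- sent to each placement in parallel
    ANALYZE github_events (repo_id, created_at); -- explicit column list is supported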
Metin Doslu 20b8f1feeb Refactor distribution column type check for colocation 2016-12-16 15:24:45 +02:00
Metin Doslu e2d0bd38f2 Don't allow tables with different replication models to be colocated 2016-12-16 15:23:49 +02:00
Metin Doslu 86cca54857 Add colocate_with option to create_distributed_table()
With this commit, we support three versions of colocate_with: (i) default, (ii) none,
and (iii) a specific table name.
2016-12-16 14:53:35 +02:00
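A usage sketch of the three variants (table and column names are hypothetical):

    -- default co-location (same shard count, replication factor, column type)
    SELECT create_distributed_table('orders', 'customer_id');
    -- opt out of co-location
    SELECT create_distributed_table('logs', 'customer_id', colocate_with => 'none');
    -- co-locate with a specific existing table
    SELECT create_distributed_table('payments', 'customer_id', colocate_with => 'orders');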
Metin Doslu edbedbd744 Move colocation related functions to colocation_utils.c 2016-12-16 14:52:40 +02:00
Marco Slot 5714be0da5 Expose the column_to_column_name UDF to make partkey in pg_dist_partition human-readable 2016-12-14 10:46:33 +01:00
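For illustration, the UDF turns the serialized partkey into a readable column name; a sketch assuming the two-argument form (relation, partkey text):

    SELECT logicalrelid, column_to_column_name(logicalrelid, partkey)
    FROM pg_dist_partition;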
Eren Basak afbb5ffb31 Add stop_metadata_sync_to_node UDF 2016-12-14 10:53:12 +03:00
Eren Basak b94647c3bc Propagate CREATE SCHEMA commands with the correct AUTHORIZATION clause in start_metadata_sync_to_node 2016-12-14 10:53:12 +03:00
Eren Basak fb08093b00 Make start_metadata_sync_to_node UDF to propagate foreign-key constraints 2016-12-14 10:53:12 +03:00
Eren Basak 5e96e4f60e Make truncate triggers propagated on start_metadata_sync_to_node call 2016-12-14 10:53:10 +03:00
Eren Basak 4fd086f0af Prevent Transactions in start_metadata_sync_to_node 2016-12-13 10:48:03 +03:00
Eren Basak 9eff968d1f Add start_metadata_sync_to_node UDF
This change adds the `start_metadata_sync_to_node` UDF, which copies the metadata about nodes
and MX tables from the master to the specified worker, sets its local group ID, and marks its
hasmetadata column as true to allow it to receive future DDL changes.
2016-12-13 10:48:03 +03:00
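A usage sketch (worker name and port hypothetical):

    SELECT start_metadata_sync_to_node('worker-101.example.com', 5432);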
Andres Freund 80b34a5d6b Integrate router executor into transaction management framework.
One less place managing remote transactions. It also makes it fairly
easy to use 2PC for certain modifications (e.g. reference tables). Just
issue a CoordinatedTransactionUse2PC(). If every placement failure
should cause the whole transaction to abort, additionally mark the
relevant transactions as critical.
2016-12-12 15:18:12 -08:00
Andres Freund fa5e202403 Convert multi_shard_transaction.[ch] to new framework. 2016-12-12 15:18:12 -08:00
Andres Freund fc298ec095 Coordinated remote transaction management. 2016-12-12 15:18:12 -08:00
Andres Freund 6eeb43af15 Add PQgetResult() wrapper handling interrupts.
This makes it possible to implement cancelling queries blocked on
communication with remote nodes.
2016-12-12 15:18:12 -08:00
Andres Freund 7434fcc6df Make prepared transactions available if not configured. 2016-12-08 19:57:22 -08:00
Burak Yucesoy 8d7cd4d746 Add Foreign Key Support to ALTER TABLE commands
With this PR, we add foreign key support to ALTER TABLE commands. For now,
we only support foreign constraint creation via an ALTER TABLE query if it
is the only subcommand in the ALTER TABLE subcommand list.

We also only allow foreign key creation if the replication factor is 1.
2016-12-08 15:03:25 +02:00
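As a hedged sketch, the now-supported form is an ALTER TABLE whose only subcommand adds a foreign key (table and column names are hypothetical; both tables distributed and co-located on customer_id, which is the primary key of customers, with replication factor 1):

    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (customer_id);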
Andres Freund 2374905c89 Move multi_client_executor.[ch] on top of connection_management.[ch].
That way connections can be automatically closed after errors and such,
and the connection management infrastructure gets wider testing.  It
also fixes a few issues around connection string building.
2016-12-07 11:44:24 -08:00
Andres Freund a77cf36778 Use connection_management.c from within connection_cache.c.
This is a temporary step towards removing connection_cache.c.
2016-12-07 11:44:24 -08:00
Andres Freund 3505d431cd Add initial helpers to make interactions with MultiConnection et al. easier.
This includes basic infrastructure for logging of commands sent to
remote/worker nodes. Note that this has no effect as of yet, since no
callers are converted to the new infrastructure.
2016-12-07 11:44:24 -08:00
Andres Freund 3223b3c92d Centralized Connection Lifetime Management.
Connections are tracked and released by integrating into postgres'
transaction handling. That allows us to use connections without having
to resort to disabling interrupts or using PG_TRY/CATCH blocks
to avoid leaking connections.

This is intended to eventually replace multi_client_executor.c and
connection_cache.c, and to provide the basis of a centralized
transaction management.

The newly introduced transaction hook should, in the future, be the only
one in citus, to allow for proper ordering between operations.  For now
this central handler is responsible for releasing connections and
resetting XactModificationLevel after a transaction.
2016-12-07 11:43:18 -08:00
Andres Freund 883af02b54 Add some basic helpers to make use of dynahash hashtables easier. 2016-12-06 14:15:36 -08:00
Marco Slot 3d09a2e5c2 Use READ_UINT64_FIELD for placement ID in ReadShardPlacement 2016-12-05 17:22:23 +01:00
Marco Slot 172bb457e6 Take shard metadata lock in master_append_table_to_shard 2016-12-02 15:56:30 +01:00
Eren Basak fb88b167a7 Propagate node add/remove to the nodes with hasmetadata=true
This change propagates the changes done by `master_add_node` and `master_remove_node`
to the workers that contain metadata.
2016-12-02 14:43:32 +03:00
Brian Cloutier a4096c9f45 Remove dead code: ResponsiveWorkerNodeList 2016-12-02 13:14:11 +03:00
Onder Kalaci df974e15b8 Bugfix for deparsing INSERT..SELECT queries which involve constant values
This commit fixes a bug when the SELECT target list includes a constant
value.

Previous behaviour of target list re-ordering:
  * Iterate over the INSERT target list
    * If it includes a Var, find the corresponding SELECT entry
      and update its resno accordingly
    * If it does not include a Var (which we only considered to be
      DEFAULTs), generate a new SELECT target entry
  * If the processed target entry count in SELECT target list is less
    than the original SELECT target list (GROUP BY elements not included in
    the SELECT target entry), add them in the SELECT target list and
    update the resnos accordingly.
     * However, this step was adding the CONST SELECT target entries
       twice. The reason is that when CONST target list entries appear in the
       SELECT target list, the INSERT target list doesn't include a Var. Instead,
       it includes a CONST, as it does for DEFAULTs.

New behaviour of target list re-ordering:
  * Iterate over the INSERT target list
    * If it includes a Var, find the corresponding SELECT entry
      and update its resno accordingly
    * If it does not include a Var (which we consider to be
      DEFAULTs and CONSTs on the SELECT), generate a new SELECT
      target entry
  * If any target entries remain on the SELECT target list which are resjunk
    (GROUP BY elements not included in the SELECT target entry), keep them
    in the SELECT target list by updating the resnos.
2016-12-01 10:41:56 +02:00
Murat Tuncer 45762006f3 Add support for filters
Ensures filter clauses are stripped from the master query and pushed
down to worker queries.
2016-12-01 08:53:46 +03:00
Sumedh Pathak 0a0d4784b9 Change DDL error message to say "unsupported" instead of "supported" 2016-11-26 10:30:09 +01:00
Murat Tuncer b5c1ecb684 Fix failures during pg_upgrade
- fix error in CitusHasBeenLoaded()
- allow creation of pg_catalog tables during upgrade
2016-11-11 17:22:45 -08:00
Marco Slot b566c4815c Pass down the correct type for null parameters 2016-11-11 07:14:08 +01:00
Metin Doslu a0c92b38cb Use AccessShareLock on the source table while creating a colocated table
While creating a colocated table, we don't want the source table to be dropped.
However, using a ShareLock blocks DML statements on the source table, and
using AccessShareLock is enough to prevent DROP. Therefore, we just loosened
the lock to AccessShareLock.
2016-11-10 09:17:05 -08:00
Eren Basak 444f14d546 Add Column Definition List for Output Columns for master_add_node
This change allows seeing the names of the columns returned by `master_add_node`
when using `SELECT * FROM master_add_node(...)`, by specifying output
columns in the UDF definition.
2016-11-07 14:08:58 -08:00
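For illustration (worker name and port hypothetical), the named output columns make the following readable:

    SELECT * FROM master_add_node('worker-102.example.com', 5432);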
Marco Slot c157c3b419 Disallow SendCommandListToWorkerInSingleTransaction when modifications have occurred 2016-11-02 12:26:56 +01:00
Marco Slot f6b3af7a49 Use co-located shard ID in multi_shard_transaction 2016-11-02 11:01:19 +01:00
Samay Sharma 82e5faa190 Avoid error during CREATE INDEX IF NOT EXISTS
Previously, we threw an error when we ran CREATE INDEX IF NOT EXISTS
with an already existing index. This change enables expected behavior by
checking if the statement has IF NOT EXISTS before throwing the error.
We also ensure that we don't execute the command on the workers, if an
index already exists on the master.
2016-11-01 14:51:19 -07:00
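A minimal illustration (index and table names hypothetical); the statement is now a no-op when the index already exists on the master:

    CREATE INDEX IF NOT EXISTS orders_customer_idx ON orders (customer_id);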
Burak Yucesoy b30b339f91 Fix typo in error message 2016-11-01 16:58:27 +02:00
Burak Yucesoy 6246702a4c Change error message we displayed for foreign constraints if RF > 1
At the moment, we do not support foreign constraints if the replication factor is greater
than 1. However, foreign constraints can be used in the cloud with the high availability
option. Therefore we do not want to create the impression that foreign constraints with
high availability are not supported at all. We call users to action with this error
message.
2016-11-01 15:47:19 +02:00
Önder Kalacı 83e1719541 Always CASCADE while dropping a shard 2016-11-01 10:16:34 +01:00
Brian Cloutier 50805f1e5c Copy raw_parse_tree before using it
Address citusdata/citus#922.

Fixes a segfault in PG's installcheck caused by our reuse of
raw_parse_tree when handling EXPLAIN EXECUTE.
2016-10-27 18:25:49 +03:00
Onder Kalaci a43e3bad56 Improve error semantics for INSERT..SELECT
With this commit, we error out if a worker query cannot be executed
on all placements of a target insert shard interval.
2016-10-27 14:09:05 +03:00
Metin Doslu c6f5cabbe3 Error on different shard placement count
In ErrorIfShardPlacementsNotColocated(), while checking if shards are colocated,
error out if matching shard intervals have different numbers of shard placements.
2016-10-26 18:46:05 +03:00
Onder Kalaci 9cd549f21f Add stub for Copy shard placement
This commit does not change the current behaviour, but helps to implement
an enterprise feature without any version changes.
2016-10-26 17:57:55 +03:00
Metin Doslu 4e555880b7 Add mark_tables_colocated() to update colocation groups
Added a new UDF, mark_tables_colocated(), to colocate tables with the same
configuration (shard count, shard replication count and distribution column type).
2016-10-26 17:29:03 +03:00
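A usage sketch; the argument shape (a source table plus an array of target tables) is an assumption here, and the table names are hypothetical:

    SELECT mark_tables_colocated('orders', ARRAY['payments', 'shipments']::regclass[]);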
Marco Slot 275378aa45 Re-acquire metadata locks in RouterExecutorStart 2016-10-26 14:34:59 +02:00
Brian Cloutier 1e6d1ef67e Fix segfault during EXPLAIN EXECUTE
Fix citusdata/citus#886

The way postgres' explain hook is designed means that our hook is never
called during EXPLAIN EXECUTE. So, we special-case EXPLAIN EXECUTE by
catching it in the utility hook.  We then replace the EXECUTE with the
original query and pass it back to Citus.
2016-10-26 15:18:42 +03:00
Burak Yucesoy fc2fea839b Only repair given shard
Previously, when a repair was requested on a shard, we also repaired all co-located shards
of the given shard, which could repair already healthy shards. With this change, we
only repair the given shard.
2016-10-26 14:36:37 +03:00
Brian Cloutier 80c8cfeabe Don't add a raw 32-bit int to tuples in create_distributed_table 2016-10-26 14:02:42 +03:00
Andres Freund fcd150c7c8 Invalidate relcache after pg_dist_shard_placement changes.
This forces prepared statements to be re-planned after changes of the
placement metadata. There are some locking issues remaining, but that's
a separate task.

Also add regression tests verifying that invalidations take effect on
prepared statements.
2016-10-26 03:36:35 -07:00
Onder Kalaci 1673ea937c Feature: INSERT INTO ... SELECT
This commit adds INSERT INTO ... SELECT feature for distributed tables.

We implement INSERT INTO ... SELECT by pushing down the SELECT to
each shard. To compute that we use the router planner, by adding
an "uninstantiated" constraint that the partition column be equal to a
certain value. standard_planner() distributes that constraint to all
the tables where it knows how to push the restriction safely. An example
is tables that are connected via equi joins.

The router planner then iterates over the target table's shards;
for each, we replace the "uninstantiated" restriction with one that
PruneShardList() handles. We do so by replacing the partitioning qual
parameter added in multi_planner() with the current shard's
actual boundary values. Also, add the current shard's boundary values to the
top level subquery to ensure that even if the partitioning qual is
not distributed to all the tables, we never run the queries on the shards
that don't match with the current shard boundaries. Finally, perform the
normal shard pruning to decide on whether to push the query to the
current shard or not.

We do not support certain SQL constructs in the subquery; these are described/commented
in ErrorIfInsertSelectQueryNotSupported().

We also added some locking on the router executor. When an INSERT/SELECT command
runs on a distributed table with replication factor >1, we need to ensure that
it sees the same result on each placement of a shard. So the router executor now
takes exclusive locks on the shards from which the SELECT
in an INSERT/SELECT reads, in order to prevent concurrent changes. This is not a
very optimal solution, but it's simple and correct. The
citus.all_modifications_commutative can be used to avoid aggressive locking.
An INSERT/SELECT whose filters are known to exclude any ongoing writes can be
marked as commutative. See RequiresConsistentSnapshot() for the details.

We also moved the decision of whether the multiPlan should be executed on
the router executor or not to the planning phase. This allowed us to
integrate multi-task router executor tasks into the router executor smoothly.
2016-10-26 10:01:00 +03:00
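A hedged sketch of the feature, assuming hypothetical tables distributed and co-located on tenant_id:

    INSERT INTO events_archive (tenant_id, event_id, payload)
    SELECT tenant_id, event_id, payload
    FROM events
    WHERE event_type = 'click';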
Onder Kalaci e0d83d65af Add ability to reorder target list for INSERT/SELECT queries
The necessity for this functionality comes from the fact that ruleutils.c is not supposed to be
used on "rewritten" queries (i.e. ones that have been passed through QueryRewrite()).
Query rewriting is the process in which views and such are expanded
and INSERT/UPDATE target lists are reordered to match the physical order,
defaults, etc. For the details of reordering, see transformInsertRow().
2016-10-26 10:00:03 +03:00
Jason Petersen 73f5b8b05f
Move all funcs to pg_catalog, add test to verify
We'd been relying on a single SET search_path command in an earlier
script, but a subsequent script ran RESET search_path, causing any further
bare functions to be created in the first schema on the search path.

However, starting with an older extension version and executing ALTER
scripts one at a time DOES avoid putting any functions in the public
namespace, so I wrote an upgrade script resilient to that, especially
because PostgreSQL 9.5 will error out if a function is already in the
schema it's being moved to.
2016-10-25 12:45:53 -06:00
Brian Cloutier c6b74b023f Treat nodePort as the 8byte number it is 2016-10-25 16:31:48 +03:00
Brian Cloutier 2e96f6ab27 Fix crash when upgrading to Citus 6
Between restart (running the new code) and ALTER EXTENSION citus
UPGRADE there was an inconsistency where we assumed that
pg_dist_partition had the repmodel column set. Now we give it a default
value if the column doesn't exist yet.
2016-10-24 15:18:29 +03:00
Marco Slot 271b20a23e Parallelise DDL commands 2016-10-24 12:39:08 +02:00
Burak Yucesoy 5a03acf2bf Foreign Constraint Support for create_distributed_table and shard move
With this change, we now push down foreign key constraints created during CREATE TABLE
statements. We also start to send foreign constraints during shard moves along with
other DDL statements.
2016-10-21 15:38:55 +03:00
Marco Slot 02d2b86e68 Re-disable master evaluation for SELECT 2016-10-21 10:51:47 +02:00
Metin Doslu 405335fcee Add create_reference_table()
create_reference_table() creates a hash distributed table with a shard count
equal to 1 and a replication factor equal to the shard_replication_factor
configuration value.
2016-10-20 15:29:30 +03:00
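For illustration (table hypothetical):

    CREATE TABLE countries (code char(2), name text);
    -- one shard, replicated per the shard_replication_factor setting described above
    SELECT create_reference_table('countries');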
Metin Doslu d3e7d9dc8d Final refactoring 2016-10-20 11:29:11 +03:00
Metin Doslu 58ac477ffb Change return type of BuildDistributionKeyFromColumnName() to Var *
BuildDistributionKeyFromColumnName() always returns a Var pointer, so there is
no reason to return a Node pointer instead of a Var pointer.
2016-10-20 10:59:31 +03:00
Metin Doslu 161093908e Convert colocationid to uint32 2016-10-20 10:59:31 +03:00
Metin Doslu 8334d853c0 Add local function GetNextShardId() 2016-10-20 10:59:31 +03:00
Metin Doslu 40bdafa8d1 Add create_distributed_table()
create_distributed_table() creates a hash distributed table with default values
of shard count and shard replication factor.
2016-10-20 10:58:25 +03:00
Metin Doslu d04f4f5935 Add guc variable for shard count 2016-10-19 10:44:50 +03:00
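A sketch combining this GUC with the create_distributed_table() UDF listed just above (table hypothetical; the GUC names assume the citus prefix):

    SET citus.shard_count = 64;
    SET citus.shard_replication_factor = 1;
    CREATE TABLE events (tenant_id bigint, event_id bigint, payload jsonb);
    SELECT create_distributed_table('events', 'tenant_id');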
Marco Slot 65f6d7c02a Follow consistent execution order in parallel commands 2016-10-19 08:33:08 +02:00
Marco Slot a497e7178c Parallelise master_modify_multiple_shards 2016-10-19 08:33:08 +02:00
Marco Slot 9d98acfb6d Move requiresMasterEvaluation from Task to Job 2016-10-19 08:23:06 +02:00
Marco Slot 213d8419c6 Refactor and redocument executor shard lock code 2016-10-19 08:13:35 +02:00
Andres Freund ac14b2edbc
Support PostgreSQL 9.6
Adds support for PostgreSQL 9.6 by copying in the requisite ruleutils
file and refactoring the out/readfuncs code to flexibly support the
old-style copy/pasted out/readfuncs (prior to 9.6) or use extensible
node APIs (in 9.6 and higher).

Most version-specific code within this change is only needed to set new
fields in the AggRef nodes we build for aggregations. Version-specific
test output files were added in certain cases, though in most they were
not necessary. Each such file begins by e.g. printing the major version
in order to clarify its purpose.

The comment atop citus_nodes.h details how to add support for new nodes
for when that becomes necessary.
2016-10-18 16:23:55 -06:00
Murat Tuncer b453f6c7ab Add master_run_on_worker UDF 2016-10-18 17:59:54 +03:00
Eren Basak cee7b54e7c Add worker transaction and transaction recovery infrastructure 2016-10-18 14:18:14 +03:00
Eren Basak f3ede37c9f Add hasmetadata column to pg_dist_node 2016-10-17 11:52:18 +03:00
Eren Basak c7bf2021fa Add metadata infrastructure for pg_dist_local_group table 2016-10-17 11:52:18 +03:00
Eren Basak 8f477d18f1 Add pg_dist_local_group Metadata Table
This change adds the pg_dist_local_group metadata table, which indicates
the group id of the current node. It is expected that this table contains
one and only one row, which only contains the group id of the node as an
integer.
2016-10-14 11:41:14 +03:00
Brian Cloutier 6c3d79b4e7 Drop shardalias 2016-10-14 11:03:26 +03:00
Burak Yucesoy 6668d19a3b Make shard transfer functions co-location aware
With this change, the master_copy_shard_placement and master_move_shard_placement functions
start to copy/move the given shard along with its co-located shards.
2016-10-13 18:16:40 +03:00
Metin Doslu d03a2af778 Add HAVING support
This commit completes HAVING support in Citus by adding HAVING support for the
real-time and task-tracker executors. Multiple tests are added to the regression
tests to cover newly supported queries with HAVING.
2016-10-13 15:47:53 +03:00
Eren Basak ed3af403fd Add Metadata Snapshot Infrastructure
This change adds the required metadata snapshot infrastructure from the MX
codebase into Citus, mainly the metadata_sync.c file and the master_metadata_snapshot UDF.
2016-10-13 10:40:14 +03:00
Marco Slot 33b7723530 Use UpdateShardPlacementState where appropriate 2016-10-07 11:59:20 -07:00