Commit Graph

393 Commits (d6d88efc2db2ea48ed2f46bdac2f04076acd45d2)

Author SHA1 Message Date
Marco Slot 8edba5f309 Honour enable_ddl_propagation in truncate trigger 2017-04-29 03:32:52 +02:00
Brian Cloutier 22e7aa9a4f Fix crash in isolation tests
- There was a crash when the table a shardid belonged to changed during
  a session. Instead of crashing (a failed assert), we now throw an error
- Update the isolation test which was crashing to no longer exercise
  that code path
- Add a regression test to check that the error is thrown
2017-04-29 04:25:26 +03:00
Önder Kalacı ad5cd326a4 Subquery pushdown - main branch (#1323)
* Enabling physical planner for subquery pushdown changes

This commit applies the logic that exists in INSERT .. SELECT
planning to the subquery pushdown changes.

The main algorithm is as follows:
   - pick an anchor relation (i.e., target relation)
   - per each target shard interval
       - add the target shard interval's shard range
         as a restriction to the relations (if all relations
         are joined on the partition keys)
       - check whether the query is router plannable per
         target shard interval
       - if router plannable, create a task

* Add union support within the JOINS

This commit adds support for UNION/UNION ALL subqueries that are
in the following form:

     .... (Q1 UNION Q2 UNION ...) as union_query JOIN (QN) ...

In other words, we currently do NOT support queries of the following
form, where the union query is not JOINed with other
relations/subqueries:

     .... (Q1 UNION Q2 UNION ...) as union_query ....

* Subquery pushdown planner uses original query

With this commit, we change the input to the logical planner for
subquery pushdown. Before this commit, the planner was relying
on the query tree that is transformed by the PostgreSQL planner.
After this commit, the planner uses the original query. The main
motivation behind this change is to simplify deparsing of
subqueries.

* Enable top level subquery join queries

This work enables
- Top level subquery joins
- Joins between subqueries and relations
- Joins involving more than 2 range table entries

A new regression test file is added to reflect enabled test cases

* Add top level union support

This commit adds support for UNION/UNION ALL subqueries that are
in the following form:

     .... (Q1 UNION Q2 UNION ...) as union_query ....

In other words, Citus now allows top-level unions to be
wrapped in aggregation queries and/or simple projection
queries that only select some fields from the lower-level
queries.
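
For instance, a query of the following shape is now supported, with the
union wrapped in an outer aggregation (table and column names are
illustrative assumptions):

     SELECT count(*)
     FROM (
        SELECT user_id FROM events_2016
        UNION
        SELECT user_id FROM events_2017
     ) AS union_query;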

* Disallow subqueries without a relation in the range table list for subquery pushdown

This commit disallows subqueries without a relation in the range table
list. This restriction is only applied for subquery pushdown. In other words,
we do not add this limitation for single table re-partition subqueries.

The reasoning behind this limitation is that if we allow pushing down
such queries, the result would include (shardCount * expectedResults)
rows, whereas in a non-distributed setting the result would be
(expectedResult) only.

* Disallow subqueries without a relation in the range table list for INSERT .. SELECT

This commit disallows subqueries without a relation in the range table
list. This restriction is only applied for INSERT .. SELECT queries.

The reasoning behind this limitation is that if we allow pushing down
such queries, the result would include (shardCount * expectedResults)
rows, whereas in a non-distributed setting the result would be
(expectedResult) only.

* Change behaviour of subquery pushdown flag (#1315)

This commit changes the behaviour of the citus.subquery_pushdown flag.
Before this commit, the flag was used to enable subquery pushdown logic. But,
with this commit, that behaviour is enabled by default. In other words, the
flag is now effectively a no-op. We prefer to keep the flag since we don't want
to break backward compatibility. Also, we may consider using the flag for other
purposes in future commits.

* Require subquery_pushdown when limit is used in subquery

Using LIMIT in subqueries may return incorrect
results. Therefore we allow limits in subqueries only
if the user has explicitly set the subquery_pushdown flag.
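
A hedged usage sketch (table and column names are illustrative
assumptions); the LIMIT is applied per shard, which is why the result
is approximate and the flag is opt-in:

     SET citus.subquery_pushdown TO on;
     SELECT avg(cnt)
     FROM (
        SELECT user_id, count(*) AS cnt
        FROM events
        GROUP BY user_id
        LIMIT 100
     ) AS top_users;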

* Evaluate expressions on the LIMIT clause (#1333)

Since subquery pushdown uses the original query, the LIMIT and OFFSET
clauses are not evaluated. However, the logical optimizer expects these
expressions to already be evaluated by the standard planner. This commit
manually evaluates the functions in the logical planner for subquery pushdown.

* Better format subquery regression tests (#1340)

* Style fix for subquery pushdown regression tests

With this commit we applied a more consistent style to the
regression tests we've added in
  - multi_subquery_union.sql
  - multi_subquery_complex_queries.sql
  - multi_subquery_behavioral_analytics.sql

* Enable the tests that are temporarily commented

This commit enables some of the regression tests that were commented
out until all the development is done.

* Fix merge conflicts (#1347)

 - Update regression tests to meet the changes in the regression
   test output.
 - Replace Ifs with Asserts given that the check is already done
 - Update shard pruning outputs

* Add view regression tests for increased subquery coverage (#1348)

- joins between views and tables
- joins between views
- union/union all queries involving views
- views with limit
- explain queries with view

* Improve btree operators for the subquery tests

This commit adds the missing comparison for the subquery composite key
btree comparator.
2017-04-29 04:09:48 +03:00
Marco Slot 0b579d027a Check whether relation ID exists in citus_relation_size 2017-04-29 01:39:39 +02:00
Andres Freund d399f395f7 Faster shard pruning.
So far citus used postgres' predicate proofing logic for shard
pruning, except for INSERT and COPY which were already optimized for
speed.  That turns out to be too slow:
* Shard pruning for SELECTs is currently O(#shards), because
  PruneShardList calls predicate_refuted_by() for every
  shard. Obviously using an O(N) type algorithm for general pruning
  isn't good.
* predicate_refuted_by() is quite expensive in its own right. That's
  primarily because it's optimized for doing a single refutation
  proof, rather than performing the same proof over and over.
* predicate_refuted_by() does not keep persistent state (see the
  previous point) for function calls, which means that a lot of
  syscache lookups will be performed. That's particularly bad if the
  partitioning key is a composite key, because without a persistent
  FunctionCallInfo record_cmp() has to repeatedly look up the type
  definition of the composite key. That's quite expensive.

Thus replace this with custom code that works in two phases:
1) Search restrictions for constraints that can be pruned upon
2) Use those restrictions to search for matching shards in the most
   efficient manner available:
   a) Binary search / Hash Lookup in case of hash partitioned tables
   b) Binary search for equal clauses in case of range or append
      tables without overlapping shards.
   c) Binary search for inequality clauses, searching for both lower
      and upper boundaries, again in case of range or append
      tables without overlapping shards.
   d) Exhaustive search testing each ShardInterval

My measurements suggest that we are considerably, often orders of
magnitude, faster than the previous solution, even if we have to fall
back to exhaustive pruning.
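
As an illustrative sketch (table and column names are assumptions), an
equality filter on the distribution column of a hash-partitioned table
can now be pruned via a hash lookup, and range filters on
non-overlapping range/append shards via binary search:

     SELECT * FROM orders WHERE customer_id = 42;
     SELECT * FROM orders_by_date
     WHERE order_date >= '2017-01-01' AND order_date < '2017-02-01';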
2017-04-28 14:40:41 -07:00
Andres Freund 6bd2e3ed30 Add DistTableCacheEntry->hasOverlappingShardInterval.
This determines whether it's possible to perform binary search on
sortedShardIntervalArray or not.  If e.g. two shards have overlapping
ranges, that'd be prohibitive.

That'll be useful in a later commit introducing faster shard pruning.
2017-04-28 14:40:38 -07:00
Andres Freund 105483ec56 Add DistTableCacheEntry->shardValueCompareFunction.
That's useful when comparing values a hash-partitioned table is
filtered by.  The existing shardIntervalCompareFunction is about
comparing hashed values, not unhashed ones.

The added btree opclass function is so we can get a comparator
back. This should be changed much more widely, but is not necessary so
far.
2017-04-28 14:40:38 -07:00
Metin Doslu b6659bec22 Send explain queries with savepoints
With this commit, we started to send EXPLAIN queries within a savepoint. After
running the EXPLAIN query, we roll back to the savepoint. This saves us from the
side effects of EXPLAIN ANALYZE on DML queries.
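
Conceptually, the worker-side interaction looks roughly like the sketch
below (the savepoint name, shard table and query are illustrative
assumptions, not the exact commands Citus sends):

     BEGIN;
     SAVEPOINT citus_explain_savepoint;
     EXPLAIN ANALYZE DELETE FROM orders_102010 WHERE o_orderkey = 1;
     ROLLBACK TO SAVEPOINT citus_explain_savepoint;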
2017-04-28 12:13:48 -07:00
Andres Freund 1f93c325fa Some cleanup in multi_subquery test.
Remove trailing whitespace and use of EXPLAIN instead of
EXPLAIN (COSTS OFF).
2017-04-26 11:33:56 -07:00
Andres Freund b0585c7df6 Add back pruning coverage lost in last commit.
Because we can't rely on the debugging message anymore, add a bunch of
explain statements that roughly fulfill the same purpose.
2017-04-26 11:33:56 -07:00
Andres Freund b7dfeb0bec Boring regression test output adjustments.
Soon shard pruning will be optimized so that it no longer generally
works linearly.  Thus we can't print the pruned shard intervals as
currently done.

The current printing of shard ids also prevents us from running tests
in parallel, as otherwise shard ids aren't linearly numbered.
2017-04-26 11:33:56 -07:00
Burak Yucesoy 5de61ebf78 Configure valgrind command line arguments 2017-04-21 16:30:12 +03:00
Burak Yucesoy d6cb88a73a Stabilize test outputs 2017-04-21 16:08:52 +03:00
Eren Basak abc84e6b2b Add support for proper valgrind tests
This change allows valgrind tests (`make check-multi-vg`) to be
run seamlessly without test output errors and timeout problems.
2017-04-21 16:08:52 +03:00
Marco Slot 4ed093970a Support expressions in the partition column in INSERTs 2017-04-21 14:05:52 +02:00
velioglu 24d24db25c Implement ALTER TABLE ADD CONSTRAINT command 2017-04-20 15:02:33 +03:00
velioglu 8cbef819be Log message of across shard queries according to the log level 2017-04-20 12:24:46 +03:00
velioglu 2327b63291 Change native hash function with worker_hash 2017-04-19 22:16:55 +03:00
Jason Petersen 5272c2c44b
Enable distributed ALTER TABLE ... RENAME COLUMN
Pretty straightforward. Had some concerns about locking, but due to the
fact that all distributed operations use either some level of deparsing
or need to enumerate column names, they all block during any concurrent
column renames (due to the AccessExclusive lock).

In addition, I had some misgivings about permitting renames of the
distribution column, but nothing bad comes from just allowing them.

Finally, I tried to trigger any sort of error using prepared statements
and could not trigger any errors not also exhibited by plain PostgreSQL
tables.
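
For example (table and column names are illustrative assumptions), the
following now propagates to all shard placements:

     ALTER TABLE orders RENAME COLUMN o_custkey TO customer_id;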
2017-04-18 22:47:48 -06:00
Marco Slot 3d99cdfcc7 Add basic read-only transaction tests 2017-04-18 11:42:33 +02:00
Marco Slot f838c83809 Remove redundant pg_dist_jobid_seq restarts in tests 2017-04-18 11:42:32 +02:00
Marco Slot 40829c2ba9 Set citus.enable_unique_job_ids in tests with job ID in output 2017-04-18 11:42:32 +02:00
Metin Doslu 4615100da5 Fix table name in prepared statement regression tests 2017-04-17 16:17:30 +02:00
Marco Slot af0e462409 Support UPDATE/DELETE with parameterised partition column qual 2017-04-17 16:17:30 +02:00
Marco Slot 5e58804d44 Support query parameters in combination with function evaluation 2017-04-17 15:40:55 +02:00
Burak Yucesoy e9095e62ec Decouple reference table replication
With this change we add an option to add a node without replicating all reference
tables to that node. If a node is added with this option, we mark the node as
inactive and no queries will be sent to that node.

We also added two new UDFs:
 - master_activate_node(host, port):
    - marks the node as active and replicates all reference tables to that node
 - master_add_inactive_node(host, port):
    - only adds the node to pg_dist_node
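
A usage sketch (host name and port are illustrative assumptions):

     SELECT master_add_inactive_node('worker-2', 5432);
     -- later, once the node should start serving queries:
     SELECT master_activate_node('worker-2', 5432);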
2017-04-17 13:33:31 +03:00
Burak Yucesoy 7cfcb7d2f8 Error out on parameterized SQL functions
Before this commit, we were erroring out for queries containing parameterized SQL functions
like 'SELECT parameterized_sql_query(value)' as we should; however, we were returning wrong
results for queries like 'SELECT * FROM parameterized_sql_query(value)'. With this commit
we start to error out on such queries too.
2017-04-13 16:36:24 +03:00
Onder Kalaci 1cb6a34ba8 Remove uninstantiated qual logic, use attribute equivalences
In this PR, we aim to deduce whether each RTE_RELATION
is joined with at least one other RTE_RELATION on their partition keys. If each
RTE_RELATION follows the above rule, we can conclude that all RTE_RELATIONs are
joined on their partition keys.

In order to do that, we invented a new equivalence class, namely
AttributeEquivalenceClass. In very simple words, an AttributeEquivalenceClass is
identified by a unique id and consists of a list of AttributeEquivalenceMembers.

Each AttributeEquivalenceMember is designed to identify attributes uniquely within the
whole query. The necessity of this arises because varno attributes are defined within
a single level of a query. Instead, here we want to identify each RTE_RELATION uniquely
and try to find equality among each RTE_RELATION's partition key.

Whenever we find an equality clause A = B, where both A and B originate from
relation attributes (i.e., not random expressions), we create an
AttributeEquivalenceClass to record this knowledge. If we later find another
equivalence B = C, we create another AttributeEquivalenceClass. Finally, we can
apply transitivity rules and generate a new AttributeEquivalenceClass which includes
A, B and C.

Note that equality among the members are identified by the varattno and rteIdentity.

Each equality among RTE_RELATIONs is saved using an AttributeEquivalenceClass where
each member attribute is identified by an AttributeEquivalenceMember. In the final
step, we try to generate a common attribute equivalence class that holds as many
AttributeEquivalenceMembers as possible whose attributes are partition keys.
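
As an illustrative example (table and column names are assumptions),
the two join quals below yield {u.user_id = o.user_id} and
{o.user_id = e.user_id}, which transitivity merges into a single class
covering all three partition keys:

     SELECT count(*)
     FROM users u
     JOIN orders o ON (u.user_id = o.user_id)
     JOIN events e ON (o.user_id = e.user_id);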
2017-04-13 11:51:26 +03:00
velioglu 19d0c66fa5 Change checks with built-in type 2017-04-11 14:41:37 +03:00
velioglu 1fb11c738f Check binary output function of type. 2017-04-10 16:28:09 +03:00
Jason Petersen 8b4620ef16
Use RESET for GUC test, not reconnect
More limited in what it does, better test.
2017-04-04 16:40:17 -06:00
Jason Petersen ef81b21a49
Clean up ErrorIfUnstableCreateOrAlterExtensionStmt
Swaps an Assert in for an ereport, and adds details and hints to the
error message to help users with a possibly confusing scenario.
2017-04-04 15:58:57 -06:00
Burak Yucesoy a09614553f Add enable_version_checks GUC and address feedback 2017-04-04 19:11:13 +03:00
Burak Yucesoy 087d8427e3
Error out if binary citus version does not match installed extension
With this change, we start to error out if the loaded citus binaries do not match
the available major version or the installed citus extension version. In this case
we force the user to restart the server or run ALTER EXTENSION, depending on the
situation.
2017-04-03 17:36:13 -06:00
Jason Petersen 4cdfc3a10f
Address review feedback
Should just about do it.
2017-04-03 11:44:57 -06:00
Jason Petersen cf775c4773
Improve CONCURRENTLY-related error messages
Thought this looked slightly nicer than the default behavior.

Changed preventTransaction to concurrent to be clearer that this code
path presently affects CONCURRENTLY code only.
2017-04-03 11:19:15 -06:00
Jason Petersen d904e96c59
Address MX CONCURRENTLY problems
Adds a non-transactional multi-command method to propagate DDLs to all
MX/metadata-synced nodes.
2017-04-03 11:19:15 -06:00
Jason Petersen 32886e97a3
Add code to set index validity on failure
Coordinator code marks the index as invalid as a base, sets it as valid in a
transactional layer atop that base, then proceeds with worker commands.
If a worker command has problems, the rollback results in an index with
isvalid = false. If everything succeeds, the user sees a valid index.
2017-04-03 11:19:15 -06:00
Jason Petersen dea6c44f75
Remove CONCURRENTLY checks, fix tests
Still pending failure testing, which broke with my recent changes.
2017-04-03 11:19:15 -06:00
Metin Doslu 54a277ff01 Add disable/enable trigger all support 2017-03-29 22:00:14 +03:00
Onder Kalaci 11665dbe3c Fix pushing down wrong queries for INSERT ... SELECT queries
Before this commit, in certain cases the router planner allowed pushing
down JOINs that are not on the partition keys.

With @anarazel's suggestion, we change the logic to use the uninstantiated
parameter. Previously, the planner was traversing the restriction
information and once it found the parameter, it replaced it with
the shard range. With this commit, instead of traversing the restrict
infos, the planner explicitly checks for the equivalence of the relation
partition key with the uninstantiated parameter. If it finds an equivalence,
it adds the restrictions. In this way, we have more control over the
queries that are pushed down.
2017-03-24 11:37:35 +02:00
Jason Petersen 42c799faee
Fix MX tests
Missed some of these. One had a bad DDL statement to begin with (mixed
up column type and column name) and the other was just master/worker order.
2017-03-22 17:21:49 -06:00
Jason Petersen f181b24859
Move worker execution to after master, fix tests
Some tests relied on worker errors even though the local commands were invalid.
Fixed those by ensuring preconditions were met so that the command works
correctly. Otherwise most test changes are related to slight changes
in local/remote error ordering.
2017-03-22 17:21:49 -06:00
Jason Petersen 2cb34406d1
Minor permissions test fix
When running under Enterprise, some of the GRANT commands and whatnot
are propagated. Guarding that section with a call to disable DDL prop.
fixes everything.
2017-03-22 17:07:05 -06:00
Metin Doslu 404e32cdb4
Add basic permission checking tests 2017-03-22 15:25:00 -06:00
Metin Doslu bcff6aa96c
Update regression tests for changing explain output 2017-03-22 15:25:00 -06:00
Murat Tuncer c4734d7d94 Rephrase router modify errors
generic "distributed modifications must target exactly one shard"
message is replaced by more context aware error messages.
2017-03-16 15:09:10 +03:00
velioglu e32aff1a26 Size UDFs implemented
citus_table_size, citus_relation_size and citus_total_relation_size UDFs are implemented.
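
A usage sketch (the table name is an illustrative assumption):

     SELECT citus_table_size('orders');
     SELECT citus_relation_size('orders');
     SELECT citus_total_relation_size('orders');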
2017-03-16 13:50:30 +03:00
Metin Doslu 1f838199f8 Use CustomScan API for query execution
Custom Scan is a node in the planned statement which lets external providers
abstract the data scan, not just for foreign data wrappers but also for regular
relations, so you can benefit from your own caching or hardware optimizations.
This sounds like only an abstraction on the data scan layer, but we can use it
as an abstraction for our distributed queries. The only thing we need to do is
to find the distributable parts of the query, plan for them and replace them with
a Citus Custom Scan. Then, whenever PostgreSQL hits this custom scan node in
its Volcano-style execution, it will call our callback functions, which run the
distributed plan and provide tuples to the upper node as if scanning a regular
relation. This means fewer code changes, fewer bugs and more supported features
for us!

First, in the distributed query planner phase, we create a Custom Scan which
wraps the distributed plan. For the real-time and task-tracker executors, we add
this custom plan under the master query plan. For the router executor, we directly
pass the custom plan because there is no master query. Then, we simply let
the PostgreSQL executor run this plan. When it hits the custom scan node, we
call the related executor parts for the distributed plan, fill the tuple store in
the custom scan and return results to the PostgreSQL executor in Volcano style,
one tuple per XXX_ExecScan() call.

* Modify planner to utilize Custom Scan node.
* Create different scan methods for different executors.
* Use native PostgreSQL Explain for master part of queries.
2017-03-14 12:17:51 +02:00
Andres Freund 52358fe891 Initial temp table removal implementation 2017-03-14 12:09:49 +02:00
Murat Tuncer f657a744d5 Enable router planner for queries on range partitioned tables
Router planner now supports queries using range partitioned
tables. Queries on append partitioned tables are still not
supported.
2017-03-09 16:39:15 +03:00
Brian Cloutier 95936ff481 Remove unused master_get_round_robin_candidate_nodes 2017-03-07 11:51:24 +03:00
Brian Cloutier 807beb7bc0 Remove master_get_local_first_candidate_nodes 2017-03-07 11:50:59 +03:00
Murat Tuncer 72027f2eba Remove default clause from shard DDL when sequences are used 2017-03-01 17:32:48 +03:00
Marco Slot 56d4d375c2 Address review feedback in create_distributed_table data loading 2017-02-28 17:39:45 +01:00
Marco Slot d74fb764b1 Use CitusCopyDestReceiver for regular COPY 2017-02-28 17:24:45 +01:00
Marco Slot d11eca7d4a Load data into distributed table on creation 2017-02-28 17:24:45 +01:00
Burak Velioglu e158c7ae67 Merge branch 'master' into disallow_master_appy_delete_on_hash 2017-02-24 10:40:23 +02:00
Metin Doslu ee425871ee Get reproducible costs between different PostgreSQL versions 2017-02-22 15:40:02 +02:00
Burak Velioglu 49812ddfa0 Disallow master_apply_delete_command on hash distributed table
The delete operation is blocked for any hash-distributed table when master_apply_delete_command is used. The master_modify_multiple_shards command is suggested as a hint.
2017-02-22 11:54:46 +03:00
Andres Freund 9721e80901 Use DEBUG2 instead of DEBUG4 in INSERT SELECT tests & debug message.
During later work the transaction debug output will change (as it will
in postgres 10), which makes it hard to see actual changes in the
INSERT ... SELECT ... test.  Reduce to DEBUG2 after changing a debug
message to that log level.
2017-02-20 12:56:16 +02:00
Eren Basak df9cf346ee Enforce statement based replication on old APIs and non-hash tables
This change ignores the `citus.replication_model` setting and uses
statement-based replication for

- Tables distributed via the old `master_create_distributed_table` function
- Append and range partitioned tables, even if created via
`create_distributed_table` function

This seems like the easiest solution to #1191, without changing the existing
behavior and harming existing users with custom scripts.

This change also prevents RF>1 on streaming replicated tables on `master_create_worker_shards`

Prior to this change, `master_create_worker_shards` command was not checking
the replication model of the target table, thus allowing RF>1 with streaming
replicated tables. With this change, `master_create_worker_shards` errors
out on the case.
2017-02-16 10:37:53 -08:00
Jason Petersen ad19b668ac Fix tests broken by new PostgreSQL patch releases (#1220)
PostgreSQL 9.5.6 and 9.6.2 were released today and broke several tests
by adding TABLESPACE pg_default output to some DDL commands. Fixed all
occurrences.

cr: @anarazel
2017-02-09 16:53:02 -07:00
Onder Kalaci 95f8382ca2 Bugfix for creating foreign key
This commit fixes a crash: adding foreign keys without
specifying the referenced column used to crash the backend.
2017-02-07 09:34:24 +02:00
Brian Cloutier e3c763c3f7 Start remote transactions in master_append_table_to_shard
Add a call to RemoteTransactionBeginIfNecessary so that BEGIN is
actually sent to the remote connections. This means that ROLLBACK and
Ctrl-C are respected and don't leave the table in a partial state.
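
An illustrative sketch (shard id, staging table, host and port are
assumptions):

     BEGIN;
     SELECT master_append_table_to_shard(102008, 'staging_orders', 'localhost', 5432);
     ROLLBACK;  -- the append is now rolled back on the workers as well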
2017-02-01 18:12:19 +02:00
Eren Basak 005c533ee8 Fix Random Fails on Travis
This change fixes the random failures on Travis, which is a bug introduced
with citus/#1124. Before this fix, travis was failing randomly on `check_multi_mx`
test schedule, specifically in the parallel group of `multi_mx_metadata`,
`multi_mx_modifications` and `multi_mx_modifying_xacts` tests. This change fixes this
by serializing these three test cases.
2017-01-31 15:23:06 -08:00
Eren Basak ae0bfb1394 Allow dropping sequences on mx workers
This change allows users to drop sequences on MX workers. Previously, Citus didn't allow dropping
sequences on MX workers because it could cause shards to be dropped if `DROP SEQUENCE ... CASCADE`
is used. We now allow that since allowing sequence creation but not dropping hurts user experience
and also may cause problems with custom Citus solutions.
2017-01-31 14:51:44 -08:00
Brian Cloutier 6843ad8e91 Fix bug where router executor sends query to failed connections 2017-01-27 09:40:30 +02:00
Brian Cloutier 1173f3f225 Refactor CheckShardPlacements
- Break CheckShardPlacements into multiple functions (The most important
  is MarkFailedShardPlacements), so that we can get rid of the global
  CoordinatedTransactionUses2PC.
- Call MarkFailedShardPlacements in the router executor, so we mark
  shards as invalid and stop using them while inside transaction blocks.
2017-01-26 13:20:45 +02:00
Murat Tuncer 0e635b69f0 Add copy failure tests inside transactions 2017-01-26 11:54:40 +03:00
Murat Tuncer 1107439ade Fix dependent tests 2017-01-25 19:19:39 +03:00
Murat Tuncer 5194111420 Add failure case for regression tests 2017-01-25 19:19:39 +03:00
Marco Slot b1626887d5 Don't mark placements inactive in COPY after successful connection 2017-01-25 19:19:38 +03:00
Marco Slot 2748660b1c Always skip foreign key validation when enable_ddl_propagation is off 2017-01-25 11:56:59 +01:00
Marco Slot ba940a1de9 Use coordinator instead of schema node in terminology 2017-01-25 11:07:23 +01:00
Marco Slot 72725ba30c Use bigserial instead of BIGINT in sequence error 2017-01-25 11:07:23 +01:00
Burak Yucesoy 00dd42dc44 Add ORDER BY to some tests to have consistent output 2017-01-25 11:43:25 +02:00
Eren Basak 88e9a429e1 Add Regression Tests For Querying MX Tables from Workers 2017-01-24 10:36:59 +03:00
Burak Yucesoy d80e7849a4 Convert DropShards to use new connection API
With this change the DropShards function started to use the new connection API. DropShards
is used by DROP TABLE, master_drop_all_shards and master_apply_delete_command,
therefore all of these functions now support transactional operations. In the DropShards
function, if we cannot reach a node, we mark the shard state of the related placements as
FILE_TO_DELETE and continue to drop the remaining shards; however, if any error occurs after
establishing the connection, we ROLLBACK the whole operation.
2017-01-23 21:08:41 +03:00
Andres Freund 6939cb8c56 Hack up PREPARE/EXECUTE for nearly all distributed queries.
All router, real-time, task-tracker plannable queries should now have
full prepared statement support (and even use router when possible),
unless they don't go through the custom plan interface (which
basically just affects LANGUAGE SQL (not plpgsql) functions).

This is achieved by forcing postgres' planner to always choose a
custom plan, by assigning very low costs to plans with bound
parameters (i.e. ones where the postgres planner replanned the query
upon EXECUTE with all parameter values provided), instead of the
generic one.

This requires some trickery, because for custom plans to work the
costs for a non-custom plan have to be known, which means we can't
error out when planning the generic plan.  Instead we have to return a
"faux" plan, that'd trigger an error message if executed.  But due to
the custom plan logic that plan will likely (unless called by an SQL
function, or because we can't support that query for some reason) not
be executed; instead the custom plan will be chosen.
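
For illustration (table and column names are assumptions), statements
like the following now run through the distributed executors with full
prepared statement support:

     PREPARE get_order(int) AS
        SELECT * FROM orders WHERE o_orderkey = $1;
     EXECUTE get_order(42);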
2017-01-23 09:23:50 -08:00
Andres Freund c244b8ef4a Make router planner error handling more flexible.
So far router planner had encapsulated different functionality in
MultiRouterPlanCreate. Modifications always go through router, selects
sometimes. Modifications always error out if the query is unsupported,
selects return NULL.  Especially the error handling is a problem for
the upcoming extension of prepared statement support.

Split MultiRouterPlanCreate into CreateRouterPlan and
CreateModifyPlan, and change them to not throw errors.

Instead errors are now reported by setting the new
MultiPlan->planningError.

Callers of router planner functionality now have to throw errors
themselves if desired, but also can skip doing so.

This is a pre-requisite for expanding prepared statement support.

While touching all those lines, improve a number of error messages by
getting them closer to the postgres error message guidelines.
2017-01-23 09:23:50 -08:00
Jason Petersen 56197dbdba
Add replication_model GUC
This adds a replication_model GUC which is used as the replication
model for any new distributed table that is not a reference table.
With this change, tables with replication factor 1 are no longer
implicitly MX tables.

The GUC is similarly respected during empty shard creation for e.g.
existing append-partitioned tables. If the model is set to streaming
while replication factor is greater than one, table and shard creation
routines will error until this invalid combination is corrected.

Changing this parameter requires superuser permissions.
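
A usage sketch (table and column names are illustrative assumptions):

     SET citus.replication_model TO 'streaming';
     SELECT create_distributed_table('orders', 'o_orderkey');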
2017-01-23 09:05:14 -07:00
Burak Yucesoy 2e1df4c910 Reword error message for outer joins requiring repartition
We changed the error message which appears when the user tries to execute an outer
join that requires repartitioning. The old error message mentioned 1-to-1 shard
partitioning, which may not be clear to the user.
2017-01-23 10:42:36 +03:00
Marco Slot ea855ddf86 Add an enable_deadlock_prevention flag to allow router transactions to expand to multiple nodes 2017-01-22 17:31:24 +01:00
Andres Freund 78b085106a Remove connection_cache.[ch]. 2017-01-21 09:01:15 -08:00
Önder Kalacı 594fa761e1 Merge branch 'master' into fix_command_counter_increment 2017-01-21 09:21:19 +02:00
Murat Tuncer d76f781ae4 Convert multi copy to use new connection api
This enables proper transactional behaviour for copy and relaxes some
restrictions like combining COPY with single-row modifications. It
also provides the basis for relaxing restrictions further, and for
optionally allowing connection caching.
2017-01-20 19:15:19 -08:00
Jason Petersen 4e7b23472c
Change default replication factor to one
Took the quick-and-dirty approach of changing it back to two during
test runs. Can update tests to expect one in due time.
2017-01-20 18:56:43 -07:00
Onder Kalaci bd825be340 Improve heap access methods
This commit improves heap access methods for reference table
upgrade and colocation group modifications.
2017-01-20 14:53:29 +02:00
Metin Doslu 93e626c896 Refactor get_shard_id_for_distribution_column() and other minor changes 2017-01-20 14:38:01 +02:00
Metin Doslu 7cff8719c2 Add worker_hash() and a stub for isolate_tenant_to_new_shard() 2017-01-20 14:38:01 +02:00
Murat Tuncer c12bd7b75e
Remove hint message from master_remove_node UDF
The hint about master_disable_node was giving the wrong
impression to users. Removal is better than keeping it.
2017-01-18 22:33:00 -07:00
Eren Basak 4def1ca696 Prevent COPY to reference tables from worker nodes 2017-01-18 17:38:01 +03:00
Eren Basak e7c15ecc1f Make `upgrade_to_reference_table` function MX-compatible 2017-01-18 16:49:50 +03:00
Eren Basak 56ca590daa Propagate metadata changes for deleted reference table placements on master_remove_node call 2017-01-18 16:00:07 +03:00
Eren Basak be78769ae4 Propagate new reference table placement metadata on `master_add_node` 2017-01-18 15:59:06 +03:00
Eren Basak 23b2619412 Make reference table metadata synced to workers 2017-01-18 15:59:05 +03:00
Eren Basak e44d226221 Propagate Metadata to Workers on `create_reference_table` call. 2017-01-18 11:05:24 +03:00
Eren Basak b686d9a025 Add Sequence Support for MX Tables
This change adds support for serial columns to be used with MX tables.
Prior to this change, sequences of serial columns were created in all
workers (for being able to create shards) but never used. With MX, we
need to set the sequences so that sequences in each worker create
unique values. This is done by setting the MINVALUE, MAXVALUE and
START values of the sequence.
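
Conceptually, each worker's copy of a serial column's sequence gets a
disjoint range, roughly like this sketch (the sequence name and boundary
values are illustrative assumptions):

     ALTER SEQUENCE orders_id_seq
        MINVALUE 281474976710657 MAXVALUE 562949953421312
        START WITH 281474976710657;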
2017-01-18 09:43:38 +03:00
Eren Basak b1ce8d61c0 Create Invalidation Trigger for pg_dist_local_group Table Updates 2017-01-18 09:43:38 +03:00