Commit Graph

234 Commits (290f9a63019f8597c2d649962d873d04736e13d3)

Author SHA1 Message Date
Marco Slot 290f9a6301 Move INSERT ... SELECT planning logic into one place 2017-06-29 15:03:14 +02:00
Onder Kalaci dc9b63504e Add some utility functions for partitioned tables
This commit is intended to be a base for supporting declarative partitioning
on distributed tables. Here we add the following utility functions and their
unit tests:

  * Very basic functions for differentiating partitioned tables and
    partitions, and listing the partitions
  * Generating the PARTITION BY (expr) and adding this to the DDL events
    of partitioned tables
  * Ability to generate text representations of the ranges for partitions
  * Ability to generate the `ALTER TABLE parent_table ATTACH PARTITION
    partition_table FOR VALUES value_range`
  * Ability to add shard IDs to the above command using
    `worker_apply_inter_shard_ddl_command()` (see the sketch after this entry)
  * Ability to generate `ALTER TABLE parent_table DETACH PARTITION`
2017-06-28 09:39:55 +03:00
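Illustrating the generated DDL above, a minimal SQL sketch assuming a hypothetical parent table `events` range-partitioned on `created_at`; the shard-ID rewriting through `worker_apply_inter_shard_ddl_command()` is left out:

    -- Declarative partitioning as introduced in PostgreSQL 10
    CREATE TABLE events (event_id bigint, created_at date) PARTITION BY RANGE (created_at);
    CREATE TABLE events_2017_06 (event_id bigint, created_at date);

    -- The kind of commands these utility functions generate per partition
    ALTER TABLE events ATTACH PARTITION events_2017_06
        FOR VALUES FROM ('2017-06-01') TO ('2017-07-01');
    ALTER TABLE events DETACH PARTITION events_2017_06;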
Andres Freund c9c26ad81f Remove 9.5 related node wrappers.
Now that all branches support the extensible node infrastructure, we
don't need our wrappers anymore.
2017-06-26 08:46:32 -07:00
Andres Freund 7733154be0 Fix code only enabled for 9.5.
There are still supporting wrappers in use; a subsequent commit will
remove those.

This also removes the already unused tuplecount_t define.
2017-06-26 08:46:32 -07:00
Jason Petersen 05aef7ce29 Support PostgreSQL 10 (#1379)
Adds support for PostgreSQL 10 by copying in the requisite ruleutils
and updating all API usages to conform with changes in PostgreSQL 10.
Most changes are fairly minor but they are numerous. One particular
obstacle was the change in \d behavior in PostgreSQL 10's psql; I had
to add SQL implementations (views, mostly) to mimic the pre-10 output.
2017-06-26 02:35:46 -06:00
Andres Freund 62e7b67dc1 Introduce per-database maintenance process.
This will be used for deadlock detection, prepared transaction
recovery amongst others, but currently is just idling around.
2017-06-23 11:53:39 -07:00
Andres Freund 81f92d44d2 Minimal infrastructure for per-backend citus initialization. 2017-06-23 11:20:10 -07:00
Marco Slot f1c6c966f0 Execute INSERT..SELECT via coordinator if it cannot be pushed down
Add a second implementation of INSERT INTO distributed_table SELECT ... that is used if
the query cannot be pushed down. The basic idea is to execute the SELECT query separately
and pass the results into the distributed table using a CopyDestReceiver, which is also
used for COPY and create_distributed_table. When planning the SELECT, we go through
planner hooks again, which means the SELECT can also be a distributed query.

EXPLAIN is supported, but EXPLAIN ANALYZE is not because preventing double execution was
a lot more complicated in this case.
2017-06-22 15:46:30 +02:00
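A minimal sketch of a query that takes this new path, assuming hypothetical tables `orders` (distributed by customer_id) and `daily_totals` (distributed by order_day), so the SELECT cannot simply be pushed down:

    -- The SELECT is planned as a distributed query of its own; its results are
    -- streamed into daily_totals through the CopyDestReceiver infrastructure.
    INSERT INTO daily_totals (order_day, total)
    SELECT created_at::date, sum(amount)
    FROM orders
    GROUP BY 1;

    -- EXPLAIN works for this path; EXPLAIN ANALYZE is not supported.
    EXPLAIN INSERT INTO daily_totals (order_day, total)
    SELECT created_at::date, sum(amount) FROM orders GROUP BY 1;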
Marco Slot 7675d21151 Simplify router planner call path 2017-06-22 15:45:57 +02:00
velioglu 0cf67a3164 Delete ExecuteRemoteCommand function 2017-06-15 17:11:19 +03:00
Burak Yucesoy 1eb77495b9 Fix incorrect call to CheckInstalledVersion
During a version update, we indirectly called CheckInstalledVersion via
CheckCitusVersions. This obviously fails, because during a version update a
mismatch between the installed version and the binary version is expected.
Thus, we removed that CheckCitusVersions call and now only call CheckAvailableVersion.
2017-05-24 17:39:25 +03:00
Burak Yucesoy 2f075c43cc Only error out on distributed queries when there is version mismatch
Before this commit, we were erroring out on almost all queries if there was a
version mismatch. With this commit, we error out only when the requested
operation touches distributed tables.

Normally we would use the distributed table cache to understand whether a table
is distributed. However, it is not safe to read our metadata tables when
there is a version mismatch, and thus not safe to build that cache.
Therefore, for this specific case, we read directly from the pg_dist_partition
catalog table (see the sketch after this entry). Reading from the catalog is
costly, though, so this method should be avoided elsewhere as much as possible.
2017-05-22 09:53:29 +03:00
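A plain-SQL sketch of the direct catalog check described above, with a hypothetical table name; the commit does the equivalent lookup in C instead of going through the metadata cache:

    -- Is my_table distributed? Read pg_dist_partition directly instead of
    -- building the (version-sensitive) distributed table cache.
    SELECT EXISTS (
        SELECT 1
        FROM pg_dist_partition
        WHERE logicalrelid = 'my_table'::regclass
    ) AS is_distributed;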
Jason Petersen b08c2a0ccc Use library and symbol name for bgw entry
PostgreSQL 10 takes away the ability to directly assign a function
pointer; the other approach (library and symbol name) is supported by
all versions.
2017-05-16 11:05:33 -06:00
Önder Kalacı df20f7c5ed Add support for parametrized execution for subquery pushdown (#1356)
Distributed query planning for subquery pushdown is done on the original
query. This prevents the use of external parameters at execution time.
To overcome this, we manually replace the parameters in the original
query.
2017-05-10 09:38:48 +03:00
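A minimal sketch of the parameterized execution this enables, assuming a hypothetical distributed table `events`; the bound parameter is substituted into the original query before subquery pushdown planning:

    PREPARE active_users (int) AS
    SELECT count(*)
    FROM (
        SELECT user_id FROM events WHERE event_type = $1 GROUP BY user_id
    ) AS active_user_ids;

    EXECUTE active_users(42);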
Marco Slot d936d535df Honour enable_ddl_propagation in truncate trigger 2017-04-29 03:32:52 +02:00
Önder Kalacı 76ea17a9bc Subquery pushdown - main branch (#1323)
* Enabling physical planner for subquery pushdown changes

This commit applies the logic that exists in INSERT .. SELECT
planning to the subquery pushdown changes.

The main algorithm is as follows:
   - pick an anchor relation (i.e., target relation)
   - per each target shard interval
       - add the target shard interval's shard range
         as a restriction to the relations (if all relations
         joined on the partition keys)
       - check whether the query is router plannable per
         target shard interval
       - if router plannable, create a task

* Add union support within the JOINS

This commit adds support for UNION/UNION ALL subqueries that are
in the following form:

     .... (Q1 UNION Q2 UNION ...) as union_query JOIN (QN) ...

In other words, we currently do NOT support queries of the following
form, where the union query is not JOINed with other
relations/subqueries:

     .... (Q1 UNION Q2 UNION ...) as union_query ....

* Subquery pushdown planner uses original query

With this commit, we change the input to the logical planner for
subquery pushdown. Before this commit, the planner was relying
on the query tree that is transformed by the postgresql planner.
After this commit, the planner uses the original query. The main
motivation behind this change is to simplify deparsing of
subqueries.

* Enable top level subquery join queries

This work enables
- Top level subquery joins
- Joins between subqueries and relations
- Joins involving more than 2 range table entries

A new regression test file is added to reflect enabled test cases

* Add top level union support

This commit adds support for UNION/UNION ALL subqueries that are
in the following form:

     .... (Q1 UNION Q2 UNION ...) as union_query ....

In other words, Citus now allows top-level unions wrapped in
aggregation queries and/or simple projection queries that select
only some fields from the lower-level queries.

* Disallow subqueries without a relation in the range table list for subquery pushdown

This commit disallows subqueries without a relation in the range table
list. It only applies to subquery pushdown; in other words, we do not
add this limitation for single-table re-partition subqueries.

The reasoning behind this limitation is that if we allowed pushing down
such queries, the result would include (shardCount * expectedResults)
rows, whereas in a non-distributed setting the result would be
(expectedResult) only.

* Disallow subqueries without a relation in the range table list for INSERT .. SELECT

This commit disallows subqueries without a relation in the range table
list. It only applies to INSERT .. SELECT queries.

The reasoning behind this limitation is that if we allowed pushing down
such queries, the result would include (shardCount * expectedResults)
rows, whereas in a non-distributed setting the result would be
(expectedResult) only.

* Change behaviour of subquery pushdown flag (#1315)

This commit changes the behaviour of the citus.subquery_pushdown flag.
Before this commit, the flag was used to enable subquery pushdown logic. With
this commit, that behaviour is enabled by default, so the flag is effectively
unused. We keep the flag so as not to break backward compatibility, and we may
use it for other purposes in future commits.

* Require subquery_pushdown when limit is used in subquery

Using LIMIT in subqueries may return incorrect results. Therefore we
allow limits in subqueries only if the user explicitly sets the
subquery_pushdown flag.

* Evaluate expressions on the LIMIT clause (#1333)

Because subquery pushdown uses the original query, the LIMIT and OFFSET clauses
are not evaluated. However, the logical optimizer expects these expressions
to have been evaluated by the standard planner already. This commit manually
evaluates these expressions in the logical planner for subquery pushdown.

* Better format subquery regression tests (#1340)

* Style fix for subquery pushdown regression tests

With this commit we applied a more consistent style to the
regression tests we've added in:
  - multi_subquery_union.sql
  - multi_subquery_complex_queries.sql
  - multi_subquery_behavioral_analytics.sql

* Enable the tests that are temporarily commented

This commit enables some of the regression tests that were commented
out until all the development was done.

* Fix merge conflicts (#1347)

 - Update regression tests to meet the changes in the regression
   test output.
 - Replace Ifs with Asserts given that the check is already done
 - Update shard pruning outputs

* Add view regression tests for increased subquery coverage (#1348)

- joins between views and tables
- joins between views
- union/union all queries involving views
- views with limit
- explain queries with view

* Improve btree operators for the subquery tests

This commit adds the missing comparison function for the subquery
composite key btree comparator.
2017-04-29 04:09:48 +03:00
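A concrete sketch of the union-within-join shape and the LIMIT opt-in described above, using hypothetical tables that are all distributed on user_id:

    -- (Q1 UNION Q2) AS union_query JOIN (QN): supported by the new pushdown logic
    SELECT u.user_id, count(*)
    FROM (
        SELECT user_id FROM events_may
        UNION
        SELECT user_id FROM events_june
    ) AS union_query
    JOIN (
        SELECT user_id FROM users WHERE country_code = 'TR'
    ) AS u ON (union_query.user_id = u.user_id)
    GROUP BY u.user_id;

    -- LIMIT inside a subquery requires the explicit opt-in flag
    SET citus.subquery_pushdown TO on;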
Burak Yucesoy edd69310fd Fix check-vanilla tests
It seems that GEQO optimizations, when enabled, create their own memory context
and free it when it is no longer necessary. In multi_join_restriction_hook
we allocate our variables in the CurrentMemoryContext, which is GEQO's memory context
if it is active. To prevent deallocation of our variables when GEQO's memory context is
freed, we now allocate memory for these variables in a separate MemoryContext.
2017-04-29 01:55:18 +02:00
Andres Freund f6ef7f2c03 Faster shard pruning.
So far citus used postgres' predicate proofing logic for shard
pruning, except for INSERT and COPY which were already optimized for
speed.  That turns out to be too slow:
* Shard pruning for SELECTs is currently O(#shards), because
  PruneShardList calls predicate_refuted_by() for every
  shard. Obviously using an O(N) type algorithm for general pruning
  isn't good.
* predicate_refuted_by() is quite expensive in its own right. That's
  primarily because it's optimized for doing a single refutation
  proof, rather than performing the same proof over and over.
* predicate_refuted_by() does not keep persistent state (see the previous
  point) for function calls, which means that a lot of syscache lookups will
  be performed. That's particularly bad if the partitioning key is a
  composite key, because without a persistent FunctionCallInfo
  record_cmp() has to repeatedly look up the type definition of the
  composite key. That's quite expensive.

Thus replace this with custom-code that works in two phases:
1) Search restrictions for constraints that can be pruned upon
2) Use those restrictions to search for matching shards in the most
   efficient manner available:
   a) Binary search / Hash Lookup in case of hash partitioned tables
   b) Binary search for equal clauses in case of range or append
      tables without overlapping shards.
   c) Binary search for inequality clauses, searching for both lower
      and upper boundaries, again in case of range or append
      tables without overlapping shards.
   d) exhaustive search testing each ShardInterval

My measurements suggest that we are considerably, often orders of
magnitude, faster than the previous solution, even if we have to fall
back to exhaustive pruning.
2017-04-28 14:40:41 -07:00
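A sketch of the queries the new pruning code targets, assuming a hypothetical table `orders` hash-distributed on customer_id:

    -- Equality on the partition column: pruned via hash lookup / binary search
    -- over the sorted shard interval array instead of predicate_refuted_by().
    SELECT * FROM orders WHERE customer_id = 42;

    -- On range- or append-distributed tables without overlapping shards,
    -- inequality clauses are pruned by binary-searching both boundaries.
    SELECT * FROM orders WHERE customer_id BETWEEN 100 AND 200;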
Andres Freund 2013090a77 Add DistTableCacheEntry->hasOverlappingShardInterval.
This determines whether it's possible to perform binary search on
sortedShardIntervalArray or not.  If e.g. two shards have overlapping
ranges, that'd be prohibitive.

That'll be useful in a later commit introducing faster shard pruning.
2017-04-28 14:40:38 -07:00
Andres Freund 15d427f931 Add DistTableCacheEntry->shardValueCompareFunction.
That's useful when comparing values a hash-partitioned table is
filtered by.  The existing shardIntervalCompareFunction is about
comparing hashed values, not unhashed ones.

The added btree opclass function is so we can get a comparator
back. This should be changed much more widely, but is not necessary so
far.
2017-04-28 14:40:38 -07:00
Andres Freund f3172e9719 Build DistTableCacheEntry->shardIntervalCompareFunction even for 0 shards.
Previously we, unnecessarily, used the first shard's type
information to look up the comparison function.  But that
information is already available, so use it.  That's helpful because
we sometimes want to access the comparator function even if there are no
shards.
2017-04-28 14:40:38 -07:00
Metin Doslu d411892fe6 Send explain queries with savepoints
With this commit, we start sending EXPLAIN queries within a savepoint. After
running the EXPLAIN query, we roll back to the savepoint. This protects us from
the side effects of EXPLAIN ANALYZE on DML queries.
2017-04-28 12:13:48 -07:00
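The pattern this amounts to, sketched in plain SQL with a hypothetical shard name and savepoint name; the commit emits the equivalent commands internally:

    BEGIN;
    SAVEPOINT citus_explain_savepoint;
    -- EXPLAIN ANALYZE actually executes the DML, so its effects must be undone
    EXPLAIN ANALYZE DELETE FROM orders_102008 WHERE customer_id = 42;
    ROLLBACK TO SAVEPOINT citus_explain_savepoint;
    COMMIT;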
Jason Petersen 1c353e68aa Remove FastShardPruning method
With the other simplifications, it doesn't make sense to keep around.
2017-04-27 13:32:36 -06:00
Jason Petersen 06497d74f5 Refactor FindShardInterval to use cacheEntry
All callers fetch a cache entry and extract/compute arguments for the
eventual FindShardInterval call, so it makes more sense to refactor
into that function itself; this solves the use-after-free bug, too.
2017-04-27 13:32:36 -06:00
Marco Slot 7d1f7b8923 Support expressions in the partition column in INSERTs 2017-04-21 14:05:52 +02:00
velioglu a26edd2249 Implement ALTER TABLE ADD CONSTRAINT command 2017-04-20 15:02:33 +03:00
velioglu 5b3e47de7a Log message of across shard queries according to the log level 2017-04-20 12:24:46 +03:00
velioglu be3cdb14ea Change native hash function with worker_hash 2017-04-19 22:16:55 +03:00
Marco Slot c7603215dd Stop using a sequence to generate unique job IDs 2017-04-18 11:31:51 +02:00
Marco Slot 2fbe546ddd Support query parameters in combination with function evaluation 2017-04-17 15:40:55 +02:00
Marco Slot ccc796cf66 Create indexes after worker_append_table_to_shard during shard repair 2017-04-17 15:17:21 +02:00
Burak Yucesoy d58cb416a4 Decouple reference table replication
With this change we add an option to add a node without replicating all reference
tables to that node. If a node is added with this option, we mark the node as
inactive and no queries will be sent to that node.

We also added two new UDFs:
 - master_activate_node(host, port):
    - marks node as active and replicates all reference tables to that node
 - master_add_inactive_node(host, port):
    - only adds node to pg_dist_node
2017-04-17 13:33:31 +03:00
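A minimal usage sketch of the two UDFs above, with a hypothetical worker host and port:

    -- Add the node to pg_dist_node but keep it inactive: no reference tables
    -- are replicated and no queries are routed to it yet.
    SELECT master_add_inactive_node('worker-101', 5432);

    -- Replicate all reference tables to the node and mark it active.
    SELECT master_activate_node('worker-101', 5432);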
Onder Kalaci 6c9296aca0 Remove uninstantiated qual logic, use attribute equivalences
In this PR, we aim to deduce whether each RTE_RELATION
is joined with at least one other RTE_RELATION on its partition key. If each
RTE_RELATION follows the above rule, we can conclude that all RTE_RELATIONs are
joined on their partition keys.

In order to do that, we invented a new equivalence class, namely
AttributeEquivalenceClass. In very simple terms, an AttributeEquivalenceClass is
identified by a unique id and consists of a list of AttributeEquivalenceMembers.

Each AttributeEquivalenceMember is designed to identify attributes uniquely within the
whole query. The necessity of this arises because varno attributes are defined within
a single level of a query. Instead, here we want to identify each RTE_RELATION uniquely
and try to find equality among each RTE_RELATION's partition key.

Whenever we find an equality clause A = B, where both A and B originate from
relation attributes (i.e., not random expressions), we create an
AttributeEquivalenceClass to record this knowledge. If we later find another
equivalence B = C, we create another AttributeEquivalenceClass. Finally, we can
apply transitivity rules and generate a new AttributeEquivalenceClass which includes
A, B and C.

Note that equality among the members are identified by the varattno and rteIdentity.

Each equality among RTE_RELATIONs is saved using an AttributeEquivalenceClass where
each member attribute is identified by an AttributeEquivalenceMember. In the final
step, we try to generate a common attribute equivalence class that holds as many
AttributeEquivalenceMembers as possible whose attributes are partition keys.
2017-04-13 11:51:26 +03:00
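A sketch of the join pattern this logic recognizes, with hypothetical tables all distributed on user_id; the two equality clauses combine transitively into a single equivalence class covering every partition key:

    SELECT count(*)
    FROM table_1 t1
    JOIN table_2 t2 ON (t1.user_id = t2.user_id)  -- A = B
    JOIN table_3 t3 ON (t2.user_id = t3.user_id); -- B = C, hence A = C by transitivity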
Burak Yucesoy 66a801dd4e Add enable_version_checks GUC and address feedback 2017-04-04 19:11:13 +03:00
Burak Yucesoy 63b232e4ba Error out if binary citus version does not match installed extension
With this change, we start to error out if the loaded citus binary does not match
the available major version or the installed citus extension version. In this case
we force the user to restart the server or run ALTER EXTENSION, depending on the
situation.
2017-04-03 17:36:13 -06:00
Jason Petersen 963090fe05 Address review feedback
Should just about do it.
2017-04-03 11:44:57 -06:00
Jason Petersen afe6908e26 Improve CONCURRENTLY-related error messages
Thought this looked slightly nicer than the default behavior.

Changed preventTransaction to concurrent to be clearer that this code
path presently affects CONCURRENTLY code only.
2017-04-03 11:19:15 -06:00
Jason Petersen ddc8d7111b Update documentation
Ensure all functions have comments, etc.
2017-04-03 11:19:15 -06:00
Jason Petersen d128ad723a Address MX CONCURRENTLY problems
Adds a non-transactional multi-command method to propagate DDLs to all
MX/metadata-synced nodes.
2017-04-03 11:19:15 -06:00
Jason Petersen 236c6900ff Remove CONCURRENTLY checks, fix tests
Still pending failure testing, which broke with my recent changes.
2017-04-03 11:19:15 -06:00
Jason Petersen 7173d85071 Change IndexStmt to generate worker DDL on master
Because we can't execute CREATE INDEX CONCURRENTLY during transactions,
worker_apply_shard_ddl_command is insufficient.
2017-04-03 11:19:14 -06:00
Marco Slot a339c7bbd6 Batch task_tracker_status calls to reduce task-tracker query times 2017-03-31 11:54:11 +02:00
Jason Petersen ef1a42c4dc Address code review comments 2017-03-22 17:29:17 -06:00
Jason Petersen 48db2f1fc8 Rework ReplicateGrantStmt to use new flow
This was the impetus for the previous commit that changed from using a
DDLJob * to a List * of them.
2017-03-22 17:29:16 -06:00
Jason Petersen 041aff8eed Refactor ExecuteDistDDLCommand to expect struct
Will let us separate out the determination of what to execute from its
actual execution.
2017-03-22 17:21:49 -06:00
velioglu d7e244792f Size UDFs implemented
citus_table_size, citus_relation_size and citus_total_relation_size UDFs are implemented.
2017-03-16 13:50:30 +03:00
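A usage sketch for the three UDFs, with a hypothetical distributed table name; they are assumed to parallel pg_relation_size, pg_table_size and pg_total_relation_size, summed across a table's shards:

    SELECT citus_relation_size('github_events');        -- main fork only
    SELECT citus_table_size('github_events');           -- plus free space map, visibility map, TOAST
    SELECT citus_total_relation_size('github_events');  -- plus indexes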
Metin Doslu 76ab7040cb Use CustomScan API for query execution
Custom Scan is a node in the planned statement that lets external providers
abstract the data scan, not just for foreign data wrappers but also for regular
relations, so you can benefit from your own caching or hardware optimizations.
This sounds like an abstraction only at the data scan layer, but we can use it
as an abstraction for our distributed queries. The only thing we need to do is
to find the distributable parts of the query, plan them and replace them with
a Citus Custom Scan. Then, whenever PostgreSQL hits this custom scan node in
its Volcano-style execution, it calls our callback functions, which run the
distributed plan and provide tuples to the upper node as if it were scanning a
regular relation. This means fewer code changes, fewer bugs and more supported
features for us!

First, in the distributed query planner phase, we create a Custom Scan which
wraps the distributed plan. For the real-time and task-tracker executors, we add
this custom plan under the master query plan. For the router executor, we directly
pass the custom plan because there is no master query. Then, we simply let
the PostgreSQL executor run this plan. When it hits the custom scan node, we
call the related executor parts for the distributed plan, fill the tuple store in
the custom scan and return results to the PostgreSQL executor in Volcano style,
a tuple per XXX_ExecScan() call.

* Modify planner to utilize Custom Scan node.
* Create different scan methods for different executors.
* Use native PostgreSQL Explain for master part of queries.
2017-03-14 12:17:51 +02:00
Andres Freund 2a6188d8a1 Initial temp table removal implementation 2017-03-14 12:09:49 +02:00
Jason Petersen 73e0e2a79a Revert "Remove unused SendCommandToWorker"
This reverts commit c8c308c109.
2017-03-13 15:48:51 -06:00
Brian Cloutier ebc7779457 Remove unused SendCommandToWorker 2017-03-08 16:30:23 +03:00