Commit Graph

650 Commits (c1566d464b1d7a3041d31bb7011c050daa770f5c)

Author SHA1 Message Date
Marco Slot c1566d464b Fix failure and isolation tests
On top of the citus.max_cached_conns_per_worker GUC, this commit
updates the regression tests to comply with the new behaviour.
2019-05-29 14:42:31 +02:00
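A minimal sketch of exercising the GUC mentioned above; the value 4 and the table name dist_table are illustrative assumptions, not taken from the commit:

  -- illustrative only: keep at most 4 cached connections per worker per backend
  SET citus.max_cached_conns_per_worker TO 4;
  BEGIN;
  SELECT count(*) FROM dist_table;  -- hypothetical distributed table
  COMMIT;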
Onder Kalaci d46b92d79a Add order by to multi_mx_schema_support 2019-05-28 12:23:28 +02:00
Onder Kalaci fa2a6e4d8f Add order by to multi_mx_router_planner 2019-05-28 12:23:28 +02:00
Onder Kalaci 0a7a173eee Add order by to multi_mx_reference_table 2019-05-28 12:23:28 +02:00
Onder Kalaci 1553e12ee4 Add order by to multi_subquery_complex_reference_clause 2019-05-28 12:06:57 +02:00
Philip Dubé b8871d9ff4 Propagate more ALTER FOREIGN TABLE to workers 2019-05-24 12:54:05 -07:00
Marco Slot b3fcf2a48f Deprecate master_modify_multiple_shards 2019-05-24 15:22:06 +02:00
Marco Slot 7fa5d36057 Stop using master_modify_multiple_shards in TRUNCATE 2019-05-24 14:35:46 +02:00
Hanefi Onaldi 7443191397 Improve tests for round robin & router queries 2019-05-24 14:16:56 +03:00
Onder Kalaci f1a80a609f Fix wrong test output
If the replication factor equals 2 and there are two worker nodes,
even if two modifications hit different shards, Citus doesn't use
2PC. The reason is that this doesn't fit the definition of
"expanding participating worker nodes".

Thus, we're simply fixing the test to match the comment
on top of it.
2019-05-21 19:12:37 +03:00
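A hedged sketch of the scenario described above; the table two_replica_table and the assumption that keys 1 and 2 land on different shards are purely illustrative:

  SET citus.shard_replication_factor TO 2;
  CREATE TABLE two_replica_table (key int, value int);
  SELECT create_distributed_table('two_replica_table', 'key');
  BEGIN;
  -- two modifications hit different shards, yet with replication factor 2
  -- and two workers each shard already covers both nodes, so no 2PC is used
  UPDATE two_replica_table SET value = value + 1 WHERE key = 1;
  UPDATE two_replica_table SET value = value + 1 WHERE key = 2;
  COMMIT;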
Onder Kalaci f76abfe470 Add ORDER BY to multi_router_planner 2019-05-21 15:54:33 +03:00
Onder Kalaci f06a79563d Add ORDER BY to multi_foreign_key 2019-05-21 15:54:03 +03:00
Hanefi Onaldi 4030d603eb Merge pull request #2691 from citusdata/update_changelog
Add 8.1.2 and 8.2.1 changelog entries
2019-05-15 09:18:58 +03:00
Onder Kalaci 5d68a13139 Add order by to multi_shard_update_delete 2019-05-02 20:09:33 +03:00
Onder Kalaci 2c76b4bc46 Add order by to multi_function_in_join test 2019-05-02 20:05:25 +03:00
Onder Kalaci 3d871c5334 Add some ORDER BYs to make the test output consistent 2019-05-02 18:00:46 +03:00
Hadi Moshayedi 32ecb6884c Test ROLLBACK TO SAVEPOINT with multi-shard CTE failures 2019-05-01 09:33:43 -07:00
Hadi Moshayedi aafd22dffa Fix savepoint rollback for INSERT INTO ... SELECT. 2019-05-01 09:33:43 -07:00
Hadi Moshayedi b69a762e0b Fix savepoint rollback after multi-shard update failure. 2019-05-01 09:33:43 -07:00
Onder Kalaci 82813a8796 Add ORDER BYs to multi_subquery and subqueries_deep tests 2019-04-24 13:36:11 +03:00
Onder Kalaci 64b323d9eb Add ORDER BY to set_operations 2019-04-23 11:51:58 +03:00
Onder Kalaci 913ffc9dcd Add ORDER BY to multi_subquery_in_where_clause 2019-04-23 11:46:00 +03:00
Onder Kalaci 753163b4d8 Be less verbose for printing worker ports in intermediate_results 2019-04-17 14:57:20 +03:00
Onder Kalaci b3af5b2cc4 Add order by multi_mx_modifications 2019-04-17 14:57:20 +03:00
Onder Kalaci a159bd9aed Add order by window_functions 2019-04-17 14:57:20 +03:00
Jason Petersen 5a017c684c Add repro case for #2484 2019-04-15 23:14:11 -06:00
Onder Kalaci 6d81fc518c Add order by subquery_complex_target_list 2019-04-10 19:55:41 +03:00
Onder Kalaci 298e95c441 Add order by multi_shard_update_delete 2019-04-09 12:41:46 +03:00
Onder Kalaci 6a8e2c260a Add order by multi_insert_select 2019-04-09 12:28:57 +03:00
Onder Kalaci af096a898c Add order by subquery_and_cte 2019-04-09 12:19:10 +03:00
Onder Kalaci 56a1a39fd4 Add order by multi_subquery_complex_queries 2019-04-09 12:12:26 +03:00
Onder Kalaci 4effa8c1f8 Add order by multi_schema_support 2019-04-09 11:52:08 +03:00
Onder Kalaci 92e87738dd Make sure that the regression test output is robust against different execution orders
Mostly add ORDER BYs and suppress worker node ports in the test
outputs.
2019-04-08 11:48:08 +03:00
Murat Tuncer 1424f75ec9 Support columns referencing aliased joins
We used to rely on the PG function flatten_join_alias_vars
to resolve the actual columns referenced in the target entry list.

The function goes deep and finds the actual relation. This logic
usually works fine. However, when joins are given an alias, inner
relation names are not visible to the target entry. Thus, relation
resolution should stop when the target entry column refers to an
RTE of an aliased join.

We stopped using the PG function and provided our own flatten function.
2019-03-26 09:46:22 +03:00
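A sketch of the query shape this commit targets; the tables t1(id, value) and t2(id, extra) and the alias j are hypothetical, chosen only to show columns referencing an aliased join, which hides the inner relation names:

  SELECT j.id, j.value
  FROM (t1 JOIN t2 USING (id)) AS j
  WHERE j.id = 5;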
Jason Petersen 4c7f78bd7e Code review feedback 2019-03-25 22:07:27 -05:00
Jason Petersen 69adb627c3 Add Assert that will crash before coercion fix is in 2019-03-22 20:32:19 -06:00
Nils Dijk feaac69769 Implementation for async FinishConnectionListEstablishment (#2584) 2019-03-22 17:30:42 +01:00
Marco Slot e3b7e74f43 Allow rescan in DECLARE .. WITH HOLD 2019-03-22 11:25:55 +01:00
Jason Petersen a2c6f596f9 Address code review comments 2019-03-21 11:59:52 -06:00
Onder Kalaci 41d8c4030a Add some more regression tests for outer join pushdown 2019-03-19 11:49:38 +03:00
Onder Kalaci ad5ff1d01a Some queries lead to infinite recursion with recursive planning
The rule for infinite recursion is the following:

    - If the query contains a subquery which is recursively planned, and
      no other subqueries can be recursively planned due to correlation
      (e.g., LATERAL joins), the planner keeps recursing again and again.

One interesting thing here is that even if a subquery contains only intermediate
result(s), we recursively plan it again. In the end, the logic in the code does the following:

  - Try recursive planning any of the subqueries in the query tree
     - If any subquery is recursively planned, call the planner again
        where the subquery is replaced with the intermediate result.
        - Try recursively planning any of the queries
          - If any subquery is recursively planned, call the planner again
            where the subquery (in this case it is already intermediate result)
            is replaced with the intermediate result.
              - Try recursively planning any of the queries
                - If any subquery is recursively planned, call the planner again
                  where the subquery (in this case it is already intermediate result)
                  is replaced with the intermediate result.
                  - Try recursively planning any of the queries
                    - If any subquery is recursively planned, call the planner again
                      where the subquery (in this case it is already intermediate result)
                      is replaced with the intermediate result.
                      ......
2019-03-18 10:35:00 +03:00
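A hedged sketch of a query shape that could hit the loop described above, assuming a hypothetical distributed table dist(key int, value int); the LATERAL subquery is correlated and cannot be recursively planned, while the LIMIT subquery can, so the planner keeps re-planning:

  SELECT d.key
  FROM dist d,
       LATERAL (SELECT * FROM dist d2 WHERE d2.key = d.key LIMIT 1) corr,
       (SELECT key FROM dist ORDER BY key LIMIT 10) plannable
  WHERE d.key = plannable.key;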
Marco Slot f2abf2b8e5 Functions are treated as transaction blocks 2019-03-15 16:34:08 -06:00
Marco Slot 4b9bd54ae0 Remove create_insert_proxy_for_table 2019-03-15 14:13:03 -06:00
Hadi Moshayedi a9e6d06a98 Skip execution of ALTER TABLE constraint checks on the coordinator 2019-03-14 15:40:56 -07:00
Hadi Moshayedi cdd3b15ac8 Fix distributed deadlock for ALTER TABLE ... ATTACH PARTITION.
Following scenario resulted in distributed deadlock before this commit:

CREATE TABLE partitioning_test(id int, time date) PARTITION BY RANGE (time);
CREATE TABLE partitioning_test_2009 (LIKE partitioning_test);
CREATE TABLE partitioning_test_reference(id int PRIMARY KEY, subid int);

SELECT create_distributed_table('partitioning_test_2009', 'id'),
       create_distributed_table('partitioning_test', 'id'),
       create_reference_table('partitioning_test_reference');

ALTER TABLE partitioning_test ADD CONSTRAINT partitioning_reference_fkey FOREIGN KEY (id) REFERENCES partitioning_test_reference(id) ON DELETE CASCADE;
ALTER TABLE partitioning_test_2009 ADD CONSTRAINT partitioning_reference_fkey_2009 FOREIGN KEY (id) REFERENCES partitioning_test_reference(id) ON DELETE CASCADE;

ALTER TABLE partitioning_test ATTACH PARTITION partitioning_test_2009 FOR VALUES FROM ('2009-01-01') TO ('2010-01-01');
2019-03-14 15:28:37 -07:00
Murat Tuncer 2681231c98 Create column aliases for shard tables in worker queries when requested 2019-03-07 12:54:42 +03:00
velioglu faf50849d7 Enhance pushdown planning logic to handle full outer joins with USING clause
Since flattening the query may flatten an outer join's columns into a COALESCE expression
in the USING part, which was not expected before this commit, these queries were
erroring out. This commit fixes that by considering the COALESCE expression as well.
2019-03-05 11:49:30 +03:00
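A sketch of such a query; the colocated distributed tables t1 and t2, both with a column a, are hypothetical. With a FULL OUTER JOIN, the USING column a is flattened into COALESCE(t1.a, t2.a):

  SELECT a, count(*)
  FROM t1 FULL OUTER JOIN t2 USING (a)
  GROUP BY a;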
Onder Kalaci f706772b2f Round-robin task assignment policy relies on local transaction id
Before this commit, the round-robin task assignment policy relied
on the taskId. Thus, even inside a transaction, the tasks were
assigned to different nodes. This was especially problematic
while reading from reference tables within transaction blocks,
because we had to expand the distributed transaction to many
nodes that were not necessarily already part of the distributed transaction.
2019-02-22 19:26:38 +03:00
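A hedged sketch of the problematic pattern; ref_table is a hypothetical reference table, and the GUC is set explicitly only to make the policy visible:

  SET citus.task_assignment_policy TO 'round-robin';
  BEGIN;
  -- before this commit each SELECT could be routed to a different worker,
  -- pulling additional nodes into the distributed transaction
  SELECT count(*) FROM ref_table;
  SELECT count(*) FROM ref_table;
  COMMIT;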
Onder Kalaci f144bb4911 Introduce fast path router planning
In this context, we define "Fast Path Planning for SELECT" as trivial
queries where Citus can skip relying on standard_planner() and
handle all the planning itself.

For the router planner, standard_planner() is mostly important for generating
the necessary restriction information. Later, the restriction information
generated by standard_planner() is used to decide whether all the shards
that a distributed query touches reside on a single worker node. However,
standard_planner() does a lot of extra things such as cost estimation and
execution path generation which are completely unnecessary in the context
of distributed planning.

There are certain types of queries where Citus could skip relying on
standard_planner() to generate the restriction information. For queries
in the following format, Citus does not need any information that the
standard_planner() generates:

  SELECT ... FROM single_table WHERE distribution_key = X;  or
  DELETE FROM single_table WHERE distribution_key = X; or
  UPDATE single_table SET value_1 = value_2 + 1 WHERE distribution_key = X;

Note that the queries might not be as simple as the above; GROUP BY,
WINDOW FUNCTIONS, ORDER BY, HAVING, etc. are all acceptable. The
only rule is that the query is on a single distributed (or reference) table
and there is a "distribution_key = X;" in the WHERE clause. With that, we
can decide on which worker node the single shard that the query touches
resides.
2019-02-21 13:27:01 +03:00
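A sketch of a fast-path-eligible query that is not entirely trivial; single_table and distribution_key come from the examples above, while value_1 is a hypothetical column:

  SELECT value_1, count(*)
  FROM single_table
  WHERE distribution_key = 10
  GROUP BY value_1
  HAVING count(*) > 1
  ORDER BY value_1;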
Hanefi Onaldi 825666f912 Query samples in docs and better errors 2019-02-04 19:20:02 +03:00