Commit Graph

1417 Commits (d138bb89bf7efaf452226ef5106e85dfd92f419e)

Author SHA1 Message Date
Philip Dubé d138bb89bf Support creating collations as part of dependency resolution. Propagate ALTER/DROP on distributed collations
Propagate CREATE COLLATION when outside transaction
2019-12-09 04:42:51 +00:00
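A hedged sketch of what this covers, with hypothetical object names: a CREATE COLLATION run outside a transaction is now propagated to the workers, and ALTER/DROP on a collation that is a distributed object follow suit.

```sql
-- Hypothetical names for illustration only; assumes an ICU-enabled build.
CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk');
CREATE TABLE phonebook (id bigint, name text COLLATE german_phonebook);
SELECT create_distributed_table('phonebook', 'id');    -- collation resolved as a dependency
ALTER COLLATION german_phonebook RENAME TO german_pb;  -- now propagated to workers
```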
Alexander Pyhalov 6174a4d3d6 Fix build on illumos 2019-12-06 14:40:47 +01:00
Marco Slot 6a9c0ea7fe Fix errors in DML with sublinks hidden by null expressions 2019-12-06 14:25:04 +01:00
Hadi Moshayedi d28beb3711 Detect SQL UDF Calls. 2019-12-05 14:31:05 -08:00
Philip Dubé 5a17fd6d9d Test more reference/local cases, also ALTER ROLE
Test that ALTER ROLE doesn't deadlock when the coordinator is added, and that it doesn't propagate from MX workers

Consolidate wait_until_metadata_sync & verify_metadata to multi_test_helpers
2019-12-03 22:23:14 +00:00
Philip Dubé 1597fbb369 aggregate_support test: test DISTINCT, ORDER BY, FILTER, & no intermediate results
Previously,
- we'd push down ORDER BY, but this doesn't order intermediate results between workers
- we'd keep FILTER on master aggregate, which would raise an error about unexpected cstrings
2019-12-03 15:46:01 +00:00
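As a sketch with a hypothetical `items` table, these are the aggregate modifiers in question; per the notes above, ORDER BY and FILTER cannot simply be pushed to the workers, since ordering per worker does not order the combined intermediate results:

```sql
SELECT count(DISTINCT category),                -- DISTINCT across all workers
       array_agg(name ORDER BY name),           -- ordering of the combined result
       sum(price) FILTER (WHERE price > 10)     -- filter evaluated per input row
FROM items;
```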
Philip Dubé 5fcc169a3a Stray depended to dependent tidy up 2019-12-03 15:28:32 +00:00
Marco Slot bb3bc10f0c Fix segfault in column_to_column_name 2019-12-01 23:57:25 +01:00
Marco Slot b1b13e394e Fix segfault when executing DDL via UDF 2019-12-01 22:54:41 +01:00
Marco Slot 4c8d43c5d0 Bump repo version to 9.2devel 2019-11-29 07:33:39 +01:00
Nils Dijk 1ef1667ddb
add gitref to the output of citus_version (#3246)
DESCRIPTION: add gitref to the output of citus_version

During debugging of custom builds it is hard to know the exact version of the citus build you are using. This patch adds a human-readable git reference to the citus build, which can be retrieved by calling `citus_version();`.
2019-11-29 15:54:09 +01:00
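The call in question; per the description above, its output now carries a git reference identifying the exact build:

```sql
SELECT citus_version();  -- output now includes a gitref for the build
```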
Marco Slot 16d1ad3666 Remove distinction between SQL_TASK and ROUTER_TASK 2019-11-29 05:58:29 +01:00
SaitTalhaNisanci aeec3d1544
fix typo in dependent jobs and dependent task (#3244) 2019-11-28 23:47:28 +03:00
Philip Dubé 0d04ff1692 RECORD: Add support for more expression types
- OpExpr
- NullIfExpr
- MinMaxExpr
- CoalesceExpr
- CaseExpr

Also fix case where ARRAY[(1,2), NULL] was rejected
2019-11-27 17:07:22 +00:00
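A sketch of the newly supported expression shapes, using literal rows (node types noted per line); the last line is the previously rejected case from the message above:

```sql
SELECT CASE WHEN random() < 0.5 THEN (1, 'a') ELSE (2, 'b') END;  -- CaseExpr
SELECT NULLIF((1, 2), (1, 3));                                    -- NullIfExpr
SELECT ARRAY[(1, 2), NULL];                                       -- previously rejected
```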
Philip Dubé 168e11cc9b Implement support for RECORD[] where we support RECORD
Support for ARRAY[] expressions is limited to having a consistent shape,
e.g. ARRAY[(int,text),(int,text)] as opposed to ARRAY[(int,text),(float,text)] or ARRAY[(int,text),(int,text,float)]
2019-11-27 15:02:43 +00:00
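Illustrating the shape constraint with literal arrays:

```sql
SELECT ARRAY[(1, 'a'), (2, 'b')];      -- consistent (int, text) shape: supported
-- ARRAY[(1, 'a'), (2.0, 'b')]         -- (float, text) element: not supported
-- ARRAY[(1, 'a'), (2, 'b', 3.0)]      -- extra field: not supported
```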
Hadi Moshayedi 2268a9cae6 Error for metadata commands if any metadata node is out-of-sync (#3226)
* Error for metadata commands if any metadata node is out-of-sync

* Make the functions have separate APIs for all workers/metadata workers
2019-11-27 09:52:57 +01:00
Önder Kalacı 1cfbeb89ec
Make NodeCanHaveDistTablePlacements() public (#3229)
Since it is required by the rebalancer.
2019-11-26 12:15:38 +01:00
Marco Slot 60b741927f Add missing include to deparse_function_stmts.c 2019-11-24 06:04:22 +01:00
Philip Dubé 261a9de42d Fix typos:
VAR_SET_VALUE_KIND -> VAR_SET_VALUE kind
beginnig -> beginning
plannig -> planning
the the -> the
er then -> er than
2019-11-25 23:24:13 +00:00
Marco Slot 4b0ac4b0dd Properly escape values in ALTER FUNCTION .. SET deparsing. Also add tests 2019-11-25 23:01:30 +00:00
Philip Dubé 3c10c27b13 GetFunctionAlterOwnerCommand: use format_procedure_qualified
distributed_functions: test a function with a quote in name
AppendDefElemSet: quote variable names
2019-11-25 23:01:30 +00:00
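A sketch of the cases these two commits cover, with a hypothetical function whose name embeds a double quote (exercising `format_procedure_qualified`) and a SET clause that must be quoted when deparsed for the workers:

```sql
CREATE FUNCTION "add""er"(int, int) RETURNS int
    AS 'SELECT $1 + $2;' LANGUAGE sql;
SELECT create_distributed_function('"add""er"(int,int)');
ALTER FUNCTION "add""er"(int, int) SET search_path TO 'public';  -- deparsed with escaping
```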
Philip Dubé a81e6a81ab Fix distributed aggregation for non superuser roles
Moves support functions to pg_catalog for now. We'd prefer a different solution
for when we're creating these support functions dynamically
2019-11-25 20:46:25 +00:00
Khashayar Fereidani f81785ad14 Fix underflow initialization of default values
The memset initialization of queryWindowClause and queryOrderByLimit underflowed these variables: the sizeof argument covered only part of them, leaving the rest uninitialized.
Due to this invalid sizeof usage, future changes to this part of the program could cause a buffer overflow and corrupt the data returned by the function.
2019-11-25 19:25:51 +00:00
Onur TIRTIR bef32624c3
Escape extension name in extension command propagation (#3218) 2019-11-24 12:16:10 +03:00
Philip Dubé 99164398bf Fix potential segfault from standard_planner inlining functions 2019-11-21 18:47:36 +00:00
Philip Dubé c563e0825c Strip trailing whitespace and add final newline (#3186)
This brings files in line with our editorconfig file
2019-11-21 14:25:37 +01:00
Jelte Fennema 1d8dde232f
Automatically convert useless declarations using regex replace (#3181)
* Add declaration removal to CI

* Convert declarations
2019-11-21 13:47:29 +01:00
Onur TIRTIR 9961297d7b Improve extension command propagation logic and tests
* Improve extension command propagation tests

* patch for hardcoded citus extension name

(cherry picked from commit 0bb3dbac0afabda10e8928f9c17eda048dc4361a)
2019-11-21 11:24:39 +03:00
Marco Slot e0cccf7f9a Move C files into the appropriate directory 2019-11-16 11:36:17 +01:00
Hanefi Onaldi d82f3e9406
Introduce intermediate result broadcasting
In plain words, each distributed plan pulls the necessary intermediate
results to the worker nodes that the plan hits. This is primarily useful
in three ways. 

(i) If the distributed plan that uses intermediate
result(s) is a router query, then the intermediate results are only
broadcast to a single node.

(ii) If a distributed plan consists of only intermediate results, which
is not uncommon, the intermediate results are likewise broadcast to a
single node only.

(iii) If a distributed query hits a subset of the shards on multiple
workers, the intermediate results will be broadcast to the relevant
node(s).

The final item (iii) becomes crucial for append/range distributed
tables where typically the distributed queries hit a small subset of
shards/workers.

To do this, for each query for which Citus creates a distributed plan, we
keep track of the subPlans used in the queryTree and save them in the
distributed plan. Just before executing each subPlan, Citus first
determines every worker node that the distributed plan hits, and marks
every subPlan for broadcast to those nodes. Citus then repeats this
operation recursively for each subPlan that is itself a distributed plan,
since those plans may access different subPlans, which have to be
recorded as well.
2019-11-20 15:26:36 +03:00
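As a sketch with a hypothetical distributed `orders` table: the CTE below cannot be pushed down, so it becomes an intermediate result (a subPlan); under item (i), because the outer query routes to a single shard, that result is now sent only to the node holding it:

```sql
WITH top_customers AS (
    SELECT customer_id, count(*) AS cnt
    FROM orders GROUP BY customer_id ORDER BY cnt DESC LIMIT 10
)
SELECT o.*
FROM orders o JOIN top_customers t USING (customer_id)
WHERE o.customer_id = 42;  -- router query: subPlan broadcast to one node only
```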
Philip Dubé b7fef5c31a Miscellaneous cleanup in prep for collation propagation 2019-11-19 17:28:59 +00:00
Onur TIRTIR 26c306d188
Add extensions to distributed object propagation infrastructure (#3185) 2019-11-19 17:56:28 +03:00
SaitTalhaNisanci 2cb82ae9bd
create a utility method to mark tasks as failed (#3150) 2019-11-19 16:35:56 +03:00
SaitTalhaNisanci 306d159072
refactor AfterXactHostConnectionHandling (#3202) 2019-11-19 14:50:23 +03:00
Marco Slot 622462cad7 Return early in CitusHasBeenLoaded when creating a different extension 2019-11-15 03:00:20 +01:00
Önder Kalacı 40fa3862ce
Prevent Citus extension becoming distributed object (#3197)
Prevent the Citus extension itself from being distributed

Because that could prevent rolling upgrades, where users may
prefer to upgrade the version on the coordinator but not the workers.

There could be some other edge cases, so I'd prefer to keep the Citus
extension out of the picture for now.
2019-11-18 16:57:10 +01:00
Halil Ozan Akgul 5ae7b219ff Create the ALTER ROLE propagation 2019-11-18 18:31:28 +03:00
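A hedged sketch of the statements this propagation concerns, with a hypothetical role; the exact ALTER ROLE forms covered are an assumption here:

```sql
-- Assuming attribute changes like this are among the propagated forms.
ALTER ROLE app_user CONNECTION LIMIT 100;
```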
Nils Dijk 217890af5f
Feature: Expression in reference join (#3180)
DESCRIPTION: Expression in reference join

Fixed: #2582

This patch allows arbitrary expressions in the join clause when joining to a reference table. Examples of such joins can be found in CHbenCHmark queries 7, 8, 9 and 11; `mod((s_w_id * s_i_id),10000) = su_suppkey` and `ascii(substr(c_state,1,1)) = n2.n_nationkey`. Since the join is on a reference table, these queries can be pushed down to the workers.

To implement these queries we widen the `IsJoinClause` predicate so it no longer checks whether the expressions are of type `Var` after stripping the implicit coercions. Instead we treat a clause as a join clause when the `Var`s in it come from more than one table.

This allows more clauses to pass into the logical planner's `MultiNodeTree(...)` planning function. To compensate, we tighten `LocalJoin`, `SinglePartitionJoin` and `DualPartitionJoin` to check for direct column references when planning. This allows the planner to work with arbitrary join expressions on reference tables.
2019-11-18 16:25:46 +01:00
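A sketch following the CHbenCHmark expression quoted above, assuming `supplier` is a reference table and `stock` is distributed; the arbitrary expression in the join clause is now accepted:

```sql
SELECT su.su_name, count(*)
FROM stock s
JOIN supplier su ON mod((s.s_w_id * s.s_i_id), 10000) = su.su_suppkey
GROUP BY su.su_name;
```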
Önder Kalacı a4c90b6ee1
Make distributed object dependency logic follow up to extensions (#3195)
With this commit, we're slightly changing the dependency traversal
logic to enable extension propagation.

The main idea is to "follow" the extension dependencies, but not to
"apply" them.

Since some extension dependencies are base types, and base types
could have circular dependencies, we implement a logic to prevent
revisiting an already visited object.
2019-11-17 17:21:21 +01:00
Hadi Moshayedi d9dcba25e3 Plan reference/local table joins locally 2019-11-15 07:36:50 -08:00
Onder Kalaci 90943a6ce6 Do not include coordinator shards when round-robin is selected
When the user picks "round-robin" policy, the aim is that the load
is distributed across nodes. However, for reference tables on the
coordinator, since local execution kicks in immediately, round-robin
is ignored.

With this change, we're excluding the placement on the coordinator.
Although the approach seems a little invasive because of the
modifications to the placement list, that seems acceptable.

We could have done this in some other ways such as:

1) Add a field such as "Task->roundRobinPlacement", which is set to
the first element after RoundRobinPolicy is applied. During execution,
if that placement is local to the coordinator, skip it and try the
other, remote placements.

2) In TaskAccessesLocalNode()@local_execution.c, check
task_assignment_policy; if round-robin is selected and there is a local
placement on the coordinator, skip it. However, task assignment happens
at planning time while this decision would happen at execution time,
which could create weird edge cases.
2019-11-15 06:03:32 -08:00
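The setting in question; with this change, a reference-table placement on the coordinator is excluded from the rotation so the load actually spreads across worker nodes:

```sql
SET citus.task_assignment_policy TO 'round-robin';
```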
Hadi Moshayedi 15af1637aa Replicate reference tables to coordinator. 2019-11-15 05:50:19 -08:00
Hadi Moshayedi cb011bb30f Propagate isactive to metadata nodes. 2019-11-15 05:48:42 -08:00
SaitTalhaNisanci b9b7fd7660
add IsLoggableLevel utility function (#3149)
* add IsLoggableLevel utility function

* add function comment for IsLoggableLevel

* put ApplyLogRedaction to logutils
2019-11-15 14:59:13 +03:00
Jelte Fennema 1b2c438e69
Rename variables to not shadow globals in RHEL6 (#3194)
Fixes #2839
2019-11-15 12:12:24 +01:00
Jelte Fennema a8bd2d58f5
Update SQL definitions to prepare for drain node functionality (#3179) 2019-11-15 10:11:56 +01:00
Jelte Fennema 4b9b4b0995
Don't warn for declaration-after-statement since we only support GNU99 (#3132)
This change was actually already intended in #3124. However, the
postgres Makefile manually enables this warning too. This way we undo
that.

To confirm that it works, two functions were changed to take
advantage of the warning being gone.
2019-11-15 09:46:06 +01:00
Philip Dubé 495c0f5117 Phase 1 implementation of custom aggregates
Phase 1 seeks to implement minimal infrastructure, so does not include:
	- dynamic generation of support aggregates to handle multiple arguments
	- configuration methods to direct aggregation strategy,
		or mark an aggregate's serialize/deserialize as safe to operate across nodes

Aggregates can be distributed when:
	- they have a single argument
	- they have a combinefunc
	- their transition type is not a pseudotype
2019-11-14 19:01:24 +00:00
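A minimal sketch of an aggregate satisfying all three conditions, built from existing int4 functions (single int argument, non-pseudotype transition type, and a combinefunc):

```sql
CREATE AGGREGATE sum2 (int) (
    SFUNC = int4pl,        -- transition function
    STYPE = int4,          -- transition type (not a pseudotype)
    COMBINEFUNC = int4pl   -- required for distributed execution
);
-- Distributing it via the existing UDF is an assumption of this sketch:
SELECT create_distributed_function('sum2(int)');
```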
Philip Dubé edc7a2ee38 Improve RECORD support 2019-11-14 18:32:22 +00:00
Philip Dubé eb35743c3f Remove citus.worker_list_file & master_initialize_node_metadata 2019-11-13 00:49:58 +00:00