Commit Graph

3397 Commits (605b9016375097a9642d8d976a0682becc0e0e3d)

Author SHA1 Message Date
Onur Tirtir 2e096d4eb9
Merge pull request #3531 from citusdata/refactor/utility-local
Refactor some pieces of code before implementing local drop & truncate execution
2020-02-24 18:35:07 +03:00
Onur Tirtir 873e9fd604 Refactor DropShards before introducing local DROP execution 2020-02-24 17:52:20 +03:00
Onur Tirtir 3c99db40b9 Some small typos & cleanup 2020-02-24 16:37:55 +03:00
Jelte Fennema 2a9fccc7a0
Remove READFUNCs (#3536)
We don't actually use these functions anymore since merging #1477.

Advantages of removing:
1. They add work whenever we add a new node.
2. They contain some usage of stdlib APIs that are banned by Microsoft.
   Removing them means we don't have to replace those with safe ones (a sketch of such a replacement is below).
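A minimal, hedged sketch of the kind of banned-call replacement referred to in point 2, assuming the common sprintf -> snprintf case; this is not the actual READFUNC code, and NAMEDATALEN is redefined here only to keep the snippet self-contained:

```c
#include <stdio.h>

#define NAMEDATALEN 64	/* postgres' identifier length, repeated here for a standalone example */

static void
CopyRelationName(char *dest, const char *source)
{
	/* sprintf(dest, "%s", source);   banned: no bound on the destination buffer */
	snprintf(dest, NAMEDATALEN, "%s", source);	/* bounded write, the accepted replacement */
}
```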
2020-02-24 12:43:28 +01:00
Philip Dubé c291fd5d11
Merge pull request #3522 from citusdata/fix-flaky-multi-extension-2
Address a couple of issues with maintenance daemon management
2020-02-21 17:10:25 +00:00
Philip Dubé bcf54c5014 Address a couple of issues with maintenance daemon management:
- Stop the daemon when citus extension is dropped
- Bail on maintenance daemon startup if myDbData is started with a non-zero pid
- Stop maintenance daemon from spawning itself
- Don't use postgres die, just wrap proc_exit(0)
- Assert(myDbData->workerPid == MyProcPid)

The two issues were that multiple daemons could be running for a database,
or that a daemon would be left over after DROP EXTENSION citus
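A hedged sketch of the startup bail-out described above, assuming a shared-memory entry per database guarded by an LWLock; the struct and function names here are illustrative, not the actual Citus code:

```c
#include "postgres.h"
#include "miscadmin.h"    /* MyProcPid */
#include "storage/ipc.h"  /* proc_exit */
#include "storage/lwlock.h"

/* illustrative per-database entry; the real shared-memory struct differs */
typedef struct MaintenanceDaemonDBData
{
	Oid databaseOid;
	pid_t workerPid;
} MaintenanceDaemonDBData;

static void
RegisterMaintenanceDaemonOrExit(MaintenanceDaemonDBData *myDbData, LWLock *lock)
{
	LWLockAcquire(lock, LW_EXCLUSIVE);

	if (myDbData->workerPid != 0)
	{
		/* another maintenance daemon is already registered for this database: bail */
		LWLockRelease(lock);
		proc_exit(0);
	}

	/* claim the slot so the daemon cannot spawn itself a second time */
	myDbData->workerPid = MyProcPid;
	LWLockRelease(lock);

	Assert(myDbData->workerPid == MyProcPid);
}
```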
2020-02-21 16:49:01 +00:00
Nils Dijk 6ee82c381e
Add missing pieces for version bump of #3482 (#3523) 2020-02-21 12:35:29 +01:00
Jelte Fennema 00d667c41d
Semmle: Fix obvious issues (#3502)
Fixes some obvious issues found by the Semmle static analysis tool.
2020-02-21 10:16:00 +01:00
Onur Tirtir 2c51057013
Merge pull request #3521 from citusdata/null-relationname
Fix null relation name due to DROP on distributed table in a transaction block
2020-02-20 10:18:44 +03:00
Onur Tirtir 926a1a61b9 Replace "relation" with "table" in error messages related to foreign keys on reference tables 2020-02-20 09:58:47 +03:00
Onur Tirtir 001089783c Fix null relation name issue in CheckConflictingRelationAccesses 2020-02-19 19:10:35 +03:00
Philip Dubé d66f011f71
Merge pull request #3494 from citusdata/use-instr-time
Prefer instr_time to TimestampTz when we want CLOCK_MONOTONIC
2020-02-19 00:42:52 +00:00
Philip Dubé 52042d4a00 Prefer instr_time to TimestampTz when we want CLOCK_MONOTONIC 2020-02-19 00:34:17 +00:00
Philip Dubé 36bb85e5c0
Merge pull request #3495 from citusdata/fix-information-schema-join
Add test for issue 2717, does not reproduce issue
2020-02-19 00:33:38 +00:00
Philip Dubé d7a4ffdc46 Add test for issue, does not reproduce issue 2020-02-18 23:45:17 +00:00
Philip Dubé 6d6ef54775
Merge pull request #3517 from citusdata/fix-typos
Fix typos
2020-02-18 23:43:56 +00:00
Philip Dubé 08f6842d50 Fix typos
Equivalance -> Equivalence
utillity -> utility
shorted lived one -> shortly lived one
elegible -> eligible
2020-02-18 17:14:40 +00:00
Marco Slot 3e7d4fd739
Merge pull request #3361 from citusdata/copy_out
Implement direct COPY table TO stdout
2020-02-17 17:34:29 +01:00
Marco Slot 038e5999cb Implement direct COPY table TO stdout 2020-02-17 15:15:10 +01:00
Jelte Fennema 3f7c5a5cf6
Semmle: Fix possible infinite loops caused by overflow (#3503)
Comparison between differently sized integers in loop conditions can cause
infinite loops. This can happen when doing something like this:

```c
int64 very_big = (int64) PG_INT32_MAX + 1;
for (int32 i = 0; i < very_big; i++) {
    // do something
}
// never reached because i overflows before it can reach the value of very_big
```
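A hedged sketch of a fix: widen the loop variable so it can actually reach the bound (PG_INT32_MAX is postgres' spelling of the 32-bit maximum):

```c
int64 very_big = (int64) PG_INT32_MAX + 1;
for (int64 i = 0; i < very_big; i++) {
    // do something
}
// now reached: i is as wide as very_big, so the loop terminates
```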
2020-02-17 14:35:10 +01:00
Jelte Fennema 15f1173b1d
Semmle: Ensure permissions of private keys are 0600 (#3506)
When using the --allow-group-access option from initdb, our keys and
certificates would be created with 0640 permissions, which is a pretty
serious security issue. This changes that. It would not be exploitable
though, since postgres would not actually enable SSL and would output
the following message in the logs:

```
DETAIL:  File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
```

Since citus still expected the cluster to have SSL enabled, handshakes
between workers and the coordinator would fail. So instead of a security
issue, the cluster would simply be unusable.
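A hedged illustration of the underlying idea (not the actual Citus change): create the key file with owner-only permissions so it never exists on disk with the group-readable 0640 mode:

```c
#include <fcntl.h>
#include <sys/stat.h>

/* illustrative helper: open a private key file with 0600 (u=rw) permissions */
static int
OpenPrivateKeyFile(const char *path)
{
	return open(path, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
}
```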
2020-02-17 12:58:40 +01:00
SaitTalhaNisanci 2b08916f93
Merge pull request #3491 from citusdata/refactor/runDistributedExecution
refactor RunDistributedExecution
2020-02-17 14:27:14 +03:00
SaitTalhaNisanci 9302e6e699 apply review items 2020-02-17 14:16:49 +03:00
SaitTalhaNisanci 1b78045867 rename AssignTasksToConnections to AssignTasksToConnectionsOrWorkerPool 2020-02-17 14:16:20 +03:00
SaitTalhaNisanci 355805c7d8 create ProcessWaitEvents for separating the logic of handling events 2020-02-17 14:16:20 +03:00
SaitTalhaNisanci c35981f9de create UpdateWaitEventSet for better readability 2020-02-17 14:16:20 +03:00
SaitTalhaNisanci a7e735a648 use a utility method to get event size 2020-02-17 14:16:20 +03:00
SaitTalhaNisanci 71f1aa48a3
remove unnecessary if check (#3500) 2020-02-17 14:15:36 +03:00
Markus Sintonen 099e266a6c Force task executor 2020-02-16 01:32:52 +02:00
Markus Sintonen cf8319b992 Add comment, add subquery NOT tests 2020-02-16 01:21:10 +02:00
Markus Sintonen 3d3d615040 Add comment about NOT_EXPR. Treat it as an invalid constraint for safety. 2020-02-15 16:54:38 +02:00
Philip Dubé 7382c8be00 Clean up from code review
Only change to behavior is:
- don't ignore array const's constcollid in SAORestrictions
- don't end lines with commas in DebugLogPruningInstance
2020-02-14 17:58:23 +00:00
Markus Sintonen cdedb98c54 Improve shard pruning logic to understand OR-conditions.
Previously, a limitation in the shard pruning logic caused multi distribution value queries to always go to all the shards/workers whenever the query also used OR conditions in the WHERE clause.

Related to https://github.com/citusdata/citus/issues/2593 and https://github.com/citusdata/citus/issues/1537
There was no good workaround for this limitation. The limitation caused quite a bit of overhead with simple queries being sent to all workers/shards (especially with setups having a lot of workers/shards).

An example of a previous plan which was inadequately pruned:
```
EXPLAIN SELECT count(*) FROM orders_hash_partitioned
	WHERE (o_orderkey IN (1,2)) AND (o_custkey = 11 OR o_custkey = 22);
                                                          QUERY PLAN
---------------------------------------------------------------------
 Aggregate  (cost=0.00..0.00 rows=0 width=0)
   ->  Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=0 width=0)
         Task Count: 4
         Tasks Shown: One of 4
         ->  Task
               Node: host=localhost port=xxxxx dbname=regression
               ->  Aggregate  (cost=13.68..13.69 rows=1 width=8)
                     ->  Seq Scan on orders_hash_partitioned_630000 orders_hash_partitioned  (cost=0.00..13.68 rows=1 width=0)
                           Filter: ((o_orderkey = ANY ('{1,2}'::integer[])) AND ((o_custkey = 11) OR (o_custkey = 22)))
(9 rows)
```

After this commit, the task count is what one would expect from a query defining multiple distinct values for the distribution column:
```
EXPLAIN SELECT count(*) FROM orders_hash_partitioned
	WHERE (o_orderkey IN (1,2)) AND (o_custkey = 11 OR o_custkey = 22);
                                                          QUERY PLAN
---------------------------------------------------------------------
 Aggregate  (cost=0.00..0.00 rows=0 width=0)
   ->  Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=0 width=0)
         Task Count: 2
         Tasks Shown: One of 2
         ->  Task
               Node: host=localhost port=xxxxx dbname=regression
               ->  Aggregate  (cost=13.68..13.69 rows=1 width=8)
                     ->  Seq Scan on orders_hash_partitioned_630000 orders_hash_partitioned  (cost=0.00..13.68 rows=1 width=0)
                           Filter: ((o_orderkey = ANY ('{1,2}'::integer[])) AND ((o_custkey = 11) OR (o_custkey = 22)))
(9 rows)
```

"Core" of the pruning logic works as previously where it uses `PrunableInstances` to queue ORable valid constraints for shard pruning.
The difference is that now we build a compact internal representation of the query expression tree with PruningTreeNodes before actual shard pruning is run.

Pruning tree nodes represent boolean operators and the associated constraints of it. This internal format allows us to have compact representation of the query WHERE clauses which allows "core" pruning logic to work with OR-clauses correctly.

For example, a query having
`WHERE (o_orderkey IN (1,2)) AND (o_custkey=11 OR (o_shippriority > 1 AND o_shippriority < 10))`
gets transformed into:
1. AND(o_orderkey IN (1,2), OR(X, AND(X, X)))
2. AND(o_orderkey IN (1,2), OR(X, X))
3. AND(o_orderkey IN (1,2), X)
Here X is any set of unknown condition(s) for shard pruning.

This allows the final shard pruning to correctly recognize that shard pruning is done with the valid condition of `o_orderkey IN (1,2)`.

Another example, with an unprunable condition: the query
`WHERE (o_orderkey IN (1,2)) OR (o_custkey=11 AND o_custkey=22)`
gets transformed into:
1. OR(o_orderkey IN (1,2), AND(X, X))
2. OR(o_orderkey IN (1,2), X)

This is recognized as unprunable due to the OR condition between the distribution column and an unknown constraint, so the query goes to all shards.

Issue https://github.com/citusdata/citus/issues/1537 originally suggested transforming the query conditions into full disjunctive normal form (DNF),
but transforming into DNF is quite a heavy operation. It may "blow up" into a really large DNF form for complex queries with non-trivial `WHERE` clauses.

I think the logic for shard pruning could be simplified further, but I decided to leave the "core" of the shard pruning untouched.
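A hedged sketch of the compact representation described above; the field names are illustrative rather than the exact struct used by the shard pruning code:

```c
#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/primnodes.h"

typedef struct PruningTreeNode
{
	BoolExprType boolop;		/* AND_EXPR or OR_EXPR joining this node's children */
	List *validConstraints;		/* constraints usable for pruning, e.g. dist_col = const */
	bool hasInvalidConstraints;	/* the "X" above: conditions pruning cannot use */
	List *childBooleanNodes;	/* nested AND/OR subtrees */
} PruningTreeNode;
```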
2020-02-14 17:58:13 +00:00
Jelte Fennema 3d8efe303e
Fix flaky test introduced by #3374 (#3504)
Since #3374 multi_utilities is not safe to run in parallel anymore. This
is because it now also shows locks on shards created outside its own
test. This is not really possible to fix.

Example of flaky test:
- https://circleci.com/gh/citusdata/citus/89995
- https://circleci.com/gh/citusdata/citus/90017
2020-02-14 16:07:33 +01:00
Jelte Fennema 5ef3e83ce4
Make multi_utilities test take 2 seconds instead of 20 (#3507)
On worker 2 it was waiting for dustbunnies_990001 to be
vacuumed/analyzed. This table doesn't actually exist, so that never
happened. Now it waits for the correct table and throws an error if it
waits more than 10 seconds.
2020-02-14 15:38:51 +01:00
Onur Tirtir e4dd5ac2ad
Update CHANGELOG for 9.2.1 (#3501) 2020-02-14 11:18:40 +03:00
SaitTalhaNisanci 72d1850b4e
enhance local executor description (#3499) 2020-02-13 20:19:08 +03:00
Önder Kalacı 6225f37b91
Merge pull request #3498 from citusdata/fix_null_crash
Do not prune shards if the distribution key is NULL
2020-02-13 17:21:31 +01:00
Onder Kalaci 975c4c2264 Do not prune shards if the distribution key is NULL
The root of the problem is that standard_planner() converts the following qual

```
   {OPEXPR
   :opno 98
   :opfuncid 67
   :opresulttype 16
   :opretset false
   :opcollid 0
   :inputcollid 100
   :args (
      {VAR
      :varno 1
      :varattno 1
      :vartype 25
      :vartypmod -1
      :varcollid 100
      :varlevelsup 0
      :varnoold 1
      :varoattno 1
      :location 45
      }
      {CONST
      :consttype 25
      :consttypmod -1
      :constcollid 100
      :constlen -1
      :constbyval false
      :constisnull true
      :location 51
      :constvalue <>
      }
   )
   :location 49
   }
```

To

```
(
   {CONST
   :consttype 16
   :consttypmod -1
   :constcollid 0
   :constlen 1
   :constbyval true
   :constisnull true
   :location -1
   :constvalue <>
   }
)
```

So, Citus doesn't deal with NULL values in real-time or non-fast path router queries.

And, in the FastPathRouter planner, we check constisnull in DistKeyInSimpleOpExpression().
However, in the deferred pruning case, we do not check isnull for the const.

Thus, the fix consists of two parts:
- Let PruneShards() not crash when a NULL parameter is passed
- For deferred shard pruning in fast-path queries, explicitly check that we have a CONST which is not NULL
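A hedged sketch of the second part; the function name is illustrative, but constisnull is the field on postgres' Const node that the deferred path needs to check:

```c
#include "postgres.h"
#include "nodes/primnodes.h"

/* for deferred pruning in fast-path queries: only use the resolved
 * distribution key value when it is a non-NULL constant */
static bool
DistributionKeyValueIsUsable(Const *distKeyConst)
{
	return distKeyConst != NULL && !distKeyConst->constisnull;
}
```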
2020-02-13 15:00:31 +01:00
Onur Tirtir cd8210d516
Bump citus version to 9.3devel (#3482) 2020-02-13 16:22:05 +03:00
Hadi Moshayedi fc1fe0244e
Merge pull request #3488 from citusdata/fix-typos
Fix typos noticed while reading through code trying to understand HAVING
2020-02-11 15:40:13 -08:00
Philip Dubé 3a906b8210 Fix typos noticed while reading through code trying to understand HAVING 2020-02-11 19:55:10 +00:00
Onur Tirtir ab0b49db82
fix uninitialized variable warning (#3483) 2020-02-11 15:44:31 +01:00
Onur Tirtir e660f4f854 Add changelog entry for 9.2.0 (#3463) 2020-02-10 11:03:39 +03:00
Onur Tirtir 39df51e903
Introduce objects to dist. infrastructure when updating Citus (#3477)
Mark existing objects that were not included in the distributed object
infrastructure in older versions of Citus (but now should be) as distributed,
after updating Citus successfully.
2020-02-07 18:07:59 +03:00
Nils Dijk d5433400f9
Fix: Unnecessary repartition on joins with more than 4 tables (#3473)
DESCRIPTION: Fix unnecessary repartition on joins with more than 4 tables

In 9.1 we introduced support for all CH-benCHmark queries by widening our definition of joins to include joins with expressions in them. This had the undesired side effect of Q5 regressing to a repartition join in its plan.

It turned out this regression was not directly related to the widening of the join clause, nor to the schema employed by CH-benCHmark. Instead it had to do with 4 or more tables being joined in a chain. A chain meaning:

```sql
SELECT * FROM a,b,c,d WHERE a.part = b.part AND b.part = c.part AND ....
```

Due to how our join order planner was implemented, it would only keep track of one of the partition columns when checking whether the join could be executed locally. This caused a join chain of 4 tables to _always_ be executed as a repartition join. With 3 tables joined in a chain, the middle table is shared by the two outer tables, which allows the local join possibility to be found.

With this patch we keep a unique list (or set) of all partition columns participating in the join. When a candidate table is checked for the possibility of executing a local join, we check whether any partition column in that set matches an equality join clause on the partition column of the candidate table.

By taking into account all partition columns in the left relation, it will now find the local join path for >= 4 tables joined in a chain.

fixes: #3276
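A hedged sketch of the check described above; EquiJoinExistsBetween is a hypothetical helper standing in for the existing join-clause matching logic:

```c
#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/primnodes.h"

/* hypothetical helper, declared only for this sketch */
extern bool EquiJoinExistsBetween(Var *leftColumn, Var *rightColumn,
								  List *applicableJoinClauses);

/*
 * Instead of remembering a single partition column for the relations joined
 * so far, keep a list of them and accept a local join if any member matches
 * an equality join clause on the candidate table's partition column.
 */
static bool
CandidateTableJoinsLocally(List *currentPartitionColumnList,
						   Var *candidatePartitionColumn,
						   List *applicableJoinClauses)
{
	ListCell *columnCell = NULL;

	foreach(columnCell, currentPartitionColumnList)
	{
		Var *partitionColumn = (Var *) lfirst(columnCell);

		if (EquiJoinExistsBetween(partitionColumn, candidatePartitionColumn,
								  applicableJoinClauses))
		{
			return true;
		}
	}

	return false;
}
```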
2020-02-06 15:07:07 +01:00
Philip Dubé 345455d765
Merge pull request #3461 from citusdata/fix-adaptive-repartition-join-leak
Fix adaptive repartition join leak
2020-02-05 17:43:23 +00:00
Philip Dubé ecad4aa5e6 Fill in jobIdList field of DistributedExecution
Pass down jobIdList from ExecuteTasksInDependencyOrder

Also clean up comment for ExecuteTaskListOutsideTransaction
2020-02-05 17:32:22 +00:00
Philip Dubé c252811884 dont: don't, wont: won't, acylic: acyclic 2020-02-05 17:32:22 +00:00
Halil Ozan Akgül fff3866844
Merge pull request #3472 from citusdata/grant_on_public_schema
Fix the bug in propagating grants on the public schema
2020-02-05 18:40:45 +03:00