DESCRIPTION: Fix left join shard pruning in pushdown planner
With #2481, which moved outer join planning through the pushdown planner, we caused a regression in the shard pruning behaviour for outer joins.
In the pushdown planner we take the union of the placement groups for all shards accessed by a query, based on the filters we see during planning. Unfortunately, the implicit filters for left joins are not available during this part of planning, so the inner side of an outer join does not prune any shards away. Once we take the union of the placement groups, the query behaves as if no shards had been pruned at all.
Since the inner side of an outer join will not return any rows where the outer side contains none, we have observed that we do not have to add the shard intervals of the inner side to the list of shard intervals to query.
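A minimal sketch of the resulting pruning rule, using hypothetical ShardInterval/ShardIntervalList types rather than the actual Citus planner structures: the shard intervals to query for a left join are just the (already pruned) outer-side intervals, and the inner side's (unpruned) intervals are never added to the union.

```c
/* hypothetical stand-ins, not the Citus planner types */
typedef struct ShardInterval
{
	long shardId;
} ShardInterval;

typedef struct ShardIntervalList
{
	ShardInterval **intervals;
	int count;
} ShardIntervalList;

/*
 * For a left join, a shard of the inner side can only contribute rows when
 * the matching shard of the outer side is queried, so the shard intervals to
 * query are simply the pruned outer-side intervals; the inner side adds
 * nothing to the union.
 */
static ShardIntervalList *
ShardIntervalsForLeftJoin(ShardIntervalList *prunedOuterIntervals,
						  ShardIntervalList *unprunedInnerIntervals)
{
	(void) unprunedInnerIntervals;	/* intentionally ignored, see above */
	return prunedOuterIntervals;
}
```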
Fixes: #3512
* reimplement ExecuteUtilityTaskListWithoutResults for local utility command execution (see the dispatch sketch after this list)
* introduce new functions for local execution of utility commands
* change ErrorIfTransactionAccessedPlacementsLocally logic for local utility command execution
* enable local execution for TRUNCATE command on distributed & reference tables
* update existing tests for local utility command execution
* enable local execution for DDL commands on distributed & reference tables
* enable local execution for DROP command on distributed & reference tables
* add normalization rules for cascaded commands
* add new tests for local utility command execution
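A rough sketch of the dispatch idea behind these changes; the UtilityTask type and the ExecuteLocally/ExecuteRemotely helpers below are hypothetical stand-ins, not the Citus API (the real entry point named above is ExecuteUtilityTaskListWithoutResults).

```c
#include <stdbool.h>
#include <stdio.h>

/* hypothetical stand-in for a utility task, not a Citus struct */
typedef struct UtilityTask
{
	const char *queryString;		/* e.g. a TRUNCATE, DDL, or DROP statement */
	bool accessesLocalPlacement;	/* the placement lives on this node */
} UtilityTask;

/* stubs for the sketch; the real code would actually run the command */
static void
ExecuteLocally(UtilityTask *task)
{
	printf("local execution: %s\n", task->queryString);
}

static void
ExecuteRemotely(UtilityTask *task)
{
	printf("remote execution: %s\n", task->queryString);
}

/* dispatch each utility task to local or remote execution */
static void
ExecuteUtilityTasks(UtilityTask *tasks, int taskCount)
{
	for (int taskIndex = 0; taskIndex < taskCount; taskIndex++)
	{
		UtilityTask *task = &tasks[taskIndex];

		if (task->accessesLocalPlacement)
		{
			ExecuteLocally(task);
		}
		else
		{
			ExecuteRemotely(task);
		}
	}
}
```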
Between the stat at the start of the loop and the unlink/rmdir at the end, the item that the filename references might have changed. In some cases this can be a security bug, but since we only delete the file/directory it should not be one for us, as far as I can tell. It could in theory still cause errors, though, if a file is changed into a directory by some other process. This commit makes the code robust against that by not using stat and relying only on error codes and retries.
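A hedged sketch of the stat-free approach in plain POSIX C, not the actual helper from this commit; it assumes the directory is already empty (the real code recurses first) and retries a few times in case another process swaps a file for a directory between the calls.

```c
#include <errno.h>
#include <unistd.h>	/* unlink, rmdir */

static int
RemovePathWithoutStat(const char *path)
{
	const int maxAttempts = 3;

	for (int attempt = 0; attempt < maxAttempts; attempt++)
	{
		if (unlink(path) == 0 || errno == ENOENT)
		{
			return 0;	/* removed, or already gone */
		}

		/* EISDIR (Linux) / EPERM (POSIX) indicate "this is a directory" */
		if (errno == EISDIR || errno == EPERM)
		{
			if (rmdir(path) == 0 || errno == ENOENT)
			{
				return 0;
			}

			/* ENOTDIR: it turned back into a file, retry the unlink */
			if (errno != ENOTDIR)
			{
				return -1;
			}
		}
		else
		{
			return -1;
		}
	}

	return -1;	/* gave up after repeated races */
}
```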
This fixes 3 bugs:
1. `strtoul` never underflows, so that branch was useless
2. `strtoul` returns ULONG_MAX, not LONG_MAX, when it overflows
3. `long` and `unsigned long` are not necessarily 64-bit; they can be
   either wider or narrower. So now `strtoll` and `strtoull` are used
   and 64-bit bounds are checked.
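A hedged sketch of the signed half of that approach, not the exact helper from the commit: parse with `strtoll`, check `errno` for ERANGE, and verify the explicit 64-bit bounds instead of assuming `long long` is exactly 64 bits wide.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

static bool
ParseInt64(const char *string, int64_t *result)
{
	char *endPointer = NULL;

	errno = 0;
	long long value = strtoll(string, &endPointer, 10);

	/* out of range of long long itself */
	if (errno == ERANGE)
	{
		return false;
	}

	/*
	 * Explicit 64-bit bounds check: a no-op where long long is exactly
	 * 64 bits, but required where it is wider.
	 */
	if (value < INT64_MIN || value > INT64_MAX)
	{
		return false;
	}

	/* no digits at all, or trailing garbage after the number */
	if (endPointer == string || *endPointer != '\0')
	{
		return false;
	}

	*result = (int64_t) value;
	return true;
}
```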
New stack memory can contain anything, including passwords or private keys. In these functions we return structs whose padding bytes can be uninitialized. By first zeroing out the struct fully, we try to ensure that any data in those padding bytes is overwritten at least once. The padding might not be zero anymore after the fields are set, but at least it should no longer contain private data.
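A minimal sketch of the pattern, using a hypothetical struct rather than the ones touched by the commit: zero the whole return value before populating its fields, so the padding bytes never carry stale stack contents.

```c
#include <string.h>

/* hypothetical struct, not one from the commit */
typedef struct WorkerNodeInfo
{
	int nodeId;
	char nodeName[64];
	int nodePort;
} WorkerNodeInfo;

static WorkerNodeInfo
MakeWorkerNodeInfo(int nodeId, const char *nodeName, int nodePort)
{
	WorkerNodeInfo nodeInfo;

	/*
	 * Zero the whole struct first so padding bytes never carry whatever was
	 * left on the stack (which could include passwords or private keys).
	 */
	memset(&nodeInfo, 0, sizeof(nodeInfo));

	nodeInfo.nodeId = nodeId;
	strncpy(nodeInfo.nodeName, nodeName, sizeof(nodeInfo.nodeName) - 1);
	nodeInfo.nodePort = nodePort;

	return nodeInfo;
}
```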
Calling ErrorIfUnsupportedConstraint was still triggering errors in Semmle. This makes sure that we check for NULL at runtime, so we can safely ignore all the errors Semmle reports for this function.
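A small sketch of the pattern only; the struct and function below are hypothetical stand-ins, not the actual ErrorIfUnsupportedConstraint() signature.

```c
#include <stddef.h>
#include <stdio.h>

/* hypothetical stand-in for the constraint node */
typedef struct Constraint
{
	int contype;
} Constraint;

static void
CheckConstraintOrError(Constraint *constraint)
{
	/*
	 * An explicit runtime NULL check lets static analyzers such as Semmle
	 * prove that every dereference below is safe, so the remaining alerts
	 * for this function can be ignored.
	 */
	if (constraint == NULL)
	{
		fprintf(stderr, "constraint should not be NULL\n");
		return;
	}

	/* ... inspect constraint->contype as before ... */
	(void) constraint->contype;
}
```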
* Add third column to master_evaluation_modify table
It was already added in some tests, but now we make it apply to the whole test file.
* Add third column to master_evaluation_select table
As we'll use the column in some tests
* Add modify regression tests
For the combinations of local/remote and router/fast-path:
- Distribution key is a const
- Query contains a function
- A column which is not the distribution key is parameterized
* Add select regression tests
For the combinations of local/remote and router/fast-path:
- Distribution key is a const
- Query contains a function
- A column which is not the distribution key is parameterized
* Make some tests consistent with check-base
As we don't have any other executors to run them with.
These schedules were added when we had both the adaptive executor and
the real-time/router executors in the code. Since only the adaptive
executor remains, we can remove these schedules.
Add failing tests and make changes to at least avoid crashes
Fix HAVING subquery pushdown ignoring reference-table-only subqueries;
also include HAVING in recursive planning
Given that we have a function IsDistributedTable which includes reference tables,
it seems best to have IsDistributedTableRTE & QueryContainsDistributedTableRTE
reflect that they do not include reference tables in their check
Similarly, SublinkList's name should reflect that it only scans the WHERE clause
contain_agg_clause asserts that we don't have SubLinks, so use
contain_aggs_of_level instead, as suggested by the PostgreSQL source code
Before this commit, we considered !ContainsRecurringRTE() enough
for NotContainsOnlyRecurringTuples. Instead, we can check
for the existence of any distributed table.
DESCRIPTION: Fixes a bug that causes wrong results with complex outer joins
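A hedged sketch of that logical change only; the Query type is opaque here and the helper declarations use assumed signatures rather than ones lifted from the Citus source.

```c
#include <stdbool.h>

/* opaque stand-in; the real Query node comes from PostgreSQL headers */
typedef struct Query Query;

/* assumed signatures for the helpers named in the text above */
extern bool ContainsRecurringRTE(Query *query);
extern bool QueryContainsDistributedTableRTE(Query *query);

static bool
NotContainsOnlyRecurringTuples(Query *query)
{
	/*
	 * Old check: !ContainsRecurringRTE(query), i.e. no recurring RTEs at all.
	 * New check: require at least one distributed table RTE, which is the
	 * property the callers actually rely on.
	 */
	return QueryContainsDistributedTableRTE(query);
}
```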
When ExecutorSlowStartInterval is set to 0, it has the special meaning
that we do not want to use slow start. The code therefore has checks
such as ExecutorSlowStartInterval > 0 to determine whether it is
enabled. However, this is subtle and creates an extra mapping in our
minds, so giving the special value its own named variable removes that
mapping and makes the code easier to understand.
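A tiny sketch of the idea; the constant name below is an assumption rather than the one introduced by the commit, and ExecutorSlowStartInterval is modelled as a plain int instead of the GUC-backed variable.

```c
#include <stdbool.h>

/* hypothetical name for the special "slow start disabled" value */
#define SLOW_START_DISABLED 0

static int ExecutorSlowStartInterval = SLOW_START_DISABLED;

static bool
SlowStartEnabled(void)
{
	/* reads as "is slow start enabled", instead of a bare "> 0" check */
	return ExecutorSlowStartInterval != SLOW_START_DISABLED;
}
```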
As @onderkalaci suggested, remove the definition of GetWorkerNodeCount(), which can potentially cause misunderstandings.
ActiveReadableWorkerNodeCount(), which returns the number of active primaries, is a safer alternative to GetWorkerNodeCount(), which returns the total number of workers including inactive, primary, and unavailable nodes. I introduced bug #3556 and, in the bugfix #3564, removed the single usage of said function.
Fix early exits on intermediate result pruning
There are 2 problems with our early exit strategy that this commit fixes:
1- When we decide that a subplan's results are sent to all worker nodes,
we used to skip traversing the whole distributed plan, instead of
skipping only that subplan.
2- We used to consider all available nodes in the cluster (secondaries
and inactive nodes as well as active primaries) when deciding on the early
exit strategy. This resulted in failures to exit early when there are
secondaries or inactive nodes.
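A hedged sketch of both fixes, with hypothetical stand-in types rather than the intermediate result pruning code itself: skip only the broadcast subplan instead of returning from the whole traversal, and count only active primaries when deciding whether results go to all nodes.

```c
#include <stdbool.h>

/* hypothetical stand-ins, not the intermediate result pruning structures */
typedef struct SubPlanInfo
{
	bool resultSentToAllWorkers;
} SubPlanInfo;

typedef struct NodeInfo
{
	bool isActivePrimary;
} NodeInfo;

/* fix 2: only active primaries count when deciding "sent to all nodes" */
static int
ActivePrimaryCount(NodeInfo *nodes, int nodeCount)
{
	int activePrimaryCount = 0;

	for (int nodeIndex = 0; nodeIndex < nodeCount; nodeIndex++)
	{
		if (nodes[nodeIndex].isActivePrimary)
		{
			activePrimaryCount++;
		}
	}

	return activePrimaryCount;
}

static void
PruneIntermediateResults(SubPlanInfo *subPlans, int subPlanCount)
{
	for (int subPlanIndex = 0; subPlanIndex < subPlanCount; subPlanIndex++)
	{
		if (subPlans[subPlanIndex].resultSentToAllWorkers)
		{
			/*
			 * fix 1: skip only this subplan (continue) rather than returning
			 * and skipping the traversal for the remaining subplans
			 */
			continue;
		}

		/* ... record which nodes actually need this subplan's result ... */
	}
}
```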