Before this commit, clusters created with a version lower than 8.1.0
would not have SSL enabled by default. This change enables SSL by
default in all cases. The only difference is that old clusters (created before
8.1.0) will have "sslmode=prefer" in their `citus.node_conninfo`
setting, while new clusters will have "sslmode=require". These settings are
equivalent from an attacker's point of view: they both protect against
passive eavesdropping, but are vulnerable to an active MITM attack.
The Citus coordinator (or MX nodes) caches `citus.max_cached_conns_per_worker` connections
per worker node. This means that those connections are not terminated after each statement;
instead, they are cached to avoid the cost of re-establishment. This is crucial for OLTP performance.
The problem with that approach is that we never properly handled the termination of
those cached connections. For instance, when a session on the coordinator disconnects,
you'd see the following logs on the workers:
```
2020-03-20 09:13:39.454 CET [64028] LOG: could not receive data from client: Connection reset by peer
```
With this patch, we properly terminate the cached connections when the coordinator session ends.
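A minimal sketch of the cleanup pattern, using plain C's `atexit()` as a stand-in for the backend's session-exit hook; the `CachedConnection` type, the connection list, and `CloseCachedConnections` are hypothetical names for illustration, not the actual Citus implementation:

```
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical cache of connections kept open between statements. */
typedef struct CachedConnection
{
	int fd;                          /* socket of the cached connection */
	struct CachedConnection *next;
} CachedConnection;

static CachedConnection *connectionCache = NULL;

/*
 * Hypothetical cleanup: walk the cache and terminate each connection
 * gracefully instead of letting process exit drop the sockets, which is
 * what produced the "connection reset by peer" logs on the workers.
 */
static void
CloseCachedConnections(void)
{
	for (CachedConnection *conn = connectionCache; conn != NULL; conn = conn->next)
	{
		printf("gracefully closing cached connection on fd %d\n", conn->fd);
		/* the real code would send a terminate message and close(fd) here */
	}
}

int
main(void)
{
	/* atexit() stands in for the backend's exit hook in this sketch */
	atexit(CloseCachedConnections);
	return 0;
}
```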
We're getting a lot of random failures on CI regarding connection errors. This
works around that by not running the tests that create lots of connections in parallel.
This is needed to automatically generate .bc (bitcode) files when
postgres is compiled with llvmjit support.
It also has the advantage that cmake is not required for the build
anymore.
As discussed with @JelteF, #3559 caused consistent errors on BSD (OSX). Given that a group of people use this environment for development, it is an undesirable change.
This reverts commit ca8f7119fe.
We have special logic to copy into intermediate results, which uses a
custom copy format called "result". Postgres internally does not know
this format, so if we use it for a local copy it will error out, saying
that it does not know this format.
Files are visible to all transactions, which means that we can use any
connection to access them. In order to reuse the existing logic, when we
have intermediate results (meaning we will write the results to a file),
it makes sense to preserve the same behavior of opening connections to
localhost. Therefore, if we have intermediate results we return false in
ShouldExecuteCopyLocally.
We can use local copy in INSERT..SELECT, so the check that disables
local execution is removed.
A test for local copy where the data size exceeds
LOCAL_COPY_FLUSH_THRESHOLD is also added.
use local execution with insert..select
If the current transaction is connected to the local group, we should not use
local copy, because we might not see some of the changes that were made
over the connection to the local group.
A copy will be executed locally if:
- local execution is enabled and the current transaction has accessed a local placement, or
- local execution is enabled and we are inside a transaction block.
So even if local execution is enabled, the copy will not run locally when we are not in a transaction block.
This will not run locally:
```
COPY distributed_table FROM STDIN;
....
```
This will run locally:
```
SET citus.enable_local_execution to 'on';
BEGIN;
COPY distributed_table FROM STDIN;
....
COMMIT;
```
There are 3 ways to do a copy in postgres programmatically:
- from a file
- from a program
- from a callback function
I have chosen to implement it with a callback function: the callback writes the rows of the copy to the output buffer, which is then used to insert tuples into the actual table.
For each shard id, we have a buffer that keeps the current rows to be written. We perform the actual copy operation when either:
- the copy buffer for the given shard id reaches a threshold, which is currently 512KB, or
- we reach the end of the copy.
The buffer size (512KB) is debatable. At a given time, we might allocate at most (local placement count * buffer size) of memory.
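As an illustration of the threshold-based flushing described above, here is a small self-contained sketch; `LocalCopyBuffer`, `AppendRowToShardBuffer`, and `FlushShardBuffer` are hypothetical names, and the flush simply prints instead of performing the actual local COPY:

```
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>

#define LOCAL_COPY_FLUSH_THRESHOLD (512 * 1024)   /* 512KB per shard buffer */

/* Hypothetical per-shard buffer accumulating serialized copy rows. */
typedef struct LocalCopyBuffer
{
	long shardId;
	char *data;
	size_t used;
	size_t capacity;
} LocalCopyBuffer;

/* Stand-in for the actual local COPY into the shard placement. */
static void
FlushShardBuffer(LocalCopyBuffer *buffer, bool isEndOfCopy)
{
	printf("copying %zu bytes into shard %ld (end of copy: %d)\n",
		   buffer->used, buffer->shardId, isEndOfCopy);
	buffer->used = 0;
}

/* Append one serialized row; flush once the buffer crosses the threshold. */
static void
AppendRowToShardBuffer(LocalCopyBuffer *buffer, const char *row, size_t rowLength)
{
	if (buffer->used + rowLength > buffer->capacity)
	{
		buffer->capacity = (buffer->used + rowLength) * 2;
		buffer->data = realloc(buffer->data, buffer->capacity);
	}
	memcpy(buffer->data + buffer->used, row, rowLength);
	buffer->used += rowLength;

	if (buffer->used >= LOCAL_COPY_FLUSH_THRESHOLD)
	{
		FlushShardBuffer(buffer, false);
	}
}

int
main(void)
{
	LocalCopyBuffer buffer = { .shardId = 102008, .data = NULL, .used = 0, .capacity = 0 };

	AppendRowToShardBuffer(&buffer, "1\t'hello'\n", 10);
	FlushShardBuffer(&buffer, true);   /* end of copy: flush whatever is left */

	free(buffer.data);
	return 0;
}
```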
The local copy uses the same copy format as the remote copy, meaning we serialize the data in that format and send it locally.
There was also the option to use ExecSimpleRelationInsert to insert
slots one by one, which would avoid the extra
serialization/deserialization, but some benchmarks showed that
using buffers is significantly better in terms of performance.
See this comment for more details: https://github.com/citusdata/citus/pull/3557#discussion_r389499054
On some distros (e.g. Red Hat 7) there are both cmake version 2 and cmake version 3, and
safestringlib requires cmake version 3. On those distros the version 3 binary is called
cmake3, so we try that one before falling back to the regular cmake binary.
DESCRIPTION: Fix left join shard pruning in pushdown planner
Due to #2481, which moves outer join planning through the pushdown planner, we caused a regression in the shard pruning behaviour for outer joins.
In the pushdown planner we take the union of the placement groups for all shards accessed by a query, based on the filters we see during planning. Unfortunately, the implicit filters of a left join are not available during this part, which causes the inner side of an outer join not to prune any shards away. When we then take the union of the placement groups, the result behaves as if no shards were pruned at all.
Since the inner side of an outer join will not return any rows when the outer side contains none, we observed that we do not have to add the shard intervals of the inner side of an outer join to the list of shard intervals to query.
Fixes: #3512
* reimplement ExecuteUtilityTaskListWithoutResults for local utility command execution
* introduce new functions for local execution of utility commands
* change ErrorIfTransactionAccessedPlacementsLocally logic for local utility command execution
* enable local execution for TRUNCATE command on distributed & reference tables
* update existing tests for local utility command execution
* enable local execution for DDL commands on distributed & reference tables
* enable local execution for DROP command on distributed & reference tables
* add normalization rules for cascaded commands
* add new tests for local utility command execution
In between the stat at the start of the loop and the unlink/rmdir at the end, the
item that the filename references might have changed. In some cases this
can be a security bug, but since we only delete the file/directory it
should not be one for us, as far as I can tell. It could in theory still
cause errors, though, if a file is changed into a directory by some
other process. This commit makes the code robust against that by not
using stat and only relying on error codes and retries.
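A minimal sketch of the stat-free approach, relying only on errno and a retry; the function name, the path, and the retry bound are illustrative, not the exact Citus code:

```
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Remove a path without stat'ing it first: try unlink(), and if the path
 * turns out to be a directory fall back to rmdir(). If another process
 * swaps the directory back to a file in between, retry the unlink once.
 */
static int
RemovePathWithoutStat(const char *path)
{
	for (int attempt = 0; attempt < 2; attempt++)
	{
		if (unlink(path) == 0)
		{
			return 0;
		}

		/* EPERM is what some systems return for unlink() on a directory */
		if (errno == EISDIR || errno == EPERM)
		{
			if (rmdir(path) == 0)
			{
				return 0;
			}

			/* it changed into a file again in the meantime: retry unlink() */
			if (errno == ENOTDIR)
			{
				continue;
			}
		}

		break;
	}

	return -1;
}

int
main(void)
{
	if (RemovePathWithoutStat("/tmp/citus_example_to_delete") != 0)
	{
		perror("RemovePathWithoutStat");
	}
	return 0;
}
```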
This fixes 3 bugs:
1. `strtoul` never underflows, so that branch was useless
2. `strtoul` returns ULONG_MAX instead of LONG_MAX when it overflows
3. `long` and `unsigned long` are not necessarily 64-bit; they can be
   either wider or narrower. So now `strtoll` and `strtoull` are used
   and 64-bit bounds are checked.
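A sketch of the corrected parsing for the signed case, using `strtoll` with errno plus explicit 64-bit bounds (the function name is illustrative; the unsigned case would use `strtoull` and UINT64_MAX in the same way):

```
#include <errno.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Parse a string into an int64, detecting overflow and underflow.
 * strtoll() sets errno to ERANGE on out-of-range input, and since
 * long long may be wider than 64 bits on some platforms we also
 * check the 64-bit bounds explicitly.
 */
static bool
ParseInt64(const char *input, int64_t *result)
{
	char *end = NULL;

	errno = 0;
	long long value = strtoll(input, &end, 10);

	if (errno == ERANGE || value > INT64_MAX || value < INT64_MIN)
	{
		return false;           /* overflow or underflow */
	}
	if (end == input || *end != '\0')
	{
		return false;           /* not a (complete) number */
	}

	*result = (int64_t) value;
	return true;
}

int
main(void)
{
	int64_t value = 0;

	printf("ok=%d\n", ParseInt64("9223372036854775807", &value));  /* fits */
	printf("ok=%d\n", ParseInt64("9223372036854775808", &value));  /* overflows */
	return 0;
}
```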
New stack memory can contain anything including passwords/private keys.
In these functions we return structs that can have their padding
bytes uninitialized. By first zeroing out the struct fully, we try to
ensure that any data that is in these padding bytes is at least
overwritten once. It might not be zero anymore after setting the fields,
but at least it shouldn't be private data anymore.
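A sketch of the pattern; the struct and field names are made up for illustration:

```
#include <string.h>
#include <stdio.h>

/* A struct whose layout typically contains padding bytes after 'port'. */
typedef struct ConnectionKey
{
	short port;
	/* implicit padding here on most ABIs */
	long nodeId;
} ConnectionKey;

/*
 * Zero the whole struct before setting its fields, so whatever stack
 * garbage (possibly passwords or key material) occupied the padding
 * bytes is overwritten at least once before the struct is returned.
 */
static ConnectionKey
MakeConnectionKey(short port, long nodeId)
{
	ConnectionKey key;

	memset(&key, 0, sizeof(key));
	key.port = port;
	key.nodeId = nodeId;

	return key;
}

int
main(void)
{
	ConnectionKey key = MakeConnectionKey(5432, 1);
	printf("port=%d nodeId=%ld\n", key.port, key.nodeId);
	return 0;
}
```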
Calling ErrorIfUnsupportedConstraint was still producing errors in Semmle. This
makes sure that we check for NULL at runtime, so that we can safely ignore all
errors Semmle reports for this function.
* Add third column to master_evaluation_modify table
It was already added in some tests; now it is made applicable throughout
the test file.
* Add third column to master_evaluation_select table
As we'll use the column in some tests
* Add modify regression tests
For the combinations of local/remote and router/fast-path:
- the distribution key is a const
- the query contains a function
- a column which is not the distribution key is parameterized
* Add select regression tests
For the combinations of local/remote and router/fast-path:
- the distribution key is a const
- the query contains a function
- a column which is not the distribution key is parameterized
* Make some tests consistent with check-base
As we don't have any other executors to run them with.
These schedules were added when we had both the adaptive executor and
the real-time/router executors in the code. Since we only have the
adaptive executor now, we can remove them.
Add failing tests and make changes to at least avoid crashes
Fix HAVING subquery pushdown ignoring reference-table-only subqueries;
also include HAVING in recursive planning
Given that we have a function IsDistributedTable which includes reference tables,
it seems best to have IsDistributedTableRTE & QueryContainsDistributedTableRTE
reflect that they do not include reference tables in their checks.
Similarly, SublinkList's name should reflect that it only scans the WHERE clause.
contain_agg_clause asserts that we don't have SubLinks, so we use
contain_aggs_of_level instead, as suggested by the PG source code.
Before this commit, we considered !ContainsRecurringRTE() enough
for NotContainsOnlyRecurringTuples. However, we can instead check
for the existence of any distributed table.
DESCRIPTION: Fixes a bug that causes wrong results with complex outer joins
When ExecutorSlowStartInterval is set to 0, it has the special meaning
that we do not want to use slow start. Therefore, the code has
checks such as ExecutorSlowStartInterval > 0 to determine whether it is
enabled. However, this is somewhat subtle and creates an extra
mapping in the reader's mind. Therefore, I thought that giving the
special value a name removes the mapping and makes the code easier to understand.
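A sketch of the idea; the constant name, the default value, and the function below are illustrative rather than the exact names used in the code:

```
#include <stdio.h>

/* 0 disables slow start entirely; give the special value a name. */
#define SLOW_START_DISABLED 0

static int ExecutorSlowStartInterval = 10;   /* milliseconds, illustrative default */

static void
OpenConnections(void)
{
	/* the named constant replaces the subtle "> 0" check */
	if (ExecutorSlowStartInterval != SLOW_START_DISABLED)
	{
		printf("slow start enabled, waiting %d ms between batches\n",
			   ExecutorSlowStartInterval);
	}
	else
	{
		printf("slow start disabled, opening connections eagerly\n");
	}
}

int
main(void)
{
	OpenConnections();

	ExecutorSlowStartInterval = SLOW_START_DISABLED;
	OpenConnections();
	return 0;
}
```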