Commit Graph

1634 Commits (4dd22cc4e429f56f4619c1eb0a47e03747e6e52d)

Author SHA1 Message Date
Sait Talha Nisanci 17388e2e91 update some tests 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci de82d0ff79 add output for pg13 for propagate extension commands
CREATE EXTENSION <name> FROM <old_version> is not supported anymore with
postgres 13. An alternative output file is added for pg13 where we basically
error out for that statement.
2020-08-04 15:18:27 +03:00
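A minimal sketch of the behaviour change described above, using citus as an illustrative extension name (any extension applies):
```sql
-- PostgreSQL 13 removed the FROM clause of CREATE EXTENSION, so this now errors:
CREATE EXTENSION citus FROM unpackaged;

-- The supported path is a plain create followed by an update:
CREATE EXTENSION citus;
ALTER EXTENSION citus UPDATE;
```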
Sait Talha Nisanci 80d2bc2317 normalize some output and sort test result 2020-08-04 15:18:27 +03:00
Sait Talha Nisanci 0f6c21d418 sort result in ch_bench_having_mx test 2020-08-04 15:10:22 +03:00
Sait Talha Nisanci 70f27c10e5 Add some normalization rules for tests
The not-null constraint message changed slightly with pg13, hence a
normalization rule is added for that, which converts it to the pg < 13
output.

Commit on postgres:
05f18c6b6b6e4b44302ee20a042cedc664532aa2

An extra debug message related to indexes was added on the postgres side; these
messages are safe to ignore, so we can delete them from the tests.

Commit on Postgres side:
612a1ab76724aa1514b6509269342649f8cab375

varnoold is renamed to varnosyn and varoattno is renamed to varattnosyn,
so in the output we normalize these values back to the old names to simply pass
the tests.
2020-08-04 15:10:22 +03:00
Onder Kalaci eeb8c81de2 Implement shared connection count reservation & enable `citus.max_shared_pool_size` for COPY
With this patch, we introduce `locally_reserved_shared_connections.c/h` files
which are responsible for reserving some space in shared memory counters
upfront.

We sometimes need to reserve connections, but not necessarily
establish them. For example:
-  The COPY command should reserve connections because it cannot know which
   connections it needs in which order. COPY establishes connections
   as input data hits the workers; a router COPY command, for example,
   only establishes 1 connection.

   As discussed here (https://github.com/citusdata/citus/pull/3849#pullrequestreview-431792473),
   COPY needs to reserve connections up-front, otherwise we can end
   up with resource starvation/undetected deadlocks.
2020-08-03 18:51:40 +02:00
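A rough sketch of the behaviour described in the commit above; table and file names are hypothetical:
```sql
-- Cap the total number of connections Citus may open per worker node, then run
-- a COPY; with this patch COPY reserves shared-pool slots up front instead of
-- claiming them lazily as data hits the workers.
ALTER SYSTEM SET citus.max_shared_pool_size = 10;
SELECT pg_reload_conf();

COPY distributed_events FROM '/tmp/events.csv' WITH (FORMAT csv);
```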
nukoyluoglu 38987431e7
propagation of CHECK statements to workers with parentheses (#4039)
* ensure propagation of CHECK statements to workers with parentheses & adjust regression test outputs

* add tests for distributing tables with simple CHECK constraints

* added test for CHECK on bool variable
2020-07-27 15:08:37 +03:00
Benjamin Satzger a35a15a513
Distribute custom aggregates with multiple arguments (#4047)
Enable custom aggregates with multiple parameters to be executed on workers.

#2921 introduces distributed execution of custom aggregates. One of the limitations of this feature is that only aggregate functions with a single aggregation parameter can be pushed to worker nodes. The aim of this change is to remove that limitation and support handling of multi-parameter aggregates.

Resolves: #3997
See also: #2921
2020-07-24 15:16:00 -07:00
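A sketch of the kind of aggregate this enables; the aggregate, its state function, and the table are hypothetical:
```sql
-- A two-argument custom aggregate; with this change such aggregates can be
-- pushed down to the workers instead of pulling all rows to the coordinator.
CREATE FUNCTION weighted_sum_sfunc(state numeric, val numeric, weight numeric)
RETURNS numeric LANGUAGE sql AS $$ SELECT coalesce(state, 0) + val * weight $$;

CREATE AGGREGATE weighted_sum(numeric, numeric) (
    SFUNC = weighted_sum_sfunc,
    STYPE = numeric,
    INITCOND = '0'
);

SELECT weighted_sum(amount, weight) FROM orders_distributed;
```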
Halil Ozan Akgul 38b72ddd66 Fixes create index concurrently bug 2020-07-24 12:14:14 +03:00
Halil Ozan Akgül e9f89ed651
Fixes the non existing table bug (#4058) 2020-07-23 18:01:21 +03:00
Sait Talha Nisanci 01c23b0df2 update test outputs with task-tracker removal 2020-07-21 16:25:08 +03:00
Sait Talha Nisanci 1dbd545cf4 replace task-tracker with adaptive in tests 2020-07-21 16:21:01 +03:00
Sait Talha Nisanci 4308d867d9 remove task-tracker in comments, documentation 2020-07-21 16:21:01 +03:00
Hanefi Önaldı e534dbae4a
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values, which is indicated by the GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
defaults for the affected users and databases are stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
2020-07-21 03:49:57 +03:00
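A minimal sketch; role and schema names are hypothetical:
```sql
-- search_path is a GUC_LIST_INPUT setting, so the comma-separated value below
-- must be split into items before being propagated.
ALTER ROLE app_user SET search_path TO public, app_schema;
```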
Nils Dijk 00a4a15d95
fix sorting on string literal (#4045)
As noted by Talha https://github.com/citusdata/citus/pull/4029#issuecomment-660466972 there was still some sort order flappiness in the test.

The root cause is that sorting on `1::text` sorts on the literal `'1'`, which causes the sort order to be non-deterministic.

This behaviour is consistent with Postgres' behaviour, so it is not a bug on Citus' side.
2020-07-20 17:39:27 +02:00
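A small illustration of the difference:
```sql
-- Ordering by the constant 1::text does not order the rows, so the output order
-- is unspecified; ordering by the column position 1 actually sorts the result.
SELECT s FROM generate_series(1, 3) AS t(s) ORDER BY 1::text;  -- order not guaranteed
SELECT s FROM generate_series(1, 3) AS t(s) ORDER BY 1;        -- sorted by s
```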
Onder Kalaci c25de2cf22 Remove flag from
As it doesn't make any sense anymore
2020-07-20 12:45:05 +02:00
SaitTalhaNisanci b3af63c8ce
Remove task tracker executor (#3850)
* use adaptive executor even if task-tracker is set

* Update check-multi-mx tests for adaptive executor

Basically, repartition joins are enabled where necessary. For parallel
tests the max adaptive executor pool size is decreased to 2, otherwise we
would get a "too many clients" error.

* Update limit_intermediate_size test

It seems that when we use the adaptive executor instead of the task tracker, we
exceed the intermediate result size less often in the test. The tests are
therefore updated accordingly.

* Update multi_router_planner

It seems that there is one problem with multi_router_planner when we use
the adaptive executor; we should fix the following error:
+ERROR:  relation "authors_range_840010" does not exist
+CONTEXT:  while executing command on localhost:57637

* update repartition join tests for check-multi

* update isolation tests for repartitioning

* Error out if shard_replication_factor > 1 with repartitioning

As we are removing the task tracker, we cannot switch to it if
shard_replication_factor > 1. In that case, we simply error out.

* Remove MULTI_EXECUTOR_TASK_TRACKER

* Remove multi_task_tracker_executor

Some utility methods are moved to task_execution_utils.c.

* Remove task tracker protocol methods

* Remove task_tracker.c methods

* remove unused methods from multi_server_executor

* fix style

* remove task tracker specific tests from worker_schedule

* comment out task tracker udf calls in tests

We were using task tracker udfs to test permissions in
multi_multiuser.sql. We should find some other way to test them, then we
should remove the commented out task tracker calls.

* remove task tracker test from follower schedule

* remove task tracker tests from multi mx schedule

* Remove task-tracker specific functions from worker functions

* remove multi task tracker extra schedule

* Remove unused methods from multi physical planner

* remove task_executor_type related things in tests

* remove LoadTuplesIntoTupleStore

* Do initial cleanup for repartition leftovers

During startup, the task tracker would call TrackerCleanupJobDirectories and
TrackerCleanupJobSchemas to clean up leftover directories and job
schemas. With the adaptive executor, while doing repartitions it is possible
to leak these things as well. We don't retry cleanups, so it is possible
to have leftovers in case of errors.

TrackerCleanupJobDirectories is renamed to
RepartitionCleanupJobDirectories since it is repartition specific now;
however, TrackerCleanupJobSchemas cannot be used currently because it is
task tracker specific. The thing is that this function is a no-op
currently.

We should add cleaning up intermediate schemas to the DoInitialCleanup
method when that problem is solved. (We might want to solve it in this PR
as well.)

* Revert "remove task tracker tests from multi mx schedule"

This reverts commit 03ecc0a681.

* update multi mx repartition parallel tests

* not error with task_tracker_conninfo_cache_invalidate

* not run 4 repartition queries in parallel

It seems that when we run 4 repartition queries in parallel we get a "too
many clients" error on CI even though we don't get it locally. Our guess
is that this is because we open/close many connections without doing much
work and postgres has some delay when closing the connections. Hence, even
though connections are removed from pg_stat_activity, they might
still not be closed. If the above assumption is correct, it is unlikely
to happen in practice because:
- There is some network latency in clusters, which leaves some time
for connections to close
- Repartition joins return some data, which also leaves some time for
connections to be fully closed.

As we don't get this error locally, we currently assume that it is
not a bug. Ideally this wouldn't happen once we get rid of the
task-tracker repartition methods, because they don't do any pruning and
might be opening more connections than necessary.

If this still gives us the "too many clients" error, we can try to increase
max_connections in our test suite (which is 100 by default).

Also there are different places where this error is given in postgres,
but adding some backtrace it seems that we get this from
ProcessStartupPacket. The backtraces can be found in this link:
https://circleci.com/gh/citusdata/citus/138702

* Set distributePlan->relationIdList when it is needed

It seems that we were setting the distributedPlan->relationIdList after
JobExecutorType is called, which would choose task-tracker if
replication factor > 1 and there is a repartition query. However, it
uses relationIdList to decide if the query has a repartition query, and
since it was not set yet, it would always think it is not a repartition
query and would choose adaptive executor when it should choose
task-tracker.

* use adaptive executor even with shard_replication_factor > 1

It seems that we were already using adaptive executor when
replication_factor > 1. So this commit removes the check.

* remove multi_resowner.c and deprecate some settings

* remove TaskExecution related leftovers

* change deprecated API error message

* not recursively plan single relation repartition subquery

* recursively plan single relation repartition subquery

* test deprecated task tracker functions

* fix overlapping shard intervals in range-distributed test

* fix error message for citus_metadata_container

* drop task-tracker deprecated functions

* put the implementation back into worker_cleanup_job_schema_cache since citus cloud uses it

* drop some functions, add downgrade script

Some deprecated functions are dropped.
A downgrade script is added.
Some GUCs are deprecated.
A new GUC for the repartition join bucket size is added.

* order by a test to fix flappiness
2020-07-18 13:11:36 +03:00
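A rough sketch of how repartition joins run after this removal; table names are hypothetical:
```sql
-- With the task tracker gone, repartition joins go through the adaptive
-- executor; enabling them only requires this GUC.
SET citus.enable_repartition_joins TO on;

SELECT count(*)
FROM orders o
JOIN order_events e ON o.customer_id = e.customer_id;
```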
Marco Slot 9cb8dc9d12 Improve error message when creating a foreign key to a local table 2020-07-13 13:57:22 +02:00
SaitTalhaNisanci bc011a6286
Add IsCitusTable check to citus table utilities (#4028) 2020-07-14 18:29:33 +03:00
Nils Dijk 23d44eba9f
fix flappy tests due to undeterministic order of test output (#4029)
As reported on #4011 https://github.com/citusdata/citus/pull/4011/files#r453804702 some of the tests were flapping due to a non-deterministic order of test outputs.

This PR makes the test output ordered for all tests returning non-zero rows.

Needs to be backported to 9.2, 9.3, 9.4
2020-07-14 15:47:29 +02:00
SaitTalhaNisanci ab5be77709
test coordinator reference-distributed table join (#3698) 2020-07-14 11:43:03 +03:00
Sait Talha Nisanci 1b5ed45a58 add multi follower repartition tests 2020-07-13 19:50:50 +03:00
Sait Talha Nisanci 510535f558 address feedback 2020-07-13 19:45:02 +03:00
Sait Talha Nisanci 41ec76a6ad use ActiveReadableNodeList in JobExecutorType and task tracker
The reason we should use ActiveReadableNodeList instead of ActiveReadableNonCoordinatorNodeList is that if the coordinator is added to the cluster as a worker, it should be counted as well. Otherwise, if only the coordinator is in the cluster, the count will be 0 and we get a warning.

In MultiTaskTrackerExecute, we should connect to the coordinator if it is
added to the cluster because it will also be assigned tasks.
2020-07-13 19:45:02 +03:00
Sait Talha Nisanci d97d03ec65 use ActivePrimaryNodeList to include coordinator
ActiveReadableWorkerNodeList doesn't include the coordinator; however, if the
coordinator is added as a worker, we should also include it while
planning. The current methods are very easy to misuse, and this
calls for a refactoring that makes the distinction between methods that
include the coordinator and those that don't very explicit, as they can introduce
subtle/major bugs pretty easily.
2020-07-13 19:20:15 +03:00
Sait Talha Nisanci db1b78148c send schema creation/cleanup to coordinator in repartitions
We were using the ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even when it is added as a worker to the cluster.

So, to give some context here: in repartitions, for each jobId we create
(or at least we were supposed to create) a schema on each worker node in the cluster. Then we partition each shard table into some intermediate files, which is called the PARTITION step; after this step each node holds some intermediate files containing tuples. Then we fetch the partition files to the necessary worker nodes, which is called the FETCH step. Then from the files we create intermediate tables in the temporarily created schemas, which is called the MERGE step. Finally, after evaluating the result, we remove the temporary schemas (one for each job ID on each node) and files.

If node 1 has file1, and node 2 has file2 after PARTITION step, it is
enough to either move file1 from node1 to node2 or vice versa. So we
prune one of them.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, which was
actually added for testing purposes in the past.

So when we were not sending schema creation commands for each job ID to
the coordinator (because we were using the ALL_WORKERS flag, which doesn't
include the coordinator), we would basically not have any schemas for
repartitions on the coordinator. The PARTITION step would be executed on
the coordinator (because the tasks are generated in the planner part)
and it wouldn't give us any error because it doesn't have anything to do
with the temporary schemas (that we didn't create). But later one of two things
would happen:

- If by chance the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because it has all the information.
- If the fetch tasks are not pruned on the coordinator, then in the MERGE
step the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under the public schema (if it is a superuser). But then, if we had the same
task ID with a different jobID, it would fail saying that the table already
exists, which is an error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, so we would leave the partitioned files from the
PARTITION step behind, and ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is renamed to ALL_NON_COORD_WORKERS, and a new flag that returns
all the active nodes is added as ALL_DATA_NODES. For the repartition case,
we don't need the nodes that only hold reference tables, but this version makes the
code simpler and there shouldn't be any significant performance issue
with that.
2020-07-13 19:20:15 +03:00
SaitTalhaNisanci 76ddb85545
improve error message in secondaries (#4025) 2020-07-13 19:18:57 +03:00
Nils Dijk 449d1f0e91
force aliases in deparsing for queries with anonymous column references (#4011)
DESCRIPTION: Force aliases in deparsing for queries with anonymous column references

Fixes: #3985  

The root cause has to do with discrepancies in the query tree we create. I think in the future we should spend some time on categorising all the changes we made to ruleutils and see if we can change the `query` data structure we pass to the deparser so that it is an actual valid postgres query for the deparser to render.

For now the fix is to keep track, besides changing the names of the entries in the target list, of whether we have a reference to an anonymous column. If there are anonymous columns we set the `printaliases` flag to true, which forces the deparser to add the aliases.
2020-07-13 16:29:24 +02:00
Hadi Moshayedi 3651fc64ee Fix Subtransaction memory leak 2020-07-09 12:33:39 -07:00
Jelte Fennema 16242d5264
Fix write queries with const expressions and COLLATE in various places (#3973) 2020-07-08 18:19:53 +02:00
Jelte Fennema ab01571c9e
Fix crash with single node dummy placement (#3993)
Static analysis found an issue where we could dereference `NULL`, because 
`CreateDummyPlacement` could return `NULL` when there were no workers. This
PR changes it so that it never returns `NULL`, which was intended by 
@marcocitus when doing this change: https://github.com/citusdata/citus/pull/3887/files#r438136433

While adding tests for citus on a single node I also added some more basic
tests and it turns out we error out on repartition joins. This has been
present since `shouldhaveshards` was introduced and is not trivial to fix.
So I created a separate issue for this: https://github.com/citusdata/citus/issues/3996
2020-07-08 17:11:25 +02:00
Philip Dubé 444472ffc6 ruleutils: use get_rtable_name for deparsing resultRelation 2020-07-07 12:20:41 +00:00
Marco Slot b4fec63bc0 Rename master evaluation to coordinator evaluation 2020-07-07 10:37:41 +02:00
Jelte Fennema 8ab47f4f37
Add a CI check to see if all tests are part of a schedule (#3959)
I recently forgot to add tests to a schedule in two of my PRs. One of
these was caught by review, but the other one was not. This adds a
script to causes CI to ensure that each test in the repo is included in
at least one schedule.

Three tests were found that were currently not part of a schedule. This PR
adds those three tests to a schedule as well and it also fixes some small
issues with these tests.
2020-07-03 11:34:55 +02:00
Jelte Fennema 9311978487 Add README for CI scripts
We keep accumulating more and more scripts to flag issues in CI. This is
good, but we are currently missing consistent documentation for them.
This commit moves all these scripts to the `ci` directory and adds some
documentation for all of them in the README. It also makes sure that the
last line of output of a failed script points to this documentation.
2020-07-03 10:22:48 +02:00
Onur Tirtir be17ebb334 Bump citus version to 9.5devel 2020-07-01 14:46:55 +03:00
Hanefi Önaldı ca2ececb3b
Downgrade path from 9.4 to 9.3 to 9.2 2020-07-01 10:38:11 +03:00
Sait Talha Nisanci e5a21f07cb test aggregates with expressions 2020-06-30 11:41:16 -07:00
Jelte Fennema 392c5e2c34
Fix wrong cancellation message about distributed deadlocks (#3956) 2020-06-30 14:57:46 +02:00
Jelte Fennema 02fa942be1
Fix assertion error when rolling back to savepoint (#3868)
It was possible to get an assertion error, if a DML command was
cancelled that opened a connection and then "ROLLBACK TO SAVEPOINT" was
used to continue the transaction. The reason for this was that canceling
the transaction might leave the `claimedExclusively` flag on for (some
of) it's connections.

This caused an assertion failure because `CanUseExistingConnection`
would return false and a new connection would be opened, and then there
would be two connections doing DML for the same placement. Which is
disallowed. That this situation caused an assertion failure instead of
an error, means that without asserts this could possibly result in some
visibility bugs, similar to the ones described
https://github.com/citusdata/citus/issues/3867
2020-06-30 11:31:46 +02:00
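A sketch of the problematic pattern described above; the table name is hypothetical:
```sql
BEGIN;
SAVEPOINT before_update;
-- Assume this statement gets cancelled after opening a connection...
UPDATE dist_events SET payload = payload WHERE key = 1;
ROLLBACK TO SAVEPOINT before_update;
-- ...continuing DML on the same placement could previously trip the assertion.
UPDATE dist_events SET payload = payload WHERE key = 1;
COMMIT;
```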
Hadi Moshayedi 4ed59d2db3 Move more from insert_select_executor to insert_select_planner 2020-06-26 08:08:26 -07:00
Hadi Moshayedi cd25a27174 Fix crash caused by EXPLAIN EXECUTE INSERT ... SELECT 2020-06-25 08:55:48 -07:00
Hadi Moshayedi 4e8d79998e Save INSERT/SELECT method in DistributedPlan.
This is so we don't need to calculate it twice in
insert_select_executor.c and multi_explain.c, which could
cause a discrepancy if an update in one of them is not
reflected in the other.
2020-06-25 08:55:48 -07:00
Jelte Fennema 64506143e4
Replace flaky repartition analyze test with a non flaky one (#3950)
The flaky test was introduced in #3941. This removes that flaky test and
adds a new one that fails in the same manner when removing the fix in #3941.

An example of a random failure can be found here:
https://app.circleci.com/pipelines/github/citusdata/citus/9558/workflows/de76e7a5-6558-46c9-97e7-8b1dae1f173b/jobs/135876/steps
2020-06-25 15:19:15 +02:00
SaitTalhaNisanci 50e115fe3a
test task tracker repartition with replication >1 (#3944) 2020-06-24 14:54:20 +03:00
SaitTalhaNisanci f458d1fd1c
Fix/task execution (#3941)
* Not set TaskExecution with adaptive executor

The adaptive executor uses a utility method from the task tracker for
repartition joins; however, the adaptive executor doesn't need taskExecution,
which is only used by the task tracker. This causes a problem when explain
analyze is used, because what taskExecution points to might be
random.

We solve this by not setting taskExecution from adaptive executor. So it
will stay NULL as set by CreateTask.

* use same memory context as task for taskExecution

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-24 12:10:00 +03:00
Philip Dubé cd0b2ad5b5 citus_evaluate_expression: call expand_function_arguments beforehand to avoid segfaulting on implicit parameters 2020-06-23 18:06:46 +00:00
Jelte Fennema b3ec6fbe7a
Make check_enterprise_merge script stricter (#3918)
We've had two issues with merge conflicts to enterprise in the last week that
happened suddenly. Because of this CI check, such conflicts actually block all community
PRs from being merged.

This PR tries to improve on the previous script we had, by putting tougher
constraints on when a merge is allowed.

Previously the check would pass in two cases:
1. This PR be merged without conflicts into `enterprise-master`
2. A branch exists with the same name as this PR on enterprise and that can be
   merged into `enterprise-master`.

The first case stays the same, but I've changed the second case to require the
following instead:
1. A branch exists on enterprise with the same name as this PR
2. **NEW: This branch contains the last commit of the community PR branch**
3. This branch can be merged into enterprise-master

This makes sure the enterprise branch is actually up to date and not forgotten about.

If we still get problems with this change, future improvements could be:
1. Check that the PR on enterprise passes CI
2. Check that the PR on enterprise has been approved
3. Require the enterprise PR branch to be merged before merging community.
2020-06-19 12:45:36 +02:00
SaitTalhaNisanci 3a789352b6
rename citus hammerdb branch prefix as citus_github_push (#3925)
When we use hammerdb jobs, the job creates a branch on test
automation. Since that branch should be deleted, it has the
`delete_me` prefix; however, since the result branch on
release-test-results will have the test automation branch as a prefix, it
will also have the `delete_me` prefix, which seems a bit confusing.

This PR updates the prefix to citus_github_push.
2020-06-18 21:11:58 +03:00
Marco Slot 2a3234ca26 Rename masterQuery to combineQuery 2020-06-17 14:14:37 +02:00
Jelte Fennema 0259815d3a
Fix EXPLAIN ANALYZE received data counter issues (#3917)
In #3901 the "Data received from worker(s)" sections were added to EXPLAIN
ANALYZE. After merging, @pykello posted some review comments. This addresses
those comments as well as fixing some other issues that I found while addressing 
them. The things this does:

1. Fix `EXPLAIN ANALYZE EXECUTE p1` to not increase received data on every
   execution
2. Fix `EXPLAIN ANALYZE EXECUTE p1(1)` to not always return 0 bytes as received
   data.
3. Move `EXPLAIN ANALYZE` specific logic to `multi_explain.c` from
   `adaptive_executor.c`
4. Change naming of new explain sections to `Tuple data received from node(s)`.
   Firstly because a task can reference the coordinator too, so "worker(s)" was
   incorrect. Secondly to indicate that this is tuple data and not all network
   traffic that was performed.
5. Rename `totalReceivedData` in our codebase to `totalReceivedTupleData` to
   make it clearer that it's a tuple data counter, not all network traffic.
6. Actually add `binary_protocol` test to `multi_schedule` (woops)
7. Fix a randomly failing test in `local_shard_execution.sql`.
2020-06-17 11:33:38 +02:00
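A minimal way to see the renamed section; the table name is hypothetical:
```sql
-- The per-task byte counter now appears as "Tuple data received from node(s)"
-- in the distributed EXPLAIN ANALYZE output.
EXPLAIN (ANALYZE) SELECT count(*) FROM dist_events;
```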
Jelte Fennema b71f82b31e
Use 5 second isolation test timeout (#3907)
Sometimes isolation tests get stuck in CI and we cannot see why, because
the job is killed by the CI runner. This change instead makes the blocked step
fail inside the test and lets the testsuite continue, marking it as a failure
like this in the diff output:
```diff
+isolationtester: canceling step s2-ddl-create-index-concurrently after 5 seconds
 step s2-ddl-create-index-concurrently: CREATE INDEX CONCURRENTLY select_append_index ON select_append(id);
+ERROR:  CONCURRENTLY-enabled index command failed
```

We should detect blockages very quickly and the queries we run are also
very fast, so 5 seconds should be more than enough to catch any random
slowness. The default from Postgres is 5 minutes, which is way too much
for us.
2020-06-16 14:57:49 +02:00
Jelte Fennema 799bfdab56
Temporarily disable connection leak tests that fail a lot (#3911)
MX connection leak failures:
1. https://app.circleci.com/pipelines/github/citusdata/citus/9296/workflows/e36d1088-662a-4f60-acec-293132632c2f/jobs/131908/steps
2. https://app.circleci.com/pipelines/github/citusdata/citus/9258/workflows/37659d82-2c5b-495e-b0e7-905811e30444/jobs/131299

Failure connection leak failures:
1. https://app.circleci.com/pipelines/github/citusdata/citus/9297/workflows/c0ebc326-8c93-468f-8b70-f470bd492fb9/jobs/131920
2. https://app.circleci.com/pipelines/github/citusdata/citus/9283/workflows/9af154d0-ff96-4c5d-ae19-81faae1e0c18/jobs/131668
2020-06-16 13:48:48 +02:00
Jelte Fennema 927de6d187
Show amount of data received in EXPLAIN ANALYZE (#3901)
Sadly this does not actually work yet for binary protocol data, because
when doing EXPLAIN ANALYZE we send two commands at the same time. This
means we cannot use `SendRemoteCommandParams`, and thus cannot use the
binary protocol. This can still be useful though when using the text
protocol, to find out that a lot of data is being sent.
2020-06-15 16:01:05 +02:00
Hadi Moshayedi ef778c1cd7 address feedback from Sait Talha & Hadi 2020-06-12 18:36:02 -07:00
Marco Slot 080f711e62 Remove useless debug message in router planner 2020-06-12 18:36:02 -07:00
Marco Slot 24feadc230 Handle joins between local/reference/cte via router planner 2020-06-12 18:36:01 -07:00
Nils Dijk f57711b3d2
fix test output for tdigest (#3909)
Due to the problem described in #3908 we don't cover the tdigest integration (and other extensions) on CI.

Because of this, a bug got into the patch due to a change in `EXPLAIN VERBOSE` being merged concurrently with the tdigest integration. This PR fixes the test output that missed the newly added information.
2020-06-12 20:54:27 +02:00
Halil Ozan Akgül 8c5eb6b7ea
Insert Select Into Local Table (#3870)
* Insert select with master query

* Use relid to set custom_scan_tlist varno

* Reviews

* Fixes null check

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-06-12 17:06:31 +03:00
Jelte Fennema 0e12d045b1
Support use of binary protocol in between nodes (#3877)
This can save a lot of data being sent in some cases, thus improving
performance where inter-node query bandwidth is the bottleneck.
There are some issues with enabling this by default, so that's currently not done.
2020-06-12 15:02:51 +02:00
Nils Dijk da8f2b0134
Feature: tdigest aggregate (#3897)
DESCRIPTION: Adds support to partially push down tdigest aggregates

tdigest extensions: https://github.com/tvondra/tdigest

This PR implements the partial pushdown of tdigest calculations when possible. The extension adds a tdigest type which can be combined into the same structure. There are several aggregate functions that can be used to get:
 - a quantile
 - a list of quantiles
 - the quantile of a hypothetical value
 - a list of quantiles for a list of hypothetical values

These functions can work on both plain values and tdigest types.

Since we can create tdigest values either by combining them, or based on a group of values, we can rewrite the aggregates in such a way that most of the computation gets delegated to the shards. This speeds up the percentile calculations because the values don't have to be sorted, while at the same time making the transfer size from the shards to the coordinator significantly smaller.
2020-06-12 13:50:28 +02:00
Philip Dubé 1722d8ac8b Allow routing modifying CTEs
We still recursively plan some cases, eg:
- INSERTs
- SELECT FOR UPDATE when reference tables are in the query
- Everything must be same single shard & replication model
2020-06-11 15:14:06 +00:00
Hadi Moshayedi 0e3140c14d Include execution duration in worker_last_saved_explain_analyze 2020-06-11 02:54:54 -07:00
Hadi Moshayedi 7c52c6edb0 CTE statistics in EXPLAIN ANALYZE 2020-06-11 02:39:59 -07:00
Hadi Moshayedi 1f6d6ee4a5 Show query text in EXPLAIN output 2020-06-11 02:19:55 -07:00
Hadi Moshayedi bb96ef5047 Do the EXPLAIN ANALYZE at the same time as execution, to avoid executing twice.
We wrap worker tasks in worker_save_query_explain_analyze() so we can fetch
their explain output later by calling worker_last_saved_explain_analyze().

Fixes #3519
Fixes #2347
Fixes #2613
Fixes #621
2020-06-11 01:55:57 -07:00
Hadi Moshayedi 6ca621bd16 Test we don't support multi-shard EXPLAIN EXECUTE 2020-06-10 17:11:27 -07:00
Onder Kalaci 640717bea2
Copy doesn't use more than MaxAdaptiveExecutor
Co-authored-by: Hanefi Önaldı <Hanefi.Onaldi@Microsoft.com>
2020-06-10 16:46:21 +03:00
Jelte Fennema b87bae71bb
Error out when using different users in the same transaction (#3869)
Fixes #3867

As described in the issue above, we return incorrect results when changing
the user within a transaction. This change causes us to error out instead.
2020-06-10 14:07:40 +02:00
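A sketch of the now-rejected pattern; role and table names are hypothetical:
```sql
BEGIN;
SELECT count(*) FROM dist_events;
SET ROLE reporting_user;
SELECT count(*) FROM dist_events;  -- errors out instead of returning incorrect results
COMMIT;
```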
Onder Kalaci 06461ca55f Coerce types properly for INSERT
Also, unify similar code-paths to rely on more accurate function.
2020-06-10 10:40:28 +02:00
Hadi Moshayedi 5cdfa9f571 Implement EXPLAIN ANALYZE udfs.
Implements worker_save_query_explain_analyze and worker_last_saved_explain_analyze.

worker_save_query_explain_analyze executes and returns results of query while
saving its EXPLAIN ANALYZE to be fetched later.

worker_last_saved_explain_analyze returns the saved EXPLAIN ANALYZE result.
2020-06-09 10:02:05 -07:00
Hadi Moshayedi 45a41e249f Test EXPLAIN ANALYZE doesn't show repartition join tasks 2020-06-06 23:24:45 -07:00
Hadi Moshayedi 02cff1a7c6 Test that EXPLAIN ANALYZE is not supported for some forms of INSERT/SELECT 2020-06-06 23:24:45 -07:00
Onur Tirtir 8b39d12846
Append IF NOT EXISTS to deparsed CREATE SERVER commands (#3875)
Append IF NOT EXISTS to CREATE SERVER commands generated by the
pg_get_serverdef_string function when deparsing an existing server
object that a foreign table depends on.
2020-06-05 18:04:33 +03:00
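A sketch of the deparsed form; server, host, and database names are hypothetical:
```sql
-- The generated command now carries IF NOT EXISTS, so replaying it on a node
-- that already has the server object is harmless.
CREATE SERVER IF NOT EXISTS remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote-host', dbname 'postgres');
```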
Onur Tirtir dfcc18468c Error out for unsupported trigger objects
Error out if creating a citus table from a table having triggers.
Error out for CREATE TRIGGER commands that are run on citus tables.
2020-05-31 23:10:01 +03:00
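A sketch of both error cases; the table and trigger function are hypothetical (the function is assumed to exist):
```sql
CREATE TABLE events (event_id bigint, payload jsonb);
CREATE TRIGGER events_audit
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE FUNCTION events_audit_fn();

SELECT create_distributed_table('events', 'event_id');  -- errors: table has a trigger
-- Likewise, CREATE TRIGGER on an already-distributed citus table errors out.
```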
MoYi 9e1f198155 Fix composite create type deparsing to preserve typmod 2020-05-15 13:12:54 +00:00
Sait Talha Nisanci 41fceb7849 Add optional ch_benchmark and tpcc_benchmark job
With this commit:
You can trigger two types of hammerdb benchmark jobs:
-ch_benchmark (analytical and transactional queries)
-tpcc_benchmark (only transactional queries)

Your branch will be run against `master` branch.

In order to trigger the jobs prepend `ch_benchmark/` or `tpcc_benchmark/` to your branch and push it.

For example, if you were working on a feature/improvement branch named `improve/adaptive_executor`, you can trigger a tpcc benchmark by doing the following:

```bash
git checkout improve/adaptive_executor
git checkout -b tpcc_benchmark/improve/adaptive_executor
git push origin tpcc_benchmark/improve/adaptive_executor # the tpcc benchmark job will be triggered.
```

You will see the results in a branch in [https://github.com/citusdata/release-test-results](https://github.com/citusdata/release-test-results).

The branch name will be something like: `delete_me/citusbot_tpcc_benchmark_rg/<date>/<date>`.

The resource groups will be deleted automatically, but if the benchmark fails, they won't be deleted. (If you don't see the results after a reasonable time, it might mean the run failed; you can check the resource usage from the portal, and if it is almost 0 and you still don't see the results, the run probably failed.) In that case, you will need to delete the resource groups manually from the portal; the resource groups are `citusbot_ch_benchmark_rg` and `citusbot_tpcc_benchmark_rg`.
2020-05-14 16:01:48 +03:00
SaitTalhaNisanci cf98b9d6d5
not wait forever for metadata sync in tests (#3760)
We shouldn't wait forever for metadata sync in tests; otherwise, when a
test gets stuck, we don't know which line causes the problem.
2020-05-14 10:51:24 +03:00
Nils Dijk 105de7beb8
Fix for pruned target list entries (#3818)
DESCRIPTION: Ignore pruned target list entries in coordinator plan

The postgres planner has the ability to prune target list entries that are provably not used in the output relation. When this happens at the `CitusCustomScan` boundary we need to _not_ return these pruned columns, so as not to upset the rest of the planner.

By using the target list the planner asks us to return, we fix issues that lead to assertion failures, and that could potentially be runtime errors when hit in a production build.

Fixes #3809
2020-05-06 13:56:02 +02:00
Marco Slot 6ce2803777 Make sure we don't wrap GROUP BY expressions in any_value 2020-05-05 05:12:45 +02:00
SaitTalhaNisanci 4a9d516f1b
Add a job to check if merge to enterprise master would fail (#3777)
* add a job to check if merge to enterprise master would fail

Add a job to check if merge to enterprise master would fail.
The job does the following:
- It checks if there is already a branch with the same name on
enterprise; if so, it tries to merge it to enterprise master, and if the
merge fails the job fails.
- If the branch doesn't exist on enterprise, it tries to merge the
current branch to enterprise master, and it fails if there is any conflict
while merging.

The motivation is that if a branch on community would create a conflict
on enterprise-master, we won't be able to merge the PR on community until
we create a PR on enterprise that resolves this conflict. This way we won't
have many conflicts when merging to enterprise master, and the author, who
has the most context, will be responsible for resolving the conflict while
that context is fresh, not after a month.

* Improve test suite to be able to easily run locally

* Add documentation on how to resolve conflicts to enterprise master

* Improve enterprise merge script

* Improve merge conflict job README

* Improve merge conflict job README

* Improve merge conflict job README

* Improve merge conflict job README

Co-authored-by: Nils Dijk <nils@citusdata.com>
2020-05-04 17:08:17 +03:00
Onder Kalaci f9d4a9cf38 Remove assertion for subqueries in WHERE clause ANDed with FALSE
In the code, we had the assumption that if restriction information
is NULL, it means that we cannot have any distributed tables in
the subquery.

However, for subqueries in the WHERE clause, that is not the case when
the subquery is ANDed with FALSE. In that case, Citus operates
on the originalQuery (which doesn't go through standard_planner())
and relies on the restriction information generated by standard_planner().
As Postgres is smart enough not to generate restriction information for
subqueries ANDed with FALSE, we hit the assertion.
2020-05-04 10:52:15 +02:00
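A sketch of the query shape that hit the assertion; the table name is hypothetical:
```sql
-- The subquery in the WHERE clause is ANDed with FALSE, so Postgres generates
-- no restriction information for it.
SELECT *
FROM dist_events
WHERE false
  AND event_id IN (SELECT event_id FROM dist_events WHERE payload IS NOT NULL);
```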
Onder Kalaci 891d99efaf add order by to some tests to make the output consistent 2020-05-01 12:41:51 +02:00
SaitTalhaNisanci cbda951395
Fix task copy and appending empty task in ExtractLocalAndRemoteTasks (#3802)
* Not append empty task in ExtractLocalAndRemoteTasks

ExtractLocalAndRemoteTasks extracts the local and remote tasks. If we do
not have a local task, the localTaskPlacementList will be NIL, in which
case we should not append anything to the local tasks. Previously we would
first check whether a task contains a single placement or not; now we first
check if there is any local task before doing anything.

* fix copy of node task

The Task node has a task query, which might contain a list of strings in its
fields. We were using postgres copyObject for these lists. Postgres
assumes that each element of a list will be a node type; if it is not a
node type it will error.

As a solution to that, a new macro is introduced to copy a list of
strings.
2020-04-29 11:05:34 +03:00
Philip Dubé b6b3c1bc17 Fix COPY TO's COPY (SELECT) with distributed table having generated columns
It's necessary to omit generated columns from output
2020-04-28 14:40:47 +00:00
SaitTalhaNisanci 164c00cf08
Fix typo: longer visible -> no longer visible (#3803) 2020-04-27 16:32:46 +03:00
Onder Kalaci 0cb7ab2d05 Explicitly mark queries in physical planner for [not] having parameters
The physical planner doesn't support parameters. If the parameters have already
been resolved when the physical planner is handling the queries, mark it.
The reason is that the executor is unaware of this and sends the parameters
along with the worker queries, which fails for composite types.

(See `DissuadePlannerFromUsingPlan()` for the details of parameter resolving.)
2020-04-24 12:49:43 +02:00
Onder Kalaci f517fa2e2a Re-enable isolation test for reference tables + distributed deadlock detection 2020-04-24 11:53:03 +02:00
SaitTalhaNisanci 07cbd84631
Add base isolation schedule (#3784)
We should do some setup steps in check-isolation-base target.
This PR adds base_isolation_schedule which will set up the cluster.
2020-04-24 12:38:37 +03:00
Onur Tirtir 2e927bd6b7
Bump Citus to 9.4devel (#3788) 2020-04-22 12:50:00 +03:00
Hanefi Önaldı e85b835065
Skip dependency setup on coordinator node 2020-04-21 12:06:31 +03:00
Jelte Fennema 1423433531
Fix running check-isolation-base (#3782) 2020-04-20 15:36:09 +02:00
Onder Kalaci e182215d96 Improve connection error message from the worker nodes
We currently put the actual error message in the detail part. However,
many drivers don't show the detail part.

As connection errors are fairly common and hard to trace back, we
added the detail to the message itself.

In addition to that, we changed the "connection error" message, as it
was confusing to users who thought the error was happening
while connecting to the coordinator. In fact, this error shows
up when the coordinator fails to connect to remote nodes.
2020-04-20 13:32:55 +02:00
Hadi Moshayedi 1250d691d3 Replicate reference tables before master_create_empty_shard 2020-04-17 16:47:03 -07:00
SaitTalhaNisanci 1d0f4bdcd2
invalidate plan cache in master_update_node (#3758)
* invalidate plan cache in master_update_node

If a plan is cached by postgres but a user runs master_update_node, then
when the plan cache is used for the updated node, they will get the old
nodename/nodeport in the plan. This is because the plan cache doesn't
know about master_update_node. This could be a problem for prepared
statements or anything else that goes into the plan cache. As a solution,
the plan cache is invalidated inside master_update_node.

* add invalidate_inactive_shared_connections test function

We introduce the invalidate_inactive_shared_connections udf to be used in
testing. It is possible that the connection count for an inactive node
is greater than 0, in which case it will not be removed at the
time of invalidation. However, we don't have a mechanism to remove
it later, which means that it will stay in the hash. For this not to cause a
problem, we use this udf in testing.

* move invalidate_inactive_shared_connections to udfs from test as it will be used in mx

* remove the test udf

* remove the IsInactive check
2020-04-17 17:43:48 +03:00
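A sketch of the scenario being fixed; host names and the table are hypothetical:
```sql
-- A prepared statement cached before master_update_node must not keep pointing
-- at the old host/port afterwards.
PREPARE count_events AS SELECT count(*) FROM dist_events;
EXECUTE count_events;

SELECT master_update_node(nodeid, 'new-worker-host', 5432)
FROM pg_dist_node
WHERE nodename = 'old-worker-host';

EXECUTE count_events;  -- plan cache was invalidated, so this targets the new node
```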
Önder Kalacı a919f09c96
Remove the entries from the shared connection counter hash when no connections remain (#3775)
We initially considered removing entries just before any change to
pg_dist_node. However, that ended up being very complex and made
MX even more complex.

Instead, we're switching to a simpler solution, where we remove entries
when the counter gets to 0.

With certain workloads, this may have some performance penalty. But, two
notes on that:
 - When counter == 0, it implies that the cluster is not busy
 - With cached connections, that's not possible
2020-04-17 17:14:58 +03:00
Philip Dubé e4a4707f4a Avoid setting hasWindowFuncs true after window functions have been optimized out of query 2020-04-17 12:22:48 +00:00
SaitTalhaNisanci a9a3be15cc
introduce TASK_QUERY_NULL task type (#3774)
When we call SetTaskQueryString we would set the task type to
TASK_QUERY_TEXT, and some parts of the codebase rely on the fact that if
TASK_QUERY_TEXT is set, the data can be read safely. However, if
SetTaskQueryString is called with a NULL taskQueryString, this can cause
crashes. In that case taskQueryType will now simply be set to
TASK_QUERY_NULL.
2020-04-17 14:59:22 +03:00
Hanefi Önaldı 0c5d0cfee9
Notice message to help truncate local data after distribution 2020-04-17 13:21:34 +03:00
Hanefi Önaldı d535121f8d
Introduce truncate_local_data_after_distributing_table() 2020-04-17 13:21:34 +03:00
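A minimal usage sketch of the new UDF; the table name is hypothetical:
```sql
-- After distributing, the data left behind in the coordinator-local table can
-- be removed with the new UDF.
SELECT create_distributed_table('events', 'event_id');
SELECT truncate_local_data_after_distributing_table('events');
```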
Hadi Moshayedi 61198251fd Use block_writes for replicate_reference_tables 2020-04-16 19:25:41 -07:00
Nils Dijk 1d6ba1d09e
Refactor alter role to work on distributed roles (#3739)
DESCRIPTION: Alter role only works for citus managed roles

Alter role was implemented before we implemented good role management that hooks into the object propagation framework. This is a refactor of all alter role commands that have been implemented to
 - be on by default
 - only work for supported roles
 - make the citus extension owner a supported role

Instead of distributing the alter role commands for roles at the beginning of node activation, it now _only_ executes the alter role commands for all users in all databases and in the current database.

In preparation of full role support small refactors have been done in the deparser.

Earlier tests targeting roles other than the citus extension owner have been either slightly changed or removed, to be put back once we have full role support.

Fixes #2549
2020-04-16 12:23:27 +02:00
Hadi Moshayedi 59b9a4e5a1 Detect deadlocks in replicate_reference_tables() 2020-04-15 11:06:18 -07:00
SaitTalhaNisanci df9048ebaa
update outdated comments related to local_execution (#3759) 2020-04-15 16:15:43 +03:00
Marco Slot 8b83306a27 Issue worker messages with the same log level 2020-04-14 21:08:25 +02:00
SaitTalhaNisanci d58b5e67c1
not run multi_router_planner_fast_path in parallel (#3744) 2020-04-14 13:14:23 +03:00
Onder Kalaci aa6b641828 Throttle connections to the worker nodes
With this commit, we're introducing a new infrastructure to throttle
connections to the worker nodes. This infrastructure is useful for
multi-shard queries; router queries have not been affected by this.

The goal is to prevent establishing more than citus.max_shared_pool_size
number of connections per worker node in total, across sessions.

To do that, we've introduced a new connection flag, OPTIONAL_CONNECTION.
The idea is that some connections are optional, such as the second
(and further) connections for the adaptive executor. A single connection
is enough to finish the distributed execution; the others are useful to
execute the query faster. Thus, they can be considered optional connections.
When an optional connection is not allowed, the adaptive executor
simply skips it and continues the execution with the already established
connections. However, it keeps retrying to establish optional
connections, in case some slots open up again.
2020-04-14 10:27:48 +02:00
Hadi Moshayedi 2639a9a19d Test master_copy_shard_placement errors on foreign constraints 2020-04-13 12:45:27 -07:00
Hadi Moshayedi f9de734329 Ensure metadata is synced on ReplicateColocatedShardPlacement 2020-04-13 11:45:21 -07:00
SaitTalhaNisanci 2b2a146af4
update gitignores with new files in test folder (#3749) 2020-04-13 17:09:18 +03:00
Philip Dubé 30f10984e1 Defer get_agg_clause_costs, it happens later & avoids errors 2020-04-10 13:26:05 +00:00
Halil Ozan Akgul 34c2b7e056 Fixes the psql connection bug 2020-04-10 15:54:47 +03:00
Halil Ozan Akgul 56e814a333 Adds public host to only hyperscale tests 2020-04-10 15:54:47 +03:00
Halil Ozan Akgul d574ac33a8 Adds next shard ids to multi_create_table tests 2020-04-10 15:54:47 +03:00
Halil Ozan Akgul a701fc774a Adds multi_schedule_hyperscale schedule 2020-04-10 15:54:47 +03:00
Halil Ozan Akgul 5bf350faf9 Removes failing tests
This task just removes the failing tests. It doesn't mean these tests cannot be saved; it's just a starting point.
2020-04-10 15:54:47 +03:00
Halil Ozan Akgul 1aa1f55d8e Adds check_multi_hyperscale_superuser schedule 2020-04-10 13:05:07 +03:00
Halil Ozan Akgul c2edf989cf Adds public host parameters 2020-04-10 13:04:24 +03:00
Halil Ozan Akgul 4b9705f714 Adds worker host parameters 2020-04-10 13:03:28 +03:00
Halil Ozan Akgul 119bf590c8 Creates normalize_modified.sed 2020-04-10 13:03:19 +03:00
Halil Ozan Akgul c8a81ef1ce Changes copy to \copy 2020-04-10 13:03:15 +03:00
Halil Ozan Akgul 93b97248b2 Adds a connection string to run tests on that connection 2020-04-10 13:03:03 +03:00
SaitTalhaNisanci 17373d51da
not wait forever in upgrade distributed function before (#3731) 2020-04-10 09:43:42 +03:00
Marco Slot a4b2197450 Correctly handle non-constant LIMIT/OFFSET clauses 2020-04-09 19:59:50 +00:00
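A sketch of what "non-constant" means in the commit above; the table name is hypothetical:
```sql
-- LIMIT and OFFSET given as expressions rather than plain constants.
SELECT event_id
FROM dist_events
ORDER BY event_id
LIMIT greatest(10, 5) OFFSET 2 + 3;
```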
SaitTalhaNisanci 24dcb02bca
enable local table join with reference table (#3697)
* enable local table join with reference table

* test different cases with local table and reference join
2020-04-09 15:25:54 +03:00
SaitTalhaNisanci ebda3eff61
read database name inside the function (#3730) 2020-04-09 13:11:13 +03:00
SaitTalhaNisanci 233e4a24d1
use local execution within transaction block (#3714)
* use local executon when in a transaction block

When we are inside a transaction block, there could be other methods
that need local execution, therefore we will use local execution in a
transaction block.

* update test outputs with transaction block local execution

* add a test to verify we don't leak intermediate schemas
2020-04-09 12:41:58 +03:00
SaitTalhaNisanci fa88046ce1
test that we don't leak intermediate schemas (#3737)
* test that we don't leak intermediate schemas

We have tests to make sure that we don't leak any intermediate
files, tables, etc., but we don't test whether we are leaking schemas. It makes
sense to test this as well.

* remove all repartition schemas in case of error

This solution is not an ideal one but it seems to be doing the job.
We should have a more generic solution for the cleanup but it seems that
putting the cleanup in the abort handler is dangerous and it was
crashing.
2020-04-09 12:17:41 +03:00
Hadi Moshayedi 9b8802ba2d Remove todo from reference_table_utils 2020-04-08 12:46:55 -07:00
Hadi Moshayedi dda53a0bba GUC for replicate reference tables on activate. 2020-04-08 12:42:45 -07:00
Hadi Moshayedi c168a53ebc Tests for replicate_reference_tables 2020-04-08 12:41:36 -07:00
Hadi Moshayedi acfa850c38 Make multi_replicate_reference_table check-base friendly 2020-04-08 12:41:36 -07:00
Hadi Moshayedi 0758a81287 Prevent reference tables being dropped when replicating reference tables 2020-04-08 12:41:36 -07:00
Marco Slot 924cd7343a Defer reference table replication to shard creation time 2020-04-08 12:41:36 -07:00
Önder Kalacı 70012dfd33 Do not error when an intermediate file does not exist (#3707)
When the file does not exist, it could mean two different things.
The first -- and a lot more common -- case is that a failure happened
in a concurrent backend on the same distributed transaction, and
one of the backends in that transaction has already been rolled
back, which already removed the file. If we throw an error
here, the user might see this error instead of the actual error
message. Instead, we prefer to WARN the user and pretend that the
file has no data in it. In the end, the user will see the actual
error message for the failure.

Second, in case of any bugs in intermediate result broadcasts,
we could try to read a non-existing file. That is most likely
to happen during development. Thus, when asserts are enabled, we throw
an error instead of a WARNING so that developers cannot miss it.
2020-04-07 17:06:55 +02:00
Onder Kalaci a695b44ce9 Add new regression tests 2020-04-07 17:06:55 +02:00
Onder Kalaci 4b3d17f466 Make sure that tests are not failing randomly 2020-04-07 17:06:55 +02:00
Marco Slot 2632343f64 Fix intermediate result pruning for INSERT..SELECT 2020-04-07 11:07:49 +02:00
Marco Slot 84672c3dbd Simplify intermediate result pruning logic 2020-04-07 10:53:29 +02:00
SaitTalhaNisanci a710b3cdc5
fix null tupleStoreState case in ExecuteLocalTaskListExtended (#3711)
In case we don't care about the tupleStoreState in
ExecuteLocalTaskListExtended, it can be passed as NULL. In that case
we would get a segfault. This changes it so that a dummy tuple store
is created when it is NULL.

Do not use local execution in ExecuteTaskListOutsideTransaction.
As we are going to run the tasks outside transaction, we shouldn't use local execution.
However, there is a problem with local execution related to
repartition joins; when we solve that problem, we can execute the tasks
coming to this path with local execution.

Also logging the local command is simplified.

normalize job id in worker_hash_partition_table in test outputs.
2020-04-07 11:47:09 +03:00
Philip Dubé 4860e11561 Duplicate grouping on worker whenever possible
This is possible whenever we aren't pulling up intermediate rows

We want to do this because this was done in 9.2,
some queries rely on the performance of grouping causing distinct values

This change was introduced when implementing window functions on coordinator
2020-04-06 18:51:30 +00:00
Philip Dubé b01bae5937 Check connections from connection_placement before polling 2020-04-06 17:45:44 +00:00
SaitTalhaNisanci cd3e499834
not log in debug level in null parameters (#3718)
The purpose of null_parameters is to make sure that citus doesn't crash
with null parameters. (The related issue is #3493.) The logs in this
file are not that important and they are flaky. The flakiness is related
to the postgres side as well, so it is hard to reproduce. Therefore it
makes sense to decrease the log level.
2020-04-06 17:59:46 +03:00
SaitTalhaNisanci 3d3605be80
simplify vacuum test and fix the flakiness (#3704)
look at sent commands to simplify complex logic in vacuum test

also normalize connection id as that can differ when we don't have to
choose a specific connection.
2020-04-03 21:39:54 +03:00
Jelte Fennema 459a4829ae
Fix isolation tests on OSX (#3706)
* Don't print out comments in make output

* Remove empty lines with sed
2020-04-03 16:28:06 +02:00
SaitTalhaNisanci 32156dbf5c
fix flaky log statement in null_parameters (#3705)
It seems that the pruning is sometimes deferred and sometimes not with
this statement. What we care about in this test is to see that it doesn't
crash; we don't care about the log statement for this line. So
it makes sense to not log this statement and only check the result.
2020-04-03 17:01:59 +03:00
Hanefi Önaldı d1223bd6cc
Remove migration paths to 9.3-1, introduce 9.3-2 2020-04-03 12:50:45 +03:00
SaitTalhaNisanci 710970407f
not wait forever in multi_extension test (#3702) 2020-04-03 12:21:02 +03:00
SaitTalhaNisanci 659283c9a7
fix multi utilities vacuum test (#3699) 2020-04-03 11:50:00 +03:00
Marco Slot fd8cdb92f4 Evaluate nextval in the target list on the coordinator 2020-04-02 02:53:19 +02:00
SaitTalhaNisanci df88ab71b6 normalize assign_distributed_transaction_id in tests 2020-04-01 18:23:16 +03:00
SaitTalhaNisanci 0aebd78ea7 use localExecution in ExecuteTaskListExtended
ExecuteTaskListExtended is the common method for different codepaths,
and instead of writing separate local execution logic in different
codepaths, it makes more sense to have the logic here. We still need to
do some refactoring; this is an initial step.

After this commit, we can run create shard commands locally. There is a
special case with shard creation commands: a create shard command might
have a concatenated query string, but local execution did not know
how to execute a task with multiple query strings. This is also
implemented in this commit: we go over each query in the concatenated
query string and plan/execute them one by one.

A more clean solution to this would be to make sure that each task has a
single query. We currently cannot do that because we need to ensure the
task dependencies. However, it would make sense to do that at some point
and it would simplify the code a lot.
2020-04-01 18:23:16 +03:00
Philip Dubé 3bb4f14efd upgrade_type_after: ORDER BY 2020-04-01 01:07:21 +00:00
Philip Dubé d155149c18 tests: remove stale comment, fix typo 2020-03-31 20:13:51 +00:00
Marco Slot 252abcce16 Allow table type to be used in target list 2020-03-31 11:11:01 -07:00
SaitTalhaNisanci 5bf9f32dd3
disable one of deadlock detection test (#3682)
It seems that one of the deadlock detection tests fails way too often in
our CI, and the difference is only in ordering. Currently it seems that it is a
good idea to disable this test for the sake of development.
2020-03-31 19:47:58 +03:00
Marco Slot 331b45348c Fix error when using LEFT JOIN with GROUP BY on primary key 2020-03-30 16:42:22 +02:00
Philip Dubé 67d2ad4e37
Fixes flaky test in multi_reference_table: ORDER BY (#3676)
Fixes app.circleci.com/pipelines/github/citusdata/citus/7744/workflows/0848f36c-af9e-46b7-9dda-a421df54ba56/jobs/109503
2020-03-30 23:31:10 +02:00
Hanefi Onaldi 0e8103b101
Propagate ALTER ROLE .. SET statements
In PostgreSQL, user defaults for config parameters can be changed by
ALTER ROLE .. SET statements. We wish to propagate those defaults
across the Citus cluster so that the behaviour will be similar on
different workers.

The defaults can either be set in a specific database, or the whole
cluster, similarly they can be set for a single role or all roles.

We propagate the ALTER ROLE .. SET if all the conditions below are met:
- The query affects the current database, or all databases
- The user is already created in worker nodes
2020-03-27 13:02:48 +03:00
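A sketch of statements matching the propagation conditions above; role and database names are hypothetical, and the IN DATABASE form is propagated when it targets the current database:
```sql
ALTER ROLE app_user IN DATABASE app_db SET statement_timeout = '60s';
ALTER ROLE ALL SET work_mem = '64MB';
```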
Marco Slot a65ffee266 Fixes a bug that causes some DML queries containing aggregates to fail 2020-03-26 16:08:34 +00:00
Marco Slot b89e9dc158 Fix a bug which caused queries with SRFs and function evaluation to fail 2020-03-25 06:55:53 +01:00
Philip Dubé 917cb6ae93 Don't segfault on queries using GROUPING
GROUPING will always return 0 outside of GROUPING SETS, CUBE, or ROLLUP
Since we don't support those, it makes sense to reject GROUPING in queries
2020-03-25 15:46:43 +00:00
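A sketch of the now-rejected query shape; the table name is hypothetical:
```sql
-- GROUPING outside of GROUPING SETS/CUBE/ROLLUP is rejected with an error
-- instead of segfaulting.
SELECT event_id, GROUPING(event_id)
FROM dist_events
GROUP BY event_id;
```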
Philip Dubé 720525cfda Add support for window functions on coordinator
Some refactoring:
Consolidate expression which decides whether GROUP BY/HAVING are pushed down
Rename early pullUpIntermediateRows to hasNonDistributableAggregates
Create WorkerColumnName to handle formatting WORKER_COLUMN_FORMAT
Ignore NULL StringInfo pointers to SafeToPushdownWindowFunction
Fix bug where SubqueryPushdownMultiNodeTree mutates supplied Query,
	SafeToPushdownWindowFunction requires the original query as it relies on rtable
2020-03-25 15:31:20 +00:00
Jelte Fennema 149f0b2122
Use Microsoft approved cipher string (#3639)
This cipher string is approved by the Microsoft security team and only enables
TLSv1.2 ciphers.
2020-03-24 15:51:44 +01:00
Jelte Fennema 2aabe3e2ef
Mark all connections for shutdown when citus.node_conninfo chan… (#3642)
We cache connections between nodes in our connection management code.
This is good for speed. For security this can be a problem though. If
the user changes settings related to TLS encryption they want those to
be applied to future queries. This is especially important when they did
not have TLS enabled before and now they want to enable it. This can
normally be achieved by changing citus.node_conninfo.  However, because
connections are not reopened there will still be old connections that
might not be encrypted at all.

This commit changes that by marking all connections to be shutdown at
the end of their current transaction. This way running transactions will
succeed, even if placement requires connections to be reused for this
transaction. But after this transaction completes any future statements
will use a connection created with the new connection options.

If a connection is requested and a connection is found that is marked
for shutdown, then we don't return this connection. Instead a new one is
created. This is needed to make sure that if there are no running
transactions, then the next statement will not use an old cached
connection, since connections are only actually shutdown at the end of a
transaction.
2020-03-24 15:31:41 +01:00
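A hedged sketch of the flow described above, assuming citus.node_conninfo can be updated with a configuration reload:

-- require TLS for inter-node connections from now on
ALTER SYSTEM SET citus.node_conninfo TO 'sslmode=require';
SELECT pg_reload_conf();
-- running transactions keep their cached connections until they finish;
-- subsequent statements get freshly created (TLS) connections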
Hadi Moshayedi b46b9a68ae Tests for master_copy_shard_placement 2020-03-23 08:33:55 -07:00
Marco Slot ede176d849 Implement shard placement copying 2020-03-23 08:33:08 -07:00
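A hedged sketch of invoking the copy; the exact signature and defaults may differ between versions, and the shard id, node names, and ports are hypothetical:

SELECT master_copy_shard_placement(
    102008,              -- shard id
    'worker-1', 5432,    -- source node
    'worker-2', 5432,    -- target node
    do_repair := false); -- copy to a new placement rather than repairing one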
Philip Dubé dd2bd53e5b PartiallyEvaluateExpression: Avoid unrecognized paramkind: 2 2020-03-23 14:14:01 +00:00
SaitTalhaNisanci 3b7959a763
do not run the local shard copy test in parallel (#3640)
It seems that when logging is enabled we should not run local shard copy
in parallel with other tests: it adds the coordinator for reference
tables, and if a parallel test creates a schema before this test runs,
that schema gets logged, which makes the output non-deterministic.
2020-03-23 14:38:18 +03:00
SaitTalhaNisanci c5c446f84f
do not run local_shard_copy in parallel (#3635) 2020-03-23 13:56:25 +03:00
SaitTalhaNisanci 3df578010e
add a UDF to update colocation (#3623)
If two tables have the same distribution column type, we implicitly
colocate them. This is useful since colocation has a big performance
impact in most applications.

When a table is rebalanced, all of the colocated tables are also
rebalanced. If table A and table B are colocated and we want to
rebalance table A, table B will also be rebalanced. We need replica
identity so that logical replication can replicate updates and deletes
during rebalancing. If table B does not have a replica identity we
error out.

A solution to this is to introduce a UDF so that colocation can be
updated. The remaining tables in the colocation group stay
colocated. For example, if tables A, B and C are colocated, then after
updating table B's colocation, table A and table C stay colocated.

The "updating colocation" step does not move any data around; it only
updates the pg_dist_partition and pg_dist_colocation tables. Specifically,
it creates a new colocation group for the table and updates the entry in
pg_dist_partition while invalidating any cache.
2020-03-23 13:22:24 +03:00
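The commit message does not name the UDF; assuming it is update_distributed_table_colocation (the name used in later Citus releases), a hedged sketch with hypothetical table names:

-- move table_b out of the shared colocation group; table_a and table_c stay colocated
SELECT update_distributed_table_colocation('table_b', colocate_with => 'none');
-- or colocate table_b with another table of the same distribution column type
SELECT update_distributed_table_colocation('table_b', colocate_with => 'table_a');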
Jelte Fennema ed0376bb41
Unparallelize tests (#3629)
We're getting a lot of random failures in CI regarding connection errors. This
works around that by not running tests that create lots of connections in parallel.
2020-03-20 10:31:34 +01:00
SaitTalhaNisanci 9d2f3c392a enable local execution in INSERT..SELECT and add more tests
We can use local copy in INSERT..SELECT, so the check that disables
local execution is removed.

Also a test for local copy where the data size >
LOCAL_COPY_FLUSH_THRESHOLD is added.

Use local execution with INSERT..SELECT.
2020-03-18 09:34:39 +03:00
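A hedged sketch of the effect, assuming the coordinator also holds shard placements and the table names are hypothetical:

-- when the INSERT .. SELECT falls back to the coordinator, the results are now
-- written back with local copy/execution for any placements held locally
INSERT INTO dist_events_rollup
SELECT tenant_id, count(*) FROM dist_events GROUP BY tenant_id;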
SaitTalhaNisanci 42cfc4c0e9 apply review items
log shard id in local copy and add more comments
2020-03-18 09:33:55 +03:00
SaitTalhaNisanci c22068e75a use the right partition for partitioned tables 2020-03-18 09:28:59 +03:00
SaitTalhaNisanci 1df9601e13 do not use local copy if the current transaction is connected to the local group
If the current transaction is connected to the local group, we should not
use local copy, because we might not see some of the changes made over
the connection to the local group.
2020-03-18 09:28:59 +03:00
SaitTalhaNisanci 39bbec0f30 add tests for local copy execution 2020-03-18 09:28:59 +03:00
Nils Dijk e5237b9e20
Fix left join shard pruning (#3569)
DESCRIPTION: Fix left join shard pruning in pushdown planner

Due to #2481, which moved outer join planning through the pushdown planner, we caused a regression in the shard pruning behaviour for outer joins.

In the pushdown planner we take the union of the placement groups for all shards accessed by a query, based on the filters we see during planning. Unfortunately, implicit filters for left joins are not available during this step. This causes the inner side of an outer join to not prune any shards away; when we then take the union of the placement groups, the net effect is that no shards are pruned.

Since the inner side of an outer join cannot return any rows when the outer side does not, we do not have to add the shard intervals of the inner side to the list of shard intervals to query.

Fixes: #3512
2020-03-13 15:20:45 +01:00
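For illustration, a hedged sketch of the query shape affected; lhs and rhs are hypothetical distributed tables colocated on key:

-- the filter prunes lhs down to one shard; with this fix only the matching
-- rhs shards are queried as well, instead of every rhs shard
SELECT lhs.key, rhs.value
FROM lhs
LEFT JOIN rhs ON lhs.key = rhs.key
WHERE lhs.key = 42;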
Onur Tirtir a14739f808
Local execution of ddl/drop/truncate commands (#3514)
* reimplement ExecuteUtilityTaskListWithoutResults for local utility command execution

* introduce new functions for local execution of utility commands

* change ErrorIfTransactionAccessedPlacementsLocally logic for local utility command execution

* enable local execution for TRUNCATE command on distributed & reference tables

* update existing tests for local utility command execution

* enable local execution for DDL commands on distributed & reference tables

* enable local execution for DROP command on distributed & reference tables

* add normalization rules for cascaded commands

* add new tests for local utility command execution
2020-03-13 15:39:32 +03:00
SaitTalhaNisanci 77f96a1f87
retry vanilla tests once more if they fail (#3611) 2020-03-12 12:50:06 +03:00
Philip Dubé 11b968bc30 Add runtime type checking to AGGREGATE_CUSTOM_COMBINE helper functions 2020-03-11 17:20:30 +00:00
Philip Dubé 4b68ee12c6 Also check aggregates in havingQual when scanning for non-pushdownable aggregates
Came across this while coming up with test cases. The
'result "68_1" does not exist' error will be addressed in a future PR;
for now, avoid the segfault.
2020-03-11 15:47:04 +00:00
Önder Kalacı 63ced3d901
Improve master evaluation tests (#3609)
* Add third column to master_evaluation_modify table

It was already added in some tests, but now make it globally
applicable to the test file.

* Add third column to master_evaluation_select table

As we'll use the column in some tests

* Add modify regression tests

For the combinations of: local/remote, router/fast-path:
   - Distribution key is a const.
   - Contains a function
   - A column which is not dist. key is parametrized

* Add select regression tests

For the combinations of: local/remote, router/fast-path:
   - Distribution key is a const.
   - Contains a function
   - A column which is not dist. key is parametrized

* Make some tests consistent to check-base
2020-03-11 15:38:08 +01:00
Önder Kalacı afc942c6af
Remove non-adaptive test schedules (#3605)
As we don't have any other executors to run them.

These schedules were added when we had both the adaptive executor and
the real-time/router executors in the code. Since we only have adaptive
executor anymore, we can remove these.
2020-03-11 09:58:49 +01:00
Onder Kalaci 7d787e3d5e Prevent create_distributed_function() from being run on the workers
As this could cause weird edge cases.
2020-03-10 18:24:20 +01:00
Marco Slot cb3d90bdc8 Simplify INSERT logic in router planner 2020-03-10 15:54:40 +01:00
Philip Dubé 81cfa05d3d First phase of addressing HAVING subquery issues
Add failing tests; make changes to at least avoid crashes

Fix HAVING subquery pushdown ignoring reference table only subqueries,
also include HAVING in recursive planning

Given that we have a function IsDistributedTable which includes reference tables,
it seems best to have IsDistributedTableRTE & QueryContainsDistributedTableRTE
reflect that they do not include reference tables in their check

Similarly SublinkList's name should reflect that it only scans WHERE

contain_agg_clause asserts that we don't have SubLinks;
use contain_aggs_of_level as suggested by the pg source code
2020-03-09 17:58:30 +00:00
Onder Kalaci 2ed19181fe Improve definition of RelationInfoContainsOnlyRecurringTuples
Before this commit, we considered !ContainsRecurringRTE() enough
for NotContainsOnlyRecurringTuples. However, instead, we can check
for existince of any distributed table.

DESCRIPTION: Fixes a bug that causes wrong results with complex outer joins
2020-03-09 17:28:33 +01:00
Marco Slot 5b1d1dd413 Remove unnecessary use of max_parallel_workers_per_gather 2020-03-06 13:18:58 +01:00
Marco Slot d0fead6691 Disable Postgres parallelism by default in tests 2020-03-06 13:18:58 +01:00
Hanefi Onaldi c0ad44f975
Fix early exit bug on intermediate result pruning
There are 2 problems with our early exit strategy that this commit fixes:

1- When we decide that a subplan's results are sent to all worker nodes,
we used to skip traversing the whole distributed plan, instead of
skipping only that subplan.

2- We used to consider all available nodes in the cluster (secondaries
and inactive nodes as well as active primaries) when deciding on the
early exit strategy. This resulted in failures to exit early when there
are secondaries or inactive nodes.
2020-03-05 16:41:44 +03:00
Onder Kalaci f72916875f Expand test coverage for combinations of master evaluation, deferred pruning, parameters, local execution
- Router           & Remote & Requires Master Evaluation & With Param & Without Param
- Fast Path Router & Remote & Requires Master Evaluation & With Param & Without Param
2020-03-05 12:37:22 +01:00
Nils Dijk 268ad741a9
Refactor the deparsing of a CREATE EXTENSION to prevent NULL POINTER dereferences (#3518)
DESCRIPTION: satisfy static analysis tool for a nullptr dereference

During the static analysis project on the codebase this code was flagged as having the potential for a null pointer dereference. Funnily enough, the author had already noted in a code comment that this was not possible, because we set the schema name before passing in the statement. If we want to reuse this code in a later setting, that comment might not always hold, and we could actually run into a null pointer dereference.

This patch rearranges the code a bit to, first of all, make sure there is no NULL pointer dereference in this code anymore.
Secondly, we allow for better deparsing by setting and adhering to the `if_not_exists` flag on the statement.
Finally, we add support for all the syntax described in the postgres documentation (FROM was missing).
2020-03-04 16:47:07 +01:00
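For reference, a hedged sketch of the statement shapes the deparser now handles; the extension name and versions are illustrative:

CREATE EXTENSION IF NOT EXISTS hll
    WITH SCHEMA public
    VERSION '2.14'
    FROM '2.10';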
Marco Slot 27f23d2c89 Add some distribution column = composite type prepared statement tests 2020-03-04 05:01:43 +01:00
Onder Kalaci 087f6eb4c0 For composite types, add a cast to the parameter so the remote node can easily
detect the type.
2020-03-04 11:27:45 +01:00
Philip Dubé 34f241af16 Fix create_distributed_table on a table using GENERATED ALWAYS AS
If the generated column does not come at the end of the column list,
columnNameList doesn't line up with the column indexes. Seek past the
generated columns so the two line up again.

CREATE TABLE test_table (
    test_id int PRIMARY KEY,
    gen_n int GENERATED ALWAYS AS (1) STORED,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
SELECT create_distributed_table('test_table', 'test_id');

Would raise: ERROR: cannot cast 23 to 1184 (OID 23 is int4, 1184 is timestamptz; the column indexes were shifted by the generated column)
2020-02-28 09:34:26 -08:00
Jelte Fennema 685b54b3de
Semmle: Check for NULL in some places where it might occur (#3509)
Semmle reported quite a few places where we use a value that could be NULL. Most of these are not actually a real issue, but it is better to be on the safe side with these things and keep the static analysis happy.
2020-02-27 10:45:29 +01:00
Hadi Moshayedi 1b3e58f0c3 Merge branch 'improve-shard-pruning' of https://github.com/MarkusSintonen/citus into MarkusSintonen-improve-shard-pruning 2020-02-26 07:13:33 -08:00
Nils Dijk a77ed9cd23
Refactor master query to be planned by postgres' planner (#3326)
DESCRIPTION: Replace the query planner for the coordinator part with the postgres planner

Closes #2761 

Citus had a simple rule based planner for the query executed on the query coordinator. This planner grew over time with the addition of SQL support, until it got close to the functionality of the postgres planner. However, the code was brittle and its complexity rose, which made it hard to add new SQL support.

Given its resemblance to the postgres planner, it was a long-standing wish to replace our hand crafted planner with the well supported postgres planner. This patch replaces our planner with a call to postgres' planner.

Due to the functionality of the postgres planner we needed to support both projections and filters/quals on the citus custom scan node. When a sort operation is planned above the custom scan it might require fields to be reordered in the custom scan before returning the tuple (projection). The postgres planner assumes every custom scan node implements projections. Because we controlled the plan that was created, we prevented reordering in the custom scan and had never implemented it before.

The same optimisation applies to having clauses that could have been where clauses. Instead of applying the filter as a HAVING on the aggregate, the planner pushes it down into the plan, where it can reach a custom scan node.

For both filters and projections, we apply them when tuples are read from the tuple store. If no projection or filter is required, the tuple is returned directly from the tuple store. Otherwise tuples from the tuple store are run through the filter and projection until a matching tuple is found and returned.

Besides filters being pushed down, a side effect of quals that could have been a where clause is that a call to read an intermediate result can happen before the first tuple is fetched from the custom scan. This used to fail because the intermediate result would only be pulled to the coordinator on the first tuple fetch. To overcome this problem we now run the distributed subplans before we run the postgres executor. This ensures the intermediate result is present on the coordinator in time. We account for total time instrumentation by removing the instrumentation before handing control to the postgres executor and updating the timings ourselves.

For future SQL support it is enough to create a valid query structure for the part of the query to be executed on the query coordinating node. As a utility we serialise and print that query at debug level 4 so engineers can inspect what kind of query is being planned on the query coordinator.
2020-02-25 14:39:56 +01:00
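As a usage note, the serialised coordinator-side query mentioned above should become visible by raising the client's message level; a sketch with a hypothetical table:

SET client_min_messages TO debug4;
SELECT count(*) FROM dist_events;  -- the coordinator ("master") query is printed as a debug message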
Philip Dubé 025cb94159 Fix multi_task_string_size sometimes leaking intermediate files 2020-02-24 16:33:34 +00:00