Commit Graph

3952 Commits (078dcae18c66999e95774cb9eebe0bb747c4567c)

Author SHA1 Message Date
Halil Ozan Akgül e9f89ed651
Fixes the non-existing table bug (#4058) 2020-07-23 18:01:21 +03:00
Önder Kalacı 770610ab11
Merge pull request #4055 from citusdata/improve_find_available_connection
Make FindAvailableConnection() more strict
2020-07-23 16:09:41 +02:00
Onder Kalaci a2f53dff74 Make FindAvailableConnection() more strict
With adaptive connection management, we might have some connections
which are not fully initialized. Such connections should not qualify
as available.
2020-07-23 15:59:50 +02:00
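
A minimal sketch of the stricter check, assuming illustrative names; none of these identifiers are claimed to match the actual Citus source:

```c
#include <stdbool.h>

typedef enum ConnInitState
{
	CONN_NOT_INITIALIZED,
	CONN_INITIALIZED
} ConnInitState;

typedef struct Connection
{
	bool claimedExclusively;
	ConnInitState initState;
} Connection;

static bool
ConnectionAvailable(Connection *connection)
{
	/* a connection claimed by another execution is never available */
	if (connection->claimedExclusively)
	{
		return false;
	}

	/*
	 * The stricter part: with adaptive connection management a connection
	 * may exist but not be fully initialized yet, and such a connection
	 * should not qualify as available.
	 */
	return connection->initState == CONN_INITIALIZED;
}
```
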
Önder Kalacı 20a46f8f57
Merge pull request #4054 from citusdata/rename_connection_flag
Minor refactorings in COPY command execution
2020-07-23 15:58:16 +02:00
Onder Kalaci cfb633601d Minor refactorings in COPY command execution
1) Rename CONNECTION_PER_PLACEMENT to REQUIRE_CLEAN_CONNECTION. This is
mostly for clarity, as the new name reveals the intent better.

2) We also make sure to mark all the COPY connections critical,
even if they were accessed earlier in the transaction
2020-07-23 15:36:19 +02:00
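
A sketch of the rename described above; the enum shape and bit value are assumptions for illustration, not the exact Citus definition:

```c
typedef enum MultiConnectionMode
{
	/*
	 * Formerly CONNECTION_PER_PLACEMENT: request a connection that has not
	 * been used earlier in the transaction, which the new name states
	 * directly.
	 */
	REQUIRE_CLEAN_CONNECTION = 1 << 0
} MultiConnectionMode;
```
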
SaitTalhaNisanci 64469708af
separate the logic in ManageWorkerPool (#3298) 2020-07-23 13:47:35 +03:00
Önder Kalacı 2c8066a313
Merge pull request #4052 from citusdata/refactor_adaptive_flags
Move executor specific logic to a function
2020-07-22 16:31:15 +02:00
Onder Kalaci 52c0fccb08 Move executor specific logic to a function
As we're planning to reuse the same logic, it'd be nice to
use the exact same functions.
2020-07-22 15:09:47 +02:00
Önder Kalacı d03c4aff2d
Merge pull request #4053 from citusdata/unify_node_comparisions
Unify node sort ordering
2020-07-22 11:23:18 +02:00
Onder Kalaci ff6555299c Unify node sort ordering
The executor relies on WorkerPool, and many other places rely on WorkerNode.
With this commit, we make sure that they are sorted via the same function/logic.
2020-07-22 11:03:25 +02:00
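
A sketch of the idea, assuming a simplified WorkerNode: both the executor's WorkerPool ordering and the WorkerNode ordering funnel through one comparator, so the two sort orders can never diverge.

```c
#include <string.h>

#define WORKER_LENGTH 256

typedef struct WorkerNode
{
	char workerName[WORKER_LENGTH];
	int workerPort;
} WorkerNode;

/* qsort-compatible comparator: order by host name first, then by port */
static int
CompareWorkerNodes(const void *leftElement, const void *rightElement)
{
	const WorkerNode *left = (const WorkerNode *) leftElement;
	const WorkerNode *right = (const WorkerNode *) rightElement;

	int nameCompare = strncmp(left->workerName, right->workerName,
							  WORKER_LENGTH);
	if (nameCompare != 0)
	{
		return nameCompare;
	}

	return left->workerPort - right->workerPort;
}
```
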
SaitTalhaNisanci 40405cd978
Merge pull request #4042 from citusdata/cleanup/task-tracker
Clean up task-tracker related comments, documentation, and tests
2020-07-21 16:52:22 +03:00
Sait Talha Nisanci 01c23b0df2 update test outputs with task-tracker removal 2020-07-21 16:25:08 +03:00
Sait Talha Nisanci 1dbd545cf4 replace task-tracker with adaptive in tests 2020-07-21 16:21:01 +03:00
Sait Talha Nisanci 4308d867d9 remove task-tracker in comments, documentation 2020-07-21 16:21:01 +03:00
Sait Talha Nisanci a3dc8fe2b5 remove occurrences of task-tracker from gucs 2020-07-21 16:19:46 +03:00
Onur Tirtir 6aa29abd86
Merge pull request #4049 from citusdata/update-cl-934
Update CHANGELOG for 9.3.4
2020-07-21 14:03:11 +03:00
Onur Tirtir 9e12a39cb7 Update CHANGELOG for 9.3.4 2020-07-21 10:26:06 +03:00
Hanefi Onaldi 61bc47e6a8
Merge pull request #4035 from citusdata/fix-4012
Split list of configuration values properly
2020-07-21 04:24:17 +03:00
Hanefi Önaldı e534dbae4a
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values, which is indicated by the GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
default for affected users and databases is stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
2020-07-21 03:49:57 +03:00
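
A minimal sketch of the splitting step, assuming it can be done with PostgreSQL's SplitGUCList() from utils/varlena.h; the wrapper function name and error wording here are illustrative, not the exact Citus code:

```c
#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/varlena.h"

static List *
SplitSetConfigValue(const char *setConfigValue)
{
	/* SplitGUCList() modifies its input in place, so work on a copy */
	char *rawValueCopy = pstrdup(setConfigValue);
	List *valueList = NIL;

	if (!SplitGUCList(rawValueCopy, ',', &valueList))
	{
		ereport(ERROR, (errmsg("invalid list syntax in setconfig value")));
	}

	/* e.g. "citus, pg_stat_statements" becomes two separate items */
	return valueList;
}
```
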
Nils Dijk 00a4a15d95
fix sorting on string literal (#4045)
As noted by Talha https://github.com/citusdata/citus/pull/4029#issuecomment-660466972 there was still some sort order flappiness in the test.

The root cause is that sorting on `1::text` sorts on the constant literal `'1'` rather than on the first column, so every row compares equal and the output order is nondeterministic.

This behaviour is consistent with Postgres' behaviour, so no bug on Citus' side.
2020-07-20 17:39:27 +02:00
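
A standalone illustration (not Citus code) of the tie problem: when every row's sort key is the same literal, the comparator reports a tie for every pair, and an unstable sort may emit any order.

```c
#include <stdio.h>
#include <stdlib.h>

static int
CompareByConstant(const void *left, const void *right)
{
	(void) left;
	(void) right;

	/* the sort key is the same literal for every row: all rows tie */
	return 0;
}

int
main(void)
{
	int rows[] = { 3, 1, 2 };

	/* qsort is not stable, so the order of tied elements is unspecified */
	qsort(rows, 3, sizeof(int), CompareByConstant);

	printf("%d %d %d\n", rows[0], rows[1], rows[2]);
	return 0;
}
```
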
Önder Kalacı 32d9cce8a2
Merge pull request #4041 from citusdata/remove_router_executable_flag
Remove `routerExecutable` flag from `DistributedPlan`
2020-07-20 16:03:37 +02:00
Onder Kalaci c25de2cf22 Remove routerExecutable flag from DistributedPlan
As it no longer serves any purpose
2020-07-20 12:45:05 +02:00
SaitTalhaNisanci b3af63c8ce
Remove task tracker executor (#3850)
* use adaptive executor even if task-tracker is set

* Update check-multi-mx tests for adaptive executor

Basically repartition joins are enabled where necessary. For parallel
tests, the max adaptive executor pool size is decreased to 2; otherwise we
would get a "too many clients" error.

* Update limit_intermediate_size test

It seems that when we use the adaptive executor instead of the task
tracker, we exceed the intermediate result size limit less often in the
test. The tests are updated accordingly.

* Update multi_router_planner

It seems that there is one problem with multi_router_planner when we use
the adaptive executor; we should fix the following error:
+ERROR:  relation "authors_range_840010" does not exist
+CONTEXT:  while executing command on localhost:57637

* update repartition join tests for check-multi

* update isolation tests for repartitioning

* Error out if shard_replication_factor > 1 with repartitioning

As we are removing the task tracker, we cannot switch to it if
shard_replication_factor > 1. In that case, we simply error out.

* Remove MULTI_EXECUTOR_TASK_TRACKER

* Remove multi_task_tracker_executor

Some utility methods are moved to task_execution_utils.c.

* Remove task tracker protocol methods

* Remove task_tracker.c methods

* remove unused methods from multi_server_executor

* fix style

* remove task tracker specific tests from worker_schedule

* comment out task tracker udf calls in tests

We were using task tracker UDFs to test permissions in
multi_multiuser.sql. We should find some other way to test them, then we
should remove the commented-out task tracker calls.

* remove task tracker test from follower schedule

* remove task tracker tests from multi mx schedule

* Remove task-tracker specific functions from worker functions

* remove multi task tracker extra schedule

* Remove unused methods from multi physical planner

* remove task_executor_type related things in tests

* remove LoadTuplesIntoTupleStore

* Do initial cleanup for repartition leftovers

During startup, the task tracker would call TrackerCleanupJobDirectories and
TrackerCleanupJobSchemas to clean up leftover directories and job
schemas. With the adaptive executor, it is possible to leak these things
while doing repartitions as well. We don't retry cleanups, so errors can
leave leftovers behind.

TrackerCleanupJobDirectories is renamed to
RepartitionCleanupJobDirectories since it is repartition specific now.
TrackerCleanupJobSchemas, however, cannot be reused because it is
task-tracker specific, and that function is currently a no-op anyway.

We should add cleaning up intermediate schemas to the DoInitialCleanup
method once that problem is solved (we might want to solve it in this PR
as well).

* Revert "remove task tracker tests from multi mx schedule"

This reverts commit 03ecc0a681.

* update multi mx repartition parallel tests

* not error with task_tracker_conninfo_cache_invalidate

* not run 4 repartition queries in parallel

It seems that when we run 4 repartition queries in parallel we get a
"too many clients" error on CI even though we don't get it locally. Our
guess is that this is because we open/close many connections without doing
much work, and Postgres closes connections with some delay. Hence even
though connections are removed from pg_stat_activity, they might
still not be closed. If the above assumption is correct, it is unlikely
to happen in practice because:
- There is some network latency in clusters, so this leaves some time
for connections to close
- Repartition joins return some data, and that also leaves some time for
connections to be fully closed.

As we don't get this error locally, we currently assume that it is
not a bug. Ideally this wouldn't happen once we get rid of the
task-tracker repartition methods, because they don't do any pruning and
might be opening more connections than necessary.

If this still gives us a "too many clients" error, we can try to increase
max_connections in our test suite (which is 100 by default).

Also there are different places where this error is raised in Postgres,
but after adding some backtraces it seems that we get this one from
ProcessStartupPacket. The backtraces can be found at this link:
https://circleci.com/gh/citusdata/citus/138702

* Set distributedPlan->relationIdList when it is needed

It seems that we were setting distributedPlan->relationIdList after
JobExecutorType is called, which chooses the task tracker if the
replication factor > 1 and the query has a repartition join. However, it
uses relationIdList to decide whether the query has a repartition join,
and since the list was not set yet, it would always conclude there is no
repartition join and choose the adaptive executor when it should choose
the task tracker (see the sketch after this commit entry).

* use adaptive executor even with shard_replication_factor > 1

It seems that we were already using adaptive executor when
replication_factor > 1. So this commit removes the check.

* remove multi_resowner.c and deprecate some settings

* remove TaskExecution related leftovers

* change deprecated API error message

* not recursively plan single relation repartition subquery

* recursively plan single relation repartition subquery

* test deprecated task tracker functions

* fix overlapping shard intervals in range-distributed test

* fix error message for citus_metadata_container

* drop task-tracker deprecated functions

* put the implementation back into worker_cleanup_job_schema_cache since Citus Cloud uses it

* drop some functions, add downgrade script

Some deprecated functions are dropped.
A downgrade script is added.
Some GUCs are deprecated.
A new GUC for the repartition join bucket size is added.

* add ORDER BY to a test to fix flappiness
2020-07-18 13:11:36 +03:00
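
A minimal sketch of the ordering bug from the "Set distributedPlan->relationIdList" item above; every name here is a simplified stand-in, not the actual Citus structure:

```c
#include <stdio.h>

typedef struct DistributedPlan
{
	int relationCount;  /* stand-in for relationIdList */
} DistributedPlan;

/* stand-in for JobExecutorType(): repartition queries need special handling */
static const char *
ChooseExecutor(const DistributedPlan *plan)
{
	return (plan->relationCount > 1) ? "repartition-aware" : "plain adaptive";
}

int
main(void)
{
	DistributedPlan plan = { 0 };

	/* buggy order: the executor is chosen before the relation list is set */
	const char *buggyChoice = ChooseExecutor(&plan);
	plan.relationCount = 2;

	/* fixed order: populate the relation list first, then decide */
	DistributedPlan fixedPlan = { .relationCount = 2 };
	const char *fixedChoice = ChooseExecutor(&fixedPlan);

	printf("buggy: %s, fixed: %s\n", buggyChoice, fixedChoice);
	return 0;
}
```
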
Hadi Moshayedi 339d43357c
Merge pull request #4037 from citusdata/remove_per_placement_query
Refactor: Use TupleDestination API for partitioning in insert/select.
2020-07-17 10:20:07 -07:00
Hadi Moshayedi 13003d8d05 Use TupleDestination API for partitioning in insert/select. 2020-07-17 09:43:46 -07:00
Marco Slot f323033ce8
Merge pull request #4036 from citusdata/fix/overflow 2020-07-16 14:57:38 +02:00
Marco Slot b823f2127d Prevent integer overflow in FindShardIntervalIndex 2020-07-16 14:30:56 +02:00
Nils Dijk d0b6e62c9a
change wording to allowlist and the like (#3906)
In the same line as #3904

Change wording to better reflect use and remove words that enforce/maintain bias.
2020-07-15 16:24:40 +02:00
Marco Slot 1baf6c3a45
Merge pull request #3976 from citusdata/fix/foreign-key-to-local-table-hint
Improve error message when creating a foreign key to a local table
2020-07-15 14:30:19 +02:00
Marco Slot e09860e9e3
Merge pull request #3991 from citusdata/fix/remove-level-assert
Remove executor/planner level asserts in abort handler
2020-07-14 22:43:24 +02:00
SaitTalhaNisanci bc011a6286
Add IsCitusTable check to citus table utilities (#4028) 2020-07-14 18:29:33 +03:00
Nils Dijk 23d44eba9f
fix flappy tests due to nondeterministic order of test output (#4029)
As reported on #4011 https://github.com/citusdata/citus/pull/4011/files#r453804702 some of the tests were flapping due to a nondeterministic order of test outputs.

This PR makes the test output ordered for all tests returning non-zero rows.

Needs to be backported to 9.2, 9.3, 9.4
2020-07-14 15:47:29 +02:00
Hanefi Onaldi 8189415731
Merge pull request #4004 from citusdata/move-downgrades 2020-07-14 13:56:33 +03:00
Hanefi Önaldı 315b323d47
Introduce new make targets for downgrade scripts
Here are the updated make targets:
- install: install everything except downgrade scripts.
- install-downgrades: build and install only the downgrade migration scripts.
- install-all: install everything along with the downgrade migration scripts.
2020-07-14 13:10:18 +03:00
SaitTalhaNisanci ab5be77709
test coordinator reference-distributed table join (#3698) 2020-07-14 11:43:03 +03:00
SaitTalhaNisanci fd760fa4b3
Merge pull request #4005 from citusdata/fix/coordinator_repartition_join
Send commands to coordinator when it is added as a worker
2020-07-13 20:22:43 +03:00
Sait Talha Nisanci 1b5ed45a58 add multi follower repartition tests 2020-07-13 19:50:50 +03:00
Sait Talha Nisanci 510535f558 address feedback 2020-07-13 19:45:02 +03:00
Sait Talha Nisanci 41ec76a6ad use ActiveReadableNodeList in JobExecutorType and task tracker
The reason we should use ActiveReadableNodeList instead of ActiveReadableNonCoordinatorNodeList is that if the coordinator is added to the cluster as a worker, it should be counted as well. Otherwise, if only the coordinator is in the cluster, the count will be 0 and we get a warning.

In MultiTaskTrackerExecute, we should connect to the coordinator if it is
added to the cluster because it will also be assigned tasks.
2020-07-13 19:45:02 +03:00
Sait Talha Nisanci d97d03ec65 use ActivePrimaryNodeList to include coordinator
ActiveReadableWorkerNodeList doesn't include the coordinator, however if
the coordinator is added as a worker, we should also include it while
planning. The current methods are easily misused; a refactoring is needed
to make the distinction between methods that include the coordinator and
those that don't explicit, as they can introduce subtle/major bugs pretty
easily.
2020-07-13 19:20:15 +03:00
Sait Talha Nisanci db1b78148c send schema creation/cleanup to coordinator in repartitions
We were using the ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup commands. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even if it is added as a worker to the cluster.

So to give some context here: in repartitions, for each jobId we create
(at least we were supposed to) a schema in each worker node in the cluster. Then we partition each shard table into some intermediate files, which is called the PARTITION step. After this step each node has some intermediate files holding tuples. Then we fetch the partition files to the necessary worker nodes, which is called the FETCH step. Then from the files we create intermediate tables in the temporarily created schemas, which is called the MERGE step. After evaluating the result, we remove the temporary schemas (one for each job ID in each node) and the files.

If node 1 has file1, and node 2 has file2 after PARTITION step, it is
enough to either move file1 from node1 to node2 or vice versa. So we
prune one of them.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, which was
actually added for testing in the past.

So when we were not sending schema creation commands for each job ID to
the coordinator (because we were using the ALL_WORKERS flag, and it doesn't
include the coordinator), we would basically not have any schemas for
repartitions on the coordinator. The PARTITION step would be executed on
the coordinator (because the tasks are generated in the planner part)
and it wouldn't give us any error because it doesn't have anything to do
with the temporary schemas (that we didn't create). But later two things
could happen:

- If by chance the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because it has all the information.
- If the fetch tasks are not pruned on the coordinator, in the MERGE
step the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under the public schema (if it is a superuser). But then if we had the same
task ID with a different jobID it would fail saying that the table already
exists, which is an error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, hence we would leave the partitioned files from the
PARTITION step there. Hence ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is renamed to ALL_NON_COORD_WORKERS, and a new flag that
returns all the active nodes is added as ALL_DATA_NODES (see the sketch
after this commit entry). For the repartition case, we don't strictly need
the nodes that only hold reference tables, but this version keeps the code
simpler and there shouldn't be any significant performance issue with that.
2020-07-13 19:20:15 +03:00
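
A sketch of the renamed flags per the commit message above; the exact enum shape in Citus is an assumption:

```c
typedef enum TargetWorkerSet
{
	/*
	 * Formerly ALL_WORKERS: filters out the coordinator even when it is
	 * added to the cluster as a worker, which caused the missing-schema bug.
	 */
	ALL_NON_COORD_WORKERS,

	/* new: every active node, including a coordinator that holds data */
	ALL_DATA_NODES
} TargetWorkerSet;
```
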
SaitTalhaNisanci 76ddb85545
improve error message in secondaries (#4025) 2020-07-13 19:18:57 +03:00
Nils Dijk 449d1f0e91
force aliases in deparsing for queries with anonymous column references (#4011)
DESCRIPTION: Force aliases in deparsing for queries with anonymous column references

Fixes: #3985  

The root cause has to do with discrepancies in the query tree we create. I think in the future we should spend some time categorising all the changes we made to ruleutils and see if we can change the `query` data structure we pass to the deparser so that it holds an actual valid Postgres query for the deparser to render.

For now the fix is to keep track, besides changing the names of the entries in the target list, also of whether we have a reference to an anonymous column. If there are anonymous columns we set the `printaliases` flag to true, which forces the deparser to add the aliases.
2020-07-13 16:29:24 +02:00
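
A sketch of the fix, assuming a simplified context struct; `printaliases` follows the PR description above, the rest is illustrative:

```c
#include <stdbool.h>

typedef struct DeparseContext
{
	bool printaliases;  /* when true, the deparser always emits column aliases */
} DeparseContext;

/*
 * While renaming target-list entries, also record whether any referenced
 * column is anonymous; if so, force the deparser to add aliases so the
 * rendered query stays valid.
 */
static void
NoteAnonymousColumnReference(DeparseContext *context)
{
	context->printaliases = true;
}
```
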
Marco Slot 9cb8dc9d12 Improve error message when creating a foreign key to a local table 2020-07-13 13:57:22 +02:00
Marco Slot 5fbb925df1 Remove level asserts in abort handler 2020-07-12 22:54:35 +02:00
Onur Tirtir 50b2c5a7aa
Merge pull request #4023 from citusdata/dont-check-merge-to-enterprise-release
Don't check-merge-to-enterprise for release branches
2020-07-10 19:59:32 +03:00
Onur Tirtir 1c6439d1af Don't run check-merge-to-enterprise for release branches 2020-07-10 18:28:35 +03:00
Onur Tirtir f3a01482b4
Merge pull request #4021 from citusdata/update-cl-0710-93
Update CHANGELOG for 9.3.3
2020-07-10 17:47:28 +03:00
Onur Tirtir 4c26bb5ffc Update CHANGELOG for 9.3.3 2020-07-10 15:01:42 +03:00
SaitTalhaNisanci b8830d063f
remove no-op check in TaskListRequires2PC (#4018)
We already return true if the replication model is REPLICATION_MODEL_2PC at
the very beginning of the function, hence the later check is never reached.
2020-07-10 14:16:23 +03:00
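
A minimal sketch of the dead check (the function and enum names follow the commit message; the surrounding logic is simplified):

```c
#include <stdbool.h>

typedef enum ReplicationModel
{
	REPLICATION_MODEL_COORDINATOR,
	REPLICATION_MODEL_2PC
} ReplicationModel;

static bool
TaskListRequires2PC(ReplicationModel replicationModel, bool multipleTasks)
{
	/* the early return at the very beginning of the function */
	if (replicationModel == REPLICATION_MODEL_2PC)
	{
		return true;
	}

	/* ... other checks ... */

	/*
	 * The removed branch: it could never fire, because the early return
	 * above already handled REPLICATION_MODEL_2PC.
	 */
	if (multipleTasks && replicationModel == REPLICATION_MODEL_2PC)
	{
		return true;
	}

	return false;
}
```
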