Commit Graph

1483 Commits (c47309646761c23844ca0a1b6fd2284867ba91e5)

Author SHA1 Message Date
Marco Slot 7915f15637 Add worker_append_table_to_shard permissions tests 2021-09-10 13:46:24 +02:00
Hanefi Onaldi a9c5925d2e
Bump Citus version to 9.4.6 2021-08-11 11:03:35 +03:00
naisila c2c71eec9e Fix master_update_table_statistics scripts for 9.4 2021-08-03 16:44:12 +03:00
naisila 81bed61b4e Reimplement master_update_table_statistics to detect dist. deadlocks
(ALL CODE BORROWED from commit 2f30614fe3)
2021-08-03 16:44:12 +03:00
Hanefi Onaldi 2a17fdbb88
Bump Citus version to 9.4.5 2021-07-08 09:51:28 +03:00
Marco Slot 1a21c524b7 Fix PG upgrade scripts for 9.4 2021-07-05 16:15:51 +02:00
Onur Tirtir be3da3901e Fix lower boundary calculation when pruning range dist table shards (#5082)
This happens only when we have a "<" or "<=" filter on the distribution
column of a range-distributed table and that filter falls in between
two shards.

When the filter falls in between two shards:

  If the filter is ">" or ">=", then UpperShardBoundary was
  returning "upperBoundIndex - 1", where upperBoundIndex is the
  exclusive shard index used during the binary search.
  This is expected, since upperBoundIndex is an exclusive
  index.

  If the filter is "<" or "<=", then LowerShardBoundary was
  returning "lowerBoundIndex + 1", where lowerBoundIndex is the
  inclusive shard index used during the binary search.
  However, since lowerBoundIndex is an inclusive index, we
  should just return lowerBoundIndex instead of adding 1.
  Before this commit, we were missing the leftmost shard in
  such queries.
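
To make the off-by-one concrete, here is a standalone sketch of the inclusive-versus-exclusive index distinction; the helper names and data layout are invented for illustration and are not the Citus implementation:

```c
#include <stdio.h>

typedef struct ShardRange { int min; int max; } ShardRange;

/* Exclusive result: index of the first shard whose min is > value. */
static int ExclusiveUpperIndex(const ShardRange *shards, int count, int value)
{
	int lo = 0;
	int hi = count;
	while (lo < hi)
	{
		int mid = lo + (hi - lo) / 2;
		if (shards[mid].min <= value)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo;
}

/* Inclusive result: index of the first shard whose max is >= value. */
static int InclusiveLowerIndex(const ShardRange *shards, int count, int value)
{
	int lo = 0;
	int hi = count;
	while (lo < hi)
	{
		int mid = lo + (hi - lo) / 2;
		if (shards[mid].max < value)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo;
}

int main(void)
{
	/* the filter value 15 falls in between these two shards */
	ShardRange shards[] = { { 0, 10 }, { 20, 30 } };
	int value = 15;

	/* shards matching "col <= 15" are 0 .. (exclusive index - 1), i.e. 0 .. 0 */
	int last = ExclusiveUpperIndex(shards, 2, value) - 1;

	/*
	 * Shards matching "col >= 15" are (inclusive index) .. 1, i.e. 1 .. 1.
	 * Adding "+ 1" to this already-inclusive index would skip a shard,
	 * which is the kind of off-by-one the fix above removes.
	 */
	int first = InclusiveLowerIndex(shards, 2, value);

	printf("col <= %d matches shards [0, %d]\n", value, last);   /* [0, 0] */
	printf("col >= %d matches shards [%d, 1]\n", value, first);  /* [1, 1] */
	return 0;
}
```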

* Remove useless conditional branches

The branch that we deleted from UpperShardBoundary was obviously useless.

The other one, in LowerShardBoundary, became useless after we removed the
"+ 1" there.

This is further evidence of what we are fixing with this PR, and how.

* Improve comments and add more

* Add some tests for upper bound calculation too

(cherry picked from commit b118d4188e)

* Also fix a debug message diff for 9.4
2021-07-02 14:59:36 +03:00
Gürkan İndibay d0000a15bd
Bump version to 9.4.4 (#4468)
Update configuration files version info
2021-01-04 13:56:34 +03:00
Onur Tirtir 51f422f3c6
Add some more tests with views to test recursive planning on views (#4404) 2020-12-18 17:42:35 +03:00
Marco Slot 8fae9aae96
Reliably detect local tables in router queries in 9.4 (#4418)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2020-12-17 13:38:16 +01:00
Onur Tirtir 7e99324bd9 Bump version to 9.4.3 2020-11-24 12:19:15 +03:00
Onder Kalaci a943696c44 Do not execute subplans multiple times with cursors
Before this commit, we let AdaptiveExecutorPreExecutorRun()
take effect multiple times, once on every FETCH of a cursor.
That does not affect the correctness of the query results,
but it adds significant overhead.
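
A minimal sketch of the resulting guard; the struct layout and field name below are illustrative stand-ins, not the actual Citus definitions:

```c
#include <stdbool.h>

typedef struct ScanStateSketch
{
	bool subPlansExecuted;   /* set after the first executor run of a cursor */
} ScanStateSketch;

static void ExecuteSubPlans(ScanStateSketch *scanState);

/* Called at the start of every executor run, i.e. on each FETCH of a cursor. */
static void
PreExecutorRun(ScanStateSketch *scanState)
{
	if (scanState->subPlansExecuted)
	{
		/* subplans already ran for this cursor; skip the repeated work */
		return;
	}

	ExecuteSubPlans(scanState);
	scanState->subPlansExecuted = true;
}

static void
ExecuteSubPlans(ScanStateSketch *scanState)
{
	/* expensive part: execute CTE/subquery subplans and distribute results */
	(void) scanState;
}
```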

(cherry picked from commit c433c66f2b)
2020-11-23 13:43:24 +03:00
Onur Tirtir 3d2e1a7464 Bump version to 9.4.2 2020-10-21 15:28:12 +03:00
Marco Slot b7b7b66beb Support reference table view in reference table modification 2020-10-16 13:16:16 +02:00
Marco Slot f6e5006dfd Fix a bug that could lead to multiple maintenance daemons 2020-10-16 13:15:51 +02:00
Marco Slot 03e4bec352 Add maintenance daemon error tests 2020-10-16 13:15:39 +02:00
Onur Tirtir 72c54b5cdd Bump version to 9.4.1 2020-09-30 10:52:31 +03:00
Marco Slot 637d93e8ff Fix EXPLAIN ANALYZE truncation
(cherry picked from commit c9d46c618b)

Conflicts:
	src/test/regress/expected/multi_explain.out
	src/test/regress/sql/multi_explain.sql
2020-09-28 15:49:58 +03:00
Halil Ozan Akgul 993a402c73 Fixes create index concurrently bug
(cherry picked from commit 38b72ddd66)
2020-07-27 10:32:08 +03:00
Halil Ozan Akgül 2271e9ded1 Fixes the non-existing table bug (#4058)
(cherry picked from commit e9f89ed651)
2020-07-27 10:32:08 +03:00
Sait Talha Nisanci 4c90dbbd88 improve error message in secondaries
(cherry picked from commit 6f4686c741467b5c8bd6ca15c0788d8db856392a)
2020-07-21 13:55:12 +03:00
Sait Talha Nisanci 388893ce5e add multi follower repartition tests
(cherry picked from commit 6e5598fd58a1c0c6a597ca06539ac5e286cb6914)
2020-07-21 13:55:08 +03:00
Sait Talha Nisanci 4b98f6c5c2 address feedback
(cherry picked from commit 24043a3602abc7b525f2724a35168e4c45442165)
2020-07-21 13:55:04 +03:00
Sait Talha Nisanci 97dda868a0 use ActiveReadableNodeList in JobExecutorType and task tracker
The reason we should use ActiveReadableNodeList instead of ActiveReadableNonCoordinatorNodeList is that if the coordinator is added to the cluster as a worker, it should be counted as well. Otherwise, if the coordinator is the only node in the cluster, the count will be 0 and we get a warning.

In MultiTaskTrackerExecute, we should connect to the coordinator if it is
added to the cluster, because it will also be assigned tasks.
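
An illustrative sketch of the counting difference; only the two list function names come from this commit, and the length helper is hypothetical:

```c
typedef struct List List;

extern List * ActiveReadableNodeList(void);                /* includes the coordinator */
extern List * ActiveReadableNonCoordinatorNodeList(void);  /* excludes the coordinator */
extern int NodeListLength(List *nodeList);                 /* hypothetical length helper */

static int
ReadableNodeCount(void)
{
	/*
	 * Count nodes via the list that includes the coordinator: in a cluster
	 * where the coordinator is the only node, the non-coordinator list is
	 * empty and a count of 0 would trigger the spurious warning.
	 */
	return NodeListLength(ActiveReadableNodeList());
}
```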

(cherry picked from commit ae6180ace2931223c58b87444a9e812f5e9f06e8)
2020-07-21 13:55:00 +03:00
Sait Talha Nisanci 27ef768f36 use ActivePrimaryNodeList to include coordinator
ActiveReadableWorkerNodeList doesn't include the coordinator; however, if
the coordinator is added as a worker, we should also include it while
planning. The current methods are very easy to misuse, and this
requires a refactoring that makes the distinction between methods that
include the coordinator and those that don't very explicit, as they can
introduce subtle or major bugs pretty easily.

(cherry picked from commit 86b974e4ceddaf5e2c44799148a8cf485c7d90bf)
2020-07-21 13:54:56 +03:00
Sait Talha Nisanci c238e6c8b0 send schema creation/cleanup to coordinator in repartitions
We were using the ALL_WORKERS TargetWorkerSet while sending temporary schema
creation and cleanup commands. We (well, mostly I) thought that ALL_WORKERS would also include the coordinator when it is added as a worker. It turns out that it was FILTERING OUT the coordinator even when it is added as a worker to the cluster.

For some context: in repartitions, for each jobId we create (or at least
are supposed to create) a schema on each worker node in the cluster. Then we partition each shard table into some intermediate files, which is called the PARTITION step. After this step, each node has some intermediate files containing tuples. Then we fetch the partition files to the worker nodes that need them, which is called the FETCH step. Then we create intermediate tables from those files in the temporarily created schemas, which is called the MERGE step. Finally, after evaluating the result, we remove the temporary schemas (one for each job ID on each node) and the files.

If node 1 has file1 and node 2 has file2 after the PARTITION step, it is
enough to either move file1 from node 1 to node 2 or vice versa. So we
prune one of the two fetches.

In the MERGE step, if the schema for a given jobID doesn't exist, the
node tries to use the `public` schema if it is a superuser, which was
actually added for testing purposes in the past.

So when we were not sending schema creation commands for each job ID to
the coordinator (because we were using the ALL_WORKERS flag, which doesn't
include the coordinator), we would basically not have any schemas for
repartitions on the coordinator. The PARTITION step would be executed on
the coordinator (because the tasks are generated in the planner part)
and it wouldn't give us any error, because it doesn't have anything to do
with the temporary schemas (which we didn't create). But later two things
would happen:

- If by chance the fetch is pruned on the coordinator side, the other
nodes would fetch the partitioned files from the coordinator and execute
the query as expected, because it has all the information.
- If the fetch tasks are not pruned on the coordinator, then in the MERGE
step the coordinator would either error out saying that the necessary
schema doesn't exist, or it would try to create the temporary tables
under the public schema (if it is a superuser). But then, if we had the
same task ID with a different jobID, it would fail saying that the table
already exists, which is the error we were getting.

In the first case, the query would work okay, but it would still not do
the cleanup, so we would leave the partitioned files from the PARTITION
step behind, and ensure_no_intermediate_data_leak would fail.

To make things more explicit and prevent such bugs in the future,
ALL_WORKERS is renamed to ALL_NON_COORD_WORKERS, and a new flag that
returns all the active nodes is added as ALL_DATA_NODES. For the
repartition case, we don't strictly need the reference-table-only nodes,
but this version makes the code simpler and there shouldn't be any
significant performance issue with that.
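
A sketch of the resulting flags; the enum values follow this message, while the helper below is illustrative only:

```c
typedef enum TargetWorkerSet
{
	ALL_NON_COORD_WORKERS,   /* formerly ALL_WORKERS: filters out the coordinator */
	ALL_DATA_NODES           /* all active nodes, including a coordinator added as a worker */
} TargetWorkerSet;

/* Repartition schema creation and cleanup should target every data node. */
static TargetWorkerSet
RepartitionTargetWorkerSet(void)
{
	return ALL_DATA_NODES;
}
```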

(cherry picked from commit 6532506f4b92b1316eea0812b2bcedb818d3b25c)
2020-07-21 13:54:51 +03:00
Nils Dijk 23f24a9668 fix flaky tests due to nondeterministic order of test output (#4029)
As reported on #4011 (https://github.com/citusdata/citus/pull/4011/files#r453804702), some of the tests were flaky due to a nondeterministic order of test outputs.

This PR makes the test output ordered for all tests returning non-zero rows.

Needs to be backported to 9.2, 9.3, 9.4
(cherry picked from commit 23d44eba9f)
2020-07-21 11:01:49 +03:00
Nils Dijk d77e386e92 force aliases in deparsing for queries with anonymous column references (#4011)
DESCRIPTION: Force aliases in deparsing for queries with anonymous column references

Fixes: #3985

The root cause has to do with discrepancies in the query tree we create. I think in the future we should spend some time on categorising all the changes we made to ruleutils and see if we can change the `query` data structure we pass to the deparser so that it is an actual valid Postgres query for the deparser to render.

For now the fix is to keep track, besides changing the names of the entries in the target list, of whether we have a reference to an anonymous column. If there are anonymous columns, we set the `printaliases` flag to true, which forces the deparser to add the aliases.
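
A hedged sketch of that bookkeeping; apart from `printaliases`, the names below are invented for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct TargetEntryInfo
{
	const char *columnName;   /* NULL stands for an anonymous column reference */
} TargetEntryInfo;

static bool
HasAnonymousColumnReference(const TargetEntryInfo *entries, int entryCount)
{
	for (int entryIndex = 0; entryIndex < entryCount; entryIndex++)
	{
		if (entries[entryIndex].columnName == NULL)
		{
			return true;
		}
	}
	return false;
}

/* While renaming target list entries, also decide whether to force aliases. */
static void
SetDeparseAliasFlag(const TargetEntryInfo *entries, int entryCount, bool *printaliases)
{
	*printaliases = HasAnonymousColumnReference(entries, entryCount);
}
```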
(cherry picked from commit 449d1f0e91)
2020-07-21 11:01:49 +03:00
Marco Slot b7b960955c Rename master evaluation to coordinator evaluation
(cherry picked from commit b4fec63bc0)
2020-07-21 11:01:49 +03:00
Hadi Moshayedi 49f130fcd3 Fix Subtransaction memory leak
(cherry picked from commit 3651fc64ee)
2020-07-21 11:01:49 +03:00
Jelte Fennema 49d23229c4 Fix write queries with const expressions and COLLATE in various places (#3973)
(cherry picked from commit 16242d5264)
2020-07-21 11:01:48 +03:00
Jelte Fennema 9a4fddc9c5 Fix crash with single node dummy placement (#3993)
Static analysis found an issue where we could dereference `NULL`, because
`CreateDummyPlacement` could return `NULL` when there were no workers. This
PR changes it so that it never returns `NULL`, which was intended by
@marcocitus when doing this change: https://github.com/citusdata/citus/pull/3887/files#r438136433
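
A minimal sketch of the never-NULL behaviour, with simplified stand-in types and helpers rather than the actual Citus definitions:

```c
typedef struct ShardPlacement
{
	int nodeId;
	/* ... */
} ShardPlacement;

extern int ActiveWorkerCount(void);                       /* hypothetical */
extern ShardPlacement * RandomWorkerPlacement(void);      /* hypothetical */
extern ShardPlacement * LocalCoordinatorPlacement(void);  /* hypothetical */

/* Never returns NULL: fall back to the coordinator when there are no workers. */
static ShardPlacement *
CreateDummyPlacementSketch(void)
{
	if (ActiveWorkerCount() == 0)
	{
		return LocalCoordinatorPlacement();
	}

	return RandomWorkerPlacement();
}
```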

While adding tests for Citus on a single node, I also added some more basic
tests, and it turns out we error out on repartition joins. This has been
present since `shouldhaveshards` was introduced and is not trivial to fix.
So I created a separate issue for this: https://github.com/citusdata/citus/issues/3996
(cherry picked from commit ab01571c9e)
2020-07-21 11:01:48 +03:00
Philip Dubé 1d54b8f301 ruleutils: use get_rtable_name for deparsing resultRelation
(cherry picked from commit 444472ffc6)
2020-07-21 11:01:48 +03:00
Hanefi Önaldı 21ca434bef
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values, which is indicated by the GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
defaults for the affected users and databases are stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
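
A minimal sketch of that splitting, assuming PostgreSQL's SplitGUCList() helper from utils/varlena.h (error handling elided):

```c
#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/varlena.h"

static List *
GucListValueToItems(const char *gucValue)
{
	/* SplitGUCList modifies its input, so work on a copy */
	char *rawString = pstrdup(gucValue);
	List *itemList = NIL;

	if (!SplitGUCList(rawString, ',', &itemList))
	{
		ereport(ERROR, (errmsg("invalid list syntax in GUC value")));
	}

	/* e.g. "pg_catalog, public" becomes ("pg_catalog", "public") */
	return itemList;
}
```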

(cherry picked from commit e534dbae4a)
2020-07-21 04:12:39 +03:00
Onur Tirtir 06c878b348 Bump Citus version to 9.4.0 2020-07-01 11:01:59 +03:00
Hanefi Önaldı ca2ececb3b
Downgrade path from 9.4 to 9.3 to 9.2 2020-07-01 10:38:11 +03:00
Sait Talha Nisanci e5a21f07cb test aggregates with expressions 2020-06-30 11:41:16 -07:00
Jelte Fennema 392c5e2c34
Fix wrong cancellation message about distributed deadlocks (#3956) 2020-06-30 14:57:46 +02:00
Jelte Fennema 02fa942be1
Fix assertion error when rolling back to savepoint (#3868)
It was possible to get an assertion error if a DML command that opened a
connection was cancelled and then "ROLLBACK TO SAVEPOINT" was used to
continue the transaction. The reason for this was that canceling the
transaction might leave the `claimedExclusively` flag on for (some of)
its connections.

This caused an assertion failure because `CanUseExistingConnection`
would return false and a new connection would be opened, and then there
would be two connections doing DML for the same placement, which is
disallowed. That this situation caused an assertion failure instead of
an error means that, without asserts, this could possibly result in some
visibility bugs similar to the ones described in
https://github.com/citusdata/citus/issues/3867
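
A hedged sketch of clearing the flag when rolling back to a savepoint; the loop and helper are simplified stand-ins for the Citus connection-management code:

```c
#include <stdbool.h>

typedef struct MultiConnection
{
	bool claimedExclusively;
	/* ... */
} MultiConnection;

static void
UnclaimConnectionsOnRollbackToSavepoint(MultiConnection **connections, int connectionCount)
{
	for (int connectionIndex = 0; connectionIndex < connectionCount; connectionIndex++)
	{
		/*
		 * A cancelled DML command may have left this flag set; clearing it lets
		 * CanUseExistingConnection reuse the connection instead of opening a
		 * second one for the same placement.
		 */
		connections[connectionIndex]->claimedExclusively = false;
	}
}
```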
2020-06-30 11:31:46 +02:00
Hadi Moshayedi 4ed59d2db3 Move more from insert_select_executor to insert_select_planner 2020-06-26 08:08:26 -07:00
Hadi Moshayedi cd25a27174 Fix crash caused by EXPLAIN EXECUTE INSERT ... SELECT 2020-06-25 08:55:48 -07:00
Hadi Moshayedi 4e8d79998e Save INSERT/SELECT method in DistributedPlan.
This is so that we don't need to calculate it twice in
insert_select_executor.c and multi_explain.c, which can
cause a discrepancy if an update in one of them is not
reflected in the other. 2020-06-25 08:55:48 -07:00
Jelte Fennema 64506143e4
Replace flaky repartition analyze test with a non flaky one (#3950)
The flaky test was introduced in #3941. This removes that flaky test and
adds a new one that fails in the same manner when removing the fix in #3941.

An example of a random failure can be found here:
https://app.circleci.com/pipelines/github/citusdata/citus/9558/workflows/de76e7a5-6558-46c9-97e7-8b1dae1f173b/jobs/135876/steps
2020-06-25 15:19:15 +02:00
SaitTalhaNisanci 50e115fe3a
test task tracker repartition with replication >1 (#3944) 2020-06-24 14:54:20 +03:00
SaitTalhaNisanci f458d1fd1c
Fix/task execution (#3941)
* Do not set TaskExecution with the adaptive executor

The adaptive executor uses a utility method from the task tracker for
repartition joins; however, the adaptive executor doesn't need
taskExecution, which is only used by the task tracker. This causes a
problem when EXPLAIN ANALYZE is used, because what taskExecution points
to might be random.

We solve this by not setting taskExecution from the adaptive executor,
so it will stay NULL as set by CreateTask.

* use same memory context as task for taskExecution

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-24 12:10:00 +03:00
Philip Dubé cd0b2ad5b5 citus_evaluate_expression: call expand_function_arguments beforehand to avoid segfaulting on implicit parameters 2020-06-23 18:06:46 +00:00
Jelte Fennema b3ec6fbe7a
Make check_enterprise_merge script stricter (#3918)
We've had two issues with merge conflicts to enterprise in the last week
that happened suddenly. Because of this CI check, such conflicts actually
block all community PRs from being merged.

This PR tries to improve on the previous script we had, by putting tougher
constraints on when a merge is allowed.

Previously the check would pass in two cases:
1. This PR can be merged without conflicts into `enterprise-master`
2. A branch exists with the same name as this PR on enterprise and that can be
   merged into `enterprise-master`.

The first case stays the same, but I've changed the second case to require the
following instead:
1. A branch exists on enterprise with the same name as this PR
2. **NEW: This branch contains the last commit of the community PR branch**
3. This branch can be merged into enterprise-master

This makes sure the enterprise branch is actually up to date and not forgotten about.

If we still get problems with this change, future improvements could be:
1. Check that the PR on enterprise passes CI
2. Check that the PR on enterprise has been approved
3. Require the enterprise PR branch to be merged before merging community.
2020-06-19 12:45:36 +02:00
SaitTalhaNisanci 3a789352b6
rename citus hammerdb branch prefix as citus_github_push (#3925)
When we use hammerdb jobs, the job creates a branch on test automation.
Since that branch should be deleted, it gets a `delete_me` prefix;
however, since the result branch on release-test-results will have the
test automation branch as a prefix, it will also carry the `delete_me`
prefix, which seems a bit confusing.

This PR updates the prefix to citus_github_push
2020-06-18 21:11:58 +03:00
Marco Slot 2a3234ca26 Rename masterQuery to combineQuery 2020-06-17 14:14:37 +02:00
Jelte Fennema 0259815d3a
Fix EXPLAIN ANALYZE received data counter issues (#3917)
In #3901 the "Data received from worker(s)" sections were added to EXPLAIN
ANALYZE. After merging, @pykello posted some review comments. This addresses
those comments, as well as fixing other issues that I found while addressing
them. The things this PR does:

1. Fix `EXPLAIN ANALYZE EXECUTE p1` to not increase received data on every
   execution
2. Fix `EXPLAIN ANALYZE EXECUTE p1(1)` to not return 0 bytes as received data
   always.
3. Move `EXPLAIN ANALYZE` specific logic to `multi_explain.c` from
   `adaptive_executor.c`
4. Change naming of new explain sections to `Tuple data received from node(s)`.
   Firstly, because a task can reference the coordinator too, "worker(s)" was
   incorrect. Secondly, to indicate that this is tuple data and not all network
   traffic that was performed.
5. Rename `totalReceivedData` in our codebase to `totalReceivedTupleData` to
   make it clearer that it's a tuple data counter, not all network traffic.
6. Actually add `binary_protocol` test to `multi_schedule` (woops)
7. Fix a randomly failing test in `local_shard_execution.sql`.
2020-06-17 11:33:38 +02:00