Commit Graph

3735 Commits (48fab6f264e7c5d98f25ccf7415d24cb0b3a86e0)

Author SHA1 Message Date
Jelte Fennema 48fab6f264 Replace words that have bad associations (#3992)
We had a few words in our codebase that static analysis flagged as having bad
associations.
(cherry picked from commit f6e2f1b1cb)
2020-07-21 11:01:48 +03:00
Jelte Fennema 9a4fddc9c5 Fix crash with single node dummy placement (#3993)
Static analysis found an issue where we could dereference `NULL`, because
`CreateDummyPlacement` could return `NULL` when there were no workers. This
PR changes it so that it never returns `NULL`, which was intended by
@marcocitus when doing this change: https://github.com/citusdata/citus/pull/3887/files#r438136433
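
A minimal sketch of the pattern, with a hypothetical struct and a plain-C exit standing in for Postgres's ERROR mechanism (this is not the actual Citus code):

```c
#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-in for Citus's shard placement struct */
typedef struct DummyPlacement
{
	int nodeId;
} DummyPlacement;

/* Sketch of the fix: error out when there are no workers instead of
 * returning NULL, so callers can never dereference a NULL placement. */
static DummyPlacement *
CreateDummyPlacement(int workerCount)
{
	if (workerCount == 0)
	{
		fprintf(stderr, "could not find any worker nodes\n");
		exit(EXIT_FAILURE); /* the real code raises a Postgres ERROR */
	}

	DummyPlacement *placement = calloc(1, sizeof(DummyPlacement));
	placement->nodeId = 1; /* illustrative default */
	return placement;      /* never NULL */
}

int
main(void)
{
	DummyPlacement *placement = CreateDummyPlacement(1);
	printf("dummy placement on node %d\n", placement->nodeId);
	free(placement);
	return 0;
}
```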

While adding tests for Citus on a single node, I also added some more basic
tests, and it turns out we error out on repartition joins. This issue has been
present since `shouldhaveshards` was introduced and is not trivial to fix.
So I created a separate issue for this: https://github.com/citusdata/citus/issues/3996
(cherry picked from commit ab01571c9e)
2020-07-21 11:01:48 +03:00
Philip Dubé 1d54b8f301 ruleutils: use get_rtable_name for deparsing resultRelation
(cherry picked from commit 444472ffc6)
2020-07-21 11:01:48 +03:00
Hadi Moshayedi 5e648e1a78 Fix task->fetchedExplainAnalyzePlan memory issue.
(cherry picked from commit 23fa421639)
2020-07-21 11:01:48 +03:00
Sait Talha Nisanci fc711af85b Fix explain subplan duration
(cherry picked from commit 4d217819ff)
2020-07-21 11:01:48 +03:00
Hanefi Önaldı 21ca434bef
Accept list of values in a supported ALTER ROLE .. SET statement
Some GUCs support a list of values, which is indicated by the GUC_LIST_INPUT flag.

When an ALTER ROLE .. SET statement is executed, the new configuration
default for the affected users and databases is stored in the
setconfig(text[]) column of a pg_db_role_setting record.

If a GUC that supports a list of values is used in an ALTER ROLE .. SET
statement, we need to split the text into items delimited by commas.
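
A minimal illustration of that splitting in plain C (strtok here is just a stand-in; the actual implementation uses PostgreSQL's internals):

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* a list-valued setting as it might appear in ALTER ROLE .. SET */
	char setting[] = "schema1, schema2, public";

	for (char *item = strtok(setting, ","); item != NULL;
		 item = strtok(NULL, ","))
	{
		while (*item == ' ')
		{
			item++; /* skip the whitespace that follows each comma */
		}
		printf("item: %s\n", item);
	}

	return 0;
}
```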

(cherry picked from commit e534dbae4a)
2020-07-21 04:12:39 +03:00
Onur Tirtir 61ab7006d0
Don't run check-merge-to-enterprise for release branches
(cherry picked from commit 1c6439d1af)
2020-07-17 12:54:23 +03:00
Hanefi Önaldı 3de2d2868d
Introduce new make targets for downgrade scripts
Here are the updated make targets:
- install: install everything except downgrade scripts.
- install-downgrades: build and install only the downgrade migration scripts.
- install-all: install everything along with the downgrade migration scripts.

Conflicts:
  src/backend/distributed/Makefile
  src/backend/distributed/sql/downgrades/citus--9.5-1--9.4-1.sql
    - file does not exist on release branch yet, only on master

(cherry picked from commit 315b323d47)
2020-07-17 12:44:16 +03:00
Marco Slot 77b4534c72 Prevent integer overflow in FindShardIntervalIndex 2020-07-16 14:58:53 +02:00
Onder Kalaci 4b493f088b Fix default value of EnableBinaryProtocol
(cherry picked from commit aa8a2866f3)
2020-07-02 14:29:11 +02:00
Onur Tirtir 06c878b348 Bump Citus version to 9.4.0 2020-07-01 11:01:59 +03:00
Hanefi Onaldi 8913d63ae2
Merge pull request #3927 from citusdata/downgrade-paths 2020-07-01 10:48:50 +03:00
Hanefi Önaldı ca2ececb3b
Downgrade path from 9.4 to 9.3 to 9.2 2020-07-01 10:38:11 +03:00
Hadi Moshayedi dd5277418f
Merge pull request #3961 from citusdata/fix/constant-pushdown
Don't push expressions to workers when aggregating without GROUP BY.
2020-06-30 14:06:15 -07:00
Sait Talha Nisanci e5a21f07cb test aggregates with expressions 2020-06-30 11:41:16 -07:00
Marco Slot eeffbde8bd Fix pushdown of constants in aggregate queries 2020-06-30 11:41:16 -07:00
Jelte Fennema 392c5e2c34
Fix wrong cancellation message about distributed deadlocks (#3956) 2020-06-30 14:57:46 +02:00
Marco Slot 634d6cf9d7
Improve performance of metadata cache (#3924)
#3866 removed the shard ID hash in metadata_cache.c to simplify cache management,
but we observed a significant performance regression. In our benchmarks it had
been masked by the performance improvement from #3654, but #3654 only applies
to specific workloads.

This PR brings back the shard ID cache as it existed before #3866, with some
extra measures to handle invalidation. When we load a table entry, we overwrite
the ShardIdCacheEntry->tableEntry pointers for all the shards in that table.
However, it's possible that the table no longer contains an old shard ID, or
that the table entry is never reloaded, which would leave a dangling pointer
once the table entry is freed. To handle that case, when a table entry is freed
(at the end of the transaction or on any call to
CitusTableCacheFlushInvalidatedEntries), we remove all shard ID cache entries
that point exactly to that table entry.
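
A simplified sketch of that eviction step, with made-up struct layouts (the real cache is a hash table in metadata_cache.c):

```c
#include <stddef.h>

/* made-up, minimal versions of the two cache entry types */
typedef struct TableCacheEntry
{
	int relationId;
} TableCacheEntry;

typedef struct ShardIdCacheEntry
{
	long shardId;
	TableCacheEntry *tableEntry; /* would dangle once the entry is freed */
} ShardIdCacheEntry;

/* Before freeing a table entry, evict every shard ID cache entry that
 * still points to it, so no dangling pointer survives the free. */
static void
RemoveStaleShardIdCacheEntries(ShardIdCacheEntry *entries, size_t entryCount,
                               const TableCacheEntry *freedEntry)
{
	for (size_t i = 0; i < entryCount; i++)
	{
		if (entries[i].tableEntry == freedEntry)
		{
			entries[i].tableEntry = NULL; /* the real code deletes the entry */
		}
	}
}
```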

Co-authored-by: SaitTalhaNisanci <s.talhanisanci@gmail.com>
Co-authored-by: Marco Slot <marco.slot@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-30 12:10:10 +02:00
Jelte Fennema 02fa942be1
Fix assertion error when rolling back to savepoint (#3868)
It was possible to get an assertion error if a DML command that had opened a
connection was cancelled and "ROLLBACK TO SAVEPOINT" was then used to continue
the transaction. The reason was that cancelling the transaction might leave the
`claimedExclusively` flag on for (some of) its connections.

This caused an assertion failure because `CanUseExistingConnection` would
return false and a new connection would be opened, resulting in two connections
doing DML for the same placement, which is disallowed. That this situation
caused an assertion failure instead of an error means that without asserts it
could possibly result in visibility bugs, similar to the ones described in
https://github.com/citusdata/citus/issues/3867
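
A simplified sketch of the flag's role, with stand-in types (the real `CanUseExistingConnection` has a different signature, and the remedy shown is only illustrative):

```c
#include <stdbool.h>

/* stand-in for Citus's connection struct */
typedef struct MultiConnection
{
	bool claimedExclusively;
} MultiConnection;

/* A claimed connection cannot be reused, so a leftover flag forces a
 * second connection to the same placement, tripping the assertion. */
static bool
CanUseExistingConnection(const MultiConnection *connection)
{
	return !connection->claimedExclusively;
}

/* Illustrative remedy: clear the claims when rolling back to a
 * savepoint, so the continued transaction reuses its connections. */
static void
UnclaimConnections(MultiConnection *connections, int connectionCount)
{
	for (int i = 0; i < connectionCount; i++)
	{
		connections[i].claimedExclusively = false;
	}
}
```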
2020-06-30 11:31:46 +02:00
SaitTalhaNisanci e28683a025
Upgrade codecov orb in circleci (#3945)
The only reason for this upgrade is to see if it fixes codecov pushing the
coverage many times to PRs, which is cluttering the PRs.

It is possible that the repeated pushing is related to codecov internals, so
upgrading may help.
2020-06-30 11:33:21 +03:00
Hadi Moshayedi d022f80340
Merge pull request #3943 from citusdata/fix_explain_2
Report correct INSERT/SELECT method in EXPLAIN
2020-06-26 08:21:50 -07:00
Hadi Moshayedi 4ed59d2db3 Move more from insert_select_executor to insert_select_planner 2020-06-26 08:08:26 -07:00
Hadi Moshayedi d34c21890f Rename CoordinatorInsertSelect... to NonPushableInsertSelect 2020-06-25 08:55:48 -07:00
Hadi Moshayedi cd25a27174 Fix crash caused by EXPLAIN EXECUTE INSERT ... SELECT 2020-06-25 08:55:48 -07:00
Hadi Moshayedi 4e8d79998e Save INSERT/SELECT method in DistributedPlan.
This is so we don't need to calculate it twice, in
insert_select_executor.c and multi_explain.c, which could
cause a discrepancy if an update in one of them is not
reflected in the other.
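
Sketched with hypothetical names, the idea is to decide once at plan time and store the result:

```c
/* hypothetical enum and field names, for illustration only */
typedef enum InsertSelectMethod
{
	INSERT_SELECT_PUSHDOWN,
	INSERT_SELECT_VIA_COORDINATOR
} InsertSelectMethod;

typedef struct DistributedPlan
{
	/* decided once by the planner; read by the executor and EXPLAIN alike */
	InsertSelectMethod insertSelectMethod;
} DistributedPlan;
```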
2020-06-25 08:55:48 -07:00
Jelte Fennema 64506143e4
Replace flaky repartition analyze test with a non flaky one (#3950)
The flaky test was introduced in #3941. This removes that flaky test and adds
a new one that fails in the same manner when the fix in #3941 is removed.

An example of a random failure can be found here:
https://app.circleci.com/pipelines/github/citusdata/citus/9558/workflows/de76e7a5-6558-46c9-97e7-8b1dae1f173b/jobs/135876/steps
2020-06-25 15:19:15 +02:00
SaitTalhaNisanci 50e115fe3a
test task tracker repartition with replication >1 (#3944) 2020-06-24 14:54:20 +03:00
SaitTalhaNisanci f458d1fd1c
Fix/task execution (#3941)
* Don't set TaskExecution with the adaptive executor

The adaptive executor uses a utility method from the task tracker for
repartition joins; however, the adaptive executor doesn't need taskExecution,
which is only used by the task tracker. This causes a problem when EXPLAIN
ANALYZE is used, because what taskExecution points to might be arbitrary.

We solve this by not setting taskExecution from the adaptive executor, so it
stays NULL as set by CreateTask.

* use same memory context as task for taskExecution

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2020-06-24 12:10:00 +03:00
Philip Dubé ac3c646ed5
Merge pull request #3942 from citusdata/fix-default-func-param-evaluation
citus_evaluate_expression: call expand_function_arguments beforehand to avoid segfaulting on implicit parameters
2020-06-23 18:37:40 +00:00
Philip Dubé cd0b2ad5b5 citus_evaluate_expression: call expand_function_arguments beforehand to avoid segfaulting on implicit parameters 2020-06-23 18:06:46 +00:00
Jelte Fennema a98226842d
Use rename to make sure no files are inserted while deleting (#3912)
As suggested by @marcocitus in https://github.com/citusdata/citus/pull/3911#issuecomment-643978531, there was
a regression in #3893: if another backend wrote a file during deletion of the
intermediate results directory, that file would not necessarily be deleted.

The approach used in `CitusRemoveDirectory` is to retry recursive removal of
the directory if it failed. That does not work here, because a file that cannot
be removed for other reasons (e.g. `EPERM`) no longer throws an error, so we
would end up in an infinite removal loop. Instead, I now `rename` the directory
before removing it, so other backends will not write files to it anymore.
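
A minimal sketch of the rename-then-remove pattern on a POSIX system, with illustrative paths (rename(2) is atomic, so once it succeeds no backend can add files under the old path):

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	/* illustrative paths; the real directories live under the data dir */
	const char *resultsDir = "intermediate_results";
	const char *trashDir = "intermediate_results.removed";

	if (rename(resultsDir, trashDir) != 0)
	{
		perror("rename");
		return EXIT_FAILURE;
	}

	/* The renamed directory can now be deleted at leisure; concurrent
	 * backends recreate the original path instead of writing into it. */
	return EXIT_SUCCESS;
}
```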
2020-06-23 10:38:44 +02:00
Hanefi Onaldi 0e0695481c
Merge pull request #3935 from citusdata/disallow-long-changelog 2020-06-22 23:55:50 +03:00
Hanefi Önaldı e93c47f003
Fix long changelog items 2020-06-22 23:45:47 +03:00
Hanefi Önaldı e61ced53e3
Disallow long changelog entries 2020-06-22 23:45:46 +03:00
Önder Kalacı f41e1b1a60
Merge pull request #3923 from citusdata/assert_order
Sort WorkerPool in executions
2020-06-22 18:27:54 +02:00
Onder Kalaci 88c473e007 Sort WorkerPool in executions
We sort the workerList because adaptive connection management
(e.g., OPTIONAL_CONNECTION) requires concurrent executions
to wait for connections in the same order, to prevent
starvation. If we don't sort, we might end up with:
      Execution 1: Get connection for worker 1, wait for worker 2
      Execution 2: Get connection for worker 2, wait for worker 1

and neither could proceed. Instead, we enforce that every execution
establishes the required connections to workers in the same order.
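
A sketch of that ordering with illustrative struct fields: sort the pools by host name and port before any connections are established, so every execution claims workers in the same sequence:

```c
#include <stdlib.h>
#include <string.h>

/* illustrative subset of a worker pool's identity */
typedef struct WorkerPool
{
	char nodeName[64];
	int nodePort;
} WorkerPool;

static int
CompareWorkerPools(const void *leftElement, const void *rightElement)
{
	const WorkerPool *left = (const WorkerPool *) leftElement;
	const WorkerPool *right = (const WorkerPool *) rightElement;

	int nameCompare = strcmp(left->nodeName, right->nodeName);
	if (nameCompare != 0)
	{
		return nameCompare;
	}
	return left->nodePort - right->nodePort;
}

/* called before establishing connections, so concurrent executions
 * always wait on the same worker first */
static void
SortWorkerPools(WorkerPool *pools, size_t poolCount)
{
	qsort(pools, poolCount, sizeof(WorkerPool), CompareWorkerPools);
}
```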
2020-06-22 16:39:27 +02:00
Onur Tirtir fb46ef1d17
Merge pull request #3930 from citusdata/update-cl-0622
Update CHANGELOG for 9.2.6 & 9.3.2
2020-06-22 16:23:05 +03:00
Onur Tirtir d41ad47579 Update CHANGELOG for 9.3.2 2020-06-22 14:20:16 +03:00
Onur Tirtir 4a38685744 Update CHANGELOG for 9.2.6 2020-06-22 14:19:56 +03:00
Hanefi Onaldi ebd8de88d5
Merge pull request #3829 from citusdata/migrations-disallow-c-comment 2020-06-22 13:36:57 +03:00
Hanefi Önaldı 618453a2ba
Disallow C-style comments in migration files 2020-06-22 12:51:16 +03:00
Hanefi Önaldı 56285e6470
Use citus docker hub org 2020-06-22 12:51:16 +03:00
Jelte Fennema b3ec6fbe7a
Make check_enterprise_merge script stricter (#3918)
In the last week we've suddenly had two issues with merge conflicts to
enterprise. Because of this CI check, such conflicts actually block all
community PRs from being merged.

This PR tries to improve on the previous script by putting tougher
constraints on when a merge is allowed.

Previously the check would pass in two cases:
1. This PR can be merged without conflicts into `enterprise-master`
2. A branch exists with the same name as this PR on enterprise and it can be
   merged into `enterprise-master`.

The first case stays the same, but I've changed the second case to require the
following instead:
1. A branch exists on enterprise with the same name as this PR
2. **NEW: This branch contains the last commit of the community PR branch**
3. This branch can be merged into enterprise-master

This makes sure the enterprise branch is actually up to date and not forgotten about.

If we still get problems with this change, future improvements could be:
1. Check that the PR on enterprise passes CI
2. Check that the PR on enterprise has been approved
3. Require the enterprise PR branch to be merged before merging community.
2020-06-19 12:45:36 +02:00
SaitTalhaNisanci 3a789352b6
rename citus hammerdb branch prefix as citus_github_push (#3925)
When we use hammerdb jobs, the job creates a branch on the test
automation repo. Since that branch should be deleted, it gets a
`delete_me` prefix; however, since the result branch on
release-test-results takes the test automation branch name as a prefix, it
also gets the `delete_me` prefix, which seems a bit confusing.

This PR updates the prefix to citus_github_push
2020-06-18 21:11:58 +03:00
Onur Tirtir c61e84c14b
Merge pull request #3921 from citusdata/update-cl-0617
Update CHANGELOG for 9.2.5 & 9.3.1
2020-06-17 19:05:45 +03:00
Onur Tirtir 4640f90933 Update CHANGELOG for 9.3.1 2020-06-17 18:45:54 +03:00
Onur Tirtir 74f20149cd Update CHANGELOG for 9.2.5 2020-06-17 18:45:54 +03:00
Marco Slot 004e0e4617
Merge pull request #3919 from citusdata/fix/combine-query
Rename masterQuery to combineQuery
2020-06-17 16:12:13 +02:00
Marco Slot 2a3234ca26 Rename masterQuery to combineQuery 2020-06-17 14:14:37 +02:00
Jelte Fennema 0259815d3a
Fix EXPLAIN ANALYZE received data counter issues (#3917)
In #3901 the "Data received from worker(s)" sections were added to EXPLAIN
ANALYZE. After merging, @pykello posted some review comments. This addresses
those comments, as well as fixing a few other issues that I found while
addressing them. The things this does:

1. Fix `EXPLAIN ANALYZE EXECUTE p1` to not increase received data on every
   execution
2. Fix `EXPLAIN ANALYZE EXECUTE p1(1)` to not always return 0 bytes as
   received data.
3. Move `EXPLAIN ANALYZE` specific logic to `multi_explain.c` from
   `adaptive_executor.c`
4. Change the naming of the new explain sections to `Tuple data received from
   node(s)`. Firstly, because a task can reference the coordinator too, so
   "worker(s)" was incorrect. Secondly, to indicate that this is tuple data
   and not all network traffic that was performed.
5. Rename `totalReceivedData` in our codebase to `totalReceivedTupleData` to
   make it clearer that it's a tuple data counter, not all network traffic.
6. Actually add `binary_protocol` test to `multi_schedule` (woops)
7. Fix a randomly failing test in `local_shard_execution.sql`.
2020-06-17 11:33:38 +02:00