In our testing infrastructure, even though we pin the postgres version, auxiliary libraries can pull in newer versions. This is, for example, the case for libpq, which now uses the libpq libraries from 14beta3.
Many of the changes in this PR are due to these libpq changes.
We also changed the citus version that is used as a base for the citus upgrades, from 10.0 to 10.1. This causes columnar to enforce some extra limits on the settings, which conflicted with our upgrade tests.
The changes in the failure tests are likewise due to the libpq changes.
There are also many changes in the isolation test outputs, hence we
updated all of them.
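A quick way to see which libpq actually ends up in the test environment (a minimal sketch; assumes a Linux image with `psql` on the PATH):

```bash
# Which libpq shared object does psql load at runtime?
ldd "$(command -v psql)" | grep libpq
# psql reports its own (client) version, which reflects the libpq build.
psql --version
```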
Co-authored-by: Nils Dijk <nils@citusdata.com>
DESCRIPTION: Add tests to verify crash recovery for columnar tables
Based on the Postgres TAP tooling, we add a new test suite to the array of test suites for citus. It is modelled after `src/test/recovery` in the postgres project and takes the same place in our repository. It uses the perl modules defined in the postgres project to control the postgres nodes.
The tests we add here focus on crash recovery. Our follower tests should cover the streaming replication behaviour.
It is hooked into our CI for both postgres 12 and postgres 13. We omit the recovery tests for postgres 11, as it does not have support for the columnar table access method.
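A minimal sketch of running the suite locally, assuming it follows the same make conventions as `src/test/recovery` in postgres (the exact target may differ):

```bash
# Run the prove-based TAP tests, as in the postgres project.
# PROVE_FLAGS is the standard knob in the postgres TAP makefiles.
make -C src/test/recovery check PROVE_FLAGS="--verbose"
```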
Small change that will upload lines hit during upgrades to codecov.
It came to our attention in #4354 that we were not capturing coverage during upgrades.
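For illustration, the upload step for an upgrade job might look like this sketch using the codecov bash uploader (the `upgrade` flag name is an assumption):

```bash
# Push the coverage gathered during the upgrade run to codecov,
# tagged with a flag so it is distinguishable from regular runs.
bash <(curl -s https://codecov.io/bash) -F upgrade  # flag name is illustrative
```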
* check compilation of enterprise job
* test that enterprise merge job fails with compilation error
* Revert "test that enterprise merge job fails with compilation error"
This reverts commit 0eaccd58c207a4c15365186017bf47601cc95552.
* Update readme and use citus extbuilder:13beta3
* Update and separate test images
The build image used to be a single image containing pg11, pg12 and
pg13. It is now split so that each pg major version can be built
independently.
Tags carry the full postgres version so that we can tell which version
we use just by looking at the tag. For example, exttester:11.9 means we
are using pg 11.9 (see the sketch below).
pg11 is updated from 11.5 to 11.9.
pg12 is updated from 12rc to 12.4.
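For instance (a sketch; the repository name is illustrative), fetching the image for a given postgres version is now explicit:

```bash
# The tag is the full postgres version the image ships.
docker pull citus/exttester:11.9   # pg 11.9
docker pull citus/exttester:12.4   # pg 12.4
```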
* Ignore memory usage in pg13 explain
* Use citus instead of personal repo
Since we don't have any limitation on parallelism now, it makes sense to
isolate each test schedule so that:
- we can use more parallelism
- we will wait less on retries, because if a job with multiple schedules
fails, we would need to rerun all of them.
* use adaptive executor even if task-tracker is set
* Update check-multi-mx tests for adaptive executor
Basically, repartition joins are enabled where necessary. For parallel
tests, the max adaptive executor pool size is decreased to 2; otherwise we
would get a "too many clients" error.
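For reference, a minimal sketch of how that cap can be applied (the table name is illustrative; `citus.max_adaptive_executor_pool_size` is the GUC the description refers to, to the best of our knowledge):

```bash
# Limit the adaptive executor to 2 connections per worker for this
# session, then run a parallel query; the table name is illustrative.
psql -c "SET citus.max_adaptive_executor_pool_size TO 2;
         SELECT count(*) FROM test_table;"
```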
* Update limit_intermediate_size test
It seems that when we use the adaptive executor instead of the task
tracker, we exceed the intermediate result size less in the test. We
therefore updated the tests accordingly.
* Update multi_router_planner
It seems that there is one problem with multi_router_planner when we use
the adaptive executor; we should fix the following error:
+ERROR: relation "authors_range_840010" does not exist
+CONTEXT: while executing command on localhost:57637
* update repartition join tests for check-multi
* update isolation tests for repartitioning
* Error out if shard_replication_factor > 1 with repartitioning
As we are removing the task tracker, we cannot switch to it if
shard_replication_factor > 1. In that case, we simply error out.
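A sketch of the new behaviour (assuming `t1` and `t2` are distributed tables created with replication factor 2; names and the exact error text are illustrative):

```bash
# A repartition join on replicated tables now errors out instead of
# silently falling back to the task tracker; names are illustrative.
psql -c "SELECT * FROM t1 JOIN t2 ON (t1.a = t2.b);"
# expected: ERROR complaining that repartitioning is not supported
#           with shard_replication_factor > 1
```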
* Remove MULTI_EXECUTOR_TASK_TRACKER
* Remove multi_task_tracker_executor
Some utility methods are moved to task_execution_utils.c.
* Remove task tracker protocol methods
* Remove task_tracker.c methods
* remove unused methods from multi_server_executor
* fix style
* remove task tracker specific tests from worker_schedule
* comment out task tracker udf calls in tests
We were using task tracker udfs to test permissions in
multi_multiuser.sql. We should find some other way to test them, and then
we should remove the commented-out task tracker calls.
* remove task tracker test from follower schedule
* remove task tracker tests from multi mx schedule
* Remove task-tracker specific functions from worker functions
* remove multi task tracker extra schedule
* Remove unused methods from multi physical planner
* remove task_executor_type related things in tests
* remove LoadTuplesIntoTupleStore
* Do initial cleanup for repartition leftovers
During startup, the task tracker would call TrackerCleanupJobDirectories
and TrackerCleanupJobSchemas to clean up leftover directories and job
schemas. With the adaptive executor, it is possible to leak these things
while doing repartitions as well. We don't retry cleanups, so it is
possible to have leftovers in case of errors.
TrackerCleanupJobDirectories is renamed to
RepartitionCleanupJobDirectories since it is repartition specific now.
TrackerCleanupJobSchemas cannot be used currently because it is task
tracker specific; in any case, that function is currently a no-op.
We should add cleaning up intermediate schemas to the DoInitialCleanup
method when that problem is solved (we might want to solve it in this PR
as well).
* Revert "remove task tracker tests from multi mx schedule"
This reverts commit 03ecc0a681.
* update multi mx repartition parallel tests
* not error with task_tracker_conninfo_cache_invalidate
* not run 4 repartition queries in parallel
It seems that when we run 4 repartition queries in parallel we get a
"too many clients" error on CI, even though we don't get it locally. Our
guess is that this is because we open/close many connections without
doing much work, and postgres closes connections with some delay. Hence
even though the connections are removed from pg_stat_activity, they
might still not be closed. If this assumption is correct, it is unlikely
to happen in practice because:
- There is some network latency in clusters, so this leaves some time
for connections to close.
- Repartition joins return some data, and that also leaves some time for
connections to be fully closed.
As we don't get this error locally, we currently assume that it is
not a bug. Ideally this wouldn't happen once we get rid of the
task-tracker repartition methods, because they don't do any pruning and
might be opening more connections than necessary.
If this still gives us the "too many clients" error, we can try to
increase max_connections in our test suite (which is 100 by default).
There are also different places where this error is raised in postgres,
but after adding some backtraces it seems that we get this one from
ProcessStartupPacket. The backtraces can be found in this link:
https://circleci.com/gh/citusdata/citus/138702
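If raising the limit becomes necessary, a sketch for a single test node (`$PGDATA` stands for that node's data directory):

```bash
# Bump the connection limit on a test node and restart it to apply.
echo "max_connections = 200" >> "$PGDATA/postgresql.conf"
pg_ctl -D "$PGDATA" restart
```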
* Set distributePlan->relationIdList when it is needed
It seems that we were setting distributedPlan->relationIdList after
JobExecutorType was called, which chooses task-tracker if the
replication factor is > 1 and there is a repartition query. However,
JobExecutorType uses relationIdList to decide whether the query has a
repartition query, and since the list was not set yet, it would always
think there is no repartition query and would choose the adaptive
executor when it should choose the task-tracker.
* use adaptive executor even with shard_replication_factor > 1
It seems that we were already using adaptive executor when
replication_factor > 1. So this commit removes the check.
* remove multi_resowner.c and deprecate some settings
* remove TaskExecution related leftovers
* change deprecated API error message
* not recursively plan single relation repartition subquery
* recursively plan single relation repartition subquery
* test deprecated task tracker functions
* fix overlapping shard intervals in range-distributed test
* fix error message for citus_metadata_container
* drop task-tracker deprecated functions
* put the implementation back to worker_cleanup_job_schema_cache since citus cloud uses it
* drop some functions, add downgrade script
Some deprecated functions are dropped.
A downgrade script is added.
Some gucs are deprecated.
A new guc for the repartition join bucket size is added (see the sketch below).
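A sketch of tuning that bucket-size guc (the guc name below is an assumption based on the description, not confirmed by this PR):

```bash
# Control how many buckets a repartition join creates per node;
# the GUC name is an assumption, check the citus.* settings for the real one.
psql -c "SET citus.repartition_join_bucket_count_per_node TO 8;"
```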
* order by a test to fix flakiness
I recently forgot to add tests to a schedule in two of my PRs. One of
these was caught by review, but the other one was not. This adds a
script that causes CI to ensure that each test in the repo is included in
at least one schedule.
Three tests were found that were not currently part of any schedule. This
PR adds those three tests to a schedule as well, and it also fixes some
small issues with these tests.
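A minimal sketch of such a check, assuming the regression tests live in `src/test/regress/sql` and the schedules are the `*_schedule` files next to them (both paths are assumptions):

```bash
#!/bin/bash
# Fail CI if any test file is not referenced by at least one schedule.
set -euo pipefail
cd src/test/regress
for test_file in sql/*.sql; do
    test_name=$(basename "$test_file" .sql)
    if ! grep -qw "$test_name" *_schedule; then
        echo "ERROR: test '$test_name' is not part of any schedule" >&2
        exit 1
    fi
done
```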
We keep accumulating more and more scripts to flag issues in CI. This is
good, but we are currently missing consistent documentation for them.
This commit moves all these scripts to the `ci` directory and adds some
documentation for all of them in the README. It also makes sure that the
last line of output of a failed script points to this documentation.
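A sketch of how each script can make its last line of output point at the docs (the trap approach is illustrative):

```bash
# Put this at the top of every ci/ script: on failure, the final
# output line tells the reader where the documentation lives.
set -euo pipefail
trap 'echo "See ci/README.md for a description of this check." >&2' ERR
```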
The only reason for this upgrade is to see if it will fix codecov
pushing the coverage to PRs many times, which is cluttering the PRs.
It is possible that the duplicate pushes are related to codecov
internals, so upgrading can help.
With this commit:
You can trigger two types of hammerdb benchmark jobs:
- ch_benchmark (analytical and transactional queries)
- tpcc_benchmark (only transactional queries)
Your branch will be run against the `master` branch.
In order to trigger the jobs, prepend `ch_benchmark/` or `tpcc_benchmark/` to your branch name and push it.
For example, if you are working on a branch named `improve/adaptive_executor` and want to trigger a tpcc benchmark, you can do the following:
```bash
git checkout improve/adaptive_executor
git checkout -b tpcc_benchmark/improve/adaptive_executor
git push origin tpcc_benchmark/improve/adaptive_executor # the tpcc benchmark job will be triggered.
```
You will see the results in a branch in [https://github.com/citusdata/release-test-results](https://github.com/citusdata/release-test-results).
The branch name will be something like: `delete_me/citusbot_tpcc_benchmark_rg/<date>/<date>`.
The resource groups will be deleted automatically, but if the benchmark fails they won't be. (If you don't see the results after a reasonable time, the run might have failed; you can check the resource usage from the portal. If it is almost zero and you still don't see the results, the run probably failed.) In that case, you will need to delete the resource groups manually from the portal; the resource groups are `citusbot_ch_benchmark_rg` and `citusbot_tpcc_benchmark_rg`.
* add a job to check if merge to enterprise master would fail
Add a job to check if merge to enterprise master would fail.
The job does the following:
- It checks if there is already a branch with the same name on
enterprise; if so, it tries to merge that branch into enterprise master,
and the job fails if the merge fails.
- If the branch doesn't exist on enterprise, it tries to merge the
current branch into enterprise master, and it fails if there is any
conflict while merging.
The motivation is that if a branch on community would create a conflict
on enterprise-master, we won't be able to merge the PR on community
until we create a PR on enterprise that resolves the conflict. This way
we won't accumulate conflicts when merging to enterprise master, and the
author will be responsible for resolving the conflict while they still
have the most context, not after a month.
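A sketch of what the job does (remote URL and branch names are illustrative):

```bash
#!/bin/bash
# Would merging this community branch into enterprise master conflict?
set -euo pipefail
branch=$(git rev-parse --abbrev-ref HEAD)
git remote add enterprise git@github.com:citusdata/citus-enterprise.git 2>/dev/null || true
git fetch enterprise enterprise-master
git checkout -B merge-check FETCH_HEAD
# Prefer a same-named branch on enterprise if it exists; otherwise use
# the community branch itself as the merge source.
if git ls-remote --exit-code --heads enterprise "$branch" >/dev/null; then
    git fetch enterprise "$branch"
    git merge --no-ff --no-commit FETCH_HEAD     # job fails here on conflict
else
    git merge --no-ff --no-commit "$branch"      # job fails here on conflict
fi
```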
* Improve test suite to be able to easily run locally
* Add documentation on how to resolve conflicts to enterprise master
* Improve enterprise merge script
* Improve merge conflict job README
* Improve merge conflict job README
* Improve merge conflict job README
* Improve merge conflict job README
Co-authored-by: Nils Dijk <nils@citusdata.com>
We sometimes get "no output" timeouts in our circleci jobs. There is no
point in waiting for 10 minutes to hit those timeouts. Ideally we should
see some output; this will also prevent adding a test that takes longer
than 2 minutes to run.
Just because we'll remove the executors soon, and it doesn't
make sense to keep them running.
I'll remove the test files with a follow-up commit, but it seems
safe to remove them from circleci now.
* Add initial citus upgrade test
* Add restart databases and run tests in all nodes
* Add output for citus versions 8.0 8.1 8.2 and 8.3
* Add verify step for citus upgrade
* Add target for citus upgrade test in makefile
* Add check citus upgrade job
* Fix installation file path and add missing tar
* Run citus upgrade for v8.0 v8.1 v8.2 and v8.3
* Create upgrade_common file and rename upgrade check
* Add pg version to citus upgrade test
* Test with postgres 10 and 11 in citus upgrade tests
* Add readme for citus upgrade test
* Add some basic tests to citus upgrade tests
* Add citus upgrade mixed mode test
* Remove citus artifacts before installing another one
* Refactor citus upgrade test according to reviews
* quick and dirty rewrite of citus upgrade tests to support local execution.
I think we need to change the makefile in such a way that the tar files can be injected from the circleci config file.
Also, I removed some of the citus version checks you had, so that we don't have to pass the version in separately from the pre tar file. I am not super happy with it, but two flags that need to be kept in sync are also not desirable. Instead, I print out the citus version that is installed per node. This will not cause a failure if the versions are not what one would expect, but it lets us verify we are running the expected version.
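For example, verifying the installed version per node can be as simple as the following sketch (the ports are illustrative for a local test cluster):

```bash
# Print the citus version each node reports; ports are illustrative.
for port in 9700 9701 9702; do
    psql -p "$port" -c "SELECT citus_version();"
done
```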
* use latest citusupgradetester in circleci
* update readme and use common alias for upgrade_common import
* Add PG12 test outputs
* Add jobs to run tests with pg 12
* use POSIX collate for compatibility between pg10/pg11/pg12
* do not override the new default value when running vanilla tests
* fix 2 problems with pg12 tests
* update pg12 images with pg12 rc1
* remove pg10 jobs
* Revert "Add PG12 test outputs"
This reverts commit f3545b92ef.
* change images to use latest instead of dev
* add missing coverage flags