# How our testing works

We use the test tooling of postgres to run our tests. This tooling is very
simple but effective. The basics: it runs a series of `.sql` scripts, captures
their output, and stores it in `results/$sqlfilename.out`. It then compares the
actual output to the expected output with a simple `diff` command:

```bash
diff results/$sqlfilename.out expected/$sqlfilename.out
```
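For a single test file the flow is conceptually something like the sketch
below. This is only an illustration: the real runner is postgres' `pg_regress`
(driven by the Perl wrapper described at the end of this README), and
`my_test` is a hypothetical test name.

```bash
# conceptual sketch, not the actual implementation
psql -f sql/my_test.sql > results/my_test.out 2>&1   # run the script, capture all output
diff results/my_test.out expected/my_test.out        # an empty diff means the test passes
```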
## Schedules

Which sql scripts to run is defined in a schedule file, e.g. `multi_schedule`
or `multi_mx_schedule`.
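A schedule file is a plain-text list of `test:` lines in the pg_regress
schedule format. A hypothetical excerpt (the test names are made up for
illustration):

```
# lines starting with "#" are comments
# tests on the same "test:" line run in parallel; lines run in order
test: my_setup_test
test: my_test_a my_test_b
```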
## Makefile

In our `Makefile` we have rules to run the different types of test schedules.
You can run them from the root of the repository like so:

```bash
# e.g. the multi_schedule
make install -j9 && make -C src/test/regress/ check-multi
```

Take a look at the `Makefile` for a list of all the testing targets.
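One quick (hypothetical) way to skim the targets without reading the whole
file, assuming they are defined directly in that `Makefile`:

```bash
# list make rules that look like test targets; may miss generated ones
grep -E '^check-' src/test/regress/Makefile
```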
## Running a specific test

Often you want to run a specific test and don't want to run everything. You
can use one of the following commands to do so:

```bash
# If your test needs almost no setup you can use check-minimal
make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_utility_warnings'

# Often tests need some testing data; if you get missing table errors using
# check-minimal you should try check-base
make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='with_prepare'

# Sometimes this is still not enough and some other test needs to be run
# before the test you want to run. You can do so by adding it to EXTRA_TESTS too.
make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='add_coordinator coordinator_shouldhaveshards'
```
## Normalization

The output of tests is sadly not completely predictable. Still, we want to
compare the output of different runs and error out only when the important
things differ. We do this by not using the regular system `diff` to compare
files. Instead we use `src/test/regress/bin/diff`, which does the following
things (see the sketch after this list):

1. Change the `$sqlfilename.out` file by running it through `sed` using the
   `src/test/regress/bin/normalize.sed` file. This does stuff like replacing
   numbers that keep changing across runs with an `XXX` string, e.g. port
   numbers or transaction numbers.
2. Back up the original output to `$sqlfilename.out.unmodified` in case it's
   needed for debugging.
3. Compare the changed `results` and `expected` files with the system `diff`
   command.
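Put together, the steps amount to roughly the sketch below. The sed rule shown
is illustrative only (the real rules live in `normalize.sed`), and `my_test`
is a hypothetical test name.

```bash
# rough sketch of what src/test/regress/bin/diff conceptually does for one file
cp results/my_test.out results/my_test.out.unmodified      # 2. keep a pristine backup
sed -E 's/localhost:[0-9]+/localhost:xxxxx/g' \
    results/my_test.out.unmodified > results/my_test.out   # 1. normalize volatile values
diff results/my_test.out expected/my_test.out              # 3. plain system diff
```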
## Updating the expected test output

Sometimes you add a test to an existing file, or test output changes in a way
that's not bad (possibly even good, if support for new queries is added). In
those cases you want to update the expected test output.

The way to do this is very simple: run the test, then copy the new `.out` file
from the `results` directory to the `expected` directory, e.g.:

```bash
make install -j9 && make -C src/test/regress/ check-minimal EXTRA_TESTS='multi_utility_warnings'
cp src/test/regress/{results,expected}/multi_utility_warnings.out
```
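If many tests changed at once, a small loop can save some copying. This is a
hypothetical helper, and it accepts every change blindly, so review the
reported diffs first:

```bash
# copy every regenerated .out file over its expected counterpart
for f in src/test/regress/results/*.out; do
    cp "$f" "src/test/regress/expected/$(basename "$f")"
done
```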
## Adding a new test file

Adding a new test file is quite simple (see the walk-through after this list):

1. Write the SQL file in the `sql` directory
2. Add it to a schedule file, to make sure it's run in CI
3. Run the test
4. Check that the output is as expected
5. Copy the `.out` file from `results` to `expected`
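A hypothetical end-to-end example for a new test called `my_new_test` (the
test name and the choice of schedule are illustrative):

```bash
# 1. write the test
$EDITOR src/test/regress/sql/my_new_test.sql
# 2. add it to a schedule so CI runs it
echo 'test: my_new_test' >> src/test/regress/multi_schedule
# 3 + 4. run it and inspect the output it produced in results/
make install -j9 && make -C src/test/regress/ check-base EXTRA_TESTS='my_new_test'
# 5. accept the output once it looks right
cp src/test/regress/{results,expected}/my_new_test.out
```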
## Isolation testing

See `src/test/regress/spec/README.md`

## Upgrade testing

See `src/test/regress/citus_tests/upgrade/README.md`

## Failure testing

See `src/test/regress/mitmscripts/README.md`
## Perl test setup script

To automatically set up a citus cluster in tests we use our
`src/test/regress/pg_regress_multi.pl` script. This sets up a citus cluster and
then starts the standard postgres test tooling. You almost never have to change
this file.