In CI, our failure_connection_establishment test sometimes failed randomly
with the following error:
```diff
-- verify a connection attempt was made to the intercepted node, this would have cause the
-- connection to have been delayed and thus caused a timeout
SELECT * FROM citus.dump_network_traffic() WHERE conn=0;
conn | source | message
------+--------+---------
- 0 | coordinator | [initial message]
-(1 row)
+(0 rows)
SELECT citus.mitmproxy('conn.allow()');
```
Source: https://app.circleci.com/pipelines/github/citusdata/citus/26318/workflows/d3354024-9a67-4b01-9416-5cf79aec6bd8/jobs/745558
The way I fixed this was by removing the dump_network_traffic call. This
might sound simple, but doing it while still letting the test
serve its intended purpose required quite a few more changes.
This dump_network_traffic call was there because we didn't want to show
warnings for the queries above, since the exact warnings were not
reliable. The main reason they were not reliable was that we were
using round-robin task assignment: we ran the same query twice, so
that one of the two attempts would hit the node with the intercepted
connection. Instead of doing that, I'm now using the
"first-replica" task assignment policy and running the queries only once.
This works because the first placements (by placementid) for each of the
used tables are on the second node, so first-replica causes the first
connection to go there.
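Conceptually, the new setup looks roughly like the sketch below. This is a minimal illustration rather than the literal test change: the table name is a made-up placeholder, and the test itself combines this with the timeout and mitm settings described further down.
```sql
-- Minimal sketch, not the real test file (the table name is a placeholder):
-- make the first connection deterministically go to the intercepted worker,
-- instead of relying on round-robin hitting it on one of two attempts.
SET citus.task_assignment_policy TO 'first-replica';

-- run the timeout-sensitive query exactly once; its first placement lives on
-- the intercepted node, so that connection is always the one being tested
SELECT count(*) FROM some_distributed_table;

RESET citus.task_assignment_policy;
```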
This solved most of the flakiness, but while confirming that the
flakiness was fixed I found some additional errors:
```diff
-- show that INSERT failed
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
-----------
(1 row)
SELECT count(*) FROM single_replicatated WHERE key = 100;
- count
----------------------------------------------------------------------
- 0
-(1 row)
-
+ERROR: could not establish any connections to the node localhost:9060 after 400 ms
RESET client_min_messages;
```
Source: https://app.circleci.com/pipelines/github/citusdata/citus/26321/workflows/fd5f4622-400c-465e-8d82-83f5f55a87ec/jobs/745666
I addressed this with a combination of two things:
1. Only change citus.node_connection_timeout for the queries whose timeout
   behaviour we want to test. When those queries are done, I reset the value
   to the default again (see the sketch right after this list).
2. Change our mitm framework to only delay the initial connection packet
   instead of all packets. I think sometimes a follow-on packet of a previous
   connection attempt was causing the next connection attempt to be delayed,
   even if `conn.allow()` was already called. For our tests we only care about
   connection timeouts, so there's no reason to delay any packets other than
   the initial connection packet.
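Put together, the relevant part of the test now follows roughly the pattern below. This is a hedged sketch: `conn.connect_delay()` stands in for whatever the updated mitm framework calls its "delay only the initial connection packet" command, and the concrete millisecond values are illustrative.
```sql
-- Sketch only: conn.connect_delay() is a placeholder name for the new
-- "delay just the initial connection packet" command; values are illustrative.
SELECT citus.mitmproxy('conn.connect_delay(1400)');

-- scope the short timeout to the query under test ...
SET citus.node_connection_timeout TO 400;
SELECT count(*) FROM single_replicatated WHERE key = 100;  -- expected to time out

-- ... and reset everything right after, so later queries connect normally
RESET citus.node_connection_timeout;
SELECT citus.mitmproxy('conn.allow()');
```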
Then there was some final flakiness in the exact error message that was given:
```diff
-- tests for connectivity checks
SELECT name FROM r1 WHERE id = 2;
WARNING: could not establish any connections to the node localhost:9060 after 900 ms
+WARNING: connection to the remote node localhost:9060 failed with the following error:
name
------
bar
(1 row)
```
Source: https://app.circleci.com/pipelines/github/citusdata/citus/26338/workflows/9610941c-4d01-4f62-84dc-b91abc56c252/jobs/746467
I don't have a good explanation for this slight change in the error message, but
given that it is missing the actual error text, I expect this is related
to some small difference in timing: e.g. the server responding to the connection
attempt right after the coordinator determined that the connection timed out.
To solve this last bit of flakiness I increased the connection timeouts and made the
difference between the timeout and the delay a bit bigger. With these tweaks
I wasn't able to reproduce this error on CI anymore.
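In other words, the injected delay now exceeds the connection timeout by a comfortable margin, so the coordinator has long given up before the delayed response can arrive. Again a hedged sketch with the same placeholder command name and illustrative values:
```sql
-- keep a wide gap between the injected delay and the timeout so the
-- "could not establish any connections ... after 900 ms" warning is deterministic
SELECT citus.mitmproxy('conn.connect_delay(1400)');  -- placeholder name, illustrative value
SET citus.node_connection_timeout TO 900;            -- fires well before the delayed reply
SELECT name FROM r1 WHERE id = 2;                    -- warns, then succeeds via the other placement
RESET citus.node_connection_timeout;
SELECT citus.mitmproxy('conn.allow()');
```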
Finally, I made most of the same changes to failure_failover_to_local_execution,
since it was using the `conn.delay()` mitm method too. The only change that
I left out was the timing increase, since it might not be strictly necessary and it
increases the time it takes to run the test. If this test ever becomes flaky, the first
thing we should try is increasing its timeouts.
There is a known vulnerability in the mitmproxy version we are using.
It would be hard to exploit anything with regard to the artifacts we ship, as mitmproxy is only used in our test suite. Still, it's good hygiene to _not_ use software with known vulnerabilities.
This PR updates the versions of Python, mitmproxy and the crypto libraries we use.
The latest version of mitmproxy available for Python 3.6 is not patched, hence the upgrade of Python.
For our CI images this cascades into upgrading Debian as well :)
Since we bake these versions into our CI images, those images need to be updated too.
Changes to the CI images: https://github.com/citusdata/the-process/pull/65
- mitmdump now listens on port 9060
- Add some logging to fluent.py, making issues like this easier to debug in the future
- Fail the tests if something is already running on the port mitmproxy tries to use
- check-failure now works with VPATH builds
- Lots of detail is in src/test/regress/mitmscripts/README
- Create a new target, `make check-failure`, which runs the tests
- Tell Travis how to install everything and run the tests