Commit Graph

6797 Commits (b94424b160e6f7f3dd52b4576c5ea1633ffb6888)

Author SHA1 Message Date
naisila b94424b160
Revert "Test whether flaky citus_split_shard_by_split_points_failure"
This reverts commit 127ae71c235141e9e71eba21d52a47e99c882ed6.
2023-11-14 12:55:29 +03:00
naisila d75a05fbca
Revert "Test whether flaky citus_split_shard_by_split_points_deferred_drop"
This reverts commit 834c3d9c10c2d94348fd8a931ebb3c90c4ae1fa3.
2023-11-14 12:55:29 +03:00
naisila 9b249411d7
Revert "Test whether flaky citus_split_shard_by_split_points"
This reverts commit 76207e2f760aaf6d8a4b14d249d20691458d12ba.
2023-11-14 12:55:29 +03:00
naisila 0d8aaa7150
Revert "Test whether flaky citus_split_shard_columnar_partitioned"
This reverts commit 6c16793fcda5de528229695f802df3b594a640bd.
2023-11-14 12:55:28 +03:00
naisila 249382f181
Revert "Test whether flaky worker_split_copy_test"
This reverts commit b34d73545bc58690098aec35e1af2ce7ccedbb1b.
2023-11-14 12:55:28 +03:00
naisila 2456b6a129
Revert "Test whether flaky worker_split_text_copy_test"
This reverts commit a1074c2243a2c6f57fb4041ca3c2ae6770b9b48b.
2023-11-14 12:55:27 +03:00
naisila b28018ac22
Revert "Test whether flaky worker_split_binary_copy_test"
This reverts commit 99dcc93e0a0a00111e47d6c239fc754814c722c4.
2023-11-14 12:55:27 +03:00
naisila e8b4bcee2a
Fix style of run_test.py 2023-11-14 12:55:27 +03:00
naisila 114902453c
Test whether flaky split_shard 2023-11-14 12:55:26 +03:00
naisila d3401b26a3
Test whether flaky citus_split_shard_by_split_points_negative + cleanup
citus_split_shard_by_split_points_negative leaves something behind.
If I run the test two times in a row on my local machine,
I get the following error in the second run:

```diff
 -- UDF fails where source shard cannot be split further i.e min and max range is equal.
 -- Create a Shard where range cannot be split further
 SELECT isolate_tenant_to_new_shard('table_to_split', 1, shard_transfer_mode => 'block_writes');
- isolate_tenant_to_new_shard
----------------------------------------------------------------------
-                    49761305
-(1 row)
-
+ERROR:  could not open relation with OID 0
+CONTEXT:  while executing command on localhost:57637
 SELECT citus_split_shard_by_split_points(
        49761305,
        ARRAY['-1073741826'],
        ARRAY[:worker_1_node, :worker_2_node]);
-ERROR:  Cannot split shard id "49761305" as min/max range are equal: ('-1905060026', '-1905060026').
+ERROR:  could not find valid entry for shard 49761305
```

We need resource cleanup here, so I added that line.
2023-11-14 12:55:26 +03:00
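A minimal sketch of the kind of cleanup this refers to, assuming the standard Citus maintenance procedure is what gets invoked (the exact line added to the test is not shown above, so treat this as illustrative):

```sql
-- Hedged sketch: drop leftover split artifacts so a second run of the test
-- starts from a clean state. citus_cleanup_orphaned_resources() is the
-- maintenance procedure Citus ships; whether the test calls it directly or
-- via a wait helper is an assumption here.
CALL citus_cleanup_orphaned_resources();

-- Afterwards the second run should again be able to create the
-- single-value shard instead of failing with
-- "could not find valid entry for shard ...".
SELECT isolate_tenant_to_new_shard('table_to_split', 1, shard_transfer_mode => 'block_writes');
```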
naisila 58b2cff86f
Test whether flaky citus_split_shard_by_split_points_failure 2023-11-14 12:55:25 +03:00
naisila e90f59d2e4
Test whether flaky citus_split_shard_by_split_points_deferred_drop 2023-11-14 12:55:25 +03:00
naisila 114f624936
Test whether flaky citus_split_shard_by_split_points 2023-11-14 12:55:25 +03:00
naisila 4995e99975
Don't print node ids in multi_multiuser_auth and multi_poolinfo_usage tests 2023-11-14 12:55:24 +03:00
naisila d4efad30da
Test whether flaky citus_split_shard_columnar_partitioned 2023-11-14 12:55:24 +03:00
naisila b3af43b27f
Remove coordinator removal from add_coordinator test
Use remove_coordinator_from_metadata test where needed
2023-11-14 12:55:23 +03:00
naisila 220433ba5c
Test whether flaky worker_split_copy_test 2023-11-14 12:55:23 +03:00
naisila 81dbdd3231
Test whether flaky worker_split_text_copy_test 2023-11-14 12:55:22 +03:00
naisila b4958c4f13
Test whether flaky worker_split_binary_copy_test 2023-11-14 12:55:22 +03:00
naisila 75f92f7bc5
Add separate test to remove cluster management test nodes 2023-11-14 12:55:22 +03:00
naisila 10a11d79d3
citus_non_blocking_split_shard_cleanup, citus_non_blocking_split_shards and logical_replication ARE NOT FLAKY 2023-11-14 12:55:21 +03:00
naisila 1f68c7c8c5
Fix flaky multi_tenant_isolation_nonblocking 2023-11-14 12:55:21 +03:00
naisila b9d56f15fa
Try to get flaky multi_tenant_isolation_nonblocking/citus_non_blocking_split_shards 2023-11-14 12:55:20 +03:00
naisila e95a84a7a5
Try to get flaky result 2023-11-14 12:55:20 +03:00
naisila 1b923eb480
Try running cleanup function more than once 2023-11-14 12:55:20 +03:00
Naisila Puka cdef2d5224
Random tests refactoring (#7342)
While investigating replication slot leftovers
in PR https://github.com/citusdata/citus/pull/7338,
I ran into the following refactoring/cleanup
that can be done in our test suite:

- Add separate test to remove non default nodes
- Remove coordinator removal from `add_coordinator` test
  Use `remove_coordinator_from_metadata` test where needed
- Don't print node ids in `multi_multiuser_auth` and `multi_poolinfo_usage` tests
- Use `startswith` when checking for isolation or failure tests
- Add some dependencies accordingly in `run_test.py` for running flaky
test schedules
2023-11-14 12:49:15 +03:00
Naisila Puka e4ac3e6d9a
Bump PG versions to latest minors 14.10, 15.5, 16.1 (#7336)
Postgres got minor updates on Nov 9; this starts using the images with
the latest versions for our tests, namely 14.10, 15.5, and 16.1.
These minor updates were compatible with Citus.
Sister PR: https://github.com/citusdata/the-process/pull/152
2023-11-13 15:05:38 +03:00
Onur Tirtir 240313e286
Support role commands from any node (#7278)
DESCRIPTION: Adds support for issuing role management commands from worker nodes

It's unlikely to get into a distributed deadlock with role commands, so we
don't care much about that at the moment.
There were several attempts to reduce the chances of a deadlock, but we
haven't gotten any of them merged into the main branch yet; see:
#7325
#7016
#7009
2023-11-10 09:58:51 +00:00
Naisila Puka 57ff762c82
Fix VACUUM flakiness in multi_utilities (#7334)
When I run this test locally, the size of the table after the DELETE
command is around 58785792. Hence, I assume the diffs suggest that
the VACUUM had no effect. The current solution is to run the VACUUM
command three times instead of once.

Example diff:
https://github.com/citusdata/citus/actions/runs/6722231142/attempts/1#summary-18269870674
```diff
insert into local_vacuum_table select i from generate_series(1,1000000) i;
 delete from local_vacuum_table;
 VACUUM local_vacuum_table;
 SELECT CASE WHEN s BETWEEN 20000000 AND 25000000 THEN 22500000 ELSE s END
 FROM pg_total_relation_size('local_vacuum_table') s ;
     s     
 ----------
- 22500000
+ 58785792
 (1 row)
```
See more diff examples in the PR description
https://github.com/citusdata/citus/pull/7334
2023-11-09 21:00:24 +03:00
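A minimal sketch of the workaround, assuming the statements sit next to each other as in the diff above; repeating the VACUUM makes the subsequent size check stable:

```sql
-- Hedged sketch: run VACUUM several times so the relation-size check below no
-- longer depends on a single pass reclaiming all of the deleted space.
VACUUM local_vacuum_table;
VACUUM local_vacuum_table;
VACUUM local_vacuum_table;

-- Same normalized size check as in the diff above.
SELECT CASE WHEN s BETWEEN 20000000 AND 25000000 THEN 22500000 ELSE s END
FROM pg_total_relation_size('local_vacuum_table') s;
```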
dependabot[bot] c028d929b5 Bump werkzeug from 2.3.7 to 3.0.1 in /.devcontainer/src/test/regress
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.7 to 3.0.1.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.3.7...3.0.1)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-09 17:14:14 +01:00
dependabot[bot] d4663212f4 Bump werkzeug from 2.3.7 to 3.0.1 in /src/test/regress
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.7 to 3.0.1.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.3.7...3.0.1)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-09 17:14:14 +01:00
Nils Dijk 0dac63afc0
move pg_version_constants.h to toplevel include (#7335)
In preparation for sorting and grouping all includes, we wanted to move
this file to the top-level includes for good grouping/sorting.
2023-11-09 15:09:39 +00:00
Hanefi Onaldi 92228b279a
Add changelog entries for 12.1.1 (#7332)
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-11-09 14:19:28 +00:00
Naisila Puka 0dc41ee5a0
Fix flaky multi_mx_insert_select_repartition test (#7331)
https://github.com/citusdata/citus/actions/runs/6745019678/attempts/1#summary-18336188930
```diff
     insert into target_table SELECT a*2 FROM source_table RETURNING a;
-NOTICE:  executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_xxxxx_from_4213582_to_0','repartitioned_results_xxxxx_from_4213584_to_0']::text[],'localhost',57638) bytes
+NOTICE:  executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_3940758121873413_from_4213584_to_0','repartitioned_results_3940758121873413_from_4213582_to_0']::text[],'localhost',57638) bytes
```

The elements in the array passed to `fetch_intermediate_results` are the
same, but in the opposite order from what is expected.

To fix this flakiness, we can omit the `"SELECT bytes FROM
fetch_intermediate_results..."` line. From the surrounding logs, it is
already clear that the intermediate results have been fetched.
2023-11-08 15:15:33 +03:00
Onur Tirtir 444e6cb7d6
Remove useless variables (#7327)
To fix warnings observed when using different compiler versions.
2023-11-07 16:39:08 +03:00
cvbhjkl e535f53ce5
Fix typo in local_executor.c (#7324)
Fix a typo 'remaning' -> 'remaining' in local_executor.c
2023-11-03 12:14:11 +00:00
Onur Tirtir 21646ca1e9
Fix flaky isolation_get_all_active_transactions.spec test (#7323)
Fix the flaky test that results in the following diff by waiting, for up to
5 seconds, until the backend that we want to terminate has really terminated.

```diff
--- /__w/citus/citus/src/test/regress/expected/isolation_get_all_active_transactions.out.modified	2023-11-01 16:30:57.648749795 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_get_all_active_transactions.out.modified	2023-11-01 16:30:57.656749877 +0000
@@ -114,13 +114,13 @@
 --------------------
 t                   
 (1 row)
 
 step s3-show-activity: 
  SET ROLE postgres;
  select count(*) from get_all_active_transactions() where process_id IN (SELECT * FROM selected_pid);
 
 count
 -----
-    0
+    1
 (1 row)
```
2023-11-03 09:00:32 +01:00
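A hedged sketch of the wait described above; the real spec may use a different helper, but the idea is to poll pg_stat_activity for up to five seconds until the terminated backend is gone:

```sql
-- Hedged sketch: poll for up to ~5 seconds until the backend we asked to
-- terminate has actually left pg_stat_activity. "selected_pid" mirrors the
-- table used in the spec; the DO-block loop itself is an assumption.
DO $$
BEGIN
    FOR i IN 1 .. 50 LOOP
        EXIT WHEN NOT EXISTS (
            SELECT 1 FROM pg_stat_activity
            WHERE pid IN (SELECT * FROM selected_pid));
        PERFORM pg_sleep(0.1);
    END LOOP;
END;
$$;
```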
Onur Tirtir 5e2439a117
Make some more tests re-runable (#7322)
* multi_mx_create_table
* multi_mx_function_table_reference
* multi_mx_add_coordinator
* create_role_propagation
* metadata_sync_helpers
* text_search

https://github.com/citusdata/citus/pull/7278 requires this.
2023-11-02 18:32:56 +03:00
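Making a regression test re-runnable usually boils down to idempotent setup and full teardown; a hedged sketch of that pattern (the schema and object names here are illustrative, not the ones touched by this PR):

```sql
-- Hedged sketch of the usual "re-runnable test" pattern: drop whatever a
-- previous run may have left behind, recreate it, and tear everything down
-- again at the end so the next invocation starts clean. Names are made up.
SET client_min_messages TO WARNING;
DROP SCHEMA IF EXISTS rerunnable_test CASCADE;
CREATE SCHEMA rerunnable_test;
SET search_path TO rerunnable_test;

CREATE TABLE t (a int);
SELECT create_distributed_table('t', 'a');

-- ... test body ...

-- Teardown: leave no objects behind.
DROP SCHEMA rerunnable_test CASCADE;
RESET search_path;
```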
Jelte Fennema-Nio 85b997a0fb
Fix flaky multi_alter_table_statements (#7321)
Sometimes multi_alter_table_statements would fail in CI like this:

```diff
 -- Verify that DROP NOT NULL works
 ALTER TABLE lineitem_alter ALTER COLUMN int_column2 DROP NOT NULL;
 SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='lineitem_alter'::regclass;
-     Column      |         Type          | Modifiers
----------------------------------------------------------------------
- l_orderkey      | bigint                | not null
- l_partkey       | integer               | not null
- l_suppkey       | integer               | not null
- l_linenumber    | integer               | not null
- l_quantity      | numeric(15,2)         | not null
- l_extendedprice | numeric(15,2)         | not null
- l_discount      | numeric(15,2)         | not null
- l_tax           | numeric(15,2)         | not null
- l_returnflag    | character(1)          | not null
- l_linestatus    | character(1)          | not null
- l_shipdate      | date                  | not null
- l_commitdate    | date                  | not null
- l_receiptdate   | date                  | not null
- l_shipinstruct  | character(25)         | not null
- l_shipmode      | character(10)         | not null
- l_comment       | character varying(44) | not null
- float_column    | double precision      | default 1
- date_column     | date                  |
- int_column1     | integer               |
- int_column2     | integer               |
- null_column     | integer               |
-(21 rows)
-
+ERROR:  schema "alter_table_add_column" does not exist
 -- COPY should succeed now
 SELECT master_create_empty_shard('lineitem_alter') as shardid \gset
```

Reading from table_desc apparently has an issue: if the schema of one of
the items gets deleted while it is being read, we get such an error.

This change fixes that by no longer running multi_alter_table_statements in
parallel with alter_table_add_column.

This is another instance of the same issue as in #7294
2023-11-02 16:42:45 +03:00
Jelte Fennema-Nio f171ec98fc
Fix flaky failure_distributed_results (#7307)
Sometimes in CI we run into this failure:

```diff
   SELECT resultId, nodeport, rowcount, targetShardId, targetShardIndex
   FROM partition_task_list_results('test', $$ SELECT * FROM source_table $$, 'target_table')
           NATURAL JOIN pg_dist_node;
-WARNING:  connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR:  connection to the remote node localhost:9060 failed with the following error: connection not open
 SELECT * FROM distributed_result_info ORDER BY resultId;
-       resultid        | nodeport | rowcount | targetshardid | targetshardindex
----------------------------------------------------------------------
- test_from_100800_to_0 |     9060 |       22 |        100805 |                0
- test_from_100801_to_0 |    57637 |        2 |        100805 |                0
- test_from_100801_to_1 |    57637 |       15 |        100806 |                1
- test_from_100802_to_1 |    57637 |       10 |        100806 |                1
- test_from_100802_to_2 |    57637 |        5 |        100807 |                2
- test_from_100803_to_2 |    57637 |       18 |        100807 |                2
- test_from_100803_to_3 |    57637 |        4 |        100808 |                3
- test_from_100804_to_3 |     9060 |       24 |        100808 |                3
-(8 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 -- fetch from worker 2 should fail
 SAVEPOINT s1;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 SELECT fetch_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[], 'localhost', :worker_2_port) > 0 AS fetched;
-ERROR:  could not open file "base/pgsql_job_cache/xx_x_xxx/test_from_100802_to_1.data": No such file or directory
-CONTEXT:  while executing command on localhost:xxxxx
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 ROLLBACK TO SAVEPOINT s1;
+ERROR:  savepoint "s1" does not exist
 -- fetch from worker 1 should succeed
 SELECT fetch_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[], 'localhost', :worker_1_port) > 0 AS fetched;
- fetched
----------------------------------------------------------------------
- t
-(1 row)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 -- make sure the results read are same as the previous transaction block
 SELECT count(*), sum(x) FROM
   read_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[],'binary') AS res (x int);
- count | sum
----------------------------------------------------------------------
-    15 | 863
-(1 row)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 ROLLBACk;
```

As outlined in #7306, which I created, the reason for this is related to
having only a single connection open to the node. Finding and fixing the
full cause is not trivial, so instead this PR starts working around
this bug by forcing maximum parallelism. Preferably this workaround
would not be necessary, but removing it requires spending time on the
real fix. For now, having a less flaky CI is good enough.
2023-11-02 12:31:56 +00:00
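A hedged sketch of the workaround, assuming "forcing maximum parallelism" maps to the `citus.force_max_query_parallelization` GUC (which makes the executor open a connection per task instead of funnelling work through one connection):

```sql
-- Hedged sketch: open as many connections as possible per worker instead of
-- reusing a single one, which is the situation #7306 describes going wrong.
-- Whether the fix uses exactly this GUC is an assumption.
SET citus.force_max_query_parallelization TO on;

-- Query from the failing test, unchanged.
SELECT resultId, nodeport, rowcount, targetShardId, targetShardIndex
FROM partition_task_list_results('test', $$ SELECT * FROM source_table $$, 'target_table')
        NATURAL JOIN pg_dist_node;
```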
Jelte Fennema-Nio b47c8b3fb0
Fix flaky insert_select_connection_leak (#7302)
Sometimes in CI insert_select_connection_leak would fail like this:

```diff
 END;
 SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
        worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
-                           0 |                           0
+                          -1 |                           0
 (1 row)

 -- ROLLBACK
 BEGIN;
 INSERT INTO target_table SELECT * FROM source_table;
 INSERT INTO target_table SELECT * FROM source_table;
 ROLLBACK;
 SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
        worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
-                           0 |                           0
+                          -1 |                           0
 (1 row)

 \set VERBOSITY TERSE
 -- Error on constraint failure
 BEGIN;
 INSERT INTO target_table SELECT * FROM source_table;
 SELECT worker_connection_count(:worker_1_port) AS worker_1_connections,
        worker_connection_count(:worker_2_port) AS worker_2_connections \gset
 SAVEPOINT s1;
 INSERT INTO target_table SELECT a, CASE WHEN a < 50 THEN b ELSE null END  FROM source_table;
@@ -89,15 +89,15 @@
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
                            0 |                           0
 (1 row)

 END;
 SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
        worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
-                           0 |                           0
+                          -1 |                           0
 (1 row)
```

Source:
https://github.com/citusdata/citus/actions/runs/6718401194/attempts/1#summary-18258258387

A negative number of leaked connections is obviously not possible. For
some reason a connection was open when we checked the initial number of
connections, and it was closed afterwards. This could be from the
maintenance daemon, or maybe from the previous test that had not
fully closed its connections just yet.

The change in this PR doesn't actually fix the cause of the negative
count; it simply accepts it as well, by changing negative results
to zero.

With this fix we might sometimes miss a leak, because the negative
number can cancel out the leak and still result in 0. But since the
negative number only occurs sometimes, we'll still find the leak often
enough.
2023-11-02 13:15:43 +01:00
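A hedged sketch of that adjustment: clamp the computed difference at zero, so a stray pre-existing connection cannot turn into an impossible negative "leak":

```sql
-- Hedged sketch: treat negative differences as zero, since a connection that
-- happened to be open during the baseline measurement would otherwise show up
-- as a negative leak count.
SELECT GREATEST(0, worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections)
         AS leaked_worker_1_connections,
       GREATEST(0, worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections)
         AS leaked_worker_2_connections;
```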
Cédric Villemain 0678a2fd89
Fix #7242, CALL(@0) crash backend (#7288)
When executing a prepared CALL, which is not pure SQL but is available with
some drivers like npgsql and pgjdbc, Citus entered a code path where a
plan is not defined while trying to increase its cost, resulting in a
SIGSEGV (signal 11) when the plan is a NULL pointer.

Fix by only increasing plan cost when plan is not null.

However, it is a bit suspicious to get here with a NULL plan, and maybe a
better change would be to not call
ShardPlacementForFunctionColocatedWithDistTable() with a NULL plan at
all (in call.c:134).

The bug is hit with, for example:
```
CallableStatement proc = con.prepareCall("{CALL p(?)}");
proc.registerOutParameter(1, java.sql.Types.BIGINT);
proc.setInt(1, -100);
proc.execute();
```

where `p(bigint)` is a distributed "function" and the parameter is the
distribution key (also in a distributed table); see #7242 for details.

Fixes #7242
2023-11-02 13:15:24 +01:00
Jelte Fennema-Nio 5a48a1602e
Debug flaky logical_replication test (#7309)
Sometimes in CI our logical_replication test fails like this:

```diff
+++ /__w/citus/citus/src/test/regress/results/logical_replication.out.modified	2023-11-01 14:15:08.562758546 +0000
@@ -40,21 +40,21 @@

 SELECT count(*) from pg_publication;
  count
 -------
      0
 (1 row)

 SELECT count(*) from pg_replication_slots;
  count
 -------
-     0
+     1
 (1 row)

 SELECT count(*) FROM dist;
  count
 -------
```

It's hard to understand what is going on here based just on the wrong
number. So this PR changes the test to show the names of the
subscription, publication, and replication slot, to make finding the cause
easier.

In passing, this also fixes another flaky test in the same file that our
flaky test detection picked up. This is done by waiting for resource
cleanup after the shard move.
2023-11-02 13:15:02 +01:00
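A hedged sketch of the more informative checks: list the object names instead of bare counts so a leftover identifies itself in the regression diff (these catalogs are standard Postgres; whether the test selects exactly these columns is an assumption):

```sql
-- Hedged sketch: show names, not just counts, so a leftover publication,
-- subscription, or replication slot is identifiable from the diff alone.
SELECT pubname FROM pg_publication ORDER BY 1;
SELECT subname FROM pg_subscription ORDER BY 1;
SELECT slot_name FROM pg_replication_slots ORDER BY 1;
```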
Jelte Fennema-Nio 6fed82609c
Do not download all artifacts for flaky test detection (#7320)
This is causing 404 failures due to a race condition:
https://github.com/actions/toolkit/issues/1235

It also makes the tests take unnecessarily long.

This was tested by changing a test file and seeing that the flaky test
detection was still working.
2023-11-02 12:13:29 +00:00
Onur Tirtir 9867c5b949
Fix flaky multi_mx_node_metadata.sql test (#7317)
Fixes the flaky test that results in the following diff:
```diff
--- /__w/citus/citus/src/test/regress/expected/multi_mx_node_metadata.out.modified	2023-11-01 14:22:12.890476575 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_mx_node_metadata.out.modified	2023-11-01 14:22:12.914476657 +0000
@@ -840,24 +840,26 @@
 (1 row)
 
 \c :datname - - :master_port
 SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
   datname   
 ------------
  db_to_drop
 (1 row)
 
 DROP DATABASE db_to_drop;
+ERROR:  database "db_to_drop" is being accessed by other users
 SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
   datname   
 ------------
-(0 rows)
+ db_to_drop
+(1 row)
 
 -- cleanup
 DROP SEQUENCE sequence CASCADE;
 NOTICE:  drop cascades to default value for column a of table reference_table
```
2023-11-02 11:02:34 +00:00
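One way to avoid the `is being accessed by other users` failure is to wait until the metadata-sync backend has disconnected before dropping the database; a hedged sketch (the actual change in the test may differ):

```sql
-- Hedged sketch: wait until no Citus metadata-sync backend is still connected
-- to the database before dropping it.
DO $$
BEGIN
    WHILE EXISTS (
        SELECT 1 FROM pg_stat_activity
        WHERE datname = 'db_to_drop'
          AND application_name LIKE 'Citus Met%')
    LOOP
        PERFORM pg_sleep(0.1);
    END LOOP;
END;
$$;

DROP DATABASE db_to_drop;
```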
Gürkan İndibay 184c8fc1ee
Enriches statement propagation document (#7267)
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2023-11-02 09:59:34 +00:00
Jelte Fennema-Nio a6e86884f6
Fix flaky isolation_metadata_sync_deadlock (#7312)
Sometimes isolation_metadata_sync_deadlock fails in CI like this:

```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out
--- /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out.modified	2023-11-01 16:03:15.090199229 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out.modified	2023-11-01 16:03:15.098199312 +0000
@@ -110,10 +110,14 @@
 t
 (1 row)

 step s2-stop-connection:
  SELECT stop_session_level_connection_to_node();

 stop_session_level_connection_to_node
 -------------------------------------

 (1 row)
+
+teardown failed: ERROR:  localhost:57638 is a metadata node, but is out of sync
+HINT:  If the node is up, wait until metadata gets synced to it and try again.
+CONTEXT:  SQL statement "SELECT master_remove_distributed_table_metadata_from_workers(v_obj.objid, v_obj.schema_name, v_obj.object_name)"
```

Source:
https://github.com/citusdata/citus/actions/runs/6721938040/attempts/1#summary-18268946448

To fix this we now wait for the metadata to be fully synced to all
nodes at the start of the teardown steps.
2023-11-02 10:39:05 +01:00
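A hedged sketch of that teardown guard, assuming the suite's `wait_until_metadata_sync()` helper is what the fix relies on (helper name and timeout are assumptions):

```sql
-- Hedged sketch: block in teardown until every metadata node reports being in
-- sync, so subsequent DROPs don't hit "is a metadata node, but is out of sync".
SELECT public.wait_until_metadata_sync(30000);
```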
Jelte Fennema-Nio ea5551689e
Prepare github actions pipelines for merge queue (#7315)
GitHub has a built-in merge queue. I think it would be good to try this
out to speed up merging PRs when multiple people want to merge at the
same time. This PR does not enable it yet, but it starts triggering
GitHub Actions also for the `merge_queue` event. This is a requirement
for trying it out.

Announcement:
https://github.blog/2023-07-12-github-merge-queue-is-generally-available/
Docs:
https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-a-merge-queue
2023-11-02 08:23:34 +00:00
Onur Tirtir 2cf4c04023
Fix flaky global_cancel.sql test (#7316) 2023-11-01 23:59:41 +01:00
Jelte Fennema-Nio e3c93c303d
Fix flaky citus_non_blocking_split_shard_cleanup (#7311)
Sometimes in CI citus_non_blocking_split_shard_cleanup failed like this:

```diff
--- /__w/citus/citus/src/test/regress/expected/citus_non_blocking_split_shard_cleanup.out.modified	2023-11-01 15:07:14.280551207 +0000
+++ /__w/citus/citus/src/test/regress/results/citus_non_blocking_split_shard_cleanup.out.modified	2023-11-01 15:07:14.292551358 +0000
@@ -106,21 +106,22 @@
 -----------------------------------

 (1 row)

 \c - - - :worker_2_port
 SET search_path TO "citus_split_test_schema";
 -- Replication slots should be cleaned up
 SELECT slot_name FROM pg_replication_slots;
             slot_name
 ---------------------------------
-(0 rows)
+ citus_shard_split_slot_19_10_17
+(1 row)

 -- Publications should be cleanedup
 SELECT count(*) FROM pg_publication;
  count
```

It's expected that the replication slot is sometimes not cleaned up if
we don't wait until resource cleanup completes, so this PR starts
waiting for that here.
2023-11-01 16:21:12 +00:00
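A hedged sketch of that wait, assuming the regression suite's `wait_for_resource_cleanup()` helper; once it returns, the replication-slot check should reliably come back empty:

```sql
-- Hedged sketch: block until background cleanup of split artifacts finishes
-- before asserting that the replication slots are gone.
SELECT public.wait_for_resource_cleanup();

\c - - - :worker_2_port
SELECT slot_name FROM pg_replication_slots;
```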