Commit Graph

4461 Commits (0ec43fb951daf6af082fb7dba59147e3930b109d)

Author SHA1 Message Date
Naisila Puka 57ff762c82
Fix VACUUM flakiness in multi_utilities (#7334)
When I run this test on my local machine, the size of the table after the DELETE
command is around 58785792. Hence, I assume the diffs suggest that
the VACUUM had no effect. The current solution is to run the VACUUM
command three times instead of once.
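
A minimal sketch of the stabilized check (table name and size bounds taken from the diff below; the committed test may differ slightly):
```sql
-- Repeat VACUUM so the relation is reliably shrunk before the size check;
-- a single pass is sometimes not enough under CI load.
VACUUM local_vacuum_table;
VACUUM local_vacuum_table;
VACUUM local_vacuum_table;
SELECT CASE WHEN s BETWEEN 20000000 AND 25000000 THEN 22500000 ELSE s END
FROM pg_total_relation_size('local_vacuum_table') s;
```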

Example diff:
https://github.com/citusdata/citus/actions/runs/6722231142/attempts/1#summary-18269870674
```diff
insert into local_vacuum_table select i from generate_series(1,1000000) i;
 delete from local_vacuum_table;
 VACUUM local_vacuum_table;
 SELECT CASE WHEN s BETWEEN 20000000 AND 25000000 THEN 22500000 ELSE s END
 FROM pg_total_relation_size('local_vacuum_table') s ;
     s     
 ----------
- 22500000
+ 58785792
 (1 row)
```
See more diff examples in the PR description
https://github.com/citusdata/citus/pull/7334
2023-11-09 21:00:24 +03:00
dependabot[bot] d4663212f4 Bump werkzeug from 2.3.7 to 3.0.1 in /src/test/regress
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.7 to 3.0.1.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.3.7...3.0.1)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-09 17:14:14 +01:00
Nils Dijk 0dac63afc0
move pg_version_constants.h to toplevel include (#7335)
In preparation for sorting and grouping all includes, we wanted to move
this file to the top-level includes for good grouping/sorting.
2023-11-09 15:09:39 +00:00
Naisila Puka 0dc41ee5a0
Fix flaky multi_mx_insert_select_repartition test (#7331)
https://github.com/citusdata/citus/actions/runs/6745019678/attempts/1#summary-18336188930
```diff
     insert into target_table SELECT a*2 FROM source_table RETURNING a;
-NOTICE:  executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_xxxxx_from_4213582_to_0','repartitioned_results_xxxxx_from_4213584_to_0']::text[],'localhost',57638) bytes
+NOTICE:  executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_3940758121873413_from_4213584_to_0','repartitioned_results_3940758121873413_from_4213582_to_0']::text[],'localhost',57638) bytes
```

The elements in the array passed to `fetch_intermediate_results` are the
same, but in the opposite of the expected order.

To fix this flakiness, we can omit the `"SELECT bytes FROM
fetch_intermediate_results..."` line. The remaining logs still make it
clear that the intermediate results have been fetched.
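
One hypothetical way to drop just that nondeterministic NOTICE would be to disable local-command logging around the statement (the committed fix may instead adjust the expected output):
```sql
-- Hypothetical: the NOTICE comes from citus.log_local_commands, and its
-- argument order is nondeterministic, so silence it for this statement only.
SET citus.log_local_commands TO off;
insert into target_table SELECT a*2 FROM source_table RETURNING a;
RESET citus.log_local_commands;
```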
2023-11-08 15:15:33 +03:00
Onur Tirtir 444e6cb7d6
Remove useless variables (#7327)
To fix warnings observed when using different compiler versions.
2023-11-07 16:39:08 +03:00
cvbhjkl e535f53ce5
Fix typo in local_executor.c (#7324)
Fix a typo 'remaning' -> 'remaining' in local_executor.c
2023-11-03 12:14:11 +00:00
Onur Tirtir 21646ca1e9
Fix flaky isolation_get_all_active_transactions.spec test (#7323)
Fix the flaky test that results in the following diff by waiting, for up to
5 seconds, until the backend that we want to terminate has really terminated.

```diff
--- /__w/citus/citus/src/test/regress/expected/isolation_get_all_active_transactions.out.modified	2023-11-01 16:30:57.648749795 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_get_all_active_transactions.out.modified	2023-11-01 16:30:57.656749877 +0000
@@ -114,13 +114,13 @@
 --------------------
 t                   
 (1 row)
 
 step s3-show-activity: 
  SET ROLE postgres;
  select count(*) from get_all_active_transactions() where process_id IN (SELECT * FROM selected_pid);
 
 count
 -----
-    0
+    1
 (1 row)
```
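
A sketch of the kind of wait that makes the later count deterministic (hypothetical helper; the actual spec step may be written differently):
```sql
-- Poll for up to ~5 seconds until the terminated backend disappears from
-- pg_stat_activity before counting active transactions.
DO $$
BEGIN
  FOR i IN 1 .. 50 LOOP
    PERFORM 1 FROM pg_stat_activity WHERE pid IN (SELECT * FROM selected_pid);
    EXIT WHEN NOT FOUND;
    PERFORM pg_sleep(0.1);
  END LOOP;
END $$;
```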
2023-11-03 09:00:32 +01:00
Onur Tirtir 5e2439a117
Make some more tests re-runable (#7322)
* multi_mx_create_table
* multi_mx_function_table_reference
* multi_mx_add_coordinator
* create_role_propagation
* metadata_sync_helpers
* text_search

https://github.com/citusdata/citus/pull/7278 requires this.
2023-11-02 18:32:56 +03:00
Jelte Fennema-Nio 85b997a0fb
Fix flaky multi_alter_table_statements (#7321)
Sometimes multi_alter_table_statements would fail in CI like this:

```diff
 -- Verify that DROP NOT NULL works
 ALTER TABLE lineitem_alter ALTER COLUMN int_column2 DROP NOT NULL;
 SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='lineitem_alter'::regclass;
-     Column      |         Type          | Modifiers
----------------------------------------------------------------------
- l_orderkey      | bigint                | not null
- l_partkey       | integer               | not null
- l_suppkey       | integer               | not null
- l_linenumber    | integer               | not null
- l_quantity      | numeric(15,2)         | not null
- l_extendedprice | numeric(15,2)         | not null
- l_discount      | numeric(15,2)         | not null
- l_tax           | numeric(15,2)         | not null
- l_returnflag    | character(1)          | not null
- l_linestatus    | character(1)          | not null
- l_shipdate      | date                  | not null
- l_commitdate    | date                  | not null
- l_receiptdate   | date                  | not null
- l_shipinstruct  | character(25)         | not null
- l_shipmode      | character(10)         | not null
- l_comment       | character varying(44) | not null
- float_column    | double precision      | default 1
- date_column     | date                  |
- int_column1     | integer               |
- int_column2     | integer               |
- null_column     | integer               |
-(21 rows)
-
+ERROR:  schema "alter_table_add_column" does not exist
 -- COPY should succeed now
 SELECT master_create_empty_shard('lineitem_alter') as shardid \gset
 ```

Reading from table_desc apparently has an issue: if the schema of one of
the items gets deleted while the view is being read, we get such
an error.

This change fixes that by not running multi_alter_table_statements in parallel
with alter_table_add_column anymore.

This is another instance of the same issue as in #7294
2023-11-02 16:42:45 +03:00
Jelte Fennema-Nio f171ec98fc
Fix flaky failure_distributed_results (#7307)
Sometimes in CI we run into this failure:

```diff
   SELECT resultId, nodeport, rowcount, targetShardId, targetShardIndex
   FROM partition_task_list_results('test', $$ SELECT * FROM source_table $$, 'target_table')
           NATURAL JOIN pg_dist_node;
-WARNING:  connection to the remote node localhost:xxxxx failed with the following error: connection not open
+ERROR:  connection to the remote node localhost:9060 failed with the following error: connection not open
 SELECT * FROM distributed_result_info ORDER BY resultId;
-       resultid        | nodeport | rowcount | targetshardid | targetshardindex
----------------------------------------------------------------------
- test_from_100800_to_0 |     9060 |       22 |        100805 |                0
- test_from_100801_to_0 |    57637 |        2 |        100805 |                0
- test_from_100801_to_1 |    57637 |       15 |        100806 |                1
- test_from_100802_to_1 |    57637 |       10 |        100806 |                1
- test_from_100802_to_2 |    57637 |        5 |        100807 |                2
- test_from_100803_to_2 |    57637 |       18 |        100807 |                2
- test_from_100803_to_3 |    57637 |        4 |        100808 |                3
- test_from_100804_to_3 |     9060 |       24 |        100808 |                3
-(8 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 -- fetch from worker 2 should fail
 SAVEPOINT s1;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 SELECT fetch_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[], 'localhost', :worker_2_port) > 0 AS fetched;
-ERROR:  could not open file "base/pgsql_job_cache/xx_x_xxx/test_from_100802_to_1.data": No such file or directory
-CONTEXT:  while executing command on localhost:xxxxx
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 ROLLBACK TO SAVEPOINT s1;
+ERROR:  savepoint "s1" does not exist
 -- fetch from worker 1 should succeed
 SELECT fetch_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[], 'localhost', :worker_1_port) > 0 AS fetched;
- fetched
----------------------------------------------------------------------
- t
-(1 row)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 -- make sure the results read are same as the previous transaction block
 SELECT count(*), sum(x) FROM
   read_intermediate_results('{test_from_100802_to_1,test_from_100802_to_2}'::text[],'binary') AS res (x int);
- count | sum
----------------------------------------------------------------------
-    15 | 863
-(1 row)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 ROLLBACk;
```

As outlined in #7306, which I created, the reason for this is related to
only having a single connection open to the node. Finding and fixing the
full cause is not trivial, so instead this PR starts working around
this bug by forcing maximum parallelism. Preferably this workaround
would not be necessary, but that requires spending time to fix the
underlying issue. For now, having a less flaky CI is good enough.
2023-11-02 12:31:56 +00:00
Jelte Fennema-Nio b47c8b3fb0
Fix flaky insert_select_connection_leak (#7302)
Sometimes in CI insert_select_connection_leak would fail like this:

```diff
 END;
 SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
        worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
-                           0 |                           0
+                          -1 |                           0
 (1 row)

 -- ROLLBACK
 BEGIN;
 INSERT INTO target_table SELECT * FROM source_table;
 INSERT INTO target_table SELECT * FROM source_table;
 ROLLBACK;
 SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
        worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
-                           0 |                           0
+                          -1 |                           0
 (1 row)

 \set VERBOSITY TERSE
 -- Error on constraint failure
 BEGIN;
 INSERT INTO target_table SELECT * FROM source_table;
 SELECT worker_connection_count(:worker_1_port) AS worker_1_connections,
        worker_connection_count(:worker_2_port) AS worker_2_connections \gset
 SAVEPOINT s1;
 INSERT INTO target_table SELECT a, CASE WHEN a < 50 THEN b ELSE null END  FROM source_table;
@@ -89,15 +89,15 @@
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
                            0 |                           0
 (1 row)

 END;
 SELECT worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections AS leaked_worker_1_connections,
        worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections AS leaked_worker_2_connections;
  leaked_worker_1_connections | leaked_worker_2_connections
 -----------------------------+-----------------------------
-                           0 |                           0
+                          -1 |                           0
 (1 row)
```

Source:
https://github.com/citusdata/citus/actions/runs/6718401194/attempts/1#summary-18258258387

A negative number of leaked connections is obviously not possible. For
some reason there was a connection open when we checked the initial
number of connections that was closed afterwards. This could be from
the maintenance daemon, or maybe from the previous test not having
fully closed its connections yet.

The change in this PR doesn't actually fix the cause of the negative
count; it simply treats such values as acceptable by clamping negative
results to zero.

With this fix we might sometimes miss a leak, because the negative
number can cancel out the leak and still result in a 0. But since the
negative number only occurs sometimes, we'll still find the leak often
enough.
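
A sketch of how the check can treat a negative difference as zero (the exact form used in the test may differ):
```sql
-- Clamp to zero so a stray pre-existing connection that closes mid-test
-- does not surface as "-1 leaked connections".
SELECT GREATEST(worker_connection_count(:worker_1_port) - :pre_xact_worker_1_connections, 0)
           AS leaked_worker_1_connections,
       GREATEST(worker_connection_count(:worker_2_port) - :pre_xact_worker_2_connections, 0)
           AS leaked_worker_2_connections;
```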
2023-11-02 13:15:43 +01:00
Cédric Villemain 0678a2fd89
Fix #7242, CALL(@0) crash backend (#7288)
When executing a prepared CALL, which is not pure SQL but is available with
some drivers like npgsql and pgjdbc, Citus entered a code path where a
plan is not defined while trying to increase its cost, resulting in a
SIGSEGV (signal 11) because the plan is a NULL pointer.

Fix by only increasing the plan cost when the plan is not NULL.

However, it is a bit suspicious to get here with a NULL plan, and maybe a
better change would be to not call
ShardPlacementForFunctionColocatedWithDistTable() with a NULL plan at
all (in call.c:134)

The bug was hit with, for example:
```
CallableStatement proc = con.prepareCall("{CALL p(?)}");
proc.registerOutParameter(1, java.sql.Types.BIGINT);
proc.setInt(1, -100);
proc.execute();
```

where `p(bigint)` is a distributed "function" and the parameter is the
distribution key (also in a distributed table); see #7242 for details

Fixes #7242
2023-11-02 13:15:24 +01:00
Jelte Fennema-Nio 5a48a1602e
Debug flaky logical_replication test (#7309)
Sometimes in CI our logical_replication test fails like this:

```diff
+++ /__w/citus/citus/src/test/regress/results/logical_replication.out.modified	2023-11-01 14:15:08.562758546 +0000
@@ -40,21 +40,21 @@

 SELECT count(*) from pg_publication;
  count
 -------
      0
 (1 row)

 SELECT count(*) from pg_replication_slots;
  count
 -------
-     0
+     1
 (1 row)

 SELECT count(*) FROM dist;
  count
 -------
```

It's hard to understand what is going on here based only on the wrong
number. So this PR changes the test to show the names of the
subscription, publication, and replication slot to make finding the cause
easier.

In passing this also fixes another flaky test in the same file that our
flaky test detection picked up. This is done by waiting for resource
cleanup after the shard move.
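
A sketch of the more informative checks (real catalog columns; the exact queries in the test may differ):
```sql
-- Show which objects are left behind instead of bare counts.
SELECT pubname   FROM pg_publication;
SELECT slot_name FROM pg_replication_slots;
SELECT subname   FROM pg_subscription;
```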
2023-11-02 13:15:02 +01:00
Onur Tirtir 9867c5b949
Fix flaky multi_mx_node_metadata.sql test (#7317)
Fixes the flaky test that results in the following diff:
```diff
--- /__w/citus/citus/src/test/regress/expected/multi_mx_node_metadata.out.modified	2023-11-01 14:22:12.890476575 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_mx_node_metadata.out.modified	2023-11-01 14:22:12.914476657 +0000
@@ -840,24 +840,26 @@
 (1 row)
 
 \c :datname - - :master_port
 SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
   datname   
 ------------
  db_to_drop
 (1 row)
 
 DROP DATABASE db_to_drop;
+ERROR:  database "db_to_drop" is being accessed by other users
 SELECT datname FROM pg_stat_activity WHERE application_name LIKE 'Citus Met%';
   datname   
 ------------
-(0 rows)
+ db_to_drop
+(1 row)
 
 -- cleanup
 DROP SEQUENCE sequence CASCADE;
 NOTICE:  drop cascades to default value for column a of table reference_table
```
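
A hypothetical way to avoid the race (the committed fix may work differently): wait until the maintenance daemon's connection to the database is gone before dropping it.
```sql
-- Hypothetical guard: poll until no 'Citus Met...' backend is connected
-- to db_to_drop, then drop it.
DO $$
BEGIN
  WHILE EXISTS (SELECT 1 FROM pg_stat_activity
                WHERE application_name LIKE 'Citus Met%'
                  AND datname = 'db_to_drop') LOOP
    PERFORM pg_sleep(0.1);
  END LOOP;
END $$;
DROP DATABASE db_to_drop;
```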
2023-11-02 11:02:34 +00:00
Gürkan İndibay 184c8fc1ee
Enriches statement propagation document (#7267)
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2023-11-02 09:59:34 +00:00
Jelte Fennema-Nio a6e86884f6
Fix flaky isolation_metadata_sync_deadlock (#7312)
Sometimes isolation_metadata_sync_deadlock fails in CI like this:

```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out
--- /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out.modified	2023-11-01 16:03:15.090199229 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out.modified	2023-11-01 16:03:15.098199312 +0000
@@ -110,10 +110,14 @@
 t
 (1 row)

 step s2-stop-connection:
  SELECT stop_session_level_connection_to_node();

 stop_session_level_connection_to_node
 -------------------------------------

 (1 row)
+
+teardown failed: ERROR:  localhost:57638 is a metadata node, but is out of sync
+HINT:  If the node is up, wait until metadata gets synced to it and try again.
+CONTEXT:  SQL statement "SELECT master_remove_distributed_table_metadata_from_workers(v_obj.objid, v_obj.schema_name, v_obj.object_name)"
```

Source:
https://github.com/citusdata/citus/actions/runs/6721938040/attempts/1#summary-18268946448

To fix this we now wait for the metadata to be fully synced to all
nodes at the start of the teardown steps.
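
A minimal sketch of that teardown guard, assuming the wait_until_metadata_sync helper used elsewhere in the test suite:
```sql
-- Block teardown until the workers report fully synced metadata, so the DROP
-- cascade does not hit an out-of-sync metadata node.
SELECT public.wait_until_metadata_sync(60000);
```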
2023-11-02 10:39:05 +01:00
Onur Tirtir 2cf4c04023
Fix flaky global_cancel.sql test (#7316) 2023-11-01 23:59:41 +01:00
Jelte Fennema-Nio e3c93c303d
Fix flaky citus_non_blocking_split_shard_cleanup (#7311)
Sometimes in CI citus_non_blocking_split_shard_cleanup failed like this:

```diff
--- /__w/citus/citus/src/test/regress/expected/citus_non_blocking_split_shard_cleanup.out.modified	2023-11-01 15:07:14.280551207 +0000
+++ /__w/citus/citus/src/test/regress/results/citus_non_blocking_split_shard_cleanup.out.modified	2023-11-01 15:07:14.292551358 +0000
@@ -106,21 +106,22 @@
 -----------------------------------

 (1 row)

 \c - - - :worker_2_port
 SET search_path TO "citus_split_test_schema";
 -- Replication slots should be cleaned up
 SELECT slot_name FROM pg_replication_slots;
             slot_name
 ---------------------------------
-(0 rows)
+ citus_shard_split_slot_19_10_17
+(1 row)

 -- Publications should be cleanedup
 SELECT count(*) FROM pg_publication;
  count
```

It's expected that the replication slot is sometimes not cleaned up if
we don't wait until resource cleanup completes. This PR adds that wait.
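
A sketch of making the slot check deterministic (the test may use a dedicated wait helper instead of forcing cleanup):
```sql
-- Run deferred shard-split cleanup before asserting that no replication
-- slots remain on the worker.
CALL citus_cleanup_orphaned_resources();
SELECT slot_name FROM pg_replication_slots;
```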
2023-11-01 16:21:12 +00:00
Jelte Fennema-Nio c9f2fc892d
Fix flaky failure_split_cleanup (#7299)
Sometimes failure_split_cleanup failed in CI like this:

```diff
 ERROR:  server closed the connection unexpectedly
 CONTEXT:  while executing command on localhost:9060
     SELECT operation_id, object_type, object_name, node_group_id, policy_type
     FROM pg_dist_cleanup where operation_id = 777 ORDER BY object_name;
  operation_id | object_type |                        object_name                        | node_group_id | policy_type
 --------------+-------------+-----------------------------------------------------------+---------------+-------------
           777 |           1 | citus_failure_split_cleanup_schema.table_to_split_8981000 |             1 |           0
-          777 |           1 | citus_failure_split_cleanup_schema.table_to_split_8981002 |             1 |           1
           777 |           1 | citus_failure_split_cleanup_schema.table_to_split_8981002 |             2 |           0
+          777 |           1 | citus_failure_split_cleanup_schema.table_to_split_8981002 |             1 |           1
           777 |           1 | citus_failure_split_cleanup_schema.table_to_split_8981003 |             2 |           1
           777 |           4 | citus_shard_split_publication_1_10_777                    |             2 |           0
 (5 rows)

     -- we need to allow connection so that we can connect to proxy
```

Source:
https://github.com/citusdata/citus/actions/runs/6717642291/attempts/1#summary-18256014949

It's the common problem where we're missing a column in the ORDER BY
clause. This fixes that by also adding node_group_id to the ORDER BY of
the query in question.
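
A sketch of the stabilized query, with the column list taken from the failing output above:
```sql
-- Adding node_group_id to ORDER BY gives rows with the same object_name a
-- deterministic order.
SELECT operation_id, object_type, object_name, node_group_id, policy_type
FROM pg_dist_cleanup
WHERE operation_id = 777
ORDER BY object_name, node_group_id;
```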
2023-11-01 14:08:51 +00:00
Jelte Fennema-Nio c83c556702
Fix flaky isolation_master_update_node (#7303)
Sometimes in CI isolation_master_update_node fails like this:

```diff
 ------------------

 (1 row)

 step s2-abort: ABORT;
 step s1-abort: ABORT;
 FATAL:  terminating connection due to administrator command
 FATAL:  terminating connection due to administrator command
 SSL connection has been closed unexpectedly
+server closed the connection unexpectedly

 master_remove_node
 ------------------

```

This just seems like a random error line. The only way to reasonably fix
this is by adding an extra output file. So that's what this PR does.
2023-11-01 16:44:45 +03:00
Jelte Fennema-Nio 0d83ab57de
Fix flaky multi_cluster_management (#7295)
One of our flakiest and most annoying tests is
multi_cluster_management. It usually fails like this:
```diff
 SELECT citus_disable_node('localhost', :worker_2_port);
  citus_disable_node
 --------------------

 (1 row)

 SELECT public.wait_until_metadata_sync(60000);
+WARNING:  waiting for metadata sync timed out
  wait_until_metadata_sync
 --------------------------

 (1 row)

```

This tries to address that by hardening wait_until_metadata_sync. I
believe the reason for this warning is that there is a race condition in
wait_until_metadata_sync. It's possible for the pre-check to fail, then
have the maintenance daemon send a notification. And only then have the
backend start to listen. I tried to fix it in two ways:
1. First run LISTEN, and only then do the pre-check.
2. If we time out, check again just to make sure that we did not miss
   the notification somehow. And don't show a warning if all metadata is
   synced after the timeout.

It's hard to know for sure that this fixes it because the test is not
repeatable and I could not reproduce it locally. Let's just hope for the
best.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-11-01 10:46:01 +00:00
Jelte Fennema-Nio 20ae42e7fa
Fix flaky multi_reference_table test (#7294)
Sometimes multi_reference_table failed in CI like this:

```diff
 \c - - - :master_port
 DROP INDEX reference_schema.reference_index_2;
 \c - - - :worker_1_port
 SELECT "Column", "Type", "Modifiers" FROM table_desc WHERE relid='reference_schema.reference_table_ddl_1250019'::regclass;
- Column  |            Type             |  Modifiers
----------------------------------------------------------------------
- value_2 | double precision            | default 25.0
- value_3 | text                        | not null
- value_4 | timestamp without time zone |
- value_5 | double precision            |
-(4 rows)
-
+ERROR:  schema "citus_local_table_queries" does not exist
 \di reference_schema.reference_index_2*
           List of relations
  Schema | Name | Type | Owner | Table
```

Source:
https://github.com/citusdata/citus/actions/runs/6707535961/attempts/2#summary-18226879513

Reading from table_desc apparently has an issue: if the schema of one of
the items gets deleted while the view is being read, we get such
an error.

This change fixes that by not running multi_reference_table in parallel
with citus_local_tables_queries anymore.
2023-11-01 10:12:06 +00:00
Cédric Villemain 37415ef8f5
Allow citus_*_size on index related to a distributed table (#7271)
I just enhanced the existing code to check if the relation is an index
belonging to a distributed table.
If so, the shardId is appended to the relation (index) name and the *_size
functions are executed as before.
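
Illustrative usage under this change (table and index names are made up):
```sql
CREATE TABLE events (id bigint, payload text);
SELECT create_distributed_table('events', 'id');
CREATE INDEX events_id_idx ON events (id);
-- The size functions now also accept an index that belongs to a distributed table.
SELECT citus_relation_size('events_id_idx');
SELECT citus_total_relation_size('events_id_idx');
```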

There is a change in an extern function:
  `extern StringInfo GenerateSizeQueryOnMultiplePlacements(...)`
It's possible to create a new function and deprecate this one later if
compatibility is an issue.

Fixes https://github.com/citusdata/citus/issues/6496.

DESCRIPTION: Allows using Citus size functions on distributed table
indexes.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-11-01 09:05:51 +00:00
Jelte Fennema-Nio a76a832553
Fix flaky validate_constraint test (#7293)
Sometimes the validate_constraint test would fail like this:

```diff
  validatable_constraint_8000016 | t
 (10 rows)

 DROP TABLE constrained_table;
+ERROR:  deadlock detected
+DETAIL:  Process 16602 waits for ShareRowExclusiveLock on relation 56258 of database 16384; blocked by process 16601.
+Process 16601 waits for AccessShareLock on relation 56120 of database 16384; blocked by process 16602.
+HINT:  See server log for query details.
 DROP TABLE referenced_table CASCADE;
 DROP TABLE referencing_table;
 DROP SCHEMA validate_constraint CASCADE;
-NOTICE:  drop cascades to 3 other objects
+NOTICE:  drop cascades to 4 other objects
 DETAIL:  drop cascades to type constraint_validity
 drop cascades to view constraint_validations_in_workers
 drop cascades to view constraint_validations
+drop cascades to table constrained_table
 SET search_path TO DEFAULT;

```

Source:
https://github.com/citusdata/citus/actions/runs/6708383699?pr=7291

This change fixes that by not running together with the
foreign_key_to_reference_table test anymore. In passing, it also
simplifies dropping the test's resources.
2023-11-01 09:41:28 +01:00
Emel Şimşek ee8f4bb7e8
Start Maintenance Daemon for Main DB at the server start. (#7254)
DESCRIPTION: This change starts a maintenance daemon at the time of
server start if there is a designated main database.

This is the code flow:

1. User designates a main database:
   `ALTER SYSTEM SET citus.main_db =  "myadmindb";`

2. When postmaster starts, in _PG_Init, Citus calls
    `InitializeMaintenanceDaemonForMainDb`

This function registers a background worker to run
`CitusMaintenanceDaemonMain` with `databaseOid = 0`

3. `CitusMaintenanceDaemonMain` takes some special actions when
databaseOid is 0:
     - Gets the citus.main_db value.
     - Connects to the citus.main_db.
     - Now that `MyDatabaseId` is available, creates a hash entry for it.
     - Then follows the same control flow as for a regular db.
2023-10-30 09:44:13 +03:00
Benjamin O f9218d9780
Support replacing IPv6 Loopback in `normalize.sed` (#7269)
I had a test failure due to my machine using the IPv6 loopback
address. This change to `normalize.sed` solves that issue.
2023-10-27 16:42:55 +02:00
Naisila Puka 10198b18e8
Technical readme small fixes (#7261) 2023-10-23 13:43:43 +03:00
Naisila Puka 1fe16fa746
Remove unnecessary pre-fastpath code (#7262)
This code was here because we first implemented the
`fast path planner` via
[#2606](https://github.com/citusdata/citus/pull/2606)
and then later `deferred pruning` via
[#3369](https://github.com/citusdata/citus/pull/3369).
So, for some years, this code was useful.
2023-10-23 13:01:48 +03:00
zhjwpku 2d1444188c
Fix wrong comments around HasDistributionKey() (#7223)
HasDistributionKey & HasDistributionKeyCacheEntry return true when the
corresponding table has a distribution key; the comments state the
opposite, which should be fixed.

Signed-off-by: Zhao Junwang <zhjwpku@gmail.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-10-18 10:53:00 +02:00
Onur Tirtir db13afaa7b
Fix flaky columnar_create.sql test (#7266) 2023-10-17 16:58:17 +03:00
Gürkan İndibay 71a4633dad
Fixes typo and renames multi_process_utility (#7259) 2023-10-17 16:39:37 +03:00
Jelte Fennema-Nio 788e09a39a
Add a test for citus_shards where table names have spaces (#7224)
There was a bug reported for previous versions of Citus where
`shard_size` was returning NULL for tables with spaces in their names. It
works fine on the main branch, but I'm still adding a test for this to
the main branch because it seems like a good test to have.
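
Roughly the kind of check the test adds (names illustrative):
```sql
CREATE TABLE "t with space" (a int);
SELECT create_distributed_table('"t with space"', 'a');
-- shard_size should not be NULL even though the table name contains a space.
SELECT shard_name, shard_size
FROM citus_shards
WHERE table_name = '"t with space"'::regclass;
```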
2023-10-16 11:38:24 +02:00
Emel Şimşek e9035f6d32
Send keepalive messages in split decoder periodically to avoid wal receiver timeouts during large shard splits. (#7229)
DESCRIPTION: Send keepalive messages during the logical replication
phase of large shard splits to avoid timeouts.

During the logical replication part of the shard split process, the split
decoder filters out the WAL records produced by the initial copy. If the
number of WAL records is big, the split decoder ends up processing for
a long time before sending out any WAL records through pgoutput. Hence
the WAL receiver may time out and restart repeatedly, causing our split
driver code's catch-up logic to fail.

Notes: 

1. If wal_receiver_timeout is set to a very small number, e.g. 600ms,
it may time out before receiving the keepalives. My tests show that this
code works best when `wal_receiver_timeout` is set to 1 minute, which
is the default value.

2. Once a logical replication worker times out, a new one gets launched.
The new logical replication worker sets the pg_stat_subscription columns
to initial values. E.g. the latest_end_lsn is set to 0. Our driver logic
in `WaitForGroupedLogicalRepTargetsToCatchUp` cannot handle the LSN value
going backwards. This is the main reason for it to get stuck in the
infinite loop.
2023-10-09 22:33:08 +03:00
Nils Dijk 6d8725efb0
Fix leaking of memory and memory contexts in Foreign Constraint Graphs (#7236)
DESCRIPTION: Fix leaking of memory and memory contexts in Foreign
Constraint Graphs

Previously, every time we (re)created the Foreign Constraint
Relationship Graph, we created a new Memory Context while losing the
reference to the previous context. That old context could still have
leftover memory in it, causing a memory leak.

With this patch we statically have one memory context that we lazily
initialize the first time we create our foreign constraint relationship
graph. On every subsequent creation, besides destroying our previous
hashmap, we also reset our memory context to remove any leftover
references.
2023-10-09 13:05:51 +02:00
Onur Tirtir 858d99be33
Take improvement_threshold into the account in citus_add_rebalance_strategy() (#7247)
DESCRIPTION: Makes sure to take improvement_threshold into account
in `citus_add_rebalance_strategy()`.

Fixes https://github.com/citusdata/citus/issues/7188.
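
An illustrative call where the last argument is the improvement_threshold that is now honored (the cost/capacity functions shown are Citus' built-in defaults; the threshold values are made up):
```sql
SELECT citus_add_rebalance_strategy(
    'my_strategy',
    'citus_shard_cost_1',
    'citus_node_capacity_1',
    'citus_shard_allowed_on_node_true',
    0.2,   -- default_threshold
    0.1,   -- minimum_threshold
    0.3);  -- improvement_threshold, now taken into account
```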
2023-10-09 13:13:08 +03:00
Önder Kalacı 7d6c401dd3
Update technical readme (#7248)
Fix a wrong query, reported by @naisila
2023-10-06 13:37:37 +03:00
Önder Kalacı 0dca65c84d
Addd missing image to Technical Readme (#7243)
DESCRIPTION: PR description that will go into the change log, up to 78
characters
2023-09-29 22:24:10 +02:00
Önder Kalacı 185ac5e01e
Citus Technical Readme (#7207)
This commit aims to add a comprehensive guide that covers all essential
aspects of Citus, including planning, execution, locking mechanisms,
shard moves, 2PC, and many other major components of Citus.

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-09-29 16:50:52 +03:00
dependabot[bot] c323f49e83
Bump cryptography from 41.0.3 to 41.0.4 in /src/test/regress (#7231)
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.3
to 41.0.4.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nils Dijk <nils@citusdata.com>
2023-09-27 15:36:58 +02:00
Onur Tirtir 27ac44eb2a
Fix mixed Citus upgrade tests (#7218)
When testing rolling Citus upgrades, the coordinator should not be upgraded
until we have upgraded all the workers.

---------

Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
2023-09-26 17:52:52 +03:00
Nils Dijk b87fbcbf79
Shard moves/isolate report LSN's in lsn format (#7227)
DESCRIPTION: Shard moves/isolate report LSN's in lsn format

While investigating an issue with our catch-up mechanism on certain
Postgres versions, we noticed that we print LSNs in the format of the
native long type. This is an uncommon representation for LSNs in
Postgres logs.

This patch changes the output of our log message from the long
type representation to the native LSN type representation, making it
easier for Postgres users to recognize and compare LSNs with other
related reports.

example of new output:
```
2023-09-25 17:28:47.544 CEST [11345] LOG:  The LSN of the target subscriptions on node localhost:9701 have increased from 0/0 to 0/E1ED20F8 at 2023-09-25 17:28:47.544165+02 where the source LSN is 1/415DCAD0
```
2023-09-26 13:47:50 +02:00
Gürkan İndibay 7fa109c977
Adds alter user missing features (#7204)
DESCRIPTION: Adds alter user rename propagation and enriches alter user
tests
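
An example of a statement that is now propagated to all nodes (role names illustrative):
```sql
ALTER USER report_reader RENAME TO analytics_reader;
```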

---------

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-09-26 12:28:07 +03:00
Onur Tirtir 111b4c19bc
Make sure to disallow creating a replicated distributed table concurrently (#7219)
See explanation in https://github.com/citusdata/citus/issues/7216.
Fixes https://github.com/citusdata/citus/issues/7216.

DESCRIPTION: Makes sure to disallow creating a replicated distributed
table concurrently
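
A sketch of what is now rejected (table name illustrative; exact error text not shown):
```sql
SET citus.shard_replication_factor TO 2;
CREATE TABLE replicated_tbl (a int);
-- With a replication factor above 1 this now errors out instead of proceeding.
SELECT create_distributed_table_concurrently('replicated_tbl', 'a');
```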
2023-09-25 11:14:35 +03:00
Nils Dijk 0f28a69f12
Use the $(DLSUFFIX) instead of hard coded extensions for cdc (#7221)
When cdc got added, the makefiles hardcoded the `.so` extension instead
of using the platform-specific `$(DLSUFFIX)` variable used by `pgxs.mk`.
Also don't remove installed cdc artifacts on `make clean`.
2023-09-22 16:24:18 +02:00
Jelte Fennema-Nio 71e556e090
Remove useless test output (#7209)
This was sometimes failing when running locally due to some local shard
still existing. This fixes that. We normally silence all
`drop schema cascade` output like this anyway, to avoid unnecessary
diffs when modifying a test later on.
2023-09-19 14:12:46 +02:00
Naisila Puka 4e46708789
Adds PostgreSQL 16.0 Support (#7201)
This commit concludes PG16.0 support in Citus.

The main PG16 support work was done for 16beta3
https://github.com/citusdata/citus/pull/6952
There was some extra work needed for 16rc1
https://github.com/citusdata/citus/pull/7173
And this PR introduces yet some more work needed for 16.0 :)

`pgstat_fetch_stat_local_beentry` has been renamed to
`pgstat_get_local_beentry_by_index` in PG16.0

Relevant PG commit:
8dfa37b797
8dfa37b797843a83a5756ea3309055e8953e1a86

Sister PR
https://github.com/citusdata/the-process/pull/150
2023-09-15 12:23:04 +03:00
Gürkan İndibay 7c0b289761
Adds alter database set option (#7181)
DESCRIPTION: Adds support for ALTER DATABASE <db_name> SET .. statement
propagation
SET statements in Postgres have a common structure, which is already being
used in the ALTER FUNCTION statement.
In this PR, I added a util file, citus_setutils, and made it usable both for
ALTER DATABASE <db_name> SET .. and ALTER FUNCTION ... SET ... statements.
With this PR, the statements below will be propagated:
```sql
ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT }
ALTER DATABASE name SET configuration_parameter FROM CURRENT
ALTER DATABASE name RESET configuration_parameter
ALTER DATABASE name RESET ALL
```
Additionally, there was a bug in processing float values in the common
code block. I fixed this one as well.

Previous
```C
case T_Float:
			{
				appendStringInfo(buf, " %s", strVal(value));
				break;
			}
```
Now
```C
case T_Float:
			{
				appendStringInfo(buf, " %s", nodeToString(value));
				break;
			}
```
2023-09-14 16:29:16 +03:00
aykut-bozkurt 26dc407f4a
bump citus and columnar into 12.2devel (#7200) 2023-09-14 12:03:09 +03:00
Gürkan İndibay e5e64b7454
Adds alter database propagation - with and refresh collation (#7172)
DESCRIPTION: Adds ALTER DATABASE WITH ... and REFRESH COLLATION VERSION
support

This PR adds support for propagating basic ALTER DATABASE statements.
The statements below are supported:
ALTER DATABASE <database_name> WITH IS_TEMPLATE <true/false>;
ALTER DATABASE <database_name> WITH CONNECTION LIMIT <integer_value>;
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2023-09-12 14:09:15 +03:00
Naisila Puka 1da99f8423
PG16 - Don't propagate GRANT ROLE with INHERIT/SET option (#7190)
We currently don't support propagating these options in Citus.
Relevant PG commits:
https://github.com/postgres/postgres/commit/e3ce2de
https://github.com/postgres/postgres/commit/3d14e17
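
Examples of the PG16 options in question, which Citus does not propagate (role names illustrative):
```sql
GRANT admin_role TO reporting_role WITH INHERIT FALSE;
GRANT admin_role TO reporting_role WITH SET FALSE;
```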

Limitation:
We also need to take care of GRANT statements generated by dependencies
when attempting to distribute something else. Specifically, this part of
the code in `GenerateGrantRoleStmtsOfRole`:
```
grantRoleStmt->admin_opt = membership->admin_option;
```
In PG16, membership also has `inherit_option` and `set_option`, which
need to properly be part of the `grantRoleStmt`. We can skip this for now
since #7164 will take care of it soon, and this is also not an
expected use case.
2023-09-12 12:47:37 +03:00