Commit Graph

5407 Commits (cc4c83b1e56c5a8460deda15b28ff8a027c1a998)

Author SHA1 Message Date
Onur Tirtir cc4c83b1e5
HAVE_LZ4 -> HAVE_CITUS_LZ4 (#5541) 2021-12-16 16:21:52 +03:00
Talha Nisanci c0945d88de
Normalize a debug failure to WARNING failure (#4996) 2021-12-16 13:43:49 +03:00
Halil Ozan Akgül 7d0f4f11c3
Merge pull request #5537 from citusdata/turn_metadata_sync_on_in_mx_regular_user
Turn metadata sync on in mx_regular_user and remove_coordinator
2021-12-16 11:35:08 +03:00
Halil Ozan Akgul 8943d7b52f Turn metadata sync on in mx_regular_user and remove_coordinator 2021-12-16 11:26:24 +03:00
Halil Ozan Akgül 047ae2cad0
Merge pull request #5534 from citusdata/turn_metadata_sync_on_in_multi_unsupported_worker_operations
Turn metadata sync on in multi_size_queries, multi_drop_extension and multi_unsupported_worker_operations
2021-12-16 11:25:19 +03:00
Halil Ozan Akgul b82af4db3b Turn metadata sync on in multi_size_queries, multi_drop_extension and multi_unsupported_worker_operations 2021-12-16 11:10:54 +03:00
Hanefi Onaldi 9d4d73898a
Move healthcheck logic into new file (#5531)
and add a missing `CheckCitusVersion(ERROR)` call
2021-12-15 15:58:20 -08:00
Hanefi Onaldi acdcd9422c
Fix one flaky failure test (#5528)
Removes flaky test
2021-12-15 18:59:58 +03:00
Hanefi Onaldi 29e4516642 Introduce citus_check_cluster_node_health UDF
This UDF coordinates connectivity checks across the whole cluster.

This UDF gets the list of active readable nodes in the cluster, and
coordinates all connectivity checks in sequential order.

The algorithm is:

for sourceNode in activeReadableWorkerList:
    c = connectToNode(sourceNode)
    for targetNode in activeReadableWorkerList:
        result = c.execute(
            "SELECT citus_check_connection_to_node(targetNode.name,
                                                   targetNode.port")
        emit sourceNode.name,
             sourceNode.port,
             targetNode.name,
             targetNode.port,
             result

- result -> true  ->  connection attempt from source to target succeeded
- result -> false -> connection attempt from source to target failed
- result -> NULL  -> connection attempt from the current node to source node failed

I suggest you use the following query to get an overview of the cluster's connectivity:

SELECT bool_and(COALESCE(result, false))
FROM citus_check_cluster_node_health();

Whenever this query returns false, there is a connectivity issue; check the full output for details.
2021-12-15 01:41:51 +03:00
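
For illustration, here is a hypothetical healthy run of the UDF above on a
two-worker cluster; the hostnames are made up and the column names are
assumed from the emit order in the algorithm, not taken from the commit:

SELECT * FROM citus_check_cluster_node_health();

 from_nodename | from_nodeport | to_nodename | to_nodeport | result
---------------+---------------+-------------+-------------+--------
 worker-1      |          5432 | worker-1    |        5432 | t
 worker-1      |          5432 | worker-2    |        5432 | t
 worker-2      |          5432 | worker-1    |        5432 | t
 worker-2      |          5432 | worker-2    |        5432 | t
(4 rows)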
Hanefi Onaldi 13fff9c37a Remove NOOP tuplestore_donestoring calls
PostgreSQL has not required calls to this function since the 7.4 release; it
has been a no-op ever since.

For more details, see the PostgreSQL commit below:

commit dd04e958c8b03c0f0512497651678c7816af3198
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Sun Mar 9 03:34:10 2003 +0000

    tuplestore_donestoring() isn't needed anymore, but provide a no-op
    macro definition so as not to create compatibility problems.

diff --git a/src/include/utils/tuplestore.h b/src/include/utils/tuplestore.h
index b46babacd1..76fe9fb428 100644
--- a/src/include/utils/tuplestore.h
+++ b/src/include/utils/tuplestore.h
@@ -17,7 +17,7 @@
  * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $Id: tuplestore.h,v 1.8 2003/03/09 02:19:13 tgl Exp $
+ * $Id: tuplestore.h,v 1.9 2003/03/09 03:34:10 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -41,6 +41,9 @@ extern Tuplestorestate *tuplestore_begin_heap(bool randomAccess,

 extern void tuplestore_puttuple(Tuplestorestate *state, void *tuple);

+/* tuplestore_donestoring() used to be required, but is no longer used */
+#define tuplestore_donestoring(state)  ((void) 0)
+
 /* backwards scan is only allowed if randomAccess was specified 'true' */
 extern void *tuplestore_gettuple(Tuplestorestate *state, bool forward,
                                        bool *should_free);
2021-12-14 18:55:02 +03:00
Halil Ozan Akgül 1c5430635d
Merge pull request #5525 from citusdata/only_drop_dist_indexes_on_metadata_synced_nodes
Fix drop index trying to drop coordinator local indexes on metadata worker nodes
2021-12-14 15:37:45 +03:00
Halil Ozan Akgul e060720370 Fix metadata sync fails in multi_index_statements 2021-12-14 11:28:08 +03:00
Halil Ozan Akgul a951e52ce8 Fix drop index trying to drop coordinator local indexes on metadata worker nodes 2021-12-14 11:28:08 +03:00
Halil Ozan Akgül 811eda6d0f
Merge pull request #5527 from citusdata/turn_metadata_sync_on_in_multi_copy
Fix metadata sync fails on multi_copy
2021-12-14 11:12:15 +03:00
Halil Ozan Akgul 1d7dde2c4c Fix metadata sync fails on multi_copy 2021-12-14 10:59:59 +03:00
Halil Ozan Akgül 31ffb0981d
Merge pull request #5522 from citusdata/fix_metadata_sync_fails_on_failure_connection_establishment
Fix metadata sync fails on failure_connection_establishment
2021-12-14 10:12:45 +03:00
Halil Ozan Akgul 98e38e2e4e Fix metadata sync fails on failure_connection_establishment 2021-12-13 11:51:56 +03:00
Halil Ozan Akgül fed1ebaaed
Merge pull request #5521 from citusdata/turn_metadata_sync_on_in_propagate_statistics
Fix metadata sync fails on propagate_statistics and pg13_propagate_statistics tests
2021-12-13 10:22:14 +03:00
Halil Ozan Akgul 507df08422 Fix metadata sync fails on propagate_statistics and pg13_propagate_statistics tests 2021-12-09 12:28:11 +03:00
Halil Ozan Akgül 73ba38eac4
Merge pull request #5517 from citusdata/turn_metadata_sync_on_in_base_schedules
Turn metadata sync on in base/minimal schedules
2021-12-09 10:14:06 +03:00
Halil Ozan Akgul 351314f8a1 Turn metadata sync on in base/minimal schedules 2021-12-08 13:34:41 +03:00
Halil Ozan Akgül 9471735764
Merge pull request #5511 from citusdata/fix_metadata_sync_fails_on_follower_schedule
Fix metadata sync fails on multi_follower_schedule
2021-12-08 13:18:33 +03:00
Halil Ozan Akgul ee894c9e73 Fix metadata sync fails on multi_follower_schedule 2021-12-08 13:07:37 +03:00
Halil Ozan Akgül e443d9578f
Merge pull request #5502 from citusdata/turn_metadata_sync_on_in_failure_schedule
Turn metadata sync on in failure schedule
2021-12-08 12:32:58 +03:00
Halil Ozan Akgul 4c8f79d7dd Turn metadata sync on in failure schedule 2021-12-08 11:22:56 +03:00
Halil Ozan Akgül a7fc79860f
Merge pull request #5514 from citusdata/turn_metadata_sync_on_in_mx_schedule
Turn metadata sync on in mx schedule
2021-12-08 10:31:42 +03:00
Halil Ozan Akgul 4f272ea0e5 Fix metadata sync fails in multi_extension 2021-12-08 10:25:43 +03:00
Halil Ozan Akgul a3834edeaa Turn metadata sync on in multi_mx_schedule 2021-12-08 10:25:43 +03:00
Halil Ozan Akgül 3fcd8395e6
Merge pull request #5512 from citusdata/turn_metadata_sync_on_in_upgrade_schedules
Turn metadata sync on in upgrade schedules
2021-12-08 10:24:50 +03:00
Halil Ozan Akgul ea37f4fd29 Turn metadata sync on in upgrade schedules 2021-12-08 10:19:02 +03:00
Hanefi Onaldi 05a3dfa8a9 Remove redundant arbitrary config class
We had two class definitions for CitusCacheManyConnectionsConfig, where
one of them was a copy of CitusSmallCopyBuffersConfig.

This commit keeps the intended class definition, which configures caching
many connections, and removes the one that duplicated the other class.
2021-12-08 04:47:08 +03:00
Burak Velioglu a67f518ef0
Merge pull request #5415 from citusdata/velioglu/propagate_pg_dist_object
Propagate pg_dist_object to worker nodes
2021-12-06 20:19:07 +03:00
Burak Velioglu e8534c1dd5
Drop sequence metadata from workers explicitly 2021-12-06 19:25:51 +03:00
Burak Velioglu 21194c3b9d
Mark sequence distributed explicitly while syncing metadata
Since sequences are not marked as distributed at table creation time when no
metadata worker node exists, we now explicitly mark all sequences as
distributed while syncing metadata.
2021-12-06 19:25:51 +03:00
Burak Velioglu 6d849cf394
Allow delegating function from worker nodes
We now allow delegating functions and procedures from worker nodes, and also
prevent delegation if a function/procedure has already been propagated from
another node.
2021-12-06 19:25:51 +03:00
Burak Velioglu a8b1ee87f7
Increment command counter after altering the sequence type 2021-12-06 19:25:51 +03:00
Burak Velioglu ed8e32de5e
Sync pg_dist_object on an update and propagate while syncing to a new node
Before this PR we were updating the citus.pg_dist_object metadata, which keeps
the metadata related to objects on Citus, only on the coordinator node. In
order to allow using those objects from worker nodes (or erroring out with a
proper error message), we've started to propagate that metadata to worker
nodes as well.
2021-12-06 19:25:50 +03:00
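
For illustration, one way to spot-check the propagation described above is to
query the catalog from a worker node. This is only a sketch: the column list
is an assumption about the catalog's schema, not taken from the commit:

-- run on a worker; before this PR the catalog was only maintained
-- on the coordinator node
SELECT classid, objid, objsubid
FROM citus.pg_dist_object
LIMIT 5;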
Halil Ozan Akgül d4ed94d2f2
Merge pull request #5504 from citusdata/fix_multi_table_ddl_with_metadata_sync
Fix metadata sync fails of multi_table_ddl
2021-12-06 16:54:21 +03:00
Halil Ozan Akgul ef09ba0d06 Fix metadata sync fails of multi_table_ddl 2021-12-06 13:44:30 +03:00
Halil Ozan Akgül ae134c209f
Merge pull request #5503 from citusdata/fix_undist_table_test_metadata_sync_fails
Fix fails with metadata syncing in undistribute_table
2021-12-03 14:11:06 +03:00
Halil Ozan Akgul a6d0de060c Fix fails with metadata syncing in undistribute_table 2021-12-03 13:58:53 +03:00
Hanefi Onaldi 56e9b1b968 Introduce UDF to check worker connectivity
citus_check_connection_to_node runs a simple query on a remote node and
reports whether this attempt was successful.

This UDF will be used to make sure each worker node can connect to all
the worker nodes in the cluster.

parameters:
nodename: required
nodeport: optional (default: 5432)

return value:
boolean success
2021-12-03 02:30:28 +03:00
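
For illustration, a hypothetical invocation of the UDF above (the node name
is made up):

-- with an explicit port
SELECT citus_check_connection_to_node('worker-1', 5432);

-- nodeport falls back to its default of 5432
SELECT citus_check_connection_to_node('worker-1');

Both calls return a boolean indicating whether the probe query succeeded.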
Talha Nisanci e4ead8f408
Update broken link for upgrade tests (#5408)
* Update broken link for upgrade tests

* Update src/test/regress/README.md

Co-authored-by: Nils Dijk <nils@citusdata.com>
2021-12-02 15:25:36 +01:00
Önder Kalacı ab365a335d
Merge pull request #5486 from citusdata/disable_node_async
Allow disabling node(s) when multiple failures happen
2021-12-01 10:48:49 +01:00
Onder Kalaci 549edcabb6 Allow disabling node(s) when multiple failures happen
As of the master branch, Citus performs all modifications to replicated tables
(e.g., reference tables and distributed tables with replication factor > 1)
via 2PC and avoids any shardstate=3. As a side-effect of those changes, the
handling of node failures for replicated tables changes.

With this PR, when one (or multiple) node failures happen, users will see
query errors on modifications. If the problem is intermittent, that's OK:
once the node failure(s) recover by themselves, the modification queries will
succeed. If the node failure(s) are permanent, users should call
`SELECT citus_disable_node(...)` to disable the node. As soon as the node is
disabled, modifications start to succeed. However, the old node now falls
behind, meaning that once it is up again, its placements should be re-created.
First, use `SELECT citus_activate_node()`. Then, use
`SELECT replicate_table_shards(...)` to replicate the missing placements on
the re-activated node.
2021-12-01 10:19:48 +01:00
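
For illustration, the recovery flow described in the commit above might look
like this (node name, port, and table name are all made up):

-- the failure is permanent: disable the node so modifications succeed again
SELECT citus_disable_node('worker-1', 5432);

-- later, when the node is reachable again, bring it back
SELECT citus_activate_node('worker-1', 5432);

-- re-create the placements the node missed while it was disabled
SELECT replicate_table_shards('my_replicated_table');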
Halil Ozan Akgül 6feb009834
Merge pull request #5499 from citusdata/fix_enterprise_fails
Fix enterprise fails
2021-11-30 16:16:30 +03:00
Halil Ozan Akgul 316274b5f0 Add normalize.sed item for multi_fix_partition_shard_index_names test 2021-11-30 13:28:41 +03:00
Halil Ozan Akgul 11072b4cb8 Normalize create role command in drop_partitioned_table test 2021-11-30 12:46:22 +03:00
Onur Tirtir 1836361a51
Add changelog entries for 10.2.3 (#5498) 2021-11-29 11:48:00 +03:00
Önder Kalacı 5ef0bae06f
Merge pull request #5493 from citusdata/metadata_connecttion
Make sure to use a dedicated metadata connection
2021-11-26 14:47:49 +01:00