Commit Graph

2355 Commits (a2f5b068e683d2d95ea315bdd97086669a0b5896)

Author SHA1 Message Date
Marco Slot 43e4dd3808 Add a citus.internal_reserved_connections setting 2022-03-02 19:13:53 +01:00
Onder Kalaci e80a36c4b6 Improve visibility rules for non-privileged roles
It seems like our approach was way too restrictive and wrong in some places.
Now, we follow an approach very similar to pg_stat_activity.

Some of the changes are prerequisites for implementing citus_dist_stat_activity
via citus_stat_activity.
2022-03-02 18:04:01 +01:00
Onder Kalaci 35ec9721b4 Add a new API for enabling Citus MX for clusters upgrading from earlier versions
Clusters created pre-Citus 11 mostly didn't have metadata sync enabled.
For those clusters, we add a utility UDF which fixes some minor issues
and syncs the necessary objects to the workers.
2022-03-02 17:02:55 +01:00
Onder Kalaci 98751058a9 Add Primary key to the table
Otherwise enterprise tests fail
2022-03-02 12:03:59 +01:00
Ahmet Gedemenli e1809af376 Propagate CREATE AGGREGATE commands 2022-03-02 10:52:43 +03:00
Onder Kalaci b79a0052a4 Drop function in the tests on a newer version
As dropping the function now relies on pg_dist_object, which exists with 9.0+
2022-03-02 08:45:35 +01:00
Nils Dijk 65bd540943
Feature: configure object propagation behaviour in transactions (#5724)
DESCRIPTION: Add GUC to control ddl creation behaviour in transactions

Historically we would _not_ propagate objects when we are in a transaction block. Creation of distributed tables did not always work in sequential mode, hence distributing a table in the same transaction as creating an object it uses wouldn't work. The benefit was that the user could still benefit from parallelism.

Now that the creation of distributed tables is supported in sequential mode it would make sense for users to force transactional consistency of ddl commands for distributed tables. A transaction could switch more aggressively to sequential mode when creating new objects in a transaction.

We don't change the default behaviour just yet.

Also, many objects would not even propagate their creation when the transaction was already set to sequential, leaving the possibility of a self-deadlock. The new policy checks solve this discrepancy between objects as well.
2022-03-01 17:29:31 +03:00
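For illustration, a minimal sketch of forcing transactional object propagation per transaction; the GUC name and values below are an assumption (not stated in the commit message) and should be checked against the Citus 11 settings:

```sql
-- Assumed GUC: citus.create_object_propagation (immediate / automatic / deferred).
BEGIN;
SET LOCAL citus.create_object_propagation TO 'immediate';
CREATE TYPE pair AS (a int, b int);  -- object created inside the transaction block
COMMIT;
```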
Burak Velioglu f17872aed4
Expand functions while resolving dependencies 2022-03-01 17:08:46 +03:00
Gledis Zeneli b825232ecb
Handle rebalance / replication when a node is disabled (Fix #5664) (#5729)
The issue in question is caused when rebalance / replication call `FullShardPlacementList`, which returns all shard placements (including those on nodes disabled with `citus_disable_node`). Eventually, `FindFillStateForPlacement` looks for the state across active workers and fails to find a state for the placements on the disabled workers, causing a segfault shortly after.

Approach:
* `ActivePlacementHash` was not using the status of the shard placement's node to determine if the node is active. Initially, I just fixed that.
* Additionally, I refactored the code which handles active shards in replication / rebalance to:
	* use a single function to determine if a shard placement is active. 
	* do the active shard filtering before calling `RebalancePlacementUpdates` and `ReplicationPlacementUpdates`, so test methods like `shard_placement_rebalance_array` and `shard_placement_replication_array`, which have different shard placement active requirements, can do their own filtering while using the same rebalance / replicate logic that `rebalance_table_shards` and `replicate_table_shards` use. 

Fix #5664
2022-02-25 19:54:30 +03:00
Onder Kalaci df95d59e33 Drop support for CitusInitiatedBackend
CitusInitiatedBackend was a premature implementation of the whole
GlobalPID infrastructure. We used it to track whether any individual
query is triggered by Citus or not.

Now that GlobalPID is in place, we don't need
CitusInitiatedBackend; in fact, it could even be wrong.
2022-02-24 12:12:43 +01:00
Marco Slot 0c4e3cb69c Drop worker_partition_query_result on downgrade 2022-02-24 10:18:56 +01:00
Hanefi Onaldi 7bd6c2c9ac
Isolation tests for various ddl operations and metadata sync 2022-02-24 03:19:56 +03:00
Marco Slot ef1ceb3953 Only use a single placement for map tasks 2022-02-23 19:40:21 +01:00
Marco Slot 8de802eec5 Enable local_shared_pool_size 5 in arbitrary configs test 2022-02-23 19:40:21 +01:00
Marco Slot 490765a754 Enable re-partition joins after local execution 2022-02-23 19:40:21 +01:00
Marco Slot 72d8fde28b Use intermediate results for re-partition joins 2022-02-23 19:40:21 +01:00
Nils Dijk 1fb970224e
Fix: partitioned index dependencies (#5741)
#5685 introduced the resolution of dependencies for indices. This missed support for indices on partitioned tables. This change adds support for partitioned indices to the dependency resolution code.
2022-02-23 17:53:26 +03:00
Jelte Fennema e1afd30263
Speed up test runs on WSL2 a lot (#5736)
It turns out `whereis` is incredibly slow on WSL2 (at least on my
machine):

```
$ time whereis diff
diff: /usr/bin/diff /usr/share/man/man1/diff.1.gz

real    0m0.408s
user    0m0.010s
sys 0m0.101s
```

This command is run by our custom `diff` script, which is run for every
test file that is run. So this adds lots of unnecessary runtime to the
tests.

This changes our custom `diff` script to only call `whereis` in the
strange case that `/usr/bin/diff` does not exist.

The impact of this small change on the total runtime of the tests on WSL
is huge. As an example the following command takes 18 seconds without
this change and 7 seconds with it:
```
make -C src/test/regress/ check-arbitrary-configs CONFIGS=PostgresConfig
```
2022-02-23 13:03:29 +01:00
Ahmet Gedemenli 8b9402540f Add use_citus_managed_tables to arbitrary configs
(cherry picked from commit 4e93afd1f78854e1aaab63690c441b0b0598a82c)
(cherry picked from commit 0295fe2f5b)
(cherry picked from commit 878510725fab9cb6870b4504e0b1f055d7bbc68d)
2022-02-22 11:39:30 +03:00
Teja Mupparti a62901396b Allow unsafe triggers via a GUC 2022-02-21 22:45:17 -08:00
Onder Kalaci 95d5918967 Properly set worker_query and use 2022-02-21 18:22:33 +01:00
Onder Kalaci dffcafc096 Use global pids in citus_lock_waits 2022-02-21 17:46:34 +01:00
Onder Kalaci 331af3dce8 Dumping wait edges can now optionally scan all backends
Before this commit, dumping wait edges could only be used for
distributed deadlock detection purposes. With this commit,
we open up the possibility of using it for any backend.
2022-02-21 17:37:07 +01:00
Halil Ozan Akgul f6cd4d0f07 Overrides pg_cancel_backend and pg_terminate_backend to accept global pid 2022-02-21 16:41:35 +03:00
Ahmet Gedemenli c1d5ca9896 Do distributed check first, for DropSchema stmts 2022-02-21 14:43:04 +03:00
Ahmet Gedemenli 28aa715ce2 Add test for citus local tables with dropped columns 2022-02-21 12:07:17 +03:00
Ahmet Gedemenli 2bc6a00408 Refactor CreateDistributedTable to take column name 2022-02-21 12:07:17 +03:00
yxu2162 8974b2de66 Copied CheckCitusVersion over to Columnar to handle dependency issue. If we split columnar into two extensions, this will later be changed to CheckColumnarVersion. 2022-02-18 09:47:39 -08:00
Philip Dubé 3d044dc543
Merge branch 'master' into avoid-exceptional-control-flow-in-fluent-py 2022-02-18 16:10:45 +00:00
Burak Velioglu fa6866ed36
Start to propagate functions to worker nodes with the
CREATE FUNCTION command together with its dependencies.

If the function depends on any non-distributable object,
the function will be created only locally. The parameterless
version of create_distributed_function becomes obsolete
with this change; it will be deprecated from the code with a subsequent PR.
2022-02-18 13:56:51 +03:00
gledis69 a14fada153 Prevent Deadlocks When a Worker Tries to Create Collation (Fix #5583)
* When a worker tried to create a collation which had a dependency in the same worker node,
it would cause a deadlock; now it throws the correct "not a coordinator" error.
2022-02-18 12:28:02 +03:00
Teja Mupparti 46fa47beea Force-delegated functions' distribution argument must be reset as soon as the routine completes execution,
rather than waiting until the top-level Executor ends. This fixes issue #5687
2022-02-17 10:48:30 -08:00
Philip Dubé e4420a6252 fluent.py: prefer simpler return based control flow in _accept rather than relying on raising an exception 2022-02-17 13:30:17 +00:00
Nils Dijk ea86f9f94e
Add support for TEXT SEARCH CONFIGURATION objects (#5685)
DESCRIPTION: Implement TEXT SEARCH CONFIGURATION propagation

The change adds support to Citus for propagating TEXT SEARCH CONFIGURATION objects. TSConfig objects cannot always be created in one create statement, and instead require a create statement followed by many alter statements to get turned into the object they should represent.

To support this we add functionality to the worker to create or replace objects based on a list of statements. When the lists of the local object and the remote object correspond 1:1 we skip the creation of the object and simply mark it distributed. This is especially important for TSConfig objects as initdb pre-populates databases with a dozen configurations (for many different languages).

When the user creates a new TSConfig based on the copy of an existing configuration there is no direct link to the object copied from. Since there is no link we can't simply rely on propagating the dependencies to the worker and send a qualified
2022-02-17 13:12:46 +01:00
Hanefi Onaldi ccc4cc6bf0
Move test in isolation schedule to prevent failure
We check for metadata consistency across the cluster in the test
isolation_metadata_sync_vs_all. However, some earlier tests in
enterprise repo leave invalid pg_dist_node entries in the worker nodes
that have Oid values for already dropped role objects.

To remedy that, I suggest that we move the test to earlier in the
schedule, thereby making the tests pass for the time being. We should
later introduce metadata checking either in a new isolation test or by
moving this test later in the schedule. However, we should do that after
we fix the underlying issue.
2022-02-17 13:15:21 +03:00
Ahmet Gedemenli a1c3580c64 Support TRUNCATE for foreign tables 2022-02-17 09:59:53 +03:00
Ahmet Gedemenli 0411a98c99
Refactor EnsureSequentialMode functions (#5704) 2022-02-14 18:38:21 +03:00
Gledis Zeneli badfd561b2
Prevent Citus table functions from being called on shards (Fix #5610) (#5694)
DESCRIPTION: Prevent Citus table functions from being called on shards

The operations that guard against using shards are:
* Create Local Table
* Create distributed table (which affects reference table creation as well).

* I used `ErrorIfRelationIsKnownShard` instead of `ErrorIfIllegallyChangingKnownShard`.
`ErrorIfIllegallyChangingKnownShard` allows the operation if `citus.enable_manual_changes_to_shards`,
but I am not sure if it ever makes sense to create a distributed, reference, or citus local table out of a shard.

I tried to go over the code to identify other UDFs where shards could be illegally changed, but I could not find any others.
My knowledge of the codebase is not solid enough for me to say for sure.

Fixes #5610
2022-02-14 16:06:48 +03:00
Hanefi Onaldi 2e5ca8ba2b
Add isolation tests for metadata sync vs all
This commit introduces several test cases for concurrent operations that
change metadata, and a concurrent metadata sync operation.

The overall structure is as follows:
- Session#1 starts metadata syncing in a transaction block
- Session#2 does an operation that changes metadata
- Both sessions are committed
- Another session checks whether the metadata is the same across all
  nodes in the cluster.
2022-02-11 01:55:04 +03:00
Önder Kalacı dc6c194916
Show IDLE backends in citus_dist_stat_activity (#5700)
* Break the dependency on the CitusInitiatedBackend infrastructure

With this change, we start to show non-distributed backends as well
in citus_dist_stat_activity. I think that
  (a) it is essential for making citus_lock_waits work for backends
      blocked on DDL commands.
  (b) it is more expected from the user's perspective. The name of
      the view is a little inconsistent now (e.g., citus_dist_stat_activity)
      but we are already planning to improve the names with followup
      PRs.

Also, now that we have global pids assigned, CitusInitiatedBackend
becomes obsolete.
2022-02-10 08:59:28 -08:00
Ahmet Gedemenli 76b63a307b Propagate create/drop schema commands 2022-02-10 14:58:09 +03:00
Marco Slot d0711ea9b4 Delegate function calls in FROM outside of transaction block 2022-02-09 20:56:25 +01:00
Onder Kalaci 1c30f61a70 Prevent citus.node_conninfo from using "application_name"
With https://github.com/citusdata/citus/pull/5657, Citus uses
a fixed application_name while connecting to remote nodes
for internal purposes.

It means that we cannot allow users to override it via
citus.node_conninfo.
2022-02-09 13:22:04 +01:00
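As a hedged illustration, a conninfo value without application_name (which Citus now reserves for itself) could be applied like this; sslmode is one of the commonly allowed keywords:

```sql
ALTER SYSTEM SET citus.node_conninfo = 'sslmode=require';
SELECT pg_reload_conf();
```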
Teja Mupparti 1e3c8e34c0 Allow create_distributed_function() on a function owned by an extension
Implement #5649
Allow create_distributed_function() on functions owned by extensions

1) Only update pg_dist_object, and do not propagate CREATE FUNCTION.
2) Ensure the corresponding extension is in pg_dist_object.
3) Verify that if dependencies exist on the function, they resolve to the extension.
4) Impact on node-scaling: We build a list of DDL commands based on all objects in
   pg_dist_object. We need to omit the DDLs for the extension-owned function, as it
   will get propagated by virtue of the extension creation.
5) Extra checks for functions coming from extensions, to not propagate changes
   via DDL commands, even though the function is marked as distributed in pg_dist_object
2022-02-08 11:52:56 -08:00
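A minimal sketch of the flow this enables, with a hypothetical extension and function name (not from the commit):

```sql
CREATE EXTENSION my_extension;  -- hypothetical extension that ships my_ext_func(int)
-- Only pg_dist_object is updated; no CREATE FUNCTION is propagated, because the
-- extension itself creates the function on the workers.
SELECT create_distributed_function('my_ext_func(int)');
```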
Halil Ozan Akgul 8ee02b29d0 Introduce global PID 2022-02-08 16:49:38 +03:00
Burak Velioglu 0a70b78bf5
Add test for dist type 2022-02-07 17:50:49 +03:00
Burak Velioglu c0aece64d0
Add test for checking distributed extension function 2022-02-07 17:50:48 +03:00
Teja Mupparti c8e504dd69 Fix the issue #5673
If the expression is simple, such as SELECT function() or PERFORM function()
in PL/pgSQL code, the PL engine does a simple expression evaluation which can't
interpret the Citus CustomScan node. The code checks for simple expressions when
executing a UDF but missed the DO block scenario; this commit fixes it.
2022-02-04 15:44:53 -08:00
Ying Xu b5c116449b
Removed dependency from EnsureTableOwner (#5676)
Removed dependency for EnsureTableOwner. Also removed pg_fini() and columnar_tableam_finish(). Still need to remove the CheckCitusVersion dependency to make columnar_tableam.h dependency-free from Citus. 2022-02-04 12:45:07 -08:00
2022-02-04 12:45:07 -08:00
Onur Tirtir 79442df1b7
Fix coordinator/worker query targetlists for agg. that we cannot push-down (#5679)
Previously, we were wrapping targetlist nodes with Vars that reference
the result of the worker query, if the node itself is not a `Const` or
a `Param`. Indeed, we should not do that unless the node itself is
a `Var` node or contains a `Var` within it (e.g.: `OpExpr(Var(column_a) > 2)`).
Otherwise, when the worker query returns an empty result set, the combine
query execution would crash since the `Var` would be pointing to an empty
tuple slot, which is not desirable for the node-executor methods.
2022-02-04 05:37:25 -08:00
Onder Kalaci 72d7d92611 Apply code review feedback 2022-02-04 10:52:57 +01:00
Onder Kalaci 923bb194a4 Move isolation_multiuser_locking to MX tests 2022-02-04 10:52:57 +01:00
Onder Kalaci bcb00e3318 remove not used files 2022-02-04 10:52:57 +01:00
Onder Kalaci ff234fbfd2 Unify old GUCs into a single one
Replaces citus.enable_object_propagation with citus.enable_metadata_sync

Also, within Citus 11 release cycle, we added citus.enable_metadata_sync_by_default,
that is also replaced with citus.enable_metadata_sync.

In essence, when citus.enable_metadata_sync is set to true, all the objects
and the metadata are sent to the remote node.

We strongly advise that users never change the value of
this GUC.
2022-02-04 10:52:56 +01:00
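For reference, a sketch of inspecting the unified GUC (per the commit, changing it is strongly discouraged):

```sql
SHOW citus.enable_metadata_sync;  -- expected to remain at its default
```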
Teja Mupparti f31bce5b48 Fixes the issue seen in https://github.com/citusdata/citus-enterprise/issues/745
With this commit, rebalancer backends are identified by application_name = citus_rebalancer
and the regular internal backends are identified by application_name = citus_internal
2022-02-03 09:40:46 -08:00
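A hedged sketch of telling these backends apart, based only on the application_name values named in the commit:

```sql
SELECT pid, application_name, state
FROM pg_stat_activity
WHERE application_name IN ('citus_internal', 'citus_rebalancer');
```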
Onder Kalaci 650243927c Relax some transactional limitations on activate node
We already enforce EnsureSequentialModeMetadataOperations(), and given that all of activate node runs in a transaction, we should be fine
2022-02-01 15:56:55 +01:00
Marco Slot 63c6896716 Enable function call pushdown from workers 2022-02-01 14:13:25 +01:00
Önder Kalacı f712dfc558
Add tests coverage (#5672)
For extension owned tables with sequences
2022-02-01 15:39:52 +03:00
Burak Velioglu f88cc230bf
Handle tables and objects as metadata. Update UDFs accordingly
With this commit we've started to propagate sequences and shell
tables within the object dependency resolution. So, ensuring any
dependencies for any object will consider shell tables and sequences
as well. Separate logics for both shell tables and sequences have
been removed.

Since both the shell table and sequence logic were implemented as a
part of the metadata handling before this change, we were propagating
them while syncing table metadata. With this commit we've divided the
metadata (which means anything except shards hereafter) syncing
logic into multiple parts and implemented it as a part of
ActivateNode. You can check the functions called in ActivateNode
to see the definition of the different metadata.

Definitions of start_metadata_sync_to_node and citus_activate_node
have also been updated. citus_activate_node will basically create
an active node with all metadata and reference table shards.
start_metadata_sync_to_node will be the same as citus_activate_node,
except for replicating reference tables. stop_metadata_sync_to_node
will remove all the metadata. All of those UDFs need to be called
by superuser.
2022-01-31 16:20:15 +03:00
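A sketch of the updated UDFs described above; the node name and port are placeholders, and all three calls require superuser:

```sql
SELECT citus_activate_node('worker-1', 5432);           -- all metadata + reference table shards
SELECT start_metadata_sync_to_node('worker-1', 5432);   -- same, without replicating reference tables
SELECT stop_metadata_sync_to_node('worker-1', 5432);    -- removes all the metadata
```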
Onder Kalaci 303540e494 Add PGAPPNAME env. variable to arbitrary configs 2022-01-27 11:00:15 +01:00
Onder Kalaci b9b419ef16 Allow creating distributed tables in sequential mode
With https://github.com/citusdata/citus/pull/2780, we allow
COPY to use any number of connections that the executor used
in a tx block.

Meaning that, while COPYing data to the shards, create_distributed_table
could allow sequential mode.
2022-01-26 12:58:18 +01:00
Onur Tirtir 8c8d696621
Do not fail over to local execution when it's not supported (#5625)
We fall back to local execution if we cannot establish any more
connections to the local node. However, we should not do that for the
commands that we don't know how to execute locally (or we know we
shouldn't execute locally). To fix that, we take localExecutionSupported
into account in CanFailoverPlacementExecutionToLocalExecution too.

Moreover, we also prompt a more accurate hint message to inform the user
about whether the execution failed because local execution is
disabled by them, or because local execution wasn't possible for the given
command.
2022-01-25 16:43:21 +01:00
Ahmet Gedemenli e6fc0c6f36 Turn mx on for test: multi_colocation_utils 2022-01-21 19:31:47 +03:00
Onur Tirtir 7b59295af2 Drop ruleutils copied for triggers 2022-01-20 17:28:19 +03:00
Önder Kalacı e8ba9dd9d3
Merge branch 'master' into make_minimal_work_again 2022-01-20 11:48:53 +01:00
Teja Mupparti 54862f8c22 (1) Functions will be delegated even when present in the scope of an explicit
BEGIN/COMMIT transaction block or in a UDF calling another UDF.
(2) Prohibit/limit the delegated function from doing a 2PC (or any work on a
remote connection).
(3) Have a safety net to ensure (2), i.e., we should block the connections
from the delegated procedure or make sure that no 2PC happens on the node.
(4) Such delegated functions are restricted to use only the distributed argument
value.

Note: To limit the scope of the project we are considering only Functions(not
procedures) for the initial work.

DESCRIPTION: Introduce a new flag "force_delegation" in create_distributed_function(),
which will allow a function to be delegated in an explicit transaction block.

Fixes #3265

Once the function is delegated to the worker, the flow on that node during planning and execution is:

distributed_planner()
TryToDelegateFunctionCall()
CheckDelegatedFunctionExecution()
EnableInForceDelegatedFuncExecution()
Save the distribution argument (Constant)
ExecutorStart()
CitusBeginScan()
IsShardKeyValueAllowed()
Ensure to not use non-distribution argument.

ExecutorRun()
AdaptiveExecutor()
StartDistributedExecution()
EnsureNoRemoteExecutionFromWorkers()
Ensure all the shards are local to the node in the remoteTaskList.
NonPushableInsertSelectExecScan()
InitializeCopyShardState()
EnsureNoRemoteExecutionFromWorkers()
Ensure all the shards are local to the node in the placementList.

This also fixes a minor issue: Properly handle expressions+parameters in distribution arguments
2022-01-19 16:43:33 -08:00
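A hedged sketch of the new flag, assuming a distributed table events and a colocated function insert_event(event_id bigint) that are not part of the commit:

```sql
SELECT create_distributed_function(
  'insert_event(bigint)',
  distribution_arg_name := 'event_id',
  colocate_with := 'events',
  force_delegation := true);

-- With force_delegation, the call is delegated even in an explicit transaction block:
BEGIN;
SELECT insert_event(42);
COMMIT;
```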
Onder Kalaci 7f30222c90 Fix check-minimal
It seems like we broke check-minimal with the refactor on #5486

This commit fixes the minor issue
2022-01-19 16:21:59 +01:00
Ahmet Gedemenli 9e6ebe4826 Turn mx on for test file citus_local_tables, on multi-1 schedule 2022-01-19 13:55:51 +03:00
Ahmet Gedemenli 37b3f50447
Turn mx on for multi-1 schedule (#5627)
For test files: multi_generate_ddl_commands, multi_repair_shards, multi_create_shards, mixed_relkind_tests
2022-01-19 12:05:54 +03:00
Marco Slot 33bfa0b191 Hide shards from application_name's with a specific prefix 2022-01-18 15:20:55 +04:00
Onur Tirtir d98500ac22
Fix a flaky test related with temp columnar table cleanup (#5599)
Wait until the old backend expires to make sure that temp table cleanup
is complete.
2022-01-17 09:26:30 -08:00
Ying Xu 4dca662e97
Making Columnar Dependency Free from Citus (#5622)
* Removed distributed dependency in columnar_metadata.c

* Changed columnar_debug.c so that it no longer needed distributed/tuplestore and made it return a record instead of a tuplestore

* removed distributed/commands.h dependency

* Made columnar_tableam.c dependency-free

* Fixed spacing for columnar_store_memory_stats function

* indentation fix

* fixed test failures
2022-01-14 09:43:05 -08:00
Önder Kalacı 46ec7cd5cf Enable MX for rebalancer tests 2022-01-11 12:07:39 +01:00
Önder Kalacı 885601c02c
Require superuser while activating a node (#5609)
* Require superuser while activating a node

With this change, we make ActivateNode() (hence citus_add_node(),
citus_activate_node()) explicitly require a superuser.

Before this commit, these functions were designed to work with
non-superuser roles with the relevant GRANTs given.

However, that is not a widely used way of calling the functions
above.

Due to the possibility of a non-superuser calling the UDFs, they were
designed in a way that some commands were using some additional
short-lived superuser connections. That was:
	(a) breaking transactional behavior (e.g., ROLLBACK
	    wouldn't fully roll back the whole transaction)
	(b) making it very complicated to reason about which
	    parts of the node activation go over which connections,
	    and becoming vulnerable to deadlocks / visibility issues.
2022-01-10 08:30:13 -08:00
Marco Slot ee3b50b026 Disallow remote execution from queries on shards 2022-01-07 17:46:21 +01:00
Önder Kalacı 8d1b188620
Enable MX for the remaining failure tests (#5606) 2022-01-07 17:24:31 +01:00
Ahmet Gedemenli 3c834e6693
Disable foreign distributed tables (#5605)
* Disable foreign distributed tables
* Add warning for existing distributed foreign tables
2022-01-07 18:12:23 +03:00
Onder Kalaci 9f2d9e1487 Move placement deletion from disable node to activate node
We prefer the background daemon to only sync node metadata. That's
why we move placement metadata changes from disable node to
activate node. With that, we can make sure that disable node
only changes node metadata, whereas activate node syncs all
the metadata changes. In essence, we already expect all
nodes to be up when a node is activated. So, this does not change
the behavior much.
2022-01-07 09:56:03 +01:00
Ahmet Gedemenli 45e423136c
Support foreign tables in MX (#5461) 2022-01-06 18:50:34 +03:00
Önder Kalacı 5305aa4246
Do not drop sequences when dropping metadata (#5584)
Dropping sequences means we need to recreate them
and hence lose the sequence state.

With this commit, we keep the existing sequences
such that resyncing wouldn't drop the sequence.

We do that by breaking the dependency of the sequence
on the table.
2022-01-06 09:48:34 +01:00
Önder Kalacı 8007adda25
Convert the function to a distributed function (#5596)
so that when metadata is synced, the table is on the worker
2022-01-06 11:32:40 +03:00
Önder Kalacı 6d9218540b
Enable single node tests with Citus MX (#5595)
* Enable single node tests with Citus MX

The test already has comments on the changes
2022-01-05 16:00:44 +03:00
Onder Kalaci 22b5175fd1 Make sure that the community and enterprise tests produce the same output 2022-01-04 13:30:31 +01:00
Önder Kalacı 0a8b0b06c6
Do not allow distributed functions on non-metadata synced nodes (#5586)
Before this commit, Citus was triggering metadata syncing
in the background when a function is distributed. However,
with Citus 11, we expect all clusters to have metadata syncing
enabled. So, we do not expect any nodes not to have the metadata.

This change:
	(a) pro: simplifies the code and opens up possibilities
		 to simplify further by reducing the scope of
		 the bg worker to only syncing node metadata
	(b) pro: explicitly asks users to sync the metadata such that
	    any unforeseen impact can be easily detected
	(c) con: For distributed functions without a distribution
		 argument, we do not necessarily require the metadata
		 to be synced. However, for completeness and simplicity, we
		 do so.
2022-01-04 13:12:57 +01:00
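A sketch of syncing metadata explicitly before distributing functions, using the existing UDF and the pg_dist_node catalog:

```sql
SELECT start_metadata_sync_to_node(nodename, nodeport)
FROM pg_dist_node
WHERE isactive AND noderole = 'primary';
```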
Halil Ozan Akgul 9547228e8d Add isolation_check_mx test 2021-12-30 14:58:30 +03:00
Halil Ozan Akgul aef2d83c7d Fix metadata sync fails on multi_transaction_recovery 2021-12-29 11:21:32 +03:00
Önder Kalacı d33650d1c1
Record if any partitioned Citus tables during upgrade (#5555)
With Citus 11, the default behavior is to sync the metadata.
However, partitioned tables created pre-Citus 11 might have
index names that are not compatible with metadata syncing.

See https://github.com/citusdata/citus/issues/4962 for the
details.

With this commit, we record the existence of partitioned tables
such that we can fix it later if any exists.
2021-12-27 03:33:34 -08:00
Halil Ozan Akgul 0c292a74f5 Fix metadata sync fails on multi_truncate 2021-12-27 13:54:53 +03:00
Önder Kalacı c9127f921f
Avoid round trips while fixing index names (#5549)
With this commit, fix_partition_shard_index_names()
works significantly faster.

For example,

32 shards, 365 partitions, 5 indexes drop from ~120 seconds to ~44 seconds
32 shards, 1095 partitions, 5 indexes drop from ~600 seconds to ~265 seconds

`queryStringList` can be really long, because it may contain #partitions * #indexes entries.

Before this change, we were actually going through the executor where each command
in the query string triggers 1 round trip per entry in queryStringList.

The aim of this commit is to avoid the round-trips by creating a single query string.

I first simply tried sending `q1;q2;..;qn`. However, the executor is designed to
handle `q1;q2;..;qn` type of query executions via the infrastructure mentioned
above (e.g., by tracking the query indexes in the list and doing 1 statement
per round trip).

Another option could have been to change the executor such that it only tracks
the query index when `queryStringList` is provided, not when the queryString
includes multiple `;`s. That is (a) more work, (b) could cause weird edge
cases with failure handling, and (c) felt like coding a special case into the executor.
2021-12-27 10:29:37 +01:00
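For context, a sketch of invoking the UDF this commit speeds up, on a hypothetical partitioned distributed table (signature assumed to take a single regclass):

```sql
SELECT fix_partition_shard_index_names('events_partitioned'::regclass);
```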
Halil Ozan Akgul bb636e6a29 Fix metadata sync fails on multi_function_evaluation 2021-12-24 19:32:58 +03:00
Halil Ozan Akgul 70e68d5312 Fix metadata sync fails on multi_name_lengths 2021-12-24 14:33:32 +03:00
Halil Ozan Akgul 5c2fb06322 Fix metadata sync fails on multi_sequence_default 2021-12-24 14:33:32 +03:00
Halil Ozan Akgul b9c06a6762 Turn metadata sync on in multi_metadata_sync 2021-12-24 10:58:13 +03:00
Hanefi Onaldi 479b2da740 Fix one flaky failure test 2021-12-23 20:11:45 +03:00
Ahmet Gedemenli 042d45b263 Propagate foreign server ops 2021-12-23 17:54:04 +03:00
Onur Tirtir 61b5fb1cfc
Run failure_test_helpers in base schedule (#5559) 2021-12-23 12:54:12 +01:00
Ahmet Gedemenli 8e4ff34a2e Do not include return table params in the function arg list
(cherry picked from commit 90928cfd74)

Fix function signature generation

Fix comment typo

Add test for worker_create_or_replace_object

Add test for recreating distributed functions with OUT/TABLE params

Add test for recreating distributed function that returns setof int

Fix test output

Fix comment
2021-12-21 19:01:42 +03:00
Marco Slot 2eef71ccab Propagate SET TRANSACTION commands 2021-12-18 11:31:39 +01:00
Halil Ozan Akgul 46f718c76d Turn metadata sync on in add_coordinator, foreign_key_to_reference_table and replicate_reference_tables_to_coordinator 2021-12-17 16:33:25 +03:00
Halil Ozan Akgul 25755a7094 Turn ddl propagation off in worker on multi_copy 2021-12-17 15:54:20 +03:00
Onder Kalaci fc98f83af2 Add citus.grep_remote_commands
Simply applies

```SQL
SELECT textlike(command, citus.grep_remote_commands)
```
And, if it returns true, the command is logged. Otherwise, the log is ignored.

When citus.grep_remote_commands is an empty string, all commands are
logged.
2021-12-17 11:47:40 +01:00
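A hedged usage sketch combining the new GUC with remote-command logging (citus.log_remote_commands is assumed to be the companion setting):

```sql
SET citus.log_remote_commands TO on;
SET citus.grep_remote_commands TO '%COMMIT%';  -- only log remote commands matching the pattern
SET citus.grep_remote_commands TO '';          -- empty string: log all commands again
```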
Halil Ozan Akgul df8d0f3db1 Turn metadata sync on in multi_replicate_reference_table and multi_citus_tools 2021-12-17 10:25:57 +03:00
Talha Nisanci c0945d88de
Normalize a debug failure to WARNING failure (#4996) 2021-12-16 13:43:49 +03:00
Halil Ozan Akgul 8943d7b52f Turn metadata sync on in mx_regular_user and remove_coordinator 2021-12-16 11:26:24 +03:00
Halil Ozan Akgul b82af4db3b Turn metadata sync on in multi_size_queries, multi_drop_extension and multi_unsupported_worker_operations 2021-12-16 11:10:54 +03:00
Hanefi Onaldi acdcd9422c
Fix one flaky failure test (#5528)
Removes flaky test
2021-12-15 18:59:58 +03:00
Hanefi Onaldi 29e4516642 Introduce citus_check_cluster_node_health UDF
This UDF coordinates connectivity checks across the whole cluster.

This UDF gets the list of active readable nodes in the cluster, and
coordinates all connectivity checks in sequential order.

The algorithm is:

for sourceNode in activeReadableWorkerList:
    c = connectToNode(sourceNode)
    for targetNode in activeReadableWorkerList:
        result = c.execute(
            "SELECT citus_check_connection_to_node(targetNode.name,
                                                   targetNode.port")
        emit sourceNode.name,
             sourceNode.port,
             targetNode.name,
             targetNode.port,
             result

- result -> true  ->  connection attempt from source to target succeeded
- result -> false -> connection attempt from source to target failed
- result -> NULL  -> connection attempt from the current node to source node failed

I suggest you use the following query to get an overview of the connectivity:

SELECT bool_and(COALESCE(result, false))
FROM citus_check_cluster_node_health();

Whenever this query returns false, there is a connectivity issue; check in detail.
2021-12-15 01:41:51 +03:00
Halil Ozan Akgul e060720370 Fix metadata sync fails in multi_index_statements 2021-12-14 11:28:08 +03:00
Halil Ozan Akgul a951e52ce8 Fix drop index trying to drop coordinator local indexes on metadata worker nodes 2021-12-14 11:28:08 +03:00
Halil Ozan Akgul 1d7dde2c4c Fix metadata sync fails on multi_copy 2021-12-14 10:59:59 +03:00
Halil Ozan Akgul 98e38e2e4e Fix metadata sync fails on failure_connection_establishment 2021-12-13 11:51:56 +03:00
Halil Ozan Akgul 507df08422 Fix metadata sync fails on propagate_statistics and pg13_propagate_statistics tests 2021-12-09 12:28:11 +03:00
Halil Ozan Akgul 351314f8a1 Turn metadata sync on in base/minimal schedules 2021-12-08 13:34:41 +03:00
Halil Ozan Akgul ee894c9e73 Fix metadata sync fails on multi_follower_schedule 2021-12-08 13:07:37 +03:00
Halil Ozan Akgul 4c8f79d7dd Turn metadata sync on in failure schedule 2021-12-08 11:22:56 +03:00
Halil Ozan Akgul 4f272ea0e5 Fix metadata sync fails in multi_extension 2021-12-08 10:25:43 +03:00
Halil Ozan Akgul a3834edeaa Turn metadata sync on in multi_mx_schedule 2021-12-08 10:25:43 +03:00
Halil Ozan Akgul ea37f4fd29 Turn metadata sync on in upgrade schedules 2021-12-08 10:19:02 +03:00
Hanefi Onaldi 05a3dfa8a9 Remove redundant arbitrary config class
We had 2 class definitions for CitusCacheManyConnectionsConfig, where
one of them was a copy of CitusSmallCopyBuffersConfig.

This commit leaves the intended class definition that configures caching
many connections, and removes the one that is a copy of another class
2021-12-08 04:47:08 +03:00
Burak Velioglu ed8e32de5e
Sync pg_dist_object on an update and propagate while syncing to a new node
Before that PR we were updating citus.pg_dist_object metadata, which keeps
the metadata related to objects on Citus, only on the coordinator node. In
order to allow using those objects from worker nodes (or erroring out with a
proper error message) we've started to propagate that metadata to worker
nodes as well.
2021-12-06 19:25:50 +03:00
Halil Ozan Akgul ef09ba0d06 Fix metadata sync fails of multi_table_ddl 2021-12-06 13:44:30 +03:00
Halil Ozan Akgul a6d0de060c Fix fails with metadata syncing in undistribute_table 2021-12-03 13:58:53 +03:00
Hanefi Onaldi 56e9b1b968 Introduce UDF to check worker connectivity
citus_check_connection_to_node runs a simple query on a remote node and
reports whether this attempt was successful.

This UDF will be used to make sure each worker node can connect to all
the worker nodes in the cluster.

parameters:
nodename: required
nodeport: optional (default: 5432)

return value:
boolean success
2021-12-03 02:30:28 +03:00
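A short usage sketch based on the parameters listed above; the node name is a placeholder:

```sql
SELECT citus_check_connection_to_node('worker-1');        -- nodeport defaults to 5432
SELECT citus_check_connection_to_node('worker-1', 9700);  -- explicit port
```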
Talha Nisanci e4ead8f408
Update broken link for upgrade tests (#5408)
* Update broken link for upgrade tests

* Update src/test/regress/README.md

Co-authored-by: Nils Dijk <nils@citusdata.com>
2021-12-02 15:25:36 +01:00
Onder Kalaci 549edcabb6 Allow disabling node(s) when multiple failures happen
As of the master branch, Citus does all the modifications to replicated tables
(e.g., reference tables and distributed tables with replication factor > 1)
via 2PC and avoids any shardstate=3. As a side-effect of those changes,
handling node failures for replicated tables changes.

With this PR, when one (or multiple) node failures happen, the users would
see query errors on modifications. If the problem is intermittent, that's OK;
once the node failure(s) recover by themselves, the modification queries would
succeed. If the node failure(s) are permanent, the users should call
`SELECT citus_disable_node(...)` to disable the node. As soon as the node is
disabled, modifications would start to succeed. However, now the old node falls
behind. It means that, when the node is up again, the placements should be
re-created on the node. First, use `SELECT citus_activate_node()`. Then, use
`SELECT replicate_table_shards(...)` to replicate the missing placements on
the re-activated node.
2021-12-01 10:19:48 +01:00
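The recovery flow described above, sketched as SQL with placeholder node and table names:

```sql
SELECT citus_disable_node('worker-2', 5432);   -- modifications start to succeed again
-- ... later, once the failed node is reachable:
SELECT citus_activate_node('worker-2', 5432);
SELECT replicate_table_shards('my_replicated_table'::regclass);  -- recreate missing placements
```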
Halil Ozan Akgul 316274b5f0 Add normalize.sed item for multi_fix_partition_shard_index_names test 2021-11-30 13:28:41 +03:00
Halil Ozan Akgul 11072b4cb8 Normalize create role command in drop_partitioned_table test 2021-11-30 12:46:22 +03:00
Onder Kalaci 38b08ebde9 Generalize the error checks while removing node
The checks for preventing removal of a node are very much reference
table centric. We are soon going to add the same checks for replicated
tables. So, make the checks generic such that:
	 (a) replicated tables fit naturally
	 (b) we can use the same checks in `citus_disable_node`.
2021-11-26 14:25:29 +01:00
Hanefi Onaldi 4c135de9e4 Introduce CI checks for hash comments in specs
We do not use comments starting with # in spec files because they create
errors from the C preprocessor, which expects directives after this character.
Instead use C style comments, i.e:
// single line comment

You can also use multiline comments as well
/*
 * multi line comment
 */
2021-11-26 14:52:51 +03:00
Halil Ozan Akgul 87a1c760d9 Fix tests in multi-1-schedule that fail with metadata syncing 2021-11-26 12:09:53 +03:00
Onder Kalaci b4931f7345 Do not acquire locks on reference tables when a node is removed/disabled
Before this commit, we acquired the metadata locks on the reference
tables on all the MX nodes while removing/disabling a node.

Although it has some marginal benefits, such as a concurrent
modification during remove/disable node blocking instead of erroring
out, the drawbacks seem worse. Both citus_remove_node and citus_disable_node
are not tolerant to multiple node failures.

With this commit, we relax the locks. The implication is that while
a node is removed/disabled, users might see query errors. On the
other hand, this change makes removing/disabling nodes more
tolerant to multiple node failures.
2021-11-26 09:08:25 +01:00
Onur Tirtir 76b8006a9e
Allow overwriting columnar storage pages written by aborted xacts (#5484)
When refactoring the storage layer in #4907, we deleted the code that allows
overwriting a disk page previously written but not known by metadata.

Readers can see the change that introduced the code allowing doing so in
commit a8da9acc63.

The reasoning was that, as of 10.2, we started aligning page
reservations (`AlignReservation`) for subsequent writes right after
allocating pages from disk. That means, even if the writer transaction
fails, subsequent writes are guaranteed to allocate a new page and write
there. For this reason, attempting to write to a previously allocated page
is not possible for a columnar table that the user created when using
v10.2.x.

However, since the older versions of columnar don't do that, the following
example scenario can still result in writing to such a disk page, even if the
user has now upgraded to v10.2.x. This is because, when upgrading storage to
2.0 (`ColumnarStorageUpdateIfNeeded`), we calculate the `reservedOffset` of
the metapage based on the highest used address known by stripe
metadata (`GetHighestUsedAddressAndId`). However, stripe metadata
doesn't have entries for aborted writes. As a result, highest used
address would be computed by ignoring pages that are allocated but not
used.

- User attempts writing to columnar table on Citus v10.0x/v10.1x.
- Write operation fails for some reason.
- User upgrades Citus to v10.2.x.
- When attempting to write to same columnar table, they hit to "attempt
  to write columnar data .." error since write operation done in the
  older version of columnar already allocated that page, and now we are
  overwriting it.

For this reason, with this commit, we re-do the change done in
a8da9acc63.

And for the reasons given above, it wasn't possible to add a test for
this commit via the usual code paths. For this reason, we added a UDF only for
testing purposes so that we can reproduce the exact scenario in our
regression test suite.
2021-11-26 07:51:13 +01:00
Onur Tirtir 85da4fc2e0
Merge branch 'master' into col/pg-upgrade-dependency 2021-11-26 09:34:43 +03:00
Onur Tirtir 81af605e07
Fix typo: "no sharding pruning constraints" -> "no shard pruning constraints" (#5490) 2021-11-25 21:00:44 +01:00
Onur Tirtir 73f06323d8 Introduce dependencies from columnarAM to columnar metadata objects
During pg upgrades, we have seen that it is not guaranteed that a
columnar table will be created after metadata objects got created.
Prior to changes done in this commit, we had such a dependency
relationship in `pg_depend`:

```
columnar_table ----> columnarAM ----> citus extension
                                           ^  ^
                                           |  |
columnar.storage_id_seq --------------------  |
                                              |
columnar.stripe -------------------------------
```

Since `pg_upgrade` just knows to follow the topological sort of the objects
when creating the database dump, the above dependency graph doesn't imply that
`columnar_table` should be created before metadata objects such as
`columnar.storage_id_seq` and `columnar.stripe` are created.

For this reason, with this commit we add new records to `pg_depend` to
make columnarAM depend on all rel objects living in the `columnar`
schema. That way, `pg_upgrade` will know it needs to create those before
creating `columnarAM`, and similarly, before creating any tables using
`columnarAM`.

Note that in addition to inserting those records via the installation script,
we also do the same in `citus_finish_pg_upgrade()`. This is because
`pg_upgrade` rebuilds catalog tables in the new cluster and that means
we must insert them in the new cluster too.
2021-11-23 13:14:00 +03:00
Onur Tirtir ef2ca03f24 Reproduce bug via test suite 2021-11-23 13:14:00 +03:00
Marco Slot f49d26fbeb Remove citus_update_table_statistics isolation test 2021-11-19 10:51:15 +01:00
Marco Slot 56eae48daf Stop updating shard range in citus_update_shard_statistics 2021-11-19 10:51:15 +01:00
Hanefi Onaldi c0d43d4905
Prevent cache usage on citus_drop_trigger codepaths 2021-11-18 20:24:51 +03:00
Hanefi Onaldi e6160ad131
Document failing tests for issue 5099 2021-11-18 20:01:34 +03:00
Marco Slot 9e6ca23286 Remove cstore_fdw-related logic 2021-11-16 13:59:03 +01:00
Önder Kalacı 8c0bc94b51
Enable replication factor > 1 in metadata syncing (#5392)
- [x] Add some more regression test coverage
- [x] Make sure returning works fine in case of
     local execution + remote execution
     (task->partiallyLocalOrRemote works as expected, already added tests)
- [x] Implement locking properly (and add isolation tests)
     - [x] We do #shardcount round-trips on `SerializeNonCommutativeWrites`.
           We made it a single round-trip.
- [x] Acquire locks for subselects on the workers & add isolation tests
- [x] Add a GUC to prevent modification from the workers, hence increase the
      coordinator-only throughput
       - The performance slightly drops (~15%), unless
         `citus.allow_modifications_from_workers_to_replicated_tables`
         is set to false
2021-11-15 15:10:18 +03:00
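A hedged sketch of the GUC mentioned in the last checklist item; it trades worker-originated modifications for coordinator-only throughput:

```sql
ALTER SYSTEM SET citus.allow_modifications_from_workers_to_replicated_tables TO off;
SELECT pg_reload_conf();
```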
Onur Tirtir 25024b776e
Skip deleting options if columnar.options is already dropped (#5458)
Drop extension might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the below error when opening
columnar.options to delete records for the columnar table that we are
about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg, which is why
I added the test to after_pg_upgrade_schedule.
2021-11-12 12:30:09 +03:00
Ahmet Gedemenli 14a33d4e8e Introduce GUC citus.use_citus_managed_tables 2021-11-11 14:09:06 +03:00
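A minimal sketch, assuming the GUC makes newly created local tables Citus-managed (behavior inferred from the GUC name, not stated in the commit; it may need to be set in postgresql.conf in some setups):

```sql
SET citus.use_citus_managed_tables TO on;
CREATE TABLE app_settings (key text PRIMARY KEY, value text);  -- hypothetical table
```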
Hanefi Onaldi 3d9cec70fd
Update migration paths from 10.2 to 11.0 (#5459)
We recently introduced a set of patches to 10.2, and introduced 10.2-4
migration version. This migration version only resides on `release-10.2`
branch, and is missing on our default branch. This creates a problem
because we do not have a valid migration path from 10.2 to latest 11.0.

To remedy this issue, I copied the relevant migration files from
`release-10.2` branch, and renamed some of our migration files on
default branch to make sure we have a linear upgrade path.
2021-11-11 13:55:28 +03:00
Önder Kalacı 6f5a343ff4
Make sure that enterprise tests pass (#5451) 2021-11-08 18:11:19 +03:00
Önder Kalacı 98ca6ba6ca
Allow lock_shard_resources to be called by the users with privileges (#5441)
Before this commit, we required the user to be owner of the shard/table
in order to call lock_shard_resources.

However, that is too restrictive. We can have users with GRANTS
to the table who are not owners of the tables/shards.

With this commit, we allow such patterns.
2021-11-08 15:36:51 +01:00
Önder Kalacı d5b371b2e0
Merge branch 'master' into naisila/fix-partitioned-index 2021-11-08 10:53:16 +01:00
naisila 385ba94d15 Run fix_partition_shard_index_names after each wrong naming command 2021-11-08 10:43:34 +01:00
Marco Slot 78866df13c Remove master_append_table_to_shard UDF 2021-11-08 10:43:24 +01:00