Commit Graph

179 Commits (dd45a2c8743b805e5a9aec0c7431dc7bd06ffc4b)

Author SHA1 Message Date
gurkanindibay dd45a2c874 Fixes drop role pg_dist_object removal issue 2024-01-31 17:22:16 +03:00
gurkanindibay c57d642408 Merges with prev branches 2024-01-30 00:07:06 +03:00
Gürkan İndibay e7b389f420
Merge branch 'grant_database_2pc' into create_user_2pc 2024-01-29 23:32:37 +03:00
gurkanindibay 9a2cabec8f Merge branch 'grant_role_2pc' into grant_database_2pc 2024-01-29 23:28:33 +03:00
gurkanindibay 848cf685c2 Changes postprocess logic 2024-01-29 23:26:05 +03:00
gurkanindibay 1ed2f915d2 Fixes merge problems 2024-01-29 23:22:21 +03:00
Gürkan İndibay a759aaf797
Merge branch 'grant_role_2pc' into grant_database_2pc 2024-01-29 22:18:05 +03:00
gurkanindibay 4553e1ea5b Fixes indentation 2024-01-29 17:57:09 +03:00
gurkanindibay 71a733cdec Fixes indentation 2024-01-29 17:50:52 +03:00
gurkanindibay 754d5899f2 Refactors MarkObjectDistributed logic 2024-01-29 17:50:27 +03:00
gurkanindibay e2e868166c Fixes naming suggestions 2024-01-29 16:34:37 +03:00
Gürkan İndibay 510e5581ad
Update src/backend/distributed/commands/utility_hook.c
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-01-29 16:33:49 +03:00
Gürkan İndibay 2f45493366
Update src/backend/distributed/commands/utility_hook.c
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-01-29 16:33:42 +03:00
Gürkan İndibay 42864d7069
Update src/backend/distributed/commands/utility_hook.c
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-01-29 16:33:27 +03:00
Gürkan İndibay 9632798dc5
Update src/backend/distributed/commands/utility_hook.c
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-01-29 16:32:54 +03:00
Gürkan İndibay 3533461712
Update src/backend/distributed/commands/utility_hook.c
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-01-29 13:16:27 +03:00
Gürkan İndibay 58bddca01c
Update src/backend/distributed/commands/utility_hook.c
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-01-29 13:16:14 +03:00
gurkanindibay 172f866ba6 Adds command propagation 2024-01-26 19:14:30 +03:00
gurkanindibay cf50b0cea9 Merge branch 'grant_role_2pc' into grant_database_2pc 2024-01-26 09:57:25 +03:00
Gürkan İndibay ee59e7eaf5
Merge branch 'main' into grant_role_2pc 2024-01-25 16:00:21 +03:00
Halil Ozan Akgül 1cb2e1e4e8
Fixes create user queries from Citus non-main databases with other users (#7442)
This PR makes the connections to other nodes for
`mark_object_distributed` use the same user as
`execute_command_on_remote_nodes_as_user` so they'll use the same
connection.
2024-01-24 12:57:54 +03:00
gurkanindibay 42105686a3 Fixes code style 2024-01-23 16:42:49 +03:00
gurkanindibay 2da1fb1ff8 Adds comment for the new method 2024-01-23 16:07:55 +03:00
gurkanindibay f7dce78534 Refactors Object Type check code 2024-01-23 16:07:15 +03:00
gurkanindibay 20caccb408 Adds grant database 2pc support 2024-01-23 14:55:13 +03:00
gurkanindibay 340baa1242 Fixes indentation 2024-01-10 22:12:53 +03:00
gurkanindibay 6d259d5941 Fixes review issues 2024-01-10 22:07:57 +03:00
Gürkan İndibay 665c65cf0e
Merge branch 'main' into grant_role_2pc 2024-01-09 10:58:04 +03:00
Onur Tirtir 1d55debb98
Support CREATE / DROP database commands from any node (#7359)
DESCRIPTION: Adds support for issuing `CREATE`/`DROP` DATABASE commands
from worker nodes

With this commit, we allow issuing CREATE / DROP DATABASE commands from
worker nodes too.
As in #7278, this is not allowed when the coordinator is not added to the
metadata, because we never sync metadata changes to the coordinator when
adding it to the metadata via
`SELECT citus_set_coordinator_host('<hostname>')`, or equivalently, via
`SELECT citus_add_node(<coordinator_node_name>, <coordinator_node_port>, 0)`.

We serialize database management commands by acquiring a Citus-specific
advisory lock on the first primary worker node if there are any workers in the
cluster. As opposed to what we did in https://github.com/citusdata/citus/pull/7278
for role management commands, we try to avoid running into distributed deadlocks
as much as possible. This is because, while Citus can detect distributed deadlocks
that happen around role management commands, it cannot do so for database
management commands, since most of them cannot be run inside a transaction block.
In that case, Citus cannot even detect the distributed deadlock because the command is not
part of a distributed transaction at all, and the command execution might not return
control back to the user for an indefinite amount of time.
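
Conceptually, the serialization boils down to taking a cluster-wide lock on one
designated node before running the command. A minimal stand-in sketch follows;
the real implementation uses an internal Citus-specific advisory lock on the
first primary worker, so the function and lock key here are only illustrative:

```sql
-- Hypothetical stand-in: before running CREATE/DROP DATABASE anywhere in the
-- cluster, grab a session-level advisory lock on the first primary worker so
-- that concurrent database management commands are serialized.
SELECT pg_advisory_lock(7359);  -- lock key is purely illustrative
```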
2024-01-08 16:47:49 +00:00
gurkanindibay febbaddc40 Fixes test errors 2024-01-05 13:49:04 +03:00
gurkanindibay 20dc90289b Adds Grant Role support into 2PC 2024-01-05 13:12:21 +03:00
Halil Ozan Akgül b877d606c7
Adds 2PC distributed commands from other databases (#7203)
DESCRIPTION: Adds support for 2PC from non-Citus main databases

This PR only adds support for `CREATE USER` queries; other queries still need
to be added. But that should be simple, because this PR creates the
underlying structure.

The Citus main database is the database where the Citus extension is
created. A non-main database is any other database that is on the
same node as a Citus main database.

When a `CREATE USER` query is run on a non-main database we:

1. Run `start_management_transaction` on the main database. This
function saves the outer transaction's xid (the non-main database
query's transaction id) and marks the current query as a main-db command.
2. Run `execute_command_on_remote_nodes_as_user("CREATE USER
<username>", <username to run the command>)` on the main database. This
function creates the user in the rest of the cluster by running the
query on the other nodes. The user on the current node is created by the
outer, non-main-db query, to make sure subsequent commands
in the same transaction can see this user.
3. Run `mark_object_distributed` on the main database. This function
adds the user to `pg_dist_object` in all of the nodes, including the
current one.

This PR also implements transaction recovery for the queries from
non-main databases.
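
Sketched as the sequence of calls the non-main database issues against the main
database, the flow looks roughly like this (argument shapes are assumptions,
since the exact signatures are not spelled out in this description):

```sql
-- 1. Tie the main-db work to the outer (non-main-db) transaction;
--    the xid value and its type are assumed for illustration.
SELECT start_management_transaction('123'::xid8);

-- 2. Create the user on all other nodes, running as the given user.
SELECT execute_command_on_remote_nodes_as_user('CREATE USER app_user',
                                               'app_user');

-- 3. Record the new user in pg_dist_object on every node, including this one;
--    the argument shape is an assumption.
SELECT mark_object_distributed('pg_authid'::regclass, 'app_user', 12345);
```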
2023-12-22 19:19:41 +03:00
Nils Dijk 0620c8f9a6
Sort includes (#7326)
This change adds a script to programmatically group all includes in a
specific order. The script was used as a one-time invocation to group
and sort all includes throughout our formatted code. The grouping is as
follows:

 - System includes (e.g. `#include <...>`)
 - Postgres.h (e.g. `#include "postgres.h"`)
 - Toplevel imports from postgres, not contained in a directory (e.g. `#include "miscadmin.h"`)
 - General postgres includes (e.g. `#include "nodes/..."`)
 - Toplevel citus includes, not contained in a directory (e.g. `#include "citus_version.h"`)
 - Columnar includes (e.g. `#include "columnar/..."`)
 - Distributed includes (e.g. `#include "distributed/..."`)

Because it is quite hard to tell the difference between toplevel citus
includes and toplevel postgres includes, the script hardcodes the list of
toplevel citus includes. In the same manner, it treats anything not
prefixed with `columnar/` or `distributed/` as a postgres include.

The sorting/grouping is enforced by CI. Since we do so with our own
script, there are no changes required in our uncrustify configuration.
2023-11-23 18:19:54 +01:00
Gürkan İndibay 3b556cb5ed
Adds create / drop database propagation support (#7240)
DESCRIPTION: Adds support for propagating `CREATE`/`DROP` database

In this PR, create and drop database support is added.

For CREATE DATABASE:
* the "oid" option is not supported
* specifying a "strategy" different from "wal_log" is not supported
* specifying a "template" different from "template1" is not supported

The last two are because those options are not saved in `pg_database`, and
when activating a node, we cannot assume what parameters were provided when
creating the database.

And "oid" is not supported because whether the user specified an arbitrary
oid when creating the database is not saved in pg_database, and we want
to avoid oid collisions that might arise from attempting to use an
auto-assigned oid on workers.

Finally, in case of node activation, GRANTs for the database are also
propagated.
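
As a minimal illustration of the supported surface (database names here are
hypothetical):

```sql
-- Propagated to all nodes: uses the default strategy (wal_log) and template1.
CREATE DATABASE tenant_db;

-- Sketch of the kind of command expected to be rejected for propagation,
-- since a non-default template cannot be reconstructed from pg_database.
CREATE DATABASE tenant_db_copy TEMPLATE some_other_template;
```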

---------

Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-11-21 16:43:51 +03:00
Onur Tirtir 240313e286
Support role commands from any node (#7278)
DESCRIPTION: Adds support for issuing role management commands from worker nodes

It's unlikely to get into a distributed deadlock with role commands, so we
don't care much about them at the moment.
There were several attempts to reduce the chances of a deadlock, but we
haven't merged any of them into the main branch yet; see:
#7325
#7016
#7009
2023-11-10 09:58:51 +00:00
Nils Dijk 0dac63afc0
move pg_version_constants.h to toplevel include (#7335)
In preparation for sorting and grouping all includes, we wanted to move
this file to the toplevel includes for good grouping/sorting.
2023-11-09 15:09:39 +00:00
Gürkan İndibay 71a4633dad
Fixes typo and renames multi_process_utility (#7259) 2023-10-17 16:39:37 +03:00
Gürkan İndibay 7fa109c977
Adds alter user missing features (#7204)
DESCRIPTION: Adds alter user rename propagation and enriches alter user
tests

---------

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-09-26 12:28:07 +03:00
aykut-bozkurt 8eb3360017
Fixes visibility problems with dependency propagation (#7028)
**Problem:**
Previously we always used an outside superuser connection to overcome
permission issues for the current user while propagating dependencies.
That has two main problems:
1. Visibility issues during dependency propagation (the metadata connection
propagates some objects like a schema, and the outside transaction does not
see them and tries to create them again)
2. Security issues (it is preferable to use the current user's connection
instead of the extension superuser)

**Solution (high level):**
Now, we try to make a smarter decision on whether to use an
outside superuser connection or the current user's metadata connection. We
prefer using the current user's connection if any object that has already
been propagated in the current transaction is a dependency of a
target object. We do that since we assume that if the current user has
permission to create the dependency, then they can most probably
propagate the target as well.

Our assumption is expected to hold most of the time, but it can still be
wrong. In those cases, the transaction would fail and the user should set the
GUC `citus.create_object_propagation` to `deferred` to work around it.
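
The workaround is a plain GUC setting, for example:

```sql
-- Fall back to deferred object propagation if the connection-choice
-- heuristic described above fails for your workload.
SET citus.create_object_propagation TO 'deferred';
```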

**Solution:**
1. We track all objects propagated in the current transaction (we can
handle subtransactions),
2. We propagate dependencies via the current user's metadata connection
if any dependency is created in the current transaction to address
issues listed above. Otherwise, we still use an outside superuser
connection.


DESCRIPTION: Fixes some object propagation errors seen with transaction
blocks.

Fixes https://github.com/citusdata/citus/issues/6614

---------

Co-authored-by: Nils Dijk <nils@citusdata.com>
2023-09-05 18:04:16 +03:00
Emel Şimşek a849570f3f
Improve the performance of CitusHasBeenLoaded function for a database that does not do CREATE EXTENSION citus but loads citus.so. (#7123)
For a database that does not create the citus extension by running

`CREATE EXTENSION citus;`

the `CitusHasBeenLoaded` function ends up querying the `pg_extension` table
every time it is invoked. This is not an ideal situation for such a
database.

The idea in this PR is as follows:

### A new field in MetadataCache
Add a new variable `extensionCreatedState` of the following type:

```
typedef enum ExtensionCreatedState
{
        UNKNOWN = 0,
        CREATED = 1,
        NOTCREATED = 2,
} ExtensionCreatedState;
```
When the MetadataCache is invalidated, `extensionCreatedState` will be
set to UNKNOWN.

### Invalidate MetadataCache when CREATE/DROP/ALTER EXTENSION citus
commands are run.

- Register a callback function, named
`InvalidateDistRelationCacheCallback`, for relcache invalidation during
the shared library initialization for `citus.so`. This callback function
is invoked in all the backends whenever the relcache is invalidated in
one of the backends. (This can be caused by many DDL operations.)

- In the cache invalidation callback,
`InvalidateDistRelationCacheCallback`, invalidate the `MetadataCache` by
zeroing it out.

- In `CitusHasBeenLoaded`, perform the costly "is citus loaded" check only
if the `MetadataCache` is not valid.
 
### Downsides

Any relcache invalidation (caused by various DDL operations) will cause
the Citus MetadataCache to get invalidated. Most of the time this will be
unnecessary, but we rely on DDL operations on relations not being
too frequent.
2023-09-05 13:29:35 +03:00
Naisila Puka 69af3e8509
Drop PG13 Support Phase 2 - Remove PG13 specific paths/tests (#7007)
This commit is the second and last phase of dropping PG13 support.

It consists of the following:

- Removes all PG_VERSION_13 & PG_VERSION_14 from codepaths
- Removes pg_version_compat entries and columnar_version_compat entries
specific to PG13
- Removes alternative pg13 test outputs
- Removes PG13 normalize lines and fixes the test outputs based on that

It is a continuation of 5bf163a27d
2023-06-21 14:18:23 +03:00
aykut-bozkurt f667f14029
Rewind tuple store to fix scrollable with hold cursor fetches (#7014)
We need to rewind the tuplestorestate's tuple index to get correct
results on fetching scrollable with hold cursors.


`PersistHoldablePortal` is responsible for persisting out the
tuplestorestate inside a with hold cursor before committing a
transaction.

It rewinds the cursor like below (`ExecutorRewind` calls `rescan`):
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
  ExecutorRewind(queryDesc);
}
```

At the end, it properly adjusts the tuple index for holdStore in the portal.
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
	if (!tuplestore_skiptuples(portal->holdStore,
	                           portal->portalPos,
	                           true))
		elog(ERROR, "unexpected end of tuple stream");
}
```

DESCRIPTION: Fixes incorrect results on fetching scrollable with hold
cursors.

Fixes https://github.com/citusdata/citus/issues/7010
2023-06-19 23:00:18 +03:00
aykut-bozkurt fba5c8dd30
ALTER TABLE <tblname> SET SCHEMA <schemaname> for single shard tables (#7004)
Adds support for altering the schema of single-shard tables. We do that in two
steps:
1. Undistribute the tenant table at the `preprocess` step,
2. Distribute the new schema, if it is a distributed schema, after DDLs are
propagated.

DESCRIPTION: Adds support for altering a table's schema to/from
distributed schemas.
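
A minimal usage sketch, assuming a distributed (tenant) schema `tenant_a` and a
regular schema `archive` (both names hypothetical):

```sql
-- Moving a single-shard table out of a distributed schema undistributes it;
-- moving a table into a distributed schema distributes it.
ALTER TABLE tenant_a.orders SET SCHEMA archive;
```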
2023-06-19 10:21:13 +03:00
Naisila Puka 48f068d08e
Remove AssertArg and AssertState (#6970)
PG16 removed them. They were already identical to Assert. We can merge
this directly to the main branch.

Relevant PG commit:

b1099eca8f38ff5cfaf0901bb91cb6a22f909bc6

Co-authored-by: onderkalaci <onderkalaci@gmail.com>
2023-06-05 13:25:21 +03:00
Onur Tirtir 246b054a7d
Add support for schema-based-sharding via a GUC (#6866)
DESCRIPTION: Adds citus.enable_schema_based_sharding GUC that allows
sharding the database based on schemas when enabled.

* Refactor the logic that automatically creates Citus managed tables 

* Refactor CreateSingleShardTable() to allow specifying colocation id
instead

* Add support for schema-based-sharding via a GUC

### What this PR is about:
Add **citus.enable_schema_based_sharding GUC** to enable schema-based
sharding. Each schema created while this GUC is ON will be considered
a tenant schema. Later on, regardless of whether the GUC is ON or
OFF, any table created in a tenant schema will be converted to a
single-shard distributed table (without a shard key). All the tenant
tables that belong to a particular schema will be co-located with each
other and will have a shard count of 1.
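
A minimal sketch of that behavior (schema and table names are hypothetical):

```sql
SET citus.enable_schema_based_sharding TO on;

-- Every schema created while the GUC is on becomes a tenant schema.
CREATE SCHEMA tenant_1;

-- Tables created in a tenant schema become single-shard distributed tables,
-- co-located with the other tables in the same schema.
CREATE TABLE tenant_1.orders (order_id bigint PRIMARY KEY, total numeric);
```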

We introduce a new metadata table --pg_dist_tenant_schema-- to do the
bookkeeping for tenant schemas:
```sql
psql> \d pg_dist_tenant_schema
          Table "pg_catalog.pg_dist_tenant_schema"
┌───────────────┬─────────┬───────────┬──────────┬─────────┐
│    Column     │  Type   │ Collation │ Nullable │ Default │
├───────────────┼─────────┼───────────┼──────────┼─────────┤
│ schemaid      │ oid     │           │ not null │         │
│ colocationid  │ integer │           │ not null │         │
└───────────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "pg_dist_tenant_schema_pkey" PRIMARY KEY, btree (schemaid)
    "pg_dist_tenant_schema_unique_colocationid_index" UNIQUE, btree (colocationid)

psql> table pg_dist_tenant_schema;
┌───────────┬───────────────┐
│ schemaid  │ colocationid  │
├───────────┼───────────────┤
│     41963 │            91 │
│     41962 │            90 │
└───────────┴───────────────┘
(2 rows)
```

The colocation id column of pg_dist_tenant_schema can never be NULL, even
for the tenant schemas that don't have a tenant table yet. This is
because we assign colocation ids to tenant schemas as soon as they
are created. That way, we can keep associating tenant schemas with
particular colocation groups even if all the tenant tables of a tenant
schema are dropped and recreated later on.

When a tenant schema is dropped, we delete the corresponding row from
pg_dist_tenant_schema. In that case, we delete the corresponding
colocation group from pg_dist_colocation as well.

### Future work for 12.0 release:
We're building schema-based sharding on top of the infrastructure that
adds support for creating distributed tables without a shard key
(https://github.com/citusdata/citus/pull/6867).
However, not all the operations that can be done on distributed tables
without a shard key necessarily make sense (in the same way) in the
context of schema-based sharding. For example, we need to think about
what happens if a user attempts to alter the schema of a tenant table. We
will tackle such scenarios in a future PR.

We will also add a new UDF --citus.schema_tenant_set() or such-- to
allow users to use an existing schema as a tenant schema, and another
one --citus.schema_tenant_unset() or such-- to stop using a schema as
a tenant schema in future PRs.
2023-05-26 10:49:58 +03:00
Onur Tirtir 56d217b108
Mark objects as distributed even when pg_dist_node is empty (#6900)
We mark objects as distributed in Citus metadata only if we need
to propagate the command that creates them to worker nodes. For this
reason, we were not doing this for the objects that are created while
pg_dist_node is empty.

One implication of doing so is that we defer the schema propagation to
the time when the user creates the first distributed table in the schema.
However, this doesn't help for schema-based sharding (#6866) because we
want to sync pg_dist_tenant_schema to the worker nodes even for empty
schemas.
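
For context, the deferral described above looks roughly like this (schema and
table names are hypothetical):

```sql
-- Previously, with pg_dist_node empty, the schema was only marked as
-- distributed once the first distributed table was created in it.
CREATE SCHEMA analytics;  -- not yet marked as distributed
CREATE TABLE analytics.events (id bigint, payload jsonb);
SELECT create_distributed_table('analytics.events', 'id');  -- marked here
```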

* Support test dependencies for isolation tests without a schedule

* Comment out a test due to a known issue (#6901)

* Also, reduce the verbosity for some log messages and make some
   tests compatible with run_test.py.
2023-05-16 11:45:42 +03:00
Marco Slot 343d1c5072
Refactor executor utility functions into multiple files (#6593)
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2023-03-31 13:07:48 +02:00
Gokhan Gulbiz b6a4652849
Stop background daemon before dropping the database (#6688)
DESCRIPTION: Stop maintenance daemon when dropping a database even
without Citus extension

Fixes #6670
2023-02-03 15:15:44 +03:00
Gokhan Gulbiz 4e26464969
Allow plain pg foreign tables without a table_name option (#6652) 2023-01-27 16:34:11 +03:00
Jelte Fennema 81dcddd1ef
Actually skip constraint validation on shards after shard move (#6640)
DESCRIPTION: Fix foreign key validation skip at the end of shard move

In eadc88a we started completely skipping foreign key constraint
validation at the end of a non-blocking shard move, instead of only for
foreign keys to reference tables. However, it turns out that this didn't
work at all because of a hard-to-notice bug: by resetting the
SkipConstraintValidation flag at the end of our utility hook, we
actually made the SET command that sets it a no-op.

This fixes that bug by removing the code that resets it. This is fine
because #6543 removed the only place where we set the flag in C code, so
resetting the flag no longer has any purpose. This PR also adds a
regression test, because it turned out we didn't have any; otherwise we
would have caught that the feature was completely broken.

It also moves the constraint validation skipping to the utility hook.
The reason is that #6550 showed us that this is the better place to skip
it, because it will also skip the planning phase and not just the
execution.
2023-01-27 13:08:05 +01:00