Commit Graph

352 Commits (c7a55c8606a795ff81347840cb4216cfa73da8eb)

Naisila Puka 287abea661
PG18 compatibility - varreturningtype additions (#8231)
This PR solves the following diffs, which originate from the addition of
the `varreturningtype` field to the `Var` struct in PG18:
https://github.com/postgres/postgres/commit/80feb727c

Previously we didn't account for this new field, so the parser could not
correctly reconstruct the `Var` node structure and instead errored out
with `did not find '}' at end of input node`:

```diff
 SELECT column_to_column_name(logicalrelid, partkey)
 FROM pg_dist_partition WHERE partkey IS NOT NULL ORDER BY 1 LIMIT 1;
- column_to_column_name
----------------------------------------------------------------------
- a
-(1 row)
-
+ERROR:  did not find '}' at end of input node
```

The solution follows the precedent of https://github.com/citusdata/citus/pull/7107,
from when the varnullingrels field was added to the `Var` struct in PG16.

The solution includes:
- Taking care of the `partkey` in the `pg_dist_partition` table, because it
comes from the `Var` struct. This mainly means fixing the upgrade
script to PG18 by saving all the `partkey` info before upgrading to
PG18 (in `citus_prepare_pg_upgrade`) and then re-generating the `partkey`
columns in `pg_dist_partition` (using `UPDATE`) after upgrading to PG18
(in `citus_finish_pg_upgrade`). A sketch of this idea follows after this list.
- Adding a normalize rule to fix output differences among PG versions.
Note that we need two normalize lines: one for PG15 since it doesn't
have `varnullingrels`, and one for PG16/PG17.
- A small trick in `metadata_sync_helpers` to use different text when
generating the `partkey`, based on the PG version.
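
A minimal sketch of the upgrade-script idea above, not the actual script: the backup table name is illustrative, and `column_to_column_name` / `column_name_to_column` are the Citus helpers assumed to convert between the serialized `Var` node string and a plain column name.

```sql
-- In citus_prepare_pg_upgrade (still on the old PG version):
-- stash the partition column of each table as plain text.
CREATE TABLE public.pg_dist_partkey_backup AS
SELECT logicalrelid,
       column_to_column_name(logicalrelid, partkey) AS partkey_col
FROM pg_dist_partition
WHERE partkey IS NOT NULL;

-- In citus_finish_pg_upgrade (now on PG18):
-- regenerate the Var node string, which now includes varreturningtype.
UPDATE pg_dist_partition p
SET partkey = column_name_to_column(b.logicalrelid, b.partkey_col)
FROM public.pg_dist_partkey_backup b
WHERE p.logicalrelid = b.logicalrelid;
```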

Fixes #8189
2025-10-09 17:35:03 +03:00
Naisila Puka f0014cf0df
PG18 compatibility: misc output diffs pt2 (#8234)
3 minor changes to reduce some noise from the regression diffs.

1 - Reduce verbosity when ALTER EXTENSION fails
PG18 has improved reporting of errors in extension script files.
Relevant PG commit:
https://github.com/postgres/postgres/commit/774171c4f
PG18 includes more context in these errors, so we reduce verbosity:
```
ALTER EXTENSION citus UPDATE TO '11.0-1';
 ERROR:  cstore_fdw tables are deprecated as of Citus 11.0
 HINT:  Install Citus 10.2 and convert your cstore_fdw tables to the
        columnar access method before upgrading further
 CONTEXT:  PL/pgSQL function inline_code_block line 4 at RAISE
+SQL statement "DO LANGUAGE plpgsql
+$$
+BEGIN
+    IF EXISTS (SELECT 1 FROM pg_dist_shard where shardstorage = 'c') THEN
+     RAISE EXCEPTION 'cstore_fdw tables are deprecated as of Citus 11.0'
+        USING HINT = 'Install Citus 10.2 and convert your cstore_fdw tables
                       to the columnar access method before upgrading further';
+ END IF;
+END;
+$$"
+extension script file "citus--10.2-5--11.0-1.sql", near line 532
```

2 - Fix backend type order in tests for PG18
PG18 added another backend type, which changed the ordering
in this test, so we add a separate IF condition for PG18.
Relevant PG commit:
https://github.com/postgres/postgres/commit/18d67a8d7d

3 - Ignore "DEBUG: find_in_path" lines in output
Relevant PG commit:
https://github.com/postgres/postgres/commit/4f7f7b0375
The new GUC extension_control_path specifies a path to look for
extension control files.
2025-10-09 16:50:41 +03:00
Naisila Puka b4cb1a94e9
Bump citus and citus_columnar to 14.0devel (#8170) 2025-09-19 12:54:55 +03:00
Naisila Puka eaa609f510
Add citus_stats UDF (#8026)
DESCRIPTION: Add `citus_stats` UDF

This UDF acts on a Citus table, and provides `null_frac`,
`most_common_vals` and `most_common_freqs` for each column in the table,
based on the definitions of these columns in the Postgres view
`pg_stats`.

**Aggregated Views: pg\_stats > citus\_stats** 

citus\_stats is a **view** intended for use in **Citus**, a distributed
extension of PostgreSQL. It collects and returns **column-level
statistics** for a distributed table: specifically, the **most common
values**, their **frequencies**, and the **fraction of null values**, like the
pg\_stats view does for regular Postgres tables.

**Use Case** 

This view is useful when: 

- You need **column-level insights** on a distributed table. 
- You're performing **query optimization**, **cardinality estimation**,
or **data profiling** across shards.

**What It Returns** 

A **table** with: 

| Column Name | Data Type | Description |
|---------------------|-----------|-------------------------------------------------------------------------------|
| schemaname | text | Name of the schema containing the distributed table |
| tablename | text | Name of the distributed table |
| attname | text | Name of the column (attribute) |
| null_frac | float4 | Estimated fraction of NULLs in the column across all shards |
| most_common_vals | text[] | Array of most common values for the column |
| most_common_freqs | float4[] | Array of corresponding frequencies (as fractions) of the most common values |
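
For illustration, a minimal query against the new relation (`my_dist_table` is a hypothetical distributed table; the columns are the ones listed above):

```sql
SELECT attname, null_frac, most_common_vals, most_common_freqs
FROM citus_stats
WHERE tablename = 'my_dist_table'
ORDER BY attname;
```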

**Caveats** 
- The function assumes that the array of the most common values is the
same across different shards, therefore it simply adds everything up.
2025-08-19 23:17:13 +03:00
Muhammad Usama be6668e440
Snapshot-Based Node Split – Foundation and Core Implementation (#8122)
**DESCRIPTION:**
This pull request introduces the foundation and core logic for the
snapshot-based node split feature in Citus. This feature enables
promoting a streaming replica (referred to as a clone in this feature
and UI) to a primary node and rebalancing shards between the original
and the newly promoted node without requiring a full data copy.

This significantly reduces rebalance times for scale-out operations
where the new node already contains a full copy of the data via
streaming replication.

Key Highlights:
**1. Replica (Clone) Registration & Management Infrastructure**

Introduces a new set of UDFs to register and manage clone nodes:

- citus_add_clone_node()
- citus_add_clone_node_with_nodeid()
- citus_remove_clone_node()
- citus_remove_clone_node_with_nodeid()

These functions allow administrators to register a streaming replica of
an existing worker node as a clone, making it eligible for later
promotion via snapshot-based split.

**2. Snapshot-Based Node Split (Core Implementation)**
New core UDF: 

- citus_promote_clone_and_rebalance()

This function implements the full workflow to promote a clone and
rebalance shards between the old and new primaries. Steps include:

1. Ensuring Exclusivity – Blocks any concurrent placement-changing
operations.
2. Blocking Writes – Temporarily blocks writes on the primary to ensure
consistency.
3. Replica Catch-up – Waits for the replica to be fully in sync.
4. Promotion – Promotes the replica to a primary using pg_promote.
5. Metadata Update – Updates metadata to reflect the newly promoted
primary node.
6. Shard Rebalancing – Redistributes shards between the old and new
primary nodes.


**3. Split Plan Preview**
A new helper UDF get_snapshot_based_node_split_plan() provides a preview
of the shard distribution post-split, without executing the promotion.

**Example:**

```
reb 63796> select * from pg_catalog.get_snapshot_based_node_split_plan('127.0.0.1',5433,'127.0.0.1',5453);
  table_name  | shardid | shard_size | placement_node 
--------------+---------+------------+----------------
 companies    |  102008 |          0 | Primary Node
 campaigns    |  102010 |          0 | Primary Node
 ads          |  102012 |          0 | Primary Node
 mscompanies  |  102014 |          0 | Primary Node
 mscampaigns  |  102016 |          0 | Primary Node
 msads        |  102018 |          0 | Primary Node
 mscompanies2 |  102020 |          0 | Primary Node
 mscampaigns2 |  102022 |          0 | Primary Node
 msads2       |  102024 |          0 | Primary Node
 companies    |  102009 |          0 | Clone Node
 campaigns    |  102011 |          0 | Clone Node
 ads          |  102013 |          0 | Clone Node
 mscompanies  |  102015 |          0 | Clone Node
 mscampaigns  |  102017 |          0 | Clone Node
 msads        |  102019 |          0 | Clone Node
 mscompanies2 |  102021 |          0 | Clone Node
 mscampaigns2 |  102023 |          0 | Clone Node
 msads2       |  102025 |          0 | Clone Node
(18 rows)

```
**4. Test Infrastructure Enhancements**

- Added a new test case scheduler for snapshot-based split scenarios.
- Enhanced pg_regress_multi.pl to support creating node backups with
slightly modified options to simulate real-world backup-based clone
creation.

### 5. Usage Guide
The snapshot-based node split can be performed using the following
workflow:

**- Take a Backup of the Worker Node**
Run pg_basebackup (or an equivalent tool) against the existing worker
node to create a physical backup.

`pg_basebackup -h <primary_worker_host> -p <port> -D
/path/to/replica/data --write-recovery-conf
`

**- Start the Replica Node**
Start PostgreSQL on the replica using the backup data directory,
ensuring it is configured as a streaming replica of the original worker
node.

**- Register the Backup Node as a Clone**
Mark the registered replica as a clone of its original worker node:

`SELECT * FROM citus_add_clone_node('<clone_host>', <clone_port>,
'<primary_host>', <primary_port>);
`

**- Promote and Rebalance the Clone**
Promote the clone to a primary and rebalance shards between it and the
original worker:

`SELECT * FROM citus_promote_clone_and_rebalance('clone_node_id');
`

**- Drop Any Replication Slots from the Original Worker**
After promotion, clean up any unused replication slots from the original
worker:

`SELECT pg_drop_replication_slot('<slot_name>');
`
2025-08-19 14:13:55 +03:00
Muhammad Usama f743b35fc2
Parallelize Shard Rebalancing & Unlock Concurrent Logical Shard Moves (#7983)
DESCRIPTION: Parallelizes shard rebalancing and removes the bottlenecks
that previously blocked concurrent logical-replication moves.
These improvements reduce rebalance windows, particularly for clusters
with large reference tables, and enable multiple shard transfers to run in parallel.

Motivation:
Citus’ shard rebalancer has some key performance bottlenecks:
**Sequential Movement of Reference Tables:**
Reference tables are often assumed to be small, but in real-world
deployments, they can grow significantly large. Previously, reference
table shards were transferred as a single unit, making the process
monolithic and time-consuming.
**No Parallelism Within a Colocation Group:**
Although Citus distributes data using colocated shards, shard
movements within the same colocation group were serialized. In
environments with hundreds of distributed tables colocated
together, this serialization significantly slowed down rebalance
operations.
**Excessive Locking:**
The rebalancer used restrictive locks and redundant logical-replication
guards, further limiting concurrency.

The goal of this commit is to eliminate these inefficiencies and enable
maximum parallelism during rebalance, without compromising correctness
or compatibility.

Feature Summary:

**1. Parallel Reference Table Rebalancing**
Each reference-table shard is now copied in its own background task.
Foreign key and other constraints are deferred until all shards are
copied.
For single-shard movement without considering colocation, a new
internal-only UDF, `citus_internal_copy_single_shard_placement`, is
introduced to allow single-shard copy/move operations.
Since this function is internal, we do not allow users to call it
directly.
**Temporary Hack to Set Background Task Context** Background tasks
cannot currently set custom GUCs like application_name before executing
internal-only functions, so a 'citus_rebalancer ...' statement is prepended
to the task command. This is a temporary hack to label internal tasks until
proper GUC injection support is added to the background task executor.

**2. Changes in Locking Strategy**

- Drop the leftover replication lock that previously serialized shard
moves performed via logical replication. This lock was only needed when
we used to drop and recreate the subscriptions/publications before each
move. Since Citus now removes those objects later as part of the “unused
distributed objects” cleanup, shard moves via logical replication can
safely run in parallel without additional locking.

- Introduced a per-shard advisory lock to prevent concurrent operations
on the same shard while allowing maximum parallelism elsewhere.

- Change the lock mode in AcquirePlacementColocationLock from
ExclusiveLock to RowExclusiveLock to allow concurrent updates within the
same colocation group, while still preventing concurrent DDL operations.

**3. citus_rebalance_start() enhancements**
The citus_rebalance_start() function now accepts two new optional
parameters:

```
- parallel_transfer_colocated_shards BOOLEAN DEFAULT false,
- parallel_transfer_reference_tables BOOLEAN DEFAULT false
```
This preserves the existing behavior by default, ensuring backward
compatibility and avoiding any disruption to user expectations; when both
are set to true, the rebalancer operates with full parallelism.

**Previous Rebalancer Behavior:**
`SELECT citus_rebalance_start(shard_transfer_mode := 'force_logical');`
This would:
Start a single background task for replicating all reference tables
Then, move all shards serially, one at a time.
```
Task 1: replicate_reference_tables()
         ↓
         Task 2: move_shard_1()
         ↓
         Task 3: move_shard_2()
         ↓
         Task 4: move_shard_3()
```
Slow and sequential. Reference table copy is a bottleneck. Colocated
shards must wait for each other.

**New Parallel Rebalancer:**
```
SELECT citus_rebalance_start(
        shard_transfer_mode := 'force_logical',
        parallel_transfer_colocated_shards := true,
        parallel_transfer_reference_tables := true
      );
```
This would:

- Schedule independent background tasks for each reference-table shard.
- Move colocated shards in parallel, while still maintaining dependency
order.
- Defer constraint application until all reference shards are in place.
```
Task 1: copy_ref_shard_1()
          Task 2: copy_ref_shard_2()
          Task 3: copy_ref_shard_3()
            → Task 4: apply_constraints()
          ↓
         Task 5: copy_shard_1()
         Task 6: copy_shard_2()
         Task 7: copy_shard_3()
         ↓
         Task 8-10: move_shard_1..3()
```
Each operation is scheduled independently and can run as soon as
dependencies are satisfied.
2025-08-18 17:44:14 +03:00
Onur Tirtir 87a1b631e8
Not automatically create citus_columnar when creating citus extension (#8081)
DESCRIPTION: Not automatically create citus_columnar when there are no
relations using it.

Previously, we always created citus_columnar when creating citus
with version >= 11.1, and we did so as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from
citus, but don't actually drop them from the database.
* "Old columnar" is the one that we had before Citus 11.1 as part of
citus, i.e., before splitting the access method and its catalog into
citus_columnar.
* Create citus_columnar and attach the SQL objects left over from old
columnar to it so that we can continue supporting, with citus_columnar,
the columnar tables that the user had before Citus 11.1.

The first part is unchanged; however, we no longer create citus_columnar
automatically if the user doesn't have any relations using
columnar. For this reason, as of Citus 13.2, when these SQL objects are
not owned by an extension and there are no relations using the columnar
access method, we drop these SQL objects when updating Citus to 13.2.

The net effect is still the same as if we had automatically created
citus_columnar and the user dropped citus_columnar later, so we should not
have any issues with dropping them.
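
For users who do want columnar after this change, a hedged sketch of the manual opt-in (standard `CREATE EXTENSION` syntax; the table definition is illustrative):

```sql
-- citus no longer pulls in citus_columnar implicitly;
-- create it explicitly when you actually want columnar tables.
CREATE EXTENSION citus_columnar;
CREATE TABLE events_columnar (id bigint, payload jsonb) USING columnar;
```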

(**Update:** Seems we've made some assumptions in citus, e.g.,
citus_finish_pg_upgrade() still assumes columnar metadata exists and
tries to apply some fixes for it, so this PR fixes them as well. See the
last section of this PR description.)

Also, ideally I was hoping to just remove some lines of code from
extension.c, where we decide to automatically create citus_columnar when
creating citus; however, this didn't happen to be the case for two
reasons:
* We still need to automatically create it for the servers using the
columnar access method.
* We need to clean up the leftover SQL objects from old columnar when
the above is not the case; otherwise we would have leftover SQL objects from
old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we
should clean up those objects and let the user decide whether they want to
create citus_columnar when they really need it later.

---

Also made several changes in the test suite because similarly, we don't
always want to have citus_columnar created in citus tests anymore:
* Now, columnar specific test targets, which cover **41** test sql
files, always install columnar by default, by using
"--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus specific test
targets because by default we don't want to have citus_columnar created
during citus tests.
* Excluding citus_columnar specific tests, we have **601** sql files
that we have as citus tests and in **27** of them we manually create
citus_columnar at the very beginning of the test because these tests do
test some functionalities of citus together with columnar tables.

Also, before and after schedules for PG upgrade tests are now duplicated
so we have two versions of each: one with columnar tests and one
without. To choose between them, check-pg-upgrade now supports a
"test-with-columnar" option, which can be set to "true" or anything else
to logically indicate "false". In CI, we run the check-pg-upgrade test
target with both options. The purpose is to ensure we can test PG
upgrades where citus_columnar is not created in the cluster before the
upgrade as well.

Finally, added more tests to multi_extension.sql to test Citus upgrade
scenarios with / without columnar tables / citus_columnar extension.

---

Also, it seems citus_finish_pg_upgrade was assuming that citus_columnar is
always created, but we should never have made such an
assumption. To fix that, we moved columnar-specific post-PG-upgrade work
from citus to a new columnar UDF, columnar_finish_pg_upgrade.
But to avoid breaking existing customer / managed service scripts, we
continue to automatically perform post-PG-upgrade work for columnar
within citus_finish_pg_upgrade, but this time only if the columnar access
method exists.
2025-08-18 08:29:27 +01:00
Teja Mupparti 889aa92ac0
EXPLAIN ANALYZE - Prevent execution of the plan during the plan-print (#8017)
DESCRIPTION: Fixed a bug in EXPLAIN ANALYZE to prevent unintended (duplicate) execution of the (sub)plans during the explain phase.

Fixes #4212 

### 🐞 Bug #4212 : Redundant (Subplan) Execution in `EXPLAIN ANALYZE`
codepath

#### 🔍 Background
In the standard PostgreSQL execution path, `ExplainOnePlan()` is
responsible for two distinct operations depending on whether `EXPLAIN
ANALYZE` is requested:

1. **Execute the plan**

   ```c
   if (es->analyze)
       ExecutorRun(queryDesc, direction, 0L, true);
   ```

2. **Print the plan tree** 

   ```c
   ExplainPrintPlan(es, queryDesc);
   ```

When printing the plan, the executor should **not run the plan again**.
Execution is only expected to happen once—at the top level when
`es->analyze = true`.

---

#### ⚠️ Issue in Citus

In the Citus implementation, `CustomScanMethods.ExplainCustomScan =
CitusExplainScan`, the custom scan explain callback function used
to print explain information of a Citus plan, incorrectly performs
**redundant execution** inside the explain path of `ExplainPrintPlan()`:

```c
ExplainOnePlan()
  ExplainPrintPlan()
      ExplainNode()
        CitusExplainScan()
          if (distributedPlan->subPlanList != NIL)
          {
              ExplainSubPlans(distributedPlan, es);
             {
              PlannedStmt *plan = subPlan->plan;
              ExplainOnePlan(plan, ...);  // ⚠️ May re-execute subplan if es->analyze is true
             }
         }
```
This causes the subplans to be **executed again**, even though they have
already been executed during the top-level plan execution. This behavior
violates the expectation in PostgreSQL where `EXPLAIN ANALYZE` should
**execute each node exactly once** for analysis.

---
#### Fix (proposed)
Save the output of the subplans during `ExecuteSubPlans()`, and later use it
in `ExplainSubPlans()`.
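
An illustrative reproduction shape for the bug (hypothetical tables; any query that Citus plans via subplans, e.g. a CTE that gets recursively planned, would previously run its subplan twice under EXPLAIN ANALYZE):

```sql
EXPLAIN ANALYZE
WITH recent AS (
    SELECT user_id FROM events ORDER BY created_at DESC LIMIT 10
)
SELECT * FROM users WHERE user_id IN (SELECT user_id FROM recent);
```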
2025-07-30 11:29:50 -07:00
naisila 4cd8bb1b67 Bump Citus version to 13.2devel 2025-06-24 16:21:48 +02:00
Onur Tirtir 55a0d1f730
Add skip_qualify_public param to shard_name() to allow qualifying for "public" schema (#8014)
DESCRIPTION: Adds skip_qualify_public param to `shard_name()` UDF to
allow qualifying for "public" schema when needed.
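
A hedged usage sketch (the table name and shard id are made up; the parameter name comes from this description):

```sql
-- Default behavior:
SELECT shard_name('my_table'::regclass, 102008);
-- Control whether shards of tables in the "public" schema
-- come back schema-qualified:
SELECT shard_name('my_table'::regclass, 102008, skip_qualify_public := true);
```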
2025-06-02 10:15:32 +03:00
Alper Kocatas 088ba75057
Add citus_nodes view (#7968)
DESCRIPTION: Adds `citus_nodes` view that displays the node name, port,
role, and "active" for nodes in the cluster.

This PR adds `citus_nodes` view to the `pg_catalog` schema. The
`citus_nodes` view is created in the `citus` schema and is used to
display the node name, port, role, and active status of each node in the
`pg_dist_node` table.

The view is granted `SELECT` permission to the `PUBLIC` role and is set
to the `pg_catalog` schema.
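
A minimal usage example (the columns follow this description):

```sql
-- node name, port, role and active status for every node in the cluster
SELECT * FROM citus_nodes;
```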

Test cases were added to `multi_cluster_management` tests.

structs.py was modified to add whitespace as `citus_indent` requires.

---------

Co-authored-by: Alper Kocatas <alperkocatas@microsoft.com>
2025-05-14 15:05:12 +03:00
Onur Tirtir 3d61c4dc71
Add citus_stat_counters view and citus_stat_counters_reset() function to reset it (#7917)
DESCRIPTION: Adds citus_stat_counters view that can be used to query
stat counters that Citus collects while the feature is enabled, which is
controlled by citus.enable_stat_counters. citus_stat_counters() can be
used to query the stat counters for the provided database oid and
citus_stat_counters_reset() can be used to reset them for the provided
database oid or for the current database if nothing or 0 is provided.
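
A hedged usage sketch based on the names above (the oid lookup is just one way to pass the current database's oid):

```sql
SET citus.enable_stat_counters TO on;

-- Query the collected counters via the view:
SELECT * FROM citus_stat_counters;

-- Or for a specific database oid via the function:
SELECT * FROM citus_stat_counters(
    (SELECT oid FROM pg_database WHERE datname = current_database()));

-- Reset counters for the current database:
SELECT citus_stat_counters_reset();
```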

Today we don't persist stat counters on server shutdown. In other words,
stat counters are automatically reset in case of a server restart.

Details on the underlying design can be found in header comment of
stat_counters.c and in the technical readme.

-------

Here are the details about what we track as of this PR:

For connection management, we have three statistics about the inter-node
connections initiated by the node itself:

* **connection_establishment_succeeded**
* **connection_establishment_failed**
* **connection_reused**

While the first two are relatively easier to understand, the third one
covers the case where a connection is reused. This can happen when a
connection was already established to the desired node, Citus decided to
cache it for some time (see citus.max_cached_conns_per_worker &
citus.max_cached_connection_lifetime), and then reused it for a new
remote operation. Here are the other important details about these
connection statistics:

1. connection_establishment_failed doesn't care about the connections
that we could establish but are lost later in the transaction. Plus, we
cannot guarantee that the connections that are counted in
connection_establishment_succeeded were not lost later.
2. connection_establishment_failed doesn't care about the optional
connections (see OPTIONAL_CONNECTION flag) that we gave up establishing
because of the connection throttling rules we follow (see
citus.max_shared_pool_size & citus.local_shared_pool_size). The reason
for this is that we didn't even try to establish these connections.
3. For the rest of the cases where a connection failed for some reason,
we always increment connection_establishment_failed even if the caller
was okay with the failure and knew how to recover from it (e.g., the
adaptive executor knows how to fall back to local execution when the
target node is the local node and it cannot establish a connection to the
local node). The reason is that even if it's likely that we can still
serve the operation, we still failed to establish the connection and we
want to track this.
4. Finally, the connection failures that we count in
connection_establishment_failed might be caused by any of the following
reasons and for now we prefer to _not_ further distinguish them for
simplicity:
a. remote node is down or cannot accept any more connections, or
overloaded such that citus.node_connection_timeout is not enough to
establish a connection
b. any internal Citus error that might result in preparing a bad
connection string so that libpq fails when parsing the connection string
even before actually trying to establish a connection via connect() call
c. broken citus.node_conninfo or such Citus configuration that was
incorrectly set by the user can also result in similar outcomes as in b
d. internal waitevent set / poll errors or OOM in local node

We also track two more statistics for query execution:

* **query_execution_single_shard**
* **query_execution_multi_shard**

And more importantly, both query_execution_single_shard and
query_execution_multi_shard are not only tracked for the top-level
queries but also for the subplans etc. The reason is that for some
queries, e.g., the ones that go through recursive planning, after Citus
performs the heavy work as part of subplans, the work that needs to be
done for the top-level query becomes quite straightforward. And for such
query types, it would be deceiving if we only incremented the query stat
counters for the top-level query. Similarly, for non-pushable INSERT ..
SELECT and MERGE queries, we perform separate counter increments for the
SELECT / source part of the query besides the final INSERT / MERGE
query.
2025-04-28 12:23:52 +00:00
German Eichberger 1c09469dd2
Adds a method to determine if current node is primary (#7720)
DESCRIPTION: Adds citus_is_primary_node() UDF to determine if the
current node is a primary node in the cluster.
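
Usage is a single call (presumably returning a boolean):

```sql
SELECT citus_is_primary_node();
```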

---------

Co-authored-by: German Eichberger <geeichbe@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2025-03-18 15:12:42 +00:00
naisila 1d0bdbd749 Bump Citus into 13.1devel 2025-03-13 15:13:56 +03:00
Naisila Puka 2b5dfbbd08 Bump Citus version to 13.0.1 (#7872) 2025-03-12 12:43:01 +03:00
Naisila Puka 3b1c082791 Drops PG14 support (#7753)
DESCRIPTION: Drops PG14 support

1. Remove "$version_num" != 'xx' from configure file
2. delete all PG_VERSION_NUM = PG_VERSION_XX references in the code
3. Look at pg_version_compat.h file, remove all _compat functions etc
defined specifically for PGXX differences
4. delete all PG_VERSION_NUM >= PG_VERSION_(XX+1), PG_VERSION_NUM <
PG_VERSION_(XX+1) ifs in the codebase
5. delete ruleutils_xx.c file
6. cleanup normalize.sed file from pg14 specific lines
7. delete all alternative output files for that particular PG version,
server_version_ge variable helps here
2025-03-12 12:43:01 +03:00
Naisila Puka 28b0b0e7a8 Bump Citus version into 13.0.0 (#7792)
We are using `release-13.0` branch for both development and release, to
deliver PG17 support in Citus.

Afterwards, we will (probably) merge this branch into main.

Some potential changes for main branch, after we are done working on
release-13.0:
- Merge changes from `release-13.0` to `main`
- Figure out what changes were there on 12.2, move them to 13.1 version.
In a nutshell: rename `12.1--12.2` to `13.0--13.1` and fix issues.
- Set version to 13.1devel
2025-03-12 12:25:49 +03:00
Naisila Puka ed71e65333 PG17 compatibility: Adjust print_extension_changes function for extra type outputs in PG17 (#7761)
In PG17, auto-generated array types, multirange types, and relation
rowtypes are treated as dependent objects, which changes the output of the
print_extension_changes function.

Relevant PG commit:
e5bc9454e527b1cba97553531d8d4992892fdeef


Here we create a table with only the basic extension types
in order to avoid printing extra ones for now.
This can be removed when we drop PG16 support.


https://github.com/citusdata/citus/actions/runs/11960253650/attempts/1#summary-33343972656
```diff

                  | table pg_dist_rebalance_strategy 
+                 | type citus.distribution_type[] 
+                 | type citus.pg_dist_object 
+                 | type pg_dist_shard 
+                 | type pg_dist_shard[] 
+                 | type pg_dist_shard_placement 
+                 | type pg_dist_shard_placement[] 
+                 | type pg_dist_transaction 
+                 | type pg_dist_transaction[] 
                  | view citus_dist_stat_activity 
                  | view pg_dist_shard_placement 
```
2025-03-12 12:25:49 +03:00
eaydingol 117bd1d04f
Disable nonmaindb interface (#7905)
DESCRIPTION: The PR disables the non-main db related features. 

The non-main db related features were introduced in
https://github.com/citusdata/citus/pull/7203.
2025-02-21 13:36:19 +03:00
Gürkan İndibay 51009d0191
Add support for alter/drop role propagation from non-main databases (#7461)
DESCRIPTION: Adds support for distributed `ALTER/DROP ROLE` commands
from the databases where Citus is not installed

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2024-02-28 08:58:28 +00:00
eaydingol f01c5f2593
Move remaining citus_internal functions (#7478)
Moves the following functions to the Citus internal schema: 

citus_internal_local_blocked_processes
citus_internal_global_blocked_processes
citus_internal_mark_node_not_synced
citus_internal_unregister_tenant_schema_globally
citus_internal_update_none_dist_table_metadata
citus_internal_update_placement_metadata
citus_internal_update_relation_colocation
citus_internal_start_replication_origin_tracking
citus_internal_stop_replication_origin_tracking
citus_internal_is_replication_origin_tracking_active


#7405

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2024-02-07 16:58:17 +03:00
eaydingol 594cb6f274
Move more citus internal functions (#7473)
Moves the following functions:

 citus_internal_delete_colocation_metadata 
 citus_internal_delete_partition_metadata 
 citus_internal_delete_placement_metadata 
 citus_internal_delete_shard_metadata 
 citus_internal_delete_tenant_schema
2024-01-31 23:00:04 +03:00
eaydingol d05174093b
Move citus internal functions (#7470)
Move more functions to citus_internal schema, the list:

citus_internal_add_placement_metadata
citus_internal_add_shard_metadata
citus_internal_add_tenant_schema
citus_internal_adjust_local_clock_to_remote
citus_internal_database_command

#7405
2024-01-31 11:45:19 +00:00
eaydingol f6ea619e27
Move citus internal functions (#7466)
Move the following functions from pg_catalog to citus_internal:

citus_internal_add_object_metadata
citus_internal_add_partition_metadata


#7405
2024-01-30 12:27:10 +03:00
eaydingol 5d673874f7
Move citus internal functions (#7456)
Move citus_internal_acquire_citus_advisory_object_class_lock and
citus_internal_add_colocation_metadata functions from pg_catalog to
citus_internal.

#7405
2024-01-26 11:46:05 +03:00
Halil Ozan Akgül 1cb2e1e4e8
Fixes create user queries from Citus non-main databases with other users (#7442)
This PR makes the connections to other nodes for
`mark_object_distributed` use the same user as
`execute_command_on_remote_nodes_as_user` so they'll use the same
connection.
2024-01-24 12:57:54 +03:00
Onur Tirtir 1d55debb98
Support CREATE / DROP database commands from any node (#7359)
DESCRIPTION: Adds support for issuing `CREATE`/`DROP` DATABASE commands
from worker nodes

With this commit, we allow issuing CREATE / DROP DATABASE commands from
worker nodes too.
As in #7278, this is not allowed when the coordinator is not added to
metadata because we don't ever sync metadata changes to coordinator
when adding coordinator to the metadata via
`SELECT citus_set_coordinator_host('<hostname>')`, or equivalently, via
`SELECT citus_add_node(<coordinator_node_name>, <coordinator_node_port>, 0)`.

We serialize database management commands by acquiring a Citus-specific
advisory lock on the first primary worker node if there are any workers in the
cluster. As opposed to what we've done in https://github.com/citusdata/citus/pull/7278
for role management commands, we try to avoid running into distributed deadlocks
as much as possible. This is because, while distributed deadlocks that can happen around
role management commands can be detected by Citus, this is not the case for database
management commands, because most of them cannot be run inside a transaction block.
In that case, Citus cannot even detect the distributed deadlock because the command is not
part of a distributed transaction at all, and then the command execution might not return
control back to the user for an indefinite amount of time.
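
A hedged end-to-end sketch of the new capability (the database name is illustrative):

```sql
-- On the coordinator, once, so that workers know about the coordinator:
SELECT citus_set_coordinator_host('<hostname>');

-- Then, connected to any worker node:
CREATE DATABASE analytics;
DROP DATABASE analytics;
```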
2024-01-08 16:47:49 +00:00
Halil Ozan Akgül b877d606c7
Adds 2PC distributed commands from other databases (#7203)
DESCRIPTION: Adds support for 2PC from non-Citus main databases

This PR only adds support for `CREATE USER` queries, other queries need
to be added. But it should be simple because this PR creates the
underlying structure.

Citus main database is the database where the Citus extension is
created. A non-main database is all the other databases that are in the
same node with a Citus main database.

When a `CREATE USER` query is run on a non-main database we:

1. Run `start_management_transaction` on the main database. This
function saves the outer transaction's xid (the non-main database
query's transaction id) and marks the current query as main db command.
2. Run `execute_command_on_remote_nodes_as_user("CREATE USER
<username>", <username to run the command>)` on the main database. This
function creates the users in the rest of the cluster by running the
query on the other nodes. The user on the current node is created by the
outer, non-main-db query, to make sure subsequent commands in the same
transaction can see this user.
3. Run `mark_object_distributed` on the main database. This function
adds the user to `pg_dist_object` in all of the nodes, including the
current one.

This PR also implements transaction recovery for the queries from
non-main databases.
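
A hedged sketch of what this enables from the user's point of view (the role name is illustrative):

```sql
-- Connected to a non-main database on a node whose main database has citus:
CREATE USER app_user;
-- Behind the scenes, Citus runs start_management_transaction,
-- execute_command_on_remote_nodes_as_user and mark_object_distributed
-- on the main database to propagate the role cluster-wide.
```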
2023-12-22 19:19:41 +03:00
Gürkan İndibay 3b556cb5ed
Adds create / drop database propagation support (#7240)
DESCRIPTION: Adds support for propagating `CREATE`/`DROP` database

In this PR, create and drop database support is added.

For CREATE DATABASE:
* "oid" option is not supported
* specifying "strategy" to be different than "wal_log" is not supported
* specifying "template" to be different than "template1" is not
supported

The last two are because those are not saved in `pg_database` and when
activating a node, we cannot assume what parameters were provided when
creating the database.

And "oid" is not supported because whether user specified an arbitrary
oid when creating the database is not saved in pg_database and we want
to avoid from oid collisions that might arise from attempting to use an
auto-assigned oid on workers.

Finally, in case of node activation, GRANTs for the database are also
propagated.

---------

Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>
Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-11-21 16:43:51 +03:00
aykut-bozkurt 26dc407f4a
bump citus and columnar into 12.2devel (#7200) 2023-09-14 12:03:09 +03:00
Onur Tirtir d628a4c21a
Add citus_schema_move() function (#7180)
Add citus_schema_move(), which can be used to move tenant tables within a distributed
schema to another node. The function has two variations, as simple wrappers around
citus_move_shard_placement() and citus_move_shard_placement_with_nodeid() respectively.
They pick a shard that belongs to the given tenant schema and resolve the source node
that contains the shards under the given tenant schema. Hence their signatures are quite
similar to those of the underlying functions:

```sql
-- citus_schema_move(), using target node name and node port
CREATE OR REPLACE FUNCTION pg_catalog.citus_schema_move(
	schema_id regnamespace,
	target_node_name text,
	target_node_port integer,
	shard_transfer_mode citus.shard_transfer_mode default 'auto')
RETURNS void
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_schema_move$$;

-- citus_schema_move(), using target node id
CREATE OR REPLACE FUNCTION pg_catalog.citus_schema_move(
	schema_id regnamespace,
	target_node_id integer,
	shard_transfer_mode citus.shard_transfer_mode default 'auto')
RETURNS void
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_schema_move_with_nodeid$$;
```
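
A usage sketch against the signatures above (the schema and node values are made up):

```sql
-- Move the tenant schema "tenant_a" by target node name and port:
SELECT citus_schema_move('tenant_a', 'worker-2.example.com', 5432);

-- Or by target node id, optionally forcing logical replication:
SELECT citus_schema_move('tenant_a', 3, shard_transfer_mode := 'force_logical');
```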
2023-09-08 12:03:53 +03:00
Gürkan İndibay b8bded6454
Adds citus_pause_node udf (#7089)
DESCRIPTION: Introduces the citus_pause_node UDF, which enables pausing a
node by node_id.

citus_pause_node takes a node_id parameter, fetches all the shards on
that node, and puts an AccessExclusiveLock on all of them. While this lock
is held, inserts are disabled until the citus_pause_node transaction is
closed.
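
A hedged sketch, assuming the UDF is callable by the name used in this description and that the node id comes from pg_dist_node:

```sql
BEGIN;
SELECT citus_pause_node(2);   -- 2 is a hypothetical node_id
-- inserts into shards on that node are blocked here
COMMIT;                       -- lock released when the transaction ends
```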

---------

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
2023-09-01 11:39:30 +03:00
Onur Tirtir a830862717 Not undistribute Citus local table when converting it to a reference table / single-shard table 2023-08-29 12:57:28 +03:00
Naisila Puka 2c50b5f7ff
PG16 compatibility - varnullingrels additions (#7107)
PG16 compatibility - part 7

Check out part 1 42d956888d
part 2 0d503dd5ac
part 3 907d72e60d
part 4 7c6b4ce103
part 5 6056cb2c29
part 6 b36c431abb
part 7 ee3153fe50

This commit is in the series of PG16 compatibility commits. PG16 introduced a new entry,
varnullingrels, to Var, the node that represents our partkey in pg_dist_partition.
This commit makes the necessary changes in Citus to support this.
Relevant PG commit:
2489d76c4906f4461a364ca8ad7e0751ead8aa0d

More PG16 compatibility commits are coming soon ...
2023-08-15 13:07:55 +03:00
Naisila Puka 0d503dd5ac
PG16 compatibility: ruleutils and successful CREATE EXTENSION (#7087)
PG16 compatibility - Part 2

Part 1 provided successful compilation against pg16beta2.
42d956888d

This PR provides ruleutils changes with pg16beta2 and successful CREATE EXTENSION command.
Note that more changes are needed in order to have successful regression tests.
More commits are coming soon ...

For any_value changes, I referred to this commit
8ef94dc1f5
where we did something similar for PG14 support.
2023-08-02 16:04:51 +03:00
Halil Ozan Akgül c99a93ffa7
Move SQL file changes for citus_shard_sizes fixes into the new 11.3-2 version (#7050)
This PR moves the `citus_shard_sizes` changes from #7003 and #7018 into
a new Citus version, 11.3-2.
2023-07-14 17:19:54 +03:00
aykut-bozkurt 609a5465ea
Bump Citus version into 12.1devel (#7061) 2023-07-14 13:12:30 +03:00
Halil Ozan Akgül 04f6868ed2
Add citus_schemas view (#6979)
DESCRIPTION: Adds citus_schemas view

The citus_schemas view will be created in the public schema if it exists;
if not, the view will be created in pg_catalog.
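
Querying it is a plain SELECT (a minimal sketch):

```sql
SELECT * FROM citus_schemas;
```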

Need to:
- [x] Add tests
- [x] Fix tests
2023-06-16 14:21:58 +03:00
Ahmet Gedemenli 002a88ae7f
Error for single shard table creation if replication factor > 1 (#7006)
Error for single shard table creation if replication factor > 1
2023-06-15 13:13:45 +03:00
Onur Tirtir dbdf04e8ba
Rename pg_dist tenant_schema to pg_dist_schema (#7001) 2023-06-14 12:12:15 +03:00
Gokhan Gulbiz e0ccd155ab
Make citus_stat_tenants work with schema-based tenants. (#6936)
DESCRIPTION: Enabling citus_stat_tenants to support schema-based
tenants.

This pull request modifies the existing logic to enable tenant
monitoring with schema-based tenants. The changes made are as follows:

- If a query has a partitionKeyValue (which serves as a tenant
key/identifier for distributed tables), Citus annotates the query with
both the partitionKeyValue and colocationId. This allows for accurate
tracking of the query.
- If a query does not have a partitionKeyValue, but its colocationId
belongs to a distributed schema, Citus annotates the query with only the
colocationId. The tenant monitor can then easily look up the schema to
determine if it's a distributed schema and make a decision on whether to
track the query.

---------

Co-authored-by: Jelte Fennema <jelte.fennema@microsoft.com>
2023-06-13 14:11:45 +03:00
aykut-bozkurt 213d363bc3
Add citus_schema_distribute/undistribute udfs to convert a schema into a tenant schema / back to a regular schema (#6933)
* Currently we do not allow any Citus tables other than Citus local
tables inside a regular schema before executing
`citus_schema_distribute`.
* `citus_schema_undistribute` expects only single shard distributed
tables inside a tenant schema.

DESCRIPTION: Adds the udf `citus_schema_distribute` to convert a regular
schema into a tenant schema.
DESCRIPTION: Adds the udf `citus_schema_undistribute` to convert a
tenant schema back to a regular schema.
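
A usage sketch for the two UDFs (the schema name is illustrative):

```sql
-- Convert an existing regular schema into a tenant schema:
SELECT citus_schema_distribute('tenant_a');

-- Convert it back into a regular schema:
SELECT citus_schema_undistribute('tenant_a');
```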

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2023-06-12 18:41:31 +03:00
Onur Tirtir 246b054a7d
Add support for schema-based-sharding via a GUC (#6866)
DESCRIPTION: Adds citus.enable_schema_based_sharding GUC that allows
sharding the database based on schemas when enabled.

* Refactor the logic that automatically creates Citus managed tables 

* Refactor CreateSingleShardTable() to allow specifying colocation id
instead

* Add support for schema-based-sharding via a GUC

### What this PR is about:
Add **citus.enable_schema_based_sharding GUC** to enable schema-based
sharding. Each schema created while this GUC is ON will be considered
as a tenant schema. Later on, regardless of whether the GUC is ON or
OFF, any table created in a tenant schema will be converted to a
single shard distributed table (without a shard key). All the tenant
tables that belong to a particular schema will be co-located with each
other and will have a shard count of 1.
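
A hedged sketch of the workflow this enables (schema and table names are made up):

```sql
SET citus.enable_schema_based_sharding TO on;

-- Any schema created while the GUC is on becomes a tenant schema:
CREATE SCHEMA tenant_a;

-- Tables created in it become single-shard distributed tables,
-- all colocated with each other:
CREATE TABLE tenant_a.orders (id bigint, total numeric);
CREATE TABLE tenant_a.users  (id bigint, name text);
```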

We introduce a new metadata table --pg_dist_tenant_schema-- to do the
bookkeeping for tenant schemas:
```sql
psql> \d pg_dist_tenant_schema
          Table "pg_catalog.pg_dist_tenant_schema"
┌───────────────┬─────────┬───────────┬──────────┬─────────┐
│    Column     │  Type   │ Collation │ Nullable │ Default │
├───────────────┼─────────┼───────────┼──────────┼─────────┤
│ schemaid      │ oid     │           │ not null │         │
│ colocationid  │ integer │           │ not null │         │
└───────────────┴─────────┴───────────┴──────────┴─────────┘
Indexes:
    "pg_dist_tenant_schema_pkey" PRIMARY KEY, btree (schemaid)
    "pg_dist_tenant_schema_unique_colocationid_index" UNIQUE, btree (colocationid)

psql> table pg_dist_tenant_schema;
┌───────────┬───────────────┐
│ schemaid  │ colocationid  │
├───────────┼───────────────┤
│     41963 │            91 │
│     41962 │            90 │
└───────────┴───────────────┘
(2 rows)
```

The colocation id column of pg_dist_tenant_schema can never be NULL, even
for tenant schemas that don't have a tenant table yet. This is
because we assign colocation ids to tenant schemas as soon as they
are created. That way, we can keep associating tenant schemas with
particular colocation groups even if all the tenant tables of a tenant
schema are dropped and recreated later on.

When a tenant schema is dropped, we delete the corresponding row from
pg_dist_tenant_schema. In that case, we delete the corresponding
colocation group from pg_dist_colocation as well.

### Future work for 12.0 release:
We're building schema-based sharding on top of the infrastructure that
adds support for creating distributed tables without a shard key
(https://github.com/citusdata/citus/pull/6867).
However, not all the operations that can be done on distributed tables
without a shard key necessarily make sense (in the same way) in the
context of schema-based sharding. For example, we need to think about
what happens if user attempts altering schema of a tenant table. We
will tackle such scenarios in a future PR.

We will also add a new UDF --citus.schema_tenant_set() or such-- to
allow users to use an existing schema as a tenant schema, and another
one --citus.schema_tenant_unset() or such-- to stop using a schema as
a tenant schema in future PRs.
2023-05-26 10:49:58 +03:00
Onur Tirtir e7abde7e81
Prevent downgrades when there is a single-shard table in the cluster (#6908)
Also add a few tests for Citus/PG upgrade/downgrade scenarios.
2023-05-16 09:44:28 +02:00
Onur Tirtir db2514ef78 Call null-shard-key tables as single-shard distributed tables in code 2023-05-03 17:02:43 +03:00
Onur Tirtir fa467e05e7 Add support for creating distributed tables with a null shard key (#6745)
With this PR, we allow creating distributed tables without
specifying a shard key via create_distributed_table(). Here are the
important details about those tables:
* Specifying `shard_count` is not allowed because it is assumed to be 1.
* We mostly refer to such tables as "null shard-key" tables in code /
comments.
* To avoid a breaking layout change in create_distributed_table(),
instead of throwing an error, it informs the user that the
`distribution_type` param is ignored unless it's explicitly set to NULL
or 'h'.
* `colocate_with` param allows colocating such null shard-key tables to
  each other.
* We define this table type, i.e., NULL_SHARD_KEY_TABLE, as a subclass
of
  DISTRIBUTED_TABLE because we mostly want to treat them as distributed
  tables in terms of SQL / DDL / operation support.
* Metadata for such tables look like:
  - distribution method => DISTRIBUTE_BY_NONE
  - replication model => REPLICATION_MODEL_STREAMING
- colocation id => **!=** INVALID_COLOCATION_ID (distinguishes from
Citus local tables)
* We assign colocation groups for such tables to different nodes in a
  round-robin fashion based on the modulo of "colocation id".

Note that this PR doesn't care about DDL (except CREATE TABLE) / SQL /
operation (i.e., Citus UDFs) support for such tables but adds a
preliminary
API.
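
A hedged sketch of creating such tables (the table names are illustrative; passing NULL as the distribution column is how the shard key is omitted):

```sql
SELECT create_distributed_table('events', null);

-- colocate another null shard-key table with it:
SELECT create_distributed_table('event_types', null, colocate_with := 'events');
```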
2023-05-03 16:18:27 +03:00
Onur Tirtir 0194657c5d
Bump Citus to 12.0devel (#6840) 2023-04-10 12:05:18 +03:00
Gokhan Gulbiz fa00fc6e3e
Add upgrade/downgrade paths between v11.2.2 and v11.3.1 (#6820)
DESCRIPTION: PR description that will go into the change log, up to 78
characters

---------

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
2023-04-06 12:46:09 +03:00
Ahmet Gedemenli 83a2cfbfcf
Move cleanup record test to upgrade schedule (#6794)
DESCRIPTION: Move cleanup record test to upgrade schedule
2023-04-06 11:42:49 +03:00
Halil Ozan Akgül 52ad2d08c7
Multi tenant monitoring (#6725)
DESCRIPTION: Adds views that monitor statistics on tenant usages

This PR adds `citus_stats_tenants` view that monitors the tenants on the
cluster.

`citus_stats_tenants` shows the node id, colocation id, tenant
attribute, read count in this period and last period, and query count in
this period and last period of the tenant.
Tenant attribute currently is the tenant's distribution column value,
later when schema based sharding is introduced, this meaning might
change.
A period is a time bucket the queries are counted by. Read and query
counts for this period can increase until the current period ends. After
that those counts are moved to last period's counts, which cannot
change. The period length can be set using 'citus.stats_tenants_period'.

`SELECT` queries are counted as _read_ queries, `INSERT`, `UPDATE` and
`DELETE` queries are counted as _write_ queries. So in the view read
counts are `SELECT` counts and query counts are `SELECT`, `INSERT`,
`UPDATE` and `DELETE` count.

The data is stored in shared memory, in a struct named
`MultiTenantMonitor`.

`citus_stats_tenants` shows the data from local tenants.

`citus_stats_tenants` show up to `citus.stats_tenant_limit` number of
tenants.
The tenants are scored based on the number of queries they run and the
recency of those queries. Every query ran increases the score of tenant
by `ONE_QUERY_SCORE`, and after every period ends the scores are halved.
Halving is done lazily.
To retain information longer, the monitor keeps up to 3 times
`citus.stats_tenant_limit` tenants. When the tenant count hits `3 *
citus.stats_tenant_limit`, the last `citus.stats_tenant_limit` tenants are
removed. To see all stored tenants you can use
`citus_stats_tenants(return_all_tenants := true)`.
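
A usage sketch using the names as they appear in this PR description (note the rename item in the checklist below):

```sql
-- Top tenants tracked on this node:
SELECT * FROM citus_stats_tenants;

-- Include every tenant still kept in shared memory:
SELECT * FROM citus_stats_tenants(return_all_tenants := true);
```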

- [x] Create collector view that gets data from all nodes. #6761 
- [x] Add monitoring log #6762 
- [x] Create enable/disable GUC #6769 
- [x] Parse the annotation string correctly #6796 
- [x] Add local queries and prepared statements #6797
- [x] Rename to citus_stat_statements #6821 
- [x] Run pgbench
- [x] Fix role permissions #6812

---------

Co-authored-by: Gokhan Gulbiz <ggulbiz@gmail.com>
Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2023-04-05 17:44:17 +03:00