Compare commits

...

89 Commits

Author SHA1 Message Date
Mehmet YILMAZ 31911d8297
PG18 – Respect VACUUM/ANALYZE ONLY semantics for Citus tables (#8365)
fixes #8364

PostgreSQL 18 changes VACUUM/ANALYZE to recurse into inheritance
children by default, and introduces `ONLY` to limit processing to the
parent. Upstream change:

[https://github.com/postgres/postgres/commit/62ddf7ee9](https://github.com/postgres/postgres/commit/62ddf7ee9)

For Citus tables, we should treat shard placements as “children” and
avoid propagating `VACUUM/ANALYZE` to shards when the user explicitly
asks for `ONLY`.

This PR adjusts the Citus VACUUM handling to align with PG18 semantics,
and adds regression coverage on both regular distributed tables and
partitioned distributed tables.

---

### Behavior changes

* Introduce a per-relation helper struct:

  ```c
  typedef struct CitusVacuumRelation
  {
      VacuumRelation *vacuumRelation;
      Oid             relationId;
  } CitusVacuumRelation;
  ```

  This lets us keep both:

  * the resolved relation OID (for `IsCitusTable`, task building), and
  * the original `VacuumRelation` node (for the column list and ONLY/inh flag).

* Replace the old `VacuumRelationIdList` / `ExtractVacuumTargetRels`
flow with:

  ```c
  static List *VacuumRelationList(VacuumStmt *vacuumStmt,
                                  CitusVacuumParams vacuumParams);
  ```

  `VacuumRelationList` now:

  * Iterates over `vacuumStmt->rels`.
  * Resolves `relid` via `RangeVarGetRelidExtended` when `relation` is present.
  * Falls back to locking `VacuumRelation->oid` when only an OID is available.
  * Respects `VACOPT_FULL` for lock mode and `VACOPT_SKIP_LOCKED` for locking behavior.
  * Builds a `List *` of `CitusVacuumRelation` entries.

* Update:

  ```c
  IsDistributedVacuumStmt(List *vacuumRelationList);
  ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt,
                                   List *vacuumRelationList,
                                   CitusVacuumParams vacuumParams);
  ```

  to operate on `CitusVacuumRelation` instead of bare OIDs.

* Implement `ONLY` semantics in `ExecuteVacuumOnDistributedTables`:

  ```c
  RangeVar *relation = vacuumRelation->relation;
  if (relation != NULL && !relation->inh)
  {
      /* ONLY specified, so don't recurse to shard placements */
      continue;
  }
  ```

  Effect:

  * `VACUUM / ANALYZE` (no `ONLY`) on a Citus table: behavior unchanged, Citus creates tasks and propagates to shard placements.
  * `VACUUM ONLY <citus_table>` / `ANALYZE ONLY <citus_table>`:

    * Core still processes the coordinator relation as usual.
    * Citus **skips** building tasks for shard placements, so we do not recurse into distributed children.
  * The code compiles and behaves as before on pre-PG18; the new behavior becomes observable only when the core planner starts setting `inh = false` for `ONLY` (PG18).

* Unqualified `VACUUM` / `ANALYZE` (no rels) is unchanged and still handled via `ExecuteUnqualifiedVacuumTasks`.
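
The skip decision can be sketched as a small model (Python pseudocode, not the actual C implementation; the dict field names are illustrative):

```python
# Toy model of the ONLY handling described above: propagate VACUUM/ANALYZE
# to shard placements only for Citus tables whose statement did not use
# ONLY (PG18 sets inh = false when ONLY is specified).
def shard_vacuum_targets(vacuum_relations):
    """vacuum_relations: dicts with 'oid', 'is_citus_table', 'inh'."""
    targets = []
    for rel in vacuum_relations:
        if not rel["is_citus_table"]:
            continue  # not distributed: core alone handles it
        if not rel["inh"]:
            continue  # ONLY specified, so don't recurse to shard placements
        targets.append(rel["oid"])
    return targets
```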

* Remove now-redundant helpers:

  * `VacuumColumnList`
  * `ExtractVacuumTargetRels`

Column lists are now taken directly from `vacuumRelation->va_cols` via
`CitusVacuumRelation`.

---

### Testing

Extend `src/test/regress/sql/pg18.sql` and `expected/pg18.out` with two
PG18-only blocks that verify we do not recurse into shard placements
when `ONLY` is used:

1. **Simple distributed table (`pg18_vacuum_part`)**

   * Create and distribute a regular table:

     ```sql
     CREATE SCHEMA pg18_vacuum_part;
     SET search_path TO pg18_vacuum_part;
     CREATE TABLE vac_analyze_only (a int);
     SELECT create_distributed_table('vac_analyze_only', 'a');
     INSERT INTO vac_analyze_only VALUES (1), (2), (3);
     ```

   * On the coordinator:

     * Run `ANALYZE vac_analyze_only;` and later `ANALYZE ONLY vac_analyze_only;`.
     * Run `VACUUM vac_analyze_only;` and later `VACUUM ONLY vac_analyze_only;`.

   * On `worker_1`:

     * Capture `coalesce(max(last_analyze), 'epoch')` from `pg_stat_user_tables` for `vac_analyze_only_%` into `:analyze_before_only`, then assert:

       ```sql
       SELECT max(last_analyze) = :'analyze_before_only'::timestamptz AS analyze_only_skipped;
       ```

     * Capture `coalesce(max(last_vacuum), 'epoch')` into `:vacuum_before_only`, then assert:

       ```sql
       SELECT max(last_vacuum) = :'vacuum_before_only'::timestamptz AS vacuum_only_skipped;
       ```

   Both checks return `t`, confirming `ONLY` does not change `last_analyze` / `last_vacuum` on shard tables.

2. **Partitioned distributed table (`pg18_vacuum_part_dist`)**

   * Create a partitioned table whose parent is distributed:

     ```sql
     CREATE SCHEMA pg18_vacuum_part_dist;
     SET search_path TO pg18_vacuum_part_dist;
     SET citus.shard_count = 2;
     SET citus.shard_replication_factor = 1;

     CREATE TABLE part_dist (id int, v int) PARTITION BY RANGE (id);
     CREATE TABLE part_dist_1 PARTITION OF part_dist FOR VALUES FROM (1) TO (100);
     CREATE TABLE part_dist_2 PARTITION OF part_dist FOR VALUES FROM (100) TO (200);

     SELECT create_distributed_table('part_dist', 'id');
     INSERT INTO part_dist SELECT g, g FROM generate_series(1, 199) g;
     ```

   * On the coordinator:

     * Run `ANALYZE part_dist;` then `ANALYZE ONLY part_dist;`.
     * Run `VACUUM part_dist;` then `VACUUM ONLY part_dist;` (PG18 emits the expected warning: `VACUUM ONLY of partitioned table "part_dist" has no effect`).

   * On `worker_1`:

     * Capture `coalesce(max(last_analyze), 'epoch')` for `part_dist_%` into `:analyze_before_only`, then assert:

       ```sql
       SELECT max(last_analyze) = :'analyze_before_only'::timestamptz
              AS analyze_only_partitioned_skipped;
       ```

     * Capture `coalesce(max(last_vacuum), 'epoch')` into `:vacuum_before_only`, then assert:

       ```sql
       SELECT max(last_vacuum) = :'vacuum_before_only'::timestamptz
              AS vacuum_only_partitioned_skipped;
       ```

Both checks return `t`, confirming that even for a partitioned
distributed parent, `VACUUM/ANALYZE ONLY` does not recurse into shard
placements, and Citus behavior matches PG18’s “ONLY = parent only”
semantics.
2025-12-05 16:50:42 +03:00
Colm 3399d660f3
PG18: fix incorrectly failing tests in pg18 regress. (#8370)
Some tests showed diffs because of dependencies on citus GUCs and
getting shuffled around in the test order. This commit fixes that.
2025-12-05 12:37:05 +00:00
Colm 002046b87b
PG18: Add support for virtual generated columns. (#8346)
Generated columns can be virtual (not stored) and this is the default.
This PG18 feature requires tweaking citus_ruleutils and deparse table to
support in Citus. Relevant PG commit: 83ea6c540.
2025-12-04 19:51:45 +00:00
Colm 79cabe7eca
PG18: CHECK constraints can be ENFORCED / NOT ENFORCED. (#8349)
DESCRIPTION: Adds propagation of ENFORCED / NOT ENFORCED on CHECK
constraints.
    
Add propagation support to Citus ruleutils and appropriate regress
tests. Relevant PG commit: ca87c41.
2025-12-03 08:01:01 +00:00
Colm e28591df08
PG18: Foreign key constraint can be specified NOT ENFORCED. (#8347)
Test that FOREIGN KEY .. NOT ENFORCED is propagated when applied to
Citus tables. No C code changes required, ruleutils handles it. Relevant
PG commit eec0040c4. Given that Citus does not yet support ALTER TABLE
.. ALTER CONSTRAINT its usefulness is questionable, but we propagate its
definition at least.
2025-12-03 07:10:55 +00:00
Mehmet YILMAZ a39ce7942f
PG18 - small fix on pg18.out (#8367) 2025-12-03 09:41:24 +03:00
Mehmet YILMAZ ae2eb65be0
PG18 - Stabilize EXPLAIN outputs: prefer YAML; drop JSON-side “Disabled” filtering; add alt expected files (#8336)
PG18 introduces `Disabled: <bool>` and planning/memory details that
cause unstable diffs, especially with `FORMAT JSON` when keys are
removed. This PR:

* Switches EXPLAIN assertions to `FORMAT YAML` wherever JSON structure
is not required.
* Stops normalizing/removing `"Disabled"` in JSON to avoid creating
invalid trailing commas.
* Adds alternative expected files for tests that must remain in JSON and
include `Disabled` lines.

### Rationale

* YAML is line-oriented and avoids JSON’s trailing-comma pitfalls; it
yields more stable diffs across PG15–PG18.
* Some tests intentionally assert JSON structure; those stay JSON but
use dedicated `.out` alternates rather than regex-based key removal.
* A proper “parse → modify → re-emit JSON” helper is being explored but
is non-trivial; this PR adopts the lowest-risk path now.

### What changed

1. **Test format**

* Convert `EXPLAIN (..., FORMAT JSON)` to `FORMAT YAML` in tests that
only care about plan shape/fields.
* Keep JSON only where the test explicitly validates JSON; introduce alt
expected files for those.

2. **Normalization**

* Remove JSON-side filtering of `"Disabled"` (no deletion, no regex
rewrite).
* Retain existing normalization for text/YAML (numbers, `Index
Searches`, `Window: ...`, buffers, etc.).

3. **Expected files**

   * Update YAML-based expected outputs.
   * Add `.out` alternates for JSON cases that include `Disabled`.
2025-12-01 15:29:03 +03:00
Mehmet YILMAZ c600eabd82
PG18 - Handle publish_generated_columns in distributed publications (#8360)
https://github.com/postgres/postgres/commit/7054186c4

fixes #8358 

This PR wires up PostgreSQL 18’s `publish_generated_columns` publication
option in Citus and adds regression coverage to ensure it behaves
correctly for distributed tables, without changing existing DDL output
for publications that rely on the default.

---

### 1. Preserve `publish_generated_columns` when rebuilding publications

In `BuildCreatePublicationStmt`:

* On PG18+ we now read the new `pubgencols` field from `pg_publication`
and map it as follows:

  * `'n'` → default (`none`)
  * `'s'` → `stored`

* For `pubgencols == 's'` we append a `publish_generated_columns`
defelem to the reconstructed statement:

  ```c
  #if PG_VERSION_NUM >= PG_VERSION_18
      if (publicationForm->pubgencols == 's')    /* stored */
      {
          DefElem *pubGenColsOption =
              makeDefElem("publish_generated_columns",
                          (Node *) makeString("stored"),
                          -1);

          createPubStmt->options =
              lappend(createPubStmt->options, pubGenColsOption);
      }
      else if (publicationForm->pubgencols != 'n') /* 'n' = none (default) */
      {
          ereport(ERROR,
                  (errmsg("unexpected pubgencols value '%c' for publication %u",
                          publicationForm->pubgencols, publicationId)));
      }
      }
  #endif
  ```

* For `pubgencols == 'n'` we do **not** emit an option and rely on
PostgreSQL’s default.

* Any value other than `'n'` or `'s'` raises an error rather than
silently producing incorrect DDL.

This ensures:

* Publications that explicitly use `publish_generated_columns = stored`
are reconstructed with that option on workers, so workers get
`pubgencols = 's'`.
* Publications that use the default (`none`) continue to produce the
same `CREATE PUBLICATION ... WITH (...)` text as before (no extra
`publish_generated_columns = 'none'` noise), fixing the unintended diffs
in existing publication tests.
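
The mapping can be summarized in a tiny sketch (Python, illustrative only; the real logic is the C snippet above):

```python
# Illustrative mapping of pg_publication.pubgencols to the option appended
# when rebuilding CREATE PUBLICATION; None means "emit nothing, rely on
# the default", anything unknown is rejected rather than producing bad DDL.
def pubgencols_option(pubgencols):
    if pubgencols == 's':      # stored
        return "publish_generated_columns = stored"
    if pubgencols == 'n':      # none (default)
        return None
    raise ValueError(f"unexpected pubgencols value {pubgencols!r}")
```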

---

### 2. New PG18 regression coverage for distributed publications

In `src/test/regress/sql/pg18.sql`:

* Create a table with a stored generated column and make it distributed
so the publication goes through Citus DDL propagation:

  ```sql
  CREATE TABLE gen_pub_tab (
      id int primary key,
      a  int,
      b  int GENERATED ALWAYS AS (a * 10) STORED
  );

  SELECT create_distributed_table('gen_pub_tab', 'id', colocate_with := 'none');
  ```

* Create two publications that exercise both `pubgencols` values:

  ```sql
  CREATE PUBLICATION pub_gen_cols_stored
      FOR TABLE gen_pub_tab
      WITH (publish = 'insert, update', publish_generated_columns = stored);

  CREATE PUBLICATION pub_gen_cols_none
      FOR TABLE gen_pub_tab
      WITH (publish = 'insert, update', publish_generated_columns = none);
  ```

* On coordinator and both workers, assert the catalog contents:

  ```sql
  SELECT pubname, pubgencols
  FROM pg_publication
  WHERE pubname IN ('pub_gen_cols_stored', 'pub_gen_cols_none')
  ORDER BY pubname;
  ```

  Expected on all three nodes:

  * `pub_gen_cols_stored | s`
  * `pub_gen_cols_none   | n`

This test verifies that:

* `pubgencols` is correctly set on the coordinator for both `stored` and
`none`.
* Citus propagates the setting unchanged to all workers for a
distributed table.
2025-12-01 09:17:57 +00:00
Mehmet YILMAZ 662b7248db
PG18 - Normalize window output and add filter (#8344)
fixes #8156


8b1b342544


* Extend `normalize.sed` to:

  * Rewrite auto-generated window names like `OVER w1` back to `OVER (?)` on:

    * `Sort Key: …`
    * `Group Key: …`
    * `Output: …`
  * Leave functional window specs like `OVER (PARTITION BY …)` untouched.

* Use `public.explain_filter(...)` around EXPLAINs in window-related tests to:

  * Avoid plan text churn from PG18 planner/EXPLAIN changes while still checking that we use the Citus executor and expected node types.

* Update expected outputs in:

  * `mixed_relkind_tests.out`
  * `multi_explain*.out`
  * `multi_outer_join_columns*.out`
  * `multi_subquery_window_functions.out`
  * `multi_test_helpers.out`
  * `window_functions.out`
  to match the filtered EXPLAIN output on PG18 while remaining compatible with older PG versions.
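
A rough Python equivalent of the new rewrite (an approximation of the sed rule, assuming PG18 names windows `w1`, `w2`, …):

```python
import re

# Map an auto-generated window reference like "OVER w1" back to "OVER (?)"
# while leaving explicit specs such as "OVER (PARTITION BY ...)" alone.
OVER_NAME = re.compile(r'\bOVER w\d+\b')

def normalize_window_refs(line):
    return OVER_NAME.sub('OVER (?)', line)
```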
2025-11-19 22:48:24 +03:00
Mehmet YILMAZ c843cb2060
PG18: Normalize EXPLAIN “Actual Rows” decimals in normalize.sed (#8290)
fixes #8266 

* **Strip trivial `.0…`** tails early (e.g., `111111.00 → 111111`) for
text EXPLAIN.
* **Special-case for Seq Scan**: map sub-1 averages to `0` only on `Seq
Scan` lines (e.g., `0.50 → 0`) to match pre-PG18 behavior seen in
expected files.
* **General text EXPLAIN**: map sub-1 averages (`0.xxx`) to `1` for
non-Seq-Scan nodes (e.g., `Append`, `Custom Scan`) to align with
historic “at least one row observed” semantics in our expected plans.
* **Fallback normalizer across all formats**: convert any remaining
decimals to a placeholder `N.N`, then collapse to integer `N` for
text/YAML/JSON.



Ordered block added to `src/test/regress/bin/normalize.sed`:

```sed
# --- PG18 Actual Rows normalization ---
# New in PG18: Actual Rows in EXPLAIN output are now rounded to
# 1) 0.50 (and 0.5, 0.5000...) -> 0
s/(actual[[:space:]]*rows[[:space:]]*[=:][[:space:]]*)0\.50*/\10/gI
s/(actual[^)]*rows[[:space:]]*=[[:space:]]*)0\.50*/\10/gI

# 2) 0.51+ -> 1
s/(actual[[:space:]]*rows[[:space:]]*[=:][[:space:]]*)0\.(5[1-9][0-9]*|[6-9][0-9]*)/\11/gI
s/(actual[^)]*rows[[:space:]]*=[[:space:]]*)0\.(5[1-9][0-9]*|[6-9][0-9]*)/\11/gI

# 3) Strip trivial trailing ".0..." (6.00 -> 6)  [keep your existing cross-format rules]
s/(actual[[:space:]]*rows[[:space:]]*[=:][[:space:]]*)([0-9]+)\.0+/\1\2/gI
s/(actual[^)]*rows[[:space:]]*=[[:space:]]*)([0-9]+)\.0+/\1\2/gI

# 4) YAML/XML/JSON: strip trailing ".0..."
s/(Actual[[:space:]]+Rows:[[:space:]]*[0-9]+)\.0+/\1/gI
s/(<Actual-Rows>[0-9]+)\.0+(<\/Actual-Rows>)/\1\2/g
s/("Actual[[:space:]]+Rows":[[:space:]]*[0-9]+)\.0+/\1/gI

# 5) Placeholder cleanups (kept from existing rules; harmless if unused)
#    JSON placeholder cleanup: '"Actual Rows": N.N' -> N
s/("Actual[[:space:]]+Rows":[[:space:]]*)N\.N/\1N/gI
#    Text EXPLAIN collapse: "rows=N.N" -> "rows=N"
s/(rows[[:space:]]*=[[:space:]]*)N\.N/\1N/gI
#    YAML placeholder: "Actual Rows: N.N" -> "Actual Rows: N"
s/(Actual[[:space:]]+Rows:[[:space:]]*)N\.N/\1N/gI
# --- PG18 Actual Rows normalization ---
```

### Examples

**Before (PG18):**

```
Append (actual rows=0.60 loops=5)
Custom Scan (ColumnarScan) ... (actual rows=0.67 loops=3)
Seq Scan ... (actual rows=0.50 loops=2)
Custom Scan (ColumnarScan) ... (actual rows=111111.00 loops=1)
```

**After normalization:**

```
Append (actual rows=1 loops=5)
Custom Scan (ColumnarScan) ... (actual rows=1 loops=3)
Seq Scan ... (actual rows=0 loops=2)
Custom Scan (ColumnarScan) ... (actual rows=111111 loops=1)
```
2025-11-19 15:19:12 +00:00
Colm 4e47293f9f
PG18: syntax & semantics behavior in Citus, part 1. (#8335)
Includes PG18 tests for:
- OLD/NEW support in RETURNING clause of DML queries (PG commit 80feb727c)
- WITHOUT OVERLAPS in PRIMARY KEY and UNIQUE constraints (PG commit fc0438b4e)
- COLUMNS clause in JSON_TABLE (PG commit bb766cd)
- Foreign tables created with LIKE <table> clause (PG commit 302cf1575)
- Foreign Key constraint with PERIOD clause (PG commit 89f908a6d)
- COPY command REJECT_LIMIT option (PG commit 4ac2a9bec)
- COPY TABLE TO on a materialized view (PG commit 534874fac)

Partially addresses issue #8250
2025-11-17 11:08:30 +00:00
Mehmet YILMAZ accd01fbf6
PG18 - add alternative out for multi_mx_hide_shard_names test (#8333)
fixes #8279

PostgreSQL 18 planner changes (probably AIO and updated cost model) make
sequential scans cheaper, so the psql `\d table`-style query that uses a
regex on `pg_class.relname` no longer chooses an index scan. This causes
a plan difference in the `mx_hide_shard_names` regression test.

This patch adds an alternative out for the multi_mx_hide_shard_names
test.
2025-11-14 08:55:45 +00:00
eaydingol cf533ebae9
Add breaking change detection for minor version upgrades (#8334)
This PR introduces infrastructure and validation to detect breaking
changes during Citus minor version upgrades, designed to run in release
branches only.

**Breaking change detection:**

- [GUCs] Detects removed GUCs and changes to default values
- [UDFs] Detects removed functions and function signature changes
  - Supports backward-compatible function overloading (new optional parameters allowed)
- [types] Detects removed data types
- [tables/views] Detects removed tables/views and removed/changed
columns
- New make targets for minor version upgrade tests
- Follow-up PRs will add test schedules with different upgrade scenarios

The test will be enabled in release branches (e.g., release-13) via the
new test-citus-minor-upgrade job shown below. It will not run on the
main branch.

**Testing**

Verified locally with sample breaking changes:
`make check-citus-minor-upgrade-local citus-old-version=v13.2.0`

**Test case 1:** Backward-compatible signature change (allowed)
```
-- Old: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer)
-- New: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer, pBlockedByPid integer DEFAULT NULL)
```
No breaking change detected (new parameter has DEFAULT)

**Test case 2:** Incompatible signature change (breaking)
```
-- Old: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer)
-- New: CREATE FUNCTION pg_catalog.citus_blocking_pids(pBlockedPid integer, pBlockedByPid integer)
```
Breaking change detected:
`UDF signature removed: pg_catalog.citus_blocking_pids(pblockedpid
integer) RETURNS integer[]`
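
The compatibility rule can be sketched as follows (a toy model, not the actual upgrade-test tooling):

```python
# An overload is backward compatible only when existing parameters are
# unchanged and every appended parameter carries a DEFAULT.
def is_breaking(old_params, new_params):
    """Each param is a (name, type, has_default) tuple."""
    if len(new_params) < len(old_params):
        return True                      # parameters were removed
    for (o_name, o_type, _), (n_name, n_type, _) in zip(old_params, new_params):
        if (o_name, o_type) != (n_name, n_type):
            return True                  # an existing parameter changed
    # any appended parameter must carry a DEFAULT
    return any(not has_default
               for _, _, has_default in new_params[len(old_params):])
```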

**Test case 3:** GUC changes (breaking)

- Removed `citus.max_worker_nodes_tracked`
- Changed default value of `citus.max_shared_pool_size` from 0 to 4

Breaking change detected:

```
The default value of GUC citus.max_shared_pool_size was changed from 0 to 4
GUC citus.max_worker_nodes_tracked was removed
```

**Test case 4:** Table/view changes

- Dropped `pg_catalog.pg_dist_rebalance_strategy` and removed a column
from `pg_catalog.citus_lock_waits`

```
  - Column blocking_nodeid in table/view pg_catalog.citus_lock_waits was removed
  - Table/view pg_catalog.pg_dist_rebalance_strategy was removed
```

**Test case 5:** Remove a custom type

- Dropped `cluster_clock` and the objects that depend on it. In addition to the dependent objects, the test shows:
```
 - Type pg_catalog.cluster_clock was removed
```

Sample new job for build and test workflow (for release branches):

```
  test-citus-minor-upgrade:
    name: PG17 - check-citus-minor-upgrade
    runs-on: ubuntu-latest
    container:
      image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}"
      options: --user root
    needs:
    - params
    - build
    env:
      citus_version: 13.2
    steps:
    - uses: actions/checkout@v4
    - uses: "./.github/actions/setup_extension"
      with:
        skip_installation: true
    - name: Install and test citus minor version upgrade
      run: |-
        gosu circleci \
          make -C src/test/regress \
            check-citus-minor-upgrade \
            bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
            citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
            citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar;
    - uses: "./.github/actions/save_logs_and_results"
      if: always()
      with:
        folder: ${{ env.PG_MAJOR }}_citus_minor_upgrade
    - uses: "./.github/actions/upload_coverage"
      if: always()
      with:
        flags: ${{ env.PG_MAJOR }}_citus_minor_upgrade
        codecov_token: ${{ secrets.CODECOV_TOKEN }}
```
2025-11-14 06:48:35 +00:00
Mehmet YILMAZ 8bba66f207
Fix EXPLAIN output in regression tests for consistency (#8332) 2025-11-13 11:16:33 +03:00
Mehmet YILMAZ f80fa1c83b
PG18 - Adjust columnar path tests for PG18 OR clause optimization (#8337)
fixes #8264

PostgreSQL 18 introduced a planner improvement (commit `ae4569161`) that
rewrites simple `OR` equality clauses into `= ANY(...)` forms, allowing
the use of a single index scan instead of multiple scans or a custom
scan.
This change affects the columnar path tests where queries like `a=0 OR
a=5` previously chose a Columnar or Seq Scan plan.

In this PR:

* Updated test expectations for `uses_custom_scan` and `uses_seq_scan`
to reflect the new index scan plan.

This keeps the test output consistent with PostgreSQL 18’s updated
planner behavior.
2025-11-13 09:32:21 +03:00
Mehmet YILMAZ 4244bc8516
PG18: Normalize verbose CREATE SUBSCRIPTION connect errors (#8326)
fixes #8317


0d8bd0a72e

PG18 changed the wording of connection failures during `CREATE
SUBSCRIPTION` to include a subscription prefix and a verbose “connection
to server … failed:” preamble. This breaks one regression output
(`multi_move_mx`). This PR adds normalization rules to map PG18 output
back to the prior form so results are stable across PG15–PG18.

**What changes**
Add two rules in `src/test/regress/bin/normalize.sed`:

```sed
# PG18: drop 'subscription "<name>"' prefix
# remove when PG18 is the minimum supported version
s/^[[:space:]]*ERROR:[[:space:]]+subscription "[^"]+" could not connect to the publisher:[[:space:]]*/ERROR: could not connect to the publisher: /I

# PG18: drop verbose 'connection to server … failed:' preamble
s/^[[:space:]]*ERROR:[[:space:]]+could not connect to the publisher:[[:space:]]*connection to server .* failed:[[:space:]]*/ERROR: could not connect to the publisher: /I
```

**Before (PG18)**

```
ERROR:  subscription "subs_01" could not connect to the publisher:
        connection to server at "localhost" (::1), port 57637 failed:
        root certificate file "/non/existing/certificate.crt" does not exist
```

**After normalization**

```
ERROR:  could not connect to the publisher:
        root certificate file "/non/existing/certificate.crt" does not exist
```
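
For illustration, the two rules translate to Python `re` roughly as follows (an approximation applied per line; the real normalization runs via sed):

```python
import re

# Normalize PG18's verbose CREATE SUBSCRIPTION connect errors back to the
# pre-PG18 wording.
CONNECT_RULES = [
    # drop the 'subscription "<name>"' prefix
    (re.compile(r'^\s*ERROR:\s+subscription "[^"]+" could not connect '
                r'to the publisher:\s*'),
     'ERROR: could not connect to the publisher: '),
    # drop the verbose 'connection to server ... failed:' preamble
    (re.compile(r'^\s*ERROR:\s+could not connect to the publisher:\s*'
                r'connection to server .* failed:\s*'),
     'ERROR: could not connect to the publisher: '),
]

def normalize_connect_error(line):
    for pattern, repl in CONNECT_RULES:
        line = pattern.sub(repl, line)
    return line
```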

**Why**
Maintain identical regression outputs across supported PG versions while
Citus still supports PG<18.
2025-11-10 08:01:47 +00:00
Mehmet YILMAZ b2356f1c85
PG18: Make EXPLAIN ANALYZE output stable by routing through explain_filter and hiding footers (#8325)
PostgreSQL 18 adds a new line to text EXPLAIN with ANALYZE (`Index
Searches: N`). That extra line both creates noise and bumps psql’s `(N
rows)` footer. This PR keeps ANALYZE (so statements still execute) while
removing the version-specific churn in our regress outputs.

### What changed

* **Use `explain_filter(...)` instead of raw text EXPLAIN**

  * In `local_shard_execution.sql` and `local_shard_execution_replicated.sql`, replace direct:

    ```sql
    EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) <stmt>;
    ```

    with:

    ```sql
    \pset footer off
    SELECT public.explain_filter('EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) <stmt>');
    \pset footer on
    ```
* Expected files updated accordingly to show the `explain_filter` output
block instead of raw EXPLAIN text.
* **Extend `explain_filter` to drop the PG18 line**

  * The filter now removes any `Index Searches: <number>` line before normalizing numeric fields, preventing the “N” version of the same line from sneaking in.
* **Keep suite-wide normalizer intact**
2025-11-10 10:43:11 +03:00
manaldush daa69bec8f
Keep temp reloid for columnar cases (#8309)
Fixes https://github.com/citusdata/citus/issues/8235

PG18 and the latest PG minor releases ignore temporary relations in
`RelidByRelfilenumber` (`RelidByRelfilenode` in PG15). Relevant PG commit:
https://github.com/postgres/postgres/commit/86831952

Here we keep temp reloids instead of looking them up with
`RelidByRelfilenumber`: in some cases we can get the reloid directly from
the relation, and in other cases we store it in the relevant structures.

Note: there is still an outstanding issue with columnar temp tables in
concurrent sessions, that will be fixed in PR
https://github.com/citusdata/citus/pull/8252
2025-11-06 23:15:52 +03:00
Colm bc41e7b94f
PG18: fix query results diff in merge regress test. (#8323)
The `merge` regress test uses SQL functions which can be cached in PG18+
since commit
[0dca5d68d](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0dca5d68d7bebf2c1036fd84875533afef6df992).
Distributed plan's copy function did not include the
`sourceResultRepartitionColumnIndex` field, which is critical for MERGE
queries, and for cached distributed plans this field was always 0
leading to the problem (#8285). Ensuring it is copied fixes it. This was
an oversight in Citus, and not specific to PG18.
2025-11-06 12:31:43 +00:00
Mehmet YILMAZ b10aa02908
PG18: Stabilize EXPLAIN output by disabling ANALYZE in index-scan test (#8322)
fixes #8318 

PostgreSQL 18 started printing an extra line in `EXPLAIN (ANALYZE …)`
for index scans:

```
Index Searches: N
```

This makes our test output flap (extra line, footer row count changes)
while the intent of the test is simply to **prove we plan an index
scan**, not to assert runtime counters.

### What this PR does

Change

  ```sql
  EXPLAIN (... analyze on, ...)
  ```

  to

  ```sql
  EXPLAIN (... analyze off, ...)
  ```

### Why this approach

* **Minimal change**: keeps the test’s purpose (“we exercise an index scan”) without introducing normalization rules.
* **Version-stable**: avoids PG18’s new text while remaining valid on
PG15–PG18.
* **No behavioral impact**: we still validate the plan node is an Index
Scan; we just don’t request execution.

### Before → After (essence)

Before (PG18):

```
Aggregate
  ->  Index Scan ... (actual rows=1 loops=1)
        Index Cond: ...
        Index Searches: 1
```

After:

```
Aggregate
  ->  Index Scan ...
        Index Cond: ...
```
2025-11-05 13:53:36 +00:00
Mehmet YILMAZ b63572d72f
PG18 deparser: map Vars through JOIN aliases (fixes whole-row join column names) (#8300)
fixes #8278

Please check issue:
https://github.com/citusdata/citus/issues/8278#issuecomment-3431707484


f4e7756ef9

### What PG18 changed

The SELECT that produces the diff has a **named join**:

```sql
(...) AS unsupported_join (x,y,z,t,e,f,q)
```

On PG17, `COUNT(unsupported_join.*)` stayed as a single whole-row Var
that referenced the **join alias**.

On PG18, the parser expands that whole-row Var **early** into a
`ROW(...)` of **base** columns:

```
ROW(a.user_id, a.item_id, a.buy_count,
    b.id, b.it_name, b.k_no,
    c.id, c.it_name, c.k_no)
```

But since the join is *named*, inner aliases `a/b/c` are hidden.
Referencing them later blows up with
“invalid reference to FROM-clause entry for table ‘a’”.

### What this PR changes

1. **Retarget at `RowExpr` deparse (not in `get_variable`)**

   * In `get_rule_expr()`’s `T_RowExpr` branch, each element `e` of `ROW(...)` is examined.
   * If `e` unwraps to a simple, same-level `Var` (`varlevelsup == 0`, `varattno > 0`) and there is a **named `RTE_JOIN`** with `joinaliasvars`, we **do not** change `varno/varattno`.
   * Instead, we build a copy of the Var and set **`varnosyn/varattnosyn`** to the matching join alias column (from `joinaliasvars`).
   * Then we deparse that Var via `get_rule_expr_toplevel(...)`, which naturally prints `join_alias.colname`.
   * Scope is limited to **query deparsing** (`dpns->plan == NULL`), exactly where PG18 expands whole-row vars into `ROW(...)` of base Vars.

2. **Helpers (PG18-only file)**

   * `unwrap_simple_var(Node*)`: strips trivial wrappers (`RelabelType`, `CoerceToDomain`, `CollateExpr`) to reveal a `Var`.
   * `var_matches_base(const Var*, int varno, AttrNumber attno)`: matches canonical or synonym identity.
   * `dpns_has_named_join(const deparse_namespace*)`: fast precheck for any named join with `joinaliasvars`.
   * `map_var_through_join_alias(...)`: scans `joinaliasvars` to locate the **JOIN RTE index + attno** for a 1:1 alias; the caller uses these to set `varnosyn/varattnosyn`.

3. **Safety and non-goals**

   * **No effect on plan deparsing** (`dpns->plan != NULL`).
   * **No change to semantic identity**: we leave `varno/varattno` untouched; only set `varnosyn/varattnosyn`.
   * Skip whole-row/system columns (`attno <= 0`) and non-simple join columns (computed expressions).
   * Works with named joins **with or without** an explicit column list (we rely on `joinaliasvars`, not the alias collist).
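
A hypothetical mini-model of the `map_var_through_join_alias` lookup (Python, illustrative only; the real code walks PostgreSQL `Var` nodes):

```python
# joinaliasvars lists, per join-alias column, the base (varno, attno) it
# refers to. Return the 1-based alias attno for a base Var, or None when
# there is no simple 1:1 alias column.
def map_var_through_join_alias(joinaliasvars, varno, attno):
    for alias_attno, base in enumerate(joinaliasvars, start=1):
        if base == (varno, attno):
            return alias_attno
    return None
```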

### Reproducer

```sql
CREATE TABLE distributed_table(user_id int, item_id int, buy_count int);
CREATE TABLE reference_table(id int, it_name varchar(25), k_no int);
SELECT create_distributed_table('distributed_table', 'user_id');

SELECT COUNT(unsupported_join.*)
FROM (distributed_table a
      LEFT JOIN reference_table b ON true
      RIGHT JOIN reference_table c ON true)
     AS unsupported_join (x,y,z,t,e,f,q)
JOIN (reference_table d JOIN reference_table e ON true) ON true;
```

**Before (PG18):** deparser emitted `ROW(a.user_id, …)` → `ERROR:
invalid reference to FROM-clause entry for table "a"`
**After:** deparser emits
`ROW(unsupported_join.x, ..., unsupported_join.k_no)` → runs
successfully.

Now maps to `unsupported_join.<auto_col_names>` and runs.
2025-11-05 11:08:58 +00:00
Colm 7a7a0ba9c7
PG18: Fix missing Planning Fast Path Query DEBUG messages (#8320)
With PG18's GROUP RTE, queries that should have been eligible for fast
path planning were skipped because the fast path planner allows exactly
one range table only. This fix extends that to account for a GROUP RTE.
2025-11-04 12:33:21 +00:00
Colm 5a71f0d1ca
PG18: Print names in order in tables are not colocated error detail. (#8319)
Fixes #8275 by printing the names in order so that in every message
`DETAIL: x and y are not co-located` x precedes (or is lexicographically
less than) y.
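
The fix reduces to ordering the pair before formatting the message (a sketch, not the actual C code):

```python
# Sort the two table names so the DETAIL line is deterministic regardless
# of the order in which the planner encounters them.
def not_colocated_detail(name_a, name_b):
    first, second = sorted((name_a, name_b))
    return f"{first} and {second} are not co-located"
```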
2025-11-03 18:56:17 +00:00
Naisila Puka fa7ca79c6f
Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n) (#8240)
TupleDescAttr is available since PG11.
https://github.com/postgres/postgres/commit/2cd7084
PG18 simply forces you to use it, but there is no need to
guard the change with a PG18 version check.
2025-11-03 21:39:11 +03:00
Naisila Puka 94653c1f4e
PG18 - Exclude child fk constraints from output (#8307)
Fixes #8280 
Similar to
https://github.com/citusdata/citus/commit/432b69e
2025-11-03 16:38:56 +03:00
Mehmet YILMAZ be2fcda071
PG18 - Normalize PG18 EXPLAIN: hide “Storage … Maximum Storage …” line (#8292)
fixes #8267 

* Extend `src/test/regress/bin/normalize.sed` to drop the new PG18
EXPLAIN instrumentation line:

  ```
  Storage: <Memory|Disk|Memory and Disk>  Maximum Storage: <size>
  ```

which appears under `Materialize`, some `CTE Scan`s, etc. when `ANALYZE`
is on.

**Why**

* PG18 added storage usage reporting for materialization/tuplestore
nodes. It’s useful for humans but creates noisy, non-semantic diffs in
regression output. There’s no EXPLAIN flag to suppress it, so we
normalize in tests instead. This PR wires that normalization into our
sed pipeline.

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1eff8279d

**How**

* Add a narrowly scoped sed rule that matches only lines starting with
`Storage:` (keeping `Sort Method`, `Hash`, `Buffers`, etc. intact). Use
ERE compatible with `sed -Ef` and Python `re` (no POSIX character
classes), e.g.:

  ```
  /^[ \t]*Storage:[ \t].*$/d
  ```
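The same rule can be exercised in Python — the PR notes the ERE is deliberately compatible with both `sed -Ef` and Python's `re`, so the identical pattern works in both pipelines (helper names here are illustrative):

```python
import re

# Minimal sketch of the normalize.sed rule applied line-by-line, the way
# the regression diff filter does: drop PG18's
# "Storage: ...  Maximum Storage: ..." lines and keep everything else.
STORAGE_LINE = re.compile(r'^[ \t]*Storage:[ \t].*$')

def normalize(explain_output: str) -> str:
    return "\n".join(line for line in explain_output.splitlines()
                     if not STORAGE_LINE.match(line))

example = """Materialize (actual rows=10 loops=1)
  Storage: Memory  Maximum Storage: 17kB
  Sort Method: quicksort  Memory: 25kB"""
print(normalize(example))
```

Note that `Sort Method`, `Hash`, and `Buffers` lines survive because the pattern anchors on a leading `Storage:` token only.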
2025-11-03 15:59:00 +03:00
Mehmet YILMAZ 61b491f0f4
PG18: Add _plan_json() test helper and switch columnar index tests to JSON plan checks (#8299)
fixes #8263

* Introduces `columnar_test_helpers._plan_json(q text) -> jsonb`, which
runs `EXPLAIN (FORMAT JSON, COSTS OFF, ANALYZE OFF)` and returns the
plan as `jsonb`. This lets us assert on plan structure instead of text
matching.

* Updates `columnar_indexes.sql` tests to detect whether any index-based
scan is used by searching the JSON plan’s `"Node Type"` (e.g., `Index
Scan`, `Index Only Scan`, `Bitmap Index Scan`).

## Notable changes

* New helper in `src/test/regress/sql/columnar_test_helpers.sql`:

  * `EXECUTE format('EXPLAIN (FORMAT JSON, COSTS OFF, ANALYZE OFF) %s', q) INTO j;`
  * Returns the `jsonb` plan for downstream assertions.
  
  ```sql
  SELECT NOT jsonb_path_exists(
    columnar_test_helpers._plan_json(
      'SELECT b FROM columnar_table WHERE b = 30000'),
    '$[*].Plan.** ? (@."Node Type" like_regex "^(Index|Bitmap Index).*Scan$")'
  ) AS uses_no_index_scan;
  ```

to verify no index scan occurs at the partial index boundary (`b =
30000`) and that an index scan is used where expected (`b = 30001`).
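A plain-Python equivalent of that jsonpath check is a short recursive walk over the plan tree (the `"Node Type"`/`"Plans"` keys follow PostgreSQL's JSON EXPLAIN format; the function name is illustrative):

```python
import re

# Hypothetical sketch mirroring the jsonb_path_exists assertion: walk an
# EXPLAIN (FORMAT JSON) plan tree and report whether any node uses an
# index-based scan.
INDEX_SCAN = re.compile(r'^(Index|Bitmap Index).*Scan$')

def uses_index_scan(plan_node: dict) -> bool:
    if INDEX_SCAN.match(plan_node.get("Node Type", "")):
        return True
    return any(uses_index_scan(child) for child in plan_node.get("Plans", []))

plan = {"Node Type": "Custom Scan",
        "Plans": [{"Node Type": "Index Only Scan"}]}
print(uses_index_scan(plan))                       # True
print(uses_index_scan({"Node Type": "Seq Scan"}))  # False
```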

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 15:34:28 +03:00
Mehmet YILMAZ 6251eab9b7
PG18: Make SSL tests resilient & validate TLSv1.3 cipher config (#8298)
fixes #8277 


https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=45188c2ea

PostgreSQL 18 + newer OpenSSL builds surface `ssl_ciphers` as a **rule
string** (e.g., `HIGH:MEDIUM:+3DES:!aNULL`) instead of an expanded
cipher list. Our tests hard-pinned the literal list and started failing
on PG18. Also, with TLS 1.3 in the picture, we need to assert that
cipher configuration is sane without coupling to OpenSSL’s expansion.

**What changed**

* **sql/ssl_by_default.sql**

* Replace brittle `SHOW ssl_ciphers` string matching with invariant
checks:

    * non-empty ciphers: `current_setting('ssl_ciphers') <> ''`
    * looks like a rule/list: `position(':' in current_setting('ssl_ciphers')) > 0`
  * Run the same checks on **workers** via `run_command_on_workers`.
* Keep existing validations for `ssl=on`, `sslmode=require` in
`citus.node_conninfo`, and `pg_stat_ssl.ssl = true`.


* **expected/ssl_by_default.out**

* Update expected output to booleans for the new checks (less diff-prone
across PG/SSL variants).
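The invariant logic is small enough to sketch outside SQL — same checks, asserting on the shape of the setting rather than its content (the function name is illustrative):

```python
# Sketch of the new invariant checks: on PG18 + newer OpenSSL,
# ssl_ciphers may surface as a rule string rather than an expanded
# cipher list, so assert on shape, not on a pinned literal.
def ssl_ciphers_sane(setting: str) -> bool:
    non_empty = setting != ""                 # current_setting('ssl_ciphers') <> ''
    looks_like_rule_or_list = ":" in setting  # position(':' in ...) > 0
    return non_empty and looks_like_rule_or_list

print(ssl_ciphers_sane("HIGH:MEDIUM:+3DES:!aNULL"))  # True  (PG18 rule string)
print(ssl_ciphers_sane(""))                          # False
```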
2025-11-03 14:51:39 +03:00
Naisila Puka e0570baad6
PG18 - Remove REINDEX SYSTEM test because of aio debug lines (#8305)
Fixes #8268 

PG18 added core async I/O infrastructure. Relevant PG commit:
https://github.com/postgres/postgres/commit/da72269

In the `pg16.sql` test, we tested the `REINDEX SYSTEM` command, which
would later cause I/O debug lines in a command of the
`multi_insert_select.sql` test that runs _after_ `pg16.sql`.

"This is expected because the I/O subsystem is repopulating its internal
callback pool after heavy catalog I/O churn caused by REINDEX SYSTEM." -
chatgpt (seems right 😄)

This PR removes the `REINDEX SYSTEM` test, as it's not crucial anyway.
`REINDEX DATABASE` is sufficient for the purpose presented in the
`pg16.sql` test.
2025-11-01 19:36:16 +00:00
Ivan Kush 503a2aba73
Move PushActiveSnapshot outside a for loop (#8253)
PR [8142](https://github.com/citusdata/citus/pull/8142) added
`PushActiveSnapshot`.

This commit moves it outside a for loop, since taking a snapshot is a
resource-heavy operation.
2025-10-31 21:20:12 +03:00
Vinod Sridharan 86010de733
Update GUC setting to not crash with ASAN (#8301)
The GUC configuration for SkipAdvisoryLockPermissionChecks had
misconfigured GUC_SUPERUSER_ONLY for PGC_SUSET. When PostgreSQL runs
with ASAN, this fails when querying pg_settings due to exceeding the
size of the GucContext_Names array. Fix up this GUC declaration so it no
longer crashes with ASAN.
2025-10-31 10:21:58 +03:00
Colm 458299035b
PG18: fix regress test failures in subquery_in_targetlist.
The failing queries all have a GROUP BY, and the fix teaches the Citus recursive planner how to handle a PG18 GROUP range table in the outer query:
- In recursive query planning, don't recurse into subquery expressions in a GROUP BY clause
- Flatten references to a GROUP rte before creating the worker subquery in pushdown planning
- If a PARAM node points to a GROUP rte then tunnel through to the underlying expression
    
Fixes #8296.
2025-10-30 11:49:28 +00:00
Mehmet YILMAZ 188c182be4
PG18 - Enable NUMA syscalls in CI containers to fix PG18 numa.out regression test failures (#8258)
fixes #8246

PostgreSQL 18 introduced stricter NUMA page-inquiry permissions for the
`pg_shmem_allocations_numa` view.
Without the required kernel capabilities, the test fails with:

```
ERROR:  failed NUMA pages inquiry status: Operation not permitted
```

This PR updates our test containers to include the necessary privileges:

* Adds `--cap-add=SYS_NICE` and `--security-opt seccomp=unconfined`

When PostgreSQL’s new NUMA views (`pg_shmem_allocations_numa`,
`pg_buffercache_numa`) run, they call `move_pages()` to ask the kernel
which NUMA node holds each shared memory page.


https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8cc139bec

That syscall (`move_pages()`) requires `CAP_SYS_NICE` when inspecting
another process.

So: `--cap-add=SYS_NICE` grants the container permission to perform that
NUMA page query.


https://man7.org/linux/man-pages/man2/move_pages.2.html#:~:text=must%20be%20privileged%0A%20%20%20%20%20%20%20%20%20%20(-,CAP_SYS_NICE,-)%20or%20the%20real


`--security-opt seccomp=unconfined`

Docker containers still run under a seccomp filter, which is a
kernel-level sandbox that blocks many system calls entirely for safety.
The default Docker seccomp profile blocks `move_pages()` outright,
because it can expose kernel memory layout information.


https://docs.docker.com/engine/security/seccomp/#:~:text=You%20can%20pass-,unconfined,-to%20run%20a


**In combination**

Both flags are required for NUMA introspection inside a container:
- `SYS_NICE` → permission
- `seccomp=unconfined` → ability
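Put together, the container invocation implied by this PR looks roughly like the following (the image name and test command are placeholders; only the two flags come from the PR):

```shell
# --cap-add=SYS_NICE: grants CAP_SYS_NICE, the permission move_pages() needs
# --security-opt seccomp=unconfined: lifts the default seccomp profile,
#   which blocks move_pages() outright
docker run \
  --cap-add=SYS_NICE \
  --security-opt seccomp=unconfined \
  citus/test-image:pg18 make check
```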
2025-10-27 21:00:32 +03:00
Mehmet YILMAZ dba9379ea5
PG18: Normalize EXPLAIN ANALYZE output for drop “Index Searches” line (#8291)
fixes #8265


0fbceae841

PostgreSQL 18 started printing an extra line in `EXPLAIN (ANALYZE …)`
for index scans:

```
Index Searches: N
```

**normalize.sed**: add a rule to remove the PG18-only line

 ```
 /^\s*Index Searches:\s*\d+\s*$/d
 ```
2025-10-27 14:37:20 +03:00
Mehmet YILMAZ 785a87c659
PG18: adapt multi_subquery_misc expected output to SQL-function plan cache (#8289)
0dca5d68d7
fixes #8153 

```diff
/citus/src/test/regress/expected/multi_subquery_misc.out

 -- should error out
 SELECT sql_subquery_test(1,1);
-ERROR:  could not create distributed plan
-DETAIL:  Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
-HINT:  Consider using PL/pgSQL functions instead.
-CONTEXT:  SQL function "sql_subquery_test" statement 1
+ sql_subquery_test 
+-------------------
+               307
+(1 row)
+

```

PostgreSQL 18 changes planner behavior for inlining/parameter handling
in SQL functions (pg18 commit `0dca5d68d`). As a result, a query in
`multi_subquery_misc` that previously failed to create a distributed
plan now succeeds. This PR updates the regression **expected** file to
reflect the new outcome on PG18+.

### What changed

* Updated `src/test/regress/expected/multi_subquery_misc.out`:

  * Replaced the previous error block:

    ```
    ERROR:  could not create distributed plan
    DETAIL:  Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
    HINT:  Consider using PL/pgSQL functions instead.
    CONTEXT:  SQL function "sql_subquery_test" statement 1
    ```
  * with the actual successful result on PG18+:

    ```
    sql_subquery_test
    --------------------------------------------
    307
    (1 row)
    ```

* **PG < 18:** Behavior remains unchanged; the test still errors on
older versions.
* **PG ≥ 18:** The call succeeds; the updated expected output matches
actual results.

similar PR: https://github.com/citusdata/citus/pull/8184
2025-10-24 15:48:08 +03:00
Mehmet YILMAZ 95477e6d02
PG18 - Add BUFFERS OFF to remaining EXPLAIN calls (#8288)
fixes #8093 


c2a4078eba

- Enable buffer-usage reporting by default in `EXPLAIN ANALYZE` on
PostgreSQL 18 and above.
- Introduce the explicit `BUFFERS OFF` option in every existing
regression test to maintain pre-PG18 output consistency.
- This appends `BUFFERS OFF` to all `EXPLAIN (...)` calls in
src/test/regress/sql and the corresponding .out files.
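A mechanical rewrite of this kind can be sketched as follows (a hypothetical helper, not what the PR ships — the PR edits the `.sql`/`.out` files directly):

```python
import re

# Hypothetical sketch: append BUFFERS OFF to EXPLAIN (...) option lists
# in regression SQL so PG18's buffers-by-default reporting doesn't
# change the output.
def add_buffers_off(sql: str) -> str:
    def repl(m: re.Match) -> str:
        opts = m.group(1)
        if re.search(r'\bBUFFERS\b', opts, re.IGNORECASE):
            return m.group(0)  # BUFFERS already explicit, leave untouched
        return f"EXPLAIN ({opts}, BUFFERS OFF)"
    return re.sub(r'EXPLAIN\s*\(([^)]*)\)', repl, sql, flags=re.IGNORECASE)

print(add_buffers_off("EXPLAIN (ANALYZE, COSTS OFF) SELECT 1;"))
# EXPLAIN (ANALYZE, COSTS OFF, BUFFERS OFF) SELECT 1;
```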
2025-10-24 15:09:49 +03:00
ibrahim halatci 5fc4cea1ce
pin PostgreSQL server development package version to 17 (#8286)
DESCRIPTION: pin PostgreSQL server development package version to 17
rather than the full dev package, which now pulls in 18; Citus does not
yet support PG18
2025-10-21 11:32:32 +03:00
Colm bf959de39e
PG18: Fix diffs in EXPLAINs introduced by PR #8242 in pg18 goldfile (#8262) 2025-10-19 21:20:16 +01:00
Onur Tirtir 90f2ab6648
Actually deprecate mark_tables_colocated() 2025-10-17 11:57:36 +00:00
Colm 3ca66e1fcc
PG18: Fix "Unrecognized range table id" in INSERT .. SELECT planning (#8256)
The error `Unrecognized range table id` seen in regress test
`insert_select_into_local_tables` is a consequence of the INSERT ..
SELECT planner getting confused by a SELECT query with a GROUP BY and
hence a Group RTE, introduced in PG18 (commit 247dea89f). The solution
is to flatten the relevant parts of the SELECT query before preparing
the INSERT .. SELECT query tree for use by Citus.
2025-10-17 11:21:25 +01:00
Colm 5d71fca3b4
PG18 regress sanity: disable `enable_self_join_elimination` on queries (#8242)
.. involving Citus tables. Interim fix for #8217 to achieve regress
sanity with PG18. A complete fix will follow with PG18 feature
integration.
2025-10-17 10:25:33 +01:00
naisila 76f18624e5 PG18 - use systable_inplace_update_* in UpdateStripeMetadataRow
PG18 has removed heap_inplace_update(), which is crucial for
citus_columnar extension because we always want to update
stripe entries for columnar in-place.
Relevant PG18 commit:
https://github.com/postgres/postgres/commit/a07e03f

heap_inplace_update() has been replaced by
heap_inplace_update_and_unlock, which is used inside
systable_inplace_update_finish, which is used together with
systable_inplace_update_begin. This change has been back-patched
up to v12, which is enough for us since the oldest version
Citus supports is v15.

In PG<18 a deprecated heap_inplace_update() is retained; however, let's
start using the new functions, both because they are better and so that
we don't need to wrap these changes in PG18 version conditionals.

Basically, in this commit we replace the following:

SysScanDesc scanDescriptor = systable_beginscan(columnarStripes,
indexId, indexOk, &dirtySnapshot, 2, scanKey);
heap_inplace_update(columnarStripes, modifiedTuple);

with the following:

systable_inplace_update_begin(columnarStripes, indexId, indexOk,
NULL, 2, scanKey, &tuple, &state);
systable_inplace_update_finish(state, tuple);

For more understanding, it's best to refer to an example:
REL_18_0/src/backend/catalog/toasting.c#L349-L371
of how systable_inplace_update_begin and
systable_inplace_update_finish are used in PG18, because they
mirror the need of citus columnar.

Fixes #8207
2025-10-17 11:11:08 +03:00
naisila abd50a0bb8 Revert "PG18 - Adapt columnar stripe metadata updates (#8030)"
This reverts commit 5d805eb10b.
heap_inplace_update was incorrectly replaced by
CatalogTupleUpdate in 5d805eb. In Citus, we assume a stripe
entry with some columns set to null means that a write
is in progress, because otherwise we wouldn't see such a row.
But this breaks when we use CatalogTupleUpdate, because it
inserts a new version of the row, which leaves the
in-progress version behind. Among other things, this was
causing various issues in the PG18 check-columnar test.
2025-10-17 11:11:08 +03:00
eaydingol aa0ac0af60
Citus upgrade tests (#8237)
Expand the citus upgrade tests matrix:
PG15: v11.1.0 v11.3.0 v12.1.10 
PG16: v12.1.10

See https://github.com/citusdata/the-process/pull/174
2025-10-15 15:28:44 +03:00
Naisila Puka 432b69eb9d
PG18 - fix naming diffs of child FK constraints (#8247)
PG18 changed the names generated for child foreign key constraints.
https://github.com/postgres/postgres/commit/3db61db48

The test failures in Citus regression suite are all changing the name of
a constraint from `'sensors%'` to `'%to_parent%_1'`: the naming is very
nice here because `to_parent` means that we have a foreign key to a
parent table.

To fix the diff, we exclude those constraints from the output. To verify
correctness, we still count the problematic constraints to make sure
they are there - we are simply removing them from the first output (we
add this count query right after the previous one)

Fixes #8126

Co-authored-by: Mehmet YILMAZ <mehmety87@gmail.com>
2025-10-13 13:33:38 +03:00
Naisila Puka f1dd976a14
Fix vanilla tests with domain creation (#8238)
Qualify create domain stmt after local execution, to avoid such diffs in
PG vanilla tests:

```diff
create domain d_fail as anyelement;
-ERROR:  "anyelement" is not a valid base type for a domain
+ERROR:  "pg_catalog.anyelement" is not a valid base type for a domain
```

These tests were newly added in PG18; however, this is not new PG18
behavior, just some added tests.
https://github.com/postgres/postgres/commit/0172b4c94

Fixes #8042
2025-10-10 15:34:32 +03:00
Naisila Puka 351cb2044d
PG18 - define some EXPLAIN funcs and structs only in PG17 (#8239)
PG18 changed the visibility of various Explain Serialize functions and
structs to `extern`. Previously, for PG17 support, these were `static`,
so we had to copy paste their definitions from `explain.c` to Citus's
`multi_explain.c`.
Relevant PG18 commits:
https://github.com/postgres/postgres/commit/555960a0
https://github.com/postgres/postgres/commit/77cb08be

Now we don't need to define the following anymore in Citus, since they
are extern in PG18:
- typedef struct SerializeMetrics
- void ExplainIndentText(ExplainState *es);
- SerializeMetrics GetSerializationMetrics(DestReceiver *dest);
- typedef struct SerializeDestReceiver (this is not extern, however it
is only used by GetSerializationMetrics function)

This was incorrectly handled in
https://github.com/citusdata/citus/commit/9e42f3f2c
by wrapping these definitions and usages in PG17 only,
causing such diffs in PG18 (not able to see serialization at all):
```diff
citus/src/test/regress/expected/pg17.out

select public.explain_filter('explain (analyze,
serialize binary,buffers,timing) select * from int8_tbl i8');
...
              Planning Time: N.N ms
-             Serialization: time=N.N ms  output=NkB  format=binary
              Execution Time: N.N ms
  Planning Time: N.N ms
  Serialization: time=N.N ms  output=NkB  format=binary
  Execution Time: N.N ms
-(14 rows)
+(13 rows)
```
2025-10-10 15:05:47 +03:00
Naisila Puka 287abea661
PG18 compatibility - varreturningtype additions (#8231)
This PR solves the following diffs, originating from the addition of
`varreturningtype` field to the `Var` struct in PG18:
https://github.com/postgres/postgres/commit/80feb727c

Previously we didn't account for this new field (as it's new), so this
wouldn't allow the parser to correctly reconstruct the `Var` node
structure, but rather it would error out with `did not find '}' at end
of input node`:

```diff
 SELECT column_to_column_name(logicalrelid, partkey)
 FROM pg_dist_partition WHERE partkey IS NOT NULL ORDER BY 1 LIMIT 1;
- column_to_column_name
----------------------------------------------------------------------
- a
-(1 row)
-
+ERROR:  did not find '}' at end of input node
```

Solution follows precedent https://github.com/citusdata/citus/pull/7107,
when varnullingrels field was added to the `Var` struct in PG16.

The solution includes:
- Taking care of the `partkey` in `pg_dist_partition` table because it's
coming from the `Var` struct. This mainly includes fixing the upgrade
script to PG18, by saving all the `partkey` infos before upgrading to
PG18 (in `citus_prepare_pg_upgrade`), and then re-generating `partkey`
columns in `pg_dist_partition` (using `UPDATE`) after upgrading to PG18
(in `citus_finish_pg_upgrade`).
- Adding a normalize rule to fix output differences among PG versions.
Note that we need two normalize lines: one for PG15 since it doesn't
have `varnullingrels`, and one for PG16/PG17.
- Small trick on `metadata_sync_helpers` to use different text when
generating the `partkey`, based on the PG version.

Fixes #8189
2025-10-09 17:35:03 +03:00
Naisila Puka f0014cf0df
PG18 compatibility: misc output diffs pt2 (#8234)
3 minor changes to reduce some noise from the regression diffs.

1 - Reduce verbosity when ALTER EXTENSION fails
PG18 has improved reporting of errors in extension script files
Relevant PG commit:
https://github.com/postgres/postgres/commit/774171c4f
There was more context in PG18, so reducing verbosity
```
ALTER EXTENSION citus UPDATE TO '11.0-1';
 ERROR:  cstore_fdw tables are deprecated as of Citus 11.0
 HINT:  Install Citus 10.2 and convert your cstore_fdw tables to the
        columnar access method before upgrading further
 CONTEXT:  PL/pgSQL function inline_code_block line 4 at RAISE
+SQL statement "DO LANGUAGE plpgsql
+$$
+BEGIN
+    IF EXISTS (SELECT 1 FROM pg_dist_shard where shardstorage = 'c') THEN
+     RAISE EXCEPTION 'cstore_fdw tables are deprecated as of Citus 11.0'
+        USING HINT = 'Install Citus 10.2 and convert your cstore_fdw tables
                       to the columnar access method before upgrading further';
+ END IF;
+END;
+$$"
+extension script file "citus--10.2-5--11.0-1.sql", near line 532
```

2 - Fix backend type order in tests for PG18
PG18 added another backend type, which messed up the order in this test.
Adding a separate IF condition for PG18.
Relevant PG commit:
https://github.com/postgres/postgres/commit/18d67a8d7d

3 - Ignore "DEBUG: find_in_path" lines in output
Relevant PG commit:
https://github.com/postgres/postgres/commit/4f7f7b0375
The new GUC extension_control_path specifies a path to look for
extension control files.
2025-10-09 16:50:41 +03:00
Naisila Puka d9652bf5f9
PG18 compatibility: misc output diffs (#8233)
6 minor changes to reduce some noise from the regression diffs.

1 - Add ORDER BY to fix subquery_in_where diff

2 - Disable buffers in explain analyze calls
Leftover work from
https://github.com/citusdata/citus/commit/f1f0b09f7

3 - Reduce verbosity to avoid diffs between PG versions
Relevant PG commit:
https://github.com/postgres/postgres/commit/0dca5d68d7
diff was:
```
CALL test_procedure_commit(2,5);
 ERROR:  COMMIT is not allowed in an SQL function
-CONTEXT:  SQL function "test_procedure_commit" during startup
+CONTEXT:  SQL function "test_procedure_commit" statement 2
```

4 - Rename array_sort to array_sort_citus since PG18 added array_sort
Relevant PG commit:
https://github.com/postgres/postgres/commit/6c12ae09f5a
Diff we were seeing in multi_array_agg, because the PG18 test was using
PG18's array_sort function instead:
```
-- Check that we return NULL in case there are no input rows to array_agg()
 SELECT array_sort(array_agg(l_orderkey))
     FROM lineitem WHERE l_orderkey < 0;
  array_sort
 ------------
- {}
+
 (1 row)
```

5 - Exclude not-null constraints from output to avoid diffs
PG18 has added pg_constraint rows for not-null constraints
Relevant PG commit
https://github.com/postgres/postgres/commit/14e87ffa5c
Remove them by condition contype <> 'n'

6 - Reduce verbosity to avoid md5 pwd deprecation warning in PG18
PG18 has deprecated MD5 passwords
Relevant PG commit:
https://github.com/postgres/postgres/commit/db6a4a985

Fixes #8154 
Fixes #8157
2025-10-08 13:23:55 +03:00
Naisila Puka 77d5807fd6
Update changelog with 12.1.9, 12.1.10, 13.0.5, 13.1.1 entries (#8224) 2025-10-07 22:33:12 +03:00
Naisila Puka 2a6414c727
PG18: use data-checksums by default in upgrades (#8230)
Checksums are now on by default in PG18: 04bec894a

Upgrade to PG18 fails with the following error:
`old cluster does not use data checksums but the new one does`

To overcome this error, we add the --data-checksums option so that
clusters with PG less than 18 also use data checksums.

Fixes #8229
2025-10-07 12:17:08 +03:00
Naisila Puka c5dde4b115
Fix crash on create statistics with non-RangeVar type pt2 (#8227)
Fixes #8225 
very similar to #8213 
Also the error message changed between pg18rc1 and pg18.0
2025-10-07 11:56:20 +03:00
Naisila Puka 5a3648b2cb
PG18: fix yet another unregistered snapshot crash (#8228)
PG18 added an assertion that a snapshot is active or registered before
it's used. Relevant PG commit
8076c00592

Fixes #8209
2025-10-07 11:31:41 +03:00
Mehmet YILMAZ d4dfdd765b
PG18 - Normalize \d+ output in PG18 by filtering “Not-null constraints” blocks (#8183)
DESCRIPTION: Normalize \d+ output in PG18 by filtering “Not-null
constraints” blocks
fixes #8095 


**PR Description**
Postgres 18 started representing column `NOT NULL` as named constraints
in `pg_constraint`, and `psql \d+` now prints them under a `Not-null
constraints:` section. This caused extra diffs in our regression tests.

14e87ffa5c

This PR updates the normalization rules to strip those sections during
diff filtering by adding two regex rules:

* remove the `Not-null constraints:` header
* remove any indented constraint lines ending in `_not_null`
2025-10-02 13:48:27 +03:00
Mehmet YILMAZ cec1848b13
PG18: adapt multi_sql_function expected output to SQL-function plan cache (#8184)
0dca5d68d7
fixes #8153 

```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_sql_function.out /__w/citus/citus/src/test/regress/results/multi_sql_function.out
--- /__w/citus/citus/src/test/regress/expected/multi_sql_function.out.modified	2025-08-25 12:43:24.373634581 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_sql_function.out.modified	2025-08-25 12:43:24.383634533 +0000
@@ -317,24 +317,25 @@
 $$ LANGUAGE SQL STABLE;
 INSERT INTO test_parameterized_sql VALUES(1, 1);
 -- all of them should fail
 SELECT * FROM test_parameterized_sql_function(1);
 ERROR:  cannot perform distributed planning on this query because parameterized queries for SQL functions referencing distributed tables are not supported
 HINT:  Consider using PL/pgSQL functions instead.
 SELECT (SELECT 1 FROM test_parameterized_sql limit 1) FROM test_parameterized_sql_function(1);
 ERROR:  cannot perform distributed planning on this query because parameterized queries for SQL functions referencing distributed tables are not supported
 HINT:  Consider using PL/pgSQL functions instead.
 SELECT test_parameterized_sql_function_in_subquery_where(1);
-ERROR:  could not create distributed plan
-DETAIL:  Possibly this is caused by the use of parameters in SQL functions, which is not supported in Citus.
-HINT:  Consider using PL/pgSQL functions instead.
-CONTEXT:  SQL function "test_parameterized_sql_function_in_subquery_where" statement 1
+ test_parameterized_sql_function_in_subquery_where 
+---------------------------------------------------
+                                                 1
+(1 row)
+
```

PG18 allows custom vs. generic plans for SQL functions; arguments can be
folded to constants, enabling more rewrites/optimizations (and, in this
case, routable Citus plans).

PG18 rewrote how LANGUAGE SQL functions are planned/executed: they now
go through the plan cache (like PL/pgSQL does) and can produce custom
plans with the function arguments substituted as constants. That means
the call
SELECT test_parameterized_sql_function_in_subquery_where(1);
is planned with org_id_val = 1 baked in, so Citus no longer sees an
unresolved Param inside the function body and is able to build a
distributed plan instead of tripping the old “params in SQL functions”
error path.


**What’s in here**
- Update `expected/multi_sql_function.out` to reflect PG18 behavior
- Add `expected/multi_sql_function_0.out` as an alternate expected file
that retains the pre-PG18 error output for the same test
2025-10-01 16:03:21 +03:00
Naisila Puka bb840e58a7
Fix crash on create statistics with non-RangeVar type (#8213)
This crash has been there for a while but wasn't tested before pg18.

PG18 added this test:
CREATE STATISTICS tst ON a FROM (VALUES (x)) AS foo;

which tries to create statistics on a derived-on-the-fly table (which is
not allowed). However, Citus assumes we always have a valid table when
intercepting the CREATE STATISTICS command to check for Citus tables.
Added a check to return early if needed.

pg18 commit: https://github.com/postgres/postgres/commit/3eea4dc2c

Fixes #8212
2025-10-01 00:09:11 +03:00
Onur Tirtir 5eb1d93be1
Properly detect no-op shard-key updates via UPDATE / MERGE (#8214)
DESCRIPTION: Fixes a bug that causes allowing UPDATE / MERGE queries
that may change the distribution column value.

Fixes: #8087.

Probably as of #769, we were not properly checking if UPDATE
may change the distribution column.

In #769, we had these checks:
```c
	if (targetEntry->resno != column->varattno)
	{
		/* target entry of the form SET some_other_col = <x> */
		isColumnValueChanged = false;
	}
	else if (IsA(setExpr, Var))
	{
		Var *newValue = (Var *) setExpr;
		if (newValue->varattno == column->varattno)
		{
			/* target entry of the form SET col = table.col */
			isColumnValueChanged = false;
		}
	}
```

However, what we check in the "if" and in the "else if" are not so
different, in the sense that they both attempt to verify whether the SET
expr of the target entry points to the attno of the given column. So, in
#5220, we even removed the first check because it was redundant.
Also see this PR comment from #5220:
https://github.com/citusdata/citus/pull/5220#discussion_r699230597.
In #769, we probably actually wanted to first check whether both the
SET expr of the target entry and the given variable point to the
same range var entry, but this wasn't what the "if" was checking,
so it was removed.

As a result, in the cases that are mentioned in the linked issue,
we were incorrectly concluding that the SET expr of the target
entry won't change given column just because it's pointing to the
same attno as given variable, regardless of what range var entries
the column and the SET expr are pointing to. Then we also started
using the same function to check for such cases for update action
of MERGE, so we have the same bug there as well.

So with this PR, we properly check for such cases by also comparing
varno in TargetEntryChangesValue(). However, then some of
the existing tests started failing where the SET expr doesn't
directly assign the column to itself but the "where" clause could
actually imply that the distribution column won't change. Even before,
we were not attempting to verify whether the "where" clause quals could
imply a no-op assignment for the SET expr in such cases, but that was
not a problem. This is because, for most cases, we were always
qualifying such SET expressions as a no-op update as long as the SET
expr's attno is the same as the given column's. For this reason, to
prevent regressions, this PR also adds some extra logic to understand
whether the "where" clause quals could imply that the SET expr for the
distribution key is a no-op.

Ideally, we should instead use "relation restriction equivalence"
mechanism to understand if the "where" clause implies a no-op
update. This is because, for instance, right now we're not able to
deduce that the update is a no-op when the "where" clause transitively
implies a no-op update, as in the case where we're setting "column a"
to "column c" and where clause looks like:
  "column a = column b AND column b = column c".
If this means a regression for some users, we can consider doing it
that way. Until then, as a workaround, we can suggest adding additional
quals to "where" clause that would directly imply equivalence.

Also, after fixing TargetEntryChangesValue(), we started successfully
deducing that the update action is a no-op for such MERGE queries:
```sql
MERGE INTO dist_1
USING dist_1 src
ON (dist_1.a = src.b)
WHEN MATCHED THEN UPDATE SET a = src.b;
```
However, we then started seeing below error for above query even
though now the update is qualified as a no-op update:
```
ERROR:  Unexpected column index of the source list
```
This was because of #8180 and #8201 fixed that.

In summary, with this PR:

* We disallow such queries,
  ```sql
  -- attno for dist_1.a, dist_1.b: 1, 2
  -- attno for dist_different_order_1.a, dist_different_order_1.b: 2, 1
  UPDATE dist_1 SET a = dist_different_order_1.b
  FROM dist_different_order_1
  WHERE dist_1.a = dist_different_order_1.a;

  -- attno for dist_1.a, dist_1.b: 1, 2
  -- but ON (..) doesn't imply a no-op update for SET expr
  MERGE INTO dist_1
  USING dist_1 src
  ON (dist_1.a = src.b)
  WHEN MATCHED THEN UPDATE SET a = src.a;
  ```

* .. and allow such queries,
  ```sql
  MERGE INTO dist_1
  USING dist_1 src
  ON (dist_1.a = src.b)
  WHEN MATCHED THEN UPDATE SET a = src.b;
  ```
2025-09-30 10:13:47 +00:00
Naisila Puka de045402f3
PG18 - register snapshot where needed (#8196)
Register and push snapshots as needed per the relevant PG18 commits

8076c00592
https://github.com/postgres/postgres/commit/706054b

`citus_split_shard_columnar_partitioned`, `multi_partitioning` tests are
handled.

Fixes #8195
2025-09-26 18:04:34 +03:00
Colm McHugh 81776fe190 Fix crash in Range Table identity check.
The range table entry array created by the Postgres planner for each
SELECT in a query may have NULL entries as of PG18. Add a NULL check
to skip over these when looking for matches in rte identities.
2025-09-26 14:53:30 +01:00
Colm 80945212ae
PG18 regress sanity: update pg18 ruleutils with fix #7675 (#8216)
Fix deparsing of UPDATE statements with indirection (#7675) involved
changing ruleutils of our supported Postgres versions. It means that
when integrating a new Postgres version we need to update its ruleutils
with the relevant parts of #7675; basically PG ruleutils needs to call
the `citus_ruleutils.c` functions added by #7675.
2025-09-26 13:19:47 +01:00
Onur Tirtir 83b25e1fb1
Fix unexpected column index error for repartitioned merge (#8201)
DESCRIPTION: Fixes a bug that causes an unexpected error when executing
repartitioned merge.

Fixes #8180.

This was happening because of a bug in
SourceResultPartitionColumnIndex(). And to fix it, this PR avoids
using DistributionColumnIndex() in SourceResultPartitionColumnIndex().
Instead, invents FindTargetListEntryWithVarExprAttno(), which finds
the index of the target entry in the source query's target list that
can be used to repartition the source for a repartitioned merge. In
short, to find the source target entry that references the Var used in
the ON (..) clause and that references the source rte, we should check
the varattno of the underlying expr, which is presumably always a Var
for a repartitioned merge, as we always wrap the source rte with a
subquery where all target entries point to the columns of the original
source relation.

Using DistributionColumnIndex() prior to 13.0 wasn't causing such an
issue because prior to 13.0, the varattno of the underlying expr of
the source target entries was almost (*1) always equal to resno of the
target entry, as we were including all target entries of the source
relation. However, starting with #7659, which was merged to main before
13.0, we started using CreateFilteredTargetListForRelation() instead of
CreateAllTargetListForRelation() to compute the target entry list for
the source rte, to fix another bug. We cannot revert to using
CreateAllTargetListForRelation() because otherwise we would re-introduce
the bug it helped fix, so we instead had to find a way to properly
deal with the "filtered target list"s, as in this commit. Plus (*1),
even before #7659, we would probably still fail when the source relation
has dropped attributes or such, because that would probably also cause
such a mismatch between the varattno of the underlying expr of the
target entry and its resno.
2025-09-23 11:17:51 +00:00
Colm b5e70f56ab Postgres 18: Fix regress tests caused by GROUP RTE. (#8206)
The change in `merge_planner.c` fixes _unrecognized range table entry_
diffs in merge regress tests (category 2 diffs in #7992), the change in
`multi_router_planner.c` fixes _column reference ... is ambiguous_ diffs
in `multi_insert_select` and `multi_insert_select_window` (category 3
diffs in #7992). An edit to `common.py` enables standalone regress tests
with pg18 (e.g. `citus_tests/run_test.py merge`).
2025-09-22 16:13:59 +03:00
Colm d2ea4043d4 Postgres 18: fix 'column does not exist' errors in grouping regress tests. (#8199)
DESCRIPTION: Fix 'column does not exist' errors in grouping regress
tests.

Postgres 18's GROUP RTE was being ignored by query pushdown planning
when constructing the query tree for the worker subquery. The solution
is straightforward - ensure the worker subquery tree has the same
groupRTE property as the original query. Postgres ruleutils then does
the right thing when generating the pushed down query. Fixes category 1
in #7992.
2025-09-22 16:13:59 +03:00
Mehmet YILMAZ 10d62d50ea
Stabilize table_checks across PG15–PG18: switch to pg_constraint, remove dupes, exclude NOT NULL (#8140)
DESCRIPTION: Stabilize table_checks across PG15–PG18: switch to
pg_constraint, remove dupes, exclude NOT NULL

fixes #8138
fixes #8131 

**Problem**

```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_create_table_constraints.out /__w/citus/citus/src/test/regress/results/multi_create_table_constraints.out
--- /__w/citus/citus/src/test/regress/expected/multi_create_table_constraints.out.modified	2025-08-18 12:26:51.991598284 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_create_table_constraints.out.modified	2025-08-18 12:26:52.004598519 +0000
@@ -403,22 +403,30 @@
     relid = 'check_example_partition_col_key_365068'::regclass;
     Column     |  Type   |  Definition   
 ---------------+---------+---------------
  partition_col | integer | partition_col
 (1 row)
 
 SELECT "Constraint", "Definition" FROM table_checks WHERE relid='public.check_example_365068'::regclass;
              Constraint              |            Definition             
 -------------------------------------+-----------------------------------
  check_example_other_col_check       | CHECK other_col >= 100
+ check_example_other_col_check       | CHECK other_col >= 100
+ check_example_other_col_check       | CHECK other_col >= 100
+ check_example_other_col_check       | CHECK other_col >= 100
+ check_example_other_col_check       | CHECK other_col >= 100
  check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
-(2 rows)
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+ check_example_other_other_col_check | CHECK abs(other_other_col) >= 100
+(10 rows)
```

On PostgreSQL 18, `NOT NULL` is represented as a cataloged constraint
and surfaces through `information_schema.check_constraints`.
14e87ffa5c
Our helper view `table_checks` (built on
`information_schema.check_constraints` + `constraint_column_usage`)
started returning:

* Extra `…_not_null` rows (noise for our tests)
* Duplicate rows for real CHECKs due to the one-to-many join via
`constraint_column_usage`
* Occasional literal formatting differences (e.g., dates) coming from
the information_schema deparser

### What changed

1. **Rewrite `table_checks` to use system catalogs directly**
We now select only expression-based, table-level constraints—excluding
NOT NULL—by filtering on `contype <> 'n'` and requiring `conbin IS NOT
NULL`. This yields the same effective set as real CHECKs while remaining
future-proof against non-CHECK constraint types.

```sql
CREATE OR REPLACE VIEW table_checks AS
SELECT
  c.conname AS "Constraint",
  'CHECK ' ||
  -- drop a single pair of outer parens if the deparser adds them
  regexp_replace(pg_get_expr(c.conbin, c.conrelid, true), '^\((.*)\)$', '\1')
    AS "Definition",
  c.conrelid AS relid
FROM pg_catalog.pg_constraint AS c
WHERE c.contype <> 'n'         -- drop NOT NULL (PG18)
  AND c.conbin IS NOT NULL     -- only expression-bearing constraints (i.e., CHECKs)
  AND c.conrelid <> 0          -- table-level only (exclude domains)
ORDER BY "Constraint", "Definition";
```

Why this filter?

* `contype <> 'n'` excludes PG18’s NOT NULL rows.
* `conbin IS NOT NULL` restricts to expression-backed constraints
(CHECKs); PK/UNIQUE/FK/EXCLUSION don’t have `conbin`.
* `conrelid <> 0` removes domain constraints.

2. **Add a PG18-specific regression test for `contype = 'n'`**
   New test (`pg18_not_null_constraints`) verifies:

* Coordinator tables have `n` rows for NOT NULL (columns `a`, `c`),
* A worker shard has matching `n` rows,
* Dropping a NOT NULL on the coordinator propagates to shards (count
goes from 2 → 1),
* `table_checks` *never* reports NOT NULL, but does report a real CHECK
added for the test.

---

### Why this works (PG15–PG18)

* **Stable source of truth:** Directly reads `pg_constraint` instead of
`information_schema`.
* **No duplicates:** Eliminates the `constraint_column_usage` join,
removing multiplicity.
* **No NOT NULL noise:** PG18’s `contype = 'n'` is filtered out by
design.
* **Deterministic text:** Uses `pg_get_expr` and strips a single outer
set of parentheses for consistent output.

---

### Impact on tests

* Removes spurious `…_not_null` entries and duplicate `checky_…` rows
(e.g., in `multi_name_lengths` and similar).
* Existing expected files stabilize without adding brittle
normalizations.
* New PG18 test asserts correct catalog behavior and Citus propagation
while remaining a no-op on earlier PG versions.

---
2025-09-22 15:50:32 +03:00
Naisila Puka b4cb1a94e9
Bump citus and citus_columnar to 14.0devel (#8170) 2025-09-19 12:54:55 +03:00
Naisila Puka becc02b398
Cleanup from dropping pg14 in merge isolation tests (#8204)
These alternative test outputs are redundant since we have dropped PG14
support on main.
2025-09-19 12:01:29 +03:00
eaydingol 360fbe3b99
Technical document update for outer join pushdown (#8200)
Outer join pushdown entry and an example.
2025-09-17 17:01:45 +03:00
Mehmet YILMAZ b58af1c8d5
PG18: stabilize constraint-name tests by filtering pg_constraint on contype (#8185)
14e87ffa5c

PostgreSQL 18 now records column `NOT NULL` constraints in
`pg_constraint` (`contype = 'n'`). That means queries that previously
listed “all constraints” for a relation now return extra rows, causing
noisy diffs in Citus regression tests. This PR narrows each catalog
probe to the specific constraint type under test
(PK/UNIQUE/EXCLUDE/CHECK), keeping results stable across PG15–PG18.

## What changed

* Update
`src/test/regress/sql/multi_alter_table_add_constraints_without_name.sql`
to:

* Add `AND con.contype IN ('p'|'u'|'x'|'c')` in each query, matching the
constraint just created.
  * Join namespace via `rel.relnamespace` for robustness.
* Refresh
`src/test/regress/expected/multi_alter_table_add_constraints_without_name.out`
to reflect the filtered results.

## Why

* PG18 adds named `NOT NULL` entries to `pg_constraint`, which
previously lived only in `pg_attribute`. Tests that select from
`pg_constraint` without filtering now see extra rows (e.g.,
`*_not_null`), breaking expectations. Filtering by `contype` validates
exactly what the test intends (PK/UNIQUE/EXCLUDE/CHECK
naming/propagation) and ignores unrelated `NOT NULL` rows.



```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/multi_alter_table_add_constraints_without_name.out /__w/citus/citus/src/test/regress/results/multi_alter_table_add_constraints_without_name.out
--- /__w/citus/citus/src/test/regress/expected/multi_alter_table_add_constraints_without_name.out.modified	2025-09-11 14:36:52.521254512 +0000
+++ /__w/citus/citus/src/test/regress/results/multi_alter_table_add_constraints_without_name.out.modified	2025-09-11 14:36:52.549254440 +0000
@@ -20,34 +20,36 @@
 
 ALTER TABLE AT_AddConstNoName.products ADD PRIMARY KEY(product_no);
 SELECT con.conname
     FROM pg_catalog.pg_constraint con
       INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
       INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
 	      WHERE rel.relname = 'products';
            conname            
 ------------------------------
  products_pkey
-(1 row)
+ products_product_no_not_null
+(2 rows)
 
 -- Check that the primary key name created on the coordinator is sent to workers and
 -- the constraints created for the shard tables conform to the <conname>_shardid naming scheme.
 \c - - :public_worker_1_host :worker_1_port
 SELECT con.conname
     FROM pg_catalog.pg_constraint con
       INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
       INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
 		WHERE rel.relname = 'products_5410000';
                conname                
 --------------------------------------
+ products_5410000_product_no_not_null
  products_pkey_5410000
-(1 row)
+(2 rows)
```

after pr:
https://github.com/citusdata/citus/actions/runs/17697415668/job/50298622183#step:5:265
2025-09-17 14:12:15 +03:00
Mehmet YILMAZ 4012e5938a
PG18 - normalize PG18 “RESTRICT” FK error wording to legacy form (#8188)
fixes #8186


086c84b23d

PG18 emits a more specific message for foreign-key violations when
the referential action is `RESTRICT` (SQLSTATE 23001), e.g.
`violates RESTRICT setting of foreign key constraint ...` and `Key (...)
is referenced from table ...`.
Older versions printed the generic FK text (SQLSTATE 23503), e.g.
`violates foreign key constraint ...` and `Key (...) is still referenced
from table ...`.

This change was causing noisy diffs in our regression tests (e.g.,
`multi_foreign_key.out`).
To keep a single set of expected files across PG15–PG18, this PR adds
two normalization rules to the test filter:

```sed
# PG18 FK wording -> legacy generic form
s/violates RESTRICT setting of foreign key constraint/violates foreign key constraint/g

# DETAIL line: "is referenced" -> "is still referenced"
s/\<is referenced from table\>/is still referenced from table/g
```
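Applied to sample PG18 output, the two rules above behave like this. This is a quick illustration using GNU sed (which supports the `\<`/`\>` word boundaries); the sample error lines are made up for demonstration:

```shell
# Hypothetical sample lines, in the PG18 wording:
pg18_error='ERROR:  update or delete on table "t" violates RESTRICT setting of foreign key constraint "fk" on table "r"'
pg18_detail='DETAIL:  Key (a)=(1) is referenced from table "r".'

# Apply the two normalization rules from the test filter:
normalize() {
    sed -e 's/violates RESTRICT setting of foreign key constraint/violates foreign key constraint/g' \
        -e 's/\<is referenced from table\>/is still referenced from table/g'
}

# Both print the legacy (pre-PG18) wording:
echo "$pg18_error"  | normalize   # -> ... violates foreign key constraint "fk" ...
echo "$pg18_detail" | normalize   # -> ... is still referenced from table "r".
```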

**Scope / impact**

* Test-only change; runtime behavior is unaffected.
* Keeps outputs stable across PG15–PG18 without version-splitting
expected files.
* Rules are narrowly targeted to the FK wording introduced in PG18.

with pr:
https://github.com/citusdata/citus/actions/runs/17698469722/job/50300960878#step:5:252
2025-09-17 10:46:36 +03:00
Mehmet YILMAZ 8bb8b2ce2d
Remove Code Climate coverage upload steps from GitHub Actions workflow (#8182)
DESCRIPTION: Remove Code Climate coverage upload steps from GitHub
Actions workflow

CI: remove Code Climate coverage reporting (cc-test-reporter) and
related jobs; keep Codecov as source of truth

* **Why**
Code Climate’s test-reporter has been archived; their download/API path
is no longer served, which breaks our CC upload step (`cc-test-reporter
…` ends up downloading HTML/404).

* **What changed**

* Drop the Code Climate formatting/artifact steps from the composite
action `.github/actions/upload_coverage/action.yml`.
* Delete the `upload-coverage` job that aggregated and pushed to Code
Climate (`cc-test-reporter sum-coverage` / `upload-coverage`).


* **Impact**

  * Codecov uploads remain; coverage stays visible via Codecov.
  * No test/build behavior change—only removes a failing reporter path.
2025-09-15 13:53:35 +03:00
Colm b7bfe42f1a
Document delayed fast path planning in README (#8176)
Added detailed explanation of delayed fast path planning in Citus 13.2,
including conditions and processes involved.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2025-09-09 11:54:13 +01:00
Onur Tirtir 0c658b73fc
Fix an assertion failure in Citus maintenance daemon that can happen in very slow systems (#8158)
Fixes #5808.

DESCRIPTION: Fixes an assertion failure in Citus maintenance daemon that
can happen in very slow systems.

Try running `make -C src/test/regress/ check-multi-1-vg`: on main, the
tests exit with code 2 at least 50% of the time in the very early
stages of the test suite by producing a core dump, whereas on this
branch they don't, at least based on my trials :)
2025-09-04 12:13:57 +00:00
manaldush 2834fa26c9
Fix an undefined behavior for bit shift in citus_stat_tenants.c (#7954)
DESCRIPTION: Fixes an undefined behavior that could happen when
computing tenant score for citus_stat_tenants

Add check for shift size, reset to zero in case of overflow

Fixes #7953.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2025-09-04 10:57:45 +00:00
Onur Tirtir 8ece8acac7
Check citus version in citus_promote_clone_and_rebalance (#8169) 2025-08-29 11:19:50 +03:00
Naisila Puka 0fd95d71e4
Order same frequency common values, and add test (#8167)
Added similar test to what @colm-mchugh tested in the original PR
https://github.com/citusdata/citus/pull/8026#discussion_r2279021218
2025-08-29 01:41:32 +03:00
Naisila Puka d5f0ec5cd1
Fix invalid input syntax for type bigint (#8166)
Fixes #8164
2025-08-29 01:01:18 +03:00
Naisila Puka 544b6c4716
Add GUC for queries with outer joins and pseudoconstant quals (#8163)
Users can turn on this GUC at their own risk.
2025-08-27 22:31:22 +03:00
Onur Tirtir 2e1de77744
Also use pid in valgrind logfile name (#8150)
Also use the pid in the valgrind logfile name to avoid overwriting
valgrind logs when memory errors happen concurrently in different
processes:

(from https://valgrind.org/docs/manual/manual-core.html)
```
--log-file=<filename>
Specifies that Valgrind should send all of its messages to the specified file. If the file name is empty, it causes an abort. There are three special format specifiers that can be used in the file name.

%p is replaced with the current process ID. This is very useful for program that invoke multiple processes. WARNING: If you use --trace-children=yes and your program invokes multiple processes OR your program forks without calling exec afterwards, and you don't use this specifier (or the %q specifier below), the Valgrind output from all those processes will go into one file, possibly jumbled up, and possibly incomplete.
```

With this change, we'll start having lots of valgrind output files
generated under "src/test/regress" with the same prefix,
citus_valgrind_test_log.txt, by default, during valgrind tests, so it'll
look a bit ugly; but one can use `cat
src/test/regress/citus_valgrind_test_log.txt.[0-9]*` or such to combine
them into a single valgrind log file later.
2025-08-27 14:01:25 +00:00
Colm bb6eeb17cc
Fix bug in redundant WHERE clause detection. (#8162)
We also need to check the Postgres plan's range tables for relations used in InitPlans.

DESCRIPTION: Fix a bug in redundant WHERE clause detection; we need to
additionally check the Postgres plan's range tables for the presence of
citus tables, to account for relations that are referenced from scalar
subqueries.

There is a fundamental flaw in 4139370: the assumption that, after
Postgres planning has completed, all tables used in a query can be
obtained by walking the query tree. This is not the case for scalar
subqueries, which will be referenced by `PARAM` nodes. The fix adds an
additional check of the Postgres plan range tables; if there is at least
one citus table in there we do not need to change the needs distributed
planning flag.

Fixes #8159
2025-08-27 13:32:02 +01:00
Colm 0a5cae19ed
In UPDATE deparse, check for a subscript before processing the targets. (#8155)
DESCRIPTION: Checking first for the presence of subscript ops avoids a
shallow copy of the target list when it contains no array or json
subscripts.

Commit 0c1b31c fixed a bug in UPDATE statements with array or json
subscripting in the target list. This commit modifies that to first
check that the target list has a subscript and avoid a shallow copy of
the target list for UPDATE statements with no array/json subscripting.
2025-08-27 11:00:27 +00:00
Muhammad Usama 62e5fcfe09
Enhance clone node replication status messages (#8152)
- Downgrade replication lag reporting from NOTICE to DEBUG to reduce
noise and improve regression test stability.
- Add hints to certain replication status messages for better clarity.
- Update expected output files accordingly.
2025-08-26 21:48:07 +03:00
Naisila Puka ce7ddc0d3d
Bump PG versions to 17.6, 16.10, 15.14 (#8142)
Sister PR https://github.com/citusdata/the-process/pull/172

Fixes #8134 #8149
2025-08-25 15:34:13 +03:00
Naisila Puka aaa31376e0
Make columnar_chunk_filtering pass consecutive runs (#8147)
The test was not cleaning up after itself and therefore failed on consecutive runs.

Test locally with:
make check-columnar-minimal
\ EXTRA_TESTS='columnar_chunk_filtering columnar_chunk_filtering'
2025-08-25 14:35:37 +03:00
Onur Tirtir 439870f3a9
Fix incorrect usage of TupleDescSize() in #7950, #8120, #8124, #8121 and #8114 (#8146)
In #7950, #8120, #8124, #8121 and #8114, TupleDescSize() was used to
check whether the tuple length is `Natts_<catalog_table_name>`. However
this was wrong because TupleDescSize() returns the size of the
tupledesc, not the length of it (i.e., number of attributes).

Actually `TupleDescSize(tupleDesc) == Natts_<catalog_table_name>` always
returned false, but this didn't cause any problems because using
`tupleDesc->natts - 1` when `tupleDesc->natts ==
Natts_<catalog_table_name>` had the same effect as using
`Anum_<column_added_later> - 1` in that case.

So this also makes me think of always returning `tupleDesc->natts - 1`
(or `tupleDesc->natts - 2` if it's the second-to-last attribute), but
being more explicit seems more useful.

Even more, in the future we should probably switch to a different
implementation if / when we think of adding more columns to those
tables. We should probably scan the non-dropped attributes of the
relation, enumerate them, and return the attribute number of the one
that we're looking for, but that doesn't seem needed right now.
2025-08-22 11:46:06 +00:00
Onur Tirtir 785287c58f
Fix memory corruptions around pg_dist_node accessors after a Citus downgrade is followed by an upgrade (#8144)
Unlike what has been fixed in #7950, #8120, #8124, #8121 and #8114, this
was not an issue in older releases but is a potential issue to be
introduced by the current (13.2) release because in one of recent
commits (#8122) two columns has been added to pg_dist_node. In other
words, none of the older releases since we started supporting downgrades
added new columns to pg_dist_node.

The mentioned PR actually attempted avoiding these kind of issues in one
of the code-paths but not in some others.

So, this PR, avoids memory corruptions around pg_dist_node accessors in
a standardized way (as implemented in other example PRs) and in all
code-paths.
2025-08-22 14:07:44 +03:00
Mehmet YILMAZ 86b5bc6a20
Normalize Actual Rows output in regression tests for PG18 compatibility (#8141)
DESCRIPTION: Normalize Actual Rows output in regression tests for PG18
compatibility

PostgreSQL 18 changed `EXPLAIN ANALYZE` to always print fractional row
counts (e.g. `1.00` instead of `1`).
95dbd827f2
This caused diffs across multiple output formats in Citus regression
tests:

* Text EXPLAIN: `actual rows=50.00` vs `actual rows=50`
* YAML: `Actual Rows: 1.00` vs `Actual Rows: 1`
* XML: `<Actual-Rows>1.00</Actual-Rows>` vs
`<Actual-Rows>1</Actual-Rows>`
* JSON: `"Actual Rows": 1.00` vs `"Actual Rows": 1`
* Placeholders: `rows=N.N` vs `rows=N`

This patch extends `normalize.sed` to strip trailing `.0…` from `Actual
Rows` in all supported formats and collapses placeholder values back to
`N`. With these changes, regression tests produce stable output across
PG15–PG18.

No functional changes to Citus itself — only test normalization was
updated.
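A normalization rule of roughly this shape does the stripping (illustrative; the exact `normalize.sed` entries may differ, and only the text and JSON forms are shown here):

```shell
# Strip trailing fractional parts from row counts so PG18's "rows=50.00"
# and "Actual Rows": 1.00 match the pre-PG18 integer forms.
normalize_rows() {
    sed -E -e 's/(rows=[0-9]+)\.[0-9]+/\1/g' \
           -e 's/("Actual Rows": [0-9]+)\.[0-9]+/\1/g'
}

echo 'Seq Scan on t  (actual rows=50.00 loops=1)' | normalize_rows   # -> rows=50
echo '        "Actual Rows": 1.00,' | normalize_rows                 # -> 1,
```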
2025-08-21 17:47:46 +03:00
Mehmet YILMAZ f1f0b09f73
PG18 - Add BUFFERS OFF to EXPLAIN ANALYZE calls (#8101)
Relevant PG18 commit:
c2a4078eba
- Enable buffer-usage reporting by default in `EXPLAIN ANALYZE` on
PostgreSQL 18 and above.

Solution:
- Introduce the explicit `BUFFERS OFF` option in every existing
regression test to maintain pre-PG18 output consistency.
- This appends, `BUFFERS OFF` to all `EXPLAIN ANALYZE(...)` calls in
src/test/regress/sql and the corresponding .out files.

fixes #8093
2025-08-21 13:48:50 +03:00
Onur Tirtir 683ead9607
Add changelog for 13.2.0 (#8130) 2025-08-21 09:22:42 +00:00
228 changed files with 15510 additions and 2792 deletions


@ -73,7 +73,7 @@ USER citus
# build postgres versions separately for effective parrallelism and caching of already built versions when changing only certain versions
FROM base AS pg15
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.13
RUN MAKEFLAGS="-j $(nproc)" pgenv build 15.14
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
@ -85,7 +85,7 @@ RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg16
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.9
RUN MAKEFLAGS="-j $(nproc)" pgenv build 16.10
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
@ -97,7 +97,7 @@ RUN cp -r .pgenv/src .pgenv/pgsql-* .pgenv/config .pgenv-staging/
RUN rm .pgenv-staging/config/default.conf
FROM base AS pg17
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.5
RUN MAKEFLAGS="-j $(nproc)" pgenv build 17.6
RUN rm .pgenv/src/*.tar*
RUN make -C .pgenv/src/postgresql-*/ clean
RUN make -C .pgenv/src/postgresql-*/src/include install
@ -216,7 +216,7 @@ COPY --chown=citus:citus .psqlrc .
RUN sudo chown --from=root:root citus:citus -R ~
# sets default pg version
RUN pgenv switch 17.5
RUN pgenv switch 17.6
# make connecting to the coordinator easy
ENV PGPORT=9700


@ -2,6 +2,8 @@
"image": "ghcr.io/citusdata/citus-devcontainer:main",
"runArgs": [
"--cap-add=SYS_PTRACE",
"--cap-add=SYS_NICE", // allow NUMA page inquiry
"--security-opt=seccomp=unconfined", // unblocks move_pages() in the container
"--ulimit=core=-1",
],
"forwardPorts": [


@ -13,15 +13,3 @@ runs:
token: ${{ inputs.codecov_token }}
verbose: true
gcov: true
- name: Create codeclimate coverage
run: |-
lcov --directory . --capture --output-file lcov.info
lcov --remove lcov.info -o lcov.info '/usr/*'
sed "s=^SF:$PWD/=SF:=g" -i lcov.info # relative pats are required by codeclimate
mkdir -p /tmp/codeclimate
cc-test-reporter format-coverage -t lcov -o /tmp/codeclimate/${{ inputs.flags }}.json lcov.info
shell: bash
- uses: actions/upload-artifact@v4.6.0
with:
path: "/tmp/codeclimate/*.json"
name: codeclimate-${{ inputs.flags }}


@ -31,12 +31,12 @@ jobs:
pgupgrade_image_name: "ghcr.io/citusdata/pgupgradetester"
style_checker_image_name: "ghcr.io/citusdata/stylechecker"
style_checker_tools_version: "0.8.18"
sql_snapshot_pg_version: "17.5"
image_suffix: "-vb17c33b"
pg15_version: '{ "major": "15", "full": "15.13" }'
pg16_version: '{ "major": "16", "full": "16.9" }'
pg17_version: '{ "major": "17", "full": "17.5" }'
upgrade_pg_versions: "15.13-16.9-17.5"
sql_snapshot_pg_version: "17.6"
image_suffix: "-va20872f"
pg15_version: '{ "major": "15", "full": "15.14" }'
pg16_version: '{ "major": "16", "full": "16.10" }'
pg17_version: '{ "major": "17", "full": "17.6" }'
upgrade_pg_versions: "15.14-16.10-17.6"
steps:
# Since GHA jobs need at least one step we use a noop step here.
- name: Set up parameters
@ -225,10 +225,16 @@ jobs:
runs-on: ubuntu-latest
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root --dns=8.8.8.8
options: >-
--user root
--dns=8.8.8.8
--cap-add=SYS_NICE
--security-opt seccomp=unconfined
# Due to Github creates a default network for each job, we need to use
# --dns= to have similar DNS settings as our other CI systems or local
# machines. Otherwise, we may see different results.
# and grant caps so PG18's NUMA introspection (pg_shmem_allocations_numa -> move_pages)
# doesn't fail with EPERM in CI.
needs:
- params
- build
@ -358,14 +364,20 @@ jobs:
flags: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-citus-upgrade:
name: PG${{ fromJson(needs.params.outputs.pg15_version).major }} - check-citus-upgrade
name: PG${{ fromJson(matrix.pg_version).major }} - check-citus-upgrade
runs-on: ubuntu-latest
container:
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg15_version).full }}${{ needs.params.outputs.image_suffix }}"
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
pg_version:
- ${{ needs.params.outputs.pg15_version }}
- ${{ needs.params.outputs.pg16_version }}
steps:
- uses: actions/checkout@v4
- uses: "./.github/actions/setup_extension"
@ -374,7 +386,7 @@ jobs:
- name: Install and test citus upgrade
run: |-
# run make check-citus-upgrade for all citus versions
# the image has ${CITUS_VERSIONS} set with all verions it contains the binaries of
# the image has ${CITUS_VERSIONS} set with all versions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
@ -385,7 +397,7 @@ jobs:
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
# run make check-citus-upgrade-mixed for all citus versions
# the image has ${CITUS_VERSIONS} set with all verions it contains the binaries of
# the image has ${CITUS_VERSIONS} set with all versions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
@ -404,30 +416,6 @@ jobs:
with:
flags: ${{ env.PG_MAJOR }}_citus_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
upload-coverage:
# secret below is not available for forks so disabling upload action for them
if: ${{ github.event.pull_request.head.repo.full_name == github.repository || github.event_name != 'pull_request' }}
env:
CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
runs-on: ubuntu-latest
container:
image: ${{ needs.params.outputs.test_image_name }}:${{ fromJson(needs.params.outputs.pg17_version).full }}${{ needs.params.outputs.image_suffix }}
needs:
- params
- test-citus
- test-arbitrary-configs
- test-citus-upgrade
- test-pg-upgrade
steps:
- uses: actions/download-artifact@v4.1.8
with:
pattern: codeclimate*
path: codeclimate
merge-multiple: true
- name: Upload coverage results to Code Climate
run: |-
cc-test-reporter sum-coverage codeclimate/*.json -o total.json
cc-test-reporter upload-coverage -i total.json
ch_benchmark:
name: CH Benchmark
if: startsWith(github.ref, 'refs/heads/ch_benchmark/')


@ -60,8 +60,7 @@ jobs:
libzstd-dev \
libzstd1 \
lintian \
postgresql-server-dev-15 \
postgresql-server-dev-all \
postgresql-server-dev-17 \
python3-pip \
python3-setuptools \
wget \


@ -1,3 +1,97 @@
### citus v13.1.1 (Oct 1st, 2025) ###
* Adds support for latest PG minors: 14.19, 15.14, 16.10 (#8142)
* Fixes an assertion failure when an expression in the query references
a CTE (#8106)
* Fixes a bug that causes an unexpected error when executing
repartitioned MERGE (#8201)
* Fixes a bug that causes allowing UPDATE / MERGE queries that may
change the distribution column value (#8214)
* Updates dynamic_library_path automatically when CDC is enabled (#8025)
### citus v13.0.5 (Oct 1st, 2025) ###
* Adds support for latest PG minors: 14.19, 15.14, 16.10 (#7986, #8142)
* Fixes a bug that causes an unexpected error when executing
repartitioned MERGE (#8201)
* Fixes a bug that causes allowing UPDATE / MERGE queries that may
change the distribution column value (#8214)
* Fixes a bug in redundant WHERE clause detection (#8162)
* Updates dynamic_library_path automatically when CDC is enabled (#8025)
### citus v12.1.10 (Oct 1, 2025) ###
* Adds support for latest PG minors: 14.19, 15.14, 16.10 (#7986, #8142)
* Fixes a bug that causes allowing UPDATE / MERGE queries that may
change the distribution column value (#8214)
* Fixes an assertion failure that happens when querying a view that is
defined on distributed tables (#8136)
### citus v12.1.9 (Sep 3, 2025) ###
* Adds a GUC for queries with outer joins and pseudoconstant quals (#8163)
* Updates dynamic_library_path automatically when CDC is enabled (#7715)
### citus v13.2.0 (August 18, 2025) ###
* Adds `citus_add_clone_node()`, `citus_add_clone_node_with_nodeid()`,
`citus_remove_clone_node()` and `citus_remove_clone_node_with_nodeid()`
UDFs to support snapshot-based node splits. This feature allows promoting
a streaming replica (clone) to a primary node and rebalancing shards
between the original and newly promoted node without requiring a full data
copy. This greatly reduces rebalance times for scale-out operations when
the new node already has the data via streaming replication (#8122)
* Improves performance of shard rebalancer by parallelizing moves and removing
bottlenecks that blocked concurrent logical-replication transfers. This
reduces rebalance windows especially for clusters with large reference
tables and allows multiple shard transfers to run in parallel (#7983)
* Adds citus.enable_recurring_outer_join_pushdown GUC (enabled by default)
to allow pushing down LEFT/RIGHT outer joins having a reference table in
the outer side and a distributed table on the inner side (e.g.,
\<reference table\> LEFT JOIN \<distributed table\>) (#7973)
* Adds citus.enable_local_fast_path_query_optimization (enabled by default)
GUC to avoid unnecessary query deparsing to improve performance of
fast-path queries targeting local shards (#8035)
* Adds `citus_stats()` UDF that can be used to retrieve distributed `pg_stats`
for the provided Citus table. (#8026)
* Avoids automatically creating citus_columnar when there are no relations
using it (#8081)
* Makes sure to check if the distribution key is in the target list before
pushing down a query with a union and an outer join (#8092)
* Fixes a bug in EXPLAIN ANALYZE to prevent unintended (duplicate) execution
of the (sub)plans during the explain phase (#8017)
* Fixes potential memory corruptions that could happen when accessing
various catalog tables after a Citus downgrade is followed by a Citus
upgrade (#7950, #8120, #8124, #8121, #8114, #8146)
* Fixes UPDATE statements with indirection and array/jsonb subscripting with
more than one field (#7675)
* Fixes an assertion failure that happens when an expression in the query
references a CTE (#8106)
* Fixes an assertion failure that happens when querying a view that is
defined on distributed tables (#8136)
### citus v13.1.0 (May 30th, 2025) ###
* Adds `citus_stat_counters` view that can be used to query

configure (vendored)

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 13.2devel.
# Generated by GNU Autoconf 2.69 for Citus 14.0devel.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='13.2devel'
PACKAGE_STRING='Citus 13.2devel'
PACKAGE_VERSION='14.0devel'
PACKAGE_STRING='Citus 14.0devel'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -1262,7 +1262,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 13.2devel to adapt to many kinds of systems.
\`configure' configures Citus 14.0devel to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1324,7 +1324,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 13.2devel:";;
short | recursive ) echo "Configuration of Citus 14.0devel:";;
esac
cat <<\_ACEOF
@ -1429,7 +1429,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 13.2devel
Citus configure 14.0devel
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1912,7 +1912,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 13.2devel, which was
It was created by Citus $as_me 14.0devel, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -5393,7 +5393,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 13.2devel, which was
This file was extended by Citus $as_me 14.0devel, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5455,7 +5455,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 13.2devel
Citus config.status 14.0devel
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [13.2devel])
AC_INIT([Citus], [14.0devel])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands

View File

@ -1,6 +1,6 @@
# Columnar extension
comment = 'Citus Columnar extension'
default_version = '13.2-1'
default_version = '14.0-1'
module_pathname = '$libdir/citus_columnar'
relocatable = false
schema = pg_catalog

View File

@ -1556,8 +1556,7 @@ ColumnarPerStripeScanCost(RelOptInfo *rel, Oid relationId, int numberOfColumnsRe
ereport(ERROR, (errmsg("could not open relation with OID %u", relationId)));
}
List *stripeList = StripesForRelfilelocator(RelationPhysicalIdentifier_compat(
relation));
List *stripeList = StripesForRelfilelocator(relation);
RelationClose(relation);
uint32 maxColumnCount = 0;
@ -1614,8 +1613,7 @@ ColumnarTableStripeCount(Oid relationId)
ereport(ERROR, (errmsg("could not open relation with OID %u", relationId)));
}
List *stripeList = StripesForRelfilelocator(RelationPhysicalIdentifier_compat(
relation));
List *stripeList = StripesForRelfilelocator(relation);
int stripeCount = list_length(stripeList);
RelationClose(relation);

View File

@ -125,7 +125,7 @@ static Oid ColumnarChunkGroupRelationId(void);
static Oid ColumnarChunkIndexRelationId(void);
static Oid ColumnarChunkGroupIndexRelationId(void);
static Oid ColumnarNamespaceId(void);
static uint64 LookupStorageId(RelFileLocator relfilelocator);
static uint64 LookupStorageId(Oid relationId, RelFileLocator relfilelocator);
static uint64 GetHighestUsedRowNumber(uint64 storageId);
static void DeleteStorageFromColumnarMetadataTable(Oid metadataTableId,
AttrNumber storageIdAtrrNumber,
@ -606,7 +606,7 @@ ReadColumnarOptions(Oid regclass, ColumnarOptions *options)
* of columnar.chunk.
*/
void
SaveStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
SaveStripeSkipList(Oid relid, RelFileLocator relfilelocator, uint64 stripe,
StripeSkipList *chunkList,
TupleDesc tupleDescriptor)
{
@ -614,11 +614,17 @@ SaveStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
uint32 chunkIndex = 0;
uint32 columnCount = chunkList->columnCount;
uint64 storageId = LookupStorageId(relfilelocator);
uint64 storageId = LookupStorageId(relid, relfilelocator);
Oid columnarChunkOid = ColumnarChunkRelationId();
Relation columnarChunk = table_open(columnarChunkOid, RowExclusiveLock);
ModifyState *modifyState = StartModifyRelation(columnarChunk);
bool pushed_snapshot = false;
if (!ActiveSnapshotSet())
{
PushActiveSnapshot(GetTransactionSnapshot());
pushed_snapshot = true;
}
for (columnIndex = 0; columnIndex < columnCount; columnIndex++)
{
for (chunkIndex = 0; chunkIndex < chunkList->chunkCount; chunkIndex++)
@ -649,20 +655,25 @@ SaveStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
{
values[Anum_columnar_chunk_minimum_value - 1] =
PointerGetDatum(DatumToBytea(chunk->minimumValue,
Attr(tupleDescriptor, columnIndex)));
TupleDescAttr(tupleDescriptor,
columnIndex)));
values[Anum_columnar_chunk_maximum_value - 1] =
PointerGetDatum(DatumToBytea(chunk->maximumValue,
Attr(tupleDescriptor, columnIndex)));
TupleDescAttr(tupleDescriptor,
columnIndex)));
}
else
{
nulls[Anum_columnar_chunk_minimum_value - 1] = true;
nulls[Anum_columnar_chunk_maximum_value - 1] = true;
}
InsertTupleAndEnforceConstraints(modifyState, values, nulls);
}
}
if (pushed_snapshot)
{
PopActiveSnapshot();
}
FinishModifyRelation(modifyState);
table_close(columnarChunk, RowExclusiveLock);
@ -673,10 +684,10 @@ SaveStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
* SaveChunkGroups saves the metadata for given chunk groups in columnar.chunk_group.
*/
void
SaveChunkGroups(RelFileLocator relfilelocator, uint64 stripe,
SaveChunkGroups(Oid relid, RelFileLocator relfilelocator, uint64 stripe,
List *chunkGroupRowCounts)
{
uint64 storageId = LookupStorageId(relfilelocator);
uint64 storageId = LookupStorageId(relid, relfilelocator);
Oid columnarChunkGroupOid = ColumnarChunkGroupRelationId();
Relation columnarChunkGroup = table_open(columnarChunkGroupOid, RowExclusiveLock);
ModifyState *modifyState = StartModifyRelation(columnarChunkGroup);
@ -709,7 +720,7 @@ SaveChunkGroups(RelFileLocator relfilelocator, uint64 stripe,
* ReadStripeSkipList fetches chunk metadata for a given stripe.
*/
StripeSkipList *
ReadStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
ReadStripeSkipList(Relation rel, uint64 stripe,
TupleDesc tupleDescriptor,
uint32 chunkCount, Snapshot snapshot)
{
@ -718,7 +729,8 @@ ReadStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
uint32 columnCount = tupleDescriptor->natts;
ScanKeyData scanKey[2];
uint64 storageId = LookupStorageId(relfilelocator);
uint64 storageId = LookupStorageId(RelationPrecomputeOid(rel),
RelationPhysicalIdentifier_compat(rel));
Oid columnarChunkOid = ColumnarChunkRelationId();
Relation columnarChunk = table_open(columnarChunkOid, AccessShareLock);
@ -807,9 +819,9 @@ ReadStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
datumArray[Anum_columnar_chunk_maximum_value - 1]);
chunk->minimumValue =
ByteaToDatum(minValue, Attr(tupleDescriptor, columnIndex));
ByteaToDatum(minValue, TupleDescAttr(tupleDescriptor, columnIndex));
chunk->maximumValue =
ByteaToDatum(maxValue, Attr(tupleDescriptor, columnIndex));
ByteaToDatum(maxValue, TupleDescAttr(tupleDescriptor, columnIndex));
chunk->hasMinMax = true;
}
@ -1262,11 +1274,26 @@ InsertEmptyStripeMetadataRow(uint64 storageId, uint64 stripeId, uint32 columnCou
* of the given relfilenode.
*/
List *
StripesForRelfilelocator(RelFileLocator relfilelocator)
StripesForRelfilelocator(Relation rel)
{
uint64 storageId = LookupStorageId(relfilelocator);
uint64 storageId = LookupStorageId(RelationPrecomputeOid(rel),
RelationPhysicalIdentifier_compat(rel));
return ReadDataFileStripeList(storageId, GetTransactionSnapshot());
/*
* PG18 requires snapshot to be active or registered before it's used
* Without this, we hit
* Assert(snapshot->regd_count > 0 || snapshot->active_count > 0);
* when reading columnar stripes.
* Relevant PG18 commit:
* 8076c00592e40e8dbd1fce7a98b20d4bf075e4ba
*/
Snapshot snapshot = RegisterSnapshot(GetTransactionSnapshot());
List *readDataFileStripeList = ReadDataFileStripeList(storageId, snapshot);
UnregisterSnapshot(snapshot);
return readDataFileStripeList;
}
@ -1279,9 +1306,10 @@ StripesForRelfilelocator(RelFileLocator relfilelocator)
* returns 0.
*/
uint64
GetHighestUsedAddress(RelFileLocator relfilelocator)
GetHighestUsedAddress(Relation rel)
{
uint64 storageId = LookupStorageId(relfilelocator);
uint64 storageId = LookupStorageId(RelationPrecomputeOid(rel),
RelationPhysicalIdentifier_compat(rel));
uint64 highestUsedAddress = 0;
uint64 highestUsedId = 0;
@ -1291,6 +1319,24 @@ GetHighestUsedAddress(RelFileLocator relfilelocator)
}
/*
* If relid hasn't been resolved yet, we use RelidByRelfilenumber to get
* the correct relid value.
*
* The precomputed relid is mainly needed for temp relations: since PG18
* (backpatched through PG13), RelidByRelfilenumber skips temp relations,
* so we need an alternative way to resolve the relid for temp objects.
*/
Oid
ColumnarRelationId(Oid relid, RelFileLocator relfilelocator)
{
return OidIsValid(relid) ? relid : RelidByRelfilenumber(RelationTablespace_compat(
relfilelocator),
RelationPhysicalIdentifierNumber_compat(
relfilelocator));
}
/*
* GetHighestUsedAddressAndId returns the highest used address and id for
* the given relfilenode across all active and inactive transactions.
@ -1379,9 +1425,6 @@ static StripeMetadata *
UpdateStripeMetadataRow(uint64 storageId, uint64 stripeId, uint64 fileOffset,
uint64 dataLength, uint64 rowCount, uint64 chunkCount)
{
SnapshotData dirtySnapshot;
InitDirtySnapshot(dirtySnapshot);
ScanKeyData scanKey[2];
ScanKeyInit(&scanKey[0], Anum_columnar_stripe_storageid,
BTEqualStrategyNumber, F_INT8EQ, Int64GetDatum(storageId));
@ -1390,23 +1433,16 @@ UpdateStripeMetadataRow(uint64 storageId, uint64 stripeId, uint64 fileOffset,
Oid columnarStripesOid = ColumnarStripeRelationId();
#if PG_VERSION_NUM >= 180000
/* CatalogTupleUpdate performs a normal heap UPDATE → RowExclusiveLock */
const LOCKMODE openLockMode = RowExclusiveLock;
#else
/* Inplace update never changed tuple length → AccessShareLock was enough */
const LOCKMODE openLockMode = AccessShareLock;
#endif
Relation columnarStripes = table_open(columnarStripesOid, openLockMode);
Relation columnarStripes = table_open(columnarStripesOid, AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(columnarStripes);
Oid indexId = ColumnarStripePKeyIndexRelationId();
bool indexOk = OidIsValid(indexId);
SysScanDesc scanDescriptor = systable_beginscan(columnarStripes, indexId, indexOk,
&dirtySnapshot, 2, scanKey);
void *state;
HeapTuple tuple;
systable_inplace_update_begin(columnarStripes, indexId, indexOk, NULL,
2, scanKey, &tuple, &state);
static bool loggedSlowMetadataAccessWarning = false;
if (!indexOk && !loggedSlowMetadataAccessWarning)
@ -1415,8 +1451,7 @@ UpdateStripeMetadataRow(uint64 storageId, uint64 stripeId, uint64 fileOffset,
loggedSlowMetadataAccessWarning = true;
}
HeapTuple oldTuple = systable_getnext(scanDescriptor);
if (!HeapTupleIsValid(oldTuple))
if (!HeapTupleIsValid(tuple))
{
ereport(ERROR, (errmsg("attempted to modify an unexpected stripe, "
"columnar storage with id=" UINT64_FORMAT
@ -1424,6 +1459,11 @@ UpdateStripeMetadataRow(uint64 storageId, uint64 stripeId, uint64 fileOffset,
storageId, stripeId)));
}
/*
* systable_inplace_update_finish already doesn't allow changing size of the original
* tuple, so we don't allow setting any Datum's to NULL values.
*/
Datum *newValues = (Datum *) palloc(tupleDescriptor->natts * sizeof(Datum));
bool *newNulls = (bool *) palloc0(tupleDescriptor->natts * sizeof(bool));
bool *update = (bool *) palloc0(tupleDescriptor->natts * sizeof(bool));
@ -1438,43 +1478,21 @@ UpdateStripeMetadataRow(uint64 storageId, uint64 stripeId, uint64 fileOffset,
newValues[Anum_columnar_stripe_row_count - 1] = UInt64GetDatum(rowCount);
newValues[Anum_columnar_stripe_chunk_count - 1] = Int32GetDatum(chunkCount);
HeapTuple modifiedTuple = heap_modify_tuple(oldTuple,
tupleDescriptor,
newValues,
newNulls,
update);
tuple = heap_modify_tuple(tuple,
tupleDescriptor,
newValues,
newNulls,
update);
#if PG_VERSION_NUM < PG_VERSION_18
systable_inplace_update_finish(state, tuple);
/*
* heap_inplace_update already doesn't allow changing size of the original
* tuple, so we don't allow setting any Datum's to NULL values.
*/
heap_inplace_update(columnarStripes, modifiedTuple);
/*
* Existing tuple now contains modifications, because we used
* heap_inplace_update().
*/
HeapTuple newTuple = oldTuple;
#else
/* Regular catalog UPDATE keeps indexes in sync */
CatalogTupleUpdate(columnarStripes, &oldTuple->t_self, modifiedTuple);
HeapTuple newTuple = modifiedTuple;
#endif
StripeMetadata *modifiedStripeMetadata = BuildStripeMetadata(columnarStripes,
tuple);
CommandCounterIncrement();
/*
* Must not pass modifiedTuple, because BuildStripeMetadata expects a real
* heap tuple with MVCC fields.
*/
StripeMetadata *modifiedStripeMetadata =
BuildStripeMetadata(columnarStripes, newTuple);
systable_endscan(scanDescriptor);
table_close(columnarStripes, openLockMode);
heap_freetuple(tuple);
table_close(columnarStripes, AccessShareLock);
pfree(newValues);
pfree(newNulls);
@ -1594,7 +1612,7 @@ BuildStripeMetadata(Relation columnarStripes, HeapTuple heapTuple)
* metadata tables.
*/
void
DeleteMetadataRows(RelFileLocator relfilelocator)
DeleteMetadataRows(Relation rel)
{
/*
* During a restore for binary upgrade, metadata tables and indexes may or
@ -1605,7 +1623,8 @@ DeleteMetadataRows(RelFileLocator relfilelocator)
return;
}
uint64 storageId = LookupStorageId(relfilelocator);
uint64 storageId = LookupStorageId(RelationPrecomputeOid(rel),
RelationPhysicalIdentifier_compat(rel));
DeleteStorageFromColumnarMetadataTable(ColumnarStripeRelationId(),
Anum_columnar_stripe_storageid,
@ -2004,13 +2023,11 @@ ColumnarNamespaceId(void)
* false if the relation doesn't have a meta page yet.
*/
static uint64
LookupStorageId(RelFileLocator relfilelocator)
LookupStorageId(Oid relid, RelFileLocator relfilelocator)
{
Oid relationId = RelidByRelfilenumber(RelationTablespace_compat(relfilelocator),
RelationPhysicalIdentifierNumber_compat(
relfilelocator));
relid = ColumnarRelationId(relid, relfilelocator);
Relation relation = relation_open(relationId, AccessShareLock);
Relation relation = relation_open(relid, AccessShareLock);
uint64 storageId = ColumnarStorageGetStorageId(relation, false);
table_close(relation, AccessShareLock);
@ -2132,7 +2149,7 @@ GetHighestUsedRowNumber(uint64 storageId)
static int
GetFirstRowNumberAttrIndexInColumnarStripe(TupleDesc tupleDesc)
{
return TupleDescSize(tupleDesc) == Natts_columnar_stripe
return tupleDesc->natts == Natts_columnar_stripe
? (Anum_columnar_stripe_first_row_number - 1)
: tupleDesc->natts - 1;
}

View File

@ -986,8 +986,7 @@ ColumnarTableRowCount(Relation relation)
{
ListCell *stripeMetadataCell = NULL;
uint64 totalRowCount = 0;
List *stripeList = StripesForRelfilelocator(RelationPhysicalIdentifier_compat(
relation));
List *stripeList = StripesForRelfilelocator(relation);
foreach(stripeMetadataCell, stripeList)
{
@ -1015,8 +1014,7 @@ LoadFilteredStripeBuffers(Relation relation, StripeMetadata *stripeMetadata,
bool *projectedColumnMask = ProjectedColumnMask(columnCount, projectedColumnList);
StripeSkipList *stripeSkipList = ReadStripeSkipList(RelationPhysicalIdentifier_compat(
relation),
StripeSkipList *stripeSkipList = ReadStripeSkipList(relation,
stripeMetadata->id,
tupleDescriptor,
stripeMetadata->chunkCount,

View File

@ -872,7 +872,7 @@ columnar_relation_set_new_filelocator(Relation rel,
RelationPhysicalIdentifier_compat(rel)),
GetCurrentSubTransactionId());
DeleteMetadataRows(RelationPhysicalIdentifier_compat(rel));
DeleteMetadataRows(rel);
}
*freezeXid = RecentXmin;
@ -897,7 +897,7 @@ columnar_relation_nontransactional_truncate(Relation rel)
NonTransactionDropWriteState(RelationPhysicalIdentifierNumber_compat(relfilelocator));
/* Delete old relfilenode metadata */
DeleteMetadataRows(relfilelocator);
DeleteMetadataRows(rel);
/*
* No need to set new relfilenode, since the table was created in this
@ -960,8 +960,7 @@ columnar_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,
ColumnarOptions columnarOptions = { 0 };
ReadColumnarOptions(OldHeap->rd_id, &columnarOptions);
ColumnarWriteState *writeState = ColumnarBeginWrite(RelationPhysicalIdentifier_compat(
NewHeap),
ColumnarWriteState *writeState = ColumnarBeginWrite(NewHeap,
columnarOptions,
targetDesc);
@ -1012,7 +1011,7 @@ NeededColumnsList(TupleDesc tupdesc, Bitmapset *attr_needed)
for (int i = 0; i < tupdesc->natts; i++)
{
if (Attr(tupdesc, i)->attisdropped)
if (TupleDescAttr(tupdesc, i)->attisdropped)
{
continue;
}
@ -1036,8 +1035,7 @@ NeededColumnsList(TupleDesc tupdesc, Bitmapset *attr_needed)
static uint64
ColumnarTableTupleCount(Relation relation)
{
List *stripeList = StripesForRelfilelocator(RelationPhysicalIdentifier_compat(
relation));
List *stripeList = StripesForRelfilelocator(relation);
uint64 tupleCount = 0;
ListCell *lc = NULL;
@ -1228,7 +1226,6 @@ static void
LogRelationStats(Relation rel, int elevel)
{
ListCell *stripeMetadataCell = NULL;
RelFileLocator relfilelocator = RelationPhysicalIdentifier_compat(rel);
StringInfo infoBuf = makeStringInfo();
int compressionStats[COMPRESSION_COUNT] = { 0 };
@ -1239,19 +1236,23 @@ LogRelationStats(Relation rel, int elevel)
uint64 droppedChunksWithData = 0;
uint64 totalDecompressedLength = 0;
List *stripeList = StripesForRelfilelocator(relfilelocator);
List *stripeList = StripesForRelfilelocator(rel);
int stripeCount = list_length(stripeList);
foreach(stripeMetadataCell, stripeList)
{
StripeMetadata *stripe = lfirst(stripeMetadataCell);
StripeSkipList *skiplist = ReadStripeSkipList(relfilelocator, stripe->id,
Snapshot snapshot = RegisterSnapshot(GetTransactionSnapshot());
StripeSkipList *skiplist = ReadStripeSkipList(rel, stripe->id,
RelationGetDescr(rel),
stripe->chunkCount,
GetTransactionSnapshot());
snapshot);
UnregisterSnapshot(snapshot);
for (uint32 column = 0; column < skiplist->columnCount; column++)
{
bool attrDropped = Attr(tupdesc, column)->attisdropped;
bool attrDropped = TupleDescAttr(tupdesc, column)->attisdropped;
for (uint32 chunk = 0; chunk < skiplist->chunkCount; chunk++)
{
ColumnChunkSkipNode *skipnode =
@ -1381,8 +1382,7 @@ TruncateColumnar(Relation rel, int elevel)
* new stripes be added beyond highestPhysicalAddress while
* we're truncating.
*/
uint64 newDataReservation = Max(GetHighestUsedAddress(
RelationPhysicalIdentifier_compat(rel)) + 1,
uint64 newDataReservation = Max(GetHighestUsedAddress(rel) + 1,
ColumnarFirstLogicalOffset);
BlockNumber old_rel_pages = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);
@ -2150,7 +2150,7 @@ ColumnarTableDropHook(Oid relid)
Relation rel = table_open(relid, AccessExclusiveLock);
RelFileLocator relfilelocator = RelationPhysicalIdentifier_compat(rel);
DeleteMetadataRows(relfilelocator);
DeleteMetadataRows(rel);
DeleteColumnarTableOptions(rel->rd_id, true);
MarkRelfilenumberDropped(RelationPhysicalIdentifierNumber_compat(relfilelocator),
@ -2634,7 +2634,7 @@ detoast_values(TupleDesc tupleDesc, Datum *orig_values, bool *isnull)
for (int i = 0; i < tupleDesc->natts; i++)
{
if (!isnull[i] && Attr(tupleDesc, i)->attlen == -1 &&
if (!isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1 &&
VARATT_IS_EXTENDED(values[i]))
{
/* make a copy */

View File

@ -48,6 +48,12 @@ struct ColumnarWriteState
FmgrInfo **comparisonFunctionArray;
RelFileLocator relfilelocator;
/*
* We can't rely on RelidByRelfilenumber for temp tables since PG18
* (the change was backpatched through PG13).
*/
Oid temp_relid;
MemoryContext stripeWriteContext;
MemoryContext perTupleContext;
StripeBuffers *stripeBuffers;
@ -93,10 +99,12 @@ static StringInfo CopyStringInfo(StringInfo sourceString);
* data load operation.
*/
ColumnarWriteState *
ColumnarBeginWrite(RelFileLocator relfilelocator,
ColumnarBeginWrite(Relation rel,
ColumnarOptions options,
TupleDesc tupleDescriptor)
{
RelFileLocator relfilelocator = RelationPhysicalIdentifier_compat(rel);
/* get comparison function pointers for each of the columns */
uint32 columnCount = tupleDescriptor->natts;
FmgrInfo **comparisonFunctionArray = palloc0(columnCount * sizeof(FmgrInfo *));
@ -134,6 +142,7 @@ ColumnarBeginWrite(RelFileLocator relfilelocator,
ColumnarWriteState *writeState = palloc0(sizeof(ColumnarWriteState));
writeState->relfilelocator = relfilelocator;
writeState->temp_relid = RelationPrecomputeOid(rel);
writeState->options = options;
writeState->tupleDescriptor = CreateTupleDescCopy(tupleDescriptor);
writeState->comparisonFunctionArray = comparisonFunctionArray;
@ -183,10 +192,9 @@ ColumnarWriteRow(ColumnarWriteState *writeState, Datum *columnValues, bool *colu
writeState->stripeSkipList = stripeSkipList;
writeState->compressionBuffer = makeStringInfo();
Oid relationId = RelidByRelfilenumber(RelationTablespace_compat(
writeState->relfilelocator),
RelationPhysicalIdentifierNumber_compat(
writeState->relfilelocator));
Oid relationId = ColumnarRelationId(writeState->temp_relid,
writeState->relfilelocator);
Relation relation = relation_open(relationId, NoLock);
writeState->emptyStripeReservation =
ReserveEmptyStripe(relation, columnCount, chunkRowCount,
@ -404,10 +412,9 @@ FlushStripe(ColumnarWriteState *writeState)
elog(DEBUG1, "Flushing Stripe of size %d", stripeBuffers->rowCount);
Oid relationId = RelidByRelfilenumber(RelationTablespace_compat(
writeState->relfilelocator),
RelationPhysicalIdentifierNumber_compat(
writeState->relfilelocator));
Oid relationId = ColumnarRelationId(writeState->temp_relid,
writeState->relfilelocator);
Relation relation = relation_open(relationId, NoLock);
/*
@ -499,10 +506,12 @@ FlushStripe(ColumnarWriteState *writeState)
}
}
SaveChunkGroups(writeState->relfilelocator,
SaveChunkGroups(writeState->temp_relid,
writeState->relfilelocator,
stripeMetadata->id,
writeState->chunkGroupRowCounts);
SaveStripeSkipList(writeState->relfilelocator,
SaveStripeSkipList(writeState->temp_relid,
writeState->relfilelocator,
stripeMetadata->id,
stripeSkipList, tupleDescriptor);

View File

@ -0,0 +1,2 @@
-- citus_columnar--13.2-1--14.0-1
-- bump version to 14.0-1

View File

@ -0,0 +1,2 @@
-- citus_columnar--14.0-1--13.2-1
-- downgrade version to 13.2-1

View File

@ -191,8 +191,7 @@ columnar_init_write_state(Relation relation, TupleDesc tupdesc,
ReadColumnarOptions(tupSlotRelationId, &columnarOptions);
SubXidWriteState *stackEntry = palloc0(sizeof(SubXidWriteState));
stackEntry->writeState = ColumnarBeginWrite(RelationPhysicalIdentifier_compat(
relation),
stackEntry->writeState = ColumnarBeginWrite(relation,
columnarOptions,
tupdesc);
stackEntry->subXid = currentSubXid;

View File

@ -355,6 +355,15 @@ DEBUG: Total number of commands sent over the session 8: 1 to node localhost:97
(0 rows)
```
### Delaying the Fast Path Plan
As of Citus 13.2, if it can be determined at plan-time that a fast path query is against a local shard then a shortcut can be taken so that deparse and parse/plan of the shard query is avoided. Citus must be in MX mode and the shard must be local to the Citus node processing the query. If so, the OID of the distributed table is replaced by the OID of the shard in the parse tree. The parse tree is then given to the Postgres planner which returns a plan that is stored in the distributed plan's task. That plan can be repeatedly used by the local executor (described in the next section), avoiding the need to deparse and plan the shard query on each execution.
We call this delayed fast path planning: when a query is eligible for fast path planning, the call to `FastPathPlanner()` is deferred if the following properties hold:
- The query is a SELECT or UPDATE on a distributed table (schema or column sharded) or Citus managed local table
- The query has no volatile functions
If both hold, `FastPathRouterQuery()` sets a flag indicating that building the fast path plan should be delayed until after the worker job has been created. At that point the router planner uses `CheckAndBuildDelayedFastPathPlan()` to check that the task's shard placement is local (and not a dummy placement) and that the metadata of the shard table and the distributed table are consistent (no DDL in progress on the distributed table). If so, the parse tree, with the OID of the distributed table replaced by the OID of the shard table, is fed to `standard_planner()` and the resulting plan is saved in the task. Otherwise (the worker job has been marked for deferred pruning, the shard is not local, or it is local but swapping OIDs is not safe), `CheckAndBuildDelayedFastPathPlan()` calls `FastPathPlanner()` to ensure a complete plan context. Reference tables are not currently supported, though this may be relaxed for SELECT statements in the future. Delayed fast path planning can be disabled by turning off `citus.enable_local_fast_path_query_optimization` (on by default).
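The optimization can be toggled per session via the GUC; a minimal sketch (the table name and filter value are illustrative, not from a specific test):

```sql
-- Disable delayed fast path planning for this session; Citus falls back
-- to deparsing and planning the shard query on each execution.
SET citus.enable_local_fast_path_query_optimization = off;

-- A fast-path candidate: a single-shard SELECT filtering on the
-- distribution key. With the GUC on (the default) and the shard local to
-- the node processing the query, the shard's OID is swapped into the
-- parse tree and the plan is built once and reused by the local executor.
SELECT * FROM users_table WHERE user_id = 42;
```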
## Router Planner in Citus
@ -788,14 +797,13 @@ WHERE l.user_id = o.user_id AND o.primary_key = 55;
### Ref table LEFT JOIN distributed table JOINs via recursive planning
### Outer joins between reference and distributed tables
Very much like local-distributed table joins, Citus can't push down queries formatted as:
In general, when the outer side of an outer join is a recurring tuple (e.g., reference table, intermediate results, or set returning functions), it is not safe to push down the join.
```sql
"... ref_table LEFT JOIN distributed_table ..."
"... distributed_table RIGHT JOIN ref_table ..."
```
This is the case when the outer side is a recurring tuple (e.g., reference table, intermediate results, or set returning functions).
In these situations, Citus recursively plans the "distributed" part of the join. Even though it may seem excessive to recursively plan a distributed table, remember that Citus pushes down the filters and projections. Functions involved here include `RequiredAttrNumbersForRelation()` and `ReplaceRTERelationWithRteSubquery()`.
The core function handling this logic is `RecursivelyPlanRecurringTupleOuterJoinWalker()`. There are likely numerous optimizations possible (e.g., first pushing down an inner JOIN then an outer join), but these have not been implemented due to their complexity.
@ -819,6 +827,45 @@ DEBUG: Wrapping relation "orders_table" "o" to a subquery
DEBUG: generating subplan 45_1 for subquery SELECT order_id, status FROM public.orders_table o WHERE true
```
As of Citus 13.2, under certain conditions, Citus can push down these types of LEFT and RIGHT outer joins by injecting constraints—derived from the shard intervals of distributed tables—into shard queries for the reference table. The eligibility rules for pushdown are defined in `CanPushdownRecurringOuterJoin()`, while the logic for computing and injecting the constraints is implemented in `UpdateWhereClauseToPushdownRecurringOuterJoin()`.
#### Example Query
In the example below, Citus pushes down the query by injecting interval constraints on the reference table. The injected constraints are visible in the EXPLAIN output.
```sql
SELECT pc.category_name, count(pt.product_id)
FROM product_categories pc
LEFT JOIN products_table pt ON pc.category_id = pt.product_id
GROUP BY pc.category_name;
```
#### Debug Messages
```
DEBUG: Router planner cannot handle multi-shard select queries
DEBUG: a push down safe left join with recurring left side
```
#### Explain Output
```
HashAggregate
Group Key: remote_scan.category_name
-> Custom Scan (Citus Adaptive)
Task Count: 32
Tasks Shown: One of 32
-> Task
Node: host=localhost port=9701 dbname=ebru
-> HashAggregate
Group Key: pc.category_name
-> Hash Right Join
Hash Cond: (pt.product_id = pc.category_id)
-> Seq Scan on products_table_102072 pt
-> Hash
-> Seq Scan on product_categories_102106 pc
Filter: ((category_id IS NULL) OR ((btint4cmp('-2147483648'::integer, hashint8((category_id)::bigint)) < 0) AND (btint4cmp(hashint8((category_id)::bigint), '-2013265921'::integer) <= 0)))
```
### Recursive Planning When FROM Clause has Reference Table (or Recurring Tuples)
This section discusses a specific scenario in Citus's recursive query planning: handling queries where the main query's `FROM` clause is recurring, but there are subqueries in the `SELECT` or `WHERE` clauses involving distributed tables.

View File

@ -1,6 +1,6 @@
# Citus extension
comment = 'Citus distributed database'
default_version = '13.2-1'
default_version = '14.0-1'
module_pathname = '$libdir/citus'
relocatable = false
schema = pg_catalog

View File

@ -1927,14 +1927,10 @@ GetNonGeneratedStoredColumnNameList(Oid relationId)
for (int columnIndex = 0; columnIndex < tupleDescriptor->natts; columnIndex++)
{
Form_pg_attribute currentColumn = TupleDescAttr(tupleDescriptor, columnIndex);
if (currentColumn->attisdropped)
{
/* skip dropped columns */
continue;
}
if (currentColumn->attgenerated == ATTRIBUTE_GENERATED_STORED)
if (IsDroppedOrGenerated(currentColumn))
{
/* skip dropped or generated columns */
continue;
}

View File

@ -19,6 +19,7 @@
#include "nodes/parsenodes.h"
#include "tcop/utility.h"
#include "distributed/citus_depended_object.h"
#include "distributed/commands.h"
#include "distributed/commands/utility_hook.h"
#include "distributed/deparser.h"
@ -63,6 +64,13 @@ PostprocessCreateDistributedObjectFromCatalogStmt(Node *stmt, const char *queryS
return NIL;
}
if (ops->qualify && DistOpsValidityState(stmt, ops) ==
ShouldQualifyAfterLocalCreation)
{
/* qualify the statement after local creation */
ops->qualify(stmt);
}
List *addresses = GetObjectAddressListFromParseTree(stmt, false, true);
/* the code-path only supports a single object */

View File

@ -175,8 +175,9 @@ static bool DistributionColumnUsesNumericColumnNegativeScale(TupleDesc relationD
static int numeric_typmod_scale(int32 typmod);
static bool is_valid_numeric_typmod(int32 typmod);
static bool DistributionColumnUsesGeneratedStoredColumn(TupleDesc relationDesc,
Var *distributionColumn);
static void DistributionColumnIsGeneratedCheck(TupleDesc relationDesc,
Var *distributionColumn,
const char *relationName);
static bool CanUseExclusiveConnections(Oid relationId, bool localTableEmpty);
static uint64 DoCopyFromLocalTableIntoShards(Relation distributedRelation,
DestReceiver *copyDest,
@ -2103,13 +2104,10 @@ EnsureRelationCanBeDistributed(Oid relationId, Var *distributionColumn,
/* verify target relation is not distributed by a generated stored column
*/
if (distributionMethod != DISTRIBUTE_BY_NONE &&
DistributionColumnUsesGeneratedStoredColumn(relationDesc, distributionColumn))
if (distributionMethod != DISTRIBUTE_BY_NONE)
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot distribute relation: %s", relationName),
errdetail("Distribution column must not use GENERATED ALWAYS "
"AS (...) STORED.")));
DistributionColumnIsGeneratedCheck(relationDesc, distributionColumn,
relationName);
}
/* verify target relation is not distributed by a column of type numeric with negative scale */
@ -2829,9 +2827,7 @@ TupleDescColumnNameList(TupleDesc tupleDescriptor)
Form_pg_attribute currentColumn = TupleDescAttr(tupleDescriptor, columnIndex);
char *columnName = NameStr(currentColumn->attname);
if (currentColumn->attisdropped ||
currentColumn->attgenerated == ATTRIBUTE_GENERATED_STORED
)
if (IsDroppedOrGenerated(currentColumn))
{
continue;
}
@ -2893,22 +2889,43 @@ DistributionColumnUsesNumericColumnNegativeScale(TupleDesc relationDesc,
/*
* DistributionColumnUsesGeneratedStoredColumn returns whether a given relation uses
* GENERATED ALWAYS AS (...) STORED on distribution column
* DistributionColumnIsGeneratedCheck throws an error if a given relation uses
* GENERATED ALWAYS AS (...) STORED | VIRTUAL on distribution column
*/
static bool
DistributionColumnUsesGeneratedStoredColumn(TupleDesc relationDesc,
Var *distributionColumn)
static void
DistributionColumnIsGeneratedCheck(TupleDesc relationDesc,
Var *distributionColumn,
const char *relationName)
{
Form_pg_attribute attributeForm = TupleDescAttr(relationDesc,
distributionColumn->varattno - 1);
if (attributeForm->attgenerated == ATTRIBUTE_GENERATED_STORED)
switch (attributeForm->attgenerated)
{
return true;
}
case ATTRIBUTE_GENERATED_STORED:
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot distribute relation: %s", relationName),
errdetail("Distribution column must not use GENERATED ALWAYS "
"AS (...) STORED.")));
break;
}
return false;
#if PG_VERSION_NUM >= PG_VERSION_18
case ATTRIBUTE_GENERATED_VIRTUAL:
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot distribute relation: %s", relationName),
errdetail("Distribution column must not use GENERATED ALWAYS "
"AS (...) VIRTUAL.")));
break;
}
#endif
default:
{
break;
}
}
}

View File

@ -854,8 +854,11 @@ PostprocessIndexStmt(Node *node, const char *queryString)
table_close(relation, NoLock);
index_close(indexRelation, NoLock);
PushActiveSnapshot(GetTransactionSnapshot());
/* mark index as invalid, in-place (cannot be rolled back) */
index_set_state_flags(indexRelationId, INDEX_DROP_CLEAR_VALID);
PopActiveSnapshot();
/* re-open a transaction command from here on out */
CommitTransactionCommand();
@ -1370,8 +1373,11 @@ MarkIndexValid(IndexStmt *indexStmt)
schemaId);
Relation indexRelation = index_open(indexRelationId, RowExclusiveLock);
PushActiveSnapshot(GetTransactionSnapshot());
/* mark index as valid, in-place (cannot be rolled back) */
index_set_state_flags(indexRelationId, INDEX_CREATE_SET_VALID);
PopActiveSnapshot();
table_close(relation, NoLock);
index_close(indexRelation, NoLock);

View File

@ -350,7 +350,6 @@ static void LogLocalCopyToRelationExecution(uint64 shardId);
static void LogLocalCopyToFileExecution(uint64 shardId);
static void ErrorIfMergeInCopy(CopyStmt *copyStatement);
/* exports for SQL callable functions */
PG_FUNCTION_INFO_V1(citus_text_send_as_jsonb);
@ -484,9 +483,7 @@ CopyToExistingShards(CopyStmt *copyStatement, QueryCompletion *completionTag)
Form_pg_attribute currentColumn = TupleDescAttr(tupleDescriptor, columnIndex);
char *columnName = NameStr(currentColumn->attname);
if (currentColumn->attisdropped ||
currentColumn->attgenerated == ATTRIBUTE_GENERATED_STORED
)
if (IsDroppedOrGenerated(currentColumn))
{
continue;
}
@ -804,9 +801,7 @@ CanUseBinaryCopyFormat(TupleDesc tupleDescription)
{
Form_pg_attribute currentColumn = TupleDescAttr(tupleDescription, columnIndex);
if (currentColumn->attisdropped ||
currentColumn->attgenerated == ATTRIBUTE_GENERATED_STORED
)
if (IsDroppedOrGenerated(currentColumn))
{
continue;
}
@ -1316,9 +1311,7 @@ TypeArrayFromTupleDescriptor(TupleDesc tupleDescriptor)
for (int columnIndex = 0; columnIndex < columnCount; columnIndex++)
{
Form_pg_attribute attr = TupleDescAttr(tupleDescriptor, columnIndex);
if (attr->attisdropped ||
attr->attgenerated == ATTRIBUTE_GENERATED_STORED
)
if (IsDroppedOrGenerated(attr))
{
typeArray[columnIndex] = InvalidOid;
}
@ -1486,9 +1479,7 @@ AppendCopyRowData(Datum *valueArray, bool *isNullArray, TupleDesc rowDescriptor,
value = CoerceColumnValue(value, &columnCoercionPaths[columnIndex]);
}
if (currentColumn->attisdropped ||
currentColumn->attgenerated == ATTRIBUTE_GENERATED_STORED
)
if (IsDroppedOrGenerated(currentColumn))
{
continue;
}
@ -1607,9 +1598,7 @@ AvailableColumnCount(TupleDesc tupleDescriptor)
{
Form_pg_attribute currentColumn = TupleDescAttr(tupleDescriptor, columnIndex);
if (!currentColumn->attisdropped &&
currentColumn->attgenerated != ATTRIBUTE_GENERATED_STORED
)
if (!IsDroppedOrGenerated(currentColumn))
{
columnCount++;
}
@ -3999,3 +3988,20 @@ UnclaimCopyConnections(List *connectionStateList)
UnclaimConnection(connectionState->connection);
}
}
/*
* IsDroppedOrGenerated - helper function for determining if an attribute is
* dropped or generated. Used by COPY and Citus DDL to skip such columns.
*/
inline bool
IsDroppedOrGenerated(Form_pg_attribute attr)
{
/*
* Return true if the "is dropped" flag is set, or if the generated-column
* flag is not the default NUL character (its value is 's' for
* ATTRIBUTE_GENERATED_STORED, or possibly 'v' for
* ATTRIBUTE_GENERATED_VIRTUAL on PG18+).
*/
return attr->attisdropped || (attr->attgenerated != '\0');
}

View File

@ -196,6 +196,27 @@ BuildCreatePublicationStmt(Oid publicationId)
-1);
createPubStmt->options = lappend(createPubStmt->options, pubViaRootOption);
/* WITH (publish_generated_columns = ...) option (PG18+) */
#if PG_VERSION_NUM >= PG_VERSION_18
if (publicationForm->pubgencols == 's') /* stored */
{
DefElem *pubGenColsOption =
makeDefElem("publish_generated_columns",
(Node *) makeString("stored"),
-1);
createPubStmt->options =
lappend(createPubStmt->options, pubGenColsOption);
}
else if (publicationForm->pubgencols != 'n') /* 'n' = none (default) */
{
ereport(ERROR,
(errmsg("unexpected pubgencols value '%c' for publication %u",
publicationForm->pubgencols, publicationId)));
}
#endif
/* WITH (publish = 'insert, update, delete, truncate') option */
List *publishList = NIL;

View File

@ -177,8 +177,7 @@ ExtractDefaultColumnsAndOwnedSequences(Oid relationId, List **columnNameList,
{
Form_pg_attribute attributeForm = TupleDescAttr(tupleDescriptor, attributeIndex);
if (attributeForm->attisdropped ||
attributeForm->attgenerated == ATTRIBUTE_GENERATED_STORED)
if (IsDroppedOrGenerated(attributeForm))
{
/* skip dropped columns and columns with GENERATED ALWAYS AS expressions */
continue;

View File

@ -69,7 +69,15 @@ PreprocessCreateStatisticsStmt(Node *node, const char *queryString,
{
CreateStatsStmt *stmt = castNode(CreateStatsStmt, node);
RangeVar *relation = (RangeVar *) linitial(stmt->relations);
Node *relationNode = (Node *) linitial(stmt->relations);
if (!IsA(relationNode, RangeVar))
{
return NIL;
}
RangeVar *relation = (RangeVar *) relationNode;
Oid relationId = RangeVarGetRelid(relation, ShareUpdateExclusiveLock, false);
if (!IsCitusTable(relationId) || !ShouldPropagate())

View File

@ -48,21 +48,27 @@ typedef struct CitusVacuumParams
#endif
} CitusVacuumParams;
/*
* Information we track per VACUUM/ANALYZE target relation.
*/
typedef struct CitusVacuumRelation
{
VacuumRelation *vacuumRelation;
Oid relationId;
} CitusVacuumRelation;
/* Local functions forward declarations for processing distributed table commands */
static bool IsDistributedVacuumStmt(List *vacuumRelationIdList);
static bool IsDistributedVacuumStmt(List *vacuumRelationList);
static List * VacuumTaskList(Oid relationId, CitusVacuumParams vacuumParams,
List *vacuumColumnList);
static char * DeparseVacuumStmtPrefix(CitusVacuumParams vacuumParams);
static char * DeparseVacuumColumnNames(List *columnNameList);
static List * VacuumColumnList(VacuumStmt *vacuumStmt, int relationIndex);
static List * ExtractVacuumTargetRels(VacuumStmt *vacuumStmt);
static void ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt, List *relationIdList,
static void ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt, List *relationList,
CitusVacuumParams vacuumParams);
static void ExecuteUnqualifiedVacuumTasks(VacuumStmt *vacuumStmt,
CitusVacuumParams vacuumParams);
static CitusVacuumParams VacuumStmtParams(VacuumStmt *vacstmt);
static List * VacuumRelationIdList(VacuumStmt *vacuumStmt, CitusVacuumParams
vacuumParams);
static List * VacuumRelationList(VacuumStmt *vacuumStmt, CitusVacuumParams vacuumParams);
/*
* PostprocessVacuumStmt processes vacuum statements that may need propagation to
@ -97,7 +103,7 @@ PostprocessVacuumStmt(Node *node, const char *vacuumCommand)
* when no table is specified propagate the command as it is;
* otherwise, only propagate when there is at least 1 citus table
*/
List *relationIdList = VacuumRelationIdList(vacuumStmt, vacuumParams);
List *vacuumRelationList = VacuumRelationList(vacuumStmt, vacuumParams);
if (list_length(vacuumStmt->rels) == 0)
{
@ -105,11 +111,11 @@ PostprocessVacuumStmt(Node *node, const char *vacuumCommand)
ExecuteUnqualifiedVacuumTasks(vacuumStmt, vacuumParams);
}
else if (IsDistributedVacuumStmt(relationIdList))
else if (IsDistributedVacuumStmt(vacuumRelationList))
{
/* there is at least 1 citus table specified */
ExecuteVacuumOnDistributedTables(vacuumStmt, relationIdList,
ExecuteVacuumOnDistributedTables(vacuumStmt, vacuumRelationList,
vacuumParams);
}
@ -120,39 +126,58 @@ PostprocessVacuumStmt(Node *node, const char *vacuumCommand)
/*
* VacuumRelationIdList returns the oid of the relations in the given vacuum statement.
* VacuumRelationList returns the list of relations in the given vacuum statement,
* along with their resolved Oids (if they can be locked).
*/
static List *
VacuumRelationIdList(VacuumStmt *vacuumStmt, CitusVacuumParams vacuumParams)
VacuumRelationList(VacuumStmt *vacuumStmt, CitusVacuumParams vacuumParams)
{
LOCKMODE lockMode = (vacuumParams.options & VACOPT_FULL) ? AccessExclusiveLock :
ShareUpdateExclusiveLock;
bool skipLocked = (vacuumParams.options & VACOPT_SKIP_LOCKED);
List *vacuumRelationList = ExtractVacuumTargetRels(vacuumStmt);
List *relationList = NIL;
List *relationIdList = NIL;
RangeVar *vacuumRelation = NULL;
foreach_declared_ptr(vacuumRelation, vacuumRelationList)
VacuumRelation *vacuumRelation = NULL;
foreach_declared_ptr(vacuumRelation, vacuumStmt->rels)
{
Oid relationId = InvalidOid;
/*
* If skip_locked option is enabled, we are skipping that relation
* if the lock for it is currently not available; else, we get the lock.
* if the lock for it is currently not available; otherwise, we get the lock.
*/
Oid relationId = RangeVarGetRelidExtended(vacuumRelation,
if (vacuumRelation->relation)
{
relationId = RangeVarGetRelidExtended(vacuumRelation->relation,
lockMode,
skipLocked ? RVR_SKIP_LOCKED : 0, NULL,
NULL);
}
else if (OidIsValid(vacuumRelation->oid))
{
/* fall back to the Oid directly when provided */
if (!skipLocked || ConditionalLockRelationOid(vacuumRelation->oid, lockMode))
{
if (!skipLocked)
{
LockRelationOid(vacuumRelation->oid, lockMode);
}
relationId = vacuumRelation->oid;
}
}
if (OidIsValid(relationId))
{
relationIdList = lappend_oid(relationIdList, relationId);
CitusVacuumRelation *relation = palloc(sizeof(CitusVacuumRelation));
relation->vacuumRelation = vacuumRelation;
relation->relationId = relationId;
relationList = lappend(relationList, relation);
}
}
return relationIdList;
return relationList;
}
@ -161,12 +186,13 @@ VacuumRelationIdList(VacuumStmt *vacuumStmt, CitusVacuumParams vacuumParams)
* otherwise, it returns false.
*/
static bool
IsDistributedVacuumStmt(List *vacuumRelationIdList)
IsDistributedVacuumStmt(List *vacuumRelationList)
{
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, vacuumRelationIdList)
CitusVacuumRelation *vacuumRelation = NULL;
foreach_declared_ptr(vacuumRelation, vacuumRelationList)
{
if (OidIsValid(relationId) && IsCitusTable(relationId))
if (OidIsValid(vacuumRelation->relationId) &&
IsCitusTable(vacuumRelation->relationId))
{
return true;
}
@ -181,24 +207,31 @@ IsDistributedVacuumStmt(List *vacuumRelationIdList)
* if they are citus tables.
*/
static void
ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt, List *relationIdList,
ExecuteVacuumOnDistributedTables(VacuumStmt *vacuumStmt, List *relationList,
CitusVacuumParams vacuumParams)
{
int relationIndex = 0;
Oid relationId = InvalidOid;
foreach_declared_oid(relationId, relationIdList)
CitusVacuumRelation *vacuumRelationEntry = NULL;
foreach_declared_ptr(vacuumRelationEntry, relationList)
{
Oid relationId = vacuumRelationEntry->relationId;
VacuumRelation *vacuumRelation = vacuumRelationEntry->vacuumRelation;
RangeVar *relation = vacuumRelation->relation;
if (relation != NULL && !relation->inh)
{
/* ONLY specified, so don't recurse to shard placements */
continue;
}
if (IsCitusTable(relationId))
{
List *vacuumColumnList = VacuumColumnList(vacuumStmt, relationIndex);
List *vacuumColumnList = vacuumRelation->va_cols;
List *taskList = VacuumTaskList(relationId, vacuumParams, vacuumColumnList);
/* local execution is not implemented for VACUUM commands */
bool localExecutionSupported = false;
ExecuteUtilityTaskList(taskList, localExecutionSupported);
}
relationIndex++;
}
}
@ -484,39 +517,6 @@ DeparseVacuumColumnNames(List *columnNameList)
}
/*
* VacuumColumnList returns list of columns from relation
* in the vacuum statement at specified relationIndex.
*/
static List *
VacuumColumnList(VacuumStmt *vacuumStmt, int relationIndex)
{
VacuumRelation *vacuumRelation = (VacuumRelation *) list_nth(vacuumStmt->rels,
relationIndex);
return vacuumRelation->va_cols;
}
/*
* ExtractVacuumTargetRels returns list of target
* relations from vacuum statement.
*/
static List *
ExtractVacuumTargetRels(VacuumStmt *vacuumStmt)
{
List *vacuumList = NIL;
VacuumRelation *vacuumRelation = NULL;
foreach_declared_ptr(vacuumRelation, vacuumStmt->rels)
{
vacuumList = lappend(vacuumList, vacuumRelation->relation);
}
return vacuumList;
}
/*
* VacuumStmtParams returns a CitusVacuumParams based on the supplied VacuumStmt.
*/

View File

@ -471,6 +471,13 @@ pg_get_tableschemadef_string(Oid tableRelationId, IncludeSequenceDefaults
appendStringInfo(&buffer, " GENERATED ALWAYS AS (%s) STORED",
defaultString);
}
#if PG_VERSION_NUM >= PG_VERSION_18
else if (attributeForm->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)
{
appendStringInfo(&buffer, " GENERATED ALWAYS AS (%s) VIRTUAL",
defaultString);
}
#endif
else
{
Oid seqOid = GetSequenceOid(tableRelationId, defaultValue->adnum);
@ -547,6 +554,13 @@ pg_get_tableschemadef_string(Oid tableRelationId, IncludeSequenceDefaults
appendStringInfoString(&buffer, "(");
appendStringInfoString(&buffer, checkString);
appendStringInfoString(&buffer, ")");
#if PG_VERSION_NUM >= PG_VERSION_18
if (!checkConstraint->ccenforced)
{
appendStringInfoString(&buffer, " NOT ENFORCED");
}
#endif
}
/* close create table's outer parentheses */
@ -1837,6 +1851,62 @@ ensure_update_targetlist_in_param_order(List *targetList)
}
/*
* isSubsRef checks if a given node is a SubscriptingRef, or reaches one
* through an implicit coercion.
*/
static bool
isSubsRef(Node *node)
{
if (node == NULL)
{
return false;
}
if (IsA(node, CoerceToDomain))
{
CoerceToDomain *coerceToDomain = (CoerceToDomain *) node;
if (coerceToDomain->coercionformat != COERCE_IMPLICIT_CAST)
{
/* not an implicit coercion, cannot reach to a SubscriptingRef */
return false;
}
node = (Node *) coerceToDomain->arg;
}
return (IsA(node, SubscriptingRef));
}
/*
* checkTlistForSubsRef - checks if any target entry in the list contains a
* SubscriptingRef, either directly or through an implicit coercion. Used by
* ExpandMergedSubscriptingRefEntries() to identify whether any target entries
* need to be expanded - if not, the original target list is preserved.
*/
static bool
checkTlistForSubsRef(List *targetEntryList)
{
ListCell *tgtCell = NULL;
foreach(tgtCell, targetEntryList)
{
TargetEntry *targetEntry = (TargetEntry *) lfirst(tgtCell);
Expr *expr = targetEntry->expr;
if (isSubsRef((Node *) expr))
{
return true;
}
}
return false;
}
/*
* ExpandMergedSubscriptingRefEntries takes a list of target entries and expands
* each one that references a SubscriptingRef node that indicates multiple (field)
@ -1848,6 +1918,12 @@ ExpandMergedSubscriptingRefEntries(List *targetEntryList)
List *newTargetEntryList = NIL;
ListCell *tgtCell = NULL;
if (!checkTlistForSubsRef(targetEntryList))
{
/* No subscripting refs found, return original list */
return targetEntryList;
}
foreach(tgtCell, targetEntryList)
{
TargetEntry *targetEntry = (TargetEntry *) lfirst(tgtCell);

View File

@ -649,13 +649,18 @@ AppendAlterTableCmdAddColumn(StringInfo buf, AlterTableCmd *alterTableCmd,
}
else if (constraint->contype == CONSTR_GENERATED)
{
char attgenerated = 's';
appendStringInfo(buf, " GENERATED %s AS (%s) STORED",
char attgenerated = ATTRIBUTE_GENERATED_STORED;
#if PG_VERSION_NUM >= PG_VERSION_18
attgenerated = constraint->generated_kind;
#endif
appendStringInfo(buf, " GENERATED %s AS (%s) %s",
GeneratedWhenStr(constraint->generated_when),
DeparseRawExprForColumnDefault(relationId, typeOid, typmod,
columnDefinition->colname,
attgenerated,
constraint->raw_expr));
constraint->raw_expr),
(attgenerated == ATTRIBUTE_GENERATED_STORED ? "STORED" :
"VIRTUAL"));
}
else if (constraint->contype == CONSTR_CHECK ||
constraint->contype == CONSTR_PRIMARY ||

View File

@ -34,7 +34,14 @@ QualifyCreateStatisticsStmt(Node *node)
{
CreateStatsStmt *stmt = castNode(CreateStatsStmt, node);
RangeVar *relation = (RangeVar *) linitial(stmt->relations);
Node *relationNode = (Node *) linitial(stmt->relations);
if (!IsA(relationNode, RangeVar))
{
return;
}
RangeVar *relation = (RangeVar *) relationNode;
if (relation->schemaname == NULL)
{

View File

@ -510,6 +510,12 @@ static void get_json_table_nested_columns(TableFunc *tf, JsonTablePlan *plan,
deparse_context *context,
bool showimplicit,
bool needcomma);
static void map_var_through_join_alias(deparse_namespace *dpns, Var *v);
static Var * unwrap_simple_var(Node *expr);
static bool dpns_has_named_join(const deparse_namespace *dpns);
static inline bool var_matches_base(const Var *v, Index want_varno,
                                    AttrNumber want_attno);
#define only_marker(rte) ((rte)->inh ? "" : "ONLY ")
@ -3804,6 +3810,8 @@ get_update_query_targetlist_def(Query *query, List *targetList,
SubLink *cur_ma_sublink;
List *ma_sublinks;
targetList = ExpandMergedSubscriptingRefEntries(targetList);
/*
* Prepare to deal with MULTIEXPR assignments: collect the source SubLinks
* into a list. We expect them to appear, in ID order, in resjunk tlist
@ -3827,6 +3835,8 @@ get_update_query_targetlist_def(Query *query, List *targetList,
}
}
}
ensure_update_targetlist_in_param_order(targetList);
}
next_ma_cell = list_head(ma_sublinks);
cur_ma_sublink = NULL;
@ -4572,6 +4582,103 @@ get_variable(Var *var, int levelsup, bool istoplevel, deparse_context *context)
return attname;
}
/* Any named join with joinaliasvars hides its inner aliases. */
static inline bool
dpns_has_named_join(const deparse_namespace *dpns)
{
if (!dpns || dpns->rtable == NIL)
return false;
ListCell *lc;
foreach (lc, dpns->rtable)
{
RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);
if (rte && rte->rtekind == RTE_JOIN &&
rte->alias != NULL && rte->joinaliasvars != NIL)
return true;
}
return false;
}
/* Unwrap trivial wrappers around a Var; return Var* or NULL. */
static Var *
unwrap_simple_var(Node *expr)
{
for (;;)
{
if (expr == NULL)
return NULL;
if (IsA(expr, Var))
return (Var *) expr;
if (IsA(expr, RelabelType))
{ expr = (Node *) ((RelabelType *) expr)->arg; continue; }
if (IsA(expr, CoerceToDomain))
{ expr = (Node *) ((CoerceToDomain *) expr)->arg; continue; }
if (IsA(expr, CollateExpr))
{ expr = (Node *) ((CollateExpr *) expr)->arg; continue; }
/* Not a simple Var */
return NULL;
}
}
/* Base identity (canonical/synonym) match against a wanted (varno,attno) pair. */
static inline bool
var_matches_base(const Var *v, Index want_varno, AttrNumber want_attno)
{
if (v->varlevelsup != 0)
return false;
if (v->varno == want_varno && v->varattno == want_attno)
return true;
if (v->varnosyn > 0 && v->varattnosyn > 0 &&
v->varnosyn == want_varno && v->varattnosyn == want_attno)
return true;
return false;
}
/* Mutate v in place: if v maps to a named JOIN's column, set its
 * varnosyn/varattnosyn fields. Query deparse only (dpns->plan == NULL). */
static void
map_var_through_join_alias(deparse_namespace *dpns, Var *v)
{
if (!dpns || dpns->plan != NULL || !v ||
v->varlevelsup != 0 || v->varattno <= 0)
return;
int rti = 0;
ListCell *lc;
foreach (lc, dpns->rtable)
{
rti++;
RangeTblEntry *jrte = (RangeTblEntry *) lfirst(lc);
if (!jrte || jrte->rtekind != RTE_JOIN ||
jrte->alias == NULL || jrte->joinaliasvars == NIL)
continue;
AttrNumber jattno = 0;
ListCell *vlc;
foreach (vlc, jrte->joinaliasvars)
{
jattno++;
Var *aliasVar = unwrap_simple_var((Node *) lfirst(vlc));
if (!aliasVar)
continue;
if (var_matches_base(aliasVar, v->varno, v->varattno))
{
v->varnosyn = (Index) rti;
v->varattnosyn = jattno;
return;
}
}
}
}
/*
* Deparse a Var which references OUTER_VAR, INNER_VAR, or INDEX_VAR. This
* routine is actually a callback for get_special_varno, which handles finding
@ -6586,6 +6693,11 @@ get_rule_expr(Node *node, deparse_context *context,
Assert(list_length(rowexpr->args) <= tupdesc->natts);
}
/* Precompute deparse ns and whether we even need to try mapping */
deparse_namespace *dpns = (context->namespaces != NIL)
? (deparse_namespace *) linitial(context->namespaces) : NULL;
bool try_map = (dpns && dpns->plan == NULL && dpns_has_named_join(dpns));
/*
* SQL99 allows "ROW" to be omitted when there is more than
* one column, but for simplicity we always print it.
@ -6601,6 +6713,17 @@ get_rule_expr(Node *node, deparse_context *context,
!TupleDescAttr(tupdesc, i)->attisdropped)
{
appendStringInfoString(buf, sep);
/* PG18: if element is a simple base Var, set its SYN to the JOIN alias */
if (try_map)
{
Var *v = unwrap_simple_var(e);
if (v)
{
map_var_through_join_alias(dpns, v);
}
}
/* Whole-row Vars need special treatment here */
get_rule_expr_toplevel(e, context, true);
sep = ", ";

View File

@ -801,7 +801,7 @@ DistributedSequenceList(void)
int
GetForceDelegationAttrIndexInPgDistObject(TupleDesc tupleDesc)
{
return TupleDescSize(tupleDesc) == Natts_pg_dist_object
return tupleDesc->natts == Natts_pg_dist_object
? (Anum_pg_dist_object_force_delegation - 1)
: tupleDesc->natts - 1;
}

View File

@ -4486,7 +4486,7 @@ UnblockDependingBackgroundTasks(BackgroundTask *task)
int
GetAutoConvertedAttrIndexInPgDistPartition(TupleDesc tupleDesc)
{
return TupleDescSize(tupleDesc) == Natts_pg_dist_partition
return tupleDesc->natts == Natts_pg_dist_partition
? (Anum_pg_dist_partition_autoconverted - 1)
: tupleDesc->natts - 1;
}
@ -4506,7 +4506,7 @@ GetAutoConvertedAttrIndexInPgDistPartition(TupleDesc tupleDesc)
int
GetNodesInvolvedAttrIndexInPgDistBackgroundTask(TupleDesc tupleDesc)
{
return TupleDescSize(tupleDesc) == Natts_pg_dist_background_task
return tupleDesc->natts == Natts_pg_dist_background_task
? (Anum_pg_dist_background_task_nodes_involved - 1)
: tupleDesc->natts - 1;
}

View File

@ -115,6 +115,8 @@ static bool NodeIsLocal(WorkerNode *worker);
static void SetLockTimeoutLocally(int32 lock_cooldown);
static void UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort,
bool localOnly);
static int GetNodePrimaryNodeIdAttrIndexInPgDistNode(TupleDesc tupleDesc);
static int GetNodeIsCloneAttrIndexInPgDistNode(TupleDesc tupleDesc);
static bool UnsetMetadataSyncedForAllWorkers(void);
static char * GetMetadataSyncCommandToSetNodeColumn(WorkerNode *workerNode,
int columnIndex,
@ -1196,6 +1198,11 @@ ActivateNodeList(MetadataSyncContext *context)
void
ActivateCloneNodeAsPrimary(WorkerNode *workerNode)
{
Relation pgDistNode = table_open(DistNodeRelationId(), AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistNode);
TupleDesc copiedTupleDescriptor = CreateTupleDescCopy(tupleDescriptor);
table_close(pgDistNode, AccessShareLock);
/*
* Set the node as primary and active.
*/
@ -1203,9 +1210,13 @@ ActivateCloneNodeAsPrimary(WorkerNode *workerNode)
ObjectIdGetDatum(PrimaryNodeRoleId()));
SetWorkerColumnLocalOnly(workerNode, Anum_pg_dist_node_isactive,
BoolGetDatum(true));
SetWorkerColumnLocalOnly(workerNode, Anum_pg_dist_node_nodeisclone,
SetWorkerColumnLocalOnly(workerNode,
GetNodeIsCloneAttrIndexInPgDistNode(copiedTupleDescriptor) +
1,
BoolGetDatum(false));
SetWorkerColumnLocalOnly(workerNode, Anum_pg_dist_node_nodeprimarynodeid,
SetWorkerColumnLocalOnly(workerNode,
GetNodePrimaryNodeIdAttrIndexInPgDistNode(
copiedTupleDescriptor) + 1,
Int32GetDatum(0));
SetWorkerColumnLocalOnly(workerNode, Anum_pg_dist_node_hasmetadata,
BoolGetDatum(true));
@ -1779,14 +1790,14 @@ UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort, bool loca
{
const bool indexOK = true;
ScanKeyData scanKey[1];
Datum values[Natts_pg_dist_node];
bool isnull[Natts_pg_dist_node];
bool replace[Natts_pg_dist_node];
Relation pgDistNode = table_open(DistNodeRelationId(), RowExclusiveLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistNode);
ScanKeyData scanKey[1];
Datum *values = palloc0(tupleDescriptor->natts * sizeof(Datum));
bool *isnull = palloc0(tupleDescriptor->natts * sizeof(bool));
bool *replace = palloc0(tupleDescriptor->natts * sizeof(bool));
ScanKeyInit(&scanKey[0], Anum_pg_dist_node_nodeid,
BTEqualStrategyNumber, F_INT4EQ, Int32GetDatum(nodeId));
@ -1801,8 +1812,6 @@ UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort, bool loca
newNodeName, newNodePort)));
}
memset(replace, 0, sizeof(replace));
values[Anum_pg_dist_node_nodeport - 1] = Int32GetDatum(newNodePort);
isnull[Anum_pg_dist_node_nodeport - 1] = false;
replace[Anum_pg_dist_node_nodeport - 1] = true;
@ -1835,6 +1844,10 @@ UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort, bool loca
systable_endscan(scanDescriptor);
table_close(pgDistNode, NoLock);
pfree(values);
pfree(isnull);
pfree(replace);
}
@ -2105,11 +2118,10 @@ citus_internal_mark_node_not_synced(PG_FUNCTION_ARGS)
Relation pgDistNode = table_open(DistNodeRelationId(), AccessShareLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistNode);
Datum values[Natts_pg_dist_node];
bool isnull[Natts_pg_dist_node];
bool replace[Natts_pg_dist_node];
Datum *values = palloc0(tupleDescriptor->natts * sizeof(Datum));
bool *isnull = palloc0(tupleDescriptor->natts * sizeof(bool));
bool *replace = palloc0(tupleDescriptor->natts * sizeof(bool));
memset(replace, 0, sizeof(replace));
values[Anum_pg_dist_node_metadatasynced - 1] = DatumGetBool(false);
isnull[Anum_pg_dist_node_metadatasynced - 1] = false;
replace[Anum_pg_dist_node_metadatasynced - 1] = true;
@ -2123,6 +2135,10 @@ citus_internal_mark_node_not_synced(PG_FUNCTION_ARGS)
table_close(pgDistNode, NoLock);
pfree(values);
pfree(isnull);
pfree(replace);
PG_RETURN_VOID();
}
@ -2831,9 +2847,9 @@ SetWorkerColumnLocalOnly(WorkerNode *workerNode, int columnIndex, Datum value)
TupleDesc tupleDescriptor = RelationGetDescr(pgDistNode);
HeapTuple heapTuple = GetNodeTuple(workerNode->workerName, workerNode->workerPort);
Datum values[Natts_pg_dist_node];
bool isnull[Natts_pg_dist_node];
bool replace[Natts_pg_dist_node];
Datum *values = palloc0(tupleDescriptor->natts * sizeof(Datum));
bool *isnull = palloc0(tupleDescriptor->natts * sizeof(bool));
bool *replace = palloc0(tupleDescriptor->natts * sizeof(bool));
if (heapTuple == NULL)
{
@ -2841,7 +2857,6 @@ SetWorkerColumnLocalOnly(WorkerNode *workerNode, int columnIndex, Datum value)
workerNode->workerName, workerNode->workerPort)));
}
memset(replace, 0, sizeof(replace));
values[columnIndex - 1] = value;
isnull[columnIndex - 1] = false;
replace[columnIndex - 1] = true;
@ -2857,6 +2872,10 @@ SetWorkerColumnLocalOnly(WorkerNode *workerNode, int columnIndex, Datum value)
table_close(pgDistNode, NoLock);
pfree(values);
pfree(isnull);
pfree(replace);
return newWorkerNode;
}
@ -3241,16 +3260,15 @@ InsertPlaceholderCoordinatorRecord(void)
static void
InsertNodeRow(int nodeid, char *nodeName, int32 nodePort, NodeMetadata *nodeMetadata)
{
Datum values[Natts_pg_dist_node];
bool isNulls[Natts_pg_dist_node];
Relation pgDistNode = table_open(DistNodeRelationId(), RowExclusiveLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistNode);
Datum *values = palloc0(tupleDescriptor->natts * sizeof(Datum));
bool *isNulls = palloc0(tupleDescriptor->natts * sizeof(bool));
Datum nodeClusterStringDatum = CStringGetDatum(nodeMetadata->nodeCluster);
Datum nodeClusterNameDatum = DirectFunctionCall1(namein, nodeClusterStringDatum);
/* form new shard tuple */
memset(values, 0, sizeof(values));
memset(isNulls, false, sizeof(isNulls));
values[Anum_pg_dist_node_nodeid - 1] = UInt32GetDatum(nodeid);
values[Anum_pg_dist_node_groupid - 1] = Int32GetDatum(nodeMetadata->groupId);
values[Anum_pg_dist_node_nodename - 1] = CStringGetTextDatum(nodeName);
@ -3264,17 +3282,15 @@ InsertNodeRow(int nodeid, char *nodeName, int32 nodePort, NodeMetadata *nodeMeta
values[Anum_pg_dist_node_nodecluster - 1] = nodeClusterNameDatum;
values[Anum_pg_dist_node_shouldhaveshards - 1] = BoolGetDatum(
nodeMetadata->shouldHaveShards);
values[Anum_pg_dist_node_nodeisclone - 1] = BoolGetDatum(
nodeMetadata->nodeisclone);
values[Anum_pg_dist_node_nodeprimarynodeid - 1] = Int32GetDatum(
nodeMetadata->nodeprimarynodeid);
Relation pgDistNode = table_open(DistNodeRelationId(), RowExclusiveLock);
TupleDesc tupleDescriptor = RelationGetDescr(pgDistNode);
values[GetNodeIsCloneAttrIndexInPgDistNode(tupleDescriptor)] =
BoolGetDatum(nodeMetadata->nodeisclone);
values[GetNodePrimaryNodeIdAttrIndexInPgDistNode(tupleDescriptor)] =
Int32GetDatum(nodeMetadata->nodeprimarynodeid);
HeapTuple heapTuple = heap_form_tuple(tupleDescriptor, values, isNulls);
CATALOG_INSERT_WITH_SNAPSHOT(pgDistNode, heapTuple);
PushActiveSnapshot(GetTransactionSnapshot());
CatalogTupleInsert(pgDistNode, heapTuple);
PopActiveSnapshot();
CitusInvalidateRelcacheByRelid(DistNodeRelationId());
@ -3283,6 +3299,9 @@ InsertNodeRow(int nodeid, char *nodeName, int32 nodePort, NodeMetadata *nodeMeta
/* close relation */
table_close(pgDistNode, NoLock);
pfree(values);
pfree(isNulls);
}
@ -3397,43 +3416,30 @@ TupleToWorkerNode(Relation pgDistNode, TupleDesc tupleDescriptor, HeapTuple heap
1]);
/*
* Attributes above this line are guaranteed to be present at the
* exact defined attribute number. At least until now. If you are dropping or
* adding any of the above columns, consider adjusting the code above.
* The nodecluster, nodeisclone and nodeprimarynodeid columns can be missing. In case
* of extension creation/upgrade, the master_initialize_node_metadata function is
* called before the nodecluster column is added to the pg_dist_node table.
*/
Oid pgDistNodeRelId = RelationGetRelid(pgDistNode);
AttrNumber nodeClusterAttno = get_attnum(pgDistNodeRelId, "nodecluster");
if (nodeClusterAttno > 0 &&
!TupleDescAttr(tupleDescriptor, nodeClusterAttno - 1)->attisdropped &&
!isNullArray[nodeClusterAttno - 1])
if (!isNullArray[Anum_pg_dist_node_nodecluster - 1])
{
Name nodeClusterName =
DatumGetName(datumArray[nodeClusterAttno - 1]);
DatumGetName(datumArray[Anum_pg_dist_node_nodecluster - 1]);
char *nodeClusterString = NameStr(*nodeClusterName);
strlcpy(workerNode->nodeCluster, nodeClusterString, NAMEDATALEN);
}
if (nAtts > Anum_pg_dist_node_nodeisclone)
int nodeIsCloneIdx = GetNodeIsCloneAttrIndexInPgDistNode(tupleDescriptor);
int nodePrimaryNodeIdIdx = GetNodePrimaryNodeIdAttrIndexInPgDistNode(tupleDescriptor);
if (!isNullArray[nodeIsCloneIdx])
{
AttrNumber nodeIsCloneAttno = get_attnum(pgDistNodeRelId, "nodeisclone");
if (nodeIsCloneAttno > 0 &&
!TupleDescAttr(tupleDescriptor, nodeIsCloneAttno - 1)->attisdropped &&
!isNullArray[nodeIsCloneAttno - 1])
{
workerNode->nodeisclone = DatumGetBool(datumArray[nodeIsCloneAttno - 1]);
}
AttrNumber nodePrimaryNodeIdAttno = get_attnum(pgDistNodeRelId,
"nodeprimarynodeid");
if (nodePrimaryNodeIdAttno > 0 &&
!TupleDescAttr(tupleDescriptor, nodePrimaryNodeIdAttno - 1)->attisdropped &&
!isNullArray[nodePrimaryNodeIdAttno - 1])
{
workerNode->nodeprimarynodeid = DatumGetInt32(datumArray[
nodePrimaryNodeIdAttno - 1])
;
}
workerNode->nodeisclone = DatumGetBool(datumArray[nodeIsCloneIdx]);
}
if (!isNullArray[nodePrimaryNodeIdIdx])
{
workerNode->nodeprimarynodeid = DatumGetInt32(datumArray[nodePrimaryNodeIdIdx]);
}
pfree(datumArray);
@ -3443,6 +3449,48 @@ TupleToWorkerNode(Relation pgDistNode, TupleDesc tupleDescriptor, HeapTuple heap
}
/*
* GetNodePrimaryNodeIdAttrIndexInPgDistNode returns attrnum for nodeprimarynodeid attr.
*
* The nodeprimarynodeid attr was added to the pg_dist_node table via an ALTER
* operation after the version where Citus started supporting downgrades, and
* it's one of the two columns that we've introduced to pg_dist_node since then.
*
* In case of a downgrade + upgrade, tupleDesc->natts becomes greater than
* Natts_pg_dist_node; when this happens, we know that the attrnum of
* nodeprimarynodeid is no longer Anum_pg_dist_node_nodeprimarynodeid but
* tupleDesc->natts - 1.
*/
static int
GetNodePrimaryNodeIdAttrIndexInPgDistNode(TupleDesc tupleDesc)
{
return tupleDesc->natts == Natts_pg_dist_node
? (Anum_pg_dist_node_nodeprimarynodeid - 1)
: tupleDesc->natts - 1;
}
/*
* GetNodeIsCloneAttrIndexInPgDistNode returns attrnum for nodeisclone attr.
*
* Like GetNodePrimaryNodeIdAttrIndexInPgDistNode(), this performs a similar
* calculation for the nodeisclone attribute, another column added to
* pg_dist_node after we started supporting downgrades.
*
* The only difference is that in the downgrade + upgrade case the attrnum
* for nodeisclone is no longer Anum_pg_dist_node_nodeisclone but
* tupleDesc->natts - 2, because the two columns were added consecutively:
* nodeisclone first, then nodeprimarynodeid.
*/
static int
GetNodeIsCloneAttrIndexInPgDistNode(TupleDesc tupleDesc)
{
return tupleDesc->natts == Natts_pg_dist_node
? (Anum_pg_dist_node_nodeisclone - 1)
: tupleDesc->natts - 2;
}
/*
* StringToDatum transforms a string representation into a Datum.
*/
@ -3519,15 +3567,15 @@ UnsetMetadataSyncedForAllWorkers(void)
updatedAtLeastOne = true;
}
Datum *values = palloc(tupleDescriptor->natts * sizeof(Datum));
bool *isnull = palloc(tupleDescriptor->natts * sizeof(bool));
bool *replace = palloc(tupleDescriptor->natts * sizeof(bool));
while (HeapTupleIsValid(heapTuple))
{
Datum values[Natts_pg_dist_node];
bool isnull[Natts_pg_dist_node];
bool replace[Natts_pg_dist_node];
memset(replace, false, sizeof(replace));
memset(isnull, false, sizeof(isnull));
memset(values, 0, sizeof(values));
memset(values, 0, tupleDescriptor->natts * sizeof(Datum));
memset(isnull, 0, tupleDescriptor->natts * sizeof(bool));
memset(replace, 0, tupleDescriptor->natts * sizeof(bool));
values[Anum_pg_dist_node_metadatasynced - 1] = BoolGetDatum(false);
replace[Anum_pg_dist_node_metadatasynced - 1] = true;
@ -3550,6 +3598,10 @@ UnsetMetadataSyncedForAllWorkers(void)
CatalogCloseIndexes(indstate);
table_close(relation, NoLock);
pfree(values);
pfree(isnull);
pfree(replace);
return updatedAtLeastOne;
}


@ -43,6 +43,8 @@ PG_FUNCTION_INFO_V1(citus_promote_clone_and_rebalance);
Datum
citus_promote_clone_and_rebalance(PG_FUNCTION_ARGS)
{
CheckCitusVersion(ERROR);
/* Ensure superuser and coordinator */
EnsureSuperUser();
EnsureCoordinator();


@ -13,6 +13,7 @@
#include "postgres.h"
#include "funcapi.h"
#include "miscadmin.h"
#include "access/htup_details.h"
#include "access/xact.h"
@ -124,8 +125,10 @@ static void AdjustReadIntermediateResultsCostInternal(RelOptInfo *relOptInfo,
Const *resultFormatConst);
static List * OuterPlanParamsList(PlannerInfo *root);
static List * CopyPlanParamList(List *originalPlanParamList);
static PlannerRestrictionContext * CreateAndPushPlannerRestrictionContext(
FastPathRestrictionContext *fastPathContext);
static void CreateAndPushPlannerRestrictionContext(
DistributedPlanningContext *planContext,
FastPathRestrictionContext *
fastPathContext);
static PlannerRestrictionContext * CurrentPlannerRestrictionContext(void);
static void PopPlannerRestrictionContext(void);
static void ResetPlannerRestrictionContext(
@ -141,10 +144,12 @@ static RouterPlanType GetRouterPlanType(Query *query,
bool hasUnresolvedParams);
static void ConcatenateRTablesAndPerminfos(PlannedStmt *mainPlan,
PlannedStmt *concatPlan);
static bool CheckPostPlanDistribution(bool isDistributedQuery,
Query *origQuery,
List *rangeTableList,
Query *plannedQuery);
static bool CheckPostPlanDistribution(DistributedPlanningContext *planContext,
bool isDistributedQuery,
List *rangeTableList);
#if PG_VERSION_NUM >= PG_VERSION_18
static int DisableSelfJoinElimination(void);
#endif
/* Distributed planner hook */
PlannedStmt *
@ -156,6 +161,9 @@ distributed_planner(Query *parse,
bool needsDistributedPlanning = false;
bool fastPathRouterQuery = false;
FastPathRestrictionContext fastPathContext = { 0 };
#if PG_VERSION_NUM >= PG_VERSION_18
int saveNestLevel = -1;
#endif
List *rangeTableList = ExtractRangeTableEntryList(parse);
@ -219,6 +227,10 @@ distributed_planner(Query *parse,
bool setPartitionedTablesInherited = false;
AdjustPartitioningForDistributedPlanning(rangeTableList,
setPartitionedTablesInherited);
#if PG_VERSION_NUM >= PG_VERSION_18
saveNestLevel = DisableSelfJoinElimination();
#endif
}
}
@ -235,9 +247,9 @@ distributed_planner(Query *parse,
*/
HideCitusDependentObjectsOnQueriesOfPgMetaTables((Node *) parse, NULL);
/* create a restriction context and put it at the end of context list */
planContext.plannerRestrictionContext = CreateAndPushPlannerRestrictionContext(
&fastPathContext);
/* create a restriction context and put it at the end of our plan context's context list */
CreateAndPushPlannerRestrictionContext(&planContext,
&fastPathContext);
/*
* We keep track of how many times we've recursed into the planner, primarily
@ -265,10 +277,19 @@ distributed_planner(Query *parse,
planContext.plan = standard_planner(planContext.query, NULL,
planContext.cursorOptions,
planContext.boundParams);
needsDistributedPlanning = CheckPostPlanDistribution(needsDistributedPlanning,
planContext.originalQuery,
rangeTableList,
planContext.query);
#if PG_VERSION_NUM >= PG_VERSION_18
if (needsDistributedPlanning)
{
Assert(saveNestLevel > 0);
AtEOXact_GUC(true, saveNestLevel);
}
/* Pop the plan context from the current restriction context */
planContext.plannerRestrictionContext->planContext = NULL;
#endif
needsDistributedPlanning = CheckPostPlanDistribution(&planContext,
needsDistributedPlanning,
rangeTableList);
if (needsDistributedPlanning)
{
@ -2017,6 +2038,32 @@ multi_relation_restriction_hook(PlannerInfo *root, RelOptInfo *relOptInfo,
lappend(relationRestrictionContext->relationRestrictionList, relationRestriction);
MemoryContextSwitchTo(oldMemoryContext);
#if PG_VERSION_NUM >= PG_VERSION_18
if (root->query_level == 1 && plannerRestrictionContext->planContext != NULL)
{
/* We're at the top query with a distributed context; see if Postgres
* has changed the query tree we passed to it in distributed_planner().
* This check was necessitated by PG commit 1e4351a, because in it the
* planner modifies a copy of the passed-in query tree, with the consequence
* that changes are not reflected back to the caller of standard_planner().
*/
Query *query = plannerRestrictionContext->planContext->query;
if (root->parse != query)
{
/*
* The Postgres planner has reconstructed the query tree, so the query
* tree our distributed context passed in to standard_planner() is
* updated to track the new query tree.
*/
ereport(DEBUG4, (errmsg(
"Detected query reconstruction by Postgres planner, updating "
"planContext to track it")));
plannerRestrictionContext->planContext->query = root->parse;
}
}
#endif
}
@ -2394,11 +2441,13 @@ CopyPlanParamList(List *originalPlanParamList)
* context with an empty relation restriction context and an empty join and
* a copy of the given fast path restriction context (if present). Finally,
* the planner restriction context is inserted to the beginning of the
* global plannerRestrictionContextList and it is returned.
* global plannerRestrictionContextList and, in PG18+, given a reference to
* its distributed plan context.
*/
static PlannerRestrictionContext *
CreateAndPushPlannerRestrictionContext(
FastPathRestrictionContext *fastPathRestrictionContext)
static void
CreateAndPushPlannerRestrictionContext(DistributedPlanningContext *planContext,
FastPathRestrictionContext *
fastPathRestrictionContext)
{
PlannerRestrictionContext *plannerRestrictionContext =
palloc0(sizeof(PlannerRestrictionContext));
@ -2435,7 +2484,11 @@ CreateAndPushPlannerRestrictionContext(
plannerRestrictionContextList = lcons(plannerRestrictionContext,
plannerRestrictionContextList);
return plannerRestrictionContext;
planContext->plannerRestrictionContext = plannerRestrictionContext;
#if PG_VERSION_NUM >= PG_VERSION_18
plannerRestrictionContext->planContext = planContext;
#endif
}
@ -2496,6 +2549,18 @@ CurrentPlannerRestrictionContext(void)
static void
PopPlannerRestrictionContext(void)
{
#if PG_VERSION_NUM >= PG_VERSION_18
/*
* PG18+: Clear the restriction context's planContext pointer; this is done
* by distributed_planner() when popping the context, but in case of error
* during standard_planner() we want to clean up here also.
*/
PlannerRestrictionContext *plannerRestrictionContext =
(PlannerRestrictionContext *) linitial(plannerRestrictionContextList);
plannerRestrictionContext->planContext = NULL;
#endif
plannerRestrictionContextList = list_delete_first(plannerRestrictionContextList);
}
@ -2740,12 +2805,13 @@ WarnIfListHasForeignDistributedTable(List *rangeTableList)
static bool
CheckPostPlanDistribution(bool isDistributedQuery,
Query *origQuery, List *rangeTableList,
Query *plannedQuery)
CheckPostPlanDistribution(DistributedPlanningContext *planContext, bool
isDistributedQuery, List *rangeTableList)
{
if (isDistributedQuery)
{
Query *origQuery = planContext->originalQuery;
Query *plannedQuery = planContext->query;
Node *origQuals = origQuery->jointree->quals;
Node *plannedQuals = plannedQuery->jointree->quals;
@ -2764,6 +2830,23 @@ CheckPostPlanDistribution(bool isDistributedQuery,
*/
if (origQuals != NULL && plannedQuals == NULL)
{
bool planHasDistTable = ListContainsDistributedTableRTE(
planContext->plan->rtable, NULL);
/*
* If the Postgres plan has a distributed table, we know for sure that
* the query requires distributed planning.
*/
if (planHasDistTable)
{
return true;
}
/*
* Otherwise, if the query has fewer range table entries after Postgres
* planning, we should re-evaluate the distribution of the query. Postgres
* may have optimized away all Citus tables, per issues 7782, 7783.
*/
List *rtesPostPlan = ExtractRangeTableEntryList(plannedQuery);
if (list_length(rtesPostPlan) < list_length(rangeTableList))
{
@ -2775,3 +2858,27 @@ CheckPostPlanDistribution(bool isDistributedQuery,
return isDistributedQuery;
}
#if PG_VERSION_NUM >= PG_VERSION_18
/*
* DisableSelfJoinElimination is used to prevent self join elimination
* during distributed query planning to ensure shard queries are correctly
* generated. PG18's self join elimination (fc069a3a6) changes the Query
* in a way that can cause problems for queries with a mix of Citus and
* Postgres tables. Self join elimination is still allowed on plain Postgres
* tables only, so the queries that run on shards get the benefit of it.
*/
static int
DisableSelfJoinElimination(void)
{
int NestLevel = NewGUCNestLevel();
set_config_option("enable_self_join_elimination", "off",
(superuser() ? PGC_SUSET : PGC_USERSET), PGC_S_SESSION,
GUC_ACTION_LOCAL, true, 0, false);
return NestLevel;
}
#endif


@ -273,8 +273,16 @@ FastPathRouterQuery(Query *query, FastPathRestrictionContext *fastPathContext)
return true;
}
/* make sure that the only range table in FROM clause */
if (list_length(query->rtable) != 1)
int numFromRels = list_length(query->rtable);
/* make sure that there is only one range table in FROM clause */
if ((numFromRels != 1)
#if PG_VERSION_NUM >= PG_VERSION_18
/* with a PG18+ twist for the GROUP RTE - if present, make sure there are two range tables */
&& (!query->hasGroupRTE || numFromRels != 2)
#endif
)
{
return false;
}


@ -428,11 +428,10 @@ CreateInsertSelectIntoLocalTablePlan(uint64 planId, Query *insertSelectQuery,
ParamListInfo boundParams, bool hasUnresolvedParams,
PlannerRestrictionContext *plannerRestrictionContext)
{
RangeTblEntry *selectRte = ExtractSelectRangeTableEntry(insertSelectQuery);
PrepareInsertSelectForCitusPlanner(insertSelectQuery);
/* get the SELECT query (may have changed after PrepareInsertSelectForCitusPlanner) */
RangeTblEntry *selectRte = ExtractSelectRangeTableEntry(insertSelectQuery);
Query *selectQuery = selectRte->subquery;
bool allowRecursivePlanning = true;
@ -513,6 +512,13 @@ PrepareInsertSelectForCitusPlanner(Query *insertSelectQuery)
bool isWrapped = false;
/*
* PG18 is stricter about GroupRTE/GroupVar. For INSERT SELECT with a GROUP BY,
* flatten the SELECT's targetList and havingQual so Vars point to base RTEs,
* avoiding "unrecognized range table id" errors.
*/
FlattenGroupExprs(selectRte->subquery);
if (selectRte->subquery->setOperations != NULL)
{
/*
@ -1431,11 +1437,6 @@ static DistributedPlan *
CreateNonPushableInsertSelectPlan(uint64 planId, Query *parse, ParamListInfo boundParams)
{
Query *insertSelectQuery = copyObject(parse);
RangeTblEntry *selectRte = ExtractSelectRangeTableEntry(insertSelectQuery);
RangeTblEntry *insertRte = ExtractResultRelationRTEOrError(insertSelectQuery);
Oid targetRelationId = insertRte->relid;
DistributedPlan *distributedPlan = CitusMakeNode(DistributedPlan);
distributedPlan->modLevel = RowModifyLevelForQuery(insertSelectQuery);
@ -1450,6 +1451,7 @@ CreateNonPushableInsertSelectPlan(uint64 planId, Query *parse, ParamListInfo bou
PrepareInsertSelectForCitusPlanner(insertSelectQuery);
/* get the SELECT query (may have changed after PrepareInsertSelectForCitusPlanner) */
RangeTblEntry *selectRte = ExtractSelectRangeTableEntry(insertSelectQuery);
Query *selectQuery = selectRte->subquery;
/*
@ -1472,6 +1474,9 @@ CreateNonPushableInsertSelectPlan(uint64 planId, Query *parse, ParamListInfo bou
PlannedStmt *selectPlan = pg_plan_query(selectQueryCopy, NULL, cursorOptions,
boundParams);
/* decide whether we can repartition the results */
RangeTblEntry *insertRte = ExtractResultRelationRTEOrError(insertSelectQuery);
Oid targetRelationId = insertRte->relid;
bool repartitioned = IsRedistributablePlan(selectPlan->planTree) &&
IsSupportedRedistributionTarget(targetRelationId);


@ -41,6 +41,7 @@
static int SourceResultPartitionColumnIndex(Query *mergeQuery,
List *sourceTargetList,
CitusTableCacheEntry *targetRelation);
static int FindTargetListEntryWithVarExprAttno(List *targetList, AttrNumber varattno);
static Var * ValidateAndReturnVarIfSupported(Node *entryExpr);
static DeferredErrorMessage * DeferErrorIfTargetHasFalseClause(Oid targetRelationId,
PlannerRestrictionContext *
@ -422,10 +423,13 @@ ErrorIfMergeHasUnsupportedTables(Oid targetRelationId, List *rangeTableList)
case RTE_VALUES:
case RTE_JOIN:
case RTE_CTE:
{
/* Skip them as base table(s) will be checked */
continue;
}
#if PG_VERSION_NUM >= PG_VERSION_18
case RTE_GROUP:
#endif
{
/* Skip them as base table(s) will be checked */
continue;
}
/*
* RTE_NAMEDTUPLESTORE is typically used in ephemeral named relations,
@ -628,6 +632,22 @@ MergeQualAndTargetListFunctionsSupported(Oid resultRelationId, Query *query,
}
}
/*
* joinTree->quals, retrieved by GetMergeJoinTree() - either from
* mergeJoinCondition (PG >= 17) or jointree->quals (PG < 17),
* only contains the quals that are present in the "ON (..)" clause. Action
* quals that can be specified for each specific action, as in
* "WHEN <match condition> AND <action quals> THEN <action>", are
* saved into "qual" field of the corresponding action's entry in
* mergeActionList, see
* https://github.com/postgres/postgres/blob/e6da68a6e1d60a037b63a9c9ed36e5ef0a996769/src/backend/parser/parse_merge.c#L285-L293.
*
* For this reason, even though TargetEntryChangesValue() could in principle
* prove from an action's quals that the action cannot change the
* distribution key, it cannot do so here: we don't provide the action quals
* to TargetEntryChangesValue(), only joinTree, which contains just the
* "ON (..)" clause quals.
*/
if (targetEntryDistributionColumn &&
TargetEntryChangesValue(targetEntry, distributionColumn, joinTree))
{
@ -1411,7 +1431,8 @@ SourceResultPartitionColumnIndex(Query *mergeQuery, List *sourceTargetList,
Assert(sourceRepartitionVar);
int sourceResultRepartitionColumnIndex =
DistributionColumnIndex(sourceTargetList, sourceRepartitionVar);
FindTargetListEntryWithVarExprAttno(sourceTargetList,
sourceRepartitionVar->varattno);
if (sourceResultRepartitionColumnIndex == -1)
{
@ -1562,6 +1583,33 @@ FetchAndValidateInsertVarIfExists(Oid targetRelationId, Query *query)
}
/*
* FindTargetListEntryWithVarExprAttno finds the index of the target
* entry whose expr is a Var that points to input varattno.
*
* If no such target entry is found, it returns -1.
*/
static int
FindTargetListEntryWithVarExprAttno(List *targetList, AttrNumber varattno)
{
int targetEntryIndex = 0;
TargetEntry *targetEntry = NULL;
foreach_declared_ptr(targetEntry, targetList)
{
if (IsA(targetEntry->expr, Var) &&
((Var *) targetEntry->expr)->varattno == varattno)
{
return targetEntryIndex;
}
targetEntryIndex++;
}
return -1;
}
/*
* IsLocalTableModification returns true if the table modified is a Postgres table.
* We do not support recursive planning for MERGE yet, so we could have a join


@ -149,13 +149,6 @@ typedef struct ExplainAnalyzeDestination
#if PG_VERSION_NUM >= PG_VERSION_17 && PG_VERSION_NUM < PG_VERSION_18
/*
* Various places within need to convert bytes to kilobytes. Round these up
* to the next whole kilobyte.
* copied from explain.c
*/
#define BYTES_TO_KILOBYTES(b) (((b) + 1023) / 1024)
/* copied from explain.c */
/* Instrumentation data for SERIALIZE option */
typedef struct SerializeMetrics
@ -166,13 +159,7 @@ typedef struct SerializeMetrics
} SerializeMetrics;
/* copied from explain.c */
static bool peek_buffer_usage(ExplainState *es, const BufferUsage *usage);
static void show_buffer_usage(ExplainState *es, const BufferUsage *usage);
static void show_memory_counters(ExplainState *es,
const MemoryContextCounters *mem_counters);
static void ExplainIndentText(ExplainState *es);
static void ExplainPrintSerialize(ExplainState *es,
SerializeMetrics *metrics);
static SerializeMetrics GetSerializationMetrics(DestReceiver *dest);
/*
@ -200,6 +187,23 @@ typedef struct SerializeDestReceiver
} SerializeDestReceiver;
#endif
#if PG_VERSION_NUM >= PG_VERSION_17
/*
* Various places within need to convert bytes to kilobytes. Round these up
* to the next whole kilobyte.
* copied from explain.c
*/
#define BYTES_TO_KILOBYTES(b) (((b) + 1023) / 1024)
/* copied from explain.c */
static bool peek_buffer_usage(ExplainState *es, const BufferUsage *usage);
static void show_buffer_usage(ExplainState *es, const BufferUsage *usage);
static void show_memory_counters(ExplainState *es,
const MemoryContextCounters *mem_counters);
static void ExplainPrintSerialize(ExplainState *es,
SerializeMetrics *metrics);
#endif
/* Explain functions for distributed queries */
static void ExplainSubPlans(DistributedPlan *distributedPlan, ExplainState *es);
@ -2409,7 +2413,7 @@ ExplainWorkerPlan(PlannedStmt *plannedstmt, DistributedSubPlan *subPlan, DestRec
/* Create textual dump of plan tree */
ExplainPrintPlan(es, queryDesc);
#if PG_VERSION_NUM >= PG_VERSION_17 && PG_VERSION_NUM < PG_VERSION_18
#if PG_VERSION_NUM >= PG_VERSION_17
/* Show buffer and/or memory usage in planning */
if (peek_buffer_usage(es, bufusage) || mem_counters)
{
@ -2455,7 +2459,7 @@ ExplainWorkerPlan(PlannedStmt *plannedstmt, DistributedSubPlan *subPlan, DestRec
if (es->costs)
ExplainPrintJITSummary(es, queryDesc);
#if PG_VERSION_NUM >= PG_VERSION_17 && PG_VERSION_NUM < PG_VERSION_18
#if PG_VERSION_NUM >= PG_VERSION_17
if (es->serialize != EXPLAIN_SERIALIZE_NONE)
{
/* the SERIALIZE option requires its own tuple receiver */
@ -2530,6 +2534,50 @@ elapsed_time(instr_time *starttime)
#if PG_VERSION_NUM >= PG_VERSION_17 && PG_VERSION_NUM < PG_VERSION_18
/*
* Indent a text-format line.
*
* We indent by two spaces per indentation level. However, when emitting
* data for a parallel worker there might already be data on the current line
* (cf. ExplainOpenWorker); in that case, don't indent any more.
*
* Copied from explain.c.
*/
static void
ExplainIndentText(ExplainState *es)
{
Assert(es->format == EXPLAIN_FORMAT_TEXT);
if (es->str->len == 0 || es->str->data[es->str->len - 1] == '\n')
appendStringInfoSpaces(es->str, es->indent * 2);
}
/*
* GetSerializationMetrics - collect metrics
*
* We have to be careful here since the receiver could be an IntoRel
* receiver if the subject statement is CREATE TABLE AS. In that
* case, return all-zeroes stats.
*
* Copied from explain.c.
*/
static SerializeMetrics
GetSerializationMetrics(DestReceiver *dest)
{
SerializeMetrics empty;
if (dest->mydest == DestExplainSerialize)
return ((SerializeDestReceiver *) dest)->metrics;
memset(&empty, 0, sizeof(SerializeMetrics));
INSTR_TIME_SET_ZERO(empty.timeSpent);
return empty;
}
#endif
#if PG_VERSION_NUM >= PG_VERSION_17
/*
* Return whether show_buffer_usage would have anything to print, if given
* the same 'usage' data. Note that when the format is anything other than
@ -2747,24 +2795,6 @@ show_buffer_usage(ExplainState *es, const BufferUsage *usage)
}
/*
* Indent a text-format line.
*
* We indent by two spaces per indentation level. However, when emitting
* data for a parallel worker there might already be data on the current line
* (cf. ExplainOpenWorker); in that case, don't indent any more.
*
* Copied from explain.c.
*/
static void
ExplainIndentText(ExplainState *es)
{
Assert(es->format == EXPLAIN_FORMAT_TEXT);
if (es->str->len == 0 || es->str->data[es->str->len - 1] == '\n')
appendStringInfoSpaces(es->str, es->indent * 2);
}
/*
* Show memory usage details.
*
@ -2850,28 +2880,4 @@ ExplainPrintSerialize(ExplainState *es, SerializeMetrics *metrics)
ExplainCloseGroup("Serialization", "Serialization", true, es);
}
/*
* GetSerializationMetrics - collect metrics
*
* We have to be careful here since the receiver could be an IntoRel
* receiver if the subject statement is CREATE TABLE AS. In that
* case, return all-zeroes stats.
*
* Copied from explain.c.
*/
static SerializeMetrics
GetSerializationMetrics(DestReceiver *dest)
{
SerializeMetrics empty;
if (dest->mydest == DestExplainSerialize)
return ((SerializeDestReceiver *) dest)->metrics;
memset(&empty, 0, sizeof(SerializeMetrics));
INSTR_TIME_SET_ZERO(empty.timeSpent);
return empty;
}
#endif


@ -297,8 +297,19 @@ TargetListOnPartitionColumn(Query *query, List *targetEntryList)
bool
FindNodeMatchingCheckFunctionInRangeTableList(List *rtable, CheckNodeFunc checker)
{
int rtWalkFlags = QTW_EXAMINE_RTES_BEFORE;
#if PG_VERSION_NUM >= PG_VERSION_18
/*
* PG18+: Do not descend into subqueries in GROUP BY expressions; they
* have already been visited, as recursive planning is depth-first.
*/
rtWalkFlags |= QTW_IGNORE_GROUPEXPRS;
#endif
return range_table_walker(rtable, FindNodeMatchingCheckFunction, checker,
QTW_EXAMINE_RTES_BEFORE);
rtWalkFlags);
}


@ -2501,11 +2501,16 @@ ErrorIfUnsupportedShardDistribution(Query *query)
currentRelationId);
if (!coPartitionedTables)
{
char *firstRelName = get_rel_name(firstTableRelationId);
char *currentRelName = get_rel_name(currentRelationId);
int compareResult = strcmp(firstRelName, currentRelName);
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot push down this subquery"),
errdetail("%s and %s are not colocated",
get_rel_name(firstTableRelationId),
get_rel_name(currentRelationId))));
(compareResult > 0 ? currentRelName : firstRelName),
(compareResult > 0 ? firstRelName :
currentRelName))));
}
}
}
@ -3173,16 +3178,25 @@ BuildBaseConstraint(Var *column)
/*
* MakeOpExpression builds an operator expression node. This operator expression
* implements the operator clause as defined by the variable and the strategy
* number.
* MakeOpExpressionExtended builds an operator expression node that's of
* the form "Var <op> Expr", where Expr must be either a Const or a Var
* (*1).
*
* This operator expression implements the operator clause as defined by
* the variable and the strategy number.
*/
OpExpr *
MakeOpExpression(Var *variable, int16 strategyNumber)
MakeOpExpressionExtended(Var *leftVar, Expr *rightArg, int16 strategyNumber)
{
Oid typeId = variable->vartype;
Oid typeModId = variable->vartypmod;
Oid collationId = variable->varcollid;
/*
* Other expression types would probably work too, but none of the
* callers need support for them for now, so we haven't tested them (*1).
*/
Assert(IsA(rightArg, Const) || IsA(rightArg, Var));
Oid typeId = leftVar->vartype;
Oid collationId = leftVar->varcollid;
Oid accessMethodId = BTREE_AM_OID;
@ -3200,18 +3214,16 @@ MakeOpExpression(Var *variable, int16 strategyNumber)
*/
if (operatorClassInputType != typeId && typeType != TYPTYPE_PSEUDO)
{
variable = (Var *) makeRelabelType((Expr *) variable, operatorClassInputType,
-1, collationId, COERCE_IMPLICIT_CAST);
leftVar = (Var *) makeRelabelType((Expr *) leftVar, operatorClassInputType,
-1, collationId, COERCE_IMPLICIT_CAST);
}
Const *constantValue = makeNullConst(operatorClassInputType, typeModId, collationId);
/* Now make the expression with the given variable and a null constant */
OpExpr *expression = (OpExpr *) make_opclause(operatorId,
InvalidOid, /* no result type yet */
false, /* no return set */
(Expr *) variable,
(Expr *) constantValue,
(Expr *) leftVar,
rightArg,
InvalidOid, collationId);
/* Set implementing function id and result type */
@ -3222,6 +3234,31 @@ MakeOpExpression(Var *variable, int16 strategyNumber)
}
/*
* MakeOpExpression is a wrapper around MakeOpExpressionExtended
* that creates a null constant of the appropriate type for right
* hand side operator class input type. As a result, it builds an
* operator expression node that's of the form "Var <op> NULL".
*/
OpExpr *
MakeOpExpression(Var *leftVar, int16 strategyNumber)
{
Oid typeId = leftVar->vartype;
Oid typeModId = leftVar->vartypmod;
Oid collationId = leftVar->varcollid;
Oid accessMethodId = BTREE_AM_OID;
OperatorCacheEntry *operatorCacheEntry = LookupOperatorByType(typeId, accessMethodId,
strategyNumber);
Oid operatorClassInputType = operatorCacheEntry->operatorClassInputType;
Const *constantValue = makeNullConst(operatorClassInputType, typeModId, collationId);
return MakeOpExpressionExtended(leftVar, (Expr *) constantValue, strategyNumber);
}
/*
* LookupOperatorByType is a wrapper around GetOperatorByType(),
* operatorClassInputType() and get_typtype() functions that uses a cache to avoid


@ -372,6 +372,25 @@ AddPartitionKeyNotNullFilterToSelect(Query *subqery)
/* we should have found target partition column */
Assert(targetPartitionColumnVar != NULL);
#if PG_VERSION_NUM >= PG_VERSION_18
if (subqery->hasGroupRTE)
{
/* if the partition column is a grouped column, we need to flatten it
* to ensure query deparsing works correctly. We choose to do this here
* instead of in ruleutils.c because we want to keep the flattening logic
* close to the NOT NULL filter injection.
*/
RangeTblEntry *partitionRTE = rt_fetch(targetPartitionColumnVar->varno,
subqery->rtable);
if (partitionRTE->rtekind == RTE_GROUP)
{
targetPartitionColumnVar = (Var *) flatten_group_exprs(NULL, subqery,
(Node *)
targetPartitionColumnVar);
}
}
#endif
/* create expression for partition_column IS NOT NULL */
NullTest *nullTest = makeNode(NullTest);
nullTest->nulltesttype = IS_NOT_NULL;
@ -1609,10 +1628,19 @@ MasterIrreducibleExpressionFunctionChecker(Oid func_id, void *context)
/*
* TargetEntryChangesValue determines whether the given target entry may
* change the value in a given column, given a join tree. The result is
* true unless the expression refers directly to the column, or the
* expression is a value that is implied by the qualifiers of the join
* tree, or the target entry sets a different column.
* change the value given a column and a join tree.
*
* The function assumes that the "targetEntry" references given "column"
* Var via its "resname" and is used as part of a modify query. This means
* that, for example, for an update query, the input "targetEntry" constructs
* the following assignment operation as part of the SET clause:
* "col_a = expr_a", where "col_a" refers to the input "column" Var (via
* "resname") as per the assumption written above. And we want to understand
* if "expr_a" (which is pointed to by targetEntry->expr) refers directly to
* the "column" Var, or "expr_a" is a value that is implied to be equal
* to "column" Var by the qualifiers of the join tree. If so, we know that
* the value of "col_a" effectively cannot be changed by this assignment
* operation.
*/
bool
TargetEntryChangesValue(TargetEntry *targetEntry, Var *column, FromExpr *joinTree)
@ -1623,11 +1651,36 @@ TargetEntryChangesValue(TargetEntry *targetEntry, Var *column, FromExpr *joinTre
if (IsA(setExpr, Var))
{
Var *newValue = (Var *) setExpr;
if (newValue->varattno == column->varattno)
if (column->varno == newValue->varno &&
column->varattno == newValue->varattno)
{
/* target entry of the form SET col = table.col */
/*
* Target entry is of the form "SET col_a = foo.col_b",
* where foo also points to the same range table entry
* and col_a and col_b are the same. So, effectively
* they're literally referring to the same column.
*/
isColumnValueChanged = false;
}
else
{
List *restrictClauseList = WhereClauseList(joinTree);
OpExpr *equalityExpr = MakeOpExpressionExtended(column, (Expr *) newValue,
BTEqualStrategyNumber);
bool predicateIsImplied = predicate_implied_by(list_make1(equalityExpr),
restrictClauseList, false);
if (predicateIsImplied)
{
/*
* Target entry is of the form
* "SET col_a = foo.col_b WHERE col_a = foo.col_b (AND (...))",
* where foo points to a different relation or it points
* to the same relation but col_a is not the same column as col_b.
*/
isColumnValueChanged = false;
}
}
}
else if (IsA(setExpr, Const))
{
@ -1648,7 +1701,10 @@ TargetEntryChangesValue(TargetEntry *targetEntry, Var *column, FromExpr *joinTre
restrictClauseList, false);
if (predicateIsImplied)
{
/* target entry of the form SET col = <x> WHERE col = <x> AND ... */
/*
* Target entry is of the form
* "SET col_a = const_a WHERE col_a = const_a (AND (...))".
*/
isColumnValueChanged = false;
}
}
@ -2136,7 +2192,11 @@ CheckAndBuildDelayedFastPathPlan(DistributedPlanningContext *planContext,
static bool
ConvertToQueryOnShard(Query *query, Oid citusTableOid, Oid shardId)
{
Assert(list_length(query->rtable) == 1);
Assert(list_length(query->rtable) == 1
#if PG_VERSION_NUM >= PG_VERSION_18
|| (list_length(query->rtable) == 2 && query->hasGroupRTE)
#endif
);
RangeTblEntry *citusTableRte = (RangeTblEntry *) linitial(query->rtable);
Assert(citusTableRte->relid == citusTableOid);


@ -333,7 +333,9 @@ WhereOrHavingClauseContainsSubquery(Query *query)
bool
TargetListContainsSubquery(List *targetList)
{
return FindNodeMatchingCheckFunction((Node *) targetList, IsNodeSubquery);
bool hasSubquery = FindNodeMatchingCheckFunction((Node *) targetList, IsNodeSubquery);
return hasSubquery;
}
@ -1093,6 +1095,28 @@ DeferErrorIfCannotPushdownSubquery(Query *subqueryTree, bool outerMostQueryHasLi
}
/*
* FlattenGroupExprs flattens the GROUP BY expressions in the query tree
* by replacing VAR nodes referencing the GROUP range table with the actual
* GROUP BY expression. This is used by Citus planning to ensure correctness
* when analysing and building the distributed plan.
*/
void
FlattenGroupExprs(Query *queryTree)
{
#if PG_VERSION_NUM >= PG_VERSION_18
if (queryTree->hasGroupRTE)
{
queryTree->targetList = (List *)
flatten_group_exprs(NULL, queryTree,
(Node *) queryTree->targetList);
queryTree->havingQual =
flatten_group_exprs(NULL, queryTree, queryTree->havingQual);
}
#endif
}
/*
* DeferErrorIfSubqueryRequiresMerge returns a deferred error if the subquery
* requires a merge step on the coordinator (e.g. limit, group by non-distribution
@ -1953,6 +1977,13 @@ static MultiNode *
SubqueryPushdownMultiNodeTree(Query *originalQuery)
{
Query *queryTree = copyObject(originalQuery);
/*
* On PG18+ we need to flatten GROUP BY expressions to ensure correct processing
* later on, such as identification of partition columns in GROUP BY.
*/
FlattenGroupExprs(queryTree);
List *targetEntryList = queryTree->targetList;
MultiCollect *subqueryCollectNode = CitusMakeNode(MultiCollect);
@ -2029,7 +2060,9 @@ SubqueryPushdownMultiNodeTree(Query *originalQuery)
pushedDownQuery->setOperations = copyObject(queryTree->setOperations);
pushedDownQuery->querySource = queryTree->querySource;
pushedDownQuery->hasSubLinks = queryTree->hasSubLinks;
#if PG_VERSION_NUM >= PG_VERSION_18
pushedDownQuery->hasGroupRTE = queryTree->hasGroupRTE;
#endif
MultiTable *subqueryNode = MultiSubqueryPushdownTable(pushedDownQuery);
SetChild((MultiUnaryNode *) subqueryCollectNode, (MultiNode *) subqueryNode);


@@ -97,6 +97,7 @@
#include "distributed/version_compat.h"
bool EnableRecurringOuterJoinPushdown = true;
bool EnableOuterJoinsWithPseudoconstantQualsPrePG17 = false;
/*
* RecursivePlanningContext is used to recursively plan subqueries
@@ -260,7 +261,6 @@ GenerateSubplansForSubqueriesAndCTEs(uint64 planId, Query *originalQuery,
*/
context.allDistributionKeysInQueryAreEqual =
AllDistributionKeysInQueryAreEqual(originalQuery, plannerRestrictionContext);
DeferredErrorMessage *error = RecursivelyPlanSubqueriesAndCTEs(originalQuery,
&context);
if (error != NULL)
@@ -509,7 +509,7 @@ ShouldRecursivelyPlanOuterJoins(Query *query, RecursivePlanningContext *context)
bool hasOuterJoin =
context->plannerRestrictionContext->joinRestrictionContext->hasOuterJoin;
#if PG_VERSION_NUM < PG_VERSION_17
if (!EnableOuterJoinsWithPseudoconstantQualsPrePG17 && !hasOuterJoin)
{
/*
* PG15 commit d1ef5631e620f9a5b6480a32bb70124c857af4f1
@@ -1122,14 +1122,10 @@ ExtractSublinkWalker(Node *node, List **sublinkList)
static bool
ShouldRecursivelyPlanSublinks(Query *query)
{
bool hasDistributedTable = (FindNodeMatchingCheckFunctionInRangeTableList(
query->rtable,
IsDistributedTableRTE));

return !hasDistributedTable;
}


@@ -971,6 +971,40 @@ GetVarFromAssignedParam(List *outerPlanParamsList, Param *plannerParam,
}
}
#if PG_VERSION_NUM >= PG_VERSION_18
/*
* In PG18+, the dereferenced PARAM node could be a GroupVar if the
* query has a GROUP BY. In that case, we need to make an extra
* hop to get the underlying Var from the grouping expressions.
*/
if (assignedVar != NULL)
{
Query *parse = (*rootContainingVar)->parse;
if (parse->hasGroupRTE)
{
RangeTblEntry *rte = rt_fetch(assignedVar->varno, parse->rtable);
if (rte->rtekind == RTE_GROUP)
{
Assert(assignedVar->varattno >= 1 &&
assignedVar->varattno <= list_length(rte->groupexprs));
Node *groupVar = list_nth(rte->groupexprs, assignedVar->varattno - 1);
if (IsA(groupVar, Var))
{
assignedVar = (Var *) groupVar;
}
else
{
/* todo: handle PlaceHolderVar case if needed */
ereport(DEBUG2, (errmsg(
"GroupVar maps to non-Var group expr; bailing out")));
assignedVar = NULL;
}
}
}
}
#endif
return assignedVar;
}
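The "extra hop" through the GROUP range table entry can be sketched outside PostgreSQL as a toy resolver. This is an illustrative model only: `GroupExpr`, `NodeKind`, and `resolve_group_var` are made-up names, not the PG node structs, but the control flow mirrors the branch above (resolve the 1-based attribute number into the group-expression list; return the underlying Var-like entry, or bail out with NULL when the group expression is not a plain Var).

```c
#include <assert.h>
#include <stddef.h>

typedef enum { NODE_VAR, NODE_OTHER } NodeKind;

/* Toy stand-in for an entry of rte->groupexprs (illustrative only). */
typedef struct GroupExpr
{
	NodeKind kind;
	int varattno;           /* meaningful only when kind == NODE_VAR */
} GroupExpr;

/* Resolve a 1-based varattno into the group-expression list; NULL
 * means "non-Var group expression, bail out" (the DEBUG2 branch). */
static const GroupExpr *
resolve_group_var(const GroupExpr *groupexprs, int nexprs, int varattno)
{
	assert(varattno >= 1 && varattno <= nexprs);

	const GroupExpr *expr = &groupexprs[varattno - 1];
	if (expr->kind == NODE_VAR)
	{
		return expr;        /* underlying Var found */
	}
	return NULL;            /* e.g. a PlaceHolderVar-like entry */
}
```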
@@ -2431,7 +2465,7 @@ FilterJoinRestrictionContext(JoinRestrictionContext *joinRestrictionContext, Rel
/*
* RangeTableArrayContainsAnyRTEIdentities returns true if any of the range table entries
* in rangeTableEntries array is a range table relation specified in queryRteIdentities.
*/
static bool
RangeTableArrayContainsAnyRTEIdentities(RangeTblEntry **rangeTableEntries, int
@@ -2444,6 +2478,18 @@ RangeTableArrayContainsAnyRTEIdentities(RangeTblEntry **rangeTableEntries, int
List *rangeTableRelationList = NULL;
ListCell *rteRelationCell = NULL;
#if PG_VERSION_NUM >= PG_VERSION_18
/*
* In PG18+, the planner's simple_rte_array may contain NULL entries
* for "dead relations". See PG commits 5f6f951 and e9a20e4 for details.
*/
if (rangeTableEntry == NULL)
{
continue;
}
#endif
/*
* Get list of all RTE_RELATIONs in the given range table entry
* (i.e., rangeTableEntry could be a subquery where we're interested


@@ -1480,6 +1480,23 @@ RegisterCitusConfigVariables(void)
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
NULL, NULL, NULL);
DefineCustomBoolVariable(
"citus.enable_outer_joins_with_pseudoconstant_quals_pre_pg17",
gettext_noop("Enables running distributed queries with outer joins "
"and pseudoconstant quals pre PG17."),
gettext_noop("Set to false by default. If set to true, enables "
"running distributed queries with outer joins and "
"pseudoconstant quals at the user's own risk: "
"pre PG17, Citus doesn't have access to "
"set_join_pathlist_hook, so correct query results "
"are not guaranteed. Note that in PG17+, this GUC "
"has no effect and such queries can be run."),
&EnableOuterJoinsWithPseudoconstantQualsPrePG17,
false,
PGC_USERSET,
GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
NULL, NULL, NULL);
DefineCustomBoolVariable(
"citus.enable_recurring_outer_join_pushdown",
gettext_noop("Enables outer join pushdown for recurring relations."),
@@ -2493,8 +2510,8 @@ RegisterCitusConfigVariables(void)
NULL,
&SkipAdvisoryLockPermissionChecks,
false,
PGC_SUSET,
GUC_SUPERUSER_ONLY | GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE,
NULL, NULL, NULL);
DefineCustomBoolVariable(


@@ -0,0 +1,5 @@
-- citus--13.2-1--14.0-1
-- bump version to 14.0-1
#include "udfs/citus_prepare_pg_upgrade/14.0-1.sql"
#include "udfs/citus_finish_pg_upgrade/14.0-1.sql"


@@ -0,0 +1,5 @@
-- citus--14.0-1--13.2-1
-- downgrade version to 13.2-1
#include "../udfs/citus_prepare_pg_upgrade/13.0-1.sql"
#include "../udfs/citus_finish_pg_upgrade/13.2-1.sql"


@@ -0,0 +1,268 @@
CREATE OR REPLACE FUNCTION pg_catalog.citus_finish_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
DECLARE
table_name regclass;
command text;
trigger_name text;
BEGIN
IF substring(current_setting('server_version'), '\d+')::int >= 14 THEN
EXECUTE $cmd$
-- disable propagation to prevent EnsureCoordinator errors
-- the aggregate created here does not depend on Citus extension (yet)
-- since we add the dependency with the next command
SET citus.enable_ddl_propagation TO OFF;
CREATE AGGREGATE array_cat_agg(anycompatiblearray) (SFUNC = array_cat, STYPE = anycompatiblearray);
COMMENT ON AGGREGATE array_cat_agg(anycompatiblearray)
IS 'concatenate input arrays into a single array';
RESET citus.enable_ddl_propagation;
$cmd$;
ELSE
EXECUTE $cmd$
SET citus.enable_ddl_propagation TO OFF;
CREATE AGGREGATE array_cat_agg(anyarray) (SFUNC = array_cat, STYPE = anyarray);
COMMENT ON AGGREGATE array_cat_agg(anyarray)
IS 'concatenate input arrays into a single array';
RESET citus.enable_ddl_propagation;
$cmd$;
END IF;
--
-- Citus creates the array_cat_agg but because of a compatibility
-- issue between pg13-pg14, we drop and create it during upgrade.
-- And as Citus creates it, there needs to be a dependency to the
-- Citus extension, so we create that dependency here.
-- We are not using:
-- ALTER EXTENSION citus DROP/CREATE AGGREGATE array_cat_agg
-- because we don't have an easy way to check if the aggregate
-- exists with anyarray type or anycompatiblearray type.
INSERT INTO pg_depend
SELECT
'pg_proc'::regclass::oid as classid,
(SELECT oid FROM pg_proc WHERE proname = 'array_cat_agg') as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'e' as deptype;
-- PG16 has its own any_value, so only create it pre PG16.
-- We can remove this part when we drop support for PG16
IF substring(current_setting('server_version'), '\d+')::int < 16 THEN
EXECUTE $cmd$
-- disable propagation to prevent EnsureCoordinator errors
-- the aggregate created here does not depend on Citus extension (yet)
-- since we add the dependency with the next command
SET citus.enable_ddl_propagation TO OFF;
CREATE OR REPLACE FUNCTION pg_catalog.any_value_agg ( anyelement, anyelement )
RETURNS anyelement AS $$
SELECT CASE WHEN $1 IS NULL THEN $2 ELSE $1 END;
$$ LANGUAGE SQL STABLE;
CREATE AGGREGATE pg_catalog.any_value (
sfunc = pg_catalog.any_value_agg,
combinefunc = pg_catalog.any_value_agg,
basetype = anyelement,
stype = anyelement
);
COMMENT ON AGGREGATE pg_catalog.any_value(anyelement) IS
'Returns the value of any row in the group. It is mostly useful when you know there will be only 1 element.';
RESET citus.enable_ddl_propagation;
--
-- Citus creates the any_value aggregate but because of a compatibility
-- issue between pg15-pg16 -- any_value is created in PG16, we drop
-- and create it during upgrade IF upgraded version is less than 16.
-- And as Citus creates it, there needs to be a dependency to the
-- Citus extension, so we create that dependency here.
INSERT INTO pg_depend
SELECT
'pg_proc'::regclass::oid as classid,
(SELECT oid FROM pg_proc WHERE proname = 'any_value_agg') as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'e' as deptype;
INSERT INTO pg_depend
SELECT
'pg_proc'::regclass::oid as classid,
(SELECT oid FROM pg_proc WHERE proname = 'any_value') as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'e' as deptype;
$cmd$;
END IF;
--
-- restore citus catalog tables
--
INSERT INTO pg_catalog.pg_dist_partition SELECT * FROM public.pg_dist_partition;
-- if we are upgrading from PG14/PG15 to PG16+,
-- we need to regenerate the partkeys because they will include varnullingrels as well.
UPDATE pg_catalog.pg_dist_partition
SET partkey = column_name_to_column(pg_dist_partkeys_pre_16_upgrade.logicalrelid, col_name)
FROM public.pg_dist_partkeys_pre_16_upgrade
WHERE pg_dist_partkeys_pre_16_upgrade.logicalrelid = pg_dist_partition.logicalrelid;
DROP TABLE public.pg_dist_partkeys_pre_16_upgrade;
-- if we are upgrading to PG18+,
-- we need to regenerate the partkeys because they will include varreturningtype as well.
UPDATE pg_catalog.pg_dist_partition
SET partkey = column_name_to_column(pg_dist_partkeys_pre_18_upgrade.logicalrelid, col_name)
FROM public.pg_dist_partkeys_pre_18_upgrade
WHERE pg_dist_partkeys_pre_18_upgrade.logicalrelid = pg_dist_partition.logicalrelid;
DROP TABLE public.pg_dist_partkeys_pre_18_upgrade;
INSERT INTO pg_catalog.pg_dist_shard SELECT * FROM public.pg_dist_shard;
INSERT INTO pg_catalog.pg_dist_placement SELECT * FROM public.pg_dist_placement;
INSERT INTO pg_catalog.pg_dist_node_metadata SELECT * FROM public.pg_dist_node_metadata;
INSERT INTO pg_catalog.pg_dist_node SELECT * FROM public.pg_dist_node;
INSERT INTO pg_catalog.pg_dist_local_group SELECT * FROM public.pg_dist_local_group;
INSERT INTO pg_catalog.pg_dist_transaction SELECT * FROM public.pg_dist_transaction;
INSERT INTO pg_catalog.pg_dist_colocation SELECT * FROM public.pg_dist_colocation;
INSERT INTO pg_catalog.pg_dist_cleanup SELECT * FROM public.pg_dist_cleanup;
INSERT INTO pg_catalog.pg_dist_schema SELECT schemaname::regnamespace, colocationid FROM public.pg_dist_schema;
-- enterprise catalog tables
INSERT INTO pg_catalog.pg_dist_authinfo SELECT * FROM public.pg_dist_authinfo;
INSERT INTO pg_catalog.pg_dist_poolinfo SELECT * FROM public.pg_dist_poolinfo;
-- Temporarily disable trigger to check for validity of functions while
-- inserting. The current contents of the table might be invalid if one of
-- the functions was removed by the user without also removing the
-- rebalance strategy. Obviously that's not great, but it should be no
-- reason to fail the upgrade.
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy DISABLE TRIGGER pg_dist_rebalance_strategy_validation_trigger;
INSERT INTO pg_catalog.pg_dist_rebalance_strategy SELECT
name,
default_strategy,
shard_cost_function::regprocedure::regproc,
node_capacity_function::regprocedure::regproc,
shard_allowed_on_node_function::regprocedure::regproc,
default_threshold,
minimum_threshold,
improvement_threshold
FROM public.pg_dist_rebalance_strategy;
ALTER TABLE pg_catalog.pg_dist_rebalance_strategy ENABLE TRIGGER pg_dist_rebalance_strategy_validation_trigger;
--
-- drop backup tables
--
DROP TABLE public.pg_dist_authinfo;
DROP TABLE public.pg_dist_colocation;
DROP TABLE public.pg_dist_local_group;
DROP TABLE public.pg_dist_node;
DROP TABLE public.pg_dist_node_metadata;
DROP TABLE public.pg_dist_partition;
DROP TABLE public.pg_dist_placement;
DROP TABLE public.pg_dist_poolinfo;
DROP TABLE public.pg_dist_shard;
DROP TABLE public.pg_dist_transaction;
DROP TABLE public.pg_dist_rebalance_strategy;
DROP TABLE public.pg_dist_cleanup;
DROP TABLE public.pg_dist_schema;
--
-- reset sequences
--
PERFORM setval('pg_catalog.pg_dist_shardid_seq', (SELECT MAX(shardid)+1 AS max_shard_id FROM pg_dist_shard), false);
PERFORM setval('pg_catalog.pg_dist_placement_placementid_seq', (SELECT MAX(placementid)+1 AS max_placement_id FROM pg_dist_placement), false);
PERFORM setval('pg_catalog.pg_dist_groupid_seq', (SELECT MAX(groupid)+1 AS max_group_id FROM pg_dist_node), false);
PERFORM setval('pg_catalog.pg_dist_node_nodeid_seq', (SELECT MAX(nodeid)+1 AS max_node_id FROM pg_dist_node), false);
PERFORM setval('pg_catalog.pg_dist_colocationid_seq', (SELECT MAX(colocationid)+1 AS max_colocation_id FROM pg_dist_colocation), false);
PERFORM setval('pg_catalog.pg_dist_operationid_seq', (SELECT MAX(operation_id)+1 AS max_operation_id FROM pg_dist_cleanup), false);
PERFORM setval('pg_catalog.pg_dist_cleanup_recordid_seq', (SELECT MAX(record_id)+1 AS max_record_id FROM pg_dist_cleanup), false);
PERFORM setval('pg_catalog.pg_dist_clock_logical_seq', (SELECT last_value FROM public.pg_dist_clock_logical_seq), false);
DROP TABLE public.pg_dist_clock_logical_seq;
--
-- register triggers
--
FOR table_name IN SELECT logicalrelid FROM pg_catalog.pg_dist_partition JOIN pg_class ON (logicalrelid = oid) WHERE relkind <> 'f'
LOOP
trigger_name := 'truncate_trigger_' || table_name::oid;
command := 'create trigger ' || trigger_name || ' after truncate on ' || table_name || ' execute procedure pg_catalog.citus_truncate_trigger()';
EXECUTE command;
command := 'update pg_trigger set tgisinternal = true where tgname = ' || quote_literal(trigger_name);
EXECUTE command;
END LOOP;
--
-- set dependencies
--
INSERT INTO pg_depend
SELECT
'pg_class'::regclass::oid as classid,
p.logicalrelid::regclass::oid as objid,
0 as objsubid,
'pg_extension'::regclass::oid as refclassid,
(select oid from pg_extension where extname = 'citus') as refobjid,
0 as refobjsubid ,
'n' as deptype
FROM pg_catalog.pg_dist_partition p;
-- If citus_columnar extension exists, then perform the post PG-upgrade work for columnar as well.
--
-- First look if pg_catalog.columnar_finish_pg_upgrade function exists as part of the citus_columnar
-- extension. (We check whether it's part of the extension just for security reasons). If it does, then
-- call it. If not, then look for columnar_internal.columnar_ensure_am_depends_catalog function as
-- part of the citus_columnar extension. If so, then call it. We alternatively check for the latter UDF
-- just because pg_catalog.columnar_finish_pg_upgrade function is introduced in citus_columnar 13.2-1
-- and as of today all it does is to call columnar_internal.columnar_ensure_am_depends_catalog function.
IF EXISTS (
SELECT 1 FROM pg_depend
JOIN pg_proc ON (pg_depend.objid = pg_proc.oid)
JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid)
JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid)
WHERE
-- Looking if pg_catalog.columnar_finish_pg_upgrade function exists and
-- if there is a dependency record from it (proc class = 1255) ..
pg_depend.classid = 1255 AND pg_namespace.nspname = 'pg_catalog' AND pg_proc.proname = 'columnar_finish_pg_upgrade' AND
-- .. to citus_columnar extension (3079 = extension class), if it exists.
pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar'
)
THEN PERFORM pg_catalog.columnar_finish_pg_upgrade();
ELSIF EXISTS (
SELECT 1 FROM pg_depend
JOIN pg_proc ON (pg_depend.objid = pg_proc.oid)
JOIN pg_namespace ON (pg_proc.pronamespace = pg_namespace.oid)
JOIN pg_extension ON (pg_depend.refobjid = pg_extension.oid)
WHERE
-- Looking if columnar_internal.columnar_ensure_am_depends_catalog function exists and
-- if there is a dependency record from it (proc class = 1255) ..
pg_depend.classid = 1255 AND pg_namespace.nspname = 'columnar_internal' AND pg_proc.proname = 'columnar_ensure_am_depends_catalog' AND
-- .. to citus_columnar extension (3079 = extension class), if it exists.
pg_depend.refclassid = 3079 AND pg_extension.extname = 'citus_columnar'
)
THEN PERFORM columnar_internal.columnar_ensure_am_depends_catalog();
END IF;
-- restore pg_dist_object from the stable identifiers
TRUNCATE pg_catalog.pg_dist_object;
INSERT INTO pg_catalog.pg_dist_object (classid, objid, objsubid, distribution_argument_index, colocationid)
SELECT
address.classid,
address.objid,
address.objsubid,
naming.distribution_argument_index,
naming.colocationid
FROM
public.pg_dist_object naming,
pg_catalog.pg_get_object_address(naming.type, naming.object_names, naming.object_args) address;
DROP TABLE public.pg_dist_object;
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.citus_finish_pg_upgrade()
IS 'perform tasks to restore citus settings from a location that has been prepared before pg_upgrade';


@@ -115,6 +115,14 @@ BEGIN
WHERE pg_dist_partkeys_pre_16_upgrade.logicalrelid = pg_dist_partition.logicalrelid;
DROP TABLE public.pg_dist_partkeys_pre_16_upgrade;
-- if we are upgrading to PG18+,
-- we need to regenerate the partkeys because they will include varreturningtype as well.
UPDATE pg_catalog.pg_dist_partition
SET partkey = column_name_to_column(pg_dist_partkeys_pre_18_upgrade.logicalrelid, col_name)
FROM public.pg_dist_partkeys_pre_18_upgrade
WHERE pg_dist_partkeys_pre_18_upgrade.logicalrelid = pg_dist_partition.logicalrelid;
DROP TABLE public.pg_dist_partkeys_pre_18_upgrade;
INSERT INTO pg_catalog.pg_dist_shard SELECT * FROM public.pg_dist_shard;
INSERT INTO pg_catalog.pg_dist_placement SELECT * FROM public.pg_dist_placement;
INSERT INTO pg_catalog.pg_dist_node_metadata SELECT * FROM public.pg_dist_node_metadata;


@@ -0,0 +1,111 @@
CREATE OR REPLACE FUNCTION pg_catalog.citus_prepare_pg_upgrade()
RETURNS void
LANGUAGE plpgsql
SET search_path = pg_catalog
AS $cppu$
BEGIN
DELETE FROM pg_depend WHERE
objid IN (SELECT oid FROM pg_proc WHERE proname = 'array_cat_agg') AND
refobjid IN (select oid from pg_extension where extname = 'citus');
--
-- We are dropping the aggregates because postgres 14 changed
-- array_cat type from anyarray to anycompatiblearray. When
-- upgrading to pg14, specifically when running pg_restore on
-- array_cat_agg we would get an error. So we drop the aggregate
-- and create the right one on citus_finish_pg_upgrade.
DROP AGGREGATE IF EXISTS array_cat_agg(anyarray);
DROP AGGREGATE IF EXISTS array_cat_agg(anycompatiblearray);
-- We should drop any_value because PG16+ has its own any_value function
-- We can remove this part when we drop support for PG16
IF substring(current_setting('server_version'), '\d+')::int < 16 THEN
DELETE FROM pg_depend WHERE
objid IN (SELECT oid FROM pg_proc WHERE proname = 'any_value' OR proname = 'any_value_agg') AND
refobjid IN (select oid from pg_extension where extname = 'citus');
DROP AGGREGATE IF EXISTS pg_catalog.any_value(anyelement);
DROP FUNCTION IF EXISTS pg_catalog.any_value_agg(anyelement, anyelement);
END IF;
--
-- Drop existing backup tables
--
DROP TABLE IF EXISTS public.pg_dist_partition;
DROP TABLE IF EXISTS public.pg_dist_shard;
DROP TABLE IF EXISTS public.pg_dist_placement;
DROP TABLE IF EXISTS public.pg_dist_node_metadata;
DROP TABLE IF EXISTS public.pg_dist_node;
DROP TABLE IF EXISTS public.pg_dist_local_group;
DROP TABLE IF EXISTS public.pg_dist_transaction;
DROP TABLE IF EXISTS public.pg_dist_colocation;
DROP TABLE IF EXISTS public.pg_dist_authinfo;
DROP TABLE IF EXISTS public.pg_dist_poolinfo;
DROP TABLE IF EXISTS public.pg_dist_rebalance_strategy;
DROP TABLE IF EXISTS public.pg_dist_object;
DROP TABLE IF EXISTS public.pg_dist_cleanup;
DROP TABLE IF EXISTS public.pg_dist_schema;
DROP TABLE IF EXISTS public.pg_dist_clock_logical_seq;
--
-- backup citus catalog tables
--
CREATE TABLE public.pg_dist_partition AS SELECT * FROM pg_catalog.pg_dist_partition;
CREATE TABLE public.pg_dist_shard AS SELECT * FROM pg_catalog.pg_dist_shard;
CREATE TABLE public.pg_dist_placement AS SELECT * FROM pg_catalog.pg_dist_placement;
CREATE TABLE public.pg_dist_node_metadata AS SELECT * FROM pg_catalog.pg_dist_node_metadata;
CREATE TABLE public.pg_dist_node AS SELECT * FROM pg_catalog.pg_dist_node;
CREATE TABLE public.pg_dist_local_group AS SELECT * FROM pg_catalog.pg_dist_local_group;
CREATE TABLE public.pg_dist_transaction AS SELECT * FROM pg_catalog.pg_dist_transaction;
CREATE TABLE public.pg_dist_colocation AS SELECT * FROM pg_catalog.pg_dist_colocation;
CREATE TABLE public.pg_dist_cleanup AS SELECT * FROM pg_catalog.pg_dist_cleanup;
-- save names of the tenant schemas instead of their oids because the oids might change after pg upgrade
CREATE TABLE public.pg_dist_schema AS SELECT schemaid::regnamespace::text AS schemaname, colocationid FROM pg_catalog.pg_dist_schema;
-- enterprise catalog tables
CREATE TABLE public.pg_dist_authinfo AS SELECT * FROM pg_catalog.pg_dist_authinfo;
CREATE TABLE public.pg_dist_poolinfo AS SELECT * FROM pg_catalog.pg_dist_poolinfo;
-- sequences
CREATE TABLE public.pg_dist_clock_logical_seq AS SELECT last_value FROM pg_catalog.pg_dist_clock_logical_seq;
CREATE TABLE public.pg_dist_rebalance_strategy AS SELECT
name,
default_strategy,
shard_cost_function::regprocedure::text,
node_capacity_function::regprocedure::text,
shard_allowed_on_node_function::regprocedure::text,
default_threshold,
minimum_threshold,
improvement_threshold
FROM pg_catalog.pg_dist_rebalance_strategy;
-- store upgrade stable identifiers on pg_dist_object catalog
CREATE TABLE public.pg_dist_object AS SELECT
address.type,
address.object_names,
address.object_args,
objects.distribution_argument_index,
objects.colocationid
FROM pg_catalog.pg_dist_object objects,
pg_catalog.pg_identify_object_as_address(objects.classid, objects.objid, objects.objsubid) address;
-- if we are upgrading from PG14/PG15 to PG16+,
-- we will need to regenerate the partkeys because they will include varnullingrels as well.
-- so we save the partkeys as column names here
CREATE TABLE IF NOT EXISTS public.pg_dist_partkeys_pre_16_upgrade AS
SELECT logicalrelid, column_to_column_name(logicalrelid, partkey) as col_name
FROM pg_catalog.pg_dist_partition WHERE partkey IS NOT NULL AND partkey NOT ILIKE '%varnullingrels%';
-- similarly, if we are upgrading to PG18+,
-- we will need to regenerate the partkeys because they will include varreturningtype as well.
-- so we save the partkeys as column names here
CREATE TABLE IF NOT EXISTS public.pg_dist_partkeys_pre_18_upgrade AS
SELECT logicalrelid, column_to_column_name(logicalrelid, partkey) as col_name
FROM pg_catalog.pg_dist_partition WHERE partkey IS NOT NULL AND partkey NOT ILIKE '%varreturningtype%';
-- remove duplicates (we would only have duplicates if we are upgrading from pre-16 to PG18+)
DELETE FROM public.pg_dist_partkeys_pre_18_upgrade USING public.pg_dist_partkeys_pre_16_upgrade p16
WHERE public.pg_dist_partkeys_pre_18_upgrade.logicalrelid = p16.logicalrelid
AND public.pg_dist_partkeys_pre_18_upgrade.col_name = p16.col_name;
END;
$cppu$;
COMMENT ON FUNCTION pg_catalog.citus_prepare_pg_upgrade()
IS 'perform tasks to copy citus settings to a location that could later be restored after pg_upgrade is done';


@@ -93,6 +93,17 @@ BEGIN
CREATE TABLE IF NOT EXISTS public.pg_dist_partkeys_pre_16_upgrade AS
SELECT logicalrelid, column_to_column_name(logicalrelid, partkey) as col_name
FROM pg_catalog.pg_dist_partition WHERE partkey IS NOT NULL AND partkey NOT ILIKE '%varnullingrels%';
-- similarly, if we are upgrading to PG18+,
-- we will need to regenerate the partkeys because they will include varreturningtype as well.
-- so we save the partkeys as column names here
CREATE TABLE IF NOT EXISTS public.pg_dist_partkeys_pre_18_upgrade AS
SELECT logicalrelid, column_to_column_name(logicalrelid, partkey) as col_name
FROM pg_catalog.pg_dist_partition WHERE partkey IS NOT NULL AND partkey NOT ILIKE '%varreturningtype%';
-- remove duplicates (we would only have duplicates if we are upgrading from pre-16 to PG18+)
DELETE FROM public.pg_dist_partkeys_pre_18_upgrade USING public.pg_dist_partkeys_pre_16_upgrade p16
WHERE public.pg_dist_partkeys_pre_18_upgrade.logicalrelid = p16.logicalrelid
AND public.pg_dist_partkeys_pre_18_upgrade.col_name = p16.col_name;
END;
$cppu$;


@@ -22,7 +22,7 @@ most_common_vals_json AS (
table_reltuples_json AS (
SELECT distinct(shardid),
CAST( CAST((json_array_elements(result::json)->>'reltuples') AS DOUBLE PRECISION) AS bigint) AS shard_reltuples,
(json_array_elements(result::json)->>'citus_table')::regclass AS citus_table
FROM most_common_vals_json),
@@ -32,8 +32,8 @@ table_reltuples AS (
null_frac_json AS (
SELECT (json_array_elements(result::json)->>'citus_table')::regclass AS citus_table,
CAST( CAST((json_array_elements(result::json)->>'reltuples') AS DOUBLE PRECISION) AS bigint) AS shard_reltuples,
CAST((json_array_elements(result::json)->>'null_frac') AS float4) AS null_frac,
(json_array_elements(result::json)->>'attname')::text AS attname
FROM most_common_vals_json
),
@@ -49,8 +49,8 @@ most_common_vals AS (
SELECT (json_array_elements(result::json)->>'citus_table')::regclass AS citus_table,
(json_array_elements(result::json)->>'attname')::text AS attname,
json_array_elements_text((json_array_elements(result::json)->>'most_common_vals')::json)::text AS common_val,
CAST(json_array_elements_text((json_array_elements(result::json)->>'most_common_freqs')::json) AS float4) AS common_freq,
CAST( CAST((json_array_elements(result::json)->>'reltuples') AS DOUBLE PRECISION) AS bigint) AS shard_reltuples
FROM most_common_vals_json),
common_val_occurrence AS (
@@ -58,7 +58,7 @@ common_val_occurrence AS (
sum(common_freq * shard_reltuples)::bigint AS occurrence
FROM most_common_vals m
GROUP BY citus_table, m.attname, common_val
ORDER BY 1, 2, occurrence DESC, 3)
SELECT nsp.nspname AS schemaname, p.relname AS tablename, c.attname,


@@ -22,7 +22,7 @@ most_common_vals_json AS (
table_reltuples_json AS (
SELECT distinct(shardid),
CAST( CAST((json_array_elements(result::json)->>'reltuples') AS DOUBLE PRECISION) AS bigint) AS shard_reltuples,
(json_array_elements(result::json)->>'citus_table')::regclass AS citus_table
FROM most_common_vals_json),
@@ -32,8 +32,8 @@ table_reltuples AS (
null_frac_json AS (
SELECT (json_array_elements(result::json)->>'citus_table')::regclass AS citus_table,
CAST( CAST((json_array_elements(result::json)->>'reltuples') AS DOUBLE PRECISION) AS bigint) AS shard_reltuples,
CAST((json_array_elements(result::json)->>'null_frac') AS float4) AS null_frac,
(json_array_elements(result::json)->>'attname')::text AS attname
FROM most_common_vals_json
),
@@ -49,8 +49,8 @@ most_common_vals AS (
SELECT (json_array_elements(result::json)->>'citus_table')::regclass AS citus_table,
(json_array_elements(result::json)->>'attname')::text AS attname,
json_array_elements_text((json_array_elements(result::json)->>'most_common_vals')::json)::text AS common_val,
CAST(json_array_elements_text((json_array_elements(result::json)->>'most_common_freqs')::json) AS float4) AS common_freq,
CAST( CAST((json_array_elements(result::json)->>'reltuples') AS DOUBLE PRECISION) AS bigint) AS shard_reltuples
FROM most_common_vals_json),
common_val_occurrence AS (
@@ -58,7 +58,7 @@ common_val_occurrence AS (
sum(common_freq * shard_reltuples)::bigint AS occurrence
FROM most_common_vals m
GROUP BY citus_table, m.attname, common_val
ORDER BY 1, 2, occurrence DESC, 3)
SELECT nsp.nspname AS schemaname, p.relname AS tablename, c.attname,


@@ -49,6 +49,9 @@ ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
#define STAT_TENANTS_COLUMNS 9
#define ONE_QUERY_SCORE 1000000000
/* this doesn't dereference its operand and is computed at compile time, so it's safe */
#define TENANT_STATS_SCORE_FIELD_BIT_LENGTH (sizeof(((TenantStats *) NULL)->score) * 8)
static char AttributeToTenant[MAX_TENANT_ATTRIBUTE_LENGTH] = "";
static CmdType AttributeToCommandType = CMD_UNKNOWN;
static int AttributeToColocationGroupId = INVALID_COLOCATION_ID;
@@ -605,8 +608,13 @@ ReduceScoreIfNecessary(TenantStats *tenantStats, TimestampTz queryTime)
*/
if (periodCountAfterLastScoreReduction > 0)
{
/* additional check to avoid undefined behavior */
tenantStats->score = (periodCountAfterLastScoreReduction <
					  TENANT_STATS_SCORE_FIELD_BIT_LENGTH)
					 ? tenantStats->score >> periodCountAfterLastScoreReduction
					 : 0;
tenantStats->lastScoreReduction = queryTime;
}
}


@@ -106,7 +106,9 @@ LogTransactionRecord(int32 groupId, char *transactionName, FullTransactionId out
HeapTuple heapTuple = heap_form_tuple(tupleDescriptor, values, isNulls);
/* insert new tuple */
PushActiveSnapshot(GetTransactionSnapshot());
CatalogTupleInsert(pgDistTransaction, heapTuple);
PopActiveSnapshot();
CommandCounterIncrement();
@@ -685,7 +687,7 @@ DeleteWorkerTransactions(WorkerNode *workerNode)
int
GetOuterXidAttrIndexInPgDistTransaction(TupleDesc tupleDesc)
{
return tupleDesc->natts == Natts_pg_dist_transaction
? (Anum_pg_dist_transaction_outerxid - 1)
: tupleDesc->natts - 1;
}
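The fallback above (catalog position when the descriptor has the expected arity, last attribute otherwise, e.g. mid-upgrade) can be sketched as a plain function. The constants below are assumptions for illustration, not the actual pg_dist_transaction catalog numbers:

```c
#include <assert.h>

/* Assumed, illustrative catalog constants (not the real values). */
#define NATTS_PG_DIST_TRANSACTION 3   /* expected number of attributes */
#define ANUM_OUTERXID             3   /* 1-based position of outerxid */

/* Return the 0-based index of the outerxid attribute: use its catalog
 * position when the descriptor matches the expected arity, otherwise
 * fall back to the last attribute. */
static int
outer_xid_attr_index(int natts)
{
	return natts == NATTS_PG_DIST_TRANSACTION
		   ? (ANUM_OUTERXID - 1)
		   : natts - 1;
}
```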


@@ -137,6 +137,8 @@ CopyNodeDistributedPlan(COPYFUNC_ARGS)
COPY_SCALAR_FIELD(fastPathRouterPlan);
COPY_SCALAR_FIELD(numberOfTimesExecuted);
COPY_NODE_FIELD(planningError);
COPY_SCALAR_FIELD(sourceResultRepartitionColumnIndex);
}


@@ -370,6 +370,20 @@ DistOpsValidityState(Node *node, const DistributeObjectOps *ops)
{
if (ops && ops->operationType == DIST_OPS_CREATE)
{
/*
* We should beware of qualifying the CREATE statement too early.
*/
if (nodeTag(node) == T_CreateDomainStmt)
{
/*
* Create Domain statements should be qualified after local creation
* because in case of an error in creation, we don't want to print
* the error with the qualified name, as that would differ from
* vanilla Postgres error output.
*/
return ShouldQualifyAfterLocalCreation;
}
/*
* We should not validate CREATE statements because no address exists
* here yet.


@@ -203,6 +203,7 @@ OutDistributedPlan(OUTFUNC_ARGS)
WRITE_UINT_FIELD(numberOfTimesExecuted);
WRITE_NODE_FIELD(planningError);
WRITE_INT_FIELD(sourceResultRepartitionColumnIndex);
}


@@ -140,10 +140,10 @@ GetReplicationLag(WorkerNode *primaryWorkerNode, WorkerNode *replicaWorkerNode)
ForgetResults(replicaConnection);
CloseConnection(replicaConnection);
ereport(DEBUG2, (errmsg(
"successfully measured replication lag: primary LSN %s, clone LSN %s",
primary_lsn_str, replica_lsn_str)));
ereport(DEBUG1, (errmsg("replication lag between %s:%d and %s:%d is %ld bytes",
primaryWorkerNode->workerName, primaryWorkerNode->workerPort,
replicaWorkerNode->workerName, replicaWorkerNode->workerPort,
lag_bytes)));
@@ -244,9 +244,9 @@ EnsureValidCloneMode(WorkerNode *primaryWorkerNode,
freeaddrinfo(result);
}
ereport(NOTICE, (errmsg("checking replication status of clone node %s:%d",
cloneHostname,
clonePort)));
/* Build query to check if clone is connected and get its sync state */
@ -278,6 +278,11 @@ EnsureValidCloneMode(WorkerNode *primaryWorkerNode,
cloneHostname);
}
ereport(DEBUG2, (errmsg("sending replication status check query: %s to primary %s:%d",
replicationCheckQuery->data,
primaryWorkerNode->workerName,
primaryWorkerNode->workerPort)));
int replicationCheckResultCode = SendRemoteCommand(primaryConnection,
replicationCheckQuery->data);
if (replicationCheckResultCode == 0)
@ -305,8 +310,9 @@ EnsureValidCloneMode(WorkerNode *primaryWorkerNode,
primaryWorkerNode->workerName, primaryWorkerNode->
workerPort),
errdetail(
"The clone must be actively replicating from the specified primary node. "
"Check that the clone is running and properly configured for replication.")));
"The clone must be actively replicating from the specified primary node"),
errhint(
"Verify the clone is running and properly configured for replication")));
}
/* Check if clone is synchronous */
@ -322,9 +328,9 @@ EnsureValidCloneMode(WorkerNode *primaryWorkerNode,
"cannot %s clone %s:%d as it is configured as a synchronous replica",
operation, cloneHostname, clonePort),
errdetail(
"Promoting a synchronous clone can cause data consistency issues. "
"Please configure it as an asynchronous replica first.")))
;
"Promoting a synchronous clone can cause data consistency issues"),
errhint(
"Configure clone as an asynchronous replica")));
}
}

View File

@ -73,34 +73,8 @@ PG_FUNCTION_INFO_V1(update_distributed_table_colocation);
Datum
mark_tables_colocated(PG_FUNCTION_ARGS)
{
CheckCitusVersion(ERROR);
EnsureCoordinator();
Oid sourceRelationId = PG_GETARG_OID(0);
ArrayType *relationIdArrayObject = PG_GETARG_ARRAYTYPE_P(1);
int relationCount = ArrayObjectCount(relationIdArrayObject);
if (relationCount < 1)
{
ereport(ERROR, (errmsg("at least one target table is required for this "
"operation")));
}
EnsureTableOwner(sourceRelationId);
Datum *relationIdDatumArray = DeconstructArrayObject(relationIdArrayObject);
for (int relationIndex = 0; relationIndex < relationCount; relationIndex++)
{
Oid nextRelationOid = DatumGetObjectId(relationIdDatumArray[relationIndex]);
/* we require that the user either owns all tables or is superuser */
EnsureTableOwner(nextRelationOid);
MarkTablesColocated(sourceRelationId, nextRelationOid);
}
PG_RETURN_VOID();
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("this function is deprecated and is no longer used")));
}
@ -1306,7 +1280,7 @@ ColocatedShardIdInRelation(Oid relationId, int shardIndex)
/*
* DeleteColocationGroupIfNoTablesBelong function deletes given co-location group if there
* is no relation in that co-location group. A co-location group may become empty after
* mark_tables_colocated or upgrade_reference_table UDF calls. In that case we need to
* update_distributed_table_colocation UDF calls. In that case we need to
* remove empty co-location group to prevent orphaned co-location groups.
*/
void

View File

@ -1040,12 +1040,30 @@ MaintenanceDaemonShmemExit(int code, Datum arg)
if (myDbData != NULL)
{
/*
* Confirm that we are still the registered maintenance daemon before exiting.
* Once the maintenance daemon fails (e.g., due to an error in the main loop),
* Postgres tries to restart the failed daemon while Citus attempts to start
* a new one. In that case, the one started by Citus ends up here.
*
* As the maintenance daemon that Citus tried to start, we might see the entry
* for the daemon restarted by Postgres if the system was so slow that it
* took a long time for us to be re-scheduled to call MaintenanceDaemonShmemExit(),
* e.g., under valgrind testing.
*
* In that case, we should unregister ourselves only if we are still the
* registered maintenance daemon.
*/
Assert(myDbData->workerPid == MyProcPid);
myDbData->daemonStarted = false;
myDbData->workerPid = 0;
if (myDbData->workerPid == MyProcPid)
{
myDbData->daemonStarted = false;
myDbData->workerPid = 0;
}
else
{
ereport(LOG, (errmsg(
"maintenance daemon for database %u has already been replaced by "
"Postgres, skipping unregistration of this maintenance daemon",
databaseOid)));
}
}
LWLockRelease(&MaintenanceDaemonControl->lock);
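The check-before-clear pattern the comment above describes can be sketched as a small Python model (a hedged illustration with made-up names, not Citus source): two daemons may race to unregister the same shared-memory slot, and only the one whose pid still matches the registered worker pid may clear it.

```python
import threading

class DaemonSlot:
    """Simplified stand-in for the per-database shared-memory entry."""
    def __init__(self):
        self.lock = threading.Lock()
        self.daemon_started = False
        self.worker_pid = 0

def on_exit(slot, my_pid):
    """Unregister only if we are still the registered maintenance daemon."""
    with slot.lock:
        if slot.worker_pid == my_pid:
            slot.daemon_started = False
            slot.worker_pid = 0
            return True   # we were still the owner and unregistered
        return False      # a replacement daemon already owns the slot

slot = DaemonSlot()
slot.daemon_started = True
slot.worker_pid = 101                # Postgres already restarted a daemon with pid 101
assert on_exit(slot, 202) is False   # stale daemon must leave the slot alone
assert on_exit(slot, 101) is True    # current daemon clears the slot
```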

View File

@ -64,6 +64,14 @@
/*global variables for citus_columnar fake version Y */
#define CITUS_COLUMNAR_INTERNAL_VERSION "11.1-0"
/*
* We can't rely on RelidByRelfilenumber for temp tables since PG18 (the change
* was backpatched through PG13), so we use this macro to resolve the relid from
* the open relation for temp relations. Otherwise RelidByRelfilenumber should be used.
*/
#define RelationPrecomputeOid(a) (RelationUsesLocalBuffers(a) ? RelationGetRelid(a) : \
InvalidOid)
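The macro's decision rule, restated as a hedged Python sketch (the names `relation_precompute_oid` and `INVALID_OID` are illustrative, not Citus identifiers):

```python
INVALID_OID = 0  # stand-in for PostgreSQL's InvalidOid

# Temp relations use local buffers and cannot be resolved via
# RelidByRelfilenumber, so their relid is captured up front from the open
# relation; everything else stays InvalidOid and is resolved later.
def relation_precompute_oid(uses_local_buffers: bool, relid: int) -> int:
    return relid if uses_local_buffers else INVALID_OID

assert relation_precompute_oid(True, 16384) == 16384         # temp table
assert relation_precompute_oid(False, 16384) == INVALID_OID  # regular table
```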
/*
* ColumnarOptions holds the option values to be used when reading or writing
* a columnar table. To resolve these values, we first check foreign table's options,
@ -232,7 +240,7 @@ extern void columnar_init_gucs(void);
extern CompressionType ParseCompressionType(const char *compressionTypeString);
/* Function declarations for writing to a columnar table */
extern ColumnarWriteState * ColumnarBeginWrite(RelFileLocator relfilelocator,
extern ColumnarWriteState * ColumnarBeginWrite(Relation rel,
ColumnarOptions options,
TupleDesc tupleDescriptor);
extern uint64 ColumnarWriteRow(ColumnarWriteState *state, Datum *columnValues,
@ -287,21 +295,21 @@ extern PGDLLEXPORT bool ReadColumnarOptions(Oid regclass, ColumnarOptions *optio
extern PGDLLEXPORT bool IsColumnarTableAmTable(Oid relationId);
/* columnar_metadata_tables.c */
extern void DeleteMetadataRows(RelFileLocator relfilelocator);
extern void DeleteMetadataRows(Relation rel);
extern uint64 ColumnarMetadataNewStorageId(void);
extern uint64 GetHighestUsedAddress(RelFileLocator relfilelocator);
extern uint64 GetHighestUsedAddress(Relation rel);
extern EmptyStripeReservation * ReserveEmptyStripe(Relation rel, uint64 columnCount,
uint64 chunkGroupRowCount,
uint64 stripeRowCount);
extern StripeMetadata * CompleteStripeReservation(Relation rel, uint64 stripeId,
uint64 sizeBytes, uint64 rowCount,
uint64 chunkCount);
extern void SaveStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
extern void SaveStripeSkipList(Oid relid, RelFileLocator relfilelocator, uint64 stripe,
StripeSkipList *stripeSkipList,
TupleDesc tupleDescriptor);
extern void SaveChunkGroups(RelFileLocator relfilelocator, uint64 stripe,
extern void SaveChunkGroups(Oid relid, RelFileLocator relfilelocator, uint64 stripe,
List *chunkGroupRowCounts);
extern StripeSkipList * ReadStripeSkipList(RelFileLocator relfilelocator, uint64 stripe,
extern StripeSkipList * ReadStripeSkipList(Relation rel, uint64 stripe,
TupleDesc tupleDescriptor,
uint32 chunkCount,
Snapshot snapshot);
@ -317,6 +325,7 @@ extern uint64 StripeGetHighestRowNumber(StripeMetadata *stripeMetadata);
extern StripeMetadata * FindStripeWithHighestRowNumber(Relation relation,
Snapshot snapshot);
extern Datum columnar_relation_storageid(PG_FUNCTION_ARGS);
extern Oid ColumnarRelationId(Oid relid, RelFileLocator relfilelocator);
/* write_state_management.c */

View File

@ -61,7 +61,7 @@ typedef struct EmptyStripeReservation
uint64 stripeFirstRowNumber;
} EmptyStripeReservation;
extern List * StripesForRelfilelocator(RelFileLocator relfilelocator);
extern List * StripesForRelfilelocator(Relation rel);
extern void ColumnarStorageUpdateIfNeeded(Relation rel, bool isUpgrade);
extern List * ExtractColumnarRelOptions(List *inOptions, List **outColumnarOptions);
extern void SetColumnarRelOptions(RangeVar *rv, List *reloptions);

View File

@ -25,12 +25,4 @@
#define ExplainPropertyLong(qlabel, value, es) \
ExplainPropertyInteger(qlabel, NULL, value, es)
/* tuple-descriptor attributes moved in PostgreSQL 18: */
#if PG_VERSION_NUM >= PG_VERSION_18
#define Attr(tupdesc, colno) TupleDescAttr((tupdesc), (colno))
#else
#define Attr(tupdesc, colno) (&((tupdesc)->attrs[(colno)]))
#endif
#endif /* COLUMNAR_COMPAT_H */

View File

@ -25,7 +25,8 @@ typedef enum DistOpsValidationState
HasAtLeastOneValidObject,
HasNoneValidObject,
HasObjectWithInvalidOwnership,
NoAddressResolutionRequired
NoAddressResolutionRequired,
ShouldQualifyAfterLocalCreation
} DistOpsValidationState;
extern void SetLocalClientMinMessagesIfRunningPGTests(int

View File

@ -112,5 +112,6 @@ extern void UndistributeDisconnectedCitusLocalTables(void);
extern void NotifyUtilityHookConstraintDropped(void);
extern void ResetConstraintDropped(void);
extern void ExecuteDistributedDDLJob(DDLJob *ddlJob);
extern bool IsDroppedOrGenerated(Form_pg_attribute attr);
#endif /* MULTI_UTILITY_H */

View File

@ -119,6 +119,7 @@ typedef struct FastPathRestrictionContext
bool delayFastPathPlanning;
} FastPathRestrictionContext;
struct DistributedPlanningContext;
typedef struct PlannerRestrictionContext
{
RelationRestrictionContext *relationRestrictionContext;
@ -132,6 +133,18 @@ typedef struct PlannerRestrictionContext
*/
FastPathRestrictionContext *fastPathRestrictionContext;
MemoryContext memoryContext;
#if PG_VERSION_NUM >= PG_VERSION_18
/*
* Provides access to the distributed planning context from
* planner hooks called by Postgres. This lets Citus track
* changes made by Postgres to the query tree (such as
* expansion of virtual columns) and ensure they are reflected
* in subsequent distributed planning.
*/
struct DistributedPlanningContext *planContext;
#endif
} PlannerRestrictionContext;
typedef struct RelationShard

View File

@ -586,7 +586,8 @@ extern DistributedPlan * CreatePhysicalDistributedPlan(MultiTreeRoot *multiTree,
plannerRestrictionContext);
extern Task * CreateBasicTask(uint64 jobId, uint32 taskId, TaskType taskType,
char *queryString);
extern OpExpr * MakeOpExpressionExtended(Var *leftVar, Expr *rightArg,
int16 strategyNumber);
extern OpExpr * MakeOpExpression(Var *variable, int16 strategyNumber);
extern Node * WrapUngroupedVarsInAnyValueAggregate(Node *expression,
List *groupClauseList,

View File

@ -49,6 +49,6 @@ extern DeferredErrorMessage * DeferErrorIfCannotPushdownSubquery(Query *subquery
extern DeferredErrorMessage * DeferErrorIfUnsupportedUnionQuery(Query *queryTree);
extern bool IsJsonTableRTE(RangeTblEntry *rte);
extern bool IsOuterJoinExpr(Node *node);
extern void FlattenGroupExprs(Query *query);
#endif /* QUERY_PUSHDOWN_PLANNING_H */

View File

@ -22,6 +22,7 @@
#include "distributed/relation_restriction_equivalence.h"
extern bool EnableRecurringOuterJoinPushdown;
extern bool EnableOuterJoinsWithPseudoconstantQualsPrePG17;
typedef struct RecursivePlanningContextInternal RecursivePlanningContext;
typedef struct RangeTblEntryIndex

View File

@ -13,10 +13,6 @@
#include "pg_version_constants.h"
/* we need these for PG-18s PushActiveSnapshot/PopActiveSnapshot APIs */
#include "access/xact.h"
#include "utils/snapmgr.h"
#if PG_VERSION_NUM >= PG_VERSION_18
#define create_foreignscan_path_compat(a, b, c, d, e, f, g, h, i, j, k) \
create_foreignscan_path( \
@ -40,14 +36,6 @@
/* PG-18 unified row-compare operator codes under COMPARE_* */
#define ROWCOMPARE_NE COMPARE_NE
#define CATALOG_INSERT_WITH_SNAPSHOT(rel, tup) \
do { \
Snapshot __snap = GetTransactionSnapshot(); \
PushActiveSnapshot(__snap); \
CatalogTupleInsert((rel), (tup)); \
PopActiveSnapshot(); \
} while (0)
#elif PG_VERSION_NUM >= PG_VERSION_17
#define create_foreignscan_path_compat(a, b, c, d, e, f, g, h, i, j, k) \
create_foreignscan_path( \
@ -56,9 +44,6 @@
(g), (h), (i), (j), (k) \
)
/* no-op wrapper on older PGs */
#define CATALOG_INSERT_WITH_SNAPSHOT(rel, tup) \
CatalogTupleInsert((rel), (tup))
#endif
#if PG_VERSION_NUM >= PG_VERSION_17
@ -469,10 +454,6 @@ getStxstattarget_compat(HeapTuple tup)
k) create_foreignscan_path(a, b, c, d, e, f, g, h, \
i, k)
/* no-op wrapper on older PGs */
#define CATALOG_INSERT_WITH_SNAPSHOT(rel, tup) \
CatalogTupleInsert((rel), (tup))
#define getProcNo_compat(a) (a->pgprocno)
#define getLxid_compat(a) (a->lxid)

View File

@ -320,6 +320,21 @@ check-citus-upgrade-mixed-local: all clean-upgrade-artifacts
--citus-old-version=$(citus-old-version) \
--mixed
check-citus-minor-upgrade: all
$(citus_upgrade_check) \
--bindir=$(bindir) \
--pgxsdir=$(pgxsdir) \
--citus-pre-tar=$(citus-pre-tar) \
--citus-post-tar=$(citus-post-tar) \
--minor-upgrade
check-citus-minor-upgrade-local: all clean-upgrade-artifacts
$(citus_upgrade_check) \
--bindir=$(bindir) \
--pgxsdir=$(pgxsdir) \
--citus-old-version=$(citus-old-version) \
--minor-upgrade
clean-upgrade-artifacts:
rm -rf $(citus_abs_srcdir)/tmp_citus_upgrade/ /tmp/citus_copy/

View File

@ -297,7 +297,13 @@ s/(NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA p
s/, password_required=false//g
s/provide the file or change sslmode/provide the file, use the system's trusted roots with sslrootcert=system, or change sslmode/g
s/(:varcollid [0-9]+) :varlevelsup 0/\1 :varnullingrels (b) :varlevelsup 0/g
#pg18 varreturningtype - change needed for PG16, PG17 tests
s/(:varnullingrels \(b\) :varlevelsup 0) (:varnosyn 1)/\1 :varreturningtype 0 \2/g
#pg16 varnullingrels and pg18 varreturningtype - change needed for PG15 tests
s/(:varcollid [0-9]+) :varlevelsup 0/\1 :varnullingrels (b) :varlevelsup 0 :varreturningtype 0/g
s/table_name_for_view\.([_a-z0-9]+)(,| |$)/\1\2/g
s/permission denied to terminate process/must be a superuser to terminate superuser process/g
s/permission denied to cancel query/must be a superuser to cancel superuser query/g
@ -326,16 +332,38 @@ s/\| CHECK ([a-zA-Z])(.*)/| CHECK \(\1\2\)/g
/DEBUG: drop auto-cascades to type [a-zA-Z_]*.pg_temp_[0-9]*/d
# pg18 change: strip trailing “.00” (or “.0…”) from actual rows counts
s/(actual rows=[0-9]+)\.[0-9]+/\1/g
# --- PG18 Actual Rows normalization ---
# New in PG18: Actual Rows in EXPLAIN output can be fractional; round them:
# 1) 0.50 (and 0.5, 0.5000...) -> 0
s/(actual[[:space:]]*rows[[:space:]]*[=:][[:space:]]*)0\.50*/\10/gI
s/(actual[^)]*rows[[:space:]]*=[[:space:]]*)0\.50*/\10/gI
# 2) 0.51+ -> 1
s/(actual[[:space:]]*rows[[:space:]]*[=:][[:space:]]*)0\.(5[1-9][0-9]*|[6-9][0-9]*)/\11/gI
s/(actual[^)]*rows[[:space:]]*=[[:space:]]*)0\.(5[1-9][0-9]*|[6-9][0-9]*)/\11/gI
# 3) Strip trivial trailing ".0..." (6.00 -> 6) [keep your existing cross-format rules]
s/(actual[[:space:]]*rows[[:space:]]*[=:][[:space:]]*)([0-9]+)\.0+/\1\2/gI
s/(actual[^)]*rows[[:space:]]*=[[:space:]]*)([0-9]+)\.0+/\1\2/gI
# 4) YAML/XML/JSON: strip trailing ".0..."
s/(Actual[[:space:]]+Rows:[[:space:]]*[0-9]+)\.0+/\1/gI
s/(<Actual-Rows>[0-9]+)\.0+(<\/Actual-Rows>)/\1\2/g
s/("Actual[[:space:]]+Rows":[[:space:]]*[0-9]+)\.0+/\1/gI
# 5) Placeholder cleanups (kept from existing rules; harmless if unused)
# JSON placeholder cleanup: '"Actual Rows": N.N' -> N
s/("Actual[[:space:]]+Rows":[[:space:]]*)N\.N/\1N/gI
# Text EXPLAIN collapse: "rows=N.N" -> "rows=N"
s/(rows[[:space:]]*=[[:space:]]*)N\.N/\1N/gI
# YAML placeholder: "Actual Rows: N.N" -> "Actual Rows: N"
s/(Actual[[:space:]]+Rows:[[:space:]]*)N\.N/\1N/gI
# --- end PG18 Actual Rows normalization ---
# pg18 “Disabled” change start
# ignore any “Disabled:” lines in test output
/^\s*Disabled:/d
# ignore any JSON-style Disabled field
/^\s*"Disabled":/d
# ignore XML <Disabled>true</Disabled> or <Disabled>false</Disabled>
/^\s*<Disabled>.*<\/Disabled>/d
# pg18 “Disabled” change end
@ -344,3 +372,34 @@ s/(actual rows=[0-9]+)\.[0-9]+/\1/g
s/^([ \t]*)List of tables$/\1List of relations/g
s/^([ \t]*)List of indexes$/\1List of relations/g
s/^([ \t]*)List of sequences$/\1List of relations/g
# --- PG18 FK wording -> legacy generic form ---
# e.g., "violates RESTRICT setting of foreign key constraint" -> "violates foreign key constraint"
s/violates RESTRICT setting of foreign key constraint/violates foreign key constraint/g
# DETAIL line changed "is referenced" -> old "is still referenced"
s/\<is referenced from table\>/is still referenced from table/g
# pg18 extension_control_path GUC debugs
# ignore any "find_in_path:" lines in test output
/DEBUG: find_in_path: trying .*/d
# EXPLAIN (PG18+): hide Materialize storage instrumentation
# this rule can be removed when PG18 is the minimum supported version
/^[ \t]*Storage:[ \t].*$/d
# PG18: drop 'subscription "<name>"' prefix
# this rule can be removed when PG18 is the minimum supported version
s/^[[:space:]]*ERROR:[[:space:]]+subscription "[^"]+" could not connect to the publisher:[[:space:]]*/ERROR: could not connect to the publisher: /I
# PG18: drop verbose 'connection to server … failed:' preamble
s/^[[:space:]]*ERROR:[[:space:]]+could not connect to the publisher:[[:space:]]*connection to server .* failed:[[:space:]]*/ERROR: could not connect to the publisher: /I
# PG18: replace named window refs like "OVER w1" with neutral "OVER (?)"
# this rule can be removed when PG18 is the minimum supported version
# only on Sort Key / Group Key / Output lines
# Sort Key
/^[[:space:]]*Sort Key:/ s/(OVER[[:space:]]+)w[0-9]+/\1(?)/g
# Group Key
/^[[:space:]]*Group Key:/ s/(OVER[[:space:]]+)w[0-9]+/\1(?)/g
# Output
/^[[:space:]]*Output:/ s/(OVER[[:space:]]+)w[0-9]+/\1(?)/g
# end PG18 window ref normalization
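The sed rules above can be exercised in isolation; here is a hedged Python mirror of the "strip trivial trailing .0..." rule (rule 3), with a word boundary added for safety so digits after the zeros (e.g. `6.01`) are left alone:

```python
import re

# Mirrors the sed rule: "actual rows=6.00" -> "actual rows=6".
PATTERN = re.compile(r"(actual\s*rows\s*[=:]\s*)(\d+)\.0+\b", re.IGNORECASE)

def strip_trailing_zeros(line: str) -> str:
    # Keep the prefix and integer part, drop the all-zero fraction.
    return PATTERN.sub(r"\g<1>\g<2>", line)

assert strip_trailing_zeros("Sort (actual rows=6.00 loops=1)") == \
    "Sort (actual rows=6 loops=1)"
assert strip_trailing_zeros("Actual Rows: 10") == "Actual Rows: 10"  # unchanged
```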

View File

@ -137,6 +137,7 @@ def initialize_db_for_cluster(pg_path, rel_data_path, settings, node_names):
# --allow-group-access is used to ensure we set permissions on
# private keys correctly
"--allow-group-access",
"--data-checksums",
"--encoding",
"UTF8",
"--locale",

View File

@ -49,7 +49,7 @@ CITUS_ARBITRARY_TEST_DIR = "./tmp_citus_test"
MASTER = "master"
# This should be updated when citus version changes
MASTER_VERSION = "13.2"
MASTER_VERSION = "14.0"
HOME = expanduser("~")
@ -194,6 +194,7 @@ class CitusUpgradeConfig(CitusBaseClusterConfig):
self.new_settings = {"citus.enable_version_checks": "false"}
self.user = SUPER_USER_NAME
self.mixed_mode = arguments["--mixed"]
self.minor_upgrade = arguments.get("--minor-upgrade", False)
self.fixed_port = 57635

View File

@ -262,6 +262,18 @@ DEPS = {
"multi_subquery_in_where_reference_clause": TestDeps(
"minimal_schedule", ["multi_behavioral_analytics_create_table"]
),
"subquery_in_where": TestDeps(
"minimal_schedule", ["multi_behavioral_analytics_create_table"]
),
"subquery_in_targetlist": TestDeps(
"minimal_schedule", ["multi_behavioral_analytics_create_table"]
),
"window_functions": TestDeps(
"minimal_schedule", ["multi_behavioral_analytics_create_table"]
),
"multi_subquery_window_functions": TestDeps(
"minimal_schedule", ["multi_behavioral_analytics_create_table"]
),
}

View File

@ -11,6 +11,7 @@ Options:
--pgxsdir=<pgxsdir> Path to the PGXS directory (ex: ~/.pgenv/src/postgresql-11.3)
--citus-old-version=<citus-old-version> Citus old version for local run (ex: v8.0.0)
--mixed Run the verification phase with one node not upgraded.
--minor-upgrade Use minor version upgrade test schedules instead of major version schedules.
"""
import multiprocessing
@ -55,7 +56,14 @@ def run_citus_upgrade_tests(config, before_upgrade_schedule, after_upgrade_sched
)
report_initial_version(config)
# Store the pre-upgrade GUCs and UDFs for minor version upgrades
pre_upgrade = None
if config.minor_upgrade:
pre_upgrade = get_citus_catalog_info(config)
run_test_on_coordinator(config, before_upgrade_schedule)
remove_citus(config.pre_tar_path)
if after_upgrade_schedule is None:
return
@ -66,9 +74,226 @@ def run_citus_upgrade_tests(config, before_upgrade_schedule, after_upgrade_sched
run_alter_citus(config.bindir, config.mixed_mode, config)
verify_upgrade(config, config.mixed_mode, config.node_name_to_ports.values())
# For minor version upgrades, verify that GUCs and UDFs do not have breaking changes
breaking_changes = []
if config.minor_upgrade:
breaking_changes = compare_citus_catalog_info(config, pre_upgrade)
run_test_on_coordinator(config, after_upgrade_schedule)
remove_citus(config.post_tar_path)
# Fail the test if there are any breaking changes
if breaking_changes:
common.eprint("\n=== BREAKING CHANGES DETECTED ===")
for change in breaking_changes:
common.eprint(f" - {change}")
common.eprint("==================================\n")
sys.exit(1)
def get_citus_catalog_info(config):
results = {}
# Store GUCs
guc_results = utils.psql_capture(
config.bindir,
config.coordinator_port(),
"SELECT name, boot_val FROM pg_settings WHERE name LIKE 'citus.%' ORDER BY name;",
)
guc_lines = guc_results.decode("utf-8").strip().split("\n")
results["gucs"] = {}
for line in guc_lines[2:]: # Skip header lines
name, boot_val = line.split("|")
results["gucs"][name.strip()] = boot_val.strip()
# Store UDFs
udf_results = utils.psql_capture(
config.bindir,
config.coordinator_port(),
"""
SELECT
n.nspname AS schema_name,
p.proname AS function_name,
pg_get_function_arguments(p.oid) AS full_args,
pg_get_function_result(p.oid) AS return_type
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
JOIN pg_depend d ON d.objid = p.oid
JOIN pg_extension e ON e.oid = d.refobjid
WHERE e.extname = 'citus'
AND d.deptype = 'e'
ORDER BY schema_name, function_name, full_args;
""",
)
udf_lines = udf_results.decode("utf-8").strip().split("\n")
results["udfs"] = {}
for line in udf_lines[2:]: # Skip header lines
schema_name, function_name, full_args, return_type = line.split("|")
key = (schema_name.strip(), function_name.strip())
signature = (full_args.strip(), return_type.strip())
if key not in results["udfs"]:
results["udfs"][key] = set()
results["udfs"][key].add(signature)
# Store types, exclude composite types (t.typrelid = 0) and
# exclude auto-created array types
# (t.typname LIKE '\_%' AND t.typelem <> 0)
type_results = utils.psql_capture(
config.bindir,
config.coordinator_port(),
"""
SELECT n.nspname, t.typname, t.typtype
FROM pg_type t
JOIN pg_depend d ON d.objid = t.oid
JOIN pg_extension e ON e.oid = d.refobjid
JOIN pg_namespace n ON n.oid = t.typnamespace
WHERE e.extname = 'citus'
AND t.typrelid = 0
AND NOT (t.typname LIKE '\\_%%' AND t.typelem <> 0)
ORDER BY n.nspname, t.typname;
""",
)
type_lines = type_results.decode("utf-8").strip().split("\n")
results["types"] = {}
for line in type_lines[2:]: # Skip header lines
nspname, typname, typtype = line.split("|")
key = (nspname.strip(), typname.strip())
results["types"][key] = typtype.strip()
# Store tables and views
table_results = utils.psql_capture(
config.bindir,
config.coordinator_port(),
"""
SELECT n.nspname, c.relname, a.attname, t.typname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid
JOIN pg_depend d ON d.objid = c.oid
JOIN pg_extension e ON e.oid = d.refobjid
WHERE e.extname = 'citus'
AND a.attnum > 0
AND NOT a.attisdropped
ORDER BY n.nspname, c.relname, a.attname;
""",
)
table_lines = table_results.decode("utf-8").strip().split("\n")
results["tables"] = {}
for line in table_lines[2:]: # Skip header lines
nspname, relname, attname, typname = line.split("|")
key = (nspname.strip(), relname.strip())
if key not in results["tables"]:
results["tables"][key] = {}
results["tables"][key][attname.strip()] = typname.strip()
return results
def compare_citus_catalog_info(config, pre_upgrade):
post_upgrade = get_citus_catalog_info(config)
breaking_changes = []
# Compare GUCs
for name, boot_val in pre_upgrade["gucs"].items():
if name not in post_upgrade["gucs"]:
breaking_changes.append(f"GUC {name} was removed")
elif post_upgrade["gucs"][name] != boot_val and name != "citus.version":
breaking_changes.append(
f"The default value of GUC {name} was changed from {boot_val} to {post_upgrade['gucs'][name]}"
)
# Compare UDFs - check if any pre-upgrade signatures were removed
for (schema_name, function_name), pre_signatures in pre_upgrade["udfs"].items():
if (schema_name, function_name) not in post_upgrade["udfs"]:
breaking_changes.append(
f"UDF {schema_name}.{function_name} was completely removed"
)
else:
post_signatures = post_upgrade["udfs"][(schema_name, function_name)]
removed_signatures = pre_signatures - post_signatures
if removed_signatures:
for full_args, return_type in removed_signatures:
if not find_compatible_udf_signature(
full_args, return_type, post_signatures
):
breaking_changes.append(
f"UDF signature removed: {schema_name}.{function_name}({full_args}) RETURNS {return_type}"
)
# Compare Types - check if any pre-upgrade types were removed or changed
for (nspname, typname), typtype in pre_upgrade["types"].items():
if (nspname, typname) not in post_upgrade["types"]:
breaking_changes.append(f"Type {nspname}.{typname} was removed")
elif post_upgrade["types"][(nspname, typname)] != typtype:
breaking_changes.append(
f"Type {nspname}.{typname} changed type from {typtype} to {post_upgrade['types'][(nspname, typname)]}"
)
# Compare tables / views - check if any pre-upgrade tables or columns were removed or changed
for (nspname, relname), columns in pre_upgrade["tables"].items():
if (nspname, relname) not in post_upgrade["tables"]:
breaking_changes.append(f"Table/view {nspname}.{relname} was removed")
else:
post_columns = post_upgrade["tables"][(nspname, relname)]
for col_name, col_type in columns.items():
if col_name not in post_columns:
breaking_changes.append(
f"Column {col_name} in table/view {nspname}.{relname} was removed"
)
elif post_columns[col_name] != col_type:
breaking_changes.append(
f"Column {col_name} in table/view {nspname}.{relname} changed type from {col_type} to {post_columns[col_name]}"
)
return breaking_changes
def find_compatible_udf_signature(full_args, return_type, post_signatures):
pre_args_list = [arg.strip() for arg in full_args.split(",") if arg.strip()]
for post_full_args, post_return_type in post_signatures:
if post_return_type == return_type:
post_args_list = [
arg.strip() for arg in post_full_args.split(",") if arg.strip()
]
""" Here check if the function signatures are compatible, they are compatible if:
post_args_list has all the arguments of pre_args_list in the same order, but can have
additional arguments with default values """
pre_index = 0
post_index = 0
compatible = True
while pre_index < len(pre_args_list) and post_index < len(post_args_list):
if pre_args_list[pre_index] == post_args_list[post_index]:
pre_index += 1
else:
# Check if the argument in post_args_list has a default value
if "default" not in post_args_list[post_index].lower():
compatible = False
break
post_index += 1
if pre_index < len(pre_args_list):
compatible = False
continue
while post_index < len(post_args_list):
if "default" not in post_args_list[post_index].lower():
compatible = False
break
post_index += 1
if compatible:
return True
return False
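The compatibility rule implemented above can be restated as a standalone sketch (a simplified, hedged reimplementation for illustration, with made-up argument names; it is not the harness function itself):

```python
def signature_compatible(pre_args, post_args):
    # A post-upgrade signature is compatible when every pre-upgrade argument
    # survives in order, and any new arguments carry DEFAULT values.
    pre_i = 0
    for post_arg in post_args:
        if pre_i < len(pre_args) and post_arg == pre_args[pre_i]:
            pre_i += 1
        elif "default" not in post_arg.lower():
            return False           # a new required argument breaks existing callers
    return pre_i == len(pre_args)  # every original argument must still exist

assert signature_compatible(
    ["shard_id bigint"],
    ["shard_id bigint", "force boolean DEFAULT false"])     # extra default arg: OK
assert not signature_compatible(
    ["shard_id bigint"], ["shard_id bigint", "mode text"])  # new required arg
assert not signature_compatible(
    ["shard_id bigint", "mode text"], ["shard_id bigint"])  # dropped arg
```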
def install_citus(tar_path):
if tar_path:

View File

@ -338,7 +338,9 @@ SELECT workers.result AS worker_password, pg_authid.rolpassword AS coord_passwor
|
(2 rows)
SET client_min_messages TO ERROR;
ALTER ROLE new_role PASSWORD 'new_password';
RESET client_min_messages;
SELECT workers.result AS worker_password, pg_authid.rolpassword AS coord_password, workers.result = pg_authid.rolpassword AS password_is_same FROM run_command_on_workers($$SELECT rolpassword FROM pg_authid WHERE rolname = 'new_role'$$) workers, pg_authid WHERE pg_authid.rolname = 'new_role';
worker_password | coord_password | password_is_same
---------------------------------------------------------------------

View File

@ -379,6 +379,7 @@ ORDER BY indexname;
SELECT conname FROM pg_constraint
WHERE conrelid = 'heap_\''tbl'::regclass
AND contype <> 'n'
ORDER BY conname;
conname
---------------------------------------------------------------------
@ -416,6 +417,7 @@ ORDER BY indexname;
SELECT conname FROM pg_constraint
WHERE conrelid = 'heap_\''tbl'::regclass
AND contype <> 'n'
ORDER BY conname;
conname
---------------------------------------------------------------------

View File

@ -10,7 +10,7 @@ SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'products';
WHERE rel.relname = 'products' AND con.contype <> 'n';
conname
---------------------------------------------------------------------
products_pkey
@ -27,7 +27,7 @@ SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'products_ref';
WHERE rel.relname = 'products_ref' AND con.contype <> 'n';
conname
---------------------------------------------------------------------
products_ref_pkey2
@ -41,7 +41,7 @@ SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname LIKE 'very%';
WHERE rel.relname LIKE 'very%' AND con.contype <> 'n';
conname
---------------------------------------------------------------------
verylonglonglonglonglonglonglonglonglonglonglonglonglonglo_pkey
@ -55,7 +55,7 @@ SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'dist_partitioned_table';
WHERE rel.relname = 'dist_partitioned_table' AND con.contype <> 'n';
conname
---------------------------------------------------------------------
dist_partitioned_table_pkey
@ -68,7 +68,7 @@ SELECT con.conname
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = connamespace
WHERE rel.relname = 'citus_local_table';
WHERE rel.relname = 'citus_local_table' AND con.contype <> 'n';
conname
---------------------------------------------------------------------
citus_local_table_pkey

View File

@ -49,7 +49,7 @@ SELECT id, id, id, id, id,
10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10
(10 rows)
EXPLAIN (ANALYZE TRUE, TIMING FALSE, COSTS FALSE, SUMMARY FALSE) SELECT id FROM t ORDER BY 1;
EXPLAIN (ANALYZE TRUE, TIMING FALSE, COSTS FALSE, SUMMARY FALSE, BUFFERS OFF) SELECT id FROM t ORDER BY 1;
QUERY PLAN
---------------------------------------------------------------------
Sort (actual rows=10 loops=1)
@ -66,7 +66,7 @@ EXPLAIN (ANALYZE TRUE, TIMING FALSE, COSTS FALSE, SUMMARY FALSE) SELECT id FROM
(11 rows)
SET citus.explain_all_tasks TO ON;
EXPLAIN (ANALYZE TRUE, TIMING FALSE, COSTS FALSE, SUMMARY FALSE) SELECT id FROM t ORDER BY 1;
EXPLAIN (ANALYZE TRUE, TIMING FALSE, COSTS FALSE, SUMMARY FALSE, BUFFERS OFF) SELECT id FROM t ORDER BY 1;
QUERY PLAN
---------------------------------------------------------------------
Sort (actual rows=10 loops=1)

View File

@ -110,13 +110,100 @@ SELECT * FROM citus_stats
citus_aggregated_stats | citus_local_current_check | rlsuser | 0.142857 | {user1} | {0.714286}
(9 rows)
-- create a dist table with a couple million rows to simulate 3.729223e+06 in reltuples
-- this tests casting numbers like 3.729223e+06 to bigint
CREATE TABLE organizations (
org_id bigint,
id int
);
SELECT create_distributed_table('organizations', 'org_id');
create_distributed_table
---------------------------------------------------------------------
(1 row)
INSERT INTO organizations(org_id, id)
SELECT i, 1
FROM generate_series(1,2000000) i;
ANALYZE organizations;
SELECT attname, null_frac, most_common_vals, most_common_freqs FROM citus_stats
WHERE tablename IN ('organizations')
ORDER BY 1;
attname | null_frac | most_common_vals | most_common_freqs
---------------------------------------------------------------------
id | 0 | {1} | {1}
(1 row)
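The cast this test targets can be checked in isolation (a hedged aside, shown in Python for illustration):

```python
# reltuples is stored as a float and can surface in scientific notation;
# casting such a value to bigint drops the fraction, which is what the
# citus_stats test above exercises.
val = float("3.729223e+06")
assert int(val) == 3729223
```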
-- more real-world scenario:
-- outputs of pg_stats and citus_stats are NOT the same
-- but citus_stats does a fair estimation job
SELECT setseed(0.42);
setseed
---------------------------------------------------------------------
(1 row)
CREATE TABLE orders (id bigint , custid int, product text, quantity int);
INSERT INTO orders(id, custid, product, quantity)
SELECT i, (random() * 100)::int, 'product' || (random() * 10)::int, NULL
FROM generate_series(1,11) d(i);
-- frequent customer
INSERT INTO orders(id, custid, product, quantity)
SELECT 1200, 17, 'product' || (random() * 10)::int, NULL
FROM generate_series(1, 57) sk(i);
-- popular product
INSERT INTO orders(id, custid, product, quantity)
SELECT i+100 % 17, NULL, 'product3', (random() * 40)::int
FROM generate_series(1, 37) sk(i);
-- frequent customer
INSERT INTO orders(id, custid, product, quantity)
SELECT 1390, 76, 'product' || ((random() * 20)::int % 3), (random() * 30)::int
FROM generate_series(1, 33) sk(i);
ANALYZE orders;
-- pg_stats
SELECT schemaname, tablename, attname, null_frac, most_common_vals, most_common_freqs FROM pg_stats
WHERE tablename IN ('orders')
ORDER BY 3;
schemaname | tablename | attname | null_frac | most_common_vals | most_common_freqs
---------------------------------------------------------------------
citus_aggregated_stats | orders | custid | 0.268116 | {17,76} | {0.413043,0.23913}
citus_aggregated_stats | orders | id | 0 | {1200,1390} | {0.413043,0.23913}
citus_aggregated_stats | orders | product | 0 | {product3,product2,product0,product1,product9,product4,product8,product5,product10,product6} | {0.347826,0.15942,0.115942,0.108696,0.0652174,0.057971,0.0507246,0.0362319,0.0289855,0.0289855}
citus_aggregated_stats | orders | quantity | 0.492754 | {26,23,6,8,11,12,13,17,20,25,30,4,14,15,16,19,24,27,35,36,38,40} | {0.0362319,0.0289855,0.0217391,0.0217391,0.0217391,0.0217391,0.0217391,0.0217391,0.0217391,0.0217391,0.0217391,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928}
(4 rows)
SELECT create_distributed_table('orders', 'id');
NOTICE: Copying data from local table...
NOTICE: copying the data has completed
DETAIL: The local data in the table is no longer visible, but is still on disk.
HINT: To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$citus_aggregated_stats.orders$$)
create_distributed_table
---------------------------------------------------------------------
(1 row)
ANALYZE orders;
-- citus_stats
SELECT schemaname, tablename, attname, null_frac, most_common_vals, most_common_freqs FROM citus_stats
WHERE tablename IN ('orders')
ORDER BY 3;
schemaname | tablename | attname | null_frac | most_common_vals | most_common_freqs
---------------------------------------------------------------------
citus_aggregated_stats | orders | custid | 0.268116 | {17,76} | {0.413043,0.23913}
citus_aggregated_stats | orders | id | 0 | {1200,1390} | {0.413043,0.23913}
citus_aggregated_stats | orders | product | 0 | {product3,product2,product0,product1,product9,product4,product8,product5,product10,product6} | {0.347826,0.15942,0.115942,0.108696,0.0652174,0.057971,0.0507246,0.0362319,0.0289855,0.0289855}
citus_aggregated_stats | orders | quantity | 0.492754 | {26,13,17,20,23,8,11,12,14,16,19,24,25,27,30,35,38,40,6} | {0.0362319,0.0217391,0.0217391,0.0217391,0.0217391,0.0217391,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928,0.0144928}
(4 rows)
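The `most_common_freqs` and `null_frac` values above are plain ratios of value count to total row count. A sketch of the arithmetic, assuming the four `INSERT ... generate_series` calls above contribute 11 + 57 + 37 + 33 = 138 rows and the seeded random draws in the first insert do not land on `custid` 17 or 76:

```python
# row counts from the four INSERT ... generate_series calls above
total_rows = 11 + 57 + 37 + 33          # 138

# custid = 17 is inserted 57 times, custid = 76 is inserted 33 times,
# and custid is NULL for the 37 "popular product" rows
freq_17 = 57 / total_rows
freq_76 = 33 / total_rows
null_frac_custid = 37 / total_rows

print(round(freq_17, 6))                # 0.413043
print(round(freq_76, 6))                # 0.23913
print(round(null_frac_custid, 6))       # 0.268116
```

These match the `custid` row in both the `pg_stats` and `citus_stats` outputs, which is the point of the test: the aggregated estimate reproduces the per-table statistics.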
RESET SESSION AUTHORIZATION;
DROP SCHEMA citus_aggregated_stats CASCADE;
NOTICE: drop cascades to 6 other objects
NOTICE: drop cascades to 8 other objects
DETAIL: drop cascades to table current_check
drop cascades to table dist_current_check
drop cascades to table ref_current_check
drop cascades to table citus_local_current_check_1870003
drop cascades to table ref_current_check_1870002
drop cascades to table citus_local_current_check
drop cascades to table organizations
drop cascades to table orders
DROP USER user1;

View File

@@ -635,7 +635,7 @@ FROM pg_dist_partition WHERE logicalrelid = 'citus_local_table_4'::regclass;
SELECT column_name_to_column('citus_local_table_4', 'a');
column_name_to_column
---------------------------------------------------------------------
{VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varnullingrels (b) :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location -1}
{VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varnullingrels (b) :varlevelsup 0 :varreturningtype 0 :varnosyn 1 :varattnosyn 1 :location -1}
(1 row)
SELECT master_update_shard_statistics(shardid)

View File

@@ -371,11 +371,11 @@ CREATE TABLE cas_1 (a INT UNIQUE);
CREATE TABLE cas_par (a INT UNIQUE) PARTITION BY RANGE(a);
CREATE TABLE cas_par_1 PARTITION OF cas_par FOR VALUES FROM (1) TO (4);
CREATE TABLE cas_par_2 PARTITION OF cas_par FOR VALUES FROM (5) TO (8);
ALTER TABLE cas_par_1 ADD CONSTRAINT fkey_cas_test_1 FOREIGN KEY (a) REFERENCES cas_1(a);
ALTER TABLE cas_par_1 ADD CONSTRAINT fkey_cas_test_first FOREIGN KEY (a) REFERENCES cas_1(a);
CREATE TABLE cas_par2 (a INT UNIQUE) PARTITION BY RANGE(a);
CREATE TABLE cas_par2_1 PARTITION OF cas_par2 FOR VALUES FROM (1) TO (4);
CREATE TABLE cas_par2_2 PARTITION OF cas_par2 FOR VALUES FROM (5) TO (8);
ALTER TABLE cas_par2_1 ADD CONSTRAINT fkey_cas_test_2 FOREIGN KEY (a) REFERENCES cas_1(a);
ALTER TABLE cas_par2_1 ADD CONSTRAINT fkey_cas_test_second FOREIGN KEY (a) REFERENCES cas_1(a);
CREATE TABLE cas_par3 (a INT UNIQUE) PARTITION BY RANGE(a);
CREATE TABLE cas_par3_1 PARTITION OF cas_par3 FOR VALUES FROM (1) TO (4);
CREATE TABLE cas_par3_2 PARTITION OF cas_par3 FOR VALUES FROM (5) TO (8);
@@ -390,11 +390,11 @@ ERROR: cannot cascade operation via foreign keys as partition table citus_local
SELECT citus_add_local_table_to_metadata('cas_par2', cascade_via_foreign_keys=>true);
ERROR: cannot cascade operation via foreign keys as partition table citus_local_tables_mx.cas_par2_1 involved in a foreign key relationship that is not inherited from its parent table
-- drop the foreign keys and establish them again using the parent table
ALTER TABLE cas_par_1 DROP CONSTRAINT fkey_cas_test_1;
ALTER TABLE cas_par2_1 DROP CONSTRAINT fkey_cas_test_2;
ALTER TABLE cas_par ADD CONSTRAINT fkey_cas_test_1 FOREIGN KEY (a) REFERENCES cas_1(a);
ALTER TABLE cas_par2 ADD CONSTRAINT fkey_cas_test_2 FOREIGN KEY (a) REFERENCES cas_1(a);
ALTER TABLE cas_par3 ADD CONSTRAINT fkey_cas_test_3 FOREIGN KEY (a) REFERENCES cas_par(a);
ALTER TABLE cas_par_1 DROP CONSTRAINT fkey_cas_test_first;
ALTER TABLE cas_par2_1 DROP CONSTRAINT fkey_cas_test_second;
ALTER TABLE cas_par ADD CONSTRAINT fkey_cas_test_first FOREIGN KEY (a) REFERENCES cas_1(a);
ALTER TABLE cas_par2 ADD CONSTRAINT fkey_cas_test_second FOREIGN KEY (a) REFERENCES cas_1(a);
ALTER TABLE cas_par3 ADD CONSTRAINT fkey_cas_test_third FOREIGN KEY (a) REFERENCES cas_par(a);
-- this should error out as cascade_via_foreign_keys is not set to true
SELECT citus_add_local_table_to_metadata('cas_par2');
ERROR: relation citus_local_tables_mx.cas_par2 is involved in a foreign key relationship with another table
@@ -414,27 +414,29 @@ select inhrelid::regclass from pg_inherits where inhparent='cas_par'::regclass o
(2 rows)
-- verify the fkeys + fkeys with shard ids are created
select conname from pg_constraint where conname like 'fkey_cas_test%' order by conname;
conname
select conname from pg_constraint
where conname like 'fkey_cas_test%' and conname not like '%_1' and conname not like '%_2'
order by conname;
conname
---------------------------------------------------------------------
fkey_cas_test_1
fkey_cas_test_1
fkey_cas_test_1
fkey_cas_test_1_1330008
fkey_cas_test_1_1330008
fkey_cas_test_1_1330008
fkey_cas_test_2
fkey_cas_test_2
fkey_cas_test_2
fkey_cas_test_2_1330006
fkey_cas_test_2_1330006
fkey_cas_test_2_1330006
fkey_cas_test_3
fkey_cas_test_3
fkey_cas_test_3
fkey_cas_test_3_1330013
fkey_cas_test_3_1330013
fkey_cas_test_3_1330013
fkey_cas_test_first
fkey_cas_test_first
fkey_cas_test_first
fkey_cas_test_first_1330008
fkey_cas_test_first_1330008
fkey_cas_test_first_1330008
fkey_cas_test_second
fkey_cas_test_second
fkey_cas_test_second
fkey_cas_test_second_1330006
fkey_cas_test_second_1330006
fkey_cas_test_second_1330006
fkey_cas_test_third
fkey_cas_test_third
fkey_cas_test_third
fkey_cas_test_third_1330013
fkey_cas_test_third_1330013
fkey_cas_test_third_1330013
(18 rows)
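The shard-level rows above (e.g. `fkey_cas_test_first_1330008`) show Citus's convention of suffixing an object name with the shard id. A hypothetical helper sketching that naming, ignoring the truncation Citus applies when the combined name would exceed PostgreSQL's identifier length limit:

```python
def shard_constraint_name(conname: str, shardid: int) -> str:
    # Citus names shard-level objects <name>_<shardid>, as seen in
    # fkey_cas_test_first_1330008 above; real Citus also truncates
    # long names to fit within NAMEDATALEN, which this sketch skips
    return f"{conname}_{shardid}"

print(shard_constraint_name("fkey_cas_test_first", 1330008))
# fkey_cas_test_first_1330008
```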
-- when all partitions are converted, there should be 40 tables and indexes
@@ -457,18 +459,20 @@ SELECT count(*) FROM pg_class WHERE relname LIKE 'cas\_%' AND relnamespace IN
(1 row)
-- verify that the shell foreign keys are created on the worker as well
select conname from pg_constraint where conname like 'fkey_cas_test%' order by conname;
conname
select conname from pg_constraint
where conname like 'fkey_cas_test%' and conname not like '%_1' and conname not like '%_2'
order by conname;
conname
---------------------------------------------------------------------
fkey_cas_test_1
fkey_cas_test_1
fkey_cas_test_1
fkey_cas_test_2
fkey_cas_test_2
fkey_cas_test_2
fkey_cas_test_3
fkey_cas_test_3
fkey_cas_test_3
fkey_cas_test_first
fkey_cas_test_first
fkey_cas_test_first
fkey_cas_test_second
fkey_cas_test_second
fkey_cas_test_second
fkey_cas_test_third
fkey_cas_test_third
fkey_cas_test_third
(9 rows)
\c - - - :master_port
@@ -494,18 +498,20 @@ select inhrelid::regclass from pg_inherits where inhparent='cas_par'::regclass o
(2 rows)
-- verify that the foreign keys with shard ids are gone, due to undistribution
select conname from pg_constraint where conname like 'fkey_cas_test%' order by conname;
conname
select conname from pg_constraint
where conname like 'fkey_cas_test%' and conname not like '%_1' and conname not like '%_2'
order by conname;
conname
---------------------------------------------------------------------
fkey_cas_test_1
fkey_cas_test_1
fkey_cas_test_1
fkey_cas_test_2
fkey_cas_test_2
fkey_cas_test_2
fkey_cas_test_3
fkey_cas_test_3
fkey_cas_test_3
fkey_cas_test_first
fkey_cas_test_first
fkey_cas_test_first
fkey_cas_test_second
fkey_cas_test_second
fkey_cas_test_second
fkey_cas_test_third
fkey_cas_test_third
fkey_cas_test_third
(9 rows)
-- add a non-inherited fkey and verify it fails when trying to convert
@@ -769,8 +775,8 @@ SELECT logicalrelid, partmethod, partkey FROM pg_dist_partition
ORDER BY logicalrelid;
logicalrelid | partmethod | partkey
---------------------------------------------------------------------
parent_dropped_col | h | {VAR :varno 1 :varattno 1 :vartype 1082 :vartypmod -1 :varcollid 0 :varnullingrels (b) :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location -1}
parent_dropped_col_2 | h | {VAR :varno 1 :varattno 5 :vartype 23 :vartypmod -1 :varcollid 0 :varnullingrels (b) :varlevelsup 0 :varnosyn 1 :varattnosyn 5 :location -1}
parent_dropped_col | h | {VAR :varno 1 :varattno 1 :vartype 1082 :vartypmod -1 :varcollid 0 :varnullingrels (b) :varlevelsup 0 :varreturningtype 0 :varnosyn 1 :varattnosyn 1 :location -1}
parent_dropped_col_2 | h | {VAR :varno 1 :varattno 5 :vartype 23 :vartypmod -1 :varcollid 0 :varnullingrels (b) :varlevelsup 0 :varreturningtype 0 :varnosyn 1 :varattnosyn 5 :location -1}
(2 rows)
-- some tests for view propagation on citus local tables

View File

@@ -160,32 +160,42 @@ SELECT pg_reload_conf();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8970002 | fkey_from_child_to_child_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_dist_8970002 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_parent_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_ref_8970002 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8970002 | sensors_2020_01_01_8970002_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_8970000 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8970000 | sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_news_8970003 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8970001 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_old_8970001 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8970002 | fkey_from_child_to_child_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_dist_8970002 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_parent_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_ref_8970002 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8970000 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_8970000 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8970003 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_news_8970003 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8970001 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_old_8970001 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
-- separating generated child FK constraints since PG18 changed their naming (3db61db4)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
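The split above uses complementary `LIKE` predicates, so the two result counts must reconcile with the single pre-PG18 query: 20 stable-named foreign keys plus 2 generated child constraints equals the 22 rows the old expected output listed. A sketch of the filter logic in Python, using constraint names taken from the pre-split output above:

```python
# generated child FK names from the pre-split expected output
generated = [
    "sensors_2020_01_01_8970002_measureid_eventdatetime_fkey",
    "sensors_8970000_measureid_eventdatetime_fkey",
]

def matches_generated_filter(name: str) -> bool:
    # mirrors: Constraint LIKE 'sensors%' OR Constraint LIKE '%to\_parent%\_1'
    # (the backslashes escape LIKE's _ wildcard, so the second pattern
    # means "contains to_parent and ends with _1")
    return name.startswith("sensors") or (
        "to_parent" in name and name.endswith("_1")
    )

# both generated child FKs are caught, matching the count of 2 above,
# and the totals reconcile with the old 22-row output
assert all(matches_generated_filter(n) for n in generated)
print(20 + len(generated))  # 22
```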
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -240,11 +250,22 @@ SELECT pg_reload_conf();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
---------------------------------------------------------------------
(0 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
0
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
---------------------------------------------------------------------
@@ -368,32 +389,41 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999004 | fkey_from_child_to_child_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_dist_8999004 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_parent_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_ref_8999004 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999004 | sensors_2020_01_01_8999004_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_8999000 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999000 | sensors_8999000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_news_8999006 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999002 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_old_8999002 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8999004 | fkey_from_child_to_child_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_dist_8999004 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_parent_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_ref_8999004 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999000 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_8999000 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999006 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_news_8999006 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999002 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_old_8999002 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -448,32 +478,41 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | sensors_2020_01_01_8999005_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | sensors_8999001_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -634,32 +673,41 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999104 | fkey_from_child_to_child_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_dist_8999104 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_parent_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_ref_8999104 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999104 | sensors_2020_01_01_8999104_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_8999100 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999100 | sensors_8999100_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_news_8999106 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999102 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_old_8999102 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8999104 | fkey_from_child_to_child_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_dist_8999104 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_parent_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_ref_8999104 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999100 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_8999100 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999106 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_news_8999106 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999102 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_old_8999102 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -714,54 +762,61 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | sensors_2020_01_01_8999005_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_child_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_dist_8999105 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_parent_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_ref_8999105 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | sensors_2020_01_01_8999105_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | sensors_8999001_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_8999101 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999101 | sensors_8999101_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999107 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_news_8999107 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999103 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_old_8999103 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(44 rows)
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_child_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_dist_8999105 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_parent_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_ref_8999105 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999101 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_8999101 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999107 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_news_8999107 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999103 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_old_8999103 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(40 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
4
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -152,32 +152,42 @@ SELECT pg_reload_conf();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8970002 | fkey_from_child_to_child_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_dist_8970002 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_parent_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_ref_8970002 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8970002 | sensors_2020_01_01_8970002_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_8970000 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8970000 | sensors_8970000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_news_8970003 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8970001 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_old_8970001 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8970002 | fkey_from_child_to_child_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_dist_8970002 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_parent_8970002 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_child_to_ref_8970002 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_2020_01_01_8970002 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8970000 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_8970000 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_8970000 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8970003 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_news_8970003 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_news_8970003 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8970001 | fkey_from_parent_to_child_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8970010(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_dist_8970000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8970008(measureid)
sensors_old_8970001 | fkey_from_parent_to_parent_8970000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8970009(eventdatetime, measureid)
sensors_old_8970001 | fkey_from_parent_to_ref_8970000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
-- separating generated child FK constraints since PG18 changed their naming (3db61db4)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -232,11 +242,22 @@ SELECT pg_reload_conf();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
---------------------------------------------------------------------
(0 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
0
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
---------------------------------------------------------------------
@@ -360,32 +381,41 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999004 | fkey_from_child_to_child_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_dist_8999004 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_parent_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_ref_8999004 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999004 | sensors_2020_01_01_8999004_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_8999000 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999000 | sensors_8999000_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_news_8999006 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999002 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_old_8999002 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8999004 | fkey_from_child_to_child_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_dist_8999004 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_parent_8999004 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_child_to_ref_8999004 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_2020_01_01_8999004 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999000 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_8999000 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_8999000 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999006 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_news_8999006 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_news_8999006 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999002 | fkey_from_parent_to_child_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999020(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_dist_8999000 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999016(measureid)
sensors_old_8999002 | fkey_from_parent_to_parent_8999000 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999018(eventdatetime, measureid)
sensors_old_8999002 | fkey_from_parent_to_ref_8999000 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -440,32 +470,41 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | sensors_2020_01_01_8999005_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | sensors_8999001_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -626,32 +665,41 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999104 | fkey_from_child_to_child_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_dist_8999104 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_parent_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_ref_8999104 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999104 | sensors_2020_01_01_8999104_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_8999100 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999100 | sensors_8999100_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_news_8999106 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999102 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_old_8999102 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(22 rows)
sensors_2020_01_01_8999104 | fkey_from_child_to_child_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_dist_8999104 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_parent_8999104 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_child_to_ref_8999104 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_2020_01_01_8999104 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999100 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_8999100 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_8999100 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999106 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_news_8999106 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_news_8999106 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999102 | fkey_from_parent_to_child_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999120(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_dist_8999100 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999116(measureid)
sensors_old_8999102 | fkey_from_parent_to_parent_8999100 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999118(eventdatetime, measureid)
sensors_old_8999102 | fkey_from_parent_to_ref_8999100 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(20 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
2
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef
@@ -706,54 +754,61 @@ SELECT public.wait_for_resource_cleanup();
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND fk."Constraint" NOT LIKE 'sensors%' AND fk."Constraint" NOT LIKE '%to\_parent%\_1'
ORDER BY 1, 2;
relname | Constraint | Definition
relname | Constraint | Definition
---------------------------------------------------------------------
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | sensors_2020_01_01_8999005_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_child_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_dist_8999105 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_parent_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_ref_8999105 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | sensors_2020_01_01_8999105_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | sensors_8999001_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_8999101 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999101 | sensors_8999101_measureid_eventdatetime_fkey | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999107 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_news_8999107 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999103 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_old_8999103 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(44 rows)
sensors_2020_01_01_8999005 | fkey_from_child_to_child_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_dist_8999005 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_parent_8999005 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_child_to_ref_8999005 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_2020_01_01_8999005 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_child_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_dist_8999105 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_parent_8999105 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_child_to_ref_8999105 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_2020_01_01_8999105 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999001 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_8999001 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_8999001 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_8999101 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_8999101 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_8999101 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999007 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_news_8999007 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_news_8999007 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_news_8999107 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_news_8999107 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_news_8999107 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999003 | fkey_from_parent_to_child_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999021(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_dist_8999001 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999017(measureid)
sensors_old_8999003 | fkey_from_parent_to_parent_8999001 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999019(eventdatetime, measureid)
sensors_old_8999003 | fkey_from_parent_to_ref_8999001 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
sensors_old_8999103 | fkey_from_parent_to_child_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_2020_01_01_8999121(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_dist_8999101 | FOREIGN KEY (measureid) REFERENCES colocated_dist_table_8999117(measureid)
sensors_old_8999103 | fkey_from_parent_to_parent_8999101 | FOREIGN KEY (eventdatetime, measureid) REFERENCES colocated_partitioned_table_8999119(eventdatetime, measureid)
sensors_old_8999103 | fkey_from_parent_to_ref_8999101 | FOREIGN KEY (measureid) REFERENCES reference_table_8970011(measureid)
(40 rows)
SELECT count(*) AS generated_child_fk_constraints
FROM pg_catalog.pg_class tbl
JOIN public.table_fkeys fk on tbl.oid = fk.relid
WHERE tbl.relname like '%_89%'
AND (fk."Constraint" LIKE 'sensors%' OR fk."Constraint" LIKE '%to\_parent%\_1');
generated_child_fk_constraints
---------------------------------------------------------------------
4
(1 row)
SELECT tablename, indexdef FROM pg_indexes WHERE tablename like '%_89%' ORDER BY 1,2;
tablename | indexdef


@@ -10,6 +10,8 @@
-- If chunks get filtered by columnar, less rows are passed to WHERE
-- clause, so this function should return a lower number.
--
CREATE SCHEMA columnar_chunk_filtering;
SET search_path TO columnar_chunk_filtering, public;
CREATE OR REPLACE FUNCTION filtered_row_count (query text) RETURNS bigint AS
$$
DECLARE
@@ -125,7 +127,7 @@ SELECT * FROM collation_chunk_filtering_test WHERE A > 'B';
CREATE TABLE simple_chunk_filtering(i int) USING COLUMNAR;
INSERT INTO simple_chunk_filtering SELECT generate_series(0,234567);
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM simple_chunk_filtering WHERE i > 123456;
QUERY PLAN
---------------------------------------------------------------------
@@ -138,7 +140,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
(6 rows)
SET columnar.enable_qual_pushdown = false;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM simple_chunk_filtering WHERE i > 123456;
QUERY PLAN
---------------------------------------------------------------------
@@ -153,7 +155,7 @@ SET columnar.enable_qual_pushdown TO DEFAULT;
TRUNCATE simple_chunk_filtering;
INSERT INTO simple_chunk_filtering SELECT generate_series(0,200000);
COPY (SELECT * FROM simple_chunk_filtering WHERE i > 180000) TO '/dev/null';
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM simple_chunk_filtering WHERE i > 180000;
QUERY PLAN
---------------------------------------------------------------------
@@ -168,7 +170,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
DROP TABLE simple_chunk_filtering;
CREATE TABLE multi_column_chunk_filtering(a int, b int) USING columnar;
INSERT INTO multi_column_chunk_filtering SELECT i,i+1 FROM generate_series(0,234567) i;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT count(*) FROM multi_column_chunk_filtering WHERE a > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -181,7 +183,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 5
(7 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT count(*) FROM multi_column_chunk_filtering WHERE a > 50000 AND b > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -197,7 +199,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
-- make next tests faster
TRUNCATE multi_column_chunk_filtering;
INSERT INTO multi_column_chunk_filtering SELECT generate_series(0,5);
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT b FROM multi_column_chunk_filtering WHERE a > 50000 AND b > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -208,7 +210,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 1
(5 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT b, a FROM multi_column_chunk_filtering WHERE b > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -220,7 +222,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 0
(6 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT FROM multi_column_chunk_filtering WHERE a > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -231,7 +233,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 1
(5 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT FROM multi_column_chunk_filtering;
QUERY PLAN
---------------------------------------------------------------------
@@ -242,7 +244,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
BEGIN;
ALTER TABLE multi_column_chunk_filtering DROP COLUMN a;
ALTER TABLE multi_column_chunk_filtering DROP COLUMN b;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM multi_column_chunk_filtering;
QUERY PLAN
---------------------------------------------------------------------
@@ -253,7 +255,7 @@ BEGIN;
ROLLBACK;
CREATE TABLE another_columnar_table(x int, y int) USING columnar;
INSERT INTO another_columnar_table SELECT generate_series(0,5);
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT a, y FROM multi_column_chunk_filtering, another_columnar_table WHERE x > 1;
QUERY PLAN
---------------------------------------------------------------------
@@ -364,7 +366,7 @@ set enable_mergejoin=false;
set enable_hashjoin=false;
set enable_material=false;
-- test different kinds of expressions
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, coltest WHERE
id1 = id AND x1 > 15000 AND x1::text > '000000' AND n1 % 10 = 0;
QUERY PLAN
@@ -391,7 +393,7 @@ SELECT * FROM r1, coltest WHERE
(3 rows)
-- test equivalence classes
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, r2, r3, r4, r5, r6, r7, coltest WHERE
id = id1 AND id1 = id2 AND id2 = id3 AND id3 = id4 AND
id4 = id5 AND id5 = id6 AND id6 = id7;
@@ -561,7 +563,7 @@ set columnar.max_custom_scan_paths to default;
set columnar.planner_debug_level to default;
-- test more complex parameterization
set columnar.planner_debug_level = 'notice';
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, r2, r3, coltest WHERE
id1 = id2 AND id2 = id3 AND id3 = id AND
n1 > x1 AND n2 > x2 AND n3 > x3;
@@ -613,7 +615,7 @@ SELECT * FROM r1, r2, r3, coltest WHERE
(3 rows)
-- test partitioning parameterization
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, coltest_part WHERE
id1 = id AND n1 > x1;
QUERY PLAN
@@ -680,7 +682,7 @@ DETAIL: unparameterized; 0 clauses pushed down
---------------------------------------------------------------------
(0 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM coltest c1 WHERE ceil(x1) > 4222;
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -832,7 +834,7 @@ BEGIN;
COMMIT;
SET columnar.max_custom_scan_paths TO 50;
SET columnar.qual_pushdown_correlation_threshold TO 0.0;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test WHERE a = 204356 or a = 104356 or a = 76556;
NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down
@@ -855,7 +857,7 @@ DETAIL: unparameterized; 1 clauses pushed down
180912
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test WHERE a = 194356 or a = 104356 or a = 76556;
NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down
@@ -878,7 +880,7 @@ DETAIL: unparameterized; 1 clauses pushed down
375268
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test WHERE a = 204356 or a > a*-1 + b;
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -894,7 +896,7 @@ DETAIL: unparameterized; 0 clauses pushed down
Columnar Projected Columns: a, b
(5 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a > 1000 and a < 10000) or (a > 20000 and a < 50000);
NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down
@@ -917,7 +919,7 @@ DETAIL: unparameterized; 1 clauses pushed down
1099459500
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a > random() and a < 2*a) or (a > 100);
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -949,7 +951,7 @@ DETAIL: unparameterized; 0 clauses pushed down
20000100000
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 200000-1010);
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -977,8 +979,9 @@ DETAIL: unparameterized; 1 clauses pushed down
(1 row)
SET hash_mem_multiplier = 1.0;
\pset footer off
SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where
(
a > random()
@@ -1015,8 +1018,8 @@ CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_form
-> Materialize (actual rows=100 loops=199)
-> Custom Scan (ColumnarScan) on pushdown_test pushdown_test_1 (actual rows=199 loops=1)
Columnar Projected Columns: a
(11 rows)
\pset footer on
RESET hash_mem_multiplier;
SELECT sum(a) FROM pushdown_test where
(
@@ -1043,7 +1046,7 @@ DETAIL: unparameterized; 1 clauses pushed down
create function stable_1(arg int) returns int language plpgsql STRICT IMMUTABLE as
$$ BEGIN RETURN 1+arg; END; $$;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a = random() and a < stable_1(a) and a < stable_1(6000));
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -1096,7 +1099,7 @@ BEGIN;
INSERT INTO pushdown_test VALUES(7, 'USA');
INSERT INTO pushdown_test VALUES(8, 'ZW');
END;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT id FROM pushdown_test WHERE country IN ('USA', 'BR', 'ZW');
QUERY PLAN
---------------------------------------------------------------------
@@ -1123,7 +1126,7 @@ BEGIN
return 'AL';
END;
$$;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM pushdown_test WHERE country IN ('USA', 'ZW', volatileFunction());
QUERY PLAN
---------------------------------------------------------------------
@@ -1141,4 +1144,6 @@ SELECT * FROM pushdown_test WHERE country IN ('USA', 'ZW', volatileFunction());
8 | ZW
(3 rows)
DROP TABLE pushdown_test;
SET client_min_messages TO WARNING;
DROP SCHEMA columnar_chunk_filtering CASCADE;
RESET client_min_messages;


@@ -10,6 +10,8 @@
-- If chunks get filtered by columnar, less rows are passed to WHERE
-- clause, so this function should return a lower number.
--
CREATE SCHEMA columnar_chunk_filtering;
SET search_path TO columnar_chunk_filtering, public;
CREATE OR REPLACE FUNCTION filtered_row_count (query text) RETURNS bigint AS
$$
DECLARE
@@ -125,7 +127,7 @@ SELECT * FROM collation_chunk_filtering_test WHERE A > 'B';
CREATE TABLE simple_chunk_filtering(i int) USING COLUMNAR;
INSERT INTO simple_chunk_filtering SELECT generate_series(0,234567);
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM simple_chunk_filtering WHERE i > 123456;
QUERY PLAN
---------------------------------------------------------------------
@@ -138,7 +140,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
(6 rows)
SET columnar.enable_qual_pushdown = false;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM simple_chunk_filtering WHERE i > 123456;
QUERY PLAN
---------------------------------------------------------------------
@@ -153,7 +155,7 @@ SET columnar.enable_qual_pushdown TO DEFAULT;
TRUNCATE simple_chunk_filtering;
INSERT INTO simple_chunk_filtering SELECT generate_series(0,200000);
COPY (SELECT * FROM simple_chunk_filtering WHERE i > 180000) TO '/dev/null';
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM simple_chunk_filtering WHERE i > 180000;
QUERY PLAN
---------------------------------------------------------------------
@@ -168,7 +170,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
DROP TABLE simple_chunk_filtering;
CREATE TABLE multi_column_chunk_filtering(a int, b int) USING columnar;
INSERT INTO multi_column_chunk_filtering SELECT i,i+1 FROM generate_series(0,234567) i;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT count(*) FROM multi_column_chunk_filtering WHERE a > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -181,7 +183,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 5
(7 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT count(*) FROM multi_column_chunk_filtering WHERE a > 50000 AND b > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -197,7 +199,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
-- make next tests faster
TRUNCATE multi_column_chunk_filtering;
INSERT INTO multi_column_chunk_filtering SELECT generate_series(0,5);
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT b FROM multi_column_chunk_filtering WHERE a > 50000 AND b > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -208,7 +210,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 1
(5 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT b, a FROM multi_column_chunk_filtering WHERE b > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -220,7 +222,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 0
(6 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT FROM multi_column_chunk_filtering WHERE a > 50000;
QUERY PLAN
---------------------------------------------------------------------
@@ -231,7 +233,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
Columnar Chunk Groups Removed by Filter: 1
(5 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT FROM multi_column_chunk_filtering;
QUERY PLAN
---------------------------------------------------------------------
@@ -242,7 +244,7 @@ EXPLAIN (analyze on, costs off, timing off, summary off)
BEGIN;
ALTER TABLE multi_column_chunk_filtering DROP COLUMN a;
ALTER TABLE multi_column_chunk_filtering DROP COLUMN b;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM multi_column_chunk_filtering;
QUERY PLAN
---------------------------------------------------------------------
@@ -253,7 +255,7 @@ BEGIN;
ROLLBACK;
CREATE TABLE another_columnar_table(x int, y int) USING columnar;
INSERT INTO another_columnar_table SELECT generate_series(0,5);
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT a, y FROM multi_column_chunk_filtering, another_columnar_table WHERE x > 1;
QUERY PLAN
---------------------------------------------------------------------
@@ -364,7 +366,7 @@ set enable_mergejoin=false;
set enable_hashjoin=false;
set enable_material=false;
-- test different kinds of expressions
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, coltest WHERE
id1 = id AND x1 > 15000 AND x1::text > '000000' AND n1 % 10 = 0;
QUERY PLAN
@@ -391,7 +393,7 @@ SELECT * FROM r1, coltest WHERE
(3 rows)
-- test equivalence classes
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, r2, r3, r4, r5, r6, r7, coltest WHERE
id = id1 AND id1 = id2 AND id2 = id3 AND id3 = id4 AND
id4 = id5 AND id5 = id6 AND id6 = id7;
@@ -561,7 +563,7 @@ set columnar.max_custom_scan_paths to default;
set columnar.planner_debug_level to default;
-- test more complex parameterization
set columnar.planner_debug_level = 'notice';
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, r2, r3, coltest WHERE
id1 = id2 AND id2 = id3 AND id3 = id AND
n1 > x1 AND n2 > x2 AND n3 > x3;
@@ -613,7 +615,7 @@ SELECT * FROM r1, r2, r3, coltest WHERE
(3 rows)
-- test partitioning parameterization
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM r1, coltest_part WHERE
id1 = id AND n1 > x1;
QUERY PLAN
@@ -680,7 +682,7 @@ DETAIL: unparameterized; 0 clauses pushed down
---------------------------------------------------------------------
(0 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM coltest c1 WHERE ceil(x1) > 4222;
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -832,7 +834,7 @@ BEGIN;
COMMIT;
SET columnar.max_custom_scan_paths TO 50;
SET columnar.qual_pushdown_correlation_threshold TO 0.0;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test WHERE a = 204356 or a = 104356 or a = 76556;
NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down
@@ -855,7 +857,7 @@ DETAIL: unparameterized; 1 clauses pushed down
180912
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test WHERE a = 194356 or a = 104356 or a = 76556;
NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down
@@ -878,7 +880,7 @@ DETAIL: unparameterized; 1 clauses pushed down
375268
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test WHERE a = 204356 or a > a*-1 + b;
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -894,7 +896,7 @@ DETAIL: unparameterized; 0 clauses pushed down
Columnar Projected Columns: a, b
(5 rows)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a > 1000 and a < 10000) or (a > 20000 and a < 50000);
NOTICE: columnar planner: adding CustomScan path for pushdown_test
DETAIL: unparameterized; 1 clauses pushed down
@@ -917,7 +919,7 @@ DETAIL: unparameterized; 1 clauses pushed down
1099459500
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a > random() and a < 2*a) or (a > 100);
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -949,7 +951,7 @@ DETAIL: unparameterized; 0 clauses pushed down
20000100000
(1 row)
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a > random() and a <= 2000) or (a > 200000-1010);
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -977,8 +979,9 @@ DETAIL: unparameterized; 1 clauses pushed down
(1 row)
SET hash_mem_multiplier = 1.0;
\pset footer off
SELECT columnar_test_helpers.explain_with_pg16_subplan_format($Q$
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where
(
a > random()
@@ -1015,8 +1018,8 @@ CONTEXT: PL/pgSQL function columnar_test_helpers.explain_with_pg16_subplan_form
-> Materialize (actual rows=100 loops=199)
-> Custom Scan (ColumnarScan) on pushdown_test pushdown_test_1 (actual rows=199 loops=1)
Columnar Projected Columns: a
(11 rows)
\pset footer on
RESET hash_mem_multiplier;
SELECT sum(a) FROM pushdown_test where
(
@@ -1043,7 +1046,7 @@ DETAIL: unparameterized; 1 clauses pushed down
create function stable_1(arg int) returns int language plpgsql STRICT IMMUTABLE as
$$ BEGIN RETURN 1+arg; END; $$;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT sum(a) FROM pushdown_test where (a = random() and a < stable_1(a) and a < stable_1(6000));
NOTICE: columnar planner: cannot push down clause: must match 'Var <op> Expr' or 'Expr <op> Var'
HINT: Var must only reference this rel, and Expr must not reference this rel
@@ -1096,7 +1099,7 @@ BEGIN;
INSERT INTO pushdown_test VALUES(7, 'USA');
INSERT INTO pushdown_test VALUES(8, 'ZW');
END;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT id FROM pushdown_test WHERE country IN ('USA', 'BR', 'ZW');
QUERY PLAN
---------------------------------------------------------------------
@@ -1123,7 +1126,7 @@ BEGIN
return 'AL';
END;
$$;
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM pushdown_test WHERE country IN ('USA', 'ZW', volatileFunction());
QUERY PLAN
---------------------------------------------------------------------
@@ -1141,4 +1144,6 @@ SELECT * FROM pushdown_test WHERE country IN ('USA', 'ZW', volatileFunction());
8 | ZW
(3 rows)
DROP TABLE pushdown_test;
SET client_min_messages TO WARNING;
DROP SCHEMA columnar_chunk_filtering CASCADE;
RESET client_min_messages;


@@ -4,7 +4,7 @@
CREATE TABLE test_cursor (a int, b int) USING columnar;
INSERT INTO test_cursor SELECT i, j FROM generate_series(0, 100)i, generate_series(100, 200)j;
-- A case where the WHERE clause might filter out some chunks
EXPLAIN (analyze on, costs off, timing off, summary off) SELECT * FROM test_cursor WHERE a = 25;
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF) SELECT * FROM test_cursor WHERE a = 25;
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (ColumnarScan) on test_cursor (actual rows=101 loops=1)
@@ -107,7 +107,7 @@ UPDATE test_cursor SET a = 8000 WHERE CURRENT OF a_25;
ERROR: UPDATE and CTID scans not supported for ColumnarScan
COMMIT;
-- A case where the WHERE clause doesn't filter out any chunks
EXPLAIN (analyze on, costs off, timing off, summary off) SELECT * FROM test_cursor WHERE a > 25;
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF) SELECT * FROM test_cursor WHERE a > 25;
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (ColumnarScan) on test_cursor (actual rows=7575 loops=1)


@@ -176,12 +176,15 @@ SELECT pg_total_relation_size('columnar_table_b_idx') * 5 <
(1 row)
-- can't use index scan due to partial index boundaries
EXPLAIN (COSTS OFF) SELECT b FROM columnar_table WHERE b = 30000;
QUERY PLAN
SELECT NOT jsonb_path_exists(
columnar_test_helpers._plan_json('SELECT b FROM columnar_table WHERE b = 30000'),
-- Regex matches any index-based scan: "Index Scan", "Index Only Scan", "Bitmap Index Scan".
'$[*].Plan.** ? (@."Node Type" like_regex "^(Index|Bitmap Index).*Scan$")'
) AS uses_no_index_scan; -- expect: t
uses_no_index_scan
---------------------------------------------------------------------
Seq Scan on columnar_table
Filter: (b = 30000)
(2 rows)
t
(1 row)
-- can use index scan
EXPLAIN (COSTS OFF) SELECT b FROM columnar_table WHERE b = 30001;


@@ -204,18 +204,24 @@
t
(1 row)
SELECT columnar_test_helpers.uses_custom_scan (
$$
SELECT a FROM full_correlated WHERE a=0 OR a=5;
$$
);
BEGIN;
SET LOCAL enable_indexscan TO 'OFF';
SET LOCAL enable_bitmapscan TO 'OFF';
SELECT columnar_test_helpers.uses_custom_scan (
$$
SELECT a FROM full_correlated WHERE a=0 OR a=5;
$$
);
uses_custom_scan
---------------------------------------------------------------------
t
(1 row)
ROLLBACK;
BEGIN;
SET LOCAL columnar.enable_custom_scan TO 'OFF';
SET LOCAL enable_indexscan TO 'OFF';
SET LOCAL enable_bitmapscan TO 'OFF';
SELECT columnar_test_helpers.uses_seq_scan (
$$
SELECT a FROM full_correlated WHERE a=0 OR a=5;
@@ -579,7 +585,7 @@ CREATE INDEX correlated_idx ON correlated(x);
CREATE INDEX uncorrelated_idx ON uncorrelated(x);
ANALYZE correlated, uncorrelated;
-- should choose chunk group filtering; selective and correlated
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM correlated WHERE x = 78910;
QUERY PLAN
---------------------------------------------------------------------
@@ -598,11 +604,11 @@ SELECT * FROM correlated WHERE x = 78910;
(1 row)
-- should choose index scan; selective but uncorrelated
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze off, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM uncorrelated WHERE x = 78910;
QUERY PLAN
---------------------------------------------------------------------
Index Scan using uncorrelated_idx on uncorrelated (actual rows=1 loops=1)
Index Scan using uncorrelated_idx on uncorrelated
Index Cond: (x = 78910)
(2 rows)


@ -204,18 +204,24 @@ $$
t
(1 row)
SELECT columnar_test_helpers.uses_custom_scan (
$$
SELECT a FROM full_correlated WHERE a=0 OR a=5;
$$
);
BEGIN;
SET LOCAL enable_indexscan TO 'OFF';
SET LOCAL enable_bitmapscan TO 'OFF';
SELECT columnar_test_helpers.uses_custom_scan (
$$
SELECT a FROM full_correlated WHERE a=0 OR a=5;
$$
);
uses_custom_scan
---------------------------------------------------------------------
t
(1 row)
ROLLBACK;
BEGIN;
SET LOCAL columnar.enable_custom_scan TO 'OFF';
SET LOCAL enable_indexscan TO 'OFF';
SET LOCAL enable_bitmapscan TO 'OFF';
SELECT columnar_test_helpers.uses_seq_scan (
$$
SELECT a FROM full_correlated WHERE a=0 OR a=5;
@ -583,7 +589,7 @@ CREATE INDEX correlated_idx ON correlated(x);
CREATE INDEX uncorrelated_idx ON uncorrelated(x);
ANALYZE correlated, uncorrelated;
-- should choose chunk group filtering; selective and correlated
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze on, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM correlated WHERE x = 78910;
QUERY PLAN
---------------------------------------------------------------------
@ -602,11 +608,11 @@ SELECT * FROM correlated WHERE x = 78910;
(1 row)
-- should choose index scan; selective but uncorrelated
EXPLAIN (analyze on, costs off, timing off, summary off)
EXPLAIN (analyze off, costs off, timing off, summary off, BUFFERS OFF)
SELECT * FROM uncorrelated WHERE x = 78910;
QUERY PLAN
---------------------------------------------------------------------
Index Scan using uncorrelated_idx on uncorrelated (actual rows=1 loops=1)
Index Scan using uncorrelated_idx on uncorrelated
Index Cond: (x = 78910)
(2 rows)


@ -146,3 +146,11 @@ BEGIN
RETURN NEXT;
END LOOP;
END; $$ language plpgsql;
CREATE OR REPLACE FUNCTION _plan_json(q text)
RETURNS jsonb
LANGUAGE plpgsql AS $$
DECLARE j jsonb;
BEGIN
EXECUTE format('EXPLAIN (FORMAT JSON, COSTS OFF, ANALYZE OFF) %s', q) INTO j;
RETURN j;
END $$;
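The `_plan_json` helper above captures `EXPLAIN (FORMAT JSON)` output so the tests can assert on plan shape with a jsonpath filter instead of matching plan text. As a minimal sketch of what that jsonpath check does, the equivalent logic can be written in plain Python over a plan tree (the sample plan JSON below is hypothetical, stdlib only):

```python
import json
import re

# Same pattern as the jsonpath filter in the test:
# matches "Index Scan", "Index Only Scan", and "Bitmap Index Scan" nodes.
INDEX_SCAN_RE = re.compile(r"^(Index|Bitmap Index).*Scan$")

def uses_index_scan(plan_json: str) -> bool:
    """Recursively walk an EXPLAIN (FORMAT JSON) tree looking for index scans."""
    def walk(node):
        node_type = node.get("Node Type", "")
        if INDEX_SCAN_RE.match(node_type):
            return True
        # Child plans live under the "Plans" key in EXPLAIN JSON output.
        return any(walk(child) for child in node.get("Plans", []))
    return any(walk(entry["Plan"]) for entry in json.loads(plan_json))

# Hypothetical sample plans, shaped like EXPLAIN (FORMAT JSON) output.
seq_plan = '[{"Plan": {"Node Type": "Seq Scan", "Relation Name": "columnar_table"}}]'
idx_plan = '[{"Plan": {"Node Type": "Index Scan", "Index Name": "some_idx"}}]'
print(uses_index_scan(seq_plan))  # False
print(uses_index_scan(idx_plan))  # True
```

The test then negates this check (`SELECT NOT jsonb_path_exists(...)`), so `t` means no index-based scan appears anywhere in the plan.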


@ -219,7 +219,7 @@ EXPLAIN (COSTS OFF) EXECUTE p0;
(2 rows)
EXECUTE p0;
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off) EXECUTE p0;
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off, BUFFERS OFF) EXECUTE p0;
QUERY PLAN
---------------------------------------------------------------------
Insert on t (actual rows=0 loops=1)
@ -252,7 +252,7 @@ EXPLAIN (COSTS OFF) EXECUTE p1(16);
(2 rows)
EXECUTE p1(16);
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off) EXECUTE p1(20);
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off, BUFFERS OFF) EXECUTE p1(20);
QUERY PLAN
---------------------------------------------------------------------
Insert on t (actual rows=0 loops=1)
@ -289,7 +289,7 @@ EXPLAIN (COSTS OFF) EXECUTE p2(30, 40);
(2 rows)
EXECUTE p2(30, 40);
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off) EXECUTE p2(50, 60);
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off, BUFFERS OFF) EXECUTE p2(50, 60);
QUERY PLAN
---------------------------------------------------------------------
Insert on t (actual rows=0 loops=1)
@ -342,7 +342,7 @@ EXECUTE p3;
8 | 8
(2 rows)
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off) EXECUTE p3;
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off, BUFFERS OFF) EXECUTE p3;
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (ColumnarScan) on t (actual rows=2 loops=1)
@ -397,7 +397,7 @@ EXECUTE p5(16);
---------------------------------------------------------------------
(0 rows)
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off) EXECUTE p5(9);
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off, BUFFERS OFF) EXECUTE p5(9);
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (ColumnarScan) on t (actual rows=2 loops=1)
@ -453,7 +453,7 @@ EXECUTE p6(30, 40);
31 | 41
(1 row)
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off) EXECUTE p6(50, 60);
EXPLAIN (ANALYZE true, COSTS off, TIMING off, SUMMARY off, BUFFERS OFF) EXECUTE p6(50, 60);
QUERY PLAN
---------------------------------------------------------------------
Custom Scan (ColumnarScan) on t (actual rows=1 loops=1)


@ -140,7 +140,7 @@ ROLLBACK;
-- INSERT..SELECT with re-partitioning in EXPLAIN ANALYZE after local execution
BEGIN;
INSERT INTO test VALUES (0,1000);
EXPLAIN (COSTS FALSE, ANALYZE TRUE, TIMING FALSE, SUMMARY FALSE) INSERT INTO test (x, y) SELECT y, x FROM test;
EXPLAIN (COSTS FALSE, ANALYZE TRUE, TIMING FALSE, SUMMARY FALSE, BUFFERS OFF) INSERT INTO test (x, y) SELECT y, x FROM test;
ERROR: EXPLAIN ANALYZE is currently not supported for INSERT ... SELECT commands with repartitioning
ROLLBACK;
-- DDL connects to localhost


@ -473,25 +473,19 @@ SELECT create_distributed_table('color', 'color_id');
(1 row)
INSERT INTO color(color_name) VALUES ('Blue');
\d+ color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity | plain | |
color_name | character varying | | not null | | extended | |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
SELECT pg_get_serial_sequence('color', 'color_id');
pg_get_serial_sequence
---------------------------------------------------------------------
generated_identities.color_color_id_seq
(1 row)
\c - - - :worker_1_port
SET search_path TO generated_identities;
\d+ color
Table "generated_identities.color"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------------------------------------------------------------
color_id | bigint | | not null | generated always as identity | plain | |
color_name | character varying | | not null | | extended | |
Indexes:
"color_color_id_key" UNIQUE CONSTRAINT, btree (color_id)
SELECT pg_get_serial_sequence('color', 'color_id');
pg_get_serial_sequence
---------------------------------------------------------------------
generated_identities.color_color_id_seq
(1 row)
INSERT INTO color(color_name) VALUES ('Red');
-- alter sequence .. restart


@ -1,5 +0,0 @@
Parsed test spec with 2 sessions
starting permutation: s1-begin s1-upd-ins s2-result s1-commit s2-result
setup failed: ERROR: MERGE is not supported on PG versions below 15
CONTEXT: PL/pgSQL function inline_code_block line XX at RAISE
