Compare commits

...

30 Commits

Author SHA1 Message Date
Gürkan İndibay 17f09d4ad7
Bump Citus to 11.3.1 (#7505) 2024-02-14 08:41:01 +03:00
Gürkan İndibay 98bbfc27c8
Adds changelog for v11.3.1 (#7497) 2024-02-13 13:56:34 +00:00
Teja Mupparti 37a1c3d9db Fix the incorrect column count after ALTER TABLE; this fixes bug #7378 (please read the analysis in the bug report for more information)
(cherry picked from commit 00068e07c5)
2024-01-24 14:23:27 -08:00
Gokhan Gulbiz 30307e6e79
Backport GHA Migration to release-11.3 (#7313)
DESCRIPTION: PR description that will go into the change log, up to 78
characters

---------

Co-authored-by: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
2023-11-10 08:31:32 +02:00
Onur Tirtir 6ba68a978f Make sure to disallow creating a replicated distributed table concurrently (#7219)
See explanation in https://github.com/citusdata/citus/issues/7216.
Fixes https://github.com/citusdata/citus/issues/7216.

DESCRIPTION: Makes sure to disallow creating a replicated distributed
table concurrently

(cherry picked from commit 111b4c19bc)
2023-10-24 14:04:34 +03:00
Nils Dijk 78b272d971
Fix leaking of memory and memory contexts in Foreign Constraint Graphs (#7236)
DESCRIPTION: Fix leaking of memory and memory contexts in Foreign
Constraint Graphs

Previously, every time we (re)created the Foreign Constraint
Relationship Graph, we created a new memory context while losing the
reference to the previous context. That old context could still hold
leftover memory, causing a memory leak.

With this patch we have a single, statically declared memory context
that we lazily initialize the first time we create the foreign
constraint relationship graph. On every subsequent creation, besides
destroying the previous hashmap, we also reset the memory context to
remove any leftover allocations.
2023-10-09 13:16:39 +02:00
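To make the pattern above concrete, here is a minimal C sketch, assuming hypothetical function and variable names rather than Citus's actual identifiers: a single static context is created lazily on first use and reset, instead of recreated, on every rebuild.

```c
#include "postgres.h"
#include "utils/memutils.h"

/* hypothetical names; the real ones live in Citus's foreign key graph code */
static MemoryContext ForeignConstraintGraphContext = NULL;

static void
RebuildForeignConstraintRelationshipGraph(void)
{
	if (ForeignConstraintGraphContext == NULL)
	{
		/* lazily create the long-lived context on first use */
		ForeignConstraintGraphContext =
			AllocSetContextCreate(CacheMemoryContext,
								  "Foreign Constraint Relationship Graph",
								  ALLOCSET_DEFAULT_SIZES);
	}
	else
	{
		/* hashmap was destroyed by the caller; drop leftover allocations */
		MemoryContextReset(ForeignConstraintGraphContext);
	}

	MemoryContext oldContext =
		MemoryContextSwitchTo(ForeignConstraintGraphContext);

	/* ... allocate the graph and its hashmap in the dedicated context ... */

	MemoryContextSwitchTo(oldContext);
}
```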
Gürkan İndibay d8a0d1c071 Removes ubuntu:kinetic pipelines since it's EOL (#7195)
ubuntu:kinetic is EOL so removing its pipeline

https://fridge.ubuntu.com/2023/06/14/ubuntu-22-10-kinetic-kudu-reaches-end-of-life-on-july-20-2023/
(cherry picked from commit e0683aab84)
2023-09-26 16:54:51 +03:00
Gürkan İndibay bd17e5fd77 Removes pg_send_cancellation (#7135)
DESCRIPTION: Removes pg_send_cancellation and all references
(cherry picked from commit 371f094b68)
2023-09-26 16:54:51 +03:00
Hanefi Onaldi e4ba71f5de
Create a new colocation properly after breaking one
When breaking a colocation, we need to create a new colocation group
record in pg_dist_colocation for the relation. It is not sufficient to
have a new colocationid value in pg_dist_partition only.

This patch also fixes a bug when deleting a colocation group if no
tables are left in it. Previously, we passed a relation id as a parameter
to the DeleteColocationGroupIfNoTablesBelong function, where we should have
passed a colocation id.

(cherry picked from commit c22547d221)
2023-09-05 11:44:43 +03:00
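A minimal sketch of the parameter mix-up described above; only the helper names come from the commit message, while the wrapper function and the included header are assumptions.

```c
#include "postgres.h"
#include "distributed/colocation_utils.h"	/* assumed header for the helpers */

/* hypothetical wrapper around the metadata updates */
static void
BreakColocation(Oid relationId)
{
	/* remember the group the table is leaving before touching metadata */
	uint32 oldColocationId = TableColocationId(relationId);

	/*
	 * ... insert a new record into pg_dist_colocation and point the table's
	 * pg_dist_partition row at the new colocation id ...
	 */

	/* was: DeleteColocationGroupIfNoTablesBelong(relationId) */
	DeleteColocationGroupIfNoTablesBelong(oldColocationId);
}
```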
zhjwpku f551eb50c2 PQputCopyData's return value 0 should be considered a failure (#7152) 2023-08-29 11:22:36 +02:00
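For context, libpq documents `PQputCopyData` as returning 1 when the data is queued, 0 when it could not be queued (e.g., nonblocking mode with full buffers), and -1 on error, so treating only -1 as failure can silently drop COPY data. Below is a hedged sketch of the stricter check, using an illustrative helper rather than Citus's actual code.

```c
#include "postgres.h"
#include "libpq-fe.h"

/* illustrative helper: send a chunk of COPY data over a libpq connection */
static void
SendCopyChunk(PGconn *connection, const char *data, int dataLength)
{
	/* 0 (could not queue) and -1 (error) are both failures here */
	if (PQputCopyData(connection, data, dataLength) <= 0)
	{
		ereport(ERROR, (errmsg("failed to send COPY data: %s",
							   PQerrorMessage(connection))));
	}
}
```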
Halil Ozan Akgül 21be1af515
Improve citus_shard_sizes performance in release-11.3 (#7051)
This PR combines the relevant parts of #7003 and #7018 with the PR that
backports those changes to 11.3, #7050, to merge into the 11.3 release
branch.

This PR backports the changes to the 11.3 release branch; there is another
PR, #7062, for the 12.0 release branch and one, #7050, for the main branch.

The commits are:
613cced1ae
772d194357
2023-07-14 17:19:30 +03:00
aykut-bozkurt c18586c009 Properly handle error at owner check (#6984)
We did not properly handle errors in the ownership check method, which
caused `max stack depth for errors` as in
https://github.com/citusdata/citus/issues/6980.

**Fix:**
In case of an error, we roll back the subtransaction and emit the message
at log level `LOG_SERVER_ONLY`.

Note: We hide these logs from the client to prevent PostgreSQL vanilla test
failures caused by Citus logs that differ from the actual Postgres logs.
(For context: https://github.com/citusdata/citus/pull/6130)

I also needed to fix a flaky test: `multi_schema_support`

DESCRIPTION: Fixes a bug related to non-existent objects in DDL
commands.

Fixes https://github.com/citusdata/citus/issues/6980

(cherry picked from commit 565c5260fd)
2023-06-21 19:42:43 +03:00
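A minimal sketch of the error-handling pattern described above, with hypothetical names: the check runs in a subtransaction, and on failure the subtransaction is rolled back and the error is re-emitted at `LOG_SERVER_ONLY`, so it reaches only the server log.

```c
#include "postgres.h"
#include "access/xact.h"

/* hypothetical wrapper around a check that may raise an error */
static bool
CheckOwnershipInSubtransaction(void)
{
	MemoryContext savedContext = CurrentMemoryContext;
	volatile bool checkPassed = false;

	BeginInternalSubTransaction(NULL);

	PG_TRY();
	{
		/* ... the ownership check that may ereport(ERROR, ...) ... */
		checkPassed = true;
		ReleaseCurrentSubTransaction();
	}
	PG_CATCH();
	{
		/* copy the error data before cleaning up the failed subtransaction */
		MemoryContextSwitchTo(savedContext);
		ErrorData *edata = CopyErrorData();
		FlushErrorState();

		RollbackAndReleaseCurrentSubTransaction();

		/* emit to the server log only, keeping client-visible output stable */
		edata->elevel = LOG_SERVER_ONLY;
		ThrowErrorData(edata);
	}
	PG_END_TRY();

	return checkPassed;
}
```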
aykut-bozkurt d13e9f50b2 Rewind tuple store to fix scrollable with hold cursor fetches (#7014)
We need to rewind the tuplestorestate's tuple index to get correct
results on fetching scrollable with hold cursors.

`PersistHoldablePortal` is responsible for persisting the
tuplestorestate inside a with hold cursor before committing a
transaction.

It rewinds the cursor like below (`ExecutorRewind` calls `rescan`):
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
  ExecutorRewind(queryDesc);
}
```

At the end, it properly adjusts the tuple index for the holdStore in the portal.
```c
if (portal->cursorOptions & CURSOR_OPT_SCROLL)
{
	if (!tuplestore_skiptuples(portal->holdStore,
							   portal->portalPos,
							   true))
		elog(ERROR, "unexpected end of tuple stream");
}
```

DESCRIPTION: Fixes incorrect results on fetching scrollable with hold
cursors.

Fixes https://github.com/citusdata/citus/issues/7010

(cherry picked from commit f667f14029)
2023-06-21 19:37:52 +03:00
Teja Mupparti 2de879dc59 This commit adds a safety net for the issue seen in #6785. The fix for the underlying issue will be in PR #6943
(cherry picked from commit f9dbe7784b)
2023-05-30 10:58:51 -07:00
Emel Şimşek 78f0b0c6dd Run replicate_reference_tables background task as superuser. (#6930)
DESCRIPTION: Fixes a bug in background shard rebalancer where the
replicate reference tables task fails if the current user is not a
superuser.

This change is to be backported to earlier releases. We should fix the
permissions for replicate_reference_tables on main branch such that it
can be run by non-superuser roles.

Fixes #6925.
Fixes #6926.

(cherry picked from commit f9a5be59b9)
2023-05-22 22:00:05 +03:00
Hanefi Onaldi cb99468716
Normalize columnar version in tests (#6917)
When we bump the columnar version, some tests fail because of the output
change. Instead of changing those lines every time, I think it is better
to normalize the version in the test output.

(cherry picked from commit 06e6f8e428)
2023-05-09 13:38:49 +03:00
Hanefi Onaldi 46337a53ae
Skip flaky tests on release branch 2023-05-08 22:11:16 +03:00
Hanefi Onaldi 15f5796eee
Fix flaky background rebalance parallel test (#6893)
A test in background_rebalance_parallel.sql was failing intermittently
because the order of tasks in the output was not deterministic. This
commit fixes the test by removing the id columns for the background tasks
in the output.

A sample failing diff before this patch is below:

```diff
 SELECT D.task_id,
        (SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.task_id),
        D.depends_on,
        (SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.depends_on)
 FROM pg_dist_background_task_depend D  WHERE job_id in (:job_id) ORDER BY D.task_id, D.depends_on ASC;
  task_id |                               command                               | depends_on |                               command
 ---------+---------------------------------------------------------------------+------------+---------------------------------------------------------------------
-    1014 | SELECT pg_catalog.citus_move_shard_placement(85674026,50,57,'auto') |       1013 | SELECT pg_catalog.citus_move_shard_placement(85674025,50,56,'auto')
-    1016 | SELECT pg_catalog.citus_move_shard_placement(85674032,50,57,'auto') |       1015 | SELECT pg_catalog.citus_move_shard_placement(85674031,50,56,'auto')
-    1018 | SELECT pg_catalog.citus_move_shard_placement(85674038,50,57,'auto') |       1017 | SELECT pg_catalog.citus_move_shard_placement(85674037,50,56,'auto')
-    1020 | SELECT pg_catalog.citus_move_shard_placement(85674044,50,57,'auto') |       1019 | SELECT pg_catalog.citus_move_shard_placement(85674043,50,56,'auto')
+    1014 | SELECT pg_catalog.citus_move_shard_placement(85674038,50,57,'auto') |       1013 | SELECT pg_catalog.citus_move_shard_placement(85674037,50,56,'auto')
+    1016 | SELECT pg_catalog.citus_move_shard_placement(85674044,50,57,'auto') |       1015 | SELECT pg_catalog.citus_move_shard_placement(85674043,50,56,'auto')
+    1018 | SELECT pg_catalog.citus_move_shard_placement(85674026,50,57,'auto') |       1017 | SELECT pg_catalog.citus_move_shard_placement(85674025,50,56,'auto')
+    1020 | SELECT pg_catalog.citus_move_shard_placement(85674032,50,57,'auto') |       1019 | SELECT pg_catalog.citus_move_shard_placement(85674031,50,56,'auto')
 (4 rows)
```

Notice that the dependent and dependee tasks have the same commands, but
they have different task ids.

(cherry picked from commit 3217e3f181)
2023-05-05 12:18:39 +03:00
Hanefi Onaldi fcd3b6c12f Changelog entries for 11.3.0 (#6856)
In this release, I tried something different. I experimented with adding
the PR number and title to the changelog right before each changelog
entry. This way, it is easier to track where a particular changelog
entry comes from. After reviews are over, I plan to remove those lines
with PR numbers and titles.

I went through all the PRs that were merged after the 11.2.0 release and
came up with a list of PRs that may need help with changelog entries. You
can see details on the PRs, grouped into several sections, below.

The PRs below do not have a changelog entry. If you think that
this is a mistake, please share it in this PR along with a suggestion on
what the changelog item should be.

PR #6846 : fix 3 flaky tests in failure schedule
PR #6844 : Add CPU usage to citus_stat_tenants
PR #6833 : Fix citus_stat_tenants period updating bug
PR #6787 : Add more tests for ddl coverage
PR #6842 : Add build-cdc-* temporary directories to .gitignore
PR #6841 : Add build-cdc-* temporary directories to .gitignore
PR #6840 : Bump Citus to 12.0devel
PR #6824 : Fixes flakiness in multi_metadata_sync test
PR #6811 : Backport identity column improvements to v11.2
PR #6830 : In run_test.py actually return worker_count
PR #6825 : Fixes flakiness in multi_cluster_management test
PR #6816 : Refactor run_test.py
PR #6817 : Explicitly disallow local rels when inserting into dist table
PR #6821 : Rename citus stats tenants
PR #6822 : Add some more tests for initial sql support
PR #6819 : Fix flakyness in
citus_split_shard_by_split_points_deferred_drop
PR #6814 : Make python-regress based tests runnable with run_test.py
PR #6813 : Fix flaky multi_mx_schema_support test
PR #6720 : Convert columnar tap tests to pytest
PR #6812 : Revoke statistics permissions from public and grant them to
pg_monitor
PR #6769 : Citus stats tenants guc
PR #6807 : Fix the incorrect (constant) value passed to pointer-to-bool
parameter, pass a NULL as the value is not used
PR #6797 : Attribute local queries and cached plans on local execution
PR #6796 : Parse the annotation string correctly
PR #6762 : Add logs to citus_stats_tenants
PR #6773 : Add initial sql support for distributed tables that don't
have a shard key
PR #6792 : Disentangle MERGE planning code from the modify-planning code
path
PR #6761 : Citus stats tenants collector view
PR #6791 : Make 8 more tests runnable multiple times via run_test.py
PR #6786 : Refactor some of the planning code to accommodate a new
planning path for MERGE SQL
PR #6789 : Rename AllRelations.. functions to AllDistributedRelations..
PR #6788 : Actually skip arbitrary_configs_router & nested_execution for
AllNullDistKeyDefaultConfig
PR #6783 : Add a config for arbitrary config tests where all the tables
are null-shard-key tables
PR #6784 : Fix attach partition: citus local to null distributed
PR #6782 : Add an arbitrary config test heavily based on
multi_router_planner_fast_path.sql
PR #6781 : Decide what to do with router planner error at one place
PR #6778 : Support partitioning for dist tables with null dist keys
PR #6766 : fix pip lock file
PR #6764 : Make workerCount configurable for regression tests
PR #6745 : Add support for creating distributed tables with a null shard
key
PR #6696 : This implements MERGE phase-III
PR #6767 : Add pytest depedencies to Pipfile
PR #6760 : Decide core distribution params in CreateCitusTable
PR #6759 : Add multi_create_fdw into minimal_schedule
PR #6743 : Replace CITUS_TABLE_WITH_NO_DIST_KEY checks with
HasDistributionKey()
PR #6751 : Stabilize single_node.sql and others that report illegal node
removal
PR #6742 : Refactor CreateDistributedTable()
PR #6747 : Remove unused lock functions
PR #6744 : Fix multiple output version arbitrary config tests
PR #6741 : Stabilize single node tests
PR #6740 : Fix string eval bug in migration files check
PR #6736 : Make run_test.py and create_test.py importable without errors
PR #6734 : Don't blanket ignore flake8 E402 error
PR #6737 : Fixes bookworm packaging pipeline problem
PR #6735 : Fix run_test.py on python 3.9
PR #6733 : MERGE: In deparser, add missing check for RETURNING clause.
PR #6714 : Remove auto_explain workaround in citus explain hook for
ALTER TABLE
PR #6719 : Fix flaky test
PR #6718 : Add more powerfull dependency tracking to run_test.py
PR #6710 : Install non-vulnerable cryptography package
PR #6711 : Support compilation and run tests on latest PG versions
PR #6700 : Add auto-formatting and linting to our python code
PR #6707 : Allow multi_insert_select to run repeatably
PR #6708 : Fix flakyness in failure_create_distributed_table_non_empty
PR #6698 : Miscellaneous cleanup
PR #6704 : Update README for 11.2
PR #6703 : Fix dubious ownership error from git
PR #6690 : Bump Citus to 11.3devel

The following PRs have changelog entries that are too long to fit in a
single line. I'd expect authors to supply changelog entries in
`DESCRIPTION:` lines that are at most 78 characters. If you want to
supply multi-line changelog items, you can have multiple lines that
start with `DESCRIPTION:` instead.

PR #6837 : fixes update propagation bug when
`citus_set_coordinator_host` is called more than once
PR #6738 :  Identity column implementation refactorings
PR #6756 : Schedule parallel shard moves in background rebalancer by
removing task dependencies between shard moves across colocation groups.
PR #6793 : Add a GUC to disallow planning the queries that reference
non-colocated tables via router planner
PR #6726 : fix memory leak during altering distributed table with a lot
of partition and shards
PR #6722 : fix memory leak during distribution of a table with a lot of
partitions
PR #6693 : prevent memory leak during ConvertTable with a lot of
partitions

The following PRs had an empty `DESCRIPTION:` line. This generates an
empty changelog line that needs to be removed manually. Please either
provide a short entry or remove the `DESCRIPTION:` line completely.

PR #6810 : Make CDC decoder an independent extension
PR #6827 : Makefile changes to build CDC in builddir for pgoutput and
wal2json.

---------

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
(cherry picked from commit 934430003e)
2023-05-02 12:39:15 +03:00
Hanefi Onaldi 6d833a90e5 Bump columnar to 11.3
(cherry picked from commit eca70b29a67ed0b004c820bcb9f716ee2b6013eb)
2023-05-02 12:39:15 +03:00
Ahmet Gedemenli 12c27ace2f Ignore nodes not allowed for shards, when planning rebalance steps (#6887)
We handle colocation groups whose shard group count is less than the
worker node count using a method different from the usual rebalancer.
See #6739.
While deciding whether to use this method, we should have ignored the
nodes that are marked `shouldhaveshards = false`. This PR excludes those
nodes when making the decision.

Adds a test such that:
 coordinator: []
 worker 1: [1_1, 1_2]
 worker 2: [2_1, 2_2]
(rebalance)
 coordinator: []
 worker 1: [1_1, 2_1]
 worker 2: [1_2, 2_2]

If we take the coordinator into account, the rebalancer considers the
first state balanced and does nothing (because shard_count <
worker_count). But with this PR we ignore the coordinator, because its
shouldhaveshards = false, so the rebalancer distributes each colocation
group to both workers.

Also, fixes an unrelated flaky test in the same file

(cherry picked from commit 59ccf364df)
2023-05-01 12:55:08 +02:00
aykut-bozkurt 262c335860 break sequence dependency during table creation (#6889)
We need to break the sequence dependency for a table while creating the
table during non-transactional metadata sync, to ensure that the creation
of the table is idempotent.

**Problem:**
When we send `SELECT
pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text)
FROM pg_dist_partition` to workers during the non-transactional sync, the
table might not yet be in `pg_dist_partition` on the worker, and the
sequence dependency is not broken there.

**Solution:**
We break the sequence dependency via `SELECT
pg_catalog.worker_drop_sequence_dependency(%s)`
for each table while creating it at the workers. It is safe to send this
since the UDF is a no-op when there is no sequence dependency.

DESCRIPTION: Fixes a bug related to sequence idempotency at
non-transactional sync.

Fixes https://github.com/citusdata/citus/issues/6888.

(cherry picked from commit 8cb69cfd13)
2023-04-28 15:12:02 +03:00
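A minimal sketch of the per-table command described above; the helper, list, and variable names are hypothetical, and only the `worker_drop_sequence_dependency` call mirrors the commit.

```c
#include "postgres.h"
#include "lib/stringinfo.h"
#include "nodes/pg_list.h"
#include "utils/builtins.h"		/* quote_literal_cstr */

/* hypothetical helper: append the dependency-breaking command for one table */
static List *
AddDropSequenceDependencyCommand(List *tableCreationCommandList,
								 char *qualifiedTableName)
{
	StringInfo command = makeStringInfo();

	/* the UDF is a no-op on workers where no sequence dependency exists */
	appendStringInfo(command,
					 "SELECT pg_catalog.worker_drop_sequence_dependency(%s);",
					 quote_literal_cstr(qualifiedTableName));

	return lappend(tableCreationCommandList, command->data);
}
```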
Emel Şimşek 7b98fbb05e When creating a HTAB we need to use HASH_COMPARE flag in order to set a user defined comparison function. (#6845)
DESCRIPTION: Fixes memory errors, caught by valgrind, of type
"conditional jump or move depends on uninitialized value"

When running Citus tests under Postgres with valgrind, the test cases
calling into `NonBlockingShardSplit` function produce valgrind errors of
type "conditional jump or move depends on uninitialized value".

The issue is caused by creating an HTAB the wrong way: the HASH_COMPARE
flag should have been used when creating an HTAB with a user-defined
comparison function. In the absence of the HASH_COMPARE flag, the HTAB
falls back to the built-in string comparison function. Valgrind then
detects that the match function is not assigned to the user-defined
function as intended.

Fixes #6835

(cherry picked from commit e7a25d82c9)
2023-04-18 14:50:09 +03:00
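A minimal sketch of the dynahash setup issue described above, with hypothetical key type and hash/compare functions; the essential part is passing `HASH_COMPARE` so that `info.match` is actually honored.

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* hypothetical key type and functions; only the flag usage is the point */
typedef struct SplitTargetKey
{
	uint32 nodeId;
	Oid ownerId;
} SplitTargetKey;

extern uint32 SplitTargetHash(const void *key, Size keysize);
extern int SplitTargetCompare(const void *a, const void *b, Size keysize);

static HTAB *
CreateSplitTargetHash(void)
{
	HASHCTL info;

	memset(&info, 0, sizeof(info));
	info.keysize = sizeof(SplitTargetKey);
	info.entrysize = sizeof(SplitTargetKey);
	info.hash = SplitTargetHash;
	info.match = SplitTargetCompare;	/* ignored unless HASH_COMPARE is set */

	/*
	 * Without HASH_COMPARE in the flags, dynahash silently falls back to its
	 * built-in string comparison and never calls SplitTargetCompare, which
	 * is what valgrind flagged as reads of uninitialized memory.
	 */
	return hash_create("split target hash", 32, &info,
					   HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
}
```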
Gokhan Gulbiz 58155c5779
Backport to 11.3 - Ensure partitionKeyValue and colocationId are set for proper tenant stats gathering (#6834) (#6862)
This PR updates the tenant stats implementation to set partitionKeyValue
and colocationId in ExecuteLocalTaskListExtended, in addition to
LocallyExecuteTaskPlan. This ensures that tenant stats can be properly
gathered regardless of the code path taken. The changes were initially
made while testing stored procedure calls for tenant stats.

(cherry picked from commit 8782ea1582)
2023-04-17 15:24:29 +03:00
aykut-bozkurt 17149b92b2 fix 3 flaky tests in failure schedule (#6846)
Fixed 3 flaky failure tests that caused flakiness in other tests due to
node and group sequence ids changing during node addition and removal.

(cherry picked from commit 3286ec59e9)
2023-04-13 13:19:35 +03:00
aykut-bozkurt 1a9066c34a fixes update propagation bug when `citus_set_coordinator_host` is called more than once (#6837)
DESCRIPTION: Fixes update propagation bug when
`citus_set_coordinator_host` is called more than once.

Fixes https://github.com/citusdata/citus/issues/6731.

(cherry picked from commit a20f7e1a55)
2023-04-13 13:18:28 +03:00
Halil Ozan Akgül e14f4c3dee Add CPU usage to citus_stat_tenants (#6844)
This PR adds CPU usage to `citus_stat_tenants` monitor.
CPU usage is tracked in periods, similar to query counts.

(cherry picked from commit 9ba70696f7)
2023-04-12 17:46:00 +03:00
Halil Ozan Akgül 5525676aad Fix citus_stat_tenants period updating bug (#6833)
Fixes the bug that caused the citus_stat_tenants periods to be updated
incorrectly.

`TimestampDifferenceExceeds` expects the difference in milliseconds, but
we were passing microseconds; this is fixed.
`tenantStats->lastQueryTime` was also updated during monitoring; now it is
updated only when there are tenant queries.

(cherry picked from commit 8b50e95dc8)
2023-04-12 17:45:44 +03:00
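A minimal sketch of the unit mismatch described above, with a hypothetical period constant; `TimestampDifferenceExceeds` takes its threshold in milliseconds.

```c
#include "postgres.h"
#include "utils/timestamp.h"

/* hypothetical period length: one minute, expressed in milliseconds */
#define STAT_TENANTS_PERIOD_MS (60 * 1000)

/*
 * TimestampDifferenceExceeds() takes the threshold in milliseconds, so the
 * old code that passed a microsecond value made the period 1000x longer
 * than intended.
 */
static bool
TenantStatsPeriodElapsed(TimestampTz periodStart)
{
	return TimestampDifferenceExceeds(periodStart, GetCurrentTimestamp(),
									  STAT_TENANTS_PERIOD_MS);
}
```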
rajeshkt78 234df62106
Add build-cdc-* temporary directories to .gitignore (#6842)
The CDC decoder builds different versions of the CDC base decoders during
the build. Since the source files are copied to temporary directories,
they show up in git status as files to be added. So these directories and
a temporary CDC TAP test directory (tmpcheck) are added to the .gitignore
file.
2023-04-11 08:54:43 +05:30
Onur Tirtir 5dd08835df Bump citus version to 11.3.0 2023-04-10 11:16:58 +03:00
100 changed files with 2842 additions and 1973 deletions

File diff suppressed because it is too large


@ -0,0 +1,23 @@
name: 'Parallelization matrix'
inputs:
count:
required: false
default: 32
outputs:
json:
value: ${{ steps.generate_matrix.outputs.json }}
runs:
using: "composite"
steps:
- name: Generate parallelization matrix
id: generate_matrix
shell: bash
run: |-
json_array="{\"include\": ["
for ((i = 1; i <= ${{ inputs.count }}; i++)); do
json_array+="{\"id\":\"$i\"},"
done
json_array=${json_array%,}
json_array+=" ]}"
echo "json=$json_array" >> "$GITHUB_OUTPUT"
echo "json=$json_array"


@ -0,0 +1,38 @@
name: save_logs_and_results
inputs:
folder:
required: false
default: "log"
runs:
using: composite
steps:
- uses: actions/upload-artifact@v3.1.1
name: Upload logs
with:
name: ${{ inputs.folder }}
if-no-files-found: ignore
path: |
src/test/**/proxy.output
src/test/**/results/
src/test/**/tmp_check/master/log
src/test/**/tmp_check/worker.57638/log
src/test/**/tmp_check/worker.57637/log
src/test/**/*.diffs
src/test/**/out/ddls.sql
src/test/**/out/queries.sql
src/test/**/logfile_*
/tmp/pg_upgrade_newData_logs
- name: Publish regression.diffs
run: |-
diffs="$(find src/test/regress -name "*.diffs" -exec cat {} \;)"
if ! [ -z "$diffs" ]; then
echo '```diff' >> $GITHUB_STEP_SUMMARY
echo -E "$diffs" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo -E $diffs
fi
shell: bash
- name: Print stack traces
run: "./ci/print_stack_trace.sh"
if: failure()
shell: bash


@ -0,0 +1,35 @@
name: setup_extension
inputs:
pg_major:
required: false
skip_installation:
required: false
default: false
type: boolean
runs:
using: composite
steps:
- name: Expose $PG_MAJOR to Github Env
run: |-
if [ -z "${{ inputs.pg_major }}" ]; then
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
else
echo "PG_MAJOR=${{ inputs.pg_major }}" >> $GITHUB_ENV
fi
shell: bash
- uses: actions/download-artifact@v3.0.1
with:
name: build-${{ env.PG_MAJOR }}
- name: Install Extension
if: ${{ inputs.skip_installation == 'false' }}
run: tar xfv "install-$PG_MAJOR.tar" --directory /
shell: bash
- name: Configure
run: |-
chown -R circleci .
git config --global --add safe.directory ${GITHUB_WORKSPACE}
gosu circleci ./configure --without-pg-version-check
shell: bash
- name: Enable core dumps
run: ulimit -c unlimited
shell: bash


@ -0,0 +1,27 @@
name: coverage
inputs:
flags:
required: false
codecov_token:
required: true
runs:
using: composite
steps:
- uses: codecov/codecov-action@v3
with:
flags: ${{ inputs.flags }}
token: ${{ inputs.codecov_token }}
verbose: true
gcov: true
- name: Create codeclimate coverage
run: |-
lcov --directory . --capture --output-file lcov.info
lcov --remove lcov.info -o lcov.info '/usr/*'
sed "s=^SF:$PWD/=SF:=g" -i lcov.info # relative pats are required by codeclimate
mkdir -p /tmp/codeclimate
cc-test-reporter format-coverage -t lcov -o /tmp/codeclimate/${{ inputs.flags }}.json lcov.info
shell: bash
- uses: actions/upload-artifact@v3.1.1
with:
path: "/tmp/codeclimate/*.json"
name: codeclimate

490
.github/workflows/build_and_test.yml vendored Normal file

@ -0,0 +1,490 @@
name: Build & Test
run-name: Build & Test - ${{ github.event.pull_request.title || github.ref_name }}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
skip_test_flakyness:
required: false
default: false
type: boolean
pull_request:
types: [opened, reopened,synchronize]
jobs:
# Since GHA does not interpolate env varibles in matrix context, we need to
# define them in a separate job and use them in other jobs.
params:
runs-on: ubuntu-latest
name: Initialize parameters
outputs:
build_image_name: "citus/extbuilder"
test_image_name: "citus/exttester"
citusupgrade_image_name: "citus/citusupgradetester"
fail_test_image_name: "citus/failtester"
pgupgrade_image_name: "citus/pgupgradetester"
style_checker_image_name: "citus/stylechecker"
style_checker_tools_version: "0.8.18"
image_suffix: "-v3417e8d"
pg13_version: '{ "major": "13", "full": "13.10" }'
pg14_version: '{ "major": "14", "full": "14.7" }'
pg15_version: '{ "major": "15", "full": "15.2" }'
upgrade_pg_versions: "13.10-14.7-15.2"
steps:
# Since GHA jobs needs at least one step we use a noop step here.
- name: Set up parameters
run: echo 'noop'
check-sql-snapshots:
needs: params
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.build_image_name }}:latest
options: --user root
steps:
- uses: actions/checkout@v3.5.0
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
ci/check_sql_snapshots.sh
check-style:
needs: params
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.style_checker_image_name }}:${{ needs.params.outputs.style_checker_tools_version }}${{ needs.params.outputs.image_suffix }}
steps:
- name: Check Snapshots
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- uses: actions/checkout@v3.5.0
with:
fetch-depth: 0
- name: Check C Style
run: citus_indent --check
- name: Check Python style
run: black --check .
- name: Check Python import order
run: isort --check .
- name: Check Python lints
run: flake8 .
- name: Fix whitespace
run: ci/editorconfig.sh && git diff --exit-code
- name: Remove useless declarations
run: ci/remove_useless_declarations.sh && git diff --cached --exit-code
- name: Normalize test output
run: ci/normalize_expected.sh && git diff --exit-code
- name: Check for C-style comments in migration files
run: ci/disallow_c_comments_in_migrations.sh && git diff --exit-code
- name: 'Check for comment--cached ns that start with # character in spec files'
run: ci/disallow_hash_comments_in_spec_files.sh && git diff --exit-code
- name: Check for gitignore entries .for source files
run: ci/fix_gitignore.sh && git diff --exit-code
- name: Check for lengths of changelog entries
run: ci/disallow_long_changelog_entries.sh
- name: Check for banned C API usage
run: ci/banned.h.sh
- name: Check for tests missing in schedules
run: ci/check_all_tests_are_run.sh
- name: Check if all CI scripts are actually run
run: ci/check_all_ci_scripts_are_run.sh
- name: Check if all GUCs are sorted alphabetically
run: ci/check_gucs_are_alphabetically_sorted.sh
- name: Check for missing downgrade scripts
run: ci/check_migration_files.sh
build:
needs: params
name: Build for PG${{ fromJson(matrix.pg_version).major }}
strategy:
fail-fast: false
matrix:
image_name:
- ${{ needs.params.outputs.build_image_name }}
image_suffix:
- ${{ needs.params.outputs.image_suffix}}
pg_version:
- ${{ needs.params.outputs.pg13_version }}
- ${{ needs.params.outputs.pg14_version }}
- ${{ needs.params.outputs.pg15_version }}
runs-on: ubuntu-20.04
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ matrix.image_suffix }}"
options: --user root
steps:
- uses: actions/checkout@v3.5.0
- name: Expose $PG_MAJOR to Github Env
run: echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
shell: bash
- name: Build
run: "./ci/build-citus.sh"
shell: bash
- uses: actions/upload-artifact@v3.1.1
with:
name: build-${{ env.PG_MAJOR }}
path: |-
./build-${{ env.PG_MAJOR }}/*
./install-${{ env.PG_MAJOR }}.tar
test-citus:
name: PG${{ fromJson(matrix.pg_version).major }} - ${{ matrix.make }}
strategy:
fail-fast: false
matrix:
suite:
- regress
image_name:
- ${{ needs.params.outputs.test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg13_version }}
- ${{ needs.params.outputs.pg13_version }}
- ${{ needs.params.outputs.pg14_version }}
make:
- check-split
- check-multi
- check-multi-1
- check-multi-mx
- check-vanilla
- check-isolation
- check-operations
- check-follower-cluster
- check-columnar
- check-columnar-isolation
- check-enterprise
- check-enterprise-isolation
- check-enterprise-isolation-logicalrep-1
- check-enterprise-isolation-logicalrep-2
- check-enterprise-isolation-logicalrep-3
include:
- make: check-failure
pg_version: ${{ needs.params.outputs.pg13_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg13_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-enterprise-failure
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg13_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg14_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: check-pytest
pg_version: ${{ needs.params.outputs.pg15_version }}
suite: regress
image_name: ${{ needs.params.outputs.fail_test_image_name }}
- make: installcheck
suite: cdc
image_name: ${{ needs.params.outputs.test_image_name }}
pg_version: ${{ needs.params.outputs.pg15_version }}
runs-on: ubuntu-20.04
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root --dns=8.8.8.8
# Due to Github creates a default network for each job, we need to use
# --dns= to have similar DNS settings as our other CI systems or local
# machines. Otherwise, we may see different results.
needs:
- params
- build
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
- name: Run Test
run: gosu circleci make -C src/test/${{ matrix.suite }} ${{ matrix.make }}
timeout-minutes: 20
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ fromJson(matrix.pg_version).major }}_${{ matrix.make }}
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.PG_MAJOR }}_${{ matrix.suite }}_${{ matrix.make }}
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-arbitrary-configs:
name: PG${{ fromJson(matrix.pg_version).major }} - check-arbitrary-configs-${{ matrix.parallel }}
runs-on: ["self-hosted", "1ES.Pool=1es-gha-citusdata-pool"]
container:
image: "${{ matrix.image_name }}:${{ fromJson(matrix.pg_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
image_name:
- ${{ needs.params.outputs.fail_test_image_name }}
pg_version:
- ${{ needs.params.outputs.pg13_version }}
- ${{ needs.params.outputs.pg14_version }}
- ${{ needs.params.outputs.pg15_version }}
parallel: [0,1,2,3,4,5] # workaround for running 6 parallel jobs
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
- name: Test arbitrary configs
run: |-
# we use parallel jobs to split the tests into 6 parts and run them in parallel
# the script below extracts the tests for the current job
N=6 # Total number of jobs (see matrix.parallel)
X=${{ matrix.parallel }} # Current job number
TESTS=$(src/test/regress/citus_tests/print_test_names.py |
tr '\n' ',' | awk -v N="$N" -v X="$X" -F, '{
split("", parts)
for (i = 1; i <= NF; i++) {
parts[i % N] = parts[i % N] $i ","
}
print substr(parts[X], 1, length(parts[X])-1)
}')
echo $TESTS
gosu circleci \
make -C src/test/regress \
check-arbitrary-configs parallel=4 CONFIGS=$TESTS
- uses: "./.github/actions/save_logs_and_results"
if: always()
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-pg-upgrade:
name: PG${{ matrix.old_pg_major }}-PG${{ matrix.new_pg_major }} - check-pg-upgrade
runs-on: ubuntu-20.04
container:
image: "${{ needs.params.outputs.pgupgrade_image_name }}:${{ needs.params.outputs.upgrade_pg_versions }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
strategy:
fail-fast: false
matrix:
include:
- old_pg_major: 13
new_pg_major: 14
- old_pg_major: 14
new_pg_major: 15
env:
old_pg_major: ${{ matrix.old_pg_major }}
new_pg_major: ${{ matrix.new_pg_major }}
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.old_pg_major }}"
- uses: "./.github/actions/setup_extension"
with:
pg_major: "${{ env.new_pg_major }}"
- name: Install and test postgres upgrade
run: |-
gosu circleci \
make -C src/test/regress \
check-pg-upgrade \
old-bindir=/usr/lib/postgresql/${{ env.old_pg_major }}/bin \
new-bindir=/usr/lib/postgresql/${{ env.new_pg_major }}/bin
- name: Copy pg_upgrade logs for newData dir
run: |-
mkdir -p /tmp/pg_upgrade_newData_logs
if ls src/test/regress/tmp_upgrade/newData/*.log 1> /dev/null 2>&1; then
cp src/test/regress/tmp_upgrade/newData/*.log /tmp/pg_upgrade_newData_logs
fi
if: failure()
- uses: "./.github/actions/save_logs_and_results"
if: always()
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.old_pg_major }}_${{ env.new_pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
test-citus-upgrade:
name: PG${{ fromJson(needs.params.outputs.pg13_version).major }} - check-citus-upgrade
runs-on: ubuntu-20.04
container:
image: "${{ needs.params.outputs.citusupgrade_image_name }}:${{ fromJson(needs.params.outputs.pg13_version).full }}${{ needs.params.outputs.image_suffix }}"
options: --user root
needs:
- params
- build
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
with:
skip_installation: true
- name: Install and test citus upgrade
run: |-
# run make check-citus-upgrade for all citus versions
# the image has ${CITUS_VERSIONS} set with all verions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-old-version=${citus_version} \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
# run make check-citus-upgrade-mixed for all citus versions
# the image has ${CITUS_VERSIONS} set with all verions it contains the binaries of
for citus_version in ${CITUS_VERSIONS}; do \
gosu circleci \
make -C src/test/regress \
check-citus-upgrade-mixed \
citus-old-version=${citus_version} \
bindir=/usr/lib/postgresql/${PG_MAJOR}/bin \
citus-pre-tar=/install-pg${PG_MAJOR}-citus${citus_version}.tar \
citus-post-tar=${GITHUB_WORKSPACE}/install-$PG_MAJOR.tar; \
done;
- uses: "./.github/actions/save_logs_and_results"
if: always()
- uses: "./.github/actions/upload_coverage"
if: always()
with:
flags: ${{ env.pg_major }}_upgrade
codecov_token: ${{ secrets.CODECOV_TOKEN }}
upload-coverage:
if: always()
env:
CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.test_image_name }}:${{ fromJson(needs.params.outputs.pg15_version).full }}${{ needs.params.outputs.image_suffix }}
needs:
- params
- test-citus
- test-arbitrary-configs
- test-citus-upgrade
- test-pg-upgrade
steps:
- uses: actions/download-artifact@v3.0.1
with:
name: "codeclimate"
path: "codeclimate"
- name: Upload coverage results to Code Climate
run: |-
cc-test-reporter sum-coverage codeclimate/*.json -o total.json
cc-test-reporter upload-coverage -i total.json
ch_benchmark:
name: CH Benchmark
if: startsWith(github.ref, 'refs/heads/ch_benchmark/')
runs-on: ubuntu-20.04
needs:
- build
steps:
- uses: actions/checkout@v3.5.0
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: install dependencies and run ch_benchmark tests
uses: azure/CLI@v1
with:
inlineScript: |
cd ./src/test/hammerdb
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_ch_benchmark_rg
tpcc_benchmark:
name: TPCC Benchmark
if: startsWith(github.ref, 'refs/heads/tpcc_benchmark/')
runs-on: ubuntu-20.04
needs:
- build
steps:
- uses: actions/checkout@v3.5.0
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: install dependencies and run tpcc_benchmark tests
uses: azure/CLI@v1
with:
inlineScript: |
cd ./src/test/hammerdb
chmod +x run_hammerdb.sh
run_hammerdb.sh citusbot_tpcc_benchmark_rg
prepare_parallelization_matrix_32:
name: Parallel 32
if: ${{ needs.test-flakyness-pre.outputs.tests != ''}}
needs: test-flakyness-pre
runs-on: ubuntu-20.04
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/parallelization"
id: parallelization
with:
count: 32
test-flakyness-pre:
name: Detect regression tests need to be ran
if: ${{ !inputs.skip_test_flakyness }}}
runs-on: ubuntu-20.04
needs: build
outputs:
tests: ${{ steps.detect-regression-tests.outputs.tests }}
steps:
- uses: actions/checkout@v3.5.0
with:
fetch-depth: 0
- name: Detect regression tests need to be ran
id: detect-regression-tests
run: |-
detected_changes=$(git diff origin/release-11.3... --name-only --diff-filter=AM | (grep 'src/test/regress/sql/.*\.sql\|src/test/regress/spec/.*\.spec\|src/test/regress/citus_tests/test/test_.*\.py' || true))
tests=${detected_changes}
if [ -z "$tests" ]; then
echo "No test found."
else
echo "Detected tests " $tests
fi
echo 'tests<<EOF' >> $GITHUB_OUTPUT
echo "$tests" >> "$GITHUB_OUTPUT"
echo 'EOF' >> $GITHUB_OUTPUT
test-flakyness:
if: false
name: Test flakyness
runs-on: ubuntu-20.04
container:
image: ${{ needs.params.outputs.fail_test_image_name }}:${{ needs.params.outputs.pg15_version }}${{ needs.params.outputs.image_suffix }}
options: --user root
env:
runs: 8
needs:
- params
- build
- test-flakyness-pre
- prepare_parallelization_matrix_32
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix_32.outputs.json) }}
steps:
- uses: actions/checkout@v3.5.0
- uses: actions/download-artifact@v3.0.1
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
tests="${{ needs.test-flakyness-pre.outputs.tests }}"
tests_array=($tests)
for test in "${tests_array[@]}"
do
test_name=$(echo "$test" | sed -r "s/.+\/(.+)\..+/\1/")
gosu circleci src/test/regress/citus_tests/run_test.py $test_name --repeat ${{ env.runs }} --use-base-schedule --use-whole-schedule-line
done
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()


@ -0,0 +1,79 @@
name: Flaky test debugging
run-name: Flaky test debugging - ${{ inputs.flaky_test }} (${{ inputs.flaky_test_runs_per_job }}x${{ inputs.flaky_test_parallel_jobs }})
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
flaky_test:
required: true
type: string
description: Test to run
flaky_test_runs_per_job:
required: false
default: 8
type: number
description: Number of times to run the test
flaky_test_parallel_jobs:
required: false
default: 32
type: number
description: Number of parallel jobs to run
jobs:
build:
name: Build Citus
runs-on: ubuntu-latest
container:
image: ${{ vars.build_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
steps:
- uses: actions/checkout@v3.5.0
- name: Configure, Build, and Install
run: |
echo "PG_MAJOR=${PG_MAJOR}" >> $GITHUB_ENV
./ci/build-citus.sh
shell: bash
- uses: actions/upload-artifact@v3.1.1
with:
name: build-${{ env.PG_MAJOR }}
path: |-
./build-${{ env.PG_MAJOR }}/*
./install-${{ env.PG_MAJOR }}.tar
prepare_parallelization_matrix:
name: Prepare parallelization matrix
runs-on: ubuntu-latest
outputs:
json: ${{ steps.parallelization.outputs.json }}
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/parallelization"
id: parallelization
with:
count: ${{ inputs.flaky_test_parallel_jobs }}
test_flakyness:
name: Test flakyness
runs-on: ubuntu-latest
container:
image: ${{ vars.fail_test_image_name }}:${{ vars.pg15_version }}${{ vars.image_suffix }}
options: --user root
needs:
[build, prepare_parallelization_matrix]
env:
test: "${{ inputs.flaky_test }}"
runs: "${{ inputs.flaky_test_runs_per_job }}"
skip: false
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.prepare_parallelization_matrix.outputs.json) }}
steps:
- uses: actions/checkout@v3.5.0
- uses: "./.github/actions/setup_extension"
- name: Run minimal tests
run: |-
gosu circleci src/test/regress/citus_tests/run_test.py ${{ env.test }} --repeat ${{ env.runs }} --use-base-schedule --use-whole-schedule-line
shell: bash
- uses: "./.github/actions/save_logs_and_results"
if: always()
with:
folder: ${{ matrix.id }}


@ -20,14 +20,16 @@ jobs:
- name: Get Postgres Versions
id: get-postgres-versions
run: |
# Postgres versions are stored in .circleci/config.yml file in "build-[pg-version] format. Below command
# extracts the versions and get the unique values.
pg_versions=`grep -Eo 'build-[[:digit:]]{2}' .circleci/config.yml|sed -e "s/^build-//"|sort|uniq|tr '\n' ','| head -c -1`
set -euxo pipefail
# Postgres versions are stored in .github/workflows/build_and_test.yml
# file in json strings with major and full keys.
# Below command extracts the versions and get the unique values.
pg_versions=$(cat .github/workflows/build_and_test.yml | grep -oE '"major": "[0-9]+", "full": "[0-9.]+"' | sed -E 's/"major": "([0-9]+)", "full": "([0-9.]+)"/\1/g' | sort | uniq | tr '\n', ',')
pg_versions_array="[ ${pg_versions} ]"
echo "Supported PG Versions: ${pg_versions_array}"
# Below line is needed to set the output variable to be used in the next job
echo "pg_versions=${pg_versions_array}" >> $GITHUB_OUTPUT
shell: bash
rpm_build_tests:
name: rpm_build_tests
needs: get_postgres_versions_from_file
@ -43,7 +45,7 @@ jobs:
- oraclelinux-7
- oraclelinux-8
- centos-7
- centos-8
- almalinux-8
- almalinux-9
POSTGRES_VERSION: ${{ fromJson(needs.get_postgres_versions_from_file.outputs.pg_versions) }}
@ -112,7 +114,6 @@ jobs:
- ubuntu-bionic-all
- ubuntu-focal-all
- ubuntu-jammy-all
- ubuntu-kinetic-all
POSTGRES_VERSION: ${{ fromJson(needs.get_postgres_versions_from_file.outputs.pg_versions) }}
@ -155,7 +156,7 @@ jobs:
run: |
echo "Postgres version: ${POSTGRES_VERSION}"
apt-get update -y
apt-get update -y || true
## Install required packages to execute packaging tools for deb based distros
apt-get install python3-dev python3-pip -y
apt-get purge -y python3-yaml

2
.gitignore vendored

@ -38,6 +38,8 @@ lib*.pc
/Makefile.global
/src/Makefile.custom
/compile_commands.json
/src/backend/distributed/cdc/build-cdc-*/*
/src/test/cdc/tmp_check/*
# temporary files vim creates
*.swp


@ -1,3 +1,225 @@
### citus v11.3.1 (February 12, 2024) ###
* Disallows MERGE when the query prunes down to zero shards (#6946)
* Fixes a bug related to non-existent objects in DDL commands (#6984)
* Fixes a bug that could cause COPY logic to skip data in case of OOM (#7152)
* Fixes a bug with deleting colocation groups (#6929)
* Fixes incorrect results on fetching scrollable with hold cursors (#7014)
* Fixes memory and memory context leaks in Foreign Constraint Graphs (#7236)
* Fixes replicate reference tables task fail when user is not superuser (#6930)
* Fixes the incorrect column count after ALTER TABLE (#7379)
* Improves citus_shard_sizes performance (#7050)
* Makes sure to disallow creating a replicated distributed table
concurrently (#7219)
* Removes pg_send_cancellation and all references (#7135)
### citus v11.3.0 (May 2, 2023) ###
* Introduces CDC implementation for Citus using logical replication
(#6623, #6810, #6827)
* Adds support for `MERGE` command on co-located distributed tables joined on
distribution column (#6696, #6733)
* Adds the view `citus_stats_tenants` that monitor statistics on tenant usages
(#6725)
* Adds the GUC `citus.max_background_task_executors_per_node` to control number
of background task executors involving a node (#6771)
* Allows parallel shard moves in background rebalancer (#6756)
* Introduces the GUC `citus.metadata_sync_mode` that introduces nontransactional
mode for metadata sync (#6728, #6889)
* Propagates CREATE/ALTER/DROP PUBLICATION statements for distributed tables
(#6776)
* Adds the GUC `citus.enable_non_colocated_router_query_pushdown` to ensure
generating a consistent distributed plan for the queries that reference
non-colocated distributed tables when set to "false" (#6793)
* Checks if all moves are able to be done via logical replication for rebalancer
(#6754)
* Correctly reports shard size in `citus_shards` view (#6748)
* Fixes a bug in shard copy operations (#6721)
* Fixes a bug that prevents enforcing identity column restrictions on worker
nodes (#6738)
* Fixes a bug with `INSERT .. SELECT` queries with identity columns (#6802)
* Fixes an issue that caused some queries with custom aggregates to fail (#6805)
* Fixes an issue when `citus_set_coordinator_host` is called more than once
(#6837)
* Fixes an uninitialized memory access in shard split API (#6845)
* Fixes memory leak and max allocation block errors during metadata syncing
(#6728)
* Fixes memory leak in `undistribute_table` (#6693)
* Fixes memory leak in `alter_distributed_table` (#6726)
* Fixes memory leak in `create_distributed_table` (#6722)
* Fixes memory leak issue with query results that returns single row (#6724)
* Improves rebalancer when shard groups have placement count less than worker
count (#6739)
* Makes sure to stop maintenance daemon when dropping a database even without
Citus extension (#6688)
* Prevents using `alter_distributed_table` and `undistribute_table` UDFs when a
table has identity columns (#6738)
* Prevents using identity columns on data types other than `bigint` on
distributed tables (#6738)
### citus v11.2.1 (April 20, 2023) ###
* Correctly reports shard size in `citus_shards` view (#6748)
* Fixes a bug in shard copy operations (#6721)
* Fixes a bug with `INSERT .. SELECT` queries with identity columns (#6802)
* Fixes an uninitialized memory access in shard split API (#6845)
* Fixes compilation for PG13.10 and PG14.7 (#6711)
* Fixes memory leak in `alter_distributed_table` (#6726)
* Fixes memory leak issue with query results that returns single row (#6724)
* Prevents using `alter_distributed_table` and `undistribute_table` UDFs when a
table has identity columns (#6738)
* Prevents using identity columns on data types other than `bigint` on
distributed tables (#6738)
### citus v11.1.6 (April 20, 2023) ###
* Correctly reports shard size in `citus_shards` view (#6748)
* Fixes a bug in shard copy operations (#6721)
* Fixes a bug that breaks pg upgrades if the user has a columnar table (#6624)
* Fixes a bug that causes background rebalancer to fail when a reference table
doesn't have a primary key (#6682)
* Fixes a regression in allowed foreign keys on distributed tables (#6550)
* Fixes a use-after-free bug in connection management (#6685)
* Fixes an unexpected foreign table error by disallowing to drop the
`table_name` option (#6669)
* Fixes an uninitialized memory access in shard split API (#6845)
* Fixes compilation for PG13.10 and PG14.7 (#6711)
* Fixes crash that happens when trying to replicate a reference table that is
actually dropped (#6595)
* Fixes memory leak issue with query results that returns single row (#6724)
* Fixes the modifiers for subscription and role creation (#6603)
* Makes sure to quote all identifiers used for logical replication to prevent
potential issues (#6604)
* Makes sure to skip foreign key validations at the end of shard moves (#6640)
### citus v11.0.8 (April 20, 2023) ###
* Correctly reports shard size in `citus_shards` view (#6748)
* Fixes a bug that breaks pg upgrades if the user has a columnar table (#6624)
* Fixes an unexpected foreign table error by disallowing to drop the
`table_name` option (#6669)
* Fixes compilation warning on PG13 + OpenSSL 3.0 (#6038, #6502)
* Fixes crash that happens when trying to replicate a reference table that is
actually dropped (#6595)
* Fixes memory leak issue with query results that returns single row (#6724)
* Fixes the modifiers for subscription and role creation (#6603)
* Fixes two potential dangling pointer issues (#6504, #6507)
* Makes sure to quote all identifiers used for logical replication to prevent
potential issues (#6604)
### citus v10.2.9 (April 20, 2023) ###
* Correctly reports shard size in `citus_shards` view (#6748)
* Fixes a bug in `ALTER EXTENSION citus UPDATE` (#6383)
* Fixes a bug that breaks pg upgrades if the user has a columnar table (#6624)
* Fixes a bug that prevents retaining columnar table options after a
table-rewrite (#6337)
* Fixes memory leak issue with query results that returns single row (#6724)
* Raises memory limits in columnar from 256MB to 1GB for reads and writes
(#6419)
### citus v10.1.6 (April 20, 2023) ###
* Fixes a crash that occurs when the aggregate that cannot be pushed-down
returns empty result from a worker (#5679)
* Fixes columnar freezing/wraparound bug (#5962)
* Fixes memory leak issue with query results that returns single row (#6724)
* Prevents alter table functions from dropping extensions (#5974)
### citus v10.0.8 (April 20, 2023) ###
* Fixes a bug that could break `DROP SCHEMA/EXTENSON` commands when there is a
columnar table (#5458)
* Fixes a crash that occurs when the aggregate that cannot be pushed-down
returns empty result from a worker (#5679)
* Fixes columnar freezing/wraparound bug (#5962)
* Fixes memory leak issue with query results that returns single row (#6724)
* Prevents alter table functions from dropping extensions (#5974)
### citus v9.5.12 (April 20, 2023) ###
* Fixes a crash that occurs when the aggregate that cannot be pushed-down
returns empty result from a worker (#5679)
* Fixes memory leak issue with query results that returns single row (#6724)
* Prevents alter table functions from dropping extensions (#5974)
### citus v11.2.0 (January 30, 2023) ###
* Adds support for outer joins with reference tables / complex subquery-CTEs


@ -11,7 +11,7 @@ endif
include Makefile.global
all: extension pg_send_cancellation
all: extension
# build columnar only
@ -40,22 +40,14 @@ clean-full:
install-downgrades:
$(MAKE) -C src/backend/distributed/ install-downgrades
install-all: install-headers install-pg_send_cancellation
install-all: install-headers
$(MAKE) -C src/backend/columnar/ install-all
$(MAKE) -C src/backend/distributed/ install-all
# build citus_send_cancellation binary
pg_send_cancellation:
$(MAKE) -C src/bin/pg_send_cancellation/ all
install-pg_send_cancellation: pg_send_cancellation
$(MAKE) -C src/bin/pg_send_cancellation/ install
clean-pg_send_cancellation:
$(MAKE) -C src/bin/pg_send_cancellation/ clean
.PHONY: pg_send_cancellation install-pg_send_cancellation clean-pg_send_cancellation
# Add to generic targets
install: install-extension install-headers install-pg_send_cancellation
clean: clean-extension clean-pg_send_cancellation
install: install-extension install-headers
clean: clean-extension
# apply or check style
reindent:


@ -15,9 +15,6 @@ PG_MAJOR=${PG_MAJOR:?please provide the postgres major version}
codename=${VERSION#*(}
codename=${codename%)*}
# get project from argument
project="${CIRCLE_PROJECT_REPONAME}"
# we'll do everything with absolute paths
basedir="$(pwd)"
@ -28,7 +25,7 @@ build_ext() {
pg_major="$1"
builddir="${basedir}/build-${pg_major}"
echo "Beginning build of ${project} for PostgreSQL ${pg_major}..." >&2
echo "Beginning build for PostgreSQL ${pg_major}..." >&2
# do everything in a subdirectory to avoid clutter in current directory
mkdir -p "${builddir}" && cd "${builddir}"


@ -14,8 +14,8 @@ ci_scripts=$(
grep -v -E '^(ci_helpers.sh|fix_style.sh)$'
)
for script in $ci_scripts; do
if ! grep "\\bci/$script\\b" .circleci/config.yml > /dev/null; then
echo "ERROR: CI script with name \"$script\" is not actually used in .circleci/config.yml"
if ! grep "\\bci/$script\\b" -r .github > /dev/null; then
echo "ERROR: CI script with name \"$script\" is not actually used in .github folder"
exit 1
fi
if ! grep "^## \`$script\`\$" ci/README.md > /dev/null; then


@ -1,96 +0,0 @@
#!/bin/bash
# Testing this script locally requires you to set the following environment
# variables:
# CIRCLE_BRANCH, GIT_USERNAME and GIT_TOKEN
# fail if trying to reference a variable that is not set.
set -u
# exit immediately if a command fails
set -e
# Fail on pipe failures
set -o pipefail
PR_BRANCH="${CIRCLE_BRANCH}"
ENTERPRISE_REMOTE="https://${GIT_USERNAME}:${GIT_TOKEN}@github.com/citusdata/citus-enterprise"
# shellcheck disable=SC1091
source ci/ci_helpers.sh
# List executed commands. This is done so debugging this script is easier when
# it fails. It's explicitly done after git remote add so username and password
# are not shown in CI output (even though it's also filtered out by CircleCI)
set -x
check_compile () {
echo "INFO: checking if merged code can be compiled"
./configure --without-libcurl
make -j10
}
# Clone current git repo (which should be community) to a temporary working
# directory and go there
GIT_DIR_ROOT="$(git rev-parse --show-toplevel)"
TMP_GIT_DIR="$(mktemp --directory -t citus-merge-check.XXXXXXXXX)"
git clone "$GIT_DIR_ROOT" "$TMP_GIT_DIR"
cd "$TMP_GIT_DIR"
# Fails in CI without this
git config user.email "citus-bot@microsoft.com"
git config user.name "citus bot"
# Disable "set -x" temporarily, because $ENTERPRISE_REMOTE contains passwords
{ set +x ; } 2> /dev/null
git remote add enterprise "$ENTERPRISE_REMOTE"
set -x
git remote set-url --push enterprise no-pushing
# Fetch enterprise-master
git fetch enterprise enterprise-master
git checkout "enterprise/enterprise-master"
if git merge --no-commit "origin/$PR_BRANCH"; then
echo "INFO: community PR branch could be merged into enterprise-master"
# check that we can compile after the merge
if check_compile; then
exit 0
fi
echo "WARN: Failed to compile after community PR branch was merged into enterprise"
fi
# undo partial merge
git merge --abort
# If we have a conflict on enterprise merge on the master branch, we have a problem.
# Provide an error message to indicate that enterprise merge is needed to fix this check.
if [[ $PR_BRANCH = master ]]; then
echo "ERROR: Master branch has merge conflicts with enterprise-master."
echo "Try re-running this CI job after merging your changes into enterprise-master."
exit 1
fi
if ! git fetch enterprise "$PR_BRANCH" ; then
echo "ERROR: enterprise/$PR_BRANCH was not found and community PR branch could not be merged into enterprise-master"
exit 1
fi
# Show the top commit of the enterprise PR branch to make debugging easier
git log -n 1 "enterprise/$PR_BRANCH"
# Check that this branch contains the top commit of the current community PR
# branch. If it does not it means it's not up to date with the current PR, so
# the enterprise branch should be updated.
if ! git merge-base --is-ancestor "origin/$PR_BRANCH" "enterprise/$PR_BRANCH" ; then
echo "ERROR: enterprise/$PR_BRANCH is not up to date with community PR branch"
exit 1
fi
# Now check if we can merge the enterprise PR into enterprise-master without
# issues.
git merge --no-commit "enterprise/$PR_BRANCH"
# check that we can compile after the merge
check_compile

18
configure vendored

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for Citus 11.3devel.
# Generated by GNU Autoconf 2.69 for Citus 11.3.1.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Citus'
PACKAGE_TARNAME='citus'
PACKAGE_VERSION='11.3devel'
PACKAGE_STRING='Citus 11.3devel'
PACKAGE_VERSION='11.3.1'
PACKAGE_STRING='Citus 11.3.1'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@ -1262,7 +1262,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures Citus 11.3devel to adapt to many kinds of systems.
\`configure' configures Citus 11.3.1 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1324,7 +1324,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of Citus 11.3devel:";;
short | recursive ) echo "Configuration of Citus 11.3.1:";;
esac
cat <<\_ACEOF
@ -1429,7 +1429,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
Citus configure 11.3devel
Citus configure 11.3.1
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -1912,7 +1912,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Citus $as_me 11.3devel, which was
It was created by Citus $as_me 11.3.1, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -5393,7 +5393,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by Citus $as_me 11.3devel, which was
This file was extended by Citus $as_me 11.3.1, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -5455,7 +5455,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
Citus config.status 11.3devel
Citus config.status 11.3.1
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@ -5,7 +5,7 @@
# everyone needing autoconf installed, the resulting files are checked
# into the SCM.
AC_INIT([Citus], [11.3devel])
AC_INIT([Citus], [11.3.1])
AC_COPYRIGHT([Copyright (c) Citus Data, Inc.])
# we'll need sed and awk for some of the version commands

View File

@ -1,6 +1,6 @@
# Columnar extension
comment = 'Citus Columnar extension'
default_version = '11.2-1'
default_version = '11.3-1'
module_pathname = '$libdir/citus_columnar'
relocatable = false
schema = pg_catalog

View File

@ -0,0 +1 @@
-- citus_columnar--11.2-1--11.3-1

View File

@ -0,0 +1 @@
-- citus_columnar--11.3-1--11.2-1

View File

@ -1,6 +1,6 @@
# Citus extension
comment = 'Citus distributed database'
default_version = '11.3-1'
default_version = '11.3-2'
module_pathname = '$libdir/citus'
relocatable = false
schema = pg_catalog

View File

@ -1710,20 +1710,13 @@ ReplaceTable(Oid sourceId, Oid targetId, List *justBeforeDropCommands,
}
else if (ShouldSyncTableMetadata(sourceId))
{
char *qualifiedTableName = quote_qualified_identifier(schemaName, sourceName);
/*
* We are converting a citus local table to a distributed/reference table,
* so we should prevent dropping the sequence on the table. Otherwise, we'd
* lose track of the previous changes in the sequence.
*/
StringInfo command = makeStringInfo();
appendStringInfo(command,
"SELECT pg_catalog.worker_drop_sequence_dependency(%s);",
quote_literal_cstr(qualifiedTableName));
SendCommandToWorkersWithMetadata(command->data);
char *command = WorkerDropSequenceDependencyCommand(sourceId);
SendCommandToWorkersWithMetadata(command);
}
}

View File

@ -415,6 +415,19 @@ CreateDistributedTableConcurrently(Oid relationId, char *distributionColumnName,
if (!IsColocateWithDefault(colocateWithTableName) && !IsColocateWithNone(
colocateWithTableName))
{
if (replicationModel != REPLICATION_MODEL_STREAMING)
{
ereport(ERROR, (errmsg("cannot create distributed table "
"concurrently because Citus allows "
"concurrent table distribution only when "
"citus.shard_replication_factor = 1"),
errhint("table %s is requested to be colocated "
"with %s which has "
"citus.shard_replication_factor > 1",
get_rel_name(relationId),
colocateWithTableName)));
}
EnsureColocateWithTableIsValid(relationId, distributionMethod,
distributionColumnName,
colocateWithTableName);
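As a hedged illustration of the new guard, the following sketch (with hypothetical tables `t1`/`t2` and the standard `create_distributed_table` / `create_distributed_table_concurrently` UDFs) shows the scenario that now errors out:

```sql
-- t1 is distributed with shard replication factor 2
SET citus.shard_replication_factor TO 2;
SELECT create_distributed_table('t1', 'id');

-- concurrent distribution requires streaming replication (factor 1), so
-- colocating t2 with the replicated table t1 is now rejected up front
SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table_concurrently('t2', 'id', colocate_with => 't1');
-- ERROR:  cannot create distributed table concurrently because Citus allows
--         concurrent table distribution only when citus.shard_replication_factor = 1
```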

View File

@ -393,9 +393,17 @@ GetDependencyCreateDDLCommands(const ObjectAddress *dependency)
tableDDLCommand));
}
/* we need to drop table, if exists, first to make table creation idempotent */
/*
* We need to drop table, if exists, first to make table creation
* idempotent. Before dropping the table, we should also break
* dependencies with sequences since `drop cascade table` would also
* drop depended sequences. This is safe as we still record dependency
* with the sequence during table creation.
*/
commandList = lcons(DropTableIfExistsCommand(relationId),
commandList);
commandList = lcons(WorkerDropSequenceDependencyCommand(relationId),
commandList);
}
return commandList;
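Put concretely, the prepended commands mean a worker now receives something like the following before the shell table is recreated; `public.items` is a hypothetical table and the exact DROP text comes from DropTableIfExistsCommand():

```sql
-- break the sequence dependency first so the subsequent drop cannot cascade
-- into sequences that the recreated table still needs
SELECT pg_catalog.worker_drop_sequence_dependency('public.items');
DROP TABLE IF EXISTS public.items CASCADE;
-- ... followed by the regular CREATE TABLE and metadata commands
```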

View File

@ -187,7 +187,9 @@ multi_ProcessUtility(PlannedStmt *pstmt,
IsA(parsetree, ExecuteStmt) ||
IsA(parsetree, PrepareStmt) ||
IsA(parsetree, DiscardStmt) ||
IsA(parsetree, DeallocateStmt))
IsA(parsetree, DeallocateStmt) ||
IsA(parsetree, DeclareCursorStmt) ||
IsA(parsetree, FetchStmt))
{
/*
* Skip additional checks for common commands that do not have any

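With `DeclareCursorStmt` and `FetchStmt` added to that fast-path list, ordinary cursor traffic skips the extra utility checks; a minimal sketch against a hypothetical distributed table `orders`:

```sql
BEGIN;
DECLARE order_cursor CURSOR FOR SELECT * FROM orders WHERE status = 'open';
FETCH 100 FROM order_cursor;
CLOSE order_cursor;
COMMIT;
```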
View File

@ -716,14 +716,14 @@ PutRemoteCopyData(MultiConnection *connection, const char *buffer, int nbytes)
Assert(PQisnonblocking(pgConn));
int copyState = PQputCopyData(pgConn, buffer, nbytes);
if (copyState == -1)
if (copyState <= 0)
{
return false;
}
/*
* PQputCopyData may have queued up part of the data even if it managed
* to send some of it succesfully. We provide back pressure by waiting
* to send some of it successfully. We provide back pressure by waiting
* until the socket is writable to prevent the internal libpq buffers
* from growing excessively.
*

View File

@ -1406,8 +1406,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
/* Assert we processed the right number of columns */
#ifdef USE_ASSERT_CHECKING
while (i < colinfo->num_cols && colinfo->colnames[i] == NULL)
i++;
for (int col_index = 0; col_index < colinfo->num_cols; col_index++)
{
/*
* In the above processing-loops, "i" advances only if
* the column is not new, check if this is a new column.
*/
if (colinfo->is_new_col[col_index])
i++;
}
Assert(i == colinfo->num_cols);
Assert(j == nnewcolumns);
#endif

View File

@ -1529,8 +1529,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
/* Assert we processed the right number of columns */
#ifdef USE_ASSERT_CHECKING
while (i < colinfo->num_cols && colinfo->colnames[i] == NULL)
i++;
for (int col_index = 0; col_index < colinfo->num_cols; col_index++)
{
/*
* In the above processing-loops, "i" advances only if
* the column is not new, check if this is a new column.
*/
if (colinfo->is_new_col[col_index])
i++;
}
Assert(i == colinfo->num_cols);
Assert(j == nnewcolumns);
#endif

View File

@ -1566,8 +1566,15 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
/* Assert we processed the right number of columns */
#ifdef USE_ASSERT_CHECKING
while (i < colinfo->num_cols && colinfo->colnames[i] == NULL)
i++;
for (int col_index = 0; col_index < colinfo->num_cols; col_index++)
{
/*
* In the above processing-loops, "i" advances only if
* the column is not new, check if this is a new column.
*/
if (colinfo->is_new_col[col_index])
i++;
}
Assert(i == colinfo->num_cols);
Assert(j == nnewcolumns);
#endif

View File

@ -780,7 +780,19 @@ CitusEndScan(CustomScanState *node)
*/
static void
CitusReScan(CustomScanState *node)
{ }
{
if (node->ss.ps.ps_ResultTupleSlot)
{
ExecClearTuple(node->ss.ps.ps_ResultTupleSlot);
}
ExecScanReScan(&node->ss);
CitusScanState *scanState = (CitusScanState *) node;
if (scanState->tuplestorestate)
{
tuplestore_rescan(scanState->tuplestorestate);
}
}
/*

View File

@ -129,6 +129,8 @@ static void LogLocalCommand(Task *task);
static uint64 LocallyPlanAndExecuteMultipleQueries(List *queryStrings,
TupleDestination *tupleDest,
Task *task);
static void SetColocationIdAndPartitionKeyValueForTasks(List *taskList,
Job *distributedPlan);
static void LocallyExecuteUtilityTask(Task *task);
static void ExecuteUdfTaskQuery(Query *localUdfCommandQuery);
static void EnsureTransitionPossible(LocalExecutionStatus from,
@ -228,6 +230,17 @@ ExecuteLocalTaskListExtended(List *taskList,
EnsureTaskExecutionAllowed(isRemote);
}
/*
* If workerJob has a partitionKeyValue, we need to set the colocation id
* and partition key value for each task before we start executing them
* because tenant stats are collected based on these values of a task.
*/
if (distributedPlan != NULL && distributedPlan->workerJob != NULL && taskList != NIL)
{
SetJobColocationId(distributedPlan->workerJob);
SetColocationIdAndPartitionKeyValueForTasks(taskList, distributedPlan->workerJob);
}
/*
* Use a new memory context that gets reset after every task to free
* the deparsed query string and query plan.
@ -367,6 +380,26 @@ ExecuteLocalTaskListExtended(List *taskList,
}
/*
* SetColocationIdAndPartitionKeyValueForTasks sets colocationId and partitionKeyValue
* for the tasks in the taskList if workerJob has a colocationId and partitionKeyValue.
*/
static void
SetColocationIdAndPartitionKeyValueForTasks(List *taskList, Job *workerJob)
{
if (workerJob->colocationId != 0 &&
workerJob->partitionKeyValue != NULL)
{
Task *task = NULL;
foreach_ptr(task, taskList)
{
task->colocationId = workerJob->colocationId;
task->partitionKeyValue = workerJob->partitionKeyValue;
}
}
}
/*
* LocallyPlanAndExecuteMultipleQueries plans and executes the given query strings
* one by one.

View File

@ -686,7 +686,7 @@ DropMetadataSnapshotOnNode(WorkerNode *workerNode)
bool singleTransaction = true;
List *dropMetadataCommandList = DetachPartitionCommandList();
dropMetadataCommandList = lappend(dropMetadataCommandList,
BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND);
BREAK_ALL_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND);
dropMetadataCommandList = lappend(dropMetadataCommandList,
WorkerDropAllShellTablesCommand(singleTransaction));
dropMetadataCommandList = list_concat(dropMetadataCommandList,
@ -4235,6 +4235,22 @@ WorkerDropAllShellTablesCommand(bool singleTransaction)
}
/*
* WorkerDropSequenceDependencyCommand returns command to drop sequence dependencies for
* given table.
*/
char *
WorkerDropSequenceDependencyCommand(Oid relationId)
{
char *qualifiedTableName = generate_qualified_relation_name(relationId);
StringInfo breakSequenceDepCommand = makeStringInfo();
appendStringInfo(breakSequenceDepCommand,
BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND,
quote_literal_cstr(qualifiedTableName));
return breakSequenceDepCommand->data;
}
/*
* PropagateNodeWideObjectsCommandList is called during node activation to
* propagate any object that should be propagated for every node. These are
@ -4352,8 +4368,8 @@ SendNodeWideObjectsSyncCommands(MetadataSyncContext *context)
void
SendShellTableDeletionCommands(MetadataSyncContext *context)
{
/* break all sequence deps for citus tables and remove all shell tables */
char *breakSeqDepsCommand = BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND;
/* break all sequence deps for citus tables */
char *breakSeqDepsCommand = BREAK_ALL_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND;
SendOrCollectCommandListToActivatedNodes(context, list_make1(breakSeqDepsCommand));
/* remove shell tables */

View File

@ -91,7 +91,8 @@ static bool DistributedTableSizeOnWorker(WorkerNode *workerNode, Oid relationId,
SizeQueryType sizeQueryType, bool failOnError,
uint64 *tableSize);
static List * ShardIntervalsOnWorkerGroup(WorkerNode *workerNode, Oid relationId);
static char * GenerateShardStatisticsQueryForShardList(List *shardIntervalList);
static char * GenerateShardIdNameValuesForShardList(List *shardIntervalList,
bool firstValue);
static char * GenerateSizeQueryForRelationNameList(List *quotedShardNames,
char *sizeFunction);
static char * GetWorkerPartitionedSizeUDFNameBySizeQueryType(SizeQueryType sizeQueryType);
@ -101,10 +102,10 @@ static char * GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode,
static List * GenerateShardStatisticsQueryList(List *workerNodeList, List *citusTableIds);
static void ErrorIfNotSuitableToGetSize(Oid relationId);
static List * OpenConnectionToNodes(List *workerNodeList);
static void ReceiveShardNameAndSizeResults(List *connectionList,
Tuplestorestate *tupleStore,
TupleDesc tupleDescriptor);
static void AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval);
static void ReceiveShardIdAndSizeResults(List *connectionList,
Tuplestorestate *tupleStore,
TupleDesc tupleDescriptor);
static void AppendShardIdNameValues(StringInfo selectQuery, ShardInterval *shardInterval);
static HeapTuple CreateDiskSpaceTuple(TupleDesc tupleDesc, uint64 availableBytes,
uint64 totalBytes);
@ -253,7 +254,7 @@ GetNodeDiskSpaceStatsForConnection(MultiConnection *connection, uint64 *availabl
/*
* citus_shard_sizes returns all shard names and their sizes.
* citus_shard_sizes returns all shard ids and their sizes.
*/
Datum
citus_shard_sizes(PG_FUNCTION_ARGS)
@ -271,7 +272,7 @@ citus_shard_sizes(PG_FUNCTION_ARGS)
TupleDesc tupleDescriptor = NULL;
Tuplestorestate *tupleStore = SetupTuplestore(fcinfo, &tupleDescriptor);
ReceiveShardNameAndSizeResults(connectionList, tupleStore, tupleDescriptor);
ReceiveShardIdAndSizeResults(connectionList, tupleStore, tupleDescriptor);
PG_RETURN_VOID();
}
@ -446,12 +447,12 @@ GenerateShardStatisticsQueryList(List *workerNodeList, List *citusTableIds)
/*
* ReceiveShardNameAndSizeResults receives shard name and size results from the given
* ReceiveShardIdAndSizeResults receives shard id and size results from the given
* connection list.
*/
static void
ReceiveShardNameAndSizeResults(List *connectionList, Tuplestorestate *tupleStore,
TupleDesc tupleDescriptor)
ReceiveShardIdAndSizeResults(List *connectionList, Tuplestorestate *tupleStore,
TupleDesc tupleDescriptor)
{
MultiConnection *connection = NULL;
foreach_ptr(connection, connectionList)
@ -488,13 +489,9 @@ ReceiveShardNameAndSizeResults(List *connectionList, Tuplestorestate *tupleStore
memset(values, 0, sizeof(values));
memset(isNulls, false, sizeof(isNulls));
/* format is [0] shard id, [1] shard name, [2] size */
char *tableName = PQgetvalue(result, rowIndex, 1);
Datum resultStringDatum = CStringGetDatum(tableName);
Datum textDatum = DirectFunctionCall1(textin, resultStringDatum);
values[0] = textDatum;
values[1] = ParseIntField(result, rowIndex, 2);
/* format is [0] shard id, [1] size */
values[0] = ParseIntField(result, rowIndex, 0);
values[1] = ParseIntField(result, rowIndex, 1);
tuplestore_putvalues(tupleStore, tupleDescriptor, values, isNulls);
}
@ -920,6 +917,12 @@ static char *
GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode, List *citusTableIds)
{
StringInfo allShardStatisticsQuery = makeStringInfo();
bool insertedValues = false;
appendStringInfoString(allShardStatisticsQuery, "SELECT shard_id, ");
appendStringInfo(allShardStatisticsQuery, PG_TOTAL_RELATION_SIZE_FUNCTION,
"table_name");
appendStringInfoString(allShardStatisticsQuery, " FROM (VALUES ");
Oid relationId = InvalidOid;
foreach_oid(relationId, citusTableIds)
@ -934,34 +937,49 @@ GenerateAllShardStatisticsQueryForNode(WorkerNode *workerNode, List *citusTableI
{
List *shardIntervalsOnNode = ShardIntervalsOnWorkerGroup(workerNode,
relationId);
char *shardStatisticsQuery =
GenerateShardStatisticsQueryForShardList(shardIntervalsOnNode);
appendStringInfoString(allShardStatisticsQuery, shardStatisticsQuery);
if (list_length(shardIntervalsOnNode) == 0)
{
relation_close(relation, AccessShareLock);
continue;
}
char *shardIdNameValues =
GenerateShardIdNameValuesForShardList(shardIntervalsOnNode,
!insertedValues);
insertedValues = true;
appendStringInfoString(allShardStatisticsQuery, shardIdNameValues);
relation_close(relation, AccessShareLock);
}
}
/* Add a dummy entry so that UNION ALL doesn't complain */
appendStringInfo(allShardStatisticsQuery, "SELECT 0::bigint, NULL::text, 0::bigint;");
if (!insertedValues)
{
return "SELECT 0 AS shard_id, '' AS table_name LIMIT 0";
}
appendStringInfoString(allShardStatisticsQuery, ") t(shard_id, table_name) "
"WHERE to_regclass(table_name) IS NOT NULL");
return allShardStatisticsQuery->data;
}
/*
* GenerateShardStatisticsQueryForShardList generates a query that returns:
* SELECT shard_id, shard_name, shard_size for all shards in the list
* GenerateShardIdNameValuesForShardList generates a list of (shard_id, shard_name) values
* for all shards in the list
*/
static char *
GenerateShardStatisticsQueryForShardList(List *shardIntervalList)
GenerateShardIdNameValuesForShardList(List *shardIntervalList, bool firstValue)
{
StringInfo selectQuery = makeStringInfo();
ShardInterval *shardInterval = NULL;
foreach_ptr(shardInterval, shardIntervalList)
{
AppendShardSizeQuery(selectQuery, shardInterval);
appendStringInfo(selectQuery, " UNION ALL ");
if (!firstValue)
{
appendStringInfoString(selectQuery, ", ");
}
firstValue = false;
AppendShardIdNameValues(selectQuery, shardInterval);
}
return selectQuery->data;
@ -969,11 +987,10 @@ GenerateShardStatisticsQueryForShardList(List *shardIntervalList)
/*
* AppendShardSizeQuery appends a query in the following form to selectQuery
* SELECT shard_id, shard_name, shard_size
* AppendShardIdNameValues appends (shard_id, shard_name) for shard
*/
static void
AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval)
AppendShardIdNameValues(StringInfo selectQuery, ShardInterval *shardInterval)
{
uint64 shardId = shardInterval->shardId;
Oid schemaId = get_rel_namespace(shardInterval->relationId);
@ -985,9 +1002,7 @@ AppendShardSizeQuery(StringInfo selectQuery, ShardInterval *shardInterval)
char *shardQualifiedName = quote_qualified_identifier(schemaName, shardName);
char *quotedShardName = quote_literal_cstr(shardQualifiedName);
appendStringInfo(selectQuery, "SELECT " UINT64_FORMAT " AS shard_id, ", shardId);
appendStringInfo(selectQuery, "%s AS shard_name, ", quotedShardName);
appendStringInfo(selectQuery, PG_TOTAL_RELATION_SIZE_FUNCTION, quotedShardName);
appendStringInfo(selectQuery, "(" UINT64_FORMAT ", %s)", shardId, quotedShardName);
}
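The generator above now emits one VALUES-based statement per worker instead of a UNION ALL per shard; the result has roughly the following shape (a sketch with hypothetical shard ids and names, the exact size expression coming from PG_TOTAL_RELATION_SIZE_FUNCTION):

```sql
SELECT shard_id, pg_total_relation_size(to_regclass(table_name))
FROM (VALUES
        (102008, 'public.orders_102008'),
        (102009, 'public.orders_102009')
     ) t(shard_id, table_name)
WHERE to_regclass(table_name) IS NOT NULL;
```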

View File

@ -108,7 +108,8 @@ static void BlockDistributedQueriesOnMetadataNodes(void);
static WorkerNode * TupleToWorkerNode(TupleDesc tupleDescriptor, HeapTuple heapTuple);
static bool NodeIsLocal(WorkerNode *worker);
static void SetLockTimeoutLocally(int32 lock_cooldown);
static void UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort);
static void UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort,
bool localOnly);
static bool UnsetMetadataSyncedForAllWorkers(void);
static char * GetMetadataSyncCommandToSetNodeColumn(WorkerNode *workerNode,
int columnIndex,
@ -231,8 +232,8 @@ citus_set_coordinator_host(PG_FUNCTION_ARGS)
* do not need to worry about concurrent changes (e.g. deletion) and
* can proceed to update immediately.
*/
UpdateNodeLocation(coordinatorNode->nodeId, nodeNameString, nodePort);
bool localOnly = false;
UpdateNodeLocation(coordinatorNode->nodeId, nodeNameString, nodePort, localOnly);
/* clear cached plans that have the old host/port */
ResetPlanCache();
@ -1290,7 +1291,8 @@ citus_update_node(PG_FUNCTION_ARGS)
*/
ResetPlanCache();
UpdateNodeLocation(nodeId, newNodeNameString, newNodePort);
bool localOnly = true;
UpdateNodeLocation(nodeId, newNodeNameString, newNodePort, localOnly);
/* we should be able to find the new node from the metadata */
workerNode = FindWorkerNodeAnyCluster(newNodeNameString, newNodePort);
@ -1352,7 +1354,7 @@ SetLockTimeoutLocally(int32 lockCooldown)
static void
UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort)
UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort, bool localOnly)
{
const bool indexOK = true;
@ -1396,6 +1398,20 @@ UpdateNodeLocation(int32 nodeId, char *newNodeName, int32 newNodePort)
CommandCounterIncrement();
if (!localOnly && EnableMetadataSync)
{
WorkerNode *updatedNode = FindWorkerNodeAnyCluster(newNodeName, newNodePort);
Assert(updatedNode->nodeId == nodeId);
/* send the delete command to all primary nodes with metadata */
char *nodeDeleteCommand = NodeDeleteCommand(updatedNode->nodeId);
SendCommandToWorkersWithMetadata(nodeDeleteCommand);
/* send the insert command to all primary nodes with metadata */
char *nodeInsertCommand = NodeListInsertCommand(list_make1(updatedNode));
SendCommandToWorkersWithMetadata(nodeInsertCommand);
}
systable_endscan(scanDescriptor);
table_close(pgDistNode, NoLock);
}
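With `localOnly = false`, updating the coordinator's entry now also re-syncs it to metadata-bearing workers (as a delete plus insert of the pg_dist_node row); a short usage sketch with a placeholder hostname:

```sql
SELECT citus_set_coordinator_host('coordinator.internal', 5432);
```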

View File

@ -515,6 +515,16 @@ GetRebalanceSteps(RebalanceOptions *options)
/* sort the lists to make the function more deterministic */
List *activeWorkerList = SortedActiveWorkers();
int shardAllowedNodeCount = 0;
WorkerNode *workerNode = NULL;
foreach_ptr(workerNode, activeWorkerList)
{
if (workerNode->shouldHaveShards)
{
shardAllowedNodeCount++;
}
}
List *activeShardPlacementListList = NIL;
List *unbalancedShards = NIL;
@ -532,8 +542,7 @@ GetRebalanceSteps(RebalanceOptions *options)
shardPlacementList, options->workerNode);
}
if (list_length(activeShardPlacementListForRelation) >= list_length(
activeWorkerList))
if (list_length(activeShardPlacementListForRelation) >= shardAllowedNodeCount)
{
activeShardPlacementListList = lappend(activeShardPlacementListList,
activeShardPlacementListForRelation);
@ -2165,7 +2174,10 @@ RebalanceTableShardsBackground(RebalanceOptions *options, Oid shardReplicationMo
quote_literal_cstr(shardTranferModeLabel));
int32 nodesInvolved[] = { 0 };
BackgroundTask *task = ScheduleBackgroundTask(jobId, GetUserId(), buf.data, 0,
/* replicate_reference_tables permissions require superuser */
Oid superUserId = CitusExtensionOwner();
BackgroundTask *task = ScheduleBackgroundTask(jobId, superUserId, buf.data, 0,
NULL, 0, nodesInvolved);
replicateRefTablesTaskId = task->taskid;
}
@ -2268,7 +2280,7 @@ UpdateShardPlacement(PlacementUpdateEvent *placementUpdateEvent,
if (updateType == PLACEMENT_UPDATE_MOVE)
{
appendStringInfo(placementUpdateCommand,
"SELECT citus_move_shard_placement(%ld,%u,%u,%s)",
"SELECT pg_catalog.citus_move_shard_placement(%ld,%u,%u,%s)",
shardId,
sourceNode->nodeId,
targetNode->nodeId,
@ -2277,7 +2289,7 @@ UpdateShardPlacement(PlacementUpdateEvent *placementUpdateEvent,
else if (updateType == PLACEMENT_UPDATE_COPY)
{
appendStringInfo(placementUpdateCommand,
"SELECT citus_copy_shard_placement(%ld,%u,%u,%s)",
"SELECT pg_catalog.citus_copy_shard_placement(%ld,%u,%u,%s)",
shardId,
sourceNode->nodeId,
targetNode->nodeId,

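The placement-count check above now compares against the number of nodes that are allowed to hold shards rather than all active workers. A hedged sketch of how that situation typically arises (assuming the standard `citus_set_node_property` and `citus_rebalance_start` UDFs, which are not part of this diff):

```sql
-- exclude a worker from holding shards; the rebalancer's "already balanced"
-- check then counts only the remaining shard-eligible nodes
SELECT citus_set_node_property('worker-2', 5432, 'shouldhaveshards', false);
SELECT citus_rebalance_start();
```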
View File

@ -1810,7 +1810,7 @@ CreateWorkerForPlacementSet(List *workersForPlacementList)
/* we don't have value field as it's a set */
info.entrysize = info.keysize;
uint32 hashFlags = (HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
uint32 hashFlags = (HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT | HASH_COMPARE);
HTAB *workerForPlacementSet = hash_create("worker placement set", 32, &info,
hashFlags);

View File

@ -855,7 +855,7 @@ ProcessShardStatisticsRow(PGresult *result, int64 rowIndex, uint64 *shardId,
return false;
}
*shardSize = ParseIntField(result, rowIndex, 2);
*shardSize = ParseIntField(result, rowIndex, 1);
return true;
}

View File

@ -1885,17 +1885,36 @@ RouterJob(Query *originalQuery, PlannerRestrictionContext *plannerRestrictionCon
{
RangeTblEntry *updateOrDeleteOrMergeRTE = ExtractResultRelationRTE(originalQuery);
/*
* If all of the shards are pruned, we replace the relation RTE into
* subquery RTE that returns no results. However, this is not useful
* for UPDATE and DELETE queries. Therefore, if we detect a UPDATE or
* DELETE RTE with subquery type, we just set task list to empty and return
* the job.
*/
if (updateOrDeleteOrMergeRTE->rtekind == RTE_SUBQUERY)
{
job->taskList = NIL;
return job;
/*
* Not generating tasks for MERGE target relation might
* result in incorrect behavior as source rows with NOT
* MATCHED clause might qualify for insertion.
*/
if (IsMergeQuery(originalQuery))
{
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("Merge command is currently "
"unsupported with filters that "
"prunes down to zero shards"),
errhint("Avoid `WHERE false` clause or "
"any equivalent filters that "
"could prune down to zero shards")));
}
else
{
/*
* If all of the shards are pruned, we replace the
* relation RTE into subquery RTE that returns no
* results. However, this is not useful for UPDATE
* and DELETE queries. Therefore, if we detect a
* UPDATE or DELETE RTE with subquery type, we just
* set task list to empty and return the job.
*/
job->taskList = NIL;
return job;
}
}
}
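A minimal sketch of the statement shape that now raises this error instead of silently producing an empty task list; `target` and `source` are hypothetical distributed tables:

```sql
MERGE INTO target t
USING source s
ON t.id = s.id AND false   -- filter prunes the target down to zero shards
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
-- ERROR:  Merge command is currently unsupported with filters that prunes down to zero shards
```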

View File

@ -0,0 +1,9 @@
DROP VIEW citus_shards;
DROP VIEW IF EXISTS pg_catalog.citus_tables;
DROP VIEW IF EXISTS public.citus_tables;
DROP FUNCTION citus_shard_sizes;
#include "udfs/citus_shard_sizes/11.3-2.sql"
#include "udfs/citus_shards/11.3-2.sql"
#include "udfs/citus_tables/11.3-2.sql"

View File

@ -0,0 +1,13 @@
DROP VIEW IF EXISTS public.citus_tables;
DROP VIEW IF EXISTS pg_catalog.citus_tables;
DROP VIEW pg_catalog.citus_shards;
DROP FUNCTION pg_catalog.citus_shard_sizes;
#include "../udfs/citus_shard_sizes/10.0-1.sql"
-- citus_shards/11.1-1.sql tries to create citus_shards in pg_catalog but it is not allowed.
-- Here we use citus_shards/10.0-1.sql to properly create the view in citus schema and
-- then alter it to pg_catalog, so citus_shards/11.1-1.sql can REPLACE it without any errors.
#include "../udfs/citus_shards/10.0-1.sql"
#include "../udfs/citus_tables/11.1-1.sql"
#include "../udfs/citus_shards/11.1-1.sql"

View File

@ -0,0 +1,6 @@
CREATE OR REPLACE FUNCTION pg_catalog.citus_shard_sizes(OUT shard_id int, OUT size bigint)
RETURNS SETOF RECORD
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_shard_sizes$$;
COMMENT ON FUNCTION pg_catalog.citus_shard_sizes(OUT shard_id int, OUT size bigint)
IS 'returns shards sizes across citus cluster';

View File

@ -1,6 +1,6 @@
CREATE FUNCTION pg_catalog.citus_shard_sizes(OUT table_name text, OUT size bigint)
CREATE OR REPLACE FUNCTION pg_catalog.citus_shard_sizes(OUT shard_id int, OUT size bigint)
RETURNS SETOF RECORD
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$citus_shard_sizes$$;
COMMENT ON FUNCTION pg_catalog.citus_shard_sizes(OUT table_name text, OUT size bigint)
COMMENT ON FUNCTION pg_catalog.citus_shard_sizes(OUT shard_id int, OUT size bigint)
IS 'returns shards sizes across citus cluster';
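A quick hedged example of calling the function with its new output columns:

```sql
-- ten largest shards across the cluster
SELECT shard_id, pg_size_pretty(size) AS shard_size
FROM citus_shard_sizes()
ORDER BY size DESC
LIMIT 10;
```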

View File

@ -0,0 +1,46 @@
CREATE OR REPLACE VIEW citus.citus_shards AS
SELECT
pg_dist_shard.logicalrelid AS table_name,
pg_dist_shard.shardid,
shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid) as shard_name,
CASE WHEN partkey IS NOT NULL THEN 'distributed' WHEN repmodel = 't' THEN 'reference' ELSE 'local' END AS citus_table_type,
colocationid AS colocation_id,
pg_dist_node.nodename,
pg_dist_node.nodeport,
size as shard_size
FROM
pg_dist_shard
JOIN
pg_dist_placement
ON
pg_dist_shard.shardid = pg_dist_placement.shardid
JOIN
pg_dist_node
ON
pg_dist_placement.groupid = pg_dist_node.groupid
JOIN
pg_dist_partition
ON
pg_dist_partition.logicalrelid = pg_dist_shard.logicalrelid
LEFT JOIN
(SELECT shard_id, max(size) as size from citus_shard_sizes() GROUP BY shard_id) as shard_sizes
ON
pg_dist_shard.shardid = shard_sizes.shard_id
WHERE
pg_dist_placement.shardstate = 1
AND
-- filter out tables owned by extensions
pg_dist_partition.logicalrelid NOT IN (
SELECT
objid
FROM
pg_depend
WHERE
classid = 'pg_class'::regclass AND refclassid = 'pg_extension'::regclass AND deptype = 'e'
)
ORDER BY
pg_dist_shard.logicalrelid::text, shardid
;
ALTER VIEW citus.citus_shards SET SCHEMA pg_catalog;
GRANT SELECT ON pg_catalog.citus_shards TO public;
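For completeness, a small hedged query against the recreated view:

```sql
SELECT table_name, shard_name, nodename, nodeport, pg_size_pretty(shard_size)
FROM citus_shards
ORDER BY shard_size DESC NULLS LAST
LIMIT 10;
```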

View File

@ -1,4 +1,4 @@
CREATE OR REPLACE VIEW pg_catalog.citus_shards AS
CREATE OR REPLACE VIEW citus.citus_shards AS
SELECT
pg_dist_shard.logicalrelid AS table_name,
pg_dist_shard.shardid,
@ -23,7 +23,7 @@ JOIN
ON
pg_dist_partition.logicalrelid = pg_dist_shard.logicalrelid
LEFT JOIN
(SELECT (regexp_matches(table_name,'_(\d+)$'))[1]::int as shard_id, max(size) as size from citus_shard_sizes() GROUP BY shard_id) as shard_sizes
(SELECT shard_id, max(size) as size from citus_shard_sizes() GROUP BY shard_id) as shard_sizes
ON
pg_dist_shard.shardid = shard_sizes.shard_id
WHERE
@ -42,4 +42,5 @@ ORDER BY
pg_dist_shard.logicalrelid::text, shardid
;
ALTER VIEW citus.citus_shards SET SCHEMA pg_catalog;
GRANT SELECT ON pg_catalog.citus_shards TO public;

View File

@ -8,6 +8,8 @@ CREATE OR REPLACE FUNCTION pg_catalog.citus_stat_tenants (
OUT read_count_in_last_period INT,
OUT query_count_in_this_period INT,
OUT query_count_in_last_period INT,
OUT cpu_usage_in_this_period DOUBLE PRECISION,
OUT cpu_usage_in_last_period DOUBLE PRECISION,
OUT score BIGINT
)
RETURNS SETOF record
@ -51,6 +53,8 @@ AS (
read_count_in_last_period INT,
query_count_in_this_period INT,
query_count_in_last_period INT,
cpu_usage_in_this_period DOUBLE PRECISION,
cpu_usage_in_last_period DOUBLE PRECISION,
score BIGINT
)
ORDER BY score DESC
@ -66,7 +70,9 @@ SELECT
read_count_in_this_period,
read_count_in_last_period,
query_count_in_this_period,
query_count_in_last_period
query_count_in_last_period,
cpu_usage_in_this_period,
cpu_usage_in_last_period
FROM pg_catalog.citus_stat_tenants(FALSE);
ALTER VIEW citus.citus_stat_tenants SET SCHEMA pg_catalog;

View File

@ -8,6 +8,8 @@ CREATE OR REPLACE FUNCTION pg_catalog.citus_stat_tenants (
OUT read_count_in_last_period INT,
OUT query_count_in_this_period INT,
OUT query_count_in_last_period INT,
OUT cpu_usage_in_this_period DOUBLE PRECISION,
OUT cpu_usage_in_last_period DOUBLE PRECISION,
OUT score BIGINT
)
RETURNS SETOF record
@ -51,6 +53,8 @@ AS (
read_count_in_last_period INT,
query_count_in_this_period INT,
query_count_in_last_period INT,
cpu_usage_in_this_period DOUBLE PRECISION,
cpu_usage_in_last_period DOUBLE PRECISION,
score BIGINT
)
ORDER BY score DESC
@ -66,7 +70,9 @@ SELECT
read_count_in_this_period,
read_count_in_last_period,
query_count_in_this_period,
query_count_in_last_period
query_count_in_last_period,
cpu_usage_in_this_period,
cpu_usage_in_last_period
FROM pg_catalog.citus_stat_tenants(FALSE);
ALTER VIEW citus.citus_stat_tenants SET SCHEMA pg_catalog;

View File

@ -6,6 +6,8 @@ CREATE OR REPLACE FUNCTION pg_catalog.citus_stat_tenants_local(
OUT read_count_in_last_period INT,
OUT query_count_in_this_period INT,
OUT query_count_in_last_period INT,
OUT cpu_usage_in_this_period DOUBLE PRECISION,
OUT cpu_usage_in_last_period DOUBLE PRECISION,
OUT score BIGINT)
RETURNS SETOF RECORD
LANGUAGE C
@ -19,7 +21,9 @@ SELECT
read_count_in_this_period,
read_count_in_last_period,
query_count_in_this_period,
query_count_in_last_period
query_count_in_last_period,
cpu_usage_in_this_period,
cpu_usage_in_last_period
FROM pg_catalog.citus_stat_tenants_local()
ORDER BY score DESC;

View File

@ -6,6 +6,8 @@ CREATE OR REPLACE FUNCTION pg_catalog.citus_stat_tenants_local(
OUT read_count_in_last_period INT,
OUT query_count_in_this_period INT,
OUT query_count_in_last_period INT,
OUT cpu_usage_in_this_period DOUBLE PRECISION,
OUT cpu_usage_in_last_period DOUBLE PRECISION,
OUT score BIGINT)
RETURNS SETOF RECORD
LANGUAGE C
@ -19,7 +21,9 @@ SELECT
read_count_in_this_period,
read_count_in_last_period,
query_count_in_this_period,
query_count_in_last_period
query_count_in_last_period,
cpu_usage_in_this_period,
cpu_usage_in_last_period
FROM pg_catalog.citus_stat_tenants_local()
ORDER BY score DESC;
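With the CPU columns exposed through both citus_stat_tenants and citus_stat_tenants_local, a hedged monitoring query (the `tenant_attribute` column is assumed from the existing view definition; it is not shown in these hunks):

```sql
SELECT tenant_attribute,
       query_count_in_this_period,
       cpu_usage_in_this_period,
       cpu_usage_in_last_period
FROM citus_stat_tenants
ORDER BY cpu_usage_in_this_period DESC
LIMIT 5;
```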

View File

@ -0,0 +1,55 @@
DO $$
declare
citus_tables_create_query text;
BEGIN
citus_tables_create_query=$CTCQ$
CREATE OR REPLACE VIEW %I.citus_tables AS
SELECT
logicalrelid AS table_name,
CASE WHEN partkey IS NOT NULL THEN 'distributed' ELSE
CASE when repmodel = 't' THEN 'reference' ELSE 'local' END
END AS citus_table_type,
coalesce(column_to_column_name(logicalrelid, partkey), '<none>') AS distribution_column,
colocationid AS colocation_id,
pg_size_pretty(table_sizes.table_size) AS table_size,
(select count(*) from pg_dist_shard where logicalrelid = p.logicalrelid) AS shard_count,
pg_get_userbyid(relowner) AS table_owner,
amname AS access_method
FROM
pg_dist_partition p
JOIN
pg_class c ON (p.logicalrelid = c.oid)
LEFT JOIN
pg_am a ON (a.oid = c.relam)
JOIN
(
SELECT ds.logicalrelid AS table_id, SUM(css.size) AS table_size
FROM citus_shard_sizes() css, pg_dist_shard ds
WHERE css.shard_id = ds.shardid
GROUP BY ds.logicalrelid
) table_sizes ON (table_sizes.table_id = p.logicalrelid)
WHERE
-- filter out tables owned by extensions
logicalrelid NOT IN (
SELECT
objid
FROM
pg_depend
WHERE
classid = 'pg_class'::regclass AND refclassid = 'pg_extension'::regclass AND deptype = 'e'
)
ORDER BY
logicalrelid::text;
$CTCQ$;
IF EXISTS (SELECT 1 FROM pg_namespace WHERE nspname = 'public') THEN
EXECUTE format(citus_tables_create_query, 'public');
GRANT SELECT ON public.citus_tables TO public;
ELSE
EXECUTE format(citus_tables_create_query, 'citus');
ALTER VIEW citus.citus_tables SET SCHEMA pg_catalog;
GRANT SELECT ON pg_catalog.citus_tables TO public;
END IF;
END;
$$;
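A short hedged check that the rewritten view now derives table_size from the aggregated citus_shard_sizes() join:

```sql
SELECT table_name, citus_table_type, distribution_column, shard_count, table_size
FROM citus_tables
ORDER BY table_name;
```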

View File

@ -11,7 +11,7 @@ citus_tables_create_query=$CTCQ$
END AS citus_table_type,
coalesce(column_to_column_name(logicalrelid, partkey), '<none>') AS distribution_column,
colocationid AS colocation_id,
pg_size_pretty(citus_total_relation_size(logicalrelid, fail_on_error := false)) AS table_size,
pg_size_pretty(table_sizes.table_size) AS table_size,
(select count(*) from pg_dist_shard where logicalrelid = p.logicalrelid) AS shard_count,
pg_get_userbyid(relowner) AS table_owner,
amname AS access_method
@ -21,6 +21,13 @@ citus_tables_create_query=$CTCQ$
pg_class c ON (p.logicalrelid = c.oid)
LEFT JOIN
pg_am a ON (a.oid = c.relam)
JOIN
(
SELECT ds.logicalrelid AS table_id, SUM(css.size) AS table_size
FROM citus_shard_sizes() css, pg_dist_shard ds
WHERE css.shard_id = ds.shardid
GROUP BY ds.logicalrelid
) table_sizes ON (table_sizes.table_id = p.logicalrelid)
WHERE
-- filter out tables owned by extensions
logicalrelid NOT IN (

View File

@ -1,70 +0,0 @@
/*-------------------------------------------------------------------------
*
* pg_send_cancellation.c
*
* This file contains functions to test setting pg_send_cancellation.
*
* Copyright (c) Citus Data, Inc.
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "miscadmin.h"
#include "fmgr.h"
#include "port.h"
#include "postmaster/postmaster.h"
#define PG_SEND_CANCELLATION_VERSION \
"pg_send_cancellation (PostgreSQL) " PG_VERSION "\n"
/* exports for SQL callable functions */
PG_FUNCTION_INFO_V1(get_cancellation_key);
PG_FUNCTION_INFO_V1(run_pg_send_cancellation);
/*
* get_cancellation_key returns the cancellation key of the current process
* as an integer.
*/
Datum
get_cancellation_key(PG_FUNCTION_ARGS)
{
PG_RETURN_INT32(MyCancelKey);
}
/*
* run_pg_send_cancellation runs the pg_send_cancellation program with
* the specified arguments
*/
Datum
run_pg_send_cancellation(PG_FUNCTION_ARGS)
{
int pid = PG_GETARG_INT32(0);
int cancelKey = PG_GETARG_INT32(1);
char sendCancellationPath[MAXPGPATH];
char command[1024];
/* Locate executable backend before we change working directory */
if (find_other_exec(my_exec_path, "pg_send_cancellation",
PG_SEND_CANCELLATION_VERSION,
sendCancellationPath) < 0)
{
ereport(ERROR, (errmsg("could not locate pg_send_cancellation")));
}
pg_snprintf(command, sizeof(command), "%s %d %d %s %d",
sendCancellationPath, pid, cancelKey, "localhost", PostPortNumber);
if (system(command) != 0)
{
ereport(ERROR, (errmsg("failed to run command: %s", command)));
}
PG_RETURN_VOID();
}

View File

@ -462,21 +462,25 @@ HasDropCommandViolatesOwnership(Node *node)
static bool
AnyObjectViolatesOwnership(DropStmt *dropStmt)
{
bool hasOwnershipViolation = false;
volatile ObjectAddress objectAddress = { 0 };
Relation relation = NULL;
bool objectViolatesOwnership = false;
ObjectType objectType = dropStmt->removeType;
bool missingOk = dropStmt->missing_ok;
Node *object = NULL;
foreach_ptr(object, dropStmt->objects)
MemoryContext savedContext = CurrentMemoryContext;
ResourceOwner savedOwner = CurrentResourceOwner;
BeginInternalSubTransaction(NULL);
MemoryContextSwitchTo(savedContext);
PG_TRY();
{
PG_TRY();
Node *object = NULL;
foreach_ptr(object, dropStmt->objects)
{
objectAddress = get_object_address(objectType, object,
&relation, AccessShareLock, missingOk);
if (OidIsValid(objectAddress.objectId))
{
/*
@ -487,29 +491,39 @@ AnyObjectViolatesOwnership(DropStmt *dropStmt)
objectAddress,
object, relation);
}
}
PG_CATCH();
{
if (OidIsValid(objectAddress.objectId))
if (relation != NULL)
{
/* ownership violation */
objectViolatesOwnership = true;
relation_close(relation, NoLock);
relation = NULL;
}
}
PG_END_TRY();
ReleaseCurrentSubTransaction();
MemoryContextSwitchTo(savedContext);
CurrentResourceOwner = savedOwner;
}
PG_CATCH();
{
MemoryContextSwitchTo(savedContext);
ErrorData *edata = CopyErrorData();
FlushErrorState();
hasOwnershipViolation = true;
if (relation != NULL)
{
relation_close(relation, AccessShareLock);
relation_close(relation, NoLock);
relation = NULL;
}
RollbackAndReleaseCurrentSubTransaction();
MemoryContextSwitchTo(savedContext);
CurrentResourceOwner = savedOwner;
/* we found ownership violation, so can return here */
if (objectViolatesOwnership)
{
return true;
}
/* Rethrow error with LOG_SERVER_ONLY to prevent log to be sent to client */
edata->elevel = LOG_SERVER_ONLY;
ThrowErrorData(edata);
}
PG_END_TRY();
return false;
return hasOwnershipViolation;
}

View File

@ -141,7 +141,17 @@ SetRangeTblExtraData(RangeTblEntry *rte, CitusRTEKind rteKind, char *fragmentSch
fauxFunction->funcexpr = (Node *) fauxFuncExpr;
/* set the column count to pass ruleutils checks, not used elsewhere */
fauxFunction->funccolcount = list_length(rte->eref->colnames);
if (rte->relid != 0)
{
Relation rel = RelationIdGetRelation(rte->relid);
fauxFunction->funccolcount = RelationGetNumberOfAttributes(rel);
RelationClose(rel);
}
else
{
fauxFunction->funccolcount = list_length(rte->eref->colnames);
}
fauxFunction->funccolnames = funcColumnNames;
fauxFunction->funccoltypes = funcColumnTypes;
fauxFunction->funccoltypmods = funcColumnTypeMods;

View File

@ -12,13 +12,14 @@
#include "unistd.h"
#include "distributed/citus_safe_lib.h"
#include "distributed/colocation_utils.h"
#include "distributed/distributed_planner.h"
#include "distributed/jsonbutils.h"
#include "distributed/log_utils.h"
#include "distributed/listutils.h"
#include "distributed/metadata_cache.h"
#include "distributed/jsonbutils.h"
#include "distributed/colocation_utils.h"
#include "distributed/multi_executor.h"
#include "distributed/tuplestore.h"
#include "distributed/colocation_utils.h"
#include "distributed/utils/citus_stat_tenants.h"
#include "executor/execdesc.h"
#include "storage/ipc.h"
@ -38,12 +39,14 @@ ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
#define ATTRIBUTE_PREFIX "/*{\"tId\":"
#define ATTRIBUTE_STRING_FORMAT "/*{\"tId\":%s,\"cId\":%d}*/"
#define STAT_TENANTS_COLUMNS 7
#define STAT_TENANTS_COLUMNS 9
#define ONE_QUERY_SCORE 1000000000
static char AttributeToTenant[MAX_TENANT_ATTRIBUTE_LENGTH] = "";
static CmdType AttributeToCommandType = CMD_UNKNOWN;
static int AttributeToColocationGroupId = INVALID_COLOCATION_ID;
static clock_t QueryStartClock = { 0 };
static clock_t QueryEndClock = { 0 };
static const char *SharedMemoryNameForMultiTenantMonitor =
"Shared memory for multi tenant monitor";
@ -56,7 +59,7 @@ static int CompareTenantScore(const void *leftElement, const void *rightElement)
static void UpdatePeriodsIfNecessary(TenantStats *tenantStats, TimestampTz queryTime);
static void ReduceScoreIfNecessary(TenantStats *tenantStats, TimestampTz queryTime);
static void EvictTenantsIfNecessary(TimestampTz queryTime);
static void RecordTenantStats(TenantStats *tenantStats);
static void RecordTenantStats(TenantStats *tenantStats, TimestampTz queryTime);
static void CreateMultiTenantMonitor(void);
static MultiTenantMonitor * CreateSharedMemoryForMultiTenantMonitor(void);
static MultiTenantMonitor * GetMultiTenantMonitor(void);
@ -142,7 +145,9 @@ citus_stat_tenants_local(PG_FUNCTION_ARGS)
tenantStats->writesInThisPeriod);
values[5] = Int32GetDatum(tenantStats->readsInLastPeriod +
tenantStats->writesInLastPeriod);
values[6] = Int64GetDatum(tenantStats->score);
values[6] = Float8GetDatum(tenantStats->cpuUsageInThisPeriod);
values[7] = Float8GetDatum(tenantStats->cpuUsageInLastPeriod);
values[8] = Int64GetDatum(tenantStats->score);
tuplestore_putvalues(tupleStore, tupleDescriptor, values, isNulls);
}
@ -225,6 +230,7 @@ AttributeTask(char *tenantId, int colocationId, CmdType commandType)
strncpy_s(AttributeToTenant, MAX_TENANT_ATTRIBUTE_LENGTH, tenantId,
MAX_TENANT_ATTRIBUTE_LENGTH - 1);
AttributeToCommandType = commandType;
QueryStartClock = clock();
}
@ -316,6 +322,17 @@ AttributeMetricsIfApplicable()
return;
}
/*
* Return if we are not at the top level, so that we do not stop the
* CPU-time clock for a sub-level execution.
*/
if (ExecutorLevel != 0 || PlannerLevel != 0)
{
return;
}
QueryEndClock = clock();
TimestampTz queryTime = GetCurrentTimestamp();
MultiTenantMonitor *monitor = GetMultiTenantMonitor();
@ -345,7 +362,7 @@ AttributeMetricsIfApplicable()
UpdatePeriodsIfNecessary(tenantStats, queryTime);
ReduceScoreIfNecessary(tenantStats, queryTime);
RecordTenantStats(tenantStats);
RecordTenantStats(tenantStats, queryTime);
LWLockRelease(&tenantStats->lock);
}
@ -372,7 +389,7 @@ AttributeMetricsIfApplicable()
UpdatePeriodsIfNecessary(tenantStats, queryTime);
ReduceScoreIfNecessary(tenantStats, queryTime);
RecordTenantStats(tenantStats);
RecordTenantStats(tenantStats, queryTime);
LWLockRelease(&tenantStats->lock);
}
@ -396,6 +413,7 @@ static void
UpdatePeriodsIfNecessary(TenantStats *tenantStats, TimestampTz queryTime)
{
long long int periodInMicroSeconds = StatTenantsPeriod * USECS_PER_SEC;
long long int periodInMilliSeconds = StatTenantsPeriod * 1000;
TimestampTz periodStart = queryTime - (queryTime % periodInMicroSeconds);
/*
@ -410,20 +428,23 @@ UpdatePeriodsIfNecessary(TenantStats *tenantStats, TimestampTz queryTime)
tenantStats->readsInLastPeriod = tenantStats->readsInThisPeriod;
tenantStats->readsInThisPeriod = 0;
tenantStats->cpuUsageInLastPeriod = tenantStats->cpuUsageInThisPeriod;
tenantStats->cpuUsageInThisPeriod = 0;
}
/*
* If the last query is more than two periods ago, we clean the last period counts too.
*/
if (TimestampDifferenceExceeds(tenantStats->lastQueryTime, periodStart,
periodInMicroSeconds))
periodInMilliSeconds))
{
tenantStats->writesInLastPeriod = 0;
tenantStats->readsInLastPeriod = 0;
}
tenantStats->lastQueryTime = queryTime;
tenantStats->cpuUsageInLastPeriod = 0;
}
}
@ -503,7 +524,7 @@ EvictTenantsIfNecessary(TimestampTz queryTime)
* RecordTenantStats records the query statistics for the tenant.
*/
static void
RecordTenantStats(TenantStats *tenantStats)
RecordTenantStats(TenantStats *tenantStats, TimestampTz queryTime)
{
if (tenantStats->score < LLONG_MAX - ONE_QUERY_SCORE)
{
@ -524,6 +545,11 @@ RecordTenantStats(TenantStats *tenantStats)
{
tenantStats->writesInThisPeriod++;
}
double queryCpuTime = ((double) (QueryEndClock - QueryStartClock)) / CLOCKS_PER_SEC;
tenantStats->cpuUsageInThisPeriod += queryCpuTime;
tenantStats->lastQueryTime = queryTime;
}

View File

@ -170,12 +170,11 @@ BreakColocation(Oid sourceRelationId)
*/
Relation pgDistColocation = table_open(DistColocationRelationId(), ExclusiveLock);
uint32 newColocationId = GetNextColocationId();
bool localOnly = false;
UpdateRelationColocationGroup(sourceRelationId, newColocationId, localOnly);
uint32 oldColocationId = TableColocationId(sourceRelationId);
CreateColocationGroupForRelation(sourceRelationId);
/* if there is not any remaining table in the colocation group, delete it */
DeleteColocationGroupIfNoTablesBelong(sourceRelationId);
/* if there is not any remaining table in the old colocation group, delete it */
DeleteColocationGroupIfNoTablesBelong(oldColocationId);
table_close(pgDistColocation, NoLock);
}

View File

@ -28,6 +28,7 @@
#include "distributed/version_compat.h"
#include "nodes/pg_list.h"
#include "storage/lockdefs.h"
#include "utils/catcache.h"
#include "utils/fmgroids.h"
#include "utils/hsearch.h"
#include "common/hashfn.h"
@ -96,6 +97,8 @@ static List * GetConnectedListHelper(ForeignConstraintRelationshipNode *node,
bool isReferencing);
static List * GetForeignConstraintRelationshipHelper(Oid relationId, bool isReferencing);
MemoryContext ForeignConstraintRelationshipMemoryContext = NULL;
/*
* GetForeignKeyConnectedRelationIdList returns a list of relation id's for
@ -321,17 +324,36 @@ CreateForeignConstraintRelationshipGraph()
return;
}
ClearForeignConstraintRelationshipGraphContext();
/*
* Lazily create our memory context once and reset it on every reuse.
* Since fConstraintRelationshipGraph was cleared and invalidated right before
* this point, we can simply reset the context if it already exists.
*/
if (ForeignConstraintRelationshipMemoryContext == NULL)
{
/* make sure we've initialized CacheMemoryContext */
if (CacheMemoryContext == NULL)
{
CreateCacheMemoryContext();
}
MemoryContext fConstraintRelationshipMemoryContext = AllocSetContextCreateInternal(
CacheMemoryContext,
"Forign Constraint Relationship Graph Context",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
ForeignConstraintRelationshipMemoryContext = AllocSetContextCreate(
CacheMemoryContext,
"Foreign Constraint Relationship Graph Context",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
}
else
{
fConstraintRelationshipGraph = NULL;
MemoryContextReset(ForeignConstraintRelationshipMemoryContext);
}
Assert(fConstraintRelationshipGraph == NULL);
MemoryContext oldContext = MemoryContextSwitchTo(
fConstraintRelationshipMemoryContext);
ForeignConstraintRelationshipMemoryContext);
fConstraintRelationshipGraph = (ForeignConstraintRelationshipGraph *) palloc(
sizeof(ForeignConstraintRelationshipGraph));
@ -631,22 +653,3 @@ CreateOrFindNode(HTAB *adjacencyLists, Oid relid)
return node;
}
/*
* ClearForeignConstraintRelationshipGraphContext clear all the allocated memory obtained
* for foreign constraint relationship graph. Since all the variables of relationship
* graph was obtained within the same context, destroying hash map is enough as
* it deletes the context.
*/
void
ClearForeignConstraintRelationshipGraphContext()
{
if (fConstraintRelationshipGraph == NULL)
{
return;
}
hash_destroy(fConstraintRelationshipGraph->nodeMap);
fConstraintRelationshipGraph = NULL;
}

View File

@ -420,7 +420,7 @@ CopyShardPlacementToWorkerNodeQuery(ShardPlacement *sourceShardPlacement,
"auto";
appendStringInfo(queryString,
"SELECT citus_copy_shard_placement("
"SELECT pg_catalog.citus_copy_shard_placement("
UINT64_FORMAT ", %d, %d, "
"transfer_mode := %s)",
sourceShardPlacement->shardId,

View File

@ -1 +0,0 @@
pg_send_cancellation

View File

@ -1,24 +0,0 @@
citus_top_builddir = ../../..
PROGRAM = pg_send_cancellation
PGFILEDESC = "pg_send_cancellation sends a custom cancellation message"
OBJS = $(citus_abs_srcdir)/src/bin/pg_send_cancellation/pg_send_cancellation.o
PG_CPPFLAGS = -I$(libpq_srcdir)
PG_LIBS_INTERNAL = $(libpq_pgport)
PG_LDFLAGS += $(LDFLAGS)
include $(citus_top_builddir)/Makefile.global
# We reuse all the Citus flags (incl. security flags), but we are building a program not a shared library
# We sometimes build Citus with a newer version of gcc than Postgres was built
# with and this breaks LTO (link-time optimization). Even if disabling it can
# have some perf impact this is ok because pg_send_cancellation is only used
# for tests anyway.
override CFLAGS := $(filter-out -shared, $(CFLAGS)) -fno-lto
# Filter out unneeded dependencies
override LIBS := $(filter-out -lz -lreadline -ledit -ltermcap -lncurses -lcurses -lpam, $(LIBS))
clean: clean-pg_send_cancellation
clean-pg_send_cancellation:
rm -f $(PROGRAM) $(OBJS)

View File

@ -1,47 +0,0 @@
# pg_send_cancellation
pg_send_cancellation is a program for manually sending a cancellation
to a Postgres endpoint. It is effectively a command-line version of
PQcancel in libpq, but it can use any PID or cancellation key.
We use pg_send_cancellation primarily to propagate cancellations between pgbouncers
behind a load balancer. Since the cancellation protocol involves
opening a new connection, the new connection may go to a different
node that does not recognize the cancellation key. To handle that
scenario, we modified pgbouncer to pass unrecognized cancellation
keys to a shell command.
Users can configure the cancellation_command, which will be run with:
```
<cancellation_command> <client ip> <client port> <pid> <cancel key>
```
Note that pgbouncer does not use actual PIDs. Instead, it generates the PID and cancellation key together as a random 8-byte number. This makes the chance of collisions exceedingly small.
By providing pg_send_cancellation as part of Citus, we can use a shell script that pgbouncer invokes to propagate the cancellation to all *other* worker nodes in the same cluster, for example:
```bash
#!/bin/sh
remote_ip=$1
remote_port=$2
pid=$3
cancel_key=$4
postgres_path=/usr/pgsql-14/bin
pgbouncer_port=6432
nodes_query="select nodename from pg_dist_node where groupid > 0 and groupid not in (select groupid from pg_dist_local_group) and nodecluster = current_setting('citus.cluster_name')"
# Get hostnames of other worker nodes in the cluster, and send cancellation to their pgbouncers
$postgres_path/psql -c "$nodes_query" -tAX | xargs -n 1 sh -c "$postgres_path/pg_send_cancellation $pid $cancel_key \$0 $pgbouncer_port"
```
One thing we need to be careful about is that the cancellations do not get forwarded
back-and-forth. This is handled in pgbouncer by setting the last bit of all generated
cancellation keys (sent to clients) to 1, and setting the last bit of all forwarded keys to 0.
That way, when a pgbouncer receives a cancellation key with the last bit set to 0,
it knows it is from another pgbouncer and should not forward further, and should set
the last bit to 1 when comparing to stored cancellation keys.
Another thing we need to be careful about is that the integers should be encoded
as big endian on the wire.

View File

@ -1,261 +0,0 @@
/*
* pg_send_cancellation is a program for manually sending a cancellation
* to a Postgres endpoint. It is effectively a command-line version of
* PQcancel in libpq, but it can use any PID or cancellation key.
*
* Portions Copyright (c) Citus Data, Inc.
*
* For the internal_cancel function:
*
* Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* Permission to use, copy, modify, and distribute this software and its
* documentation for any purpose, without fee, and without a written agreement
* is hereby granted, provided that the above copyright notice and this
* paragraph and the following two paragraphs appear in all copies.
*
* IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR
* DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
* LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
* DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
* THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
* AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
* ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO
* PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
*
*/
#include "postgres_fe.h"
#include <sys/stat.h>
#include <fcntl.h>
#include <ctype.h>
#include <time.h>
#include <unistd.h>
#include "common/ip.h"
#include "common/link-canary.h"
#include "common/scram-common.h"
#include "common/string.h"
#include "libpq-fe.h"
#include "libpq-int.h"
#include "mb/pg_wchar.h"
#include "port/pg_bswap.h"
#define ERROR_BUFFER_SIZE 256
static int internal_cancel(SockAddr *raddr, int be_pid, int be_key,
char *errbuf, int errbufsize);
/*
* main entry point into the pg_send_cancellation program.
*/
int
main(int argc, char *argv[])
{
if (argc == 2 && strcmp(argv[1], "-V") == 0)
{
pg_fprintf(stdout, "pg_send_cancellation (PostgreSQL) " PG_VERSION "\n");
return 0;
}
if (argc < 4 || argc > 5)
{
char *program = argv[0];
pg_fprintf(stderr, "%s requires 4 arguments\n\n", program);
pg_fprintf(stderr, "Usage: %s <pid> <cancel key> <hostname> [port]\n", program);
return 1;
}
char *pidString = argv[1];
char *cancelKeyString = argv[2];
char *host = argv[3];
char *portString = "5432";
if (argc >= 5)
{
portString = argv[4];
}
/* parse the PID and cancellation key */
int pid = strtol(pidString, NULL, 10);
int cancelAuthCode = strtol(cancelKeyString, NULL, 10);
char errorBuffer[ERROR_BUFFER_SIZE] = { 0 };
struct addrinfo *ipAddressList;
struct addrinfo hint;
int ipAddressListFamily = AF_UNSPEC;
SockAddr socketAddress;
memset(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
hint.ai_family = ipAddressListFamily;
/* resolve the hostname to an IP */
int ret = pg_getaddrinfo_all(host, portString, &hint, &ipAddressList);
if (ret || !ipAddressList)
{
pg_fprintf(stderr, "could not translate host name \"%s\" to address: %s\n",
host, gai_strerror(ret));
return 1;
}
if (ipAddressList->ai_addrlen > sizeof(socketAddress.addr))
{
pg_fprintf(stderr, "invalid address length");
return 1;
}
/*
* Explanation of IGNORE-BANNED:
* This is a common pattern when using getaddrinfo. The system guarantees
* that ai_addrlen < sizeof(socketAddress.addr). Out of an abundance of
* caution. We also check it above.
*/
memcpy(&socketAddress.addr, ipAddressList->ai_addr, ipAddressList->ai_addrlen); /* IGNORE-BANNED */
	socketAddress.salen = ipAddressList->ai_addrlen;

	/* send the cancellation */
	bool cancelSucceeded = internal_cancel(&socketAddress, pid, cancelAuthCode,
										   errorBuffer, sizeof(errorBuffer));
	if (!cancelSucceeded)
	{
		pg_fprintf(stderr, "sending cancellation to %s:%s failed: %s",
				   host, portString, errorBuffer);
		return 1;
	}

	pg_freeaddrinfo_all(ipAddressListFamily, ipAddressList);

	return 0;
}

/* *INDENT-OFF* */

/*
 * internal_cancel is copied from fe-connect.c
 *
 * The return value is true if the cancel request was successfully
 * dispatched, false if not (in which case an error message is available).
 * Note: successful dispatch is no guarantee that there will be any effect at
 * the backend. The application must read the operation result as usual.
 *
 * CAUTION: we want this routine to be safely callable from a signal handler
 * (for example, an application might want to call it in a SIGINT handler).
 * This means we cannot use any C library routine that might be non-reentrant.
 * malloc/free are often non-reentrant, and anything that might call them is
 * just as dangerous. We avoid sprintf here for that reason. Building up
 * error messages with strcpy/strcat is tedious but should be quite safe.
 * We also save/restore errno in case the signal handler support doesn't.
 *
 * internal_cancel() is an internal helper function to make code-sharing
 * between the two versions of the cancel function possible.
 */
static int
internal_cancel(SockAddr *raddr, int be_pid, int be_key,
				char *errbuf, int errbufsize)
{
	int save_errno = SOCK_ERRNO;
	pgsocket tmpsock = PGINVALID_SOCKET;
	char sebuf[PG_STRERROR_R_BUFLEN];
	int maxlen;
	struct
	{
		uint32 packetlen;
		CancelRequestPacket cp;
	} crp;

	/*
	 * We need to open a temporary connection to the postmaster. Do this with
	 * only kernel calls.
	 */
	if ((tmpsock = socket(raddr->addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
	{
		strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
		goto cancel_errReturn;
	}
retry3:
	if (connect(tmpsock, (struct sockaddr *) &raddr->addr, raddr->salen) < 0)
	{
		if (SOCK_ERRNO == EINTR)
			/* Interrupted system call - we'll just try again */
			goto retry3;
		strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
		goto cancel_errReturn;
	}

	/*
	 * We needn't set nonblocking I/O or NODELAY options here.
	 */

	/* Create and send the cancel request packet. */
	crp.packetlen = pg_hton32((uint32) sizeof(crp));
	crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
	crp.cp.backendPID = pg_hton32(be_pid);
	crp.cp.cancelAuthCode = pg_hton32(be_key);

retry4:
	if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
	{
		if (SOCK_ERRNO == EINTR)
			/* Interrupted system call - we'll just try again */
			goto retry4;
		strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
		goto cancel_errReturn;
	}

	/*
	 * Wait for the postmaster to close the connection, which indicates that
	 * it's processed the request. Without this delay, we might issue another
	 * command only to find that our cancel zaps that command instead of the
	 * one we thought we were canceling. Note we don't actually expect this
	 * read to obtain any data, we are just waiting for EOF to be signaled.
	 */
retry5:
	if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
	{
		if (SOCK_ERRNO == EINTR)
			/* Interrupted system call - we'll just try again */
			goto retry5;
		/* we ignore other error conditions */
	}

	/* All done */
	closesocket(tmpsock);
	SOCK_ERRNO_SET(save_errno);
	return true;

cancel_errReturn:

	/*
	 * Make sure we don't overflow the error buffer. Leave space for the \n at
	 * the end, and for the terminating zero.
	 */
	maxlen = errbufsize - strlen(errbuf) - 2;
	if (maxlen >= 0)
	{
		/*
		 * Explanation of IGNORE-BANNED:
		 * This is well-tested libpq code that we would like to preserve in its
		 * original form. The appropriate length calculation is done above.
		 */
		strncat(errbuf, SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)), /* IGNORE-BANNED */
				maxlen);
		strcat(errbuf, "\n"); /* IGNORE-BANNED */
	}
	if (tmpsock != PGINVALID_SOCKET)
		closesocket(tmpsock);
	SOCK_ERRNO_SET(save_errno);
	return false;
}
/* *INDENT-ON* */
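
For comparison (not part of this diff): an application that holds its own connection does not need a copy of this helper, because libpq's public cancellation API drives the same wire protocol. The following is only a minimal sketch using the documented PQgetCancel, PQcancel, and PQfreeCancel calls, assuming an already-established PGconn.

/* Illustrative sketch, not taken from Citus: cancel via libpq's public API. */
#include <stdio.h>
#include <libpq-fe.h>

static void
cancel_current_query(PGconn *conn)
{
    char errbuf[256];

    /* PQgetCancel snapshots the backend PID and cancel key from the connection */
    PGcancel *cancel = PQgetCancel(conn);
    if (cancel == NULL)
    {
        fprintf(stderr, "could not allocate cancel object\n");
        return;
    }

    /* PQcancel opens its own socket, much like internal_cancel above */
    if (PQcancel(cancel, errbuf, sizeof(errbuf)) == 0)
    {
        fprintf(stderr, "sending cancellation failed: %s", errbuf);
    }

    PQfreeCancel(cancel);
}

Unlike internal_cancel above, the cancel key in this sketch comes from the live connection; the removed pg_send_cancellation binary instead took the target PID and auth code as explicit arguments so it could send a cancel on behalf of another process.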

View File

@ -20,7 +20,6 @@ extern bool ShouldUndistributeCitusLocalTable(Oid relationId);
extern List * ReferencedRelationIdList(Oid relationId);
extern List * ReferencingRelationIdList(Oid relationId);
extern void SetForeignConstraintRelationshipGraphInvalid(void);
extern void ClearForeignConstraintRelationshipGraphContext(void);
extern bool OidVisited(HTAB *oidVisitedMap, Oid oid);
extern void VisitOid(HTAB *oidVisitedMap, Oid oid);

View File

@ -156,6 +156,7 @@ extern void SendOrCollectCommandListToSingleNode(MetadataSyncContext *context,
extern void ActivateNodeList(MetadataSyncContext *context);
extern char * WorkerDropAllShellTablesCommand(bool singleTransaction);
extern char * WorkerDropSequenceDependencyCommand(Oid relationId);
extern void SyncDistributedObjects(MetadataSyncContext *context);
extern void SendNodeWideObjectsSyncCommands(MetadataSyncContext *context);
@ -180,8 +181,10 @@ extern void SendInterTableRelationshipCommands(MetadataSyncContext *context);
#define REMOVE_ALL_CITUS_TABLES_COMMAND \
"SELECT worker_drop_distributed_table(logicalrelid::regclass::text) FROM pg_dist_partition"
#define BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND \
#define BREAK_ALL_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND \
"SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition"
#define BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND \
"SELECT pg_catalog.worker_drop_sequence_dependency(%s);"
#define DISABLE_DDL_PROPAGATION "SET citus.enable_ddl_propagation TO 'off'"
#define ENABLE_DDL_PROPAGATION "SET citus.enable_ddl_propagation TO 'on'"
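
The reworked macros above split the old behaviour in two: BREAK_ALL_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND still walks every row of pg_dist_partition, while BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND is now a per-relation template whose %s placeholder is filled with a quoted relation name (as exercised by the mx_metadata_sync_multi_trans failure tests further down). A hypothetical formatting helper, shown only to illustrate the template and not taken from Citus itself:

/* Hypothetical illustration: instantiating the per-relation template. */
#include <stdio.h>

#define BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND \
    "SELECT pg_catalog.worker_drop_sequence_dependency(%s);"

static void
BuildDropSequenceDependencyCommand(char *buffer, size_t bufferSize,
                                   const char *quotedRelationName)
{
    /* e.g. quotedRelationName = "'mx_metadata_sync_multi_trans.dist1'" */
    snprintf(buffer, bufferSize,
             BREAK_CITUS_TABLE_SEQUENCE_DEPENDENCY_COMMAND,
             quotedRelationName);
}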

View File

@ -41,7 +41,7 @@
#define WORKER_PARTITIONED_RELATION_TOTAL_SIZE_FUNCTION \
"worker_partitioned_relation_total_size(%s)"
#define SHARD_SIZES_COLUMN_COUNT (3)
#define SHARD_SIZES_COLUMN_COUNT (2)
/*
* Flag to keep track of whether the process is currently in a function converting the

View File

@ -42,6 +42,13 @@ typedef struct TenantStats
int writesInLastPeriod;
int writesInThisPeriod;
/*
* CPU time usage of this tenant in this and last periods.
*/
double cpuUsageInLastPeriod;
double cpuUsageInThisPeriod;
/*
* The latest time this tenant ran a query. This value is used to update the score later.
*/
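
The two new counters back the cpu_usage_in_this_period and cpu_usage_in_last_period columns checked in the citus_stat_tenants expected output further down. As a rough, hypothetical sketch of the bookkeeping, and not the actual Citus implementation, per-query CPU time can be measured as a getrusage() delta and folded into the current period's total:

/* Hypothetical sketch only: measure a query's CPU time and add it to the period total. */
#include <sys/resource.h>

static double
CpuSecondsForSelf(void)
{
    struct rusage usage;
    getrusage(RUSAGE_SELF, &usage);

    /* user time plus system time, in seconds */
    return usage.ru_utime.tv_sec + usage.ru_stime.tv_sec +
           (usage.ru_utime.tv_usec + usage.ru_stime.tv_usec) / 1000000.0;
}

static void
RecordTenantCpu(double *cpuUsageInThisPeriod, double cpuSecondsAtQueryStart)
{
    /* the delta between query end and query start goes into the current period */
    *cpuUsageInThisPeriod += CpuSecondsForSelf() - cpuSecondsAtQueryStart;
}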

View File

@ -257,7 +257,7 @@ s/pg_cancel_backend\('[0-9]+'::bigint\)/pg_cancel_backend('xxxxx'::bigint)/g
s/issuing SELECT pg_cancel_backend\([0-9]+::integer\)/issuing SELECT pg_cancel_backend(xxxxx::integer)/g
# shard_rebalancer output for flaky nodeIds
s/issuing SELECT citus_copy_shard_placement\(43[0-9]+,[0-9]+,[0-9]+,'block_writes'\)/issuing SELECT citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')/g
s/issuing SELECT pg_catalog.citus_copy_shard_placement\(43[0-9]+,[0-9]+,[0-9]+,'block_writes'\)/issuing SELECT pg_catalog.citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')/g
# node id in run_command_on_all_nodes warning
s/Error on node with node id [0-9]+/Error on node with node id xxxxx/g
@ -309,3 +309,6 @@ s/(NOTICE: issuing SET LOCAL application_name TO 'citus_rebalancer gpid=)[0-9]+
s/improvement of 0.1[0-9]* is lower/improvement of 0.1xxxxx is lower/g
# normalize tenants statistics annotations
s/\/\*\{"tId":.*\*\///g
# Notice message that contains current columnar version that makes it harder to bump versions
s/(NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION )"[0-9]+\.[0-9]+-[0-9]+"/\1 "x.y-z"/

View File

@ -112,6 +112,14 @@ DEPS = {
"multi_mx_function_table_reference",
],
),
"background_rebalance": TestDeps(
None,
[
"multi_test_helpers",
"multi_cluster_management",
],
worker_count=3,
),
"background_rebalance_parallel": TestDeps(
None,
[

View File

@ -10,7 +10,6 @@ test: isolation_move_placement_vs_modification
test: isolation_move_placement_vs_modification_fk
test: isolation_tenant_isolation_with_fkey_to_reference
test: isolation_ref2ref_foreign_keys_enterprise
test: isolation_pg_send_cancellation
test: isolation_shard_move_vs_start_metadata_sync
test: isolation_tenant_isolation
test: isolation_tenant_isolation_nonblocking

View File

@ -310,6 +310,61 @@ SELECT public.wait_until_metadata_sync(30000);
(1 row)
-- make sure a non-super user can rebalance when there are reference tables to replicate
CREATE TABLE ref_table(a int primary key);
SELECT create_reference_table('ref_table');
create_reference_table
---------------------------------------------------------------------
(1 row)
-- add a new node to trigger replicate_reference_tables task
SELECT 1 FROM citus_add_node('localhost', :worker_3_port);
?column?
---------------------------------------------------------------------
1
(1 row)
SET ROLE non_super_user_rebalance;
SELECT 1 FROM citus_rebalance_start(shard_transfer_mode := 'force_logical');
NOTICE: Scheduled 1 moves as job xxx
DETAIL: Rebalance scheduled as background job
HINT: To monitor progress, run: SELECT * FROM citus_rebalance_status();
?column?
---------------------------------------------------------------------
1
(1 row)
-- wait for success
SELECT citus_rebalance_wait();
citus_rebalance_wait
---------------------------------------------------------------------
(1 row)
SELECT state, details from citus_rebalance_status();
state | details
---------------------------------------------------------------------
finished | {"tasks": [], "task_state_counts": {"done": 2}}
(1 row)
RESET ROLE;
-- reset the number of nodes by removing the previously added node
SELECT 1 FROM citus_drain_node('localhost', :worker_3_port);
NOTICE: Moving shard xxxxx from localhost:xxxxx to localhost:xxxxx ...
?column?
---------------------------------------------------------------------
1
(1 row)
CALL citus_cleanup_orphaned_resources();
NOTICE: cleaned up 1 orphaned resources
SELECT 1 FROM citus_remove_node('localhost', :worker_3_port);
?column?
---------------------------------------------------------------------
1
(1 row)
SET client_min_messages TO WARNING;
DROP SCHEMA background_rebalance CASCADE;
DROP USER non_super_user_rebalance;

View File

@ -466,17 +466,27 @@ SELECT citus_rebalance_start AS job_id from citus_rebalance_start() \gset
-- see dependent tasks to understand which tasks remain runnable because of
-- citus.max_background_task_executors_per_node
-- and which tasks are actually blocked from colocation group dependencies
SELECT D.task_id,
(SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.task_id),
D.depends_on,
(SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.depends_on)
FROM pg_dist_background_task_depend D WHERE job_id in (:job_id) ORDER BY D.task_id, D.depends_on ASC;
task_id | command | depends_on | command
SELECT (SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.task_id),
(SELECT T.command depends_on_command FROM pg_dist_background_task T WHERE T.task_id = D.depends_on)
FROM pg_dist_background_task_depend D WHERE job_id in (:job_id) ORDER BY 1, 2 ASC;
command | depends_on_command
---------------------------------------------------------------------
1014 | SELECT pg_catalog.citus_move_shard_placement(85674026,50,57,'auto') | 1013 | SELECT pg_catalog.citus_move_shard_placement(85674025,50,56,'auto')
1016 | SELECT pg_catalog.citus_move_shard_placement(85674032,50,57,'auto') | 1015 | SELECT pg_catalog.citus_move_shard_placement(85674031,50,56,'auto')
1018 | SELECT pg_catalog.citus_move_shard_placement(85674038,50,57,'auto') | 1017 | SELECT pg_catalog.citus_move_shard_placement(85674037,50,56,'auto')
1020 | SELECT pg_catalog.citus_move_shard_placement(85674044,50,57,'auto') | 1019 | SELECT pg_catalog.citus_move_shard_placement(85674043,50,56,'auto')
SELECT pg_catalog.citus_move_shard_placement(85674026,50,57,'auto') | SELECT pg_catalog.citus_move_shard_placement(85674025,50,56,'auto')
SELECT pg_catalog.citus_move_shard_placement(85674032,50,57,'auto') | SELECT pg_catalog.citus_move_shard_placement(85674031,50,56,'auto')
SELECT pg_catalog.citus_move_shard_placement(85674038,50,57,'auto') | SELECT pg_catalog.citus_move_shard_placement(85674037,50,56,'auto')
SELECT pg_catalog.citus_move_shard_placement(85674044,50,57,'auto') | SELECT pg_catalog.citus_move_shard_placement(85674043,50,56,'auto')
(4 rows)
SELECT task_id, depends_on
FROM pg_dist_background_task_depend
WHERE job_id in (:job_id)
ORDER BY 1, 2 ASC;
task_id | depends_on
---------------------------------------------------------------------
1014 | 1013
1016 | 1015
1018 | 1017
1020 | 1019
(4 rows)
-- default citus.max_background_task_executors_per_node is 1

View File

@ -71,14 +71,17 @@ INSERT INTO dist_tbl VALUES (2, 'abcd');
UPDATE dist_tbl SET b = a + 1 WHERE a = 3;
UPDATE dist_tbl SET b = a + 1 WHERE a = 4;
DELETE FROM dist_tbl WHERE a = 5;
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants(true) ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants(true)
ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period | cpu_is_used_in_this_period | cpu_is_used_in_last_period
---------------------------------------------------------------------
1 | 0 | 0 | 1 | 0
2 | 0 | 0 | 1 | 0
3 | 0 | 0 | 1 | 0
4 | 0 | 0 | 1 | 0
5 | 0 | 0 | 1 | 0
1 | 0 | 0 | 1 | 0 | t | f
2 | 0 | 0 | 1 | 0 | t | f
3 | 0 | 0 | 1 | 0 | t | f
4 | 0 | 0 | 1 | 0 | t | f
5 | 0 | 0 | 1 | 0 | t | f
(5 rows)
SELECT citus_stat_tenants_reset();
@ -241,26 +244,48 @@ SELECT count(*)>=0 FROM dist_tbl WHERE a = 1;
INSERT INTO dist_tbl VALUES (5, 'abcd');
\c - - - :worker_1_port
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants_local
ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period | cpu_is_used_in_this_period | cpu_is_used_in_last_period
---------------------------------------------------------------------
1 | 1 | 0 | 1 | 0
5 | 0 | 0 | 1 | 0
1 | 1 | 0 | 1 | 0 | t | f
5 | 0 | 0 | 1 | 0 | t | f
(2 rows)
-- simulate passing the period
SET citus.stat_tenants_period TO 2;
SET citus.stat_tenants_period TO 5;
SELECT sleep_until_next_period();
sleep_until_next_period
---------------------------------------------------------------------
(1 row)
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants_local
ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period | cpu_is_used_in_this_period | cpu_is_used_in_last_period
---------------------------------------------------------------------
1 | 0 | 1 | 0 | 1
5 | 0 | 0 | 0 | 1
1 | 0 | 1 | 0 | 1 | f | t
5 | 0 | 0 | 0 | 1 | f | t
(2 rows)
SELECT sleep_until_next_period();
sleep_until_next_period
---------------------------------------------------------------------
(1 row)
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants_local
ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period | cpu_is_used_in_this_period | cpu_is_used_in_last_period
---------------------------------------------------------------------
1 | 0 | 0 | 0 | 0 | f | f
5 | 0 | 0 | 0 | 0 | f | f
(2 rows)
\c - - - :master_port
@ -478,13 +503,17 @@ SELECT count(*)>=0 FROM dist_tbl_text WHERE a = 'bcde*';
t
(1 row)
DELETE FROM dist_tbl_text WHERE a = '/b*c/de';
DELETE FROM dist_tbl_text WHERE a = '/bcde';
DELETE FROM dist_tbl_text WHERE a = U&'\0061\0308bc';
DELETE FROM dist_tbl_text WHERE a = 'bcde*';
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period
---------------------------------------------------------------------
/b*c/de | 1 | 0 | 1 | 0
/bcde | 1 | 0 | 1 | 0
äbc | 1 | 0 | 1 | 0
bcde* | 1 | 0 | 1 | 0
/b*c/de | 1 | 0 | 2 | 0
/bcde | 1 | 0 | 2 | 0
äbc | 1 | 0 | 2 | 0
bcde* | 1 | 0 | 2 | 0
(4 rows)
-- test local cached queries & prepared statements
@ -564,10 +593,10 @@ EXECUTE dist_tbl_text_select_plan('bcde*');
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period
---------------------------------------------------------------------
/b*c/de | 4 | 0 | 4 | 0
/bcde | 4 | 0 | 4 | 0
äbc | 4 | 0 | 4 | 0
bcde* | 4 | 0 | 4 | 0
/b*c/de | 4 | 0 | 5 | 0
/bcde | 4 | 0 | 5 | 0
äbc | 4 | 0 | 5 | 0
bcde* | 4 | 0 | 5 | 0
(4 rows)
\c - - - :master_port
@ -650,10 +679,10 @@ SET search_path TO citus_stat_tenants;
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants ORDER BY tenant_attribute;
tenant_attribute | read_count_in_this_period | read_count_in_last_period | query_count_in_this_period | query_count_in_last_period
---------------------------------------------------------------------
/b*c/de | 7 | 0 | 7 | 0
/bcde | 7 | 0 | 7 | 0
äbc | 7 | 0 | 7 | 0
bcde* | 7 | 0 | 7 | 0
/b*c/de | 7 | 0 | 8 | 0
/bcde | 7 | 0 | 8 | 0
äbc | 7 | 0 | 8 | 0
bcde* | 7 | 0 | 8 | 0
(4 rows)
\c - - - :master_port
@ -716,5 +745,131 @@ SELECT count(*)>=0 FROM citus_stat_tenants_local();
RESET ROLE;
DROP ROLE stats_non_superuser;
-- test function push down
CREATE OR REPLACE FUNCTION
select_from_dist_tbl_text(p_keyword text)
RETURNS boolean LANGUAGE plpgsql AS $fn$
BEGIN
RETURN(SELECT count(*)>=0 FROM citus_stat_tenants.dist_tbl_text WHERE a = $1);
END;
$fn$;
SELECT create_distributed_function(
'select_from_dist_tbl_text(text)', 'p_keyword', colocate_with => 'dist_tbl_text'
);
create_distributed_function
---------------------------------------------------------------------
(1 row)
SELECT citus_stat_tenants_reset();
citus_stat_tenants_reset
---------------------------------------------------------------------
(1 row)
SELECT select_from_dist_tbl_text('/b*c/de');
select_from_dist_tbl_text
---------------------------------------------------------------------
t
(1 row)
SELECT select_from_dist_tbl_text('/b*c/de');
select_from_dist_tbl_text
---------------------------------------------------------------------
t
(1 row)
SELECT select_from_dist_tbl_text(U&'\0061\0308bc');
select_from_dist_tbl_text
---------------------------------------------------------------------
t
(1 row)
SELECT select_from_dist_tbl_text(U&'\0061\0308bc');
select_from_dist_tbl_text
---------------------------------------------------------------------
t
(1 row)
SELECT tenant_attribute, query_count_in_this_period FROM citus_stat_tenants;
tenant_attribute | query_count_in_this_period
---------------------------------------------------------------------
/b*c/de | 2
äbc | 2
(2 rows)
CREATE OR REPLACE PROCEDURE select_from_dist_tbl_text_proc(
p_keyword text
)
LANGUAGE plpgsql
AS $$
BEGIN
PERFORM select_from_dist_tbl_text(p_keyword);
PERFORM count(*)>=0 FROM citus_stat_tenants.dist_tbl_text WHERE b < 0;
PERFORM count(*)>=0 FROM citus_stat_tenants.dist_tbl_text;
PERFORM count(*)>=0 FROM citus_stat_tenants.dist_tbl_text WHERE a = p_keyword;
COMMIT;
END;$$;
CALL citus_stat_tenants.select_from_dist_tbl_text_proc('/b*c/de');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc('/b*c/de');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc('/b*c/de');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(U&'\0061\0308bc');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(U&'\0061\0308bc');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(U&'\0061\0308bc');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(NULL);
SELECT tenant_attribute, query_count_in_this_period FROM citus_stat_tenants;
tenant_attribute | query_count_in_this_period
---------------------------------------------------------------------
/b*c/de | 8
äbc | 8
(2 rows)
CREATE OR REPLACE VIEW
select_from_dist_tbl_text_view
AS
SELECT * FROM citus_stat_tenants.dist_tbl_text;
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = '/b*c/de';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = '/b*c/de';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = '/b*c/de';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = U&'\0061\0308bc';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = U&'\0061\0308bc';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = U&'\0061\0308bc';
?column?
---------------------------------------------------------------------
t
(1 row)
SELECT tenant_attribute, query_count_in_this_period FROM citus_stat_tenants;
tenant_attribute | query_count_in_this_period
---------------------------------------------------------------------
/b*c/de | 11
äbc | 11
(2 rows)
SET client_min_messages TO ERROR;
DROP SCHEMA citus_stat_tenants CASCADE;

View File

@ -64,11 +64,11 @@ SET citus.multi_shard_modify_mode TO sequential;
SELECT citus_update_table_statistics('test_table_statistics_hash');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, 'public.test_table_statistics_hash_981000' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981000') UNION ALL SELECT 981001 AS shard_id, 'public.test_table_statistics_hash_981001' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981001') UNION ALL SELECT 981002 AS shard_id, 'public.test_table_statistics_hash_981002' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981002') UNION ALL SELECT 981003 AS shard_id, 'public.test_table_statistics_hash_981003' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981003') UNION ALL SELECT 981004 AS shard_id, 'public.test_table_statistics_hash_981004' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981004') UNION ALL SELECT 981005 AS shard_id, 'public.test_table_statistics_hash_981005' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981005') UNION ALL SELECT 981006 AS shard_id, 'public.test_table_statistics_hash_981006' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981006') UNION ALL SELECT 981007 AS shard_id, 'public.test_table_statistics_hash_981007' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981007') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT shard_id, pg_total_relation_size(table_name) FROM (VALUES (981000, 'public.test_table_statistics_hash_981000'), (981001, 'public.test_table_statistics_hash_981001'), (981002, 'public.test_table_statistics_hash_981002'), (981003, 'public.test_table_statistics_hash_981003'), (981004, 'public.test_table_statistics_hash_981004'), (981005, 'public.test_table_statistics_hash_981005'), (981006, 'public.test_table_statistics_hash_981006'), (981007, 'public.test_table_statistics_hash_981007')) t(shard_id, table_name) WHERE to_regclass(table_name) IS NOT NULL
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981000 AS shard_id, 'public.test_table_statistics_hash_981000' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981000') UNION ALL SELECT 981001 AS shard_id, 'public.test_table_statistics_hash_981001' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981001') UNION ALL SELECT 981002 AS shard_id, 'public.test_table_statistics_hash_981002' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981002') UNION ALL SELECT 981003 AS shard_id, 'public.test_table_statistics_hash_981003' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981003') UNION ALL SELECT 981004 AS shard_id, 'public.test_table_statistics_hash_981004' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981004') UNION ALL SELECT 981005 AS shard_id, 'public.test_table_statistics_hash_981005' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981005') UNION ALL SELECT 981006 AS shard_id, 'public.test_table_statistics_hash_981006' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981006') UNION ALL SELECT 981007 AS shard_id, 'public.test_table_statistics_hash_981007' AS shard_name, pg_total_relation_size('public.test_table_statistics_hash_981007') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT shard_id, pg_total_relation_size(table_name) FROM (VALUES (981000, 'public.test_table_statistics_hash_981000'), (981001, 'public.test_table_statistics_hash_981001'), (981002, 'public.test_table_statistics_hash_981002'), (981003, 'public.test_table_statistics_hash_981003'), (981004, 'public.test_table_statistics_hash_981004'), (981005, 'public.test_table_statistics_hash_981005'), (981006, 'public.test_table_statistics_hash_981006'), (981007, 'public.test_table_statistics_hash_981007')) t(shard_id, table_name) WHERE to_regclass(table_name) IS NOT NULL
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -152,11 +152,11 @@ SET citus.multi_shard_modify_mode TO sequential;
SELECT citus_update_table_statistics('test_table_statistics_append');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, 'public.test_table_statistics_append_981008' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981008') UNION ALL SELECT 981009 AS shard_id, 'public.test_table_statistics_append_981009' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981009') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT shard_id, pg_total_relation_size(table_name) FROM (VALUES (981008, 'public.test_table_statistics_append_981008'), (981009, 'public.test_table_statistics_append_981009')) t(shard_id, table_name) WHERE to_regclass(table_name) IS NOT NULL
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT 981008 AS shard_id, 'public.test_table_statistics_append_981008' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981008') UNION ALL SELECT 981009 AS shard_id, 'public.test_table_statistics_append_981009' AS shard_name, pg_total_relation_size('public.test_table_statistics_append_981009') UNION ALL SELECT 0::bigint, NULL::text, 0::bigint;
NOTICE: issuing SELECT shard_id, pg_total_relation_size(table_name) FROM (VALUES (981008, 'public.test_table_statistics_append_981008'), (981009, 'public.test_table_statistics_append_981009')) t(shard_id, table_name) WHERE to_regclass(table_name) IS NOT NULL
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx

View File

@ -36,6 +36,19 @@ set citus.shard_replication_factor to 2;
select create_distributed_table_concurrently('test','key', 'hash');
ERROR: cannot distribute a table concurrently when citus.shard_replication_factor > 1
set citus.shard_replication_factor to 1;
set citus.shard_replication_factor to 2;
create table dist_1(a int);
select create_distributed_table('dist_1', 'a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
set citus.shard_replication_factor to 1;
create table dist_2(a int);
select create_distributed_table_concurrently('dist_2', 'a', colocate_with=>'dist_1');
ERROR: cannot create distributed table concurrently because Citus allows concurrent table distribution only when citus.shard_replication_factor = 1
HINT: table dist_2 is requested to be colocated with dist_1 which has citus.shard_replication_factor > 1
begin;
select create_distributed_table_concurrently('test','key');
ERROR: create_distributed_table_concurrently cannot run inside a transaction block
@ -138,27 +151,8 @@ select count(*) from test;
rollback;
-- verify that we can undistribute the table
begin;
set local client_min_messages to warning;
select undistribute_table('test', cascade_via_foreign_keys := true);
NOTICE: converting the partitions of create_distributed_table_concurrently.test
NOTICE: creating a new table for create_distributed_table_concurrently.test
NOTICE: dropping the old create_distributed_table_concurrently.test
NOTICE: renaming the new table to create_distributed_table_concurrently.test
NOTICE: creating a new table for create_distributed_table_concurrently.ref
NOTICE: moving the data of create_distributed_table_concurrently.ref
NOTICE: dropping the old create_distributed_table_concurrently.ref
NOTICE: drop cascades to constraint test_id_fkey_1190041 on table create_distributed_table_concurrently.test_1190041
CONTEXT: SQL statement "SELECT citus_drop_all_shards(v_obj.objid, v_obj.schema_name, v_obj.object_name, drop_shards_metadata_only := false)"
PL/pgSQL function citus_drop_trigger() line XX at PERFORM
SQL statement "DROP TABLE create_distributed_table_concurrently.ref CASCADE"
NOTICE: renaming the new table to create_distributed_table_concurrently.ref
NOTICE: creating a new table for create_distributed_table_concurrently.test_1
NOTICE: moving the data of create_distributed_table_concurrently.test_1
NOTICE: dropping the old create_distributed_table_concurrently.test_1
NOTICE: renaming the new table to create_distributed_table_concurrently.test_1
NOTICE: creating a new table for create_distributed_table_concurrently.test_2
NOTICE: moving the data of create_distributed_table_concurrently.test_2
NOTICE: dropping the old create_distributed_table_concurrently.test_2
NOTICE: renaming the new table to create_distributed_table_concurrently.test_2
undistribute_table
---------------------------------------------------------------------
@ -245,7 +239,7 @@ insert into dist_table4 select s from generate_series(1,100) s;
select count(*) as total from dist_table4;
total
---------------------------------------------------------------------
100
100
(1 row)
-- verify we do not allow foreign keys from distributed table to citus local table concurrently
@ -295,13 +289,13 @@ select count(*) from test_columnar;
select id from test_columnar where id = 1;
id
---------------------------------------------------------------------
1
1
(1 row)
select id from test_columnar where id = 51;
id
---------------------------------------------------------------------
51
51
(1 row)
select count(*) from test_columnar_1;

View File

@ -187,6 +187,8 @@ ORDER BY placementid;
(1 row)
-- reset cluster to original state
ALTER SEQUENCE pg_dist_node_nodeid_seq RESTART 2;
ALTER SEQUENCE pg_dist_groupid_seq RESTART 2;
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
---------------------------------------------------------------------
@ -196,7 +198,7 @@ SELECT citus.mitmproxy('conn.allow()');
SELECT master_add_node('localhost', :worker_2_proxy_port);
master_add_node
---------------------------------------------------------------------
4
2
(1 row)
-- verify node is added

View File

@ -12,6 +12,8 @@ SET citus.shard_count TO 2;
SET citus.shard_replication_factor TO 1;
SET citus.max_adaptive_executor_pool_size TO 1;
SELECT pg_backend_pid() as pid \gset
ALTER SEQUENCE pg_catalog.pg_dist_shardid_seq RESTART 222222;
ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART 333333;
-- make sure coordinator is in the metadata
SELECT citus_set_coordinator_host('localhost', 57636);
citus_set_coordinator_host
@ -189,8 +191,8 @@ SELECT create_distributed_table_concurrently('table_1', 'id');
SELECT * FROM pg_dist_shard WHERE logicalrelid = 'table_1'::regclass;
logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
---------------------------------------------------------------------
table_1 | 1880080 | t | -2147483648 | -1
table_1 | 1880081 | t | 0 | 2147483647
table_1 | 222247 | t | -2147483648 | -1
table_1 | 222248 | t | 0 | 2147483647
(2 rows)
DROP SCHEMA create_dist_tbl_con CASCADE;
@ -201,3 +203,5 @@ SELECT citus_remove_node('localhost', 57636);
(1 row)
ALTER SEQUENCE pg_dist_node_nodeid_seq RESTART 3;
ALTER SEQUENCE pg_dist_groupid_seq RESTART 3;

View File

@ -19,10 +19,30 @@ SET client_min_messages TO ERROR;
-- Create roles
CREATE ROLE foo1;
CREATE ROLE foo2;
-- Create collation
CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk');
-- Create type
CREATE TYPE pair_type AS (a int, b int);
-- Create function
CREATE FUNCTION one_as_result() RETURNS INT LANGUAGE SQL AS
$$
SELECT 1;
$$;
-- Create text search dictionary
CREATE TEXT SEARCH DICTIONARY my_german_dict (
template = snowball,
language = german,
stopwords = german
);
-- Create text search config
CREATE TEXT SEARCH CONFIGURATION my_ts_config ( parser = default );
ALTER TEXT SEARCH CONFIGURATION my_ts_config ALTER MAPPING FOR asciiword WITH my_german_dict;
-- Create sequence
CREATE SEQUENCE seq;
-- Create colocated distributed tables
CREATE TABLE dist1 (id int PRIMARY KEY default nextval('seq'));
CREATE TABLE dist1 (id int PRIMARY KEY default nextval('seq'), col int default (one_as_result()), myserial serial, phone text COLLATE german_phonebook, initials pair_type);
CREATE SEQUENCE seq_owned OWNED BY dist1.id;
CREATE INDEX dist1_search_phone_idx ON dist1 USING gin (to_tsvector('my_ts_config'::regconfig, (COALESCE(phone, ''::text))::text));
SELECT create_distributed_table('dist1', 'id');
create_distributed_table
---------------------------------------------------------------------
@ -52,12 +72,30 @@ CREATE TABLE loc1 (id int PRIMARY KEY);
INSERT INTO loc1 SELECT i FROM generate_series(1,100) i;
CREATE TABLE loc2 (id int REFERENCES loc1(id));
INSERT INTO loc2 SELECT i FROM generate_series(1,100) i;
-- Create publication
CREATE PUBLICATION pub_all;
-- citus_set_coordinator_host with wrong port
SELECT citus_set_coordinator_host('localhost', 9999);
citus_set_coordinator_host
---------------------------------------------------------------------
(1 row)
-- citus_set_coordinator_host with correct port
SELECT citus_set_coordinator_host('localhost', :master_port);
citus_set_coordinator_host
---------------------------------------------------------------------
(1 row)
-- show coordinator port is correct on all workers
SELECT * FROM run_command_on_workers($$SELECT row(nodename,nodeport) FROM pg_dist_node WHERE groupid = 0$$);
nodename | nodeport | success | result
---------------------------------------------------------------------
localhost | 9060 | t | (localhost,57636)
localhost | 57637 | t | (localhost,57636)
(2 rows)
SELECT citus_add_local_table_to_metadata('loc1', cascade_via_foreign_keys => true);
citus_add_local_table_to_metadata
---------------------------------------------------------------------
@ -152,8 +190,8 @@ SELECT citus.mitmproxy('conn.onQuery(query="INSERT INTO pg_dist_node").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to drop sequence
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency").cancel(' || :pid || ')');
-- Failure to drop sequence dependency for all tables
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*FROM pg_dist_partition").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
@ -161,7 +199,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequen
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency").kill()');
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*FROM pg_dist_partition").kill()');
mitmproxy
---------------------------------------------------------------------
@ -305,7 +343,7 @@ SELECT citus.mitmproxy('conn.onQuery(query="ALTER DATABASE.*OWNER TO").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Filure to create schema
-- Failure to create schema
SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metadata_sync_multi_trans AUTHORIZATION").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
@ -320,6 +358,108 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metad
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create collation
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*CREATE COLLATION mx_metadata_sync_multi_trans.german_phonebook").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*CREATE COLLATION mx_metadata_sync_multi_trans.german_phonebook").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create function
SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metadata_sync_multi_trans.one_as_result").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metadata_sync_multi_trans.one_as_result").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create text search dictionary
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_german_dict").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_german_dict").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create text search config
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_ts_config").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_ts_config").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create type
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*pair_type").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*pair_type").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create publication
SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create sequence
@ -337,6 +477,40 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to drop sequence dependency for distributed table
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*mx_metadata_sync_multi_trans.dist1").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to drop distributed table if exists
SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_sync_multi_trans.dist1").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create distributed table
@ -354,6 +528,40 @@ SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to record sequence dependency for table
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequence_dependency").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequence_dependency").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create index for table
SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx ON mx_metadata_sync_multi_trans.dist1 USING gin").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx ON mx_metadata_sync_multi_trans.dist1 USING gin").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to create reference table
@ -524,6 +732,125 @@ SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_met
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark function as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as_result").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as_result").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark collation as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german_phonebook").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german_phonebook").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark text search dictionary as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_german_dict").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_german_dict").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark text search configuration as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_config").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_config").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark type as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_type").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_type").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark sequence as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_owned").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_owned").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to mark publication as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_all").cancel(' || :pid || ')');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: canceling statement due to user request
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_all").kill()');
mitmproxy
---------------------------------------------------------------------
(1 row)
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
ERROR: connection to the remote node localhost:xxxxx failed with the following error: connection not open
-- Failure to set isactive to true
@ -581,8 +908,8 @@ ERROR: connection not open
SELECT * FROM pg_dist_node ORDER BY nodeport;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
---------------------------------------------------------------------
4 | 4 | localhost | 9060 | default | f | t | primary | default | f | t
6 | 0 | localhost | 57636 | default | t | t | primary | default | t | f
2 | 2 | localhost | 9060 | default | f | t | primary | default | f | t
3 | 0 | localhost | 57636 | default | t | t | primary | default | t | f
1 | 1 | localhost | 57637 | default | t | t | primary | default | t | t
(3 rows)
@ -610,24 +937,14 @@ UPDATE dist1 SET id = :failed_node_val WHERE id = :failed_node_val;
-- Show that we can still delete from a shard at the node from coordinator
DELETE FROM dist1 WHERE id = :failed_node_val;
-- Show that DDL would still propagate to the node
SET client_min_messages TO NOTICE;
SET citus.log_remote_commands TO 1;
CREATE SCHEMA dummy;
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_distributed_transaction_id(xx, xx, 'xxxxxxx');
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
NOTICE: issuing CREATE SCHEMA dummy
NOTICE: issuing SET citus.enable_ddl_propagation TO 'on'
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
NOTICE: issuing CREATE SCHEMA dummy
NOTICE: issuing SET citus.enable_ddl_propagation TO 'on'
NOTICE: issuing WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('schema', ARRAY['dummy']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
NOTICE: issuing PREPARE TRANSACTION 'citus_xx_xx_xx_xx'
NOTICE: issuing PREPARE TRANSACTION 'citus_xx_xx_xx_xx'
NOTICE: issuing COMMIT PREPARED 'citus_xx_xx_xx_xx'
NOTICE: issuing COMMIT PREPARED 'citus_xx_xx_xx_xx'
SET citus.log_remote_commands TO 0;
SET client_min_messages TO ERROR;
SELECT * FROM run_command_on_workers($$SELECT nspname FROM pg_namespace WHERE nspname = 'dummy'$$);
nodename | nodeport | success | result
---------------------------------------------------------------------
localhost | 9060 | t | dummy
localhost | 57637 | t | dummy
(2 rows)
-- Successfully activate the node after many failures
SELECT citus.mitmproxy('conn.allow()');
mitmproxy
@ -638,14 +955,14 @@ SELECT citus.mitmproxy('conn.allow()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
citus_activate_node
---------------------------------------------------------------------
4
2
(1 row)
-- Activate the node once more to verify it works again with already synced metadata
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
citus_activate_node
---------------------------------------------------------------------
4
2
(1 row)
-- Show node metadata info on worker2 and coordinator after success
@ -653,8 +970,8 @@ SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT * FROM pg_dist_node ORDER BY nodeport;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
---------------------------------------------------------------------
4 | 4 | localhost | 9060 | default | t | t | primary | default | t | t
6 | 0 | localhost | 57636 | default | t | t | primary | default | t | f
2 | 2 | localhost | 9060 | default | t | t | primary | default | t | t
3 | 0 | localhost | 57636 | default | t | t | primary | default | t | f
1 | 1 | localhost | 57637 | default | t | t | primary | default | t | t
(3 rows)
@ -662,8 +979,8 @@ SELECT * FROM pg_dist_node ORDER BY nodeport;
SELECT * FROM pg_dist_node ORDER BY nodeport;
nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
---------------------------------------------------------------------
4 | 4 | localhost | 9060 | default | t | t | primary | default | t | t
6 | 0 | localhost | 57636 | default | t | t | primary | default | t | f
2 | 2 | localhost | 9060 | default | t | t | primary | default | t | t
3 | 0 | localhost | 57636 | default | t | t | primary | default | t | f
1 | 1 | localhost | 57637 | default | t | t | primary | default | t | t
(3 rows)
@ -674,9 +991,10 @@ SELECT citus.mitmproxy('conn.allow()');
(1 row)
RESET citus.metadata_sync_mode;
DROP PUBLICATION pub_all;
DROP SCHEMA dummy;
DROP SCHEMA mx_metadata_sync_multi_trans CASCADE;
NOTICE: drop cascades to 10 other objects
NOTICE: drop cascades to 15 other objects
DROP ROLE foo1;
DROP ROLE foo2;
SELECT citus_remove_node('localhost', :master_port);
@ -685,3 +1003,5 @@ SELECT citus_remove_node('localhost', :master_port);
(1 row)
ALTER SEQUENCE pg_dist_node_nodeid_seq RESTART 3;
ALTER SEQUENCE pg_dist_groupid_seq RESTART 3;

View File

@ -210,6 +210,7 @@ select create_distributed_table('partitioned_tbl_with_fkey','x');
create table partition_1_with_fkey partition of partitioned_tbl_with_fkey for values from ('2022-01-01') to ('2022-12-31');
create table partition_2_with_fkey partition of partitioned_tbl_with_fkey for values from ('2023-01-01') to ('2023-12-31');
create table partition_3_with_fkey partition of partitioned_tbl_with_fkey for values from ('2024-01-01') to ('2024-12-31');
insert into partitioned_tbl_with_fkey (x,y) select s,s%10 from generate_series(1,100) s;
ALTER TABLE partitioned_tbl_with_fkey ADD CONSTRAINT fkey_to_ref_tbl FOREIGN KEY (y) REFERENCES ref_table_with_fkey(id);
WITH shardid AS (SELECT shardid FROM pg_dist_shard where logicalrelid = 'partitioned_tbl_with_fkey'::regclass ORDER BY shardid LIMIT 1)

View File

@ -1,42 +0,0 @@
Parsed test spec with 2 sessions
starting permutation: s1-register s2-lock s1-lock s2-wrong-cancel-1 s2-wrong-cancel-2 s2-cancel
step s1-register:
INSERT INTO cancel_table VALUES (pg_backend_pid(), get_cancellation_key());
step s2-lock:
BEGIN;
LOCK TABLE cancel_table IN ACCESS EXCLUSIVE MODE;
step s1-lock:
BEGIN;
LOCK TABLE cancel_table IN ACCESS EXCLUSIVE MODE;
END;
<waiting ...>
step s2-wrong-cancel-1:
SELECT run_pg_send_cancellation(pid + 1, cancel_key) FROM cancel_table;
run_pg_send_cancellation
---------------------------------------------------------------------
(1 row)
step s2-wrong-cancel-2:
SELECT run_pg_send_cancellation(pid, cancel_key + 1) FROM cancel_table;
run_pg_send_cancellation
---------------------------------------------------------------------
(1 row)
step s2-cancel:
SELECT run_pg_send_cancellation(pid, cancel_key) FROM cancel_table;
END;
run_pg_send_cancellation
---------------------------------------------------------------------
(1 row)
step s1-lock: <... completed>
ERROR: canceling statement due to user request

View File

@ -1290,8 +1290,82 @@ SELECT pg_identify_object_as_address(classid, objid, objsubid) from pg_catalog.p
(schema,{test_schema_for_sequence_propagation},{})
(1 row)
-- Bug: https://github.com/citusdata/citus/issues/7378
-- Create a reference table
CREATE TABLE tbl_ref_mats(row_id integer primary key);
INSERT INTO tbl_ref_mats VALUES (1), (2);
SELECT create_reference_table('tbl_ref_mats');
NOTICE: Copying data from local table...
NOTICE: copying the data has completed
DETAIL: The local data in the table is no longer visible, but is still on disk.
HINT: To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$public.tbl_ref_mats$$)
create_reference_table
---------------------------------------------------------------------
(1 row)
-- Create a distributed table
CREATE TABLE tbl_dist_mats(series_id integer);
INSERT INTO tbl_dist_mats VALUES (1), (1), (2), (2);
SELECT create_distributed_table('tbl_dist_mats', 'series_id');
NOTICE: Copying data from local table...
NOTICE: copying the data has completed
DETAIL: The local data in the table is no longer visible, but is still on disk.
HINT: To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$public.tbl_dist_mats$$)
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- Create a view that joins the distributed table with the reference table on the distribution key.
CREATE VIEW vw_citus_views as
SELECT d.series_id FROM tbl_dist_mats d JOIN tbl_ref_mats r ON d.series_id = r.row_id;
-- The view initially works fine
SELECT * FROM vw_citus_views ORDER BY 1;
series_id
---------------------------------------------------------------------
1
1
2
2
(4 rows)
-- Now, alter the table
ALTER TABLE tbl_ref_mats ADD COLUMN category1 varchar(50);
SELECT * FROM vw_citus_views ORDER BY 1;
series_id
---------------------------------------------------------------------
1
1
2
2
(4 rows)
ALTER TABLE tbl_ref_mats ADD COLUMN category2 varchar(50);
SELECT * FROM vw_citus_views ORDER BY 1;
series_id
---------------------------------------------------------------------
1
1
2
2
(4 rows)
ALTER TABLE tbl_ref_mats DROP COLUMN category1;
SELECT * FROM vw_citus_views ORDER BY 1;
series_id
---------------------------------------------------------------------
1
1
2
2
(4 rows)
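A minimal sketch, not part of the regression test above, for listing the columns the coordinator reports for the altered reference table; handy when checking by hand that added and dropped columns are reflected as expected (run it before the cleanup below).
-- Hedged sketch: standard pg_attribute lookup on the altered table.
SELECT attnum, attname, atttypid::regtype
FROM pg_attribute
WHERE attrelid = 'tbl_ref_mats'::regclass
  AND attnum > 0
  AND NOT attisdropped
ORDER BY attnum;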
DROP SCHEMA test_schema_for_sequence_propagation CASCADE;
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to sequence test_schema_for_sequence_propagation.seq_10
drop cascades to default value for column x of table table_without_sequence
DROP TABLE table_without_sequence;
DROP TABLE tbl_ref_mats CASCADE;
NOTICE: drop cascades to view vw_citus_views
DROP TABLE tbl_dist_mats CASCADE;

View File

@ -1330,12 +1330,28 @@ SELECT * FROM multi_extension.print_extension_changes();
| view citus_stat_tenants_local
(11 rows)
-- Test downgrade to 11.3-1 from 11.3-2
ALTER EXTENSION citus UPDATE TO '11.3-2';
ALTER EXTENSION citus UPDATE TO '11.3-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM multi_extension.print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
-- Snapshot of state at 11.3-2
ALTER EXTENSION citus UPDATE TO '11.3-2';
SELECT * FROM multi_extension.print_extension_changes();
previous_object | current_object
---------------------------------------------------------------------
(0 rows)
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- show running version
SHOW citus.version;
citus.version
---------------------------------------------------------------------
11.3devel
11.3.1
(1 row)
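As an aside, a minimal sketch (not part of the multi_extension test) for listing the Citus extension versions the server offers and which one is currently installed; it relies only on the standard pg_available_extension_versions view.
SELECT name, version, installed
FROM pg_available_extension_versions
WHERE name = 'citus'
ORDER BY version;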
-- ensure no unexpected objects were created outside pg_catalog
@ -1750,4 +1766,4 @@ DROP TABLE version_mismatch_table;
DROP SCHEMA multi_extension;
ERROR: cannot drop schema multi_extension because other objects depend on it
DETAIL: function multi_extension.print_extension_changes() depends on schema multi_extension
HINT: Use DROP ... CASCADE to drop the dependent objects too.
HINT: Use DROP ... CASCADE to drop the dependent objects too.

View File

@ -650,7 +650,7 @@ NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "11.2-1";
NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "x.y-z";
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -658,7 +658,7 @@ NOTICE: issuing BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;SELECT assign_
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET citus.enable_ddl_propagation TO 'off'
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "11.2-1";
NOTICE: issuing CREATE EXTENSION IF NOT EXISTS citus_columnar WITH SCHEMA pg_catalog VERSION "x.y-z";
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx

View File

@ -149,6 +149,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('public.mx_test_table_col_3_seq'::regclass,'public.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS public.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -174,7 +175,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(49 rows)
(50 rows)
-- Show that CREATE INDEX commands are included in the activate node snapshot
CREATE INDEX mx_index ON mx_test_table(col_2);
@ -206,6 +207,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('public.mx_test_table_col_3_seq'::regclass,'public.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS public.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -231,7 +233,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(50 rows)
(51 rows)
-- Show that schema changes are included in the activate node snapshot
CREATE SCHEMA mx_testing_schema;
@ -265,6 +267,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -291,7 +294,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(52 rows)
(53 rows)
-- Show that append distributed tables are not included in the activate node snapshot
CREATE TABLE non_mx_test_table (col_1 int, col_2 text);
@ -331,6 +334,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -357,7 +361,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(52 rows)
(53 rows)
-- Show that range distributed tables are not included in the activate node snapshot
UPDATE pg_dist_partition SET partmethod='r' WHERE logicalrelid='non_mx_test_table'::regclass;
@ -390,6 +394,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -416,7 +421,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(52 rows)
(53 rows)
-- Test start_metadata_sync_to_node and citus_activate_node UDFs
-- Ensure that hasmetadata=false for all nodes
@ -1943,6 +1948,12 @@ SELECT unnest(activate_node_snapshot()) order by 1;
SELECT citus_internal_add_partition_metadata ('public.dist_table_1'::regclass, 'h', 'a', 10010, 's')
SELECT citus_internal_add_partition_metadata ('public.mx_ref'::regclass, 'n', NULL, 10009, 't')
SELECT citus_internal_add_partition_metadata ('public.test_table'::regclass, 'h', 'id', 10010, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_1.mx_table_1');
SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_2.mx_table_2');
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency('public.dist_table_1');
SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_ref');
SELECT pg_catalog.worker_drop_sequence_dependency('public.test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -1997,7 +2008,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.dist_table_1'::regclass, 1310074, 't'::"char", '-2147483648', '-1073741825'), ('public.dist_table_1'::regclass, 1310075, 't'::"char", '-1073741824', '-1'), ('public.dist_table_1'::regclass, 1310076, 't'::"char", '0', '1073741823'), ('public.dist_table_1'::regclass, 1310077, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_ref'::regclass, 1310073, 't'::"char", NULL, NULL)) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.test_table'::regclass, 1310083, 't'::"char", '-2147483648', '-1073741825'), ('public.test_table'::regclass, 1310084, 't'::"char", '-1073741824', '-1'), ('public.test_table'::regclass, 1310085, 't'::"char", '0', '1073741823'), ('public.test_table'::regclass, 1310086, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(111 rows)
(117 rows)
-- shouldn't work since test_table is MX
ALTER TABLE test_table ADD COLUMN id3 bigserial;
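The hunks above add per-table worker_drop_sequence_dependency(...) calls to the activate node snapshot, which is why each expected row count increases. A minimal sketch, assuming the same cluster state as the test, for pulling just those commands out of the snapshot:
-- Hedged sketch: filter the snapshot array for the added commands.
SELECT snapshot_command
FROM unnest(activate_node_snapshot()) AS snapshot_command
WHERE snapshot_command LIKE '%worker_drop_sequence_dependency%'
ORDER BY 1;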

View File

@ -149,6 +149,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('public.mx_test_table_col_3_seq'::regclass,'public.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS public.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -174,7 +175,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(49 rows)
(50 rows)
-- Show that CREATE INDEX commands are included in the activate node snapshot
CREATE INDEX mx_index ON mx_test_table(col_2);
@ -206,6 +207,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('public.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('public.mx_test_table_col_3_seq'::regclass,'public.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS public.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -231,7 +233,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['public', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('public.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('public.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('public.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('public.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('public.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('public.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('public.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(50 rows)
(51 rows)
-- Show that schema changes are included in the activate node snapshot
CREATE SCHEMA mx_testing_schema;
@ -265,6 +267,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -291,7 +294,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(52 rows)
(53 rows)
-- Show that append distributed tables are not included in the activate node snapshot
CREATE TABLE non_mx_test_table (col_1 int, col_2 text);
@ -331,6 +334,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -357,7 +361,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(52 rows)
(53 rows)
-- Show that range distributed tables are not included in the activate node snapshot
UPDATE pg_dist_partition SET partmethod='r' WHERE logicalrelid='non_mx_test_table'::regclass;
@ -390,6 +394,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
RESET ROLE
SELECT alter_role_if_exists('postgres', 'ALTER ROLE postgres SET lc_messages = ''C''')
SELECT citus_internal_add_partition_metadata ('mx_testing_schema.mx_test_table'::regclass, 'h', 'col_1', 2, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -416,7 +421,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH distributed_object_data(typetext, objnames, objargs, distargumentindex, colocationid, force_delegation) AS (VALUES ('table', ARRAY['mx_testing_schema', 'mx_test_table']::text[], ARRAY[]::text[], -1, 0, false)) SELECT citus_internal_add_object_metadata(typetext, objnames, objargs, distargumentindex::int, colocationid::int, force_delegation::bool) FROM distributed_object_data;
WITH placement_data(shardid, shardlength, groupid, placementid) AS (VALUES (1310000, 0, 1, 100000), (1310001, 0, 2, 100001), (1310002, 0, 1, 100002), (1310003, 0, 2, 100003), (1310004, 0, 1, 100004), (1310005, 0, 2, 100005), (1310006, 0, 1, 100006), (1310007, 0, 2, 100007)) SELECT citus_internal_add_placement_metadata(shardid, shardlength, groupid, placementid) FROM placement_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('mx_testing_schema.mx_test_table'::regclass, 1310000, 't'::"char", '-2147483648', '-1610612737'), ('mx_testing_schema.mx_test_table'::regclass, 1310001, 't'::"char", '-1610612736', '-1073741825'), ('mx_testing_schema.mx_test_table'::regclass, 1310002, 't'::"char", '-1073741824', '-536870913'), ('mx_testing_schema.mx_test_table'::regclass, 1310003, 't'::"char", '-536870912', '-1'), ('mx_testing_schema.mx_test_table'::regclass, 1310004, 't'::"char", '0', '536870911'), ('mx_testing_schema.mx_test_table'::regclass, 1310005, 't'::"char", '536870912', '1073741823'), ('mx_testing_schema.mx_test_table'::regclass, 1310006, 't'::"char", '1073741824', '1610612735'), ('mx_testing_schema.mx_test_table'::regclass, 1310007, 't'::"char", '1610612736', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(52 rows)
(53 rows)
-- Test start_metadata_sync_to_node and citus_activate_node UDFs
-- Ensure that hasmetadata=false for all nodes
@ -1943,6 +1948,12 @@ SELECT unnest(activate_node_snapshot()) order by 1;
SELECT citus_internal_add_partition_metadata ('public.dist_table_1'::regclass, 'h', 'a', 10010, 's')
SELECT citus_internal_add_partition_metadata ('public.mx_ref'::regclass, 'n', NULL, 10009, 't')
SELECT citus_internal_add_partition_metadata ('public.test_table'::regclass, 'h', 'id', 10010, 's')
SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_1.mx_table_1');
SELECT pg_catalog.worker_drop_sequence_dependency('mx_test_schema_2.mx_table_2');
SELECT pg_catalog.worker_drop_sequence_dependency('mx_testing_schema.mx_test_table');
SELECT pg_catalog.worker_drop_sequence_dependency('public.dist_table_1');
SELECT pg_catalog.worker_drop_sequence_dependency('public.mx_ref');
SELECT pg_catalog.worker_drop_sequence_dependency('public.test_table');
SELECT pg_catalog.worker_drop_sequence_dependency(logicalrelid::regclass::text) FROM pg_dist_partition
SELECT pg_catalog.worker_record_sequence_dependency('mx_testing_schema.mx_test_table_col_3_seq'::regclass,'mx_testing_schema.mx_test_table'::regclass,'col_3')
SELECT worker_apply_sequence_command ('CREATE SEQUENCE IF NOT EXISTS mx_testing_schema.mx_test_table_col_3_seq AS bigint INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE','bigint')
@ -1997,7 +2008,7 @@ SELECT unnest(activate_node_snapshot()) order by 1;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.dist_table_1'::regclass, 1310074, 't'::"char", '-2147483648', '-1073741825'), ('public.dist_table_1'::regclass, 1310075, 't'::"char", '-1073741824', '-1'), ('public.dist_table_1'::regclass, 1310076, 't'::"char", '0', '1073741823'), ('public.dist_table_1'::regclass, 1310077, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.mx_ref'::regclass, 1310073, 't'::"char", NULL, NULL)) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
WITH shard_data(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) AS (VALUES ('public.test_table'::regclass, 1310083, 't'::"char", '-2147483648', '-1073741825'), ('public.test_table'::regclass, 1310084, 't'::"char", '-1073741824', '-1'), ('public.test_table'::regclass, 1310085, 't'::"char", '0', '1073741823'), ('public.test_table'::regclass, 1310086, 't'::"char", '1073741824', '2147483647')) SELECT citus_internal_add_shard_metadata(relationname, shardid, storagetype, shardminvalue, shardmaxvalue) FROM shard_data;
(111 rows)
(117 rows)
-- shouldn't work since test_table is MX
ALTER TABLE test_table ADD COLUMN id3 bigserial;

View File

@ -1095,6 +1095,9 @@ ALTER TABLE IF EXISTS non_existent_table SET SCHEMA non_existent_schema;
NOTICE: relation "non_existent_table" does not exist, skipping
DROP SCHEMA existing_schema, another_existing_schema CASCADE;
NOTICE: drop cascades to table existing_schema.table_set_schema
-- test DROP SCHEMA with nonexistent schemas
DROP SCHEMA ax, bx, cx, dx, ex, fx, gx, jx;
ERROR: schema "ax" does not exist
-- test ALTER TABLE SET SCHEMA with interesting names
CREATE SCHEMA "cItuS.T E E N'sSchema";
CREATE SCHEMA "citus-teen's scnd schm.";
@ -1361,6 +1364,7 @@ SELECT pg_identify_object_as_address(classid, objid, objsubid) FROM pg_catalog.p
(schema,{run_test_schema},{})
(1 row)
DROP TABLE public.nation_local;
DROP SCHEMA run_test_schema, test_schema_support_join_1, test_schema_support_join_2, "Citus'Teen123", "CiTUS.TEEN2", bar, test_schema_support CASCADE;
-- verify that the dropped schema is removed from worker's pg_dist_object
SELECT pg_identify_object_as_address(classid, objid, objsubid) FROM pg_catalog.pg_dist_object

View File

@ -254,6 +254,76 @@ FETCH FORWARD 3 FROM holdCursor;
1 | 19
(3 rows)
CLOSE holdCursor;
-- Test DECLARE CURSOR .. WITH HOLD inside transaction block
BEGIN;
DECLARE holdCursor CURSOR WITH HOLD FOR
SELECT * FROM cursor_me WHERE x = 1 ORDER BY y;
FETCH 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 10
1 | 11
1 | 12
(3 rows)
FETCH BACKWARD 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 11
1 | 10
(2 rows)
FETCH FORWARD 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 10
1 | 11
1 | 12
(3 rows)
COMMIT;
FETCH 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 13
1 | 14
1 | 15
(3 rows)
CLOSE holdCursor;
-- Test DECLARE NO SCROLL CURSOR .. WITH HOLD inside transaction block
BEGIN;
DECLARE holdCursor NO SCROLL CURSOR WITH HOLD FOR
SELECT * FROM cursor_me WHERE x = 1 ORDER BY y;
FETCH 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 10
1 | 11
1 | 12
(3 rows)
FETCH FORWARD 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 13
1 | 14
1 | 15
(3 rows)
COMMIT;
FETCH 3 FROM holdCursor;
x | y
---------------------------------------------------------------------
1 | 16
1 | 17
1 | 18
(3 rows)
FETCH BACKWARD 3 FROM holdCursor;
ERROR: cursor can only scan forward
HINT: Declare it with SCROLL option to enable backward scan.
CLOSE holdCursor;
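For contrast, a minimal sketch (not part of the test file; the cursor name is arbitrary) of the SCROLL variant that the HINT refers to: declared with SCROLL, the held cursor also allows FETCH BACKWARD after COMMIT.
BEGIN;
DECLARE holdScrollCursor SCROLL CURSOR WITH HOLD FOR
  SELECT * FROM cursor_me WHERE x = 1 ORDER BY y;
COMMIT;
FETCH 2 FROM holdScrollCursor;
FETCH BACKWARD 1 FROM holdScrollCursor; -- permitted because of SCROLL
CLOSE holdScrollCursor;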
-- Test DECLARE CURSOR .. WITH HOLD with parameter
CREATE OR REPLACE FUNCTION declares_cursor(p int)

View File

@ -20,13 +20,14 @@ SELECT create_distributed_table('dist_table_test', 'a');
CREATE TABLE postgres_table_test(a int primary key);
-- make sure that all rebalance operations works fine when
-- reference tables are replicated to the coordinator
SET client_min_messages TO ERROR;
SELECT 1 FROM master_add_node('localhost', :master_port, groupId=>0);
NOTICE: localhost:xxxxx is the coordinator and already contains metadata, skipping syncing the metadata
?column?
---------------------------------------------------------------------
1
(1 row)
RESET client_min_messages;
-- should just be noops even if we add the coordinator to the pg_dist_node
SELECT rebalance_table_shards('dist_table_test');
rebalance_table_shards
@ -221,7 +222,7 @@ NOTICE: issuing SET LOCAL citus.shard_count TO '4';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET LOCAL citus.shard_replication_factor TO '2';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
NOTICE: issuing SELECT pg_catalog.citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -244,7 +245,7 @@ NOTICE: issuing SET LOCAL citus.shard_count TO '4';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET LOCAL citus.shard_replication_factor TO '2';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
NOTICE: issuing SELECT pg_catalog.citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -267,7 +268,7 @@ NOTICE: issuing SET LOCAL citus.shard_count TO '4';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET LOCAL citus.shard_replication_factor TO '2';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
NOTICE: issuing SELECT pg_catalog.citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -290,7 +291,7 @@ NOTICE: issuing SET LOCAL citus.shard_count TO '4';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SET LOCAL citus.shard_replication_factor TO '2';
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing SELECT citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
NOTICE: issuing SELECT pg_catalog.citus_copy_shard_placement(43xxxx,xx,xx,'block_writes')
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
NOTICE: issuing COMMIT
DETAIL: on server postgres@localhost:xxxxx connectionId: xxxxxxx
@ -2713,6 +2714,113 @@ SELECT sh.logicalrelid, pl.nodeport
(5 rows)
DROP TABLE single_shard_colocation_1a, single_shard_colocation_1b, single_shard_colocation_1c, single_shard_colocation_2a, single_shard_colocation_2b CASCADE;
-- test the same with coordinator shouldhaveshards = false and shard_count = 2
-- so that the shard allowed node count would be 2 when rebalancing
-- for such cases, we only count the nodes that are allowed for shard placements
UPDATE pg_dist_node SET shouldhaveshards=false WHERE nodeport = :master_port;
create table two_shard_colocation_1a (a int primary key);
create table two_shard_colocation_1b (a int primary key);
SET citus.shard_replication_factor = 1;
select create_distributed_table('two_shard_colocation_1a','a', colocate_with => 'none', shard_count => 2);
create_distributed_table
---------------------------------------------------------------------
(1 row)
select create_distributed_table('two_shard_colocation_1b','a',colocate_with=>'two_shard_colocation_1a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
create table two_shard_colocation_2a (a int primary key);
create table two_shard_colocation_2b (a int primary key);
select create_distributed_table('two_shard_colocation_2a','a', colocate_with => 'none', shard_count => 2);
create_distributed_table
---------------------------------------------------------------------
(1 row)
select create_distributed_table('two_shard_colocation_2b','a',colocate_with=>'two_shard_colocation_2a');
create_distributed_table
---------------------------------------------------------------------
(1 row)
-- move shards of colocation group 1 to worker1
SELECT citus_move_shard_placement(sh.shardid, 'localhost', :worker_2_port, 'localhost', :worker_1_port)
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid = 'two_shard_colocation_1a'::regclass
AND pl.nodeport = :worker_2_port
LIMIT 1;
citus_move_shard_placement
---------------------------------------------------------------------
(1 row)
-- move shards of colocation group 2 to worker2
SELECT citus_move_shard_placement(sh.shardid, 'localhost', :worker_1_port, 'localhost', :worker_2_port)
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid = 'two_shard_colocation_2a'::regclass
AND pl.nodeport = :worker_1_port
LIMIT 1;
citus_move_shard_placement
---------------------------------------------------------------------
(1 row)
-- current state:
-- coordinator: []
-- worker 1: [1_1, 1_2]
-- worker 2: [2_1, 2_2]
SELECT sh.logicalrelid, pl.nodeport
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid::text IN ('two_shard_colocation_1a', 'two_shard_colocation_1b', 'two_shard_colocation_2a', 'two_shard_colocation_2b')
ORDER BY sh.logicalrelid, pl.nodeport;
logicalrelid | nodeport
---------------------------------------------------------------------
two_shard_colocation_1a | 57637
two_shard_colocation_1a | 57637
two_shard_colocation_1b | 57637
two_shard_colocation_1b | 57637
two_shard_colocation_2a | 57638
two_shard_colocation_2a | 57638
two_shard_colocation_2b | 57638
two_shard_colocation_2b | 57638
(8 rows)
-- If we take the coordinator into account, the rebalancer considers this as balanced and does nothing (shard_count < worker_count)
-- but because the coordinator is not allowed for shards, rebalancer will distribute each colocation group to both workers
select rebalance_table_shards(shard_transfer_mode:='block_writes');
NOTICE: Moving shard xxxxx from localhost:xxxxx to localhost:xxxxx ...
NOTICE: Moving shard xxxxx from localhost:xxxxx to localhost:xxxxx ...
rebalance_table_shards
---------------------------------------------------------------------
(1 row)
-- final state:
-- coordinator: []
-- worker 1: [1_1, 2_1]
-- worker 2: [1_2, 2_2]
SELECT sh.logicalrelid, pl.nodeport
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid::text IN ('two_shard_colocation_1a', 'two_shard_colocation_1b', 'two_shard_colocation_2a', 'two_shard_colocation_2b')
ORDER BY sh.logicalrelid, pl.nodeport;
logicalrelid | nodeport
---------------------------------------------------------------------
two_shard_colocation_1a | 57637
two_shard_colocation_1a | 57638
two_shard_colocation_1b | 57637
two_shard_colocation_1b | 57638
two_shard_colocation_2a | 57637
two_shard_colocation_2a | 57638
two_shard_colocation_2b | 57637
two_shard_colocation_2b | 57638
(8 rows)
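Since the comments above hinge on which nodes are allowed to hold shards, here is a minimal sketch, assuming the pg_dist_node layout used in this file, that lists exactly those nodes: active primaries with shouldhaveshards = true (the coordinator was excluded by the UPDATE above).
SELECT nodename, nodeport
FROM pg_dist_node
WHERE isactive
  AND shouldhaveshards
  AND noderole = 'primary'
ORDER BY nodeport;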
-- cleanup
DROP TABLE two_shard_colocation_1a, two_shard_colocation_1b, two_shard_colocation_2a, two_shard_colocation_2b CASCADE;
-- verify we detect if one of the tables do not have a replica identity or primary key
-- and error out in case of shard transfer mode = auto
SELECT 1 FROM citus_remove_node('localhost', :worker_2_port);

View File

@ -1,10 +1,10 @@
test: multi_test_helpers multi_test_helpers_superuser
test: multi_cluster_management
test: multi_test_catalog_views
test: worker_copy_table_to_node
test: shard_rebalancer_unit
test: shard_rebalancer
test: background_rebalance
test: worker_copy_table_to_node
test: background_rebalance_parallel
test: foreign_key_to_reference_shard_rebalance
test: multi_move_mx

View File

@ -1,65 +0,0 @@
setup
{
CREATE FUNCTION run_pg_send_cancellation(int,int)
RETURNS void
AS 'citus'
LANGUAGE C STRICT;
CREATE FUNCTION get_cancellation_key()
RETURNS int
AS 'citus'
LANGUAGE C STRICT;
CREATE TABLE cancel_table (pid int, cancel_key int);
}
teardown
{
DROP TABLE IF EXISTS cancel_table;
}
session "s1"
/* store the PID and cancellation key of session 1 */
step "s1-register"
{
INSERT INTO cancel_table VALUES (pg_backend_pid(), get_cancellation_key());
}
/* lock the table from session 1, will block and get cancelled */
step "s1-lock"
{
BEGIN;
LOCK TABLE cancel_table IN ACCESS EXCLUSIVE MODE;
END;
}
session "s2"
/* lock the table from session 2 to block session 1 */
step "s2-lock"
{
BEGIN;
LOCK TABLE cancel_table IN ACCESS EXCLUSIVE MODE;
}
/* PID mismatch */
step "s2-wrong-cancel-1"
{
SELECT run_pg_send_cancellation(pid + 1, cancel_key) FROM cancel_table;
}
/* cancellation key mismatch */
step "s2-wrong-cancel-2"
{
SELECT run_pg_send_cancellation(pid, cancel_key + 1) FROM cancel_table;
}
/* cancel the LOCK statement in session 1 */
step "s2-cancel"
{
SELECT run_pg_send_cancellation(pid, cancel_key) FROM cancel_table;
END;
}
permutation "s1-register" "s2-lock" "s1-lock" "s2-wrong-cancel-1" "s2-wrong-cancel-2" "s2-cancel"

View File

@ -110,6 +110,27 @@ SELECT public.wait_for_resource_cleanup();
SELECT 1 FROM citus_remove_node('localhost', :master_port);
SELECT public.wait_until_metadata_sync(30000);
-- make sure a non-super user can rebalance when there are reference tables to replicate
CREATE TABLE ref_table(a int primary key);
SELECT create_reference_table('ref_table');
-- add a new node to trigger replicate_reference_tables task
SELECT 1 FROM citus_add_node('localhost', :worker_3_port);
SET ROLE non_super_user_rebalance;
SELECT 1 FROM citus_rebalance_start(shard_transfer_mode := 'force_logical');
-- wait for success
SELECT citus_rebalance_wait();
SELECT state, details from citus_rebalance_status();
RESET ROLE;
-- reset the number of nodes by removing the previously added node
SELECT 1 FROM citus_drain_node('localhost', :worker_3_port);
CALL citus_cleanup_orphaned_resources();
SELECT 1 FROM citus_remove_node('localhost', :worker_3_port);
SET client_min_messages TO WARNING;
DROP SCHEMA background_rebalance CASCADE;
DROP USER non_super_user_rebalance;

View File

@ -204,11 +204,14 @@ SELECT citus_rebalance_start AS job_id from citus_rebalance_start() \gset
-- see dependent tasks to understand which tasks remain runnable because of
-- citus.max_background_task_executors_per_node
-- and which tasks are actually blocked from colocation group dependencies
SELECT D.task_id,
(SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.task_id),
D.depends_on,
(SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.depends_on)
FROM pg_dist_background_task_depend D WHERE job_id in (:job_id) ORDER BY D.task_id, D.depends_on ASC;
SELECT (SELECT T.command FROM pg_dist_background_task T WHERE T.task_id = D.task_id),
(SELECT T.command depends_on_command FROM pg_dist_background_task T WHERE T.task_id = D.depends_on)
FROM pg_dist_background_task_depend D WHERE job_id in (:job_id) ORDER BY 1, 2 ASC;
SELECT task_id, depends_on
FROM pg_dist_background_task_depend
WHERE job_id in (:job_id)
ORDER BY 1, 2 ASC;
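A minimal sketch, assuming a rebalance job started via citus_rebalance_start() as above, for listing that job's tasks straight from the catalog the dependency queries join against; :job_id is the psql variable set earlier with \gset.
SELECT job_id, task_id, command
FROM pg_dist_background_task
WHERE job_id = :job_id
ORDER BY task_id;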
-- default citus.max_background_task_executors_per_node is 1
-- show that first exactly one task per node is running

View File

@ -35,7 +35,10 @@ UPDATE dist_tbl SET b = a + 1 WHERE a = 3;
UPDATE dist_tbl SET b = a + 1 WHERE a = 4;
DELETE FROM dist_tbl WHERE a = 5;
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants(true) ORDER BY tenant_attribute;
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants(true)
ORDER BY tenant_attribute;
SELECT citus_stat_tenants_reset();
@ -84,13 +87,26 @@ SELECT count(*)>=0 FROM dist_tbl WHERE a = 1;
INSERT INTO dist_tbl VALUES (5, 'abcd');
\c - - - :worker_1_port
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants_local
ORDER BY tenant_attribute;
-- simulate passing the period
SET citus.stat_tenants_period TO 2;
SET citus.stat_tenants_period TO 5;
SELECT sleep_until_next_period();
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants_local
ORDER BY tenant_attribute;
SELECT sleep_until_next_period();
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period,
(cpu_usage_in_this_period>0) AS cpu_is_used_in_this_period, (cpu_usage_in_last_period>0) AS cpu_is_used_in_last_period
FROM citus_stat_tenants_local
ORDER BY tenant_attribute;
\c - - - :master_port
SET search_path TO citus_stat_tenants;
@ -158,6 +174,11 @@ SELECT count(*)>=0 FROM dist_tbl_text WHERE a = '/bcde';
SELECT count(*)>=0 FROM dist_tbl_text WHERE a = U&'\0061\0308bc';
SELECT count(*)>=0 FROM dist_tbl_text WHERE a = 'bcde*';
DELETE FROM dist_tbl_text WHERE a = '/b*c/de';
DELETE FROM dist_tbl_text WHERE a = '/bcde';
DELETE FROM dist_tbl_text WHERE a = U&'\0061\0308bc';
DELETE FROM dist_tbl_text WHERE a = 'bcde*';
SELECT tenant_attribute, read_count_in_this_period, read_count_in_last_period, query_count_in_this_period, query_count_in_last_period FROM citus_stat_tenants_local ORDER BY tenant_attribute;
-- test local cached queries & prepared statements
@ -231,5 +252,64 @@ SELECT count(*)>=0 FROM citus_stat_tenants_local();
RESET ROLE;
DROP ROLE stats_non_superuser;
-- test function push down
CREATE OR REPLACE FUNCTION
select_from_dist_tbl_text(p_keyword text)
RETURNS boolean LANGUAGE plpgsql AS $fn$
BEGIN
RETURN(SELECT count(*)>=0 FROM citus_stat_tenants.dist_tbl_text WHERE a = $1);
END;
$fn$;
SELECT create_distributed_function(
'select_from_dist_tbl_text(text)', 'p_keyword', colocate_with => 'dist_tbl_text'
);
SELECT citus_stat_tenants_reset();
SELECT select_from_dist_tbl_text('/b*c/de');
SELECT select_from_dist_tbl_text('/b*c/de');
SELECT select_from_dist_tbl_text(U&'\0061\0308bc');
SELECT select_from_dist_tbl_text(U&'\0061\0308bc');
SELECT tenant_attribute, query_count_in_this_period FROM citus_stat_tenants;
CREATE OR REPLACE PROCEDURE select_from_dist_tbl_text_proc(
p_keyword text
)
LANGUAGE plpgsql
AS $$
BEGIN
PERFORM select_from_dist_tbl_text(p_keyword);
PERFORM count(*)>=0 FROM citus_stat_tenants.dist_tbl_text WHERE b < 0;
PERFORM count(*)>=0 FROM citus_stat_tenants.dist_tbl_text;
PERFORM count(*)>=0 FROM citus_stat_tenants.dist_tbl_text WHERE a = p_keyword;
COMMIT;
END;$$;
CALL citus_stat_tenants.select_from_dist_tbl_text_proc('/b*c/de');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc('/b*c/de');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc('/b*c/de');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(U&'\0061\0308bc');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(U&'\0061\0308bc');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(U&'\0061\0308bc');
CALL citus_stat_tenants.select_from_dist_tbl_text_proc(NULL);
SELECT tenant_attribute, query_count_in_this_period FROM citus_stat_tenants;
CREATE OR REPLACE VIEW
select_from_dist_tbl_text_view
AS
SELECT * FROM citus_stat_tenants.dist_tbl_text;
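-- The view is expanded on the coordinator, so filtering on the distribution column a still
-- routes each query below to a single tenant's shard.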
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = '/b*c/de';
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = '/b*c/de';
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = '/b*c/de';
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = U&'\0061\0308bc';
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = U&'\0061\0308bc';
SELECT count(*)>=0 FROM select_from_dist_tbl_text_view WHERE a = U&'\0061\0308bc';
SELECT tenant_attribute, query_count_in_this_period FROM citus_stat_tenants;
SET client_min_messages TO ERROR;
DROP SCHEMA citus_stat_tenants CASCADE;

View File

@@ -28,6 +28,14 @@ set citus.shard_replication_factor to 2;
select create_distributed_table_concurrently('test','key', 'hash');
set citus.shard_replication_factor to 1;
set citus.shard_replication_factor to 2;
create table dist_1(a int);
select create_distributed_table('dist_1', 'a');
set citus.shard_replication_factor to 1;
create table dist_2(a int);
select create_distributed_table_concurrently('dist_2', 'a', colocate_with=>'dist_1');
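-- dist_1 was created with shard_replication_factor 2, so colocating dist_2 with it would make
-- dist_2 replicated as well; create_distributed_table_concurrently is expected to reject that.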
begin;
select create_distributed_table_concurrently('test','key');
rollback;
@@ -63,6 +71,7 @@ rollback;
-- verify that we can undistribute the table
begin;
set local client_min_messages to warning;
select undistribute_table('test', cascade_via_foreign_keys := true);
rollback;

View File

@@ -97,6 +97,8 @@ WHERE s.logicalrelid = 'user_table'::regclass AND n.isactive
ORDER BY placementid;
-- reset cluster to original state
ALTER SEQUENCE pg_dist_node_nodeid_seq RESTART 2;
ALTER SEQUENCE pg_dist_groupid_seq RESTART 2;
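-- restarting the node id and group id sequences keeps the ids assigned by the next
-- master_add_node deterministic for the expected test output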
SELECT citus.mitmproxy('conn.allow()');
SELECT master_add_node('localhost', :worker_2_proxy_port);

View File

@@ -15,6 +15,9 @@ SET citus.shard_replication_factor TO 1;
SET citus.max_adaptive_executor_pool_size TO 1;
SELECT pg_backend_pid() as pid \gset
ALTER SEQUENCE pg_catalog.pg_dist_shardid_seq RESTART 222222;
ALTER SEQUENCE pg_catalog.pg_dist_placement_placementid_seq RESTART 333333;
-- make sure coordinator is in the metadata
SELECT citus_set_coordinator_host('localhost', 57636);
@@ -108,3 +111,5 @@ SELECT * FROM pg_dist_shard WHERE logicalrelid = 'table_1'::regclass;
DROP SCHEMA create_dist_tbl_con CASCADE;
SET search_path TO default;
SELECT citus_remove_node('localhost', 57636);
ALTER SEQUENCE pg_dist_node_nodeid_seq RESTART 3;
ALTER SEQUENCE pg_dist_groupid_seq RESTART 3;

View File

@@ -18,11 +18,36 @@ SET client_min_messages TO ERROR;
CREATE ROLE foo1;
CREATE ROLE foo2;
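-- Each object kind created below (collation, type, function, text search dictionary and
-- configuration, sequences, index, publication) gives metadata sync one dependency of that
-- kind to propagate; the failure scenarios later in this file target each of them.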
-- Create collation
CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk');
-- Create type
CREATE TYPE pair_type AS (a int, b int);
-- Create function
CREATE FUNCTION one_as_result() RETURNS INT LANGUAGE SQL AS
$$
SELECT 1;
$$;
-- Create text search dictionary
CREATE TEXT SEARCH DICTIONARY my_german_dict (
template = snowball,
language = german,
stopwords = german
);
-- Create text search config
CREATE TEXT SEARCH CONFIGURATION my_ts_config ( parser = default );
ALTER TEXT SEARCH CONFIGURATION my_ts_config ALTER MAPPING FOR asciiword WITH my_german_dict;
-- Create sequence
CREATE SEQUENCE seq;
-- Create colocated distributed tables
CREATE TABLE dist1 (id int PRIMARY KEY default nextval('seq'));
CREATE TABLE dist1 (id int PRIMARY KEY default nextval('seq'), col int default (one_as_result()), myserial serial, phone text COLLATE german_phonebook, initials pair_type);
CREATE SEQUENCE seq_owned OWNED BY dist1.id;
CREATE INDEX dist1_search_phone_idx ON dist1 USING gin (to_tsvector('my_ts_config'::regconfig, (COALESCE(phone, ''::text))::text));
SELECT create_distributed_table('dist1', 'id');
INSERT INTO dist1 SELECT i FROM generate_series(1,100) i;
@@ -42,7 +67,15 @@ INSERT INTO loc1 SELECT i FROM generate_series(1,100) i;
CREATE TABLE loc2 (id int REFERENCES loc1(id));
INSERT INTO loc2 SELECT i FROM generate_series(1,100) i;
-- Create publication
CREATE PUBLICATION pub_all;
-- citus_set_coordinator_host with wrong port
SELECT citus_set_coordinator_host('localhost', 9999);
-- citus_set_coordinator_host with correct port
SELECT citus_set_coordinator_host('localhost', :master_port);
-- show coordinator port is correct on all workers
SELECT * FROM run_command_on_workers($$SELECT row(nodename,nodeport) FROM pg_dist_node WHERE groupid = 0$$);
SELECT citus_add_local_table_to_metadata('loc1', cascade_via_foreign_keys => true);
-- Create partitioned distributed table
@@ -83,10 +116,10 @@ SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="INSERT INTO pg_dist_node").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
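-- The failure scenarios below follow a common pattern: citus.mitmproxy() intercepts the
-- matching remote command and either cancels the backend (.cancel(:pid)) or drops the worker
-- connection (.kill()), and the citus_activate_node() call that follows is expected to error
-- out cleanly.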
-- Failure to drop sequence
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency").cancel(' || :pid || ')');
-- Failure to drop sequence dependency for all tables
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*FROM pg_dist_partition").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency").kill()');
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*FROM pg_dist_partition").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to drop shell table
@@ -137,24 +170,84 @@ SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="ALTER DATABASE.*OWNER TO").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Filure to create schema
-- Failure to create schema
SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metadata_sync_multi_trans AUTHORIZATION").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="CREATE SCHEMA IF NOT EXISTS mx_metadata_sync_multi_trans AUTHORIZATION").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create collation
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*CREATE COLLATION mx_metadata_sync_multi_trans.german_phonebook").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*CREATE COLLATION mx_metadata_sync_multi_trans.german_phonebook").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create function
SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metadata_sync_multi_trans.one_as_result").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="CREATE OR REPLACE FUNCTION mx_metadata_sync_multi_trans.one_as_result").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create text search dictionary
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_german_dict").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_german_dict").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create text search config
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_ts_config").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*my_ts_config").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create type
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*pair_type").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_create_or_replace_object.*pair_type").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create publication
SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="CREATE PUBLICATION.*pub_all").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create sequence
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT worker_apply_sequence_command").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to drop sequence dependency for distributed table
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_drop_sequence_dependency.*mx_metadata_sync_multi_trans.dist1").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to drop distributed table if exists
SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="DROP TABLE IF EXISTS mx_metadata_sync_multi_trans.dist1").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create distributed table
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.dist1").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.dist1").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to record sequence dependency for table
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequence_dependency").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT pg_catalog.worker_record_sequence_dependency").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create index for table
SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx ON mx_metadata_sync_multi_trans.dist1 USING gin").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="CREATE INDEX dist1_search_phone_idx ON mx_metadata_sync_multi_trans.dist1 USING gin").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to create reference table
SELECT citus.mitmproxy('conn.onQuery(query="CREATE TABLE mx_metadata_sync_multi_trans.ref").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
@@ -215,6 +308,48 @@ SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="SELECT citus_internal_add_object_metadata").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark function as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as_result").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*one_as_result").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark collation as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german_phonebook").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*german_phonebook").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark text search dictionary as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_german_dict").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_german_dict").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark text search configuration as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_config").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*my_ts_config").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark type as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_type").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pair_type").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark sequence as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_owned").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*seq_owned").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to mark publication as distributed
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_all").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
SELECT citus.mitmproxy('conn.onQuery(query="WITH distributed_object_data.*pub_all").kill()');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
-- Failure to set isactive to true
SELECT citus.mitmproxy('conn.onQuery(query="UPDATE pg_dist_node SET isactive = TRUE").cancel(' || :pid || ')');
SELECT citus_activate_node('localhost', :worker_2_proxy_port);
@@ -255,11 +390,8 @@ UPDATE dist1 SET id = :failed_node_val WHERE id = :failed_node_val;
DELETE FROM dist1 WHERE id = :failed_node_val;
-- Show that DDL would still propagate to the node
SET client_min_messages TO NOTICE;
SET citus.log_remote_commands TO 1;
CREATE SCHEMA dummy;
SET citus.log_remote_commands TO 0;
SET client_min_messages TO ERROR;
SELECT * FROM run_command_on_workers($$SELECT nspname FROM pg_namespace WHERE nspname = 'dummy'$$);
-- Successfully activate the node after many failures
SELECT citus.mitmproxy('conn.allow()');
@@ -275,8 +407,11 @@ SELECT * FROM pg_dist_node ORDER BY nodeport;
SELECT citus.mitmproxy('conn.allow()');
RESET citus.metadata_sync_mode;
DROP PUBLICATION pub_all;
DROP SCHEMA dummy;
DROP SCHEMA mx_metadata_sync_multi_trans CASCADE;
DROP ROLE foo1;
DROP ROLE foo2;
SELECT citus_remove_node('localhost', :master_port);
ALTER SEQUENCE pg_dist_node_nodeid_seq RESTART 3;
ALTER SEQUENCE pg_dist_groupid_seq RESTART 3;

View File

@@ -84,6 +84,7 @@ create table partitioned_tbl_with_fkey (x int, y int, t timestamptz default now(
select create_distributed_table('partitioned_tbl_with_fkey','x');
create table partition_1_with_fkey partition of partitioned_tbl_with_fkey for values from ('2022-01-01') to ('2022-12-31');
create table partition_2_with_fkey partition of partitioned_tbl_with_fkey for values from ('2023-01-01') to ('2023-12-31');
create table partition_3_with_fkey partition of partitioned_tbl_with_fkey for values from ('2024-01-01') to ('2024-12-31');
insert into partitioned_tbl_with_fkey (x,y) select s,s%10 from generate_series(1,100) s;
ALTER TABLE partitioned_tbl_with_fkey ADD CONSTRAINT fkey_to_ref_tbl FOREIGN KEY (y) REFERENCES ref_table_with_fkey(id);

View File

@@ -667,5 +667,33 @@ ALTER TABLE table_without_sequence ADD COLUMN x BIGINT DEFAULT nextval('test_sch
SELECT pg_identify_object_as_address(classid, objid, objsubid) from pg_catalog.pg_dist_object WHERE objid IN ('test_schema_for_sequence_propagation.seq_10'::regclass);
SELECT pg_identify_object_as_address(classid, objid, objsubid) from pg_catalog.pg_dist_object WHERE objid IN ('test_schema_for_sequence_propagation'::regnamespace);
-- Bug: https://github.com/citusdata/citus/issues/7378
-- Create a reference table
CREATE TABLE tbl_ref_mats(row_id integer primary key);
INSERT INTO tbl_ref_mats VALUES (1), (2);
SELECT create_reference_table('tbl_ref_mats');
-- Create a distributed table
CREATE TABLE tbl_dist_mats(series_id integer);
INSERT INTO tbl_dist_mats VALUES (1), (1), (2), (2);
SELECT create_distributed_table('tbl_dist_mats', 'series_id');
-- Create a view that joins the distributed table with the reference table on the distribution key.
CREATE VIEW vw_citus_views as
SELECT d.series_id FROM tbl_dist_mats d JOIN tbl_ref_mats r ON d.series_id = r.row_id;
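-- Each ALTER TABLE below changes the reference table's column layout; the view should keep
-- returning the same join result afterwards, which is the behavior the linked bug report
-- describes as breaking.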
-- The view initially works fine
SELECT * FROM vw_citus_views ORDER BY 1;
-- Now, alter the table
ALTER TABLE tbl_ref_mats ADD COLUMN category1 varchar(50);
SELECT * FROM vw_citus_views ORDER BY 1;
ALTER TABLE tbl_ref_mats ADD COLUMN category2 varchar(50);
SELECT * FROM vw_citus_views ORDER BY 1;
ALTER TABLE tbl_ref_mats DROP COLUMN category1;
SELECT * FROM vw_citus_views ORDER BY 1;
DROP SCHEMA test_schema_for_sequence_propagation CASCADE;
DROP TABLE table_without_sequence;
DROP TABLE tbl_ref_mats CASCADE;
DROP TABLE tbl_dist_mats CASCADE;

View File

@@ -591,6 +591,16 @@ SELECT * FROM multi_extension.print_extension_changes();
ALTER EXTENSION citus UPDATE TO '11.3-1';
SELECT * FROM multi_extension.print_extension_changes();
-- Test downgrade to 11.3-1 from 11.3-2
ALTER EXTENSION citus UPDATE TO '11.3-2';
ALTER EXTENSION citus UPDATE TO '11.3-1';
-- Should be empty result since upgrade+downgrade should be a no-op
SELECT * FROM multi_extension.print_extension_changes();
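-- (print_extension_changes() compares the current extension objects against the snapshot kept
-- in multi_extension.prev_objects, which is why a matching upgrade+downgrade leaves an empty diff)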
-- Snapshot of state at 11.3-2
ALTER EXTENSION citus UPDATE TO '11.3-2';
SELECT * FROM multi_extension.print_extension_changes();
DROP TABLE multi_extension.prev_objects, multi_extension.extension_diff;
-- show running version

View File

@@ -802,6 +802,9 @@ ALTER TABLE IF EXISTS non_existent_table SET SCHEMA non_existent_schema;
DROP SCHEMA existing_schema, another_existing_schema CASCADE;
-- test DROP SCHEMA with non-existent schemas
DROP SCHEMA ax, bx, cx, dx, ex, fx, gx, jx;
-- test ALTER TABLE SET SCHEMA with interesting names
CREATE SCHEMA "cItuS.T E E N'sSchema";
CREATE SCHEMA "citus-teen's scnd schm.";
@@ -968,6 +971,7 @@ SET client_min_messages TO WARNING;
SELECT pg_identify_object_as_address(classid, objid, objsubid) FROM pg_catalog.pg_dist_object
WHERE classid=2615 and objid IN (select oid from pg_namespace where nspname='run_test_schema');
DROP TABLE public.nation_local;
DROP SCHEMA run_test_schema, test_schema_support_join_1, test_schema_support_join_2, "Citus'Teen123", "CiTUS.TEEN2", bar, test_schema_support CASCADE;
-- verify that the dropped schema is removed from worker's pg_dist_object
SELECT pg_identify_object_as_address(classid, objid, objsubid) FROM pg_catalog.pg_dist_object

View File

@@ -137,6 +137,30 @@ FETCH FORWARD 3 FROM holdCursor;
CLOSE holdCursor;
-- Test DECLARE CURSOR .. WITH HOLD inside transaction block
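-- (WITH HOLD materializes the remaining result at COMMIT, so the FETCH issued after COMMIT
-- below still works)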
BEGIN;
DECLARE holdCursor CURSOR WITH HOLD FOR
SELECT * FROM cursor_me WHERE x = 1 ORDER BY y;
FETCH 3 FROM holdCursor;
FETCH BACKWARD 3 FROM holdCursor;
FETCH FORWARD 3 FROM holdCursor;
COMMIT;
FETCH 3 FROM holdCursor;
CLOSE holdCursor;
-- Test DECLARE NO SCROLL CURSOR .. WITH HOLD inside transaction block
BEGIN;
DECLARE holdCursor NO SCROLL CURSOR WITH HOLD FOR
SELECT * FROM cursor_me WHERE x = 1 ORDER BY y;
FETCH 3 FROM holdCursor;
FETCH FORWARD 3 FROM holdCursor;
COMMIT;
FETCH 3 FROM holdCursor;
FETCH BACKWARD 3 FROM holdCursor;
CLOSE holdCursor;
-- Test DECLARE CURSOR .. WITH HOLD with parameter
CREATE OR REPLACE FUNCTION declares_cursor(p int)
RETURNS void AS $$

View File

@@ -13,7 +13,9 @@ CREATE TABLE postgres_table_test(a int primary key);
-- make sure that all rebalance operations work fine when
-- reference tables are replicated to the coordinator
SET client_min_messages TO ERROR;
SELECT 1 FROM master_add_node('localhost', :master_port, groupId=>0);
RESET client_min_messages;
-- these should just be no-ops even if we add the coordinator to pg_dist_node
SELECT rebalance_table_shards('dist_table_test');
@@ -1497,6 +1499,61 @@ SELECT sh.logicalrelid, pl.nodeport
DROP TABLE single_shard_colocation_1a, single_shard_colocation_1b, single_shard_colocation_1c, single_shard_colocation_2a, single_shard_colocation_2b CASCADE;
-- test the same with coordinator shouldhaveshards = false and shard_count = 2
-- so that the shard allowed node count would be 2 when rebalancing
-- for such cases, we only count the nodes that are allowed for shard placements
UPDATE pg_dist_node SET shouldhaveshards=false WHERE nodeport = :master_port;
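-- shouldhaveshards=false keeps the coordinator in the metadata but removes it from the set of
-- nodes the rebalancer may place shards on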
create table two_shard_colocation_1a (a int primary key);
create table two_shard_colocation_1b (a int primary key);
SET citus.shard_replication_factor = 1;
select create_distributed_table('two_shard_colocation_1a','a', colocate_with => 'none', shard_count => 2);
select create_distributed_table('two_shard_colocation_1b','a',colocate_with=>'two_shard_colocation_1a');
create table two_shard_colocation_2a (a int primary key);
create table two_shard_colocation_2b (a int primary key);
select create_distributed_table('two_shard_colocation_2a','a', colocate_with => 'none', shard_count => 2);
select create_distributed_table('two_shard_colocation_2b','a',colocate_with=>'two_shard_colocation_2a');
-- move shards of colocation group 1 to worker1
SELECT citus_move_shard_placement(sh.shardid, 'localhost', :worker_2_port, 'localhost', :worker_1_port)
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid = 'two_shard_colocation_1a'::regclass
AND pl.nodeport = :worker_2_port
LIMIT 1;
-- move shards of colocation group 2 to worker2
SELECT citus_move_shard_placement(sh.shardid, 'localhost', :worker_1_port, 'localhost', :worker_2_port)
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid = 'two_shard_colocation_2a'::regclass
AND pl.nodeport = :worker_1_port
LIMIT 1;
-- current state:
-- coordinator: []
-- worker 1: [1_1, 1_2]
-- worker 2: [2_1, 2_2]
SELECT sh.logicalrelid, pl.nodeport
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid::text IN ('two_shard_colocation_1a', 'two_shard_colocation_1b', 'two_shard_colocation_2a', 'two_shard_colocation_2b')
ORDER BY sh.logicalrelid, pl.nodeport;
-- If the coordinator were taken into account, the rebalancer would consider this balanced and do nothing (shard_count < worker_count),
-- but because the coordinator is not allowed to hold shards, the rebalancer will distribute each colocation group across both workers
select rebalance_table_shards(shard_transfer_mode:='block_writes');
-- final state:
-- coordinator: []
-- worker 1: [1_1, 2_1]
-- worker 2: [1_2, 2_2]
SELECT sh.logicalrelid, pl.nodeport
FROM pg_dist_shard sh JOIN pg_dist_shard_placement pl ON sh.shardid = pl.shardid
WHERE sh.logicalrelid::text IN ('two_shard_colocation_1a', 'two_shard_colocation_1b', 'two_shard_colocation_2a', 'two_shard_colocation_2b')
ORDER BY sh.logicalrelid, pl.nodeport;
-- cleanup
DROP TABLE two_shard_colocation_1a, two_shard_colocation_1b, two_shard_colocation_2a, two_shard_colocation_2b CASCADE;
-- verify we detect if one of the tables does not have a replica identity or primary key
-- and error out in case of shard transfer mode = auto
SELECT 1 FROM citus_remove_node('localhost', :worker_2_port);