Commit Graph

1778 Commits (ace800851a88d691f694c86244dcccd72ea90d1d)

Author SHA1 Message Date
Onder Kalaci 22b5175fd1 Make sure that the community and enterprise tests produce the same output 2022-01-04 13:30:31 +01:00
Önder Kalacı 0a8b0b06c6
Do not allow distributed functions on non-metadata synced nodes (#5586)
Before this commit, Citus was triggering metadata syncing
in the background when a function is distributed. However,
with Citus 11, we expect all clusters to have metadata syncing
enabled, so we do not expect any nodes to be missing the metadata.

This change:
	(a) pro: simplifies the code and opens up possibilities
		 to simplify further by reducing the scope of the
		 bg worker to only syncing node metadata
	(b) pro: explicitly asks users to sync the metadata so that
		 any unforeseen impact can be easily detected
	(c) con: for distributed functions without a distribution
		 argument, we do not strictly require the metadata to be
		 synced; however, for completeness and simplicity, we do so
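
A minimal sketch of the resulting workflow (the node name and function are hypothetical):

```SQL
-- sync the metadata to the worker first; otherwise distributing
-- the function on a non-synced cluster now errors out
SELECT start_metadata_sync_to_node('worker-1', 5432);

-- distribute the function, using its first argument as the
-- distribution argument
SELECT create_distributed_function('my_func(int)', '$1');
```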
2022-01-04 13:12:57 +01:00
Halil Ozan Akgul aef2d83c7d Fix metadata sync fails on multi_transaction_recovery 2021-12-29 11:21:32 +03:00
Önder Kalacı d33650d1c1
Record if any partitioned Citus tables during upgrade (#5555)
With Citus 11, the default behavior is to sync the metadata.
However, partitioned tables created pre-Citus 11 might have
index names that are not compatible with metadata syncing.

See https://github.com/citusdata/citus/issues/4962 for the
details.

With this commit, we record the existence of partitioned tables
such that we can fix them later if any exist.
2021-12-27 03:33:34 -08:00
Halil Ozan Akgul 0c292a74f5 Fix metadata sync fails on multi_truncate 2021-12-27 13:54:53 +03:00
Önder Kalacı c9127f921f
Avoid round trips while fixing index names (#5549)
With this commit, fix_partition_shard_index_names()
works significantly faster.

For example:

- 32 shards, 365 partitions, 5 indexes: drops from ~120 seconds to ~44 seconds
- 32 shards, 1095 partitions, 5 indexes: drops from ~600 seconds to ~265 seconds

`queryStringList` can be really long, because it may contain #partitions * #indexes entries.

Before this change, we were actually going through the executor, which
triggers one round trip per entry in queryStringList.

The aim of this commit is to avoid the round-trips by creating a single query string.

I first simply tried sending `q1;q2;..;qn`. However, the executor is designed to
handle multi-statement executions only via the infrastructure mentioned
above (i.e., by tracking the query indexes in the list and doing one statement
per round trip).

Another option could have been to change the executor such that it only tracks
the query index when `queryStringList` is provided, not when the queryString
contains multiple `;`s. That is (a) more work, (b) could cause weird edge
cases with failure handling, and (c) felt like coding a special case into the executor.
2021-12-27 10:29:37 +01:00
Halil Ozan Akgul bb636e6a29 Fix metadata sync fails on multi_function_evaluation 2021-12-24 19:32:58 +03:00
Halil Ozan Akgul 70e68d5312 Fix metadata sync fails on multi_name_lengths 2021-12-24 14:33:32 +03:00
Halil Ozan Akgul 5c2fb06322 Fix metadata sync fails on multi_sequence_default 2021-12-24 14:33:32 +03:00
Halil Ozan Akgul b9c06a6762 Turn metadata sync on in multi_metadata_sync 2021-12-24 10:58:13 +03:00
Hanefi Onaldi 479b2da740 Fix one flaky failure test 2021-12-23 20:11:45 +03:00
Ahmet Gedemenli 042d45b263 Propagate foreign server ops 2021-12-23 17:54:04 +03:00
Ahmet Gedemenli 8e4ff34a2e Do not include return table params in the function arg list
(cherry picked from commit 90928cfd74)

Fix function signature generation

Fix comment typo

Add test for worker_create_or_replace_object

Add test for recreating distributed functions with OUT/TABLE params

Add test for recreating distributed function that returns setof int

Fix test output

Fix comment
2021-12-21 19:01:42 +03:00
Marco Slot 2eef71ccab Propagate SET TRANSACTION commands 2021-12-18 11:31:39 +01:00
Onder Kalaci fc98f83af2 Add citus.grep_remote_commands
Simply applies

```SQL
SELECT textlike(command, citus.grep_remote_commands)
```
If it returns true, the command is logged; otherwise, it is not.

When citus.grep_remote_commands is an empty string, all commands are
logged.
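
For example (the pattern is hypothetical):

```SQL
-- log only the remote commands touching pg_dist_ tables
SET citus.grep_remote_commands TO '%pg_dist%';
```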
2021-12-17 11:47:40 +01:00
Halil Ozan Akgul df8d0f3db1 Turn metadata sync on in multi_replicate_reference_table and multi_citus_tools 2021-12-17 10:25:57 +03:00
Halil Ozan Akgul 8943d7b52f Turn metadata sync on in mx_regular_user and remove_coordinator 2021-12-16 11:26:24 +03:00
Halil Ozan Akgul b82af4db3b Turn metadata sync on in multi_size_queries, multi_drop_extension and multi_unsupported_worker_operations 2021-12-16 11:10:54 +03:00
Hanefi Onaldi acdcd9422c
Fix one flaky failure test (#5528)
Removes flaky test
2021-12-15 18:59:58 +03:00
Hanefi Onaldi 29e4516642 Introduce citus_check_cluster_node_health UDF
This UDF coordinates connectivity checks across the whole cluster.

This UDF gets the list of active readable nodes in the cluster, and
coordinates all connectivity checks in sequential order.

The algorithm is:

for sourceNode in activeReadableWorkerList:
    c = connectToNode(sourceNode)
    for targetNode in activeReadableWorkerList:
        result = c.execute(
            "SELECT citus_check_connection_to_node(targetNode.name,
                                                   targetNode.port)")
        emit sourceNode.name,
             sourceNode.port,
             targetNode.name,
             targetNode.port,
             result

- result -> true  ->  connection attempt from source to target succeeded
- result -> false -> connection attempt from source to target failed
- result -> NULL  -> connection attempt from the current node to source node failed

I suggest you use the following query to get an overview of the connectivity:

```SQL
SELECT bool_and(COALESCE(result, false))
FROM citus_check_cluster_node_health();
```

Whenever this query returns false, there is a connectivity issue; check the detailed output.
2021-12-15 01:41:51 +03:00
Halil Ozan Akgul e060720370 Fix metadata sync fails in multi_index_statements 2021-12-14 11:28:08 +03:00
Halil Ozan Akgul a951e52ce8 Fix drop index trying to drop coordinator local indexes on metadata worker nodes 2021-12-14 11:28:08 +03:00
Halil Ozan Akgul 98e38e2e4e Fix metadata sync fails on failure_connection_establishment 2021-12-13 11:51:56 +03:00
Halil Ozan Akgul 507df08422 Fix metadata sync fails on propagate_statistics and pg13_propagate_statistics tests 2021-12-09 12:28:11 +03:00
Halil Ozan Akgul 4c8f79d7dd Turn metadata sync on in failure schedule 2021-12-08 11:22:56 +03:00
Halil Ozan Akgul 4f272ea0e5 Fix metadata sync fails in multi_extension 2021-12-08 10:25:43 +03:00
Burak Velioglu ed8e32de5e
Sync pg_dist_object on an update and propagate while syncing to a new node
Before this PR, we were updating citus.pg_dist_object metadata, which keeps
the metadata related to objects on Citus, only on the coordinator node. In
order to allow using those objects from worker nodes (or erroring out with a
proper error message), we've started to propagate that metadata to worker
nodes as well.
2021-12-06 19:25:50 +03:00
Halil Ozan Akgul ef09ba0d06 Fix metadata sync fails of multi_table_ddl 2021-12-06 13:44:30 +03:00
Halil Ozan Akgul a6d0de060c Fix fails with metadata syncing in undistribute_table 2021-12-03 13:58:53 +03:00
Hanefi Onaldi 56e9b1b968 Introduce UDF to check worker connectivity
citus_check_connection_to_node runs a simple query on a remote node and
reports whether this attempt was successful.

This UDF will be used to make sure each worker node can connect to all
the worker nodes in the cluster.

parameters:
nodename: required
nodeport: optional (default: 5432)

return value:
boolean success
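
A usage sketch (the node name is hypothetical):

```SQL
SELECT citus_check_connection_to_node('worker-1');
SELECT citus_check_connection_to_node('worker-1', 5432);
```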
2021-12-03 02:30:28 +03:00
Onder Kalaci 549edcabb6 Allow disabling node(s) when multiple failures happen
As of the master branch, Citus does all modifications to replicated tables
(e.g., reference tables and distributed tables with replication factor > 1)
via 2PC and avoids any shardstate=3. As a side-effect of those changes,
handling node failures for replicated tables changes.

With this PR, when one (or multiple) node failures happen, users would
see query errors on modifications. If the problem is intermittent, that's OK;
once the node failure(s) recover by themselves, the modification queries will
succeed. If the node failure(s) are permanent, users should call
`SELECT citus_disable_node(...)` to disable the node. As soon as the node is
disabled, modifications start to succeed. However, the disabled node now falls
behind, which means that when the node is up again, the placements should be
re-created on it. First, use `SELECT citus_activate_node()`. Then, use
`SELECT replicate_table_shards(...)` to replicate the missing placements on
the re-activated node.
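
A sketch of that recovery sequence (node name, port, and table are hypothetical):

```SQL
-- permanent failure: disable the node so modifications succeed again
SELECT citus_disable_node('worker-1', 5432);

-- once the node is back up, re-activate it and re-create the
-- missing placements
SELECT citus_activate_node('worker-1', 5432);
SELECT replicate_table_shards('my_replicated_table'::regclass);
```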
2021-12-01 10:19:48 +01:00
Halil Ozan Akgul 11072b4cb8 Normalize create role command in drop_partitioned_table test 2021-11-30 12:46:22 +03:00
Onder Kalaci 38b08ebde9 Generalize the error checks while removing node
The checks for preventing removal of a node are very much reference
table centric. We are soon going to add the same checks for replicated
tables. So, make the checks generic such that:
	 (a) replicated tables fit naturally
	 (b) we can use the same checks in `citus_disable_node`.
2021-11-26 14:25:29 +01:00
Halil Ozan Akgul 87a1c760d9 Fix tests in multi-1-schedule that fail with metadata syncing 2021-11-26 12:09:53 +03:00
Onder Kalaci b4931f7345 Do not acquire locks on reference tables when a node is removed/disabled
Before this commit, we acquired the metadata locks on the reference
tables while removing/disabling a node on all the MX nodes.

Although that has some marginal benefits (a concurrent
modification during remove/disable node blocks instead of erroring
out), the drawbacks seem worse. Both citus_remove_node and citus_disable_node
are not tolerant to multiple node failures.

With this commit, we relax the locks. The implication is that while
a node is removed/disabled, users might see query errors. On the
other hand, this change makes removing/disabling nodes more
tolerant to multiple node failures.
2021-11-26 09:08:25 +01:00
Onur Tirtir 76b8006a9e
Allow overwriting columnar storage pages written by aborted xacts (#5484)
When refactoring the storage layer in #4907, we deleted the code that allows
overwriting a disk page previously written but not known by metadata.

Readers can see the change that introduced the code allowing this in
commit a8da9acc63.

The reasoning was that, as of 10.2, we started aligning page
reservations (`AlignReservation`) for subsequent writes right after
allocating pages from disk. That means, even if the writer transaction
fails, subsequent writes are guaranteed to allocate a new page and write
there. For this reason, attempting to write to a previously allocated
page is not possible for a columnar table that the user created
on v10.2.x.

However, since older versions of columnar don't do that, the following
example scenario can still result in writing to such a disk page, even if
the user has now upgraded to v10.2.x. This is because, when upgrading storage to
2.0 (`ColumnarStorageUpdateIfNeeded`), we calculate `reservedOffset` of
the metapage based on the highest used address known by stripe
metadata (`GetHighestUsedAddressAndId`). However, stripe metadata
doesn't have entries for aborted writes. As a result, the highest used
address is computed by ignoring pages that are allocated but not
used.

- User attempts writing to a columnar table on Citus v10.0x/v10.1x.
- Write operation fails for some reason.
- User upgrades Citus to v10.2.x.
- When attempting to write to the same columnar table, they hit the "attempt
  to write columnar data .." error since the write operation done in the
  older version of columnar already allocated that page, and now we are
  overwriting it.

For this reason, with this commit, we re-do the change done in
a8da9acc63.

For the reasons given above, it wasn't possible to add a test for
this commit via the usual code-paths. Hence, we added a UDF only for
testing purposes so that we can reproduce the exact scenario in our
regression test suite.
2021-11-26 07:51:13 +01:00
Onur Tirtir 73f06323d8 Introduce dependencies from columnarAM to columnar metadata objects
During pg upgrades, we have seen that it is not guaranteed that a
columnar table will be created after the metadata objects are created.
Prior to the changes done in this commit, we had the following dependency
relationship in `pg_depend`:

```
columnar_table ----> columnarAM ----> citus extension
                                           ^  ^
                                           |  |
columnar.storage_id_seq --------------------  |
                                              |
columnar.stripe -------------------------------
```

Since `pg_upgrade` just follows the topological sort of the objects
when creating the database dump, the above dependency graph doesn't imply that
`columnar_table` should be created before metadata objects such as
`columnar.storage_id_seq` and `columnar.stripe` are created.

For this reason, with this commit we add new records to `pg_depend` to
make columnarAM depend on all rel objects living in the `columnar`
schema. That way, `pg_upgrade` will know it needs to create those before
creating `columnarAM`, and similarly, before creating any tables using
`columnarAM`.

Note that in addition to inserting those records via the installation script,
we also do the same in `citus_finish_pg_upgrade()`. This is because
`pg_upgrade` rebuilds catalog tables in the new cluster, and that means
we must insert them in the new cluster too.
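
One way to verify the new records, assuming standard pg_depend semantics (a sketch, not part of this commit):

```SQL
-- list the rel objects that columnarAM now depends on
SELECT refobjid::regclass AS referenced_object
FROM pg_depend
WHERE classid = 'pg_am'::regclass
  AND objid = (SELECT oid FROM pg_am WHERE amname = 'columnar')
  AND refclassid = 'pg_class'::regclass;
```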
2021-11-23 13:14:00 +03:00
Onur Tirtir ef2ca03f24 Reproduce bug via test suite 2021-11-23 13:14:00 +03:00
Hanefi Onaldi e6160ad131
Document failing tests for issue 5099 2021-11-18 20:01:34 +03:00
Marco Slot 9e6ca23286 Remove cstore_fdw-related logic 2021-11-16 13:59:03 +01:00
Önder Kalacı 8c0bc94b51
Enable replication factor > 1 in metadata syncing (#5392)
- [x] Add some more regression test coverage
- [x] Make sure returning works fine in case of
     local execution + remote execution
     (task->partiallyLocalOrRemote works as expected, already added tests)
- [x] Implement locking properly (and add isolation tests)
     - [x] We do #shardcount round-trips on `SerializeNonCommutativeWrites`.
           We made it a single round-trip.
- [x] Acquire locks for subselects on the workers & add isolation tests
- [x] Add a GUC to prevent modification from the workers, hence increase the
      coordinator-only throughput
        - The performance slightly drops (~15%), unless
         `citus.allow_modifications_from_workers_to_replicated_tables`
         is set to false
2021-11-15 15:10:18 +03:00
Onur Tirtir 25024b776e
Skip deleting options if columnar.options is already dropped (#5458)
Drop extension might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the error below when opening
columnar.options to delete records for the columnar table that we are
about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg, which is why
I added the test to after_pg_upgrade_schedule.
2021-11-12 12:30:09 +03:00
Ahmet Gedemenli 14a33d4e8e Introduce GUC citus.use_citus_managed_tables 2021-11-11 14:09:06 +03:00
Hanefi Onaldi 3d9cec70fd
Update migration paths from 10.2 to 11.0 (#5459)
We recently introduced a set of patches to 10.2, and introduced 10.2-4
migration version. This migration version only resides on `release-10.2`
branch, and is missing on our default branch. This creates a problem
because we do not have a valid migration path from 10.2 to the latest 11.0.

To remedy this issue, I copied the relevant migration files from
`release-10.2` branch, and renamed some of our migration files on
default branch to make sure we have a linear upgrade path.
2021-11-11 13:55:28 +03:00
Önder Kalacı 6f5a343ff4
Make sure that enterprise tests pass (#5451) 2021-11-08 18:11:19 +03:00
Önder Kalacı 98ca6ba6ca
Allow lock_shard_resources to be called by the users with privileges (#5441)
Before this commit, we required the user to be the owner of the shard/table
in order to call lock_shard_resources.

However, that is too restrictive. We can have users with GRANTs
on the table who are not owners of the tables/shards.

With this commit, we allow such patterns.
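
A usage sketch (the integer lock mode and the shard id are hypothetical values):

```SQL
-- acquire locks on shard resources; now allowed for users with
-- GRANTs on the table, not only the owner
SELECT lock_shard_resources(3, ARRAY[102008]);
```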
2021-11-08 15:36:51 +01:00
Önder Kalacı d5b371b2e0
Merge branch 'master' into naisila/fix-partitioned-index 2021-11-08 10:53:16 +01:00
naisila 385ba94d15 Run fix_partition_shard_index_names after each wrong naming command 2021-11-08 10:43:34 +01:00
Marco Slot 78866df13c Remove master_append_table_to_shard UDF 2021-11-08 10:43:24 +01:00
Marco Slot fba93df4b0 Remove copy into new append shard logic 2021-11-07 21:01:40 +01:00
Marco Slot 27ba19f7e1 Fix a flappy test in drop_column_partitioned_table 2021-11-07 18:25:44 +01:00
Sait Talha Nisanci ab29c25658 Fix missing from entry 2021-11-04 18:54:52 +03:00
Ahmet Gedemenli b30ed46068
Fixes ALTER STATISTICS IF EXISTS bug (#5435)
* Fix ALTER STATISTICS IF EXISTS bug
2021-11-04 16:14:05 +03:00
Halil Ozan Akgul 91b377490b Fix multi_cluster_management fails for metadata syncing 2021-11-04 11:09:21 +03:00
Talha Nisanci 19f28eabae
Fix citus upgrade local run issues (#5414)
This PR fixes 2 separate issues related to the local run of citus upgrade tests.

d3e7c825ab fixes the issue that, with our new testing infrastructure, we moved/renamed some of the existing folders. This created a problem for local runs of citus upgrade tests since some paths were sensitive to such changes. This commit tries to make the paths more generic so that this issue is less likely to happen in the future, while also fixing the current issue.

93de6b60c3 fixes an issue where a new environment variable was added for citus upgrade tests, which is defined in the CI. 0cb51f8c37/.circleci/config.yml (L294)
This environment variable wasn't set in our local runs, hence it would create problems. Instead of defining this environment variable in the local run, we change the citus_upgrade run command to use an existing env variable, which is now also set in the CI.
2021-11-03 16:17:36 +03:00
Jelte Fennema 9b784e58bf
Add tests for special hash values (#5431)
We fixed some crashes a while back that would only occur in cases where
the value of a distribution column would have resulted in a very high or a very
low hash value. This adds a regression test for those crashes.
2021-11-03 13:42:39 +01:00
Jelte Fennema 0cb51f8c37
Test a query that failed on 9.5.8 when coordinator is in metadata (#5412)
This test starts passing because of PR #4508, to be precise commit:
24e60b44a1

When I undo that commit, the newly added test starts failing. We add
the test to make sure we don't regress on this again.
2021-11-03 12:27:28 +01:00
Halil Ozan Akgul c0785d570c Remove EnsureSuperUser from start and stop metadata sync to node 2021-11-01 18:01:49 +03:00
Halil Ozan Akgul c0eb67b24f Skip forceCloseAtTransactionEnd connections only if BEGIN was not sent on them 2021-11-01 17:43:04 +03:00
Ahmet Gedemenli 67dca4363d
Dont auto-undistribute user-added citus local tables (#5314)
* Disable auto-undistribute for user-added citus local tables
2021-10-28 12:10:26 +03:00
Nils Dijk f4297f774a
Bump mitmproxy version (#5334)
There is a vulnerability in mitmproxy with the version we are using.

It would be hard to exploit anything with regards to the artifacts we ship, as it's only used in our test suite. Still, it's good hygiene to _not_ use software with known vulnerabilities.

This PR updates the versions of python, mitmproxy, and the crypto libraries used.
The latest version of mitmproxy for python 3.6 is not patched, hence the upgrade of python.
For our CI images this cascades into upgrading debian as well :)

For CI we bake these versions into our images, so we need to update them as well.

Changes to the CI images: https://github.com/citusdata/the-process/pull/65
2021-10-27 17:57:13 +02:00
Philip Dubé cc50682158 Fix typos. Spurred by spotting "connectios" in logs 2021-10-25 13:54:09 +00:00
Onder Kalaci 575bb6dde9 Drop support for Inactive Shard placements
Given that we do all operations via 2PC, there is no way
for any placement to be marked as INACTIVE.
2021-10-22 18:03:35 +02:00
Önder Kalacı b3299de81c
Drop support for citus.multi_shard_commit_protocol (#5380)
In the past, we allowed users to manually switch to 1PC
(i.e., one-phase commit). However, with this commit, we
don't. All multi-shard modifications are done via 2PC.
2021-10-21 14:01:28 +02:00
Marco Slot dafba6c242 Deprecate master_get_table_metadata UDF 2021-10-21 12:08:05 +02:00
Marco Slot defb97b7f5 Support operator class parameters in indexes 2021-10-20 17:03:59 +02:00
Önder Kalacı 3f726c72e0
When replication factor > 1, all modifications are done via 2PC (#5379)
With Citus 9.0, we introduced `citus.single_shard_commit_protocol` which
defaults to 2PC.

With this commit, we prevent any user from setting it to 1PC and drop support
for `citus.single_shard_commit_protocol`.

Although this might add some overhead for users, it is already the default
behaviour (so the impact is less likely) and marking placements as INVALID is much
worse.
2021-10-20 01:39:03 -07:00
Marco Slot 641ef9bd6f Fix flappy subquery_append test 2021-10-19 15:29:01 +02:00
Marco Slot 096660d61d Remove master_apply_delete_command 2021-10-18 22:29:37 +02:00
Marco Slot bece86b2f7 Add some subquery on append-distributed table tests 2021-10-18 21:11:16 +02:00
Marco Slot 93e79b9262 Never allow co-located joins of append-distributed tables 2021-10-18 21:11:16 +02:00
Marco Slot b97e5081c7 Disable co-located joins for append-distributed tables 2021-10-18 21:11:16 +02:00
Sait Talha Nisanci 6ff2083311 Remove base test as it is not useful anymore 2021-10-18 20:31:18 +03:00
Sait Talha Nisanci 7336c03c22 Add local-dist table joins to arbitrary configs 2021-10-18 20:31:18 +03:00
Önder Kalacı 31c8f279ac
Add helper UDFs to inspect object dependencies (#5293)
- citus_get_all_dependencies_for_object: emulates what Citus would
  qualify as a dependency when adding a new node
- citus_get_dependencies_for_object: emulates what Citus would qualify
  as a dependency when creating an object

Example use:
```SQL
-- find all the dependencies of table test
SELECT
	pg_identify_object(t.classid, t.objid, t.objsubid)
FROM
	(SELECT * FROM pg_get_object_address('table', '{test}', '{}')) as addr
JOIN LATERAL
	citus_get_all_dependencies_for_object(addr.classid, addr.objid, addr.objsubid) as t(classid oid, objid oid, objsubid int)
ON TRUE
	ORDER BY 1;
```
2021-10-18 14:46:49 +03:00
Halil Ozan Akgul b710e0064d Fix tests that fail with MX in multi_schedule 2021-10-15 12:58:38 +03:00
Ahmet Gedemenli 35f6fe5f9f
Refactor/Improve PreprocessAlterTableStmtAttachPartition (#5366)
* Refactor/Improve PreprocessAlterTableStmtAttachPartition
2021-10-14 11:39:39 +03:00
Önder Kalacı af876bf452
Add value materialization test (#5368) 2021-10-13 09:08:24 +02:00
SaitTalhaNisanci a39859bc74
Remove unnecesary output (#5367) 2021-10-13 09:28:01 +03:00
SaitTalhaNisanci 3f65751d43
Add an infrastructure to run same tests with arbitrary configs (#5316)
To run tests in parallel use:

```bash
make check-arbitrary-configs parallel=4
```

To run tests sequentially use:

```bash
make check-arbitrary-configs parallel=1
```

To run only some configs:

```bash
make check-arbitrary-base CONFIGS=CitusSingleNodeClusterConfig,CitusSmallSharedPoolSizeConfig
```

To run only some test files with some config:

```bash
make check-arbitrary-base CONFIGS=CitusSingleNodeClusterConfig EXTRA_TESTS=dropped_columns_1
```

To get a deterministic run, you can give the random's seed:

```bash
make check-arbitrary-configs parallel=4 seed=12312
```

The `seed` will be in the output of the run.

In our regular regression tests, we can see all the details about either planning or execution, but this means
we need to run the same query under different configs/cluster setups again and again, which is not really maintainable.

When we don't care about the internals of how planning/execution is done but only about correctness, especially with different configs,
this infrastructure can be used.

With `check-arbitrary-configs` target, the following happens:

-   a bunch of configs are loaded, which are defined in `config.py`. These configs have different settings such as different shard counts, different citus settings, postgres settings, worker counts, or different metadata.
-   For each config, a separate data directory is created for tests in `tmp_citus_test` with the config's name.
-   For each config, `create_schedule` is run on the coordinator to set up the necessary tables.
-   For each config, `sql_schedule` is run. `sql_schedule` is run on the coordinator if it is a non-mx cluster. If it is mx, it is run either on the coordinator or on a random worker.
-   Test results are checked against the expected outputs.

When test results don't match, you can see the regression diffs in a config's datadir, such as `tmp_citus_tests/dataCitusSingleNodeClusterConfig`.

We also have a PostgresConfig which runs the whole test suite with Postgres.
By default configs use a regular user, but we have a config to run as a superuser as well.

So the infrastructure tests:

-   Postgres vs Citus
-   Mx vs Non-Mx
-   Superuser vs regular user
-   Arbitrary Citus configs

When you want to add a new test, you can add the create statements to `create_schedule` and add the sql queries to `sql_schedule`.
If you are adding Citus UDFs that should be a NO-OP for Postgres, make sure to override the UDFs in `postgres.sql`.

You can add your new config to `config.py`. Make sure to extend either `CitusDefaultClusterConfig` or `CitusMXBaseClusterConfig`.

On the CI, upon a failure, all logfiles will be uploaded as artifacts, so you can check the artifacts tab.
All the regressions will be shown as part of the job on CI.

Locally, you can check the regression diffs in the configs' datadirs, as in `tmp_citus_tests/dataCitusSingleNodeClusterConfig`.
2021-10-12 14:24:19 +03:00
Teja Mupparti a8348047c5
Pushdown procedures with OUT parameters (#5348) 2021-10-11 23:14:36 -07:00
Ahmet Gedemenli d19793c174 Add partitioning support for citus local tables
Add/fix tests

Fix creating partitions

Add test for mx - partition creating case

Enable cascading to partitioned tables

Fix mx partition adding test

Fix cascading through fkeys

Style

Disable converting with non-inherited fkeys

Fix detach bug

Early return in case of cascade & Add tests

Style

Fix undistribute_table bug & Fix test outputs

Remove RemovePartitionRelationIds

Test with undistribute_table

Add test for mx+convert+undistribute

Remove redundant usage of CreatePartitionedCitusLocalTable

Add some comments

Introduce bulk functions for generating attach/detach partition commands

Fix: Convert partitioned tables after adding fkey

Change the error message for partitions

Introduce function ErrorIfPartitionTableAddedToMetadata

Polish attach/detach command generation functions

Use time_partitions for testing

Move mx tests to citus_local_tables_mx

Add new partitioned table to cascade test

Add test with time series management UDFs

Fix test output

Fix: Assertion fail on relation access tracking

Style

Refactor creating partitioned citus local tables

Remove CreatePartitionedCitusLocalTable

Style

Error out if converting multi-level table

Revert some old tests

Error out adding partitioned partition

Polish

Polish/address

Fix create table partition of case

Use CascadeOperationForRelationIdList if no cascade needed

Fix create partition bug

Revert / Add new tests to mx

Style

Fix dropping fkey bug

Add test with IF NOT EXISTS

Convert to CLT when doing ATTACH PARTITION

Add comments

Add more tests with time series management

Edit the error message for converting the child

Use OR instead of AND in ErrorIfUnsupportedAlterTableStmt

Edit/improve tests

Disable ddl prop when dropping default column definitions

Disable/enable ddl prop just before/after the command

Add comment

Add sequence test

Add trigger test

Remove NeedCascadeViaForeignKeys

Add one more insert to sequence test

Add comment

Style

Fix test output shard ids

Update comments

Disable creating fkey on partitions

Move partition check to CreateCitusLocalTable

Add comment

Add check for attaching multi-level partition

Add test for pg_constraint

Check pg_dist_partition in tests

Add test inserting on the worker
2021-10-11 10:45:07 +03:00
Marco Slot 386d2567d4 Reduce reliance on append tables in regression tests 2021-10-08 21:27:14 +02:00
Halil Ozan Akgul 9c9d4b5eeb Turn MX on by default 2021-10-08 18:17:21 +03:00
Naisila Puka 99d3785b5c
Fix flaky test in multi_fix_partition_shard_index_names.sql (#5364) 2021-10-08 18:03:34 +03:00
Naisila Puka d0390af72d
Add fix_partition_shard_index_names udf to fix currently broken names (#5291)
* Add udf to include shardId in broken partition shard index names

* Address reviews: rename index such that operations can be done on it

* More comprehensive index tests

* Final touches and formatting
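
A usage sketch (the table name is hypothetical):

```SQL
-- rename the indexes on the shards of a partitioned table so
-- that they include the shard id
SELECT fix_partition_shard_index_names('events'::regclass);
```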
2021-10-07 19:34:52 +03:00
Marco Slot 91b647024a Fixes CREATE INDEX deparsing issue 2021-10-06 13:08:16 +02:00
Hanefi Onaldi a74409f24c
Bump Citus to 11.0devel 2021-10-01 22:21:22 +03:00
Onur Tirtir fe72e8bb48
Discard index deletion requests made to columnarAM (#5331)
A write operation might trigger index deletion if the index already had
dead entries for the key we are about to insert.
There are two ways of index deletion:
  a) simple deletion
  b) bottom-up deletion (>= pg14)

Since columnar_index_fetch_tuple never sets all_dead to true,
columnarAM doesn't ever expect to receive simple deletion requests
(columnar_index_delete_tuples) as we don't mark any index entries
as dead.

However, since columnarAM doesn't delete any dead entries via simple
deletion, postgres might ask for a more comprehensive deletion
(i.e.: bottom-up) at some point when pg >= 14.

So with this commit, we start gracefully ignoring bottom-up deletion
requests made to columnar_index_delete_tuples.

Given that users can anyway "VACUUM FULL" their columnar tables,
we don't see any problem in ignoring deletion requests.
2021-10-01 14:32:47 +03:00
Önder Kalacı c2311b4c0c
Make (columnar.stripe) first_row_number index a unique constraint (#5324)
* Make (columnar.stripe) first_row_number index a unique constraint

Since stripe_first_row_number_idx is required to scan a columnar
table, we need to make sure that it is created before doing anything
with columnar tables during pg upgrades.

However, a plain btree index is not a dependency of a table, so
pg_upgrade cannot guarantee that stripe_first_row_number_idx gets
created when creating columnar.stripe, unless we make it a unique
"constraint".

To do that, drop stripe_first_row_number_idx and create a unique
constraint with the same name to keep the code change at a minimum.
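
A sketch of that migration, assuming the index covered (storage_id, first_row_number):

```SQL
-- assumption: the old index was on (storage_id, first_row_number)
DROP INDEX columnar.stripe_first_row_number_idx;
ALTER TABLE columnar.stripe
  ADD CONSTRAINT stripe_first_row_number_idx
  UNIQUE (storage_id, first_row_number);
```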

* Add more pg upgrade tests for columnar

* Fix a logic error in upgrade_columnar_after test

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2021-09-30 10:51:56 +03:00
tejeswarm a1604a87e6 Partition shards to be colocated with the parent shards 2021-09-22 14:47:04 -07:00
Onur Tirtir 77a2dd68da
Revoke read access to columnar.chunk from unprivileged user (#5313)
Since this could expose chunk min/max values to unprivileged users.
2021-09-22 16:23:02 +03:00
Onur Tirtir 68335285b4 Columnar CustomScan: Pushdown BoolExpr's as we do before 2021-09-22 10:51:34 +03:00
Onur Tirtir f8b1ff7214
Add CheckCitusVersion() calls to columnarAM (#5308)
Considering all code-paths through which we might interact with a columnar table,
add `CheckCitusVersion` calls to tableAM callbacks:
- initializing table scan (`columnar_beginscan` & `columnar_index_fetch_begin`)
- setting a new filenode for a relation (storage initialization or a table rewrite)
- truncating the storage
- inserting tuple (single and multi)

Also add `CheckCitusVersion` call to:
- drop hook (`ColumnarTableDropHook`)
- `alter_columnar_table_set` & `alter_columnar_table_reset` UDFs
2021-09-20 17:26:41 +03:00
SaitTalhaNisanci 35ff513dfe
Give proper error while distributing a temp table (#5269) 2021-09-17 14:34:40 +03:00
jeff-davis 6e8b19984e
Columnar: separate plan and runtime quals. (#5261)
* Columnar: separate plain and exec quals.

Make a clear separation between plain quals, which contain constants
or extern params; and exec quals, which contain exec params and can't
be evaluated until a rescan.

Fixes #5258.

* more vanilla tests

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2021-09-13 10:54:53 -07:00
jeff-davis d48ceee238
Columnar: add method ReparameterizeCustomPathByChild. (#5275)
When performing a partition-wise join, the planner will adjust paths
parameterized by the parent rel to instead parameterize by the child
rel directly. When this reparameterization happens, we also need to
adjust the join quals to reference the child rather than the parent.

Fixes #5257.
2021-09-13 10:33:48 -07:00
Onur Tirtir ea61efb63a
Not flush writes until need to read them when doing index-scan on columnar (#5247)
Do not flush pending writes if the given tid belongs to a "flushed" or
"aborted" stripe write, or to an "in-progress" stripe write of
another backend.

That way, we reduce the cases where we flush single-tuple
stripes during index scan.

To do that, we follow the steps below for index look-ups:

- Do not flush any pending writes and do a stripe metadata look-up for
  the given tid.
  If the tuple with that tid is found, then there is no need to do another
  look-up since we already found the tuple without needing to flush pending
  writes.

- If the tuple is not found without flushing pending writes, then we have two
  scenarios:

  -  If the given tid belongs to a pending write of my backend, then do a stripe
     metadata look-up for the given tid. But this time, first **flush any pending
     writes**.

  -  Otherwise, just return false from `index_fetch_tuple` since flushing
     pending writes wouldn't help.
2021-09-13 18:41:20 +02:00
Onur Tirtir 4ee0fb2758
Make sure to skip aborted writes when reading the first tuple (#5274)
With 5825c44d5f, we made changes to
skip aborted writes when scanning a columnar table.

However, it looks like we forgot to handle such cases for the very first
call made to columnar_getnextslot. That means that commit only
considered the intermediate stripe read operations.

The functions called by columnar_getnextslot to find the first stripe
to read (ColumnarBeginRead & ColumnarRescan) did not care about
those aborted writes.

To fix that, we teach AdvanceStripeRead to find the very first stripe
to read, and then start using it where we were blindly calling
FindNextStripeByRowNumber.
2021-09-13 11:50:53 +03:00
Naisila Puka a69abe3be0
Fixes bug about int and smallint sequences on MX (#5254)
* Introduce worker_nextval udf for int&smallint column defaults

* Fix current tests and add new ones for worker_nextval
2021-09-09 23:41:07 +03:00
Halil Ozan Akgul 19af1cef2f Errors for CTEs with search clause
Relevant PG commit:
3696a600e2292d43c00949ddf0352e4ebb487e5b
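
An example of the PG 14 search clause syntax involved (the table is hypothetical):

```SQL
WITH RECURSIVE tree AS (
    SELECT id, parent_id FROM nodes WHERE parent_id IS NULL
  UNION ALL
    SELECT n.id, n.parent_id
    FROM nodes n JOIN tree t ON n.parent_id = t.id
) SEARCH DEPTH FIRST BY id SET ordercol
SELECT * FROM tree ORDER BY ordercol;
```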
2021-09-09 13:48:24 +03:00
Marco Slot 04388e13b0 Add worker_append_table_to_shard permissions tests 2021-09-09 11:00:29 +02:00
SaitTalhaNisanci e3e0a028c7
return early in case we want to skip outer vars (#5259) 2021-09-09 10:53:36 +03:00
Onur Tirtir 9935dfb958 Remove a flaky test from columnar_paths
We already knew that it was flaky. Moreover, now it failed on my
branch too.

So removing it with this commit.
2021-09-08 14:15:22 +03:00
Onur Tirtir 3340f17c4e
Prevent planner from choosing parallel scan for columnar tables (#5245)
Previously, for regular table scans, we were setting `RelOptInfo->partial_pathlist`
to `NIL` via `set_rel_pathlist_hook` to discard scan `Path`s that need to use any
parallel workers; this was working nicely.

However, when building indexes, this hook doesn't get called, so we were not
able to prevent spawning parallel workers when building an index. For this
reason, 9b4dc2f804 added a basic
implementation for the `columnar_parallelscan_*` callbacks but also made some
changes to skip using those workers when building the index.

However, now that we are doing stripe reservation in two stages, we call
`heap_inplace_update` at some point to complete stripe reservation.
Postgres throws an error if we call `heap_inplace_update` during
a parallel operation, even if we don't actually make use of those workers.

For this reason, with this PR, we make sure to not generate scan `Path`s that
need to use any parallel workers, by using `get_relation_info_hook`.

This is indeed useful for preventing the spawning of parallel workers during index builds.
2021-09-08 13:53:43 +03:00
Onur Tirtir 5825c44d5f
Handle aborted writes properly when scanning a columnar table (#5244)
If it is certain that we will not use any `parallel_worker`s for a columnar table,
then stripe entries inserted by aborted transactions become visible to
`SnapshotAny` and that causes `REINDEX` to fail by throwing a duplicate key
error.

To fix that:
* consider three states for a stripe write operation:
   "flushed", "aborted", or "in-progress",
* make sure to have a clear separation between them, and
* act according to those three states when reading from a columnar table
2021-09-08 13:26:11 +03:00
Jelte Fennema bb5c494104 Enable binary encoding by default on PG14
Since PG14 we can now use binary encoding for arrays and composite types
that contain user defined types. This was fixed in this commit in
Postgres: 670c0a1d47

This change starts using that knowledge, by not necessarily falling back
to text encoding anymore for those types.

While doing this and testing a bit more, I found various cases where
binary encoding would fail that our checks didn't cover. This fixes
those cases and adds tests for them. It also fixes EXPLAIN ANALYZE
never using binary encoding, which was a leftover of a workaround that
was no longer necessary.

Finally, it changes the default for both `citus.enable_binary_protocol`
and `citus.binary_worker_copy_format` to `true` for PG14 and up. In our
cloud offering `binary_worker_copy_format` already was true by default.
`enable_binary_protocol` had some bug with MX and user defined types,
this bug was fixed by the above mentioned fixes.
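
For example, the new defaults can still be overridden per session:

```SQL
-- revert to text encoding if a type proves problematic
SET citus.enable_binary_protocol TO false;
SET citus.binary_worker_copy_format TO false;
```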
2021-09-06 10:27:29 +02:00
Burak Velioglu c3895f35cd
Add helper UDFs for easy time partition management
- get_missing_time_partition_ranges: Gets the ranges of missing partitions for the given table, interval, and range, unless any existing partition conflicts with the calculated missing ranges.

- create_time_partitions: Creates partitions by getting range values from get_missing_time_partition_ranges.

- drop_old_time_partitions: Drops partitions of the table older than the given threshold.
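
A usage sketch (the table name, interval, and thresholds are hypothetical; the exact argument order is an assumption):

```SQL
-- create monthly partitions covering the next three months
SELECT create_time_partitions('events', '1 month', now() + interval '3 months');

-- drop partitions holding data older than a year
CALL drop_old_time_partitions('events', now() - interval '12 months');
```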
2021-09-03 23:03:13 +03:00
Onur Tirtir 2b71263e40
Align columnar path costing functions (#5239)
* Rename RecostColumnarPaths to CostColumnarPaths

* Rename RecostColumnarIndexPath to CostColumnarIndexPath

* Reorder args of CostColumnarScan to align with other two costing functions

* Not adjust index scan start-up cost

* Rename ColumnarIndexScanAddTotalCost to ColumnarIndexScanAdditionalCost

* Reflect that index scan will at least read one stripe in totalCost calculation

* Organize declarations in columnar_customscan.c
2021-09-03 19:37:42 +03:00
Halil Ozan Akgul 7fadfb74bb Adds error message for REINDEX TABLE queries on distributed partitioned tables 2021-09-03 16:46:42 +03:00
Sait Talha Nisanci 0b67fcf81d Fix style 2021-09-03 16:09:59 +03:00
Halil Ozan Akgul e1f5520e1a Adds propagation of ALTER TABLE .. ALTER COLUMN .. SET COMPRESSION .. 2021-09-03 15:44:28 +03:00
SaitTalhaNisanci 902af39a04 Add join alias tests (#5233)
PG COMMIT: 055fee7eb4dcc78e58672aef146334275e1cc40d
2021-09-03 15:44:28 +03:00
SaitTalhaNisanci 2a2ebab1fa Add tests for jsonb subscripting (#5232)
PG commit: 676887a3b0b8e3c0348ac3f82ab0d16e9a24bd43
2021-09-03 15:44:28 +03:00
Ahmet Gedemenli 2b263f9a2a ALTER STATISTICS .. OWNER TO CURRENT_ROLE (#5225)
(cherry picked from commit 42322caf90ca094777aa01376e02d1187afc1560)
2021-09-03 15:44:28 +03:00
Onder Kalaci 82a3b20fb3 Fix flaky test 2021-09-03 15:44:28 +03:00
Onder Kalaci 5844ab286c Support OUT parameters in procedure pushdown delegation
In PG 14, procedures can have OUT parameters. In Citus' procedure
delegation framework, we need to adjust the function expression
to get the outargs parameters.

Relevant PG change:
e56bce5d43
2021-09-03 15:44:28 +03:00
Ahmet Gedemenli 1ff7186d20 Extended statistics on expressions - PG14 a4d75c8 (#5224)
(cherry picked from commit 1268415f123b5d99cfacfe207c8670240efc1c00)
2021-09-03 15:44:28 +03:00
Halil Ozan Akgul 113d5d6615 Adds support for column compression in table distribution 2021-09-03 15:44:28 +03:00
Ahmet Gedemenli 6fbdeb38a8 ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY - PG14 #71f4c8c (#5223) 2021-09-03 15:44:28 +03:00
Ahmet Gedemenli 66303785f3 Add option PROCESS_TOAST to VACUUM - PG14 #7cb3048 (#5219)
(cherry picked from commit e63bdfc49f9203db14ef77313c1d5e3461a84a32)
2021-09-03 15:44:28 +03:00
Sait Talha Nisanci 35a3f7240d CHANGELOG: Allow REINDEX to change the tablespace of the new index 2021-09-03 15:44:28 +03:00
Sait Talha Nisanci 4e85d9ffce Add empty pg14 sql file 2021-09-03 15:44:28 +03:00
Sait Talha Nisanci 307eb81278 Fix failure for 1pc_copy_hash 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci a6c40ebd14 Fix multi_follower_dml
When the_table is empty, we don't get an error with pg14 anymore, so we
replace it with generate_series so that we get the error.
2021-09-03 15:41:28 +03:00
Sait Talha Nisanci b16dadbe7c Avoid NOTICE message to avoid an alternative output with pg14 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 6ff609fa86 Add alternative output for data_types
It seems like there is a problem with Postgres 14 with SELECT DISTINCT
COUNT. The issue is reported to Postgres and an alternative output is
added. We can remove the alternative output when the issue is fixed on
PG. If this is not an issue on PG (which is unlikely), we should consider
some other solution.
2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 4b951a2ed9 Add alternative output for multi-mx 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 96964aeee5 Turn off debug for one query to avoid adding an alternative output 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci e7607b6bed Add a helper function to check explain has a single task
In order to avoid adding an alternative output, a helper function that checks
whether a given explain plan has a single task is added. This doesn't change what the
changed tests intend to do.
2021-09-03 15:41:28 +03:00
Sait Talha Nisanci e0faf34417 turn off costs in columnar_indexes explain query 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 2656d885f9 Rewrite AppendColumnNames for Pg14
Postgres changed stats expression types as of PG14. Hence we needed to
rewrite the AppendColumnNames method. Also, they removed the error on the
PG side, so we remove it as well.

Relevant commits on pg14:
a4d75c86bf15220df22de0a92c819ecef9db3849
388e75ad33489b77cfb9a8590a91e9287d8fb960
2021-09-03 15:41:28 +03:00
Halil Ozan Akgul c31b0c2652 Sets next_shard_id at partition_wise_join test 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 75fff14792 Turn off VERBOSE to avoid alternative output
With the VERBOSE option, as of PG14, we get a line with "Query Identifier".
2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 6b65dbc492 Add partition_wise_join to avoid big alternative output
There was a small part in multi_partitioning that would need an
alternative output for pg14. Instead of adding an alternative for the
whole file, we created a new file, called partition_wise_join.sql and
added the alternative output for that.
2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 375a1adc9e Check if extversion is the same for seg extension
When we check the exact version of the seg extension, it becomes a
problem when its version changes, such as from 1.3 to 1.4. So we
modified the test to check that the version is the same across
the whole cluster.
2021-09-03 15:41:28 +03:00
Halil Ozan Akgul 9b6ce10892 Removes password outputs from alter_role_propagation tests 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 20c32a7a1d Add alternative output for multi_deparse_function
Postgres tightened up its checks for invalid GUC names, hence we started
to get an alternative output for one of our tests. We add an alternative
output since the file is relatively small.

Commit on PG:
3db826bd55cd1df0dd8c3d811f8e5b936d7ba1e4
2021-09-03 15:41:28 +03:00
Sait Talha Nisanci dc81cae18f Turn off COSTS to avoid alternative output for pg14 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci fb8671f291 Change pg13 test to not differ with pg14 to avoid adding alternative output 2021-09-03 15:41:28 +03:00
Sait Talha Nisanci 3f5c178c93 Remove VERBOSE output to make pg14 and pg13 output the same 2021-09-03 15:41:28 +03:00
jeff-davis 4718b6bcdf
Generate parameterized paths for columnar scans. (#5172)
Allow ColumnarScans to push down join quals by generating
parameterized paths. This significantly expands the utility of chunk
group filtering, making a ColumnarScan behave similarly to an index when
on the inner side of a nested loop join.

Also, evaluate all parameters on beginscan/rescan, which also works
for external parameters.

Fixes #4488.
2021-09-02 22:22:48 -07:00
Onur Tirtir 37d0ecfbb7 Show projected cols for columnar tables in EXPLAIN output 2021-09-02 19:05:32 +03:00
Naisila Puka 4fb05efabb
Distributes partition-to-be table before ProcessUtility (#5191)
* Skip ALTER TABLE constraint checks while planning

* Revert previous commit's solution, keep tests

* Distribute partition-to-be table before ProcessUtility

* Acquire locks in PreprocessAlterTableStmtAttachPartition
2021-09-02 13:07:42 +03:00
Onur Tirtir bf4dfad6f7 Update curcid of given snapshot if it is MVCC
Before starting to scan a columnar table, we always flush the pending
writes to disk.

However, we increment command counter after modifying metadata tables.

On the other hand, now that we _don't always use_ the xact snapshot to scan
a columnar table, writes that we just flushed might not be visible to
the query that just flushed them, since the curcid of the
provided snapshot would be smaller than the command id used
when modifying metadata tables.

To give an example, before this change, below was a possible scenario
due to the changes that we made to use the correct snapshot.

```sql
CREATE TABLE t(a int, b int) USING columnar;
BEGIN;
  INSERT INTO t VALUES (5, 10);

  SELECT * FROM t;
  ┌───┬───┐
  │ a │ b │
  ├───┼───┤
  └───┴───┘
  (0 rows)

  SELECT * FROM t;
  ┌───┬────┐
  │ a │ b  │
  ├───┼────┤
  │ 5 │ 10 │
  └───┴────┘
  (1 row)
```
2021-09-02 11:11:59 +03:00
Naisila Puka acb5ae6ab6
Skip dropping shards when we know it's a partition (#5176) 2021-08-31 17:41:37 +03:00
SaitTalhaNisanci 5ae01303d4
Use get_attnum to find the attribute number of target entry (#5220)
* Use get_attnum to find the attribute number of target entry
2021-08-31 16:47:19 +03:00
Jelte Fennema 481f8be084
Fix crash in shard rebalancer when no distributed tables exist (#5205)
The logging of the amount of ignored moves crashed when no distributed
tables existed in a cluster. This also fixes in passing that the logging
of ignored moves logs the correct number of ignored moves if there
exist multiple colocation groups and all are rebalanced at the same time.
2021-08-31 14:15:24 +02:00
SaitTalhaNisanci b923d51fc6
Bump pg12 and pg13 images to pg12.8 and pg13.8 (#5208)
In our testing infrastructure, even though we use pinned versions of postgres, the auxiliary libraries might pull in newer versions. This is for example the case for libpq, which will now use the libpq libraries from 14beta3.

The changes in this PR are largely due to the libpq changes.

We have also changed the citus version that is used as a base for the citus upgrades, from 10.0 to 10.1. This caused columnar to enforce some extra limits on the settings, which conflicted with our upgrade tests.

The changes in the failure tests are due to the libpq changes.

There are also a lot of changes in the isolation test outputs, hence we
updated all of them.

Co-authored-by: Nils Dijk <nils@citusdata.com>
2021-08-25 16:04:57 +03:00
Onur Tirtir 5af839ada0
Not print metapage.reserved_offset in regression tests (#5168)
* We were not testing reserved_offset in any of those tests anyway,
   only other fields.

* This only happens with compressed columnar tables, and it is because the
   libzstd/liblz4 versions that we have on the exttester CI image might differ
   from what we have in our local environments.
2021-08-23 11:07:10 +03:00
jeff-davis 4f213f293e
Columnar: use generate_series for test rather than load. (#5181) 2021-08-16 16:12:06 -07:00
Onur Tirtir 68f46c5dc9 Use scan context for intermediate mem allocs too 2021-08-16 11:06:03 +03:00
Burak Velioglu 4355ba0a38
Add CREATE INDEX ... ON ONLY and ALTER INDEX ... ATTACH PARTITION (#4938 #4980)
- Add support for CREATE INDEX ... ON ONLY: Before this commit we were not sending the "ONLY" option to the worker nodes at all. With this commit, the "ONLY" parameter will be sent to the worker nodes when necessary. (#4938)

- Add support for ALTER INDEX ... ATTACH PARTITION: Attach child_index to parent_index by creating the same inheritance on the shard level in addition to the table level. (#4980)
2021-08-13 13:12:45 +03:00
Ahmet Gedemenli 9e90894f21
Synchronize hasmetadata flag on mx workers (#5086)
* Synchronize hasmetadata flag on mx workers

* Switch to sequential execution

* Add test

* Use SetWorkerColumn

* Add test for stop_sync

* Remove usage of UpdateHasmetadataOnWorkersWithMetadata

* Remove MarkNodeMetadataSynced

* Fix test for metadatasynced

* Remove MarkNodeMetadataSynced

* Style

* Remove MarkNodeHasMetadata

* Remove UpdateDistNodeBoolAttr

* Refactor SetWorkerColumn

* Use SetWorkerColumnLocalOnly when setting up dependencies

* Use SetWorkerColumnLocalOnly in TriggerSyncMetadataToPrimaryNodes

* Style

* Make update command generator functions static

* Set metadatasynced before syncing

* Call SetWorkerColumn only if the sync is successful

* Try to sync all nodes

* Fix indexno

* Update metadatasynced locally first

* Break if a node fails to sync metadata

* Send worker commands optional

* Style & Rebase

* Add raiseOnError param to SetWorkerColumn

* Style

* Set metadatasynced for all metadata nodes

* Style

* Introduce SetWorkerColumnOptional

* Polish

* Style

* Dont send set command to not synced metadata nodes

* Style

* Polish

* Add test for stop_sync

* Add test for shouldhaveshards

* Add test for isactive flag

* Sort by placementid in the function verify_metadata

* Cover edge cases for failing nodes

* Add comments

* Add nodeport to isactive test

* Add warning if metadata out of sync

* Update warning message
2021-08-12 14:16:18 +03:00
Onder Kalaci 5f02d18ef8 Transactional metadata sync for the maintenance daemon
As we use the current user to sync the metadata to the nodes
with #5105 (and many other PRs), there is no reason that
prevents us from using the coordinated transaction for metadata syncing.

This commit also renames a few functions to reflect their actual
implementation.
2021-08-09 10:34:55 +02:00
Onder Kalaci 35964c6366 Dropped columns do not diverge distribution column for partitioned tables
Before this commit, creating a partition after a DROP column
on the parent (at a position before the dist. key) was leading
the partition to have the wrong distribution column.
2021-08-06 13:36:12 +02:00
naisila 798a7902bf Fix master_update_table_statistics scripts for 9.5 2021-08-03 18:15:56 +03:00
naisila f9fa5a3d69 Fix master_update_table_statistics scripts for 9.4 2021-08-03 18:15:56 +03:00
Onder Kalaci 482b8096e9 Introduce citus_internal_update_relation_colocation
update_distributed_table_colocation can be called by the relation
owner, and internally it updates pg_dist_partition. With this
commit, update_distributed_table_colocation uses an internal
UDF to access pg_dist_partition.

As a result, this operation can now be done by regular users
on MX.
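
A usage sketch (the table names are hypothetical):

```SQL
-- as the owner of the table, callable even by regular users
SELECT update_distributed_table_colocation('events', colocate_with => 'users');
```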
2021-08-03 11:44:58 +02:00
Onur Tirtir 93ebbb0607 Re-cost SeqPath's as well for columnar tables 2021-08-02 11:32:25 +03:00
Onur Tirtir 297f59a70e Re-cost columnar table index paths 2021-08-02 11:16:37 +03:00
Onur Tirtir 73058d35cc Not free (stripe) chunk buffers after de-serializing
Previously, we were only using the chunk group reader for sequential scan.
However, to support index scans on columnar tables, now we use the very
same low level functions for index scan too.

Since those low-level functions were only used for sequential scan, it
was guaranteed that we would never read the same chunk group more than
once, so we were freeing chunk buffers after deserializing them into a
separate buffer.

Now that we use those low level functions for index scan, we cannot
free chunk buffers since it's possible to read the same chunk group
again, such that:

- read chunk group 1 of stripe 5
- read chunk group 2 of stripe 5
- read chunk group 1 of stripe 5 again

Here, when we decide to read chunk group 1 for a second time,
chunk group 1 is not cached. Plus, before this commit, we were
freeing the chunk buffers for chunk group 1 after the first
read, and then we were getting segfaults or errors from low-level
de-compression APIs.
2021-08-02 11:00:12 +03:00
Onur Tirtir 83f5d42365 Use long-lasting mem cxt & optimize correlated index scan 2021-08-02 11:00:12 +03:00
Onur Tirtir 90e856d6bc Keep supported indexes when converting table to columnar 2021-07-30 16:41:01 +03:00
SaitTalhaNisanci 4559d02c41
Fix union pushdown issue (#5079)
* Fix UNION not being pushed down

Postgres optimizes column fields that are not needed in the output. We
were relying on these fields to understand if it is safe to push down a
union query.

This fix looks at the parse query, which has the original column fields
to detect if it is safe to push down a union query.

* Add more tests

* Simplify code and make it more robust

* Process varlevelsup > 0 in FindReferencedTableColumn

* Only look for outers vars in union path

* Add more comments

* Remove UNION ALL specific logic for pulling up childvars
2021-07-29 13:52:55 +03:00
Jelte Fennema 2aa67421a7
Fix showing target shard size in the rebalance progress monitor (#5136)
The progress monitor wouldn't actually update the size of the shard on
the target node when using "block_writes" as the `shard_transfer_mode`.
The reason for this is that the CREATE TABLE part of the shard creation
would only be committed once all data was moved as well. This caused
our size calculation to always return 0, since the table did not exist
yet in the session that the progress monitor used.

This is fixed by first committing creation of the table, and only then
starting the actual data copy.

The test output changes slightly. Apparently, splitting this up into two
transactions instead of one increases the table size after the copy by
about 40kB. The additional size used doesn't increase when the
amount of data in the table is larger (it stays ~40kB per shard). So
this small change in test output is not considered an actual problem.
2021-07-23 16:37:00 +02:00
Jelte Fennema 7d0b6dc9be Include data_type and cache in sequence definition on workers
These two options were not included when creating the sequences on the
workers as part of metadata syncing.

The missing `data_type` part of the definition made finding the cause
of #5126 harder than necessary, because of confusing errors.
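
For example, both options below are now carried over to the workers (the sequence name is hypothetical):

```SQL
CREATE SEQUENCE user_id_seq AS bigint CACHE 100;
```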
2021-07-22 11:49:06 +02:00
Onder Kalaci c8368e7929 Introduce citus_internal_delete_shard_metadata
With this function, the owner of the table is allowed to remove
shard metadata. This is going to be useful for tenant-isolation.
2021-07-19 13:25:05 +02:00
Jelte Fennema adf17a8cf1
Add upgrade and downgrade tests for Citus 10.2 (#5120)
It seems we forgot to add this when starting 10.2 development.
2021-07-16 14:39:04 +02:00
Onder Kalaci 2c349e6dfd Use current user to sync metadata
Before this commit, we always synced the metadata as superuser.
However, that created various edge cases, such as visibility errors
and self distributed deadlocks, and it complicated user access checks.

Instead, with this commit, we use the current user to sync the metadata.
Note that `start_metadata_sync_to_node` still requires superuser,
because accessing certain metadata (like pg_dist_node) always requires
superuser (i.e., the calling user must be a superuser).

However, metadata syncing operations regarding the distributed
tables can now be done by regular users, as long as the user
is the owner of the table. A table owner can still insert nonsense
metadata; however, it would only affect their own table, so we cannot do
anything about that.
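For reference, a sketch of initiating the sync (host and port are illustrative):

```sql
-- still requires superuser because it touches pg_dist_node and friends;
-- the table-level syncing itself now runs as the current user
SELECT start_metadata_sync_to_node('localhost', 9701);
```
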
2021-07-16 13:25:27 +02:00
Onur Tirtir f00c63c33d
Support columnar table index builds with CONCURRENTLY option (#5032)
With this commit, we add (`CREATE INDEX` / `REINDEX`) `CONCURRENTLY` support for columnar tables.

For that, we implement `columnar_index_validate_scan` callback.
The reasoning behind the implementation is as follows:

* Postgres function `validate_index` provides all the TIDs that are currently in the
  index to the `columnar_index_validate_scan` callback via a `tupleSort` object.

* We start scanning the table by using `columnar_getnextslot` as usual.
  Before moving forward, note that `columnar_getnextslot` guarantees
  to return tuples in the order of their TIDs.

* For us to use during the table scan, Postgres provides a snapshot guaranteeing
  that any tuples that are valid according to that snapshot but are not in the
  index must be added to the index.

* Then for each tuple that we read from our table, we continue iterating
  over the given `tupleSort` to find the first TID that is greater than or equal to our
  tuple's TID.

  If both TIDs are equal, then we skip the tuple since it is already
  indexed.

  If the TID that we read from the tupleSort is greater than our tuple's TID, then
  we insert the tuple into the index.
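Usage-wise this is plain Postgres syntax on a columnar table; a hedged sketch (table and index names are hypothetical):

```sql
CREATE TABLE measurements (ts timestamptz, value float8) USING columnar;

-- both of these now work on columnar tables
CREATE INDEX CONCURRENTLY measurements_ts_idx ON measurements (ts);
REINDEX INDEX CONCURRENTLY measurements_ts_idx;
```
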
2021-07-09 13:44:58 +03:00
Hanefi Onaldi 8e9cc229ff
Remove public schema dependency for 10.0 upgrades
This commit contains a subset of the changes that should be cherry
picked to 10.0 releases.
2021-07-09 02:08:22 +03:00
Ahmet Gedemenli ed3b98a80b
Add failure test for stop_metadata_sync_to_node (#5102) 2021-07-08 18:23:19 +03:00
Marco Slot b14955c2bd
Fix PG upgrade scripts for 10.0 2021-07-05 14:38:20 +02:00
Marco Slot 3c0dfc12c0
Fix PG upgrade scripts for 9.5 2021-07-05 13:39:35 +02:00
Marco Slot bee202aa39
Fix PG upgrade scripts for 9.4 2021-07-05 13:39:28 +02:00
Onur Tirtir b118d4188e
Fix lower boundary calculation when pruning range dist table shards (#5082)
This happens only when we have a "<" or "<=" filter on distribution
column of a range distributed table and that filter falls in between
two shards.

When the filter falls in between two shards:

  If the filter is ">" or ">=", then UpperShardBoundary was
  returning "upperBoundIndex - 1", where upperBoundIndex is
  the exclusive shard index used during binary search.
  This is expected since upperBoundIndex is an exclusive
  index.

  If the filter is "<" or "<=", then LowerShardBoundary was
  returning "lowerBoundIndex + 1", where lowerBoundIndex is
  the inclusive shard index used during binary search.
  However, since lowerBoundIndex is an inclusive
  index, we should just return lowerBoundIndex instead of
  doing "+ 1". Before this commit, we were missing the leftmost
  shard in such queries.

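A hedged sketch of the affected query shape (the table and shard ranges are illustrative):

```sql
-- suppose a range-distributed table 'events' has shard A covering
-- event_id in [1, 1000] and shard B covering [2001, 3000]
SELECT * FROM events WHERE event_id <= 1500;
-- the filter falls between the two shards; before this fix, the "+ 1"
-- in LowerShardBoundary also pruned shard A, so the leftmost shard's
-- rows were missed
```
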
* Remove useless conditional branches

The branch that we delete from UpperShardBoundary was obviously useless.

The other one, in LowerShardBoundary, became useless after we removed the "+ 1"
from there.

This is indeed another demonstration of what we are fixing with this PR, and how.

* Improve comments and add more

* Add some tests for upper bound calculation too
2021-07-02 14:48:21 +03:00
Ahmet Gedemenli 8bae58fdb7
Add parameter to cleanup metadata (#5055)
* Add parameter to cleanup metadata

* Set clear metadata default to true

* Add test for clearing metadata

* Separate test file for start/stop metadata syncing

* Fix stop_sync bug for secondary nodes

* Use PreventInTransactionBlock

* Remove debugging logs

* Remove relation not found logs from mx test

* Revert localGroupId when doing stop_sync

* Move metadata sync test to mx schedule

* Add test with name that needs to be quoted

* Add test for views and matviews

* Add test for distributed table with custom type

* Add comments to test

* Add test with stats, indexes and constraints

* Fix matview test

* Add test for dropped column

* Add notice messages to stop_metadata_sync

* Add coordinator check to stop metadata sync

* Revert local_group_id only if clearMetadata is true

* Add a final check to see the metadata is sane

* Remove the drop verbosity in test

* Remove table description tests from sync test

* Add stop sync to coordinator test

* Change the order in stop_sync

* Add test for hybrid (columnar+heap) partitioned table

* Change error to notice for stop sync to coordinator

* Sync at the end of the test to prevent any failures

* Add test case in a transaction block

* Remove relation not found tests
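
A hedged usage sketch (host and port are illustrative; the parameter name follows the PR's wording):

```sql
-- clear_metadata defaults to true, so stopping the sync also removes the
-- synced metadata from the worker; pass false to keep it in place
SELECT stop_metadata_sync_to_node('localhost', 9701, clear_metadata := false);
```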
2021-07-01 16:23:53 +03:00
Sait Talha Nisanci e7ed16c296 Do not include to-be-deleted shards while finding shard placements
Ignore orphaned shards in more places

Only use active shard placements in RouterInsertTaskList

Use IncludingOrphanedPlacements in some more places

Fix comment

Add tests
2021-06-28 13:05:31 +03:00
Naisila Puka fe5907ad2d
Adds propagation of ALTER SEQUENCE and other improvements (#5061)
* Alter seq type when we first use the seq in a dist table

* Don't allow type changes when seq is used in dist table

* ALTER SEQUENCE propagation

* Tests for ALTER SEQUENCE propagation

* Relocate AlterSequenceType and ensure dependencies for sequence

* Support for citus local tables, and other fixes

* Final formatting
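
A hedged end-to-end sketch of the new behaviour (object names are hypothetical):

```sql
CREATE SEQUENCE items_id_seq;
CREATE TABLE items (id bigint DEFAULT nextval('items_id_seq'), payload text);
SELECT create_distributed_table('items', 'id');

ALTER SEQUENCE items_id_seq INCREMENT BY 100;  -- now propagated to the workers
ALTER SEQUENCE items_id_seq AS int;            -- now rejected while the sequence
                                               -- is used in a distributed table
```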
2021-06-24 21:23:25 +03:00
Jelte Fennema d1d386a904
Only allow moves of shards of distributed tables (#5072)
Moving shards of reference tables was possible in at least one case:
```sql
select citus_disable_node('localhost', 9702);
create table r(x int);
select create_reference_table('r');
set citus.replicate_reference_tables_on_activate = off;
select citus_activate_node('localhost', 9702);
select citus_move_shard_placement(102008, 'localhost', 9701, 'localhost', 9702);
```

This would then remove the reference table shard on the source, causing
all kinds of issues. This fixes that by disallowing all shard moves
except for shards of distributed tables.

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2021-06-23 16:25:46 +02:00
Onder Kalaci 75847d10b5 Add regression tests for changing column type with fkey
closes https://github.com/citusdata/citus/issues/2337 as it doesn't
apply anymore.
2021-06-23 09:03:55 +03:00
Onder Kalaci 55ed93bf0d Fix regression tests to avoid any conflicts in enterprise 2021-06-22 08:45:17 +03:00
Onder Kalaci 76ae5dd0db Improve regression tests for prepared statements
With a recent commit (644b266dee), the behaviour of prepared statements
with local cached plans changed slightly.

Now, Citus caches plans as they are re-used. This means the local plan
is cached on the 7th execution, and the 8th execution is the first time
the plan is used from the cache.

So, the tests are improved to cover the 8th execution.
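A hedged sketch of what the improved tests cover (the statement name and table are hypothetical):

```sql
PREPARE count_items(bigint) AS SELECT count(*) FROM items WHERE id = $1;
-- executions 1 through 6 omitted
EXECUTE count_items(1);  -- 7th execution: the local plan gets cached
EXECUTE count_items(1);  -- 8th execution: first run served from the cache
```
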
2021-06-21 13:34:44 +03:00
Onder Kalaci 69ca943e58 Deparse/parse the local cached queries
With local query caching, we try to avoid the deparse/parse stages, as the
operation is too costly.

However, we can do the deparse/parse operations once per cached query, right
before we put the plan into the cache. With that, we avoid edge
cases like (4239) or (5038).

In a sense, we make local plan caching behave similarly to non-cached
local/remote queries, by forcing the query to be deparsed once.
2021-06-21 12:24:29 +03:00
Onur Tirtir 681f700321 Fix first_row_number test for stripe_row_limit enforcement 2021-06-17 10:51:43 +03:00
Onur Tirtir 3d11c0f9ef Merge remote-tracking branch 'origin/master' into columnar-index
Conflicts:
	src/test/regress/expected/columnar_empty.out
	src/test/regress/expected/multi_extension.out
2021-06-16 20:23:50 +03:00
Onur Tirtir b6b969971a Error out for CLUSTER commands on columnar tables 2021-06-16 20:06:33 +03:00
Onur Tirtir 9b4dc2f804 Prevent using parallel scan for columnar index builds 2021-06-16 19:59:32 +03:00
Onur Tirtir 10a762aa88 Implement columnar index support functions 2021-06-16 19:59:32 +03:00
Halil Ozan Akgul db03afe91e Bump citus version to 10.2devel 2021-06-16 17:44:05 +03:00
SaitTalhaNisanci 1784c7ef85
Merge branch 'master' into split_multi 2021-06-16 15:26:09 +03:00
Sait Talha Nisanci c7d04e7f40 swap multi_schedule and multi_schedule_1 2021-06-16 14:40:14 +03:00
Sait Talha Nisanci c55e44a4af Drop table if exists 2021-06-16 14:19:59 +03:00
Naisila Puka e26b29d3bb
Fix nextval('seq_name'::text) bug, and schema for seq tests (#5046) 2021-06-16 13:58:49 +03:00
Marco Slot a7e4d6c94a Fix a bug that causes worker_create_or_alter_role to crash with NULL input 2021-06-15 20:07:08 +02:00
Onur Tirtir a209999618
Enforce table opt constraints when using alter_columnar_table_set (#5029) 2021-06-08 17:39:16 +03:00
Ahmet Gedemenli 089ef35940 Disable dropping and truncating known shards
Add test for disabling dropping and truncating known shards
2021-06-02 14:30:27 +02:00
Jelte Fennema 503c70b619 Cleanup orphaned shards before moving when necessary
A shard move would fail if there was an orphaned version of the shard on
the target node. With this change, before actually failing, we try to
clean up orphaned shards to see if that fixes the issue.
2021-06-04 11:23:07 +02:00
Jelte Fennema 280b9ae018 Cleanup orphaned shards at the start of a rebalance
In case the background daemon hasn't cleaned up shards yet, we do this
manually at the start of a rebalance.
2021-06-04 11:23:07 +02:00