Commit Graph

1924 Commits (1784c7ef857d2d7b6f686f04fca4a1c12becba60)

Author SHA1 Message Date
SaitTalhaNisanci 1784c7ef85
Merge branch 'master' into split_multi 2021-06-16 15:26:09 +03:00
Sait Talha Nisanci c7d04e7f40 swap multi_schedule and multi_schedule_1 2021-06-16 14:40:14 +03:00
Sait Talha Nisanci c55e44a4af Drop table if exists 2021-06-16 14:19:59 +03:00
Sait Talha Nisanci fc89487e93 Split check multi 2021-06-16 14:19:59 +03:00
Naisila Puka e26b29d3bb
Fix nextval('seq_name'::text) bug, and schema for seq tests (#5046) 2021-06-16 13:58:49 +03:00
Marco Slot a7e4d6c94a Fix a bug that causes worker_create_or_alter_role to crash with NULL input 2021-06-15 20:07:08 +02:00
Onur Tirtir a209999618
Enforce table opt constraints when using alter_columnar_table_set (#5029) 2021-06-08 17:39:16 +03:00
Ahmet Gedemenli 089ef35940 Disable dropping and truncating known shards
Add test for disabling dropping and truncating known shards
2021-06-02 14:30:27 +02:00
Jelte Fennema 1a83628195 Use "orphaned shards" naming in more places
We were not very consistent in how we named these shards.
2021-06-04 11:39:19 +02:00
Jelte Fennema 503c70b619 Cleanup orphaned shards before moving when necessary
A shard move would fail if there was an orphaned version of the shard on
the target node. With this change, before actually failing, we try to
clean up orphaned shards to see if that fixes the issue.
2021-06-04 11:23:07 +02:00
Jelte Fennema 280b9ae018 Cleanup orphaned shards at the start of a rebalance
In case the background daemon hasn't cleaned up shards yet, we do this
manually at the start of a rebalance.
2021-06-04 11:23:07 +02:00
Jelte Fennema 7015049ea5 Add citus_cleanup_orphaned_shards UDF
Sometimes the background daemon doesn't clean up orphaned shards quickly
enough. It's useful to have a UDF to trigger this removal when needed.
We already had a UDF like this but it was only used during testing. This
exposes that UDF to users. As a safety measure it cannot be run in a
transaction, because that would cause the background daemon to stop
cleaning up shards while this transaction is running.
2021-06-04 11:23:07 +02:00
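A minimal usage sketch of the UDF described in the entry above; the call itself is named in the commit message, the surrounding comments are illustrative:

```sql
-- Trigger removal of orphaned shard placements right away instead of
-- waiting for the maintenance daemon. Cannot be run inside a transaction.
SELECT citus_cleanup_orphaned_shards();
```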
Naisila Puka 0f37ab5f85
Fixes column default coming from a sequence (#4914)
* Add user-defined sequence support for MX

* Remove default part when propagating to workers

* Fix ALTER TABLE with sequences for mx tables

* Clean up and add tests

* Propagate DROP SEQUENCE

* Removing function parts

* Propagate ALTER SEQUENCE

* Change sequence type before propagation & cleanup

* Revert "Propagate ALTER SEQUENCE"

This reverts commit 2bef64c5a29f4e7224a7f43b43b88e0133c65159.

* Ensure sequence is not used in a different column with different type

* Insert select tests

* Propagate rename sequence stmt

* Fix issue with group ID cache invalidation

* Add ALTER TABLE ALTER COLUMN TYPE .. precaution

* Fix attnum inconsistency and add various tests

* Add ALTER SEQUENCE precaution

* Remove Citus hook

* More tests

Co-authored-by: Marco Slot <marco.slot@gmail.com>
2021-06-03 23:02:09 +03:00
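As a rough illustration of the scenario the PR above targets (the sequence and table names are made up, not taken from the commit), a column default coming from a user-defined sequence on a distributed table:

```sql
-- Hypothetical example: a user-defined sequence backing a column default
-- on a table that is subsequently distributed.
CREATE SEQUENCE order_id_seq;

CREATE TABLE orders (
    order_id    bigint DEFAULT nextval('order_id_seq'),
    customer_id bigint,
    amount      numeric
);

SELECT create_distributed_table('orders', 'customer_id');
```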
Marco Slot c03729ad03 Only warn about reference tables when removing last node 2021-06-01 10:53:12 +02:00
Hanefi Onaldi 056005db4d
Improve tests for truncating local data (#5012)
We have a slightly different behavior when using the truncate_local_data_after_distributing_table UDF on metadata-synced clusters. This PR aims to add tests to cover such cases.

We allow distributing tables with data that have foreign keys to reference tables only on metadata-synced clusters. This is the reason why some of my earlier tests failed when run on a single-node Citus cluster.
2021-06-03 08:51:32 +03:00
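For reference, a sketch of how the UDF mentioned in the entry above is typically invoked; the table name is a placeholder:

```sql
-- After distributing a table that still holds a local copy of its data on
-- the coordinator, remove that local data.
SELECT truncate_local_data_after_distributing_table('events');
```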
Ahmet Gedemenli 0fbddc740d Fix shard id difference for enterprise 2021-06-01 17:17:46 +03:00
Jelte Fennema 4c20bf7a36
Remove pg_dist_rebalence_strategy_enterprise_check (#5014)
This is not necessary anymore now that the rebalancer is open source.
2021-06-01 06:16:46 -07:00
Ahmet Gedemenli 69d39c0e8b Fix relname null bug when parallel execution 2021-06-01 14:14:35 +03:00
Ahmet Gedemenli 9638933d9d Remove function GenerateNewTargetEntriesForSortClauses 2021-06-01 12:35:36 +03:00
Jelte Fennema d3feee37ea
Add a simple python script to generate a new test (#3972)
The current default citus settings for tests are not really best
practice anymore. However, we keep them because lots of tests depend on
them.

I noticed that I kept creating the same test harness every time I added a
new test. This is a simple script that generates that harness, given a
name for the test.

To run:

src/test/regress/bin/create_test.py my_awesome_test
2021-06-01 11:22:11 +02:00
SaitTalhaNisanci c72d2b479b
Add tests for union pushdown workaround (#5005) 2021-05-31 20:02:20 +02:00
SaitTalhaNisanci 8c3f85692d
Not consider old placements when disabling or removing a node (#4960)
* Not consider old placements when disabling or removing a node

* update cluster test
2021-05-28 22:38:20 +02:00
SaitTalhaNisanci 40a229976f
Fix flaky test because of parallel metadata syncing (#5004) 2021-05-28 13:19:15 +03:00
Hanefi Onaldi 4941f00a95
Do not run ref2ref tests in parallel 2021-05-21 16:14:59 +03:00
Hanefi Onaldi 878513f325
Remove all occurences of replication_model GUC 2021-05-21 16:14:59 +03:00
SaitTalhaNisanci 87e3a5e24a
Use 2PC when using a node connection (#4997) 2021-05-21 14:58:53 +03:00
SaitTalhaNisanci 82f34a8d88
Enable citus.defer_drop_after_shard_move by default (#4961)
Enable citus.defer_drop_after_shard_move by default
2021-05-21 10:48:32 +03:00
Jelte Fennema 10f06ad753 Fetch shard size on the fly for the rebalance monitor
Without this change the rebalancer progress monitor gets the shard sizes
from the `shardlength` column in `pg_dist_placement`. This column needs to
be updated manually by calling `citus_update_table_statistics`.
However, `citus_update_table_statistics` could lead to distributed
deadlocks while database traffic is on-going (see #4752).

To work around this we don't use `shardlength` column anymore. Instead
for every rebalance we now fetch all shard sizes on the fly.

A few additional things this change does:
1. It adds tests for the rebalance progress function.
2. If a shard move cannot be done because a source or target node is
   unreachable, then we error out and stop the rebalance, instead of showing
   a warning and continuing. When using the by_disk_size rebalance
   strategy it's not safe to continue with other moves if a specific
   move failed. It's possible that the failed move made space for the
   next move, and because the failed move never happened this space now
   does not exist.
3. Adds two new columns to the result of `get_rebalance_progress`, which
   show the size of the shard on the source and target node.

Fixes #4930
2021-05-20 16:38:17 +02:00
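A hedged sketch of inspecting the monitor described above; `get_rebalance_progress` is the name used elsewhere in this log (see #4963 further down), and the exact column set, including the new shard-size columns, may vary by Citus version:

```sql
-- While a rebalance runs in another session, inspect its progress,
-- including the shard sizes fetched on the fly.
SELECT * FROM get_rebalance_progress();
```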
Nils Dijk a6c2d2a4c4
Feature: alter database owner (#4986)
DESCRIPTION: Add support for ALTER DATABASE OWNER

This adds support for changing the database owner. It achieves this by marking the database as a distributed object, which means Citus looks for its dependencies and orders the user creation commands (enterprise only) before the alter of the database owner. This is mostly important when adding new nodes.

By having the database marked as a distributed object, Citus can easily tell which `ALTER DATABASE ... OWNER TO ...` commands to propagate: it resolves the object address of the database, verifies it is a distributed object, and hence propagates the change of ownership to all workers.

Given that the ownership of the database might have implications on subsequent commands in a transaction, we force sequential mode for transactions that contain an `ALTER DATABASE ... OWNER TO ...` command. The transaction fails with a meaningful hint if it has already executed parallel statements.

By default the feature is turned off, since roles are not automatically propagated and having it turned on would cause hard-to-understand errors for the user. It can be turned on by setting the `citus.enable_alter_database_owner` GUC.
2021-05-20 13:27:44 +02:00
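A short sketch of the feature as described above; the database and role names are placeholders:

```sql
-- The feature is off by default; enable propagation of database owner changes.
SET citus.enable_alter_database_owner TO on;

-- Once enabled, this is propagated to the workers and forces sequential mode
-- for the remainder of the transaction.
ALTER DATABASE my_database OWNER TO new_owner;
```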
Onder Kalaci d07db99ea4 Make sure that target node in shard moves is eligible for shard move 2021-05-20 10:51:01 +02:00
Jelte Fennema 924959fdb1
Include result type in upgrade diff test (#4987)
We often change result types of functions slightly. Our downgrade tests
wouldn't notice these changes. This change adds them to the description
of these items.

An example of an SQL change that isn't caught without this change and is
caught with the get_rebalance_progress change in this PR:
https://github.com/citusdata/citus/pull/4963
2021-05-18 16:25:39 +02:00
SaitTalhaNisanci ff2a125a5b
Lookup hostname before execution (#4976)
We look up the hostname just before the execution so that, even if there are cached entries in the prepared statement cache, we use the updated entries.
2021-05-18 16:46:31 +03:00
SaitTalhaNisanci eaa7d2bada
Not block maintenance daemon (#4972)
It was possible to block the maintenance daemon by taking a SHARE ROW
EXCLUSIVE lock on pg_dist_placement. Until the lock was released, the
maintenance daemon would be blocked.

We should not block the maintenance daemon in any case, so now we try to
get the pg_dist_placement lock without waiting; if we cannot get it, we
don't try to drop the old placements.
2021-05-17 03:22:35 -07:00
Nils Dijk c91f8d8a15
Feature: localhost guc (#4836)
DESCRIPTION: introduce `citus.local_hostname` GUC for connections to the current node

Citus once in a while needs to connect to itself for some system operations. This used to be hardcoded to `localhost`. The hardcoded hostname causes some issues, for example in environments where `sslmode=verify-full` is required. It is not always desirable or even feasible to get `localhost` as an alt name on the certificate.

By introducing a GUC to use when connecting to the current instance, the user has more control over which network path is used and which hostname needs to be present in the server certificate.
2021-05-12 16:59:44 +02:00
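A minimal sketch of using the GUC introduced above; the hostname is a placeholder:

```sql
-- Make Citus connect to the current node via a name that appears on the
-- server certificate, instead of the previously hardcoded 'localhost'.
ALTER SYSTEM SET citus.local_hostname TO 'node-1.internal.example.com';
SELECT pg_reload_conf();
```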
Hanefi Onaldi 13808b60cf
Update gitignore files 2021-05-12 09:49:07 +03:00
Jelte Fennema cbbd10b974
Implement an improvement threshold in the rebalancer (#4927)
Every move in the rebalancer algorithm results in an improvement in the
balance. However, even if the improvement in the balance was very small,
the move was still chosen. This is especially problematic if the shard
itself is very big and the move will take a long time.

This changes the rebalancer algorithm to take the relative size of the
balance improvement into account when choosing moves. By default a move
will not be chosen if it improves the balance by less than half of the
size of the shard. An extra argument is added to the rebalancer
functions so that the user can decide to lower the default threshold if
the ignored move is wanted anyway.
2021-05-11 14:24:59 +02:00
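A hedged example of the extra argument mentioned above; the parameter name `improvement_threshold` is an assumption about the released API, as the commit message does not spell it out:

```sql
-- Lower the improvement threshold (default: half the shard size) so that
-- smaller balance improvements are still considered worth a move.
SELECT rebalance_table_shards(
    rebalance_strategy    := 'by_disk_size',
    improvement_threshold := 0.1  -- assumed parameter name
);
```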
Ahmet Gedemenli 8cb505d6e1
Fix matview access method change issue (#4959)
* Fix matview access method change issue

* Use pg function get_am_name

* Split view generation command into pieces
2021-05-07 15:47:24 +03:00
SaitTalhaNisanci 6b1904d37a
When moving a shard to a new node ensure there is enough space (#4929)
* When moving a shard to a new node ensure there is enough space

* Add WaitForMiliseconds time utility

* Add more tests and increase readability

* Remove the retry loop and use a single udf for disk stats

* Address review

* address review

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2021-05-06 17:28:02 +03:00
Ahmet Gedemenli bc818e76e2 Add notice log message for skipping child tables for optimization 2021-05-06 16:49:37 +03:00
Ahmet Gedemenli 2e0bb5c0c8 Fix nested select query with union bug 2021-05-05 20:35:00 +03:00
Jelte Fennema 0e6c080e81
Run copy_modified in upgrade tests (#4952)
This allows running the following command to update the expected files
with normalized output files for upgrade tests too:

```bash
cp src/test/regress/{results,expected}/upgrade_rebalance_strategy_before.out
```
2021-05-05 12:28:05 +02:00
Jelte Fennema 50357db957
Simplify code that tests the shard rebalancer algorithm (#4925)
This modifies the test code to use sane defaults instead of requiring
all values to be specified in the test.
2021-05-03 15:47:19 +02:00
Hanefi Onaldi 23a505d41f
Bump PG versions in CI (#4941)
Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
Co-authored-by: Sait Talha Nisanci <s.talhanisanci@gmail.com>
2021-05-03 13:51:20 +03:00
Jelte Fennema 2f29d4e53e Continue to remove shards after first failure in DropMarkedShards
The comment of DropMarkedShards described the behaviour that after a
failure we would continue trying to drop other shards. However, the code
did not do this and would stop after the first failure. Instead of
simply fixing the comment I fixed the code, because the described
behaviour is more useful. Now a single shard that cannot be removed yet
does not block others from being removed.
2021-04-30 15:42:09 +03:00
Marco Slot 4b49cb112f Fix FROM ONLY queries on partitioned tables 2021-04-27 16:10:07 +02:00
Onur Tirtir 889ad6fa8c Run some upgrade tests only when old version=9.0 2021-04-26 14:53:53 +03:00
Ahmet Gedemenli 332c5ce4ad
Fix worker partitioned size functions (#4922) 2021-04-26 10:29:46 +03:00
Jelte Fennema 763fa1cf41 Fix diff-filter to search the whole line for matches
Recently two new normalization line deletion rules have been added that
don't match the start of a line:
```
/local tables that are added to metadata but not chained with reference tables via foreign keys might be automatically converted back to postgres tables$/d
/Consider setting citus.enable_local_reference_table_foreign_keys to 'off' to disable this behavior$/d
```

Because `diff-filter` used `regex.match` these lines were not removed
when creating a new diff. This could cause some confusing diffs, where
the wrong lines were shown as changed. This fixes that by using
`regex.search` instead of `regex.match`.
2021-04-23 12:43:49 +02:00
Onder Kalaci 918838e488 Allow constant VALUES clauses in pushdown queries
As long as the VALUES clause contains constant values, we should not
recursively plan the queries/CTEs.

This is follow-up work to #1805, so we can easily apply OUTER join
checks as if the VALUES clause were a reference table/immutable function.
2021-04-21 14:28:08 +02:00
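An illustration of the kind of query this lets the planner push down; the distributed table name is made up:

```sql
-- A VALUES clause containing only constants behaves like a reference
-- table / immutable function, so the join can be pushed down without
-- recursively planning the VALUES part.
SELECT d.key, v.label
FROM distributed_table d
JOIN (VALUES (1, 'one'), (2, 'two')) AS v (key, label)
    ON d.key = v.key;
```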
SaitTalhaNisanci 93c2dcf3d2
Fix data-race with concurrent calls of DropMarkedShards (#4909)
* Fix problems with concurrent calls of DropMarkedShards

When trying to enable `citus.defer_drop_after_shard_move` by default it
turned out that DropMarkedShards was not safe to call concurrently.
This could especially cause big problems when also moving shards at the
same time. During tests it was possible to trigger a state where a shard
that was moved would not be available on any of the nodes anymore after
the move.

Currently DropMarkedShards is only called in production by the
maintenance daemon. Since this is only a single process, triggering such
a race is currently impossible in production settings. In future changes
we will want to call DropMarkedShards from other places too though.

* Add some isolation tests

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2021-04-21 10:59:48 +03:00