Commit Graph

2353 Commits (abbcf4099a39f21672ef60deb3f4a7b2fad7d723)

Author SHA1 Message Date
Jelte Fennema b1cad26ebc Move CheckCitusVersion to the top of each function
Previously this was usually done after argument parsing. This can cause
SEGFAULTs if the number or type of arguments changes in a new version.
By checking that the Citus version is correct before doing any argument
parsing, we protect against these types of issues. Issues like this have
occurred in pg_auto_failover, so it's not just a theoretical concern.

The main reason these calls were not at the top of functions is really
just historical: in the past we didn't allow statements before
declarations, so having this check before the argument parsing would
only have been possible if we first declared all variables.

In addition to moving existing CheckCitusVersion calls, this commit also
adds them to rebalancer-related functions (they were missing there).
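
A minimal sketch of the resulting pattern (the UDF name is hypothetical,
and the header housing CheckCitusVersion is an assumption):

#include "postgres.h"
#include "fmgr.h"
#include "distributed/metadata_cache.h" /* assumed home of CheckCitusVersion() */

PG_FUNCTION_INFO_V1(citus_example_udf);

Datum
citus_example_udf(PG_FUNCTION_ARGS)
{
    /*
     * Check the version before touching any arguments: if the SQL
     * definition and the loaded binary disagree on the signature,
     * the PG_GETARG_* calls below could read garbage and SEGFAULT.
     */
    CheckCitusVersion(ERROR);

    Oid relationId = PG_GETARG_OID(0);

    /* ... the actual work, using relationId ... */

    PG_RETURN_OID(relationId);
}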
2021-06-01 17:43:46 +02:00
Jelte Fennema 4c20bf7a36
Remove pg_dist_rebalence_strategy_enterprise_check (#5014)
This is not necessary anymore now that the rebalancer is open source.
2021-06-01 06:16:46 -07:00
Ahmet Gedemenli 69d39c0e8b Fix relname null bug in parallel execution 2021-06-01 14:14:35 +03:00
Ahmet Gedemenli 9638933d9d Remove function GenerateNewTargetEntriesForSortClauses 2021-06-01 12:35:36 +03:00
Jelte Fennema 3271f1bd13
Fix data race in get_rebalance_progress (#5008)
To be able to report progress of the rebalancer, the rebalancer updates
the state of a shard move in a shared memory segment. To then fetch the
progress, `get_rebalance_progress` can be called, which reads this
shared memory.

Without this change it did so without using any synchronization
primitives, allowing for data races. This fixes that by using atomic
operations to update and read from the parts of the shared memory that
can be changed after initialization.
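
A hedged sketch of the pattern (the struct and field names here are
hypothetical), using PostgreSQL's atomics API:

#include "postgres.h"
#include "port/atomics.h"

/* hypothetical progress slot living in the shared memory segment */
typedef struct ShardMoveProgress
{
    uint64 shardId;               /* fixed after initialization */
    pg_atomic_uint64 bytesCopied; /* updated while the move runs */
} ShardMoveProgress;

/*
 * Writer side (the rebalancer). The slot must have been set up with
 * pg_atomic_init_u64() during initialization.
 */
static void
ProgressUpdate(ShardMoveProgress *slot, uint64 bytesCopied)
{
    pg_atomic_write_u64(&slot->bytesCopied, bytesCopied);
}

/* reader side (get_rebalance_progress): no torn reads */
static uint64
ProgressRead(ShardMoveProgress *slot)
{
    return pg_atomic_read_u64(&slot->bytesCopied);
}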
2021-05-31 15:27:32 +02:00
SaitTalhaNisanci 8c3f85692d
Do not consider old placements when disabling or removing a node (#4960)
* Do not consider old placements when disabling or removing a node

* update cluster test
2021-05-28 22:38:20 +02:00
SaitTalhaNisanci a20cc3b36a
Only consider shard state 1 in citus shards (#4970) 2021-05-28 11:33:48 +03:00
SaitTalhaNisanci a4944a2102
Rename CoordinatedTransactionShouldUse2PC (#4995) 2021-05-21 18:57:42 +03:00
Hanefi Onaldi c160325d07
Use streaming replication when repl factor = 1 2021-05-21 16:14:59 +03:00
Hanefi Onaldi 878513f325
Remove all occurrences of replication_model GUC 2021-05-21 16:14:59 +03:00
SaitTalhaNisanci 87e3a5e24a
Use 2PC when using a node connection (#4997) 2021-05-21 14:58:53 +03:00
SaitTalhaNisanci 82f34a8d88
Enable citus.defer_drop_after_shard_move by default (#4961)
2021-05-21 10:48:32 +03:00
Nils Dijk d7dd247fb5
fix shared dependencies that are not resident in a database (#4992)
DESCRIPTION: fix shared dependencies that are not resident in a database

e.g. databases depend on users (their owners) where neither has a
database they reside in. These dependencies are recorded in pg_shdepend
with a `dbid` of `InvalidOid`. When we fetch our shared dependencies we
don't take these links into account.

With this patch we use logic inspired by `classIdGetDbId` to decide when to use `MyDatabaseId` vs `InvalidOid` to correctly resolve dependencies between shared objects.
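
A sketch of that decision (the function name is hypothetical;
IsSharedRelation is PostgreSQL's helper for recognizing shared
catalogs):

#include "postgres.h"
#include "catalog/catalog.h" /* IsSharedRelation() */
#include "miscadmin.h"       /* MyDatabaseId */

/*
 * Shared objects (databases, roles, tablespaces, ...) live in shared
 * catalogs and are recorded in pg_shdepend with dbid = InvalidOid;
 * everything else is local to the current database.
 */
static Oid
DependencyLookupDatabaseId(Oid classId)
{
    if (IsSharedRelation(classId))
        return InvalidOid;

    return MyDatabaseId;
}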
2021-05-20 08:55:02 -07:00
Jelte Fennema 10f06ad753 Fetch shard size on the fly for the rebalance monitor
Without this change the rebalancer progress monitor gets the shard sizes
from the `shardlength` column in `pg_dist_placement`. This column needs to
be updated manually by calling `citus_update_table_statistics`.
However, `citus_update_table_statistics` could lead to distributed
deadlocks while database traffic is ongoing (see #4752).

To work around this we don't use the `shardlength` column anymore.
Instead, for every rebalance we now fetch all shard sizes on the fly.

Three additional things this does:
1. It adds tests for the rebalance progress function.
2. If a shard move cannot be done because a source or target node is
   unreachable, then we raise an error and stop the rebalance, instead of showing
   a warning and continuing. When using the by_disk_size rebalance
   strategy it's not safe to continue with other moves if a specific
   move failed. It's possible that the failed move made space for the
   next move, and because the failed move never happened this space now
   does not exist.
3. Adds two new columns to the result of `get_rebalance_progress` which
   show the size of the shard on the source and target node.

Fixes #4930
2021-05-20 16:38:17 +02:00
Nils Dijk a6c2d2a4c4
Feature: alter database owner (#4986)
DESCRIPTION: Add support for ALTER DATABASE OWNER

This adds support for changing the database owner. It achieves this by marking the database as a distributed object. Because the database is marked as a distributed object, Citus will look for its dependencies and order the user creation commands (enterprise only) before the alter of the database owner. This is mostly important when adding new nodes.

With the database marked as a distributed object, Citus can easily decide which `ALTER DATABASE ... OWNER TO ...` commands to propagate: it resolves the object address of the database and verifies that it is a distributed object, in which case the change of ownership is propagated to all workers.

Given that the ownership of the database might have implications on subsequent commands in transactions, we force sequential mode for transactions that have an `ALTER DATABASE ... OWNER TO ...` command in them. Such a transaction fails with meaningful help when it has already executed parallel statements.

By default the feature is turned off, since roles are not automatically propagated; having it turned on would cause hard-to-understand errors for the user. It can be turned on via the `citus.enable_alter_database_owner` setting.
2021-05-20 13:27:44 +02:00
Onder Kalaci d07db99ea4 Make sure that the target node in shard moves is eligible for the shard move 2021-05-20 10:51:01 +02:00
Onder Kalaci 926069a859 Wait until all connections are successfully established
Comment from the code:
/*
 * Iterate until all the tasks are finished. Once all the tasks
 * are finished, ensure that all the connection initializations
 * are also finished. Otherwise, those connections are terminated
 * abruptly before they are established (or failed). Instead, we let
 * ConnectionStateMachine() properly handle them.
 *
 * Note that we could have the connections that are not established
 * as a side effect of slow-start algorithm. At the time the algorithm
 * decides to establish new connections, the execution might have tasks
 * to finish. But, the execution might finish before the new connections
 * are established.
 */

 Note that the abruptly terminated connections lead to the following errors:

2020-11-16 21:09:09.800 CET [16633] LOG:  could not accept SSL connection: Connection reset by peer
2020-11-16 21:09:09.872 CET [16657] LOG:  could not accept SSL connection: Undefined error: 0
2020-11-16 21:09:09.894 CET [16667] LOG:  could not accept SSL connection: Connection reset by peer

To easily reproduce the issue:

- Create a single-node Citus cluster
- Add the coordinator to the metadata
- Create a distributed table with shards on the coordinator
- f.sql:  select count(*) from test;
- pgbench -f /tmp/f.sql postgres -T 12 -c 40 -P 1 (or the same command with -C)
2021-05-19 15:59:13 +02:00
Onder Kalaci 995adf1a19 Executor takes connection establishment and task execution costs into account
With this commit, the executor becomes smarter about refraining from
opening new connections. As a basic example: if connection establishment
takes 1000ms and each task executes in 5ms, the executor is now smart
enough not to establish new connections.
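
A rough sketch of the trade-off (all names are hypothetical; the real
executor derives these numbers from its own statistics):

#include <stdbool.h>

/*
 * Opening another connection only pays off when the remaining work
 * costs more than establishing the connection itself.
 */
static bool
ShouldOpenNewConnection(double connectionCostMs, double avgTaskCostMs,
                        int remainingTaskCount)
{
    double remainingWorkMs = avgTaskCostMs * remainingTaskCount;

    /* e.g. 1000ms to connect vs a handful of 5ms tasks: don't bother */
    return remainingWorkMs > connectionCostMs;
}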
2021-05-19 15:48:07 +02:00
Onder Kalaci 28b0b4ebd1 Move slow start increment to generic place 2021-05-19 14:31:20 +02:00
Marco Slot 715dce1eea Reduce local insert memory usage during deparsing 2021-05-18 16:11:43 +02:00
Marco Slot 644b266dee Only cache local plans when reusing a distributed plan 2021-05-18 16:11:43 +02:00
Marco Slot 00792831ad Add execution memory contexts and free after local query execution 2021-05-18 16:11:43 +02:00
SaitTalhaNisanci ff2a125a5b
Lookup hostname before execution (#4976)
We look up the hostname just before the execution, so that we use up-to-date entries even when the prepared statement cache contains cached ones.
2021-05-18 16:46:31 +03:00
SaitTalhaNisanci eaa7d2bada
Do not block the maintenance daemon (#4972)
It was possible to block the maintenance daemon by taking a SHARE ROW
EXCLUSIVE lock on pg_dist_placement. Until the lock was released, the
maintenance daemon would be blocked.

We should not block the maintenance daemon in any case, hence we now
try to get the pg_dist_placement lock without waiting; if we cannot get
it, we don't try to drop the old placements.
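
A sketch of the non-blocking approach (DistPlacementRelationId() and the
drop step are assumptions; ConditionalLockRelationOid is PostgreSQL's
non-waiting lock primitive, and the lock mode is only illustrative):

#include "postgres.h"
#include "storage/lmgr.h"     /* ConditionalLockRelationOid() */
#include "storage/lockdefs.h" /* lock modes */

extern Oid DistPlacementRelationId(void); /* assumed Citus lookup */

static void
TryDropOldPlacements(void)
{
    Oid placementRelationId = DistPlacementRelationId();

    /* never wait: if someone holds a conflicting lock, skip this round */
    if (!ConditionalLockRelationOid(placementRelationId, RowExclusiveLock))
        return;

    /* ... drop the placements marked for deferred removal ... */

    UnlockRelationOid(placementRelationId, RowExclusiveLock);
}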
2021-05-17 03:22:35 -07:00
Nils Dijk c91f8d8a15
Feature: localhost guc (#4836)
DESCRIPTION: introduce `citus.local_hostname` GUC for connections to the current node

Citus once in a while needs to connect to itself for some system operations. This used to be hardcoded to `localhost`. The hardcoded hostname causes some issues, for example in environments where `sslmode=verify-full` is required. It is not always desirable or even feasible to get `localhost` as an alt name on the certificate.

By introducing a GUC to use when connecting to the current instance, the user has more control over which network path is used and which hostname must be present in the server certificate.
2021-05-12 16:59:44 +02:00
Jelte Fennema cbbd10b974
Implement an improvement threshold in the rebalancer (#4927)
Every move in the rebalancer algorithm results in an improvement in the
balance. However, a move was chosen even when that improvement was very
small. This is especially problematic if the shard itself is very big
and the move will take a long time.

This changes the rebalancer algorithm to take the relative size of the
balance improvement into account when choosing moves. By default a move
will not be chosen if it improves the balance by less than half of the
size of the shard. An extra argument is added to the rebalancer
functions so that the user can lower the default threshold when a move
that would otherwise be ignored is wanted anyway.
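
In sketch form (hypothetical names), the new selection rule is roughly:

#include <stdbool.h>

/*
 * A move is only chosen when it improves the balance by at least
 * improvement_threshold (default 0.5) times the size of the shard
 * being moved.
 */
static bool
MoveIsWorthIt(double balanceImprovement, double shardSize,
              double improvementThreshold)
{
    return balanceImprovement >= improvementThreshold * shardSize;
}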
2021-05-11 14:24:59 +02:00
Onder Kalaci cc4870a635 Remove wrong PG_USED_FOR_ASSERTS_ONLY 2021-05-11 12:58:37 +02:00
Onder Kalaci a231ff29b0 Prepare for some improvements to the online rebalancer
To see all the changes, see https://github.com/citusdata/citus-enterprise/pull/586/files
2021-05-10 19:54:31 +02:00
Onur Tirtir 7def297a3b
Move the logic that builds relation col list into a function (#4964) 2021-05-10 20:01:28 +03:00
Onur Tirtir 59fea712e2
Implement a helper to create a memory context for stripe reads (#4965) 2021-05-10 19:55:47 +03:00
SaitTalhaNisanci 5a941814fd
Close connection after each shard move (#4967) 2021-05-10 16:57:19 +03:00
Ahmet Gedemenli 8cb505d6e1
Fix matview access method change issue (#4959)
* Fix matview access method change issue

* Use pg function get_am_name

* Split view generation command into pieces
2021-05-07 15:47:24 +03:00
SaitTalhaNisanci 6b1904d37a
When moving a shard to a new node ensure there is enough space (#4929)
* When moving a shard to a new node ensure there is enough space

* Add WaitForMiliseconds time utility

* Add more tests and increase readability

* Remove the retry loop and use a single udf for disk stats

* Address review

* address review

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2021-05-06 17:28:02 +03:00
Ahmet Gedemenli bc818e76e2 Add notice log message for skipping child tables for optimization 2021-05-06 16:49:37 +03:00
Ahmet Gedemenli 2e0bb5c0c8 Fix bug with nested select queries containing UNION 2021-05-05 20:35:00 +03:00
Jelte Fennema 50357db957
Simplify code that tests the shard rebalancer algorithm (#4925)
This modifies the test code to use sane defaults instead of requiring
all values to be specified in the test.
2021-05-03 15:47:19 +02:00
Jelte Fennema 2f29d4e53e Continue to remove shards after first failure in DropMarkedShards
The comment on DropMarkedShards described the behaviour that, after a
failure, we would continue trying to drop other shards. However, the
code did not do this and stopped after the first failure. Instead of
simply fixing the comment I fixed the code, because the described
behaviour is more useful. Now a single shard that cannot be removed yet
does not block others from being removed.
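
A sketch of the new loop shape (DropShardPlacement and the element type
are hypothetical; PG_TRY/PG_CATCH are PostgreSQL's error-handling
macros, and a real implementation needs more cleanup than shown here):

#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/elog.h"

extern void DropShardPlacement(void *placement); /* hypothetical helper */

static int
DropMarkedShardsSketch(List *placementsToDrop)
{
    int failedDropCount = 0;
    ListCell *cell = NULL;

    foreach(cell, placementsToDrop)
    {
        void *placement = lfirst(cell);

        PG_TRY();
        {
            DropShardPlacement(placement);
        }
        PG_CATCH();
        {
            /* discard the error and keep going with the next shard */
            FlushErrorState();
            failedDropCount++;
        }
        PG_END_TRY();
    }

    return failedDropCount;
}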
2021-04-30 15:42:09 +03:00
Sait Talha Nisanci 8cabd2e822 Decrease memory usage with rebalancer
We decrease memory usage by:
- Freeing temporary buffers
- Using a separate memory context for blocks that use a "small" amount
of memory but can be repeated many times, such as loops (see the sketch
below)
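
A sketch of the loop-local context pattern (the function and the
per-shard work are placeholders; the memory-context API is
PostgreSQL's):

#include "postgres.h"
#include "nodes/pg_list.h"
#include "utils/memutils.h"

static void
ProcessShardsSketch(List *shards)
{
    /* short-lived context that is reset once per iteration */
    MemoryContext loopContext =
        AllocSetContextCreate(CurrentMemoryContext,
                              "PerShardContext",
                              ALLOCSET_DEFAULT_SIZES);
    ListCell *cell = NULL;

    foreach(cell, shards)
    {
        MemoryContext oldContext = MemoryContextSwitchTo(loopContext);

        /* ... per-shard work that pallocs temporary buffers ... */

        MemoryContextSwitchTo(oldContext);
        MemoryContextReset(loopContext);
    }

    MemoryContextDelete(loopContext);
}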
2021-04-29 13:40:47 +03:00
Hanefi Onaldi 2f90ce931b
Fix minor issues with makefile targets (#4717) 2021-04-28 15:46:55 +03:00
Marco Slot 4b49cb112f Fix FROM ONLY queries on partitioned tables 2021-04-27 16:10:07 +02:00
Ahmet Gedemenli fe65be993e Sort GUCs in alphabetic order 2021-04-26 15:05:42 +03:00
Ahmet Gedemenli 332c5ce4ad
Fix worker partitioned size functions (#4922) 2021-04-26 10:29:46 +03:00
Onder Kalaci 918838e488 Allow constant VALUES clauses in pushdown queries
As long as the VALUES clause contains constant values, we should not
recursively plan the queries/CTEs.

This is follow-up work to #1805. Thus, we can easily apply OUTER JOIN
checks as if the VALUES clause were a reference table/immutable function.
2021-04-21 14:28:08 +02:00
SaitTalhaNisanci 93c2dcf3d2
Fix data-race with concurrent calls of DropMarkedShards (#4909)
* Fix problems with concurrent calls of DropMarkedShards

When trying to enable `citus.defer_drop_after_shard_move` by default it
turned out that DropMarkedShards was not safe to call concurrently.
This could especially cause big problems when also moving shards at the
same time. During tests it was possible to trigger a state where a shard
that was moved would not be available on any of the nodes anymore after
the move.

Currently DropMarkedShards is only called in production by the
maintenance daemon. Since this is only a single process, triggering such
a race is currently impossible in production settings. In future changes
we will want to call DropMarkedShards from other places too, though.

* Add some isolation tests

Co-authored-by: Jelte Fennema <github-tech@jeltef.nl>
2021-04-21 10:59:48 +03:00
Ahmet Gedemenli 33c620f232
Optimize partitioned disk size calculation (#4905)
* Optimize partitioned disk size calculation

* Polish

* Fix test for citus_shard_cost_by_disk_size

Try optimizing if not CSTORE
2021-04-19 13:30:56 +03:00
Onur Tirtir 96278822d9
Move columnar test helpers to a separate file (#4908)
* Move columnar test helpers to another file

* Rename column_store_memory_stats to columnar_store_memory_stats
2021-04-16 18:56:21 +03:00
Onder Kalaci 5482d5822f Keep more statistics about connection establishment times
When DEBUG4 is enabled, Citus now prints the per-connection
establishment time.
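
A sketch of what such a report could look like (the function and
message text are hypothetical; ereport with DEBUG4 is the standard
PostgreSQL mechanism):

#include "postgres.h"
#include "utils/elog.h"

static void
ReportConnectionEstablishmentTime(const char *hostname, int port,
                                  long durationMs)
{
    /* only emitted when log_min_messages is DEBUG4 or lower */
    ereport(DEBUG4, (errmsg("establishing connection to %s:%d took %ld ms",
                            hostname, port, durationMs)));
}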
2021-04-16 14:56:31 +02:00
Onder Kalaci 5b78f6cd63 Keep more execution statistics
When DEBUG4 is enabled, Citus now prints per-task execution times. 2021-04-16 14:45:00 +02:00
2021-04-16 14:45:00 +02:00
jeff-davis 9ed56928d3
Columnar: fix use-after-free. (#4906)
Co-authored-by: Jeff Davis <jefdavi@microsoft.com>
2021-04-15 01:00:00 -07:00
Hanefi Onaldi 9919fbe3f8 Switch to sequential mode on long partition names
This commit adds support for long partition names for distributed tables:
- ALTER TABLE dist_table ATTACH PARTITION ..
- CREATE TABLE .. PARTITION OF dist_table ..

Note: the create_distributed_table UDF does not support long table and
partition names; that case is not covered in this commit.
2021-04-14 15:27:50 +03:00