Commit Graph

5041 Commits (5aace9bb37b907c221f3375f8a9d3c50192f24b9)

Author SHA1 Message Date
Jelte Fennema b1cad26ebc Move CheckCitusVersion to the top of each function
Previously this was usually done after argument parsing. This can cause
SEGFAULTs if the number or type of arguments changes in a new version.
By checking that the Citus version is correct before doing any argument
parsing, we protect against these types of issues. Issues like this have
occurred in pg_auto_failover, so it's not just a theoretical issue.

The main reason these calls were not at the top of functions is really
just historical: in the past we didn't allow statements before
declarations, so having this check before the argument parsing would only
have been possible if we first declared all variables.

In addition to moving existing CheckCitusVersion calls, this also adds
them to rebalancer-related functions, where they were missing.
2021-06-01 17:43:46 +02:00
Ahmet Gedemenli 98081557fb
Merge pull request #5016 from citusdata/fix-test-shard-id-issue
Fix shard id difference for enterprise
2021-06-01 17:44:24 +03:00
Ahmet Gedemenli 0fbddc740d Fix shard id difference for enterprise 2021-06-01 17:17:46 +03:00
Jelte Fennema 4c20bf7a36
Remove pg_dist_rebalence_strategy_enterprise_check (#5014)
This is not necessary anymore now that the rebalancer is open source.
2021-06-01 06:16:46 -07:00
Ahmet Gedemenli e2704d9ad9
Merge pull request #5015 from citusdata/fix-relname-null-bug-when-parallel-execution
Fix relname null bug when parallel execution
2021-06-01 15:05:32 +03:00
Ahmet Gedemenli 69d39c0e8b Fix relname null bug when parallel execution 2021-06-01 14:14:35 +03:00
Ahmet Gedemenli 28b97c6c53
Merge pull request #5010 from citusdata/remove-func-generate-new-target-entries-for-sort-clauses
Remove function GenerateNewTargetEntriesForSortClauses
2021-06-01 12:46:47 +03:00
Ahmet Gedemenli 9638933d9d Remove function GenerateNewTargetEntriesForSortClauses 2021-06-01 12:35:36 +03:00
Jelte Fennema d3feee37ea
Add a simple python script to generate a new test (#3972)
The current default citus settings for tests are not really best
practice anymore. However, we keep them because lots of tests depend on
them.

I noticed that I kept creating the same test harness every time I added a
new test. This is a simple script that generates that harness, given a
name for the test.

To run:

src/test/regress/bin/create_test.py my_awesome_test
2021-06-01 11:22:11 +02:00
Marco Slot c03729ad03 Only warn about reference tables when removing last node 2021-06-01 10:53:12 +02:00
Onur Tirtir 94f30a0428 Refactor index check in ColumnarProcessUtility 2021-06-01 11:12:28 +03:00
SaitTalhaNisanci c72d2b479b
Add tests for union pushdown workaround (#5005) 2021-05-31 20:02:20 +02:00
Jelte Fennema 3271f1bd13
Fix data race in get_rebalance_progress (#5008)
To be able to report progress of the rebalancer, the rebalancer updates
the state of a shard move in a shared memory segment. To then fetch the
progress, `get_rebalance_progress` can be called which reads this shared
memory.

Previously this was done without using any synchronization
primitives, allowing for data races. This change fixes that by using atomic
operations to update and read from the parts of the shared memory that
can be changed after initialization.
2021-05-31 15:27:32 +02:00
SaitTalhaNisanci 8c3f85692d
Not consider old placements when disabling or removing a node (#4960)
* Not consider old placements when disabling or removing a node

* update cluster test
2021-05-28 22:38:20 +02:00
SaitTalhaNisanci 40a229976f
Fix flaky test because of parallel metadata syncing (#5004) 2021-05-28 13:19:15 +03:00
SaitTalhaNisanci a20cc3b36a
Only consider shard state 1 in citus shards (#4970) 2021-05-28 11:33:48 +03:00
SaitTalhaNisanci a4944a2102
Rename CoordinatedTransactionShouldUse2PC (#4995) 2021-05-21 18:57:42 +03:00
Hanefi Onaldi 8b8f0161c3
Merge pull request #4891 from citusdata/remove-replmodel-guc
Deprecates the `citus.replication_model` GUC

We used to have 2 different GUCs that decided shard replication models:
- `citus.replication_model`: either set to "statement" or "streaming"
- `citus.shard_replication_factor`, which prevents us from using streaming
  replication when greater than 1.

This PR deprecates the `citus.replication_model` GUC and decides on the
replication model solely based on the shard replication factor of the
distributed tables that are affected by queries.
2021-05-21 16:32:44 +03:00
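As a hedged illustration of the resulting behavior (table names below are made up): with `citus.replication_model` deprecated, the replication model follows solely from the shard replication factor:

SET citus.shard_replication_factor TO 1;
SELECT create_distributed_table('events', 'event_id');       -- streaming replication
SET citus.shard_replication_factor TO 2;
SELECT create_distributed_table('events_copy', 'event_id');  -- statement-based replication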
Hanefi Onaldi 4941f00a95
Do not run ref2ref tests in parallel 2021-05-21 16:14:59 +03:00
Hanefi Onaldi c160325d07
Use streaming replication when repl factor = 1 2021-05-21 16:14:59 +03:00
Hanefi Onaldi 878513f325
Remove all occurrences of replication_model GUC 2021-05-21 16:14:59 +03:00
SaitTalhaNisanci 87e3a5e24a
Use 2PC when using a node connection (#4997) 2021-05-21 14:58:53 +03:00
SaitTalhaNisanci 82f34a8d88
Enable citus.defer_drop_after_shard_move by default (#4961)
Enable citus.defer_drop_after_shard_move by default
2021-05-21 10:48:32 +03:00
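A hedged sketch of what the new default means for sessions that relied on immediate drops (not part of the commit itself):

SHOW citus.defer_drop_after_shard_move;        -- now 'on' by default
SET citus.defer_drop_after_shard_move TO off;  -- opt back into immediate drops for this session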
Nils Dijk d7dd247fb5
fix shared dependencies that are not resident in a database (#4992)
DESCRIPTION: fix shared dependencies that are not resident in a database

e.g. databases depend on users (their owners), and neither of these resides
in a particular database. These dependencies are recorded in pg_shdepend
with a `dbid` of `InvalidOid`. When we fetch our shared dependencies we don't
take these links into account.

With this patch we use logic inspired by `classIdGetDbId` to decide when to use `MyDatabaseId` vs `InvalidOid` to correctly resolve dependencies between shared objects.
2021-05-20 08:55:02 -07:00
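As a hedged illustration of the kind of rows involved (not code from the patch): dependencies between shared objects, such as a database depending on its owning role, are recorded in pg_shdepend with dbid = 0 (InvalidOid):

SELECT classid::regclass, objid, refclassid::regclass, refobjid, deptype
FROM pg_shdepend
WHERE dbid = 0;  -- shared-object dependencies that live in no particular database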
Jelte Fennema b25d3e83ef
Merge pull request #4963 from citusdata/fill-rebalance-monitor-shardsize 2021-05-20 16:51:08 +02:00
Jelte Fennema 10f06ad753 Fetch shard size on the fly for the rebalance monitor
Without this change the rebalancer progress monitor gets the shard sizes
from the `shardlength` column in `pg_dist_placement`. This column needs to
be updated manually by calling `citus_update_table_statistics`.
However, `citus_update_table_statistics` could lead to distributed
deadlocks while database traffic is on-going (see #4752).

To work around this we don't use the `shardlength` column anymore. Instead
for every rebalance we now fetch all shard sizes on the fly.

Three additional things this does are:
1. It adds tests for the rebalance progress function.
2. If a shard move cannot be done because a source or target node is
   unreachable, then we error out and stop the rebalance, instead of showing
   a warning and continuing. When using the by_disk_size rebalance
   strategy it's not safe to continue with other moves if a specific
   move failed. It's possible that the failed move made space for the
   next move, and because the failed move never happened this space now
   does not exist.
3. Adds two new columns to the result of `get_rebalancer_progress` which
   shows the size of the shard on the source and target node.

Fixes #4930
2021-05-20 16:38:17 +02:00
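A hedged sketch of how to observe this from a second session while a rebalance runs; the names of the two new size columns are assumed here, not taken from the commit:

-- in session 1:
SELECT rebalance_table_shards();
-- in session 2, while the above is running:
SELECT * FROM get_rebalance_progress();
-- the two new columns report the shard size as measured on the source and
-- target node (assumed names: source_shard_size / target_shard_size)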
Nils Dijk a6c2d2a4c4
Feature: alter database owner (#4986)
DESCRIPTION: Add support for ALTER DATABASE OWNER

This adds support for changing the database owner. It achieves this by marking the database as a distributed object. With the database marked as a distributed object, Citus will look for its dependencies and order the user creation commands (enterprise only) before the alter of the database owner. This is mostly important when adding new nodes.

By marking the database as a distributed object, Citus can easily determine which `ALTER DATABASE ... OWNER TO ...` commands to propagate, by resolving the object address of the database and verifying that it is a distributed object, and hence that ownership changes should be propagated to all workers.

Given that the ownership of the database might have implications for subsequent commands in a transaction, we force sequential mode for transactions that contain an `ALTER DATABASE ... OWNER TO ...` command. The transaction fails with a meaningful error when it has already executed parallel statements.

By default the feature is turned off, since roles are not automatically propagated and having it turned on would cause hard-to-understand errors for the user. It can be turned on by setting the `citus.enable_alter_database_owner` GUC.
2021-05-20 13:27:44 +02:00
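A minimal usage sketch, assuming a database `app_db` and a role `app_admin` that already exist on all nodes (both names are hypothetical):

SET citus.enable_alter_database_owner TO on;
-- propagated to the workers because app_db is marked as a distributed object
ALTER DATABASE app_db OWNER TO app_admin;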
Önder Kalacı 1ce607dd23
Merge pull request #4990 from citusdata/prevent_moving_to_non_existing_node
Make sure that target node in shard moves is eligible for shard move
2021-05-20 10:57:56 +02:00
Onder Kalaci d07db99ea4 Make sure that target node in shard moves is eligible for shard move 2021-05-20 10:51:01 +02:00
Önder Kalacı 4d8e3969ac
Merge pull request #4923 from citusdata/wait_until_connections_ready
Wait until all connections are successfully established
2021-05-19 16:04:36 +02:00
Onder Kalaci 926069a859 Wait until all connections are successfully established
Comment from the code:
/*
 * Iterate until all the tasks are finished. Once all the tasks
 * are finished, ensure that all the connection initializations
 * are also finished. Otherwise, those connections are terminated
 * abruptly before they are established (or have failed). Instead, we let
 * ConnectionStateMachine() properly handle them.
 *
 * Note that we could have the connections that are not established
 * as a side effect of slow-start algorithm. At the time the algorithm
 * decides to establish new connections, the execution might have tasks
 * to finish. But, the execution might finish before the new connections
 * are established.
 */

 Note that the abruptly terminated connections lead to the following errors:

2020-11-16 21:09:09.800 CET [16633] LOG:  could not accept SSL connection: Connection reset by peer
2020-11-16 21:09:09.872 CET [16657] LOG:  could not accept SSL connection: Undefined error: 0
2020-11-16 21:09:09.894 CET [16667] LOG:  could not accept SSL connection: Connection reset by peer

To easily reproduce the issue:

- Create a single node Citus
- Add the coordinator to the metadata
- Create a distributed table with shards on the coordinator
- f.sql:  select count(*) from test;
- pgbench -f /tmp/f.sql postgres -T 12 -c 40 -P 1  or pgbench -f /tmp/f.sql postgres -T 12 -c 40 -P 1 -C
2021-05-19 15:59:13 +02:00
Önder Kalacı 61977a3c09
Merge pull request #4895 from citusdata/conservative_connection_establishment
Executor takes connection establishment and task execution costs into account
2021-05-19 15:57:22 +02:00
Onder Kalaci 995adf1a19 Executor takes connection establishment and task execution costs into account
With this commit, the executor becomes smarter about refraining from
opening new connections. As a basic example, if connection establishment
takes 1000ms and task execution takes 5ms, the executor is smart enough
not to establish new connections.
2021-05-19 15:48:07 +02:00
Onder Kalaci 28b0b4ebd1 Move slow start increment to generic place 2021-05-19 14:31:20 +02:00
Marco Slot 54c9bf8342
Merge pull request #4849 from citusdata/marcocitus/fix-insert-mem 2021-05-19 14:19:02 +02:00
Jelte Fennema 924959fdb1
Include result type in upgrade diff test (#4987)
We often change result types of functions slightly. Our downgrade tests
wouldn't notice these changes. This change adds the result types to the
descriptions of these items, so such changes now show up in the diff.

An example of an SQL change that isn't caught without this change and is
caught with the get_rebalance_progress change in this PR:
https://github.com/citusdata/citus/pull/4963
2021-05-18 16:25:39 +02:00
Marco Slot 715dce1eea Reduce local insert memory usage during deparsing 2021-05-18 16:11:43 +02:00
Marco Slot 644b266dee Only cache local plans when reusing a distributed plan 2021-05-18 16:11:43 +02:00
Marco Slot 00792831ad Add execution memory contexts and free after local query execution 2021-05-18 16:11:43 +02:00
SaitTalhaNisanci ff2a125a5b
Lookup hostname before execution (#4976)
We look up the hostname just before execution, so that even if there are cached entries in the prepared statement cache we use the updated entries.
2021-05-18 16:46:31 +03:00
SaitTalhaNisanci eaa7d2bada
Not block maintenance daemon (#4972)
It was possible to block the maintenance daemon by taking a SHARE ROW
EXCLUSIVE lock on pg_dist_placement. Until the lock was released the
maintenance daemon would stay blocked.

The maintenance daemon should not be blocked in any case, hence we now
try to take the pg_dist_placement lock without waiting; if we cannot get
it, we don't try to drop the old placements.
2021-05-17 03:22:35 -07:00
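For illustration only, the blocking scenario described above looks roughly like this (hypothetical session):

BEGIN;
LOCK TABLE pg_dist_placement IN SHARE ROW EXCLUSIVE MODE;
-- before this fix the maintenance daemon would block here while trying to
-- drop old shard placements; now it gives up without waiting and simply
-- skips the deferred drop for that round
COMMIT;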
Hanefi Onaldi b649dffabd
Improve CI checks for enterprise merges on master (#4981) 2021-05-12 19:15:15 +03:00
Nils Dijk c91f8d8a15
Feature: localhost guc (#4836)
DESCRIPTION: introduce `citus.local_hostname` GUC for connections to the current node

Citus once in a while needs to connect to itself for some system operations. This used to be hardcoded to `localhost`. The hardcoded hostname causes some issues, for example in environments where `sslmode=verify-full` is required. It is not always desirable or even feasible to get `localhost` as an alt name on the certificate.

By introducing a GUC to use when connecting to the current instance, the user has more control over which network path is used and which hostname needs to be present in the server certificate.
2021-05-12 16:59:44 +02:00
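A hedged sketch of the intended use; the hostname below is a placeholder and must match an alt name on the node's server certificate:

ALTER SYSTEM SET citus.local_hostname TO 'node-1.db.internal.example.com';
SELECT pg_reload_conf();  -- assuming the GUC can be applied without a restart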
Hanefi Onaldi acdde9fa2d
Merge pull request #4811 from citusdata/fix-gitignore 2021-05-12 11:50:38 +03:00
Hanefi Onaldi 6b2c9d3567
Remove ignored files from git tree 2021-05-12 09:49:07 +03:00
Hanefi Onaldi 13808b60cf
Update gitignore files 2021-05-12 09:49:07 +03:00
Hanefi Onaldi c96b439a48
Introduce scripts to sync gitignore rules for .source files 2021-05-12 09:49:06 +03:00
Jelte Fennema cbbd10b974
Implement an improvement threshold in the rebalancer (#4927)
Every move in the rebalancer algorithm results in an improvement in the
balance. However, even if the improvement in the balance was very small
the move was still chosen. This is especially problematic if the shard
itself is very big and the move will take a long time.

This changes the rebalancer algorithm to take the relative size of the
balance improvement into account when choosing moves. By default a move
will not be chosen if it improves the balance by less than half of the
size of the shard. An extra argument is added to the rebalancer
functions so that the user can decide to lower the default threshold if
the ignored move is wanted anyway.
2021-05-11 14:24:59 +02:00
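A hedged example of using the new argument; the parameter name improvement_threshold is an assumption about the signature added here, so check the function definition before relying on it:

-- default: skip moves that improve the balance by less than half the shard size
SELECT rebalance_table_shards();
-- accept every improving move again, however small
SELECT rebalance_table_shards(improvement_threshold := 0);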
Önder Kalacı fa61eda7b9
Merge pull request #4974 from citusdata/minor_b_fix
Remove wrong PG_USED_FOR_ASSERTS_ONLY
2021-05-11 14:13:52 +02:00
Onder Kalaci cc4870a635 Remove wrong PG_USED_FOR_ASSERTS_ONLY 2021-05-11 12:58:37 +02:00