When breaking colocation, we need to create a new colocation group
record in pg_dist_colocation for the relation. It is not sufficient to
only assign a new colocationid value in pg_dist_partition.
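For illustration, a minimal sketch of the metadata state this fix
guarantees; table names are hypothetical:

```sql
CREATE TABLE t1 (a int);
CREATE TABLE t2 (a int);
SELECT create_distributed_table('t1', 'a');
SELECT create_distributed_table('t2', 'a', colocate_with => 't1');

-- Break t2 out of the shared colocation group.
SELECT update_distributed_table_colocation('t2', colocate_with => 'none');

-- t2 must now have a new colocationid in pg_dist_partition *and* a
-- matching record in pg_dist_colocation.
SELECT colocationid FROM pg_dist_partition
WHERE logicalrelid = 't2'::regclass;
SELECT * FROM pg_dist_colocation
WHERE colocationid = (SELECT colocationid FROM pg_dist_partition
                      WHERE logicalrelid = 't2'::regclass);
```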
This patch also fixes a bug in deleting a colocation group when no
tables are left in it. Previously we passed a relation id to the
DeleteColocationGroupIfNoTablesBelong() function where we should have
passed a colocation id.
Fixes: #6928
1. Adds an `sql_row` function for queries that return a single row
   with multiple columns.
2. Includes a `notice_handler` for easier debugging.
3. Retries dropping replication slots when they are "in use"; this is
   often an ephemeral state and can cause flaky tests.
In PG16, the name in REINDEX DATABASE/SYSTEM is optional.
We already don't propagate these commands automatically, so we test
them here with run_command_on_workers.
Relevant PG commit:
https://github.com/postgres/postgres/commit/2cbc3c1
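A minimal sketch of how this is exercised: the name is omitted on the
coordinator, and the command is run on the workers explicitly since
Citus does not propagate it.

```sql
-- PG16 allows omitting the database name.
REINDEX DATABASE;
-- Run the same command on each worker explicitly.
SELECT run_command_on_workers($$REINDEX DATABASE$$);
```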
When we create a database, it already needs to be created manually on
the workers as well. The new icu_rules option should work like the
other options, and a test is added for that.
Relevant PG commit:
https://github.com/postgres/postgres/commit/30a53b7
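For illustration, a minimal sketch; the database name and the ICU rule
string are hypothetical, and the statement must be repeated on the
workers manually:

```sql
CREATE DATABASE icu_db
    WITH TEMPLATE = template0
         LOCALE_PROVIDER = 'icu'
         ICU_LOCALE = 'en'
         ICU_RULES = '&a < g';
-- The database is not propagated, so create it on the workers too.
SELECT run_command_on_workers($$
    CREATE DATABASE icu_db
        WITH TEMPLATE = template0
             LOCALE_PROVIDER = 'icu'
             ICU_LOCALE = 'en'
             ICU_RULES = '&a < g'
$$);
```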
DESCRIPTION: Introduces the citus_pause_node UDF, enabling pausing a
node by node_id.
citus_pause_node takes a node_id parameter, fetches all the shards on
that node, and takes an AccessExclusiveLock on each of those shards.
While this lock is held, inserts are blocked until the citus_pause_node
transaction is closed.
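A minimal usage sketch based on the description above; the node_id
value is hypothetical:

```sql
BEGIN;
-- Takes an AccessExclusiveLock on all shards placed on the node,
-- blocking inserts to those shards.
SELECT citus_pause_node(2);
-- ... perform work while writes to the node's shards are blocked ...
COMMIT;  -- closing the transaction releases the locks
```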
---------
Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
Replaces https://github.com/citusdata/citus/pull/7120.
Closes https://github.com/citusdata/citus/issues/4692.
#7120 added the same functionality by implementing a transactional
version of TransferShards() scoped to Citus local tables. It passed
all the regression tests but didn't feel like an intuitive approach.
This PR instead adds that functionality via the functions that we use
when creating a distributed table, namely CreateShardsOnWorkers() and
CopyLocalDataIntoShards().
We insert entries into pg_dist_placement for the new shard placement(s)
and then call CreateShardsOnWorkers() to create those placement(s) on
workers.
Then we use CopyFromLocalTableIntoDistTable() to copy the data from
the local shard placement to the new shard placement(s).
CopyFromLocalTableIntoDistTable() is a new function that re-uses the
underlying logic of CopyLocalDataIntoShards(), which allows copying
data from a local table into a distributed table. We tell
CopyLocalDataIntoShards() to read from the local shard placement and
to write the tuples into the shard placement(s) of the reference /
single-shard table. Before doing this, we temporarily delete the
metadata record for the local placement to avoid duplicating the data
in the local shard placement.
Finally, we drop the local shard placement if we were creating a
single-shard table; that effectively means moving the local shard
placement to the appropriate worker, as we've already created the new
shard placement on the worker.
While the main motivation behind adding this functionality is to
avoid the limitations hit when UndistributeTable() is called for a
Citus local table (during table conversion), this indeed optimizes how
we convert a Citus local table to a reference table / single-shard
table. The prior logic used more disk space because it duplicated the
data during UndistributeTable().
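For illustration, a sketch of the user-facing conversion this change
optimizes; the table name is hypothetical:

```sql
CREATE TABLE events (id bigint, payload text);
SELECT citus_add_local_table_to_metadata('events');

-- Previously this conversion went through UndistributeTable() and
-- temporarily duplicated the data; now the local shard placement is
-- copied (or moved, for single-shard tables) directly.
SELECT create_reference_table('events');
```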
DESCRIPTION: Allows creating reference / distributed-schema tables
from local tables that were added to metadata and that use identity
columns
- [x] Add tests.
- [x] Test django-tenants.
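A minimal sketch of what is now allowed; table and column names are
hypothetical:

```sql
CREATE TABLE items (
    id bigint GENERATED ALWAYS AS IDENTITY,
    name text
);
SELECT citus_add_local_table_to_metadata('items');

-- Converting a metadata-added local table that uses an identity
-- column into a reference table is now allowed.
SELECT create_reference_table('items');
```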
If we're in the middle of a table type conversion (such as from a
Citus local table to a reference table), the table might not have all
the placements that we expect from its table type. For this reason, we
should intersect the placements of the tables at hand when creating
inter-shard DDL tasks.
What we do to collect foreign key constraint commands in
WorkerCreateShardCommandList is quite similar to what we do in
CopyShardForeignConstraintCommandList. Moreover, the code previously
used in WorkerCreateShardCommandList was not able to properly handle
foreign key constraints between Citus local tables when creating a
reference table from the referencing one. With a few slight
modifications to CopyShardForeignConstraintCommandList, we can use
the same logic in WorkerCreateShardCommandList too.
DESCRIPTION: Adds grant/revoke propagation support for database
privileges
Following the implementation of support for granting and revoking
database privileges, certain tests that issued grants on worker nodes
started failing. Those tests are fixed in this PR as well.
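A minimal sketch of the statements that are now propagated; the role
and database names are hypothetical:

```sql
CREATE ROLE app_user;
-- With this change, database-level GRANT/REVOKE statements are
-- propagated to the worker nodes as well.
GRANT CONNECT, CREATE ON DATABASE mydb TO app_user;
REVOKE CREATE ON DATABASE mydb FROM app_user;
```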
DESCRIPTION: Removes ubuntu/bionic from packaging pipelines
Since the PG16 beta is not available for ubuntu/bionic and
ubuntu/bionic support is EOL, we need to remove this OS from the
pipelines:
https://ubuntu.com/blog/ubuntu-18-04-eol-for-devices
Additionally, this adds concurrency support for the GH Actions
packaging pipeline.