DESCRIPTION: Do not automatically create citus_columnar when there are no
relations using it.
Previously, we were always creating citus_columnar when creating citus
with version >= 11.1, and we were doing that as follows:
* Detach SQL objects owned by old columnar, i.e., "drop" them from
citus, but not actually drop them from the database
* "old columnar" is the one that we had before Citus 11.1 as part of
citus, i.e., before splitting the access method ands its catalog to
citus_columnar.
* Create citus_columnar and attach the SQL objects left over from old
columnar to it so that we can continue supporting the columnar tables
that the user had before Citus 11.1 with citus_columnar.
The first part is unchanged; however, we no longer create citus_columnar
automatically if the user didn't have any relations using columnar. For
this reason, as of Citus 13.2, when these SQL objects are not owned by an
extension and there are no relations using the columnar access method, we
drop these SQL objects while updating Citus to 13.2.
The net effect is the same as if we had automatically created
citus_columnar and the user had dropped it later, so dropping them should
not cause any issues.
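As a rough sketch of the "no relations using columnar" condition above (an
assumption about how the check can be expressed, not the actual upgrade
script), it boils down to a catalog query like:
```
-- Hedged sketch: is any relation still using the columnar access method?
SELECT count(*) > 0 AS columnar_in_use
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
WHERE am.amname = 'columnar';
```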
(**Update:** It seems we've made some assumptions in citus, e.g.,
citus_finish_pg_upgrade() still assumes columnar metadata exists and
tries to apply some fixes for it, so this PR fixes those as well. See the
last section of this PR description.)
Also, ideally I was hoping to just remove some lines of code from
extension.c, where we decide whether to automatically create
citus_columnar when creating citus; however, that turned out not to be
possible, for a few reasons:
* We still need to automatically create it for the servers using the
columnar access method.
* We need to clean up the leftover SQL objects from old columnar when
the above is not the case; otherwise we would keep leftover SQL objects
from old columnar for no reason, and that would confuse users too.
* Old columnar cannot be used to create columnar tables properly, so we
should clean them up and let the user decide whether they want to create
citus_columnar when they really need it later (see the snippet after this
list).
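For context, here is a minimal sketch of that user-driven path once the
leftovers are gone, with a hypothetical table name:
```
-- The user opts into columnar only when they actually need it.
CREATE EXTENSION citus_columnar;

-- After that, columnar tables can be created as usual.
CREATE TABLE events_archive (id bigint, payload text) USING columnar;
```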
---
Also made several changes in the test suite because, similarly, we no
longer always want citus_columnar created in citus tests:
* Now, columnar specific test targets, which cover **41** test SQL
files, always install columnar by default by using
"--load-extension=citus_columnar".
* "--load-extension=citus_columnar" is not added to citus specific test
targets because by default we don't want to have citus_columnar created
during citus tests.
* Excluding citus_columnar specific tests, we have **601** SQL files as
citus tests, and in **27** of them we manually create citus_columnar at
the very beginning of the test because those tests exercise some citus
functionality together with columnar tables.
Also, the before and after schedules for PG upgrade tests are now
duplicated so we have two versions of each: one with columnar tests and
one without. To choose between them, check-pg-upgrade now supports a
"test-with-columnar" option, which can be set to "true", or to anything
else to logically indicate "false". In CI, we run the check-pg-upgrade
test target with both options. The purpose is to ensure we can also test
PG upgrades where citus_columnar is not created in the cluster before the
upgrade.
Finally, added more tests to multi_extension.sql to test Citus upgrade
scenarios with / without columnar tables / citus_columnar extension.
---
Also, it seems citus_finish_pg_upgrade was assuming that citus_columnar
is always created, but we should never have made such an assumption. To
fix that, moved the columnar specific post-PG-upgrade work from citus to
a new columnar UDF, columnar_finish_pg_upgrade.
However, to avoid breaking existing customer / managed service scripts,
we continue to automatically perform the post-PG-upgrade work for
columnar within citus_finish_pg_upgrade, but this time only if the
columnar access method exists.
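A hedged sketch of that conditional call (not the exact function body; the
no-argument signature of columnar_finish_pg_upgrade is an assumption here):
```
DO $$
BEGIN
  -- Only perform the columnar post-PG-upgrade work when the columnar
  -- access method actually exists in this database.
  IF EXISTS (SELECT 1 FROM pg_am WHERE amname = 'columnar') THEN
    PERFORM columnar_finish_pg_upgrade();
  END IF;
END;
$$;
```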
Changes test files in the multi and multi-1 schedules such that they
accommodate having the coordinator in the metadata.
Changes fall into the following buckets:
1. When the coordinator is in the metadata, reference table shards are
present on the coordinator too.
This changes the test outputs that check table sizes, shard counts, etc.
for reference tables.
2. When the coordinator is in the metadata, postgres tables are converted
to citus local tables whenever a foreign key relationship to them is
created. This changes some test cases which test that it should not be
possible to create foreign keys to postgres tables (see the sketch after
this list).
3. Remove lines that add/remove coordinator for testing purposes.
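A hedged illustration of bucket 2, with hypothetical table names (not taken
from the test files):
```
CREATE TABLE plain_pg_table (id int PRIMARY KEY);
CREATE TABLE ref_table (id int PRIMARY KEY);
SELECT create_reference_table('ref_table');

-- With the coordinator in the metadata, this foreign key to a plain
-- postgres table converts plain_pg_table into a citus local table
-- instead of erroring out.
ALTER TABLE ref_table
  ADD CONSTRAINT fk_to_local FOREIGN KEY (id) REFERENCES plain_pg_table (id);
```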
We mark objects as distributed objects in Citus metadata only if we need
to propagate the command that creates them to the worker nodes. For this
reason, we were not doing this for objects that are created while
pg_dist_node is empty.
One implication of doing so is that we defer the schema propagation to
the time when the user creates the first distributed table in the schema.
However, this doesn't help for schema-based sharding (#6866) because we
want to sync pg_dist_tenant_schema to the worker nodes even for empty
schemas.
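A hedged illustration of the deferred propagation described above, with
hypothetical names:
```
-- pg_dist_node is empty here, so nothing is marked as distributed yet.
CREATE SCHEMA app;

-- Workers are added later.
SELECT citus_add_node('worker-1', 5432);

-- The schema is only propagated when the first distributed table in it
-- is created.
CREATE TABLE app.events (tenant_id bigint, payload text);
SELECT create_distributed_table('app.events', 'tenant_id');
```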
* Support test dependencies for isolation tests without a schedule
* Comment out a test due to a known issue (#6901)
* Also, reduce the verbosity for some log messages and make some
tests compatible with run_test.py.
* Bug fix for bug #5876: memset MetadataCacheSystem every time there is an abort
* Created an ObjectAccessHook that saves the transaction level at which citus was created and clears the metadata cache if that transaction level is rolled back. Added additional tests to make sure the metadata cache is cleared
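A hedged sketch of the kind of scenario the hook targets (an assumed repro,
not a test taken from this change):
```
BEGIN;
SAVEPOINT before_citus;
CREATE EXTENSION citus;

-- Rolling back to the savepoint undoes the extension creation, so the
-- metadata cache built at that transaction level must be cleared too.
ROLLBACK TO SAVEPOINT before_citus;
COMMIT;
```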
* Replace master_add_node with citus_add_node
* Replace master_activate_node with citus_activate_node
* Replace master_add_inactive_node with citus_add_inactive_node
* Use master UDFs in old scripts
* Replace master_add_secondary_node with citus_add_secondary_node
* Replace master_disable_node with citus_disable_node
* Replace master_drain_node with citus_drain_node
* Replace master_remove_node with citus_remove_node
* Replace master_set_node_property with citus_set_node_property
* Replace master_unmark_object_distributed with citus_unmark_object_distributed
* Replace master_update_node with citus_update_node
* Replace master_update_shard_statistics with citus_update_shard_statistics
* Replace master_update_table_statistics with citus_update_table_statistics
* Rename master_conninfo_cache_invalidate to citus_conninfo_cache_invalidate
* Rename master_dist_local_group_cache_invalidate to citus_dist_local_group_cache_invalidate
* Replace master_copy_shard_placement with citus_copy_shard_placement
* Replace master_move_shard_placement with citus_move_shard_placement
* Rename master_dist_node_cache_invalidate to citus_dist_node_cache_invalidate
* Rename master_dist_object_cache_invalidate to citus_dist_object_cache_invalidate
* Rename master_dist_partition_cache_invalidate to citus_dist_partition_cache_invalidate
* Rename master_dist_placement_cache_invalidate to citus_dist_placement_cache_invalidate
* Rename master_dist_shard_cache_invalidate to citus_dist_shard_cache_invalidate
* Drop master_modify_multiple_shards
* Rename master_drop_all_shards to citus_drop_all_shards
* Drop master_create_distributed_table
* Drop master_create_worker_shards
* Revert old function definitions
* Add missing revoke statement for citus_disable_node
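For example, the renamed node-management entry points above are used as
follows (hypothetical host name):
```
-- citus_add_node / citus_activate_node replace the master_* variants.
SELECT citus_add_node('worker-1', 5432);
SELECT citus_activate_node('worker-1', 5432);
```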
- master_add_node enforces that there is only one primary per group
- there's also a trigger on pg_dist_node to prevent multiple primaries
per group
- functions in metadata cache only return primary nodes
- Rename ActiveWorkerNodeList -> ActivePrimaryNodeList
- Rename WorkerGetLive{Node->Group}Count()
- Refactor WorkerGetRandomCandidateNode
- master_remove_node only complains about active shard placements if the
node being removed is a primary.
- master_remove_node only deletes all reference table placements in the
group if the node being removed is the primary.
- Rename {Node->NodeGroup}HasShardPlacements; this reflects the behavior it
already had.
- Rename DeleteAllReferenceTablePlacementsFrom{Node->NodeGroup}. This also
reflects the behavior it already had, but the new signature forces the
caller to pass in a groupId
- Rename {WorkerGetLiveGroup->ActivePrimaryNode}Count
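A hedged sketch of the one-primary-per-group invariant above, expressed as
a catalog check:
```
-- Groups violating the invariant; the trigger on pg_dist_node should
-- make this always return zero rows.
SELECT groupid, count(*) AS primary_count
FROM pg_dist_node
WHERE noderole = 'primary'
GROUP BY groupid
HAVING count(*) > 1;
```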
With this change we add an option to add a node without replicating all reference
tables to that node. If a node is added with this option, we mark the node as
inactive and no queries will be sent to that node.
We also added two new UDFs:
- master_activate_node(host, port):
  - marks node as active and replicates all reference tables to that node
- master_add_inactive_node(host, port):
  - only adds node to pg_dist_node
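For example (hypothetical host name):
```
-- Add the node without replicating reference tables; it stays inactive.
SELECT master_add_inactive_node('worker-1', 5432);

-- Activate it later; reference tables are replicated at this point.
SELECT master_activate_node('worker-1', 5432);
```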
Related to #786
This change adds the `pg_dist_node` table that contains the information
about the workers in the cluster, replacing the previously used
`pg_worker_list.conf` file (or the one specified with `citus.worker_list_file`).
Upon update, the `pg_worker_list.conf` file is read and the `pg_dist_node`
table is populated with the file's content. After that, the
`pg_worker_list.conf` file is renamed to `pg_worker_list.conf.obsolete`.
For adding and removing nodes, the change also includes two new UDFs:
`master_add_node` and `master_remove_node`, which require superuser
permissions.
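For example (hypothetical host name; both calls require superuser):
```
SELECT master_add_node('worker-1', 5432);
SELECT * FROM pg_dist_node;   -- the worker now appears in the metadata
SELECT master_remove_node('worker-1', 5432);
```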
The `citus.worker_list_file` GUC is kept for update purposes but is not used
after the update is finished.
Fixes #271
This change sets ShardIds and JobIds for each test case. Before this change,
when a new test that somehow increments job or shard IDs was added, the
tests after the new test had to be updated.
ShardID and JobID sequences are set at the beginning of each file with the
following commands:
```
ALTER SEQUENCE pg_catalog.pg_dist_shardid_seq RESTART 290000;
ALTER SEQUENCE pg_catalog.pg_dist_jobid_seq RESTART 290000;
```
ShardIds and JobIds are multiples of 10000. Exceptions are:
- multi_large_shardid: shardid and jobid sequences are set to much larger values
- multi_fdw_large_shardid: same as above
- multi_join_pruning: Causes a race condition with multi_hash_pruning since
they are run in parallel.
When we notice that pg_dist_partition is being invalidated we assume
that the citus extension is being dropped and drop state such as
extensionLoaded and the cached oids of all the metadata tables.
This frees the user from needing to reconnect after running DROP
EXTENSION, so we also no longer send a warning message.
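The user-facing effect, as a hedged sketch: the extension can be dropped and
recreated in the same session without reconnecting.
```
DROP EXTENSION citus;
-- No reconnect needed: the cached metadata state was reset when
-- pg_dist_partition was invalidated by the drop.
CREATE EXTENSION citus;
```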