Commit Graph

5795 Commits (0eee7fd9b8cc86d704687babbe6533c0884a044d)

Author SHA1 Message Date
Onur Tirtir 85da4fc2e0
Merge branch 'master' into col/pg-upgrade-dependency 2021-11-26 09:34:43 +03:00
Onur Tirtir 81af605e07
Fix typo: "no sharding pruning constraints" -> "no shard pruning constraints" (#5490) 2021-11-25 21:00:44 +01:00
Onur Tirtir 73f06323d8 Introduce dependencies from columnarAM to columnar metadata objects
During pg upgrades, we have seen that it is not guaranteed that a
columnar table will be created after the metadata objects are created.
Prior to the changes in this commit, the dependency relationships in
`pg_depend` looked like this:

```
columnar_table ----> columnarAM ----> citus extension
                                           ^  ^
                                           |  |
columnar.storage_id_seq --------------------  |
                                              |
columnar.stripe -------------------------------
```

Since `pg_upgrade` only knows to follow a topological sort of the objects
when creating the database dump, the above dependency graph doesn't
guarantee that `columnar_table` is created after metadata objects such as
`columnar.storage_id_seq` and `columnar.stripe`.

For this reason, with this commit we add new records to `pg_depend` to
make columnarAM depend on all relation objects living in the `columnar`
schema. That way, `pg_upgrade` knows it needs to create those objects before
creating `columnarAM`, and hence before creating any tables that use
`columnarAM`.

Note that in addition to inserting those records via the installation script,
we also do the same in `citus_finish_pg_upgrade()`. This is because
`pg_upgrade` rebuilds the catalog tables in the new cluster, which means
we must insert them in the new cluster too.
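
A minimal SQL sketch of the idea (illustrative only, not the exact statement
shipped in the installation script or in `citus_finish_pg_upgrade()`): record
a normal `pg_depend` entry that makes the columnar access method depend on
every relation living in the `columnar` schema.

```
-- Make the columnar access method (the dependent object) depend on each
-- relation in the columnar schema (the referenced objects), so pg_upgrade
-- creates the metadata objects before anything that uses columnarAM.
INSERT INTO pg_catalog.pg_depend
            (classid, objid, objsubid, refclassid, refobjid, refobjsubid, deptype)
SELECT 'pg_am'::regclass::oid,
       (SELECT oid FROM pg_am WHERE amname = 'columnar'),
       0,
       'pg_class'::regclass::oid,
       c.oid,
       0,
       'n'                       -- DEPENDENCY_NORMAL
FROM pg_class c
WHERE c.relnamespace = 'columnar'::regnamespace;
```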
2021-11-23 13:14:00 +03:00
Onur Tirtir ef2ca03f24 Reproduce bug via test suite 2021-11-23 13:14:00 +03:00
Onur Tirtir 4a97664fd7 Store tmp_upgrade/newData/*.log as an artifact 2021-11-22 18:19:45 +03:00
Burak Velioglu 8d7c497d68
Merge pull request #5480 from citusdata/velioglu/make_object_lock_explicit
Make object locking explicit while adding dependencies
2021-11-22 15:56:09 +03:00
Burak Velioglu 6590f12de4
Merge branch 'master' into velioglu/make_object_lock_explicit 2021-11-22 13:55:36 +03:00
Burak Velioglu 12e05ad196
Sorted addresses before getting lock 2021-11-22 11:43:32 +03:00
Marco Slot 7694569976
Merge pull request #5481 from citusdata/marcocitus/remove-shard-range-update 2021-11-19 11:00:14 +01:00
Marco Slot f49d26fbeb Remove citus_update_table_statistics isolation test 2021-11-19 10:51:15 +01:00
Marco Slot 56eae48daf Stop updating shard range in citus_update_shard_statistics 2021-11-19 10:51:15 +01:00
Burak Velioglu 3a68263cc7
Change lock type 2021-11-19 12:03:17 +03:00
Burak Velioglu baeaca7bc5
Update comment 2021-11-19 10:51:56 +03:00
Hanefi Onaldi 6ff65db7ee
Merge pull request #5372 from citusdata/fix-broken-drop-schema 2021-11-18 23:58:53 +03:00
Hanefi Onaldi c0d43d4905
Prevent cache usage on citus_drop_trigger codepaths 2021-11-18 20:24:51 +03:00
Burak Velioglu 77dd12c09d
Merge branch 'master' into velioglu/make_object_lock_explicit 2021-11-18 20:18:07 +03:00
Hanefi Onaldi e6160ad131
Document failing tests for issue 5099 2021-11-18 20:01:34 +03:00
Hanefi Onaldi a3cc9b4e53
Remove case block that is identical to its neighbor (#5472) 2021-11-18 19:41:39 +03:00
Burak Velioglu b484d9b234
Make object locking explicit while adding dependencies 2021-11-18 19:34:00 +03:00
Marco Slot 77d948a595
Merge pull request #5465 from citusdata/marcocitus/remove-cstore_fdw 2021-11-16 17:43:29 +01:00
Marco Slot 9e6ca23286 Remove cstore_fdw-related logic 2021-11-16 13:59:03 +01:00
Önder Kalacı 8c0bc94b51
Enable replication factor > 1 in metadata syncing (#5392)
- [x] Add some more regression test coverage
- [x] Make sure returning works fine in case of
     local execution + remote execution
     (task->partiallyLocalOrRemote works as expected, already added tests)
- [x] Implement locking properly (and add isolation tests)
     - [x] We do #shardcount round-trips on `SerializeNonCommutativeWrites`.
           We made it a single round-trip.
- [x] Acquire locks for subselects on the workers & add isolation tests
- [x] Add a GUC to prevent modifications from the workers, hence increasing the
      coordinator-only throughput
       - The performance slightly drops (~15%), unless
         `citus.allow_modifications_from_workers_to_replicated_tables`
         is set to false (see the sketch below)
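
A hedged sketch of turning the GUC off (the GUC name comes from this PR; the
session-level `SET` scope is illustrative):

```
-- Disallow modifications to replicated tables from worker nodes, avoiding
-- the extra locking round-trips described above; set per session here, or
-- cluster-wide via ALTER SYSTEM / postgresql.conf.
SET citus.allow_modifications_from_workers_to_replicated_tables TO off;
```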
2021-11-15 15:10:18 +03:00
Hanefi Onaldi bbcf287f7e
Merge pull request #5462 from citusdata/changelog-10.0.6
Add changelog entries for 10.0.6
2021-11-12 13:11:03 +03:00
Hanefi Onaldi 45549d20a6
Add changelog entries for 10.0.6 2021-11-12 12:38:14 +03:00
Onur Tirtir 25024b776e
Skip deleting options if columnar.options is already dropped (#5458)
DROP EXTENSION might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the error below when opening
columnar.options to delete the records for the columnar table that we are
about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg, which is why I
added the test to after_pg_upgrade_schedule.
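
A hypothetical reproduction sketch (not the regression test added to
after_pg_upgrade_schedule; the table name is made up):

```
-- Dropping the extension cascades to both columnar.options and the columnar
-- table; before this fix the cascade could reach columnar.options first and
-- fail with: ERROR:  could not open relation with OID 0
CREATE TABLE columnar_repro (id int) USING columnar;
DROP EXTENSION citus CASCADE;
```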
2021-11-12 12:30:09 +03:00
Ahmet Gedemenli 1aa32d5dbc
Merge pull request #5440 from citusdata/default-add-to-metadata-experiment
Introduce GUC citus.use_citus_managed_tables
2021-11-11 14:39:03 +03:00
Ahmet Gedemenli 14a33d4e8e Introduce GUC citus.use_citus_managed_tables 2021-11-11 14:09:06 +03:00
Hanefi Onaldi 3d9cec70fd
Update migration paths from 10.2 to 11.0 (#5459)
We recently introduced a set of patches to 10.2, and introduced the 10.2-4
migration version. This migration version only resides on the `release-10.2`
branch, and is missing on our default branch. This creates a problem
because we do not have a valid migration path from 10.2 to the latest 11.0.

To remedy this issue, I copied the relevant migration files from the
`release-10.2` branch, and renamed some of our migration files on the
default branch to make sure we have a linear upgrade path.
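
With a linear path in place, a plain extension update should walk the whole
migration chain; a hedged illustration (the explicit target version string is
an assumption, not taken from this commit):

```
-- Follow the (now linear) migration path up to the current default version.
ALTER EXTENSION citus UPDATE;
-- Or pin an explicit target version (assumed version string):
-- ALTER EXTENSION citus UPDATE TO '11.0-1';
```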
2021-11-11 13:55:28 +03:00
Önder Kalacı 6f5a343ff4
Make sure that enterprise tests pass (#5451) 2021-11-08 18:11:19 +03:00
Önder Kalacı 98ca6ba6ca
Allow lock_shard_resources to be called by the users with privileges (#5441)
Before this commit, we required the user to be the owner of the shard/table
in order to call lock_shard_resources.

However, that is too restrictive. We can have users who have been granted
privileges on the table but are not owners of the table/shards.

With this commit, we allow such patterns.
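
A hedged usage sketch (role name, table name, privileges, and shard ID are
made up; assuming a `lock_shard_resources(lock_mode int, shard_ids bigint[])`
signature for the UDF referenced by this PR):

```
-- A role with DML privileges, but not ownership, can now take shard locks.
GRANT SELECT, INSERT, UPDATE, DELETE ON dist_table TO app_user;
SET ROLE app_user;
SELECT lock_shard_resources(3, ARRAY[102008]::bigint[]);  -- 3 = RowExclusiveLock
RESET ROLE;
```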
2021-11-08 15:36:51 +01:00
Hanefi Onaldi db613b2f5c
Merge pull request #5448 from citusdata/changelog-9.5.10 2021-11-08 16:50:12 +03:00
Hanefi Onaldi 7b63edfc83
Add changelog entries for 9.5.10 2021-11-08 16:41:47 +03:00
Önder Kalacı 3bce4d76d3
Merge pull request #5405 from citusdata/simplify_executor_locks
Simplify/Unify executor locks
2021-11-08 13:58:11 +01:00
Onder Kalaci d5e89b1132 Unify distributed execution logic for single replicated tables
Citus does not acquire any executor locks when the shard replication factor is 1.
With this commit, we unify this decision and exit early.
2021-11-08 13:52:20 +01:00
Hanefi Onaldi 20f3248b6e
Merge pull request #5445 from citusdata/changelog-9.5.9 2021-11-08 14:09:53 +03:00
Hanefi Onaldi 3d49cbf9ab
Add changelog entries for 9.5.9 2021-11-08 13:19:10 +03:00
Önder Kalacı 65911ce162
Merge pull request #5397 from citusdata/naisila/fix-partitioned-index
Run fix_partition_shard_index_names after each wrong naming command
2021-11-08 11:09:08 +01:00
Önder Kalacı d5b371b2e0
Merge branch 'master' into naisila/fix-partitioned-index 2021-11-08 10:53:16 +01:00
Marco Slot 7f162ba834
Merge pull request #5444 from citusdata/marcocitus/remove-master_append_table_to_shard 2021-11-08 10:49:17 +01:00
naisila 385ba94d15 Run fix_partition_shard_index_names after each wrong naming command 2021-11-08 10:43:34 +01:00
Marco Slot 78866df13c Remove master_append_table_to_shard UDF 2021-11-08 10:43:24 +01:00
Marco Slot ee0cd75648
Merge pull request #5399 from citusdata/marcocitus/remove-append-copy 2021-11-07 21:09:26 +01:00
Marco Slot fba93df4b0 Remove copy into new append shard logic 2021-11-07 21:01:40 +01:00
Marco Slot 27ba19f7e1 Fix a flaky test in drop_column_partitioned_table 2021-11-07 18:25:44 +01:00
Nils Dijk 3fcb456381
Refactor/partitioned result destreceiver (#5432)
This change introduces a slightly higher abstraction in `PartitionedResultDestReceiver` by decoupling the partitioning from writing the results to a file. This makes it easier to reuse the receiver for other `DestReceiver`s that want to route different tuples to different `DestReceiver`s.

Originally, `PartitionedResultDestReceiver` kept a lot of state so it could lazily create `FileDestReceiver`s when the first tuple arrived for a target. This entangled the tuple-processing logic with the decision of where tuples should go.

This refactor makes `PartitionedResultDestReceiver` completely agnostic of what kind of receivers it is writing to. When constructing it, you pass a list of `DestReceiver`-compatible pointers of length `partitionCount`. Internally, `PartitionedResultDestReceiver` keeps track of which `DestReceiver`s have been started, and starts them when they receive their first tuple.

Alternatively, the instantiating code can switch the startup from lazy to eager. When the startup is eager (not lazy), all `rStartup` functions on the list of `DestReceiver`s are called during the startup of the `PartitionedResultDestReceiver` and marked as started.

A downside of this approach: with highly partitioned destinations we now always need to allocate a `FileDestReceiver` for every target. When the data passed into the `PartitionedResultDestReceiver` is heavily skewed toward a small set of `FileDestReceiver`s, this wastes some memory. Given the small size of a `FileDestReceiver`, and the fact that actual file handles are only created during the `FileDestReceiver`'s startup, I think this memory waste is not a problem. If it ever becomes one, we could refactor the source list into some kind of generator object that creates the `DestReceiver`s on the fly.
2021-11-05 13:31:18 +01:00
Nils Dijk 0e7cf9f0ca
reinstate optimization that got unintentionally broken in 366461ccdb (#5418)
DESCRIPTION: Reinstate optimisation for uniform shard interval ranges

During a refactor introduced in #4132, the following change was made, which made the optimisation in `CalculateUniformHashRangeIndex` unreachable:
366461ccdb (diff-565a339ed3c78bc5a0d4ffeb4e91032150b1dffbeeff59cd3e65981d20b998c7L319-R319)

This PR reinstates the path to the optimisation!
2021-11-05 13:07:51 +01:00
Önder Kalacı 763176a4d9
Some minor improvements on top of 5314 (#5428)
* Refactor some checks in citus local tables

* all existing citus local tables are auto converted after upgrade

* Update warning messages in CreateCitusLocalTable

* Hide notice msg for auto converting local tables

* Hide hint msg

Co-authored-by: Ahmet Gedemenli <afgedemenli@gmail.com>
2021-11-05 13:59:13 +03:00
Sait Talha Nisanci ab29c25658 Fix missing from entry 2021-11-04 18:54:52 +03:00
Halil Ozan Akgül a23f1fb259
Merge pull request #5417 from citusdata/fix_isolation_schedule_with_mx
Turns mx on in isolation tests
2021-11-04 17:18:50 +03:00
Halil Ozan Akgul a8f3f712cc Turns mx on in isolation tests 2021-11-04 17:12:30 +03:00