Commit Graph

5354 Commits (4c135de9e4ffbbdfcda5e28b0ede95f44eda5fc8)

Author SHA1 Message Date
Hanefi Onaldi 4c135de9e4 Introduce CI checks for hash comments in specs
We do not use comments starting with # in spec files because they cause
errors from the C preprocessor, which expects directives after this character.
Instead, use C-style comments, e.g.:
// single line comment

Multi-line comments work as well:
/*
 * multi-line comment
 */
2021-11-26 14:52:51 +03:00
Halil Ozan Akgül 27a161f443
Merge pull request #5496 from citusdata/multi-1-schedule-with-metadata-syncing
Fix tests in multi-1-schedule that fail with metadata syncing
2021-11-26 13:28:18 +03:00
Halil Ozan Akgul 87a1c760d9 Fix tests in multi-1-schedule that fail with metadata syncing 2021-11-26 12:09:53 +03:00
Önder Kalacı 0beb1aba62
Merge pull request #5470 from citusdata/redefine_active_placements
Active placements can only be on active nodes
2021-11-26 09:23:28 +01:00
Onder Kalaci 121f5c4271 Active placements can only be on active nodes
We re-define the meaning of active shard placement. It used
to be defined only via shardstate == SHARD_STATE_ACTIVE.

Now, we add one more check: the worker node that the
placement is on should be active as well.

This is a preparation for supporting citus_disable_node()
for MX with multiple failures at the same time.

With this change, the maintenance daemon only needs to
sync the "node metadata" (e.g., pg_dist_node), not the
shard metadata.
2021-11-26 09:14:33 +01:00
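A minimal sketch of the redefinition above, assuming the standard Citus metadata tables `pg_dist_placement` and `pg_dist_node`, and that `shardstate = 1` corresponds to SHARD_STATE_ACTIVE:

```sql
-- Placements that count as "active" under the new definition: the placement
-- itself must be in the active shard state AND the node hosting it must be
-- marked active in pg_dist_node.
SELECT p.shardid, p.placementid, n.nodename, n.nodeport
FROM pg_dist_placement p
JOIN pg_dist_node n USING (groupid)
WHERE p.shardstate = 1   -- SHARD_STATE_ACTIVE
  AND n.isactive;        -- new requirement: the node itself is active
```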
Önder Kalacı d6cbfd0886
Merge pull request #5467 from citusdata/remove_useless_locks
Do not acquire locks on reference tables when a node is removed/disabled
2021-11-26 09:13:55 +01:00
Onder Kalaci b4931f7345 Do not acquire locks on reference tables when a node is removed/disabled
Before this commit, we used to acquire the metadata locks on the reference
tables while removing/disabling a node on all the MX nodes.

Although it has some marginal benefits (a concurrent modification
during remove/disable node blocks instead of erroring out), the
drawbacks seem worse. Both citus_remove_node and citus_disable_node
are not tolerant to multiple node failures.

With this commit, we relax the locks. The implication is that while
a node is removed/disabled, users might see query errors. On the
other hand, this change makes removing/disabling nodes more
tolerant to multiple node failures.
2021-11-26 09:08:25 +01:00
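For context, a minimal usage sketch of the two UDFs mentioned above (node name and port are placeholders):

```sql
-- Disable a failed worker; with the relaxed locking, this no longer takes
-- metadata locks on reference tables on all MX nodes.
SELECT citus_disable_node('worker-2.example.com', 5432);

-- Remove the node permanently once it is no longer needed.
SELECT citus_remove_node('worker-2.example.com', 5432);
```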
Onur Tirtir 76b8006a9e
Allow overwriting columnar storage pages written by aborted xacts (#5484)
When refactoring the storage layer in #4907, we deleted the code that allows
overwriting a disk page previously written but not known by metadata.

Readers can see the change that introduced the code allowing this in
commit a8da9acc63.

The reasoning was that, as of 10.2, we started aligning page
reservations (`AlignReservation`) for subsequent writes right after
allocating pages from disk. That means that even if the writer transaction
fails, subsequent writes are guaranteed to allocate a new page and write
there. For this reason, attempting to write to a previously allocated page
is not possible for a columnar table that the user created on v10.2.x.

However, since older versions of columnar don't do that, the following
example scenario can still result in writing to such a disk page, even if
the user has now upgraded to v10.2.x. This is because, when upgrading storage
to 2.0 (`ColumnarStorageUpdateIfNeeded`), we calculate the `reservedOffset` of
the metapage based on the highest used address known by stripe
metadata (`GetHighestUsedAddressAndId`). However, stripe metadata
doesn't have entries for aborted writes. As a result, the highest used
address is computed by ignoring pages that are allocated but not
used.

- User attempts writing to a columnar table on Citus v10.0.x/v10.1.x.
- The write operation fails for some reason.
- User upgrades Citus to v10.2.x.
- When attempting to write to the same columnar table, they hit the "attempt
  to write columnar data .." error since the write operation done in the
  older version of columnar already allocated that page, and now we are
  overwriting it.

For this reason, with this commit, we re-do the change done in
a8da9acc63.

For the reasons given above, it wasn't possible to add a test for
this commit via the usual code paths, so we added a UDF only for
testing purposes so that we can reproduce the exact scenario in our
regression test suite.
2021-11-26 07:51:13 +01:00
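A hedged sketch of the failure scenario described above; the table definition and row counts are illustrative, not taken from the commit:

```sql
-- On Citus v10.0.x / v10.1.x: an aborted write leaves pages allocated on
-- disk with no corresponding stripe metadata entry.
CREATE TABLE events (id bigint, payload text) USING columnar;
BEGIN;
INSERT INTO events SELECT s, 'data' FROM generate_series(1, 100000) s;
ROLLBACK;

-- After upgrading to v10.2.x, the upgrade path computes reservedOffset from
-- stripe metadata, which ignores the aborted write, so the next write used
-- to fail with "attempt to write columnar data ...".
INSERT INTO events SELECT s, 'data' FROM generate_series(1, 100000) s;
```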
Onur Tirtir 80849f2444
Merge pull request #5456 from citusdata/col/pg-upgrade-dependency 2021-11-26 09:44:17 +03:00
Onur Tirtir 85da4fc2e0
Merge branch 'master' into col/pg-upgrade-dependency 2021-11-26 09:34:43 +03:00
Onur Tirtir 81af605e07
Fix typo: "no sharding pruning constraints" -> "no shard pruning constraints" (#5490) 2021-11-25 21:00:44 +01:00
Onur Tirtir 73f06323d8 Introduce dependencies from columnarAM to columnar metadata objects
During pg upgrades, we have seen that it is not guaranteed that a
columnar table will be created after the metadata objects are created.
Prior to the changes done in this commit, we had such a dependency
relationship in `pg_depend`:

```
columnar_table ----> columnarAM ----> citus extension
                                           ^  ^
                                           |  |
columnar.storage_id_seq --------------------  |
                                              |
columnar.stripe -------------------------------
```

Since `pg_upgrade` just follows a topological sort of the objects
when creating the database dump, the above dependency graph doesn't
guarantee that `columnar_table` is created after metadata objects such as
`columnar.storage_id_seq` and `columnar.stripe`.

For this reason, with this commit we add new records to `pg_depend` to
make columnarAM depend on all rel objects living in the `columnar`
schema. That way, `pg_upgrade` will know it needs to create those before
creating `columnarAM`, and similarly, before creating any tables using
`columnarAM`.

Note that in addition to inserting those records via the installation
script, we also do the same in `citus_finish_pg_upgrade()`. This is
because `pg_upgrade` rebuilds catalog tables in the new cluster, which
means we must insert them in the new cluster too.
2021-11-23 13:14:00 +03:00
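One hedged way to inspect such dependency records; the exact shape of the rows inserted by the commit is an assumption, this only lists pg_depend entries from the columnar access method to relations in the `columnar` schema:

```sql
-- Dependencies recorded from the columnar access method (pg_am) to
-- relations (pg_class) living in the columnar schema.
SELECT dep.refobjid::regclass AS columnar_metadata_object, dep.deptype
FROM pg_depend dep
JOIN pg_am am ON am.oid = dep.objid AND dep.classid = 'pg_am'::regclass
JOIN pg_class c ON c.oid = dep.refobjid AND dep.refclassid = 'pg_class'::regclass
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE am.amname = 'columnar'
  AND n.nspname = 'columnar';
```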
Onur Tirtir ef2ca03f24 Reproduce bug via test suite 2021-11-23 13:14:00 +03:00
Onur Tirtir 4a97664fd7 Store tmp_upgrade/newData/*.log as an artifact 2021-11-22 18:19:45 +03:00
Burak Velioglu 8d7c497d68
Merge pull request #5480 from citusdata/velioglu/make_object_lock_explicit
Make object locking explicit while adding dependencies
2021-11-22 15:56:09 +03:00
Burak Velioglu 6590f12de4
Merge branch 'master' into velioglu/make_object_lock_explicit 2021-11-22 13:55:36 +03:00
Burak Velioglu 12e05ad196
Sorted addresses before getting lock 2021-11-22 11:43:32 +03:00
Marco Slot 7694569976
Merge pull request #5481 from citusdata/marcocitus/remove-shard-range-update 2021-11-19 11:00:14 +01:00
Marco Slot f49d26fbeb Remove citus_update_table_statistics isolation test 2021-11-19 10:51:15 +01:00
Marco Slot 56eae48daf Stop updating shard range in citus_update_shard_statistics 2021-11-19 10:51:15 +01:00
Burak Velioglu 3a68263cc7
Change lock type 2021-11-19 12:03:17 +03:00
Burak Velioglu baeaca7bc5
Update comment 2021-11-19 10:51:56 +03:00
Hanefi Onaldi 6ff65db7ee
Merge pull request #5372 from citusdata/fix-broken-drop-schema 2021-11-18 23:58:53 +03:00
Hanefi Onaldi c0d43d4905
Prevent cache usage on citus_drop_trigger codepaths 2021-11-18 20:24:51 +03:00
Burak Velioglu 77dd12c09d
Merge branch 'master' into velioglu/make_object_lock_explicit 2021-11-18 20:18:07 +03:00
Hanefi Onaldi e6160ad131
Document failing tests for issue 5099 2021-11-18 20:01:34 +03:00
Hanefi Onaldi a3cc9b4e53
Remove case block that is identical to its neighbor (#5472) 2021-11-18 19:41:39 +03:00
Burak Velioglu b484d9b234
Make object locking explicit while adding dependencies 2021-11-18 19:34:00 +03:00
Marco Slot 77d948a595
Merge pull request #5465 from citusdata/marcocitus/remove-cstore_fdw 2021-11-16 17:43:29 +01:00
Marco Slot 9e6ca23286 Remove cstore_fdw-related logic 2021-11-16 13:59:03 +01:00
Önder Kalacı 8c0bc94b51
Enable replication factor > 1 in metadata syncing (#5392)
- [x] Add some more regression test coverage
- [x] Make sure returning works fine in case of
     local execution + remote execution
     (task->partiallyLocalOrRemote works as expected, already added tests)
- [x] Implement locking properly (and add isolation tests)
     - [x] We used to do #shardcount round-trips on `SerializeNonCommutativeWrites`.
           We made it a single round-trip.
- [x] Acquire locks for subselects on the workers & add isolation tests
- [x] Add a GUC to prevent modifications from the workers, hence increasing
      coordinator-only throughput
       - Performance drops slightly (~15%) unless
         `citus.allow_modifications_from_workers_to_replicated_tables`
         is set to false (see the sketch below)
2021-11-15 15:10:18 +03:00
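A minimal sketch of the GUC mentioned in the last checklist item above:

```sql
-- Default: modifications to replicated tables are allowed from worker nodes,
-- at the cost of some coordinator throughput (~15% per the notes above).
SHOW citus.allow_modifications_from_workers_to_replicated_tables;

-- Opt out to recover coordinator-only throughput; modifications to
-- replicated tables then have to go through the coordinator.
SET citus.allow_modifications_from_workers_to_replicated_tables TO false;
```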
Hanefi Onaldi bbcf287f7e
Merge pull request #5462 from citusdata/changelog-10.0.6
Add changelog entries for 10.0.6
2021-11-12 13:11:03 +03:00
Hanefi Onaldi 45549d20a6
Add changelog entries for 10.0.6 2021-11-12 12:38:14 +03:00
Onur Tirtir 25024b776e
Skip deleting options if columnar.options is already dropped (#5458)
Drop extension might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the below error when opening
columnar.options to delete records for the columnar table that we are
about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg, which is why
I added the test to after_pg_upgrade_schedule.
2021-11-12 12:30:09 +03:00
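A hedged sketch of the scenario described above (object names are illustrative):

```sql
-- A columnar table exists while the extension is being dropped.
CREATE TABLE t (a int) USING columnar;

-- DROP EXTENSION may cascade to columnar.options before it reaches the table
-- itself; deleting the table's rows from columnar.options then used to fail
-- with "could not open relation with OID 0".
DROP EXTENSION citus CASCADE;
```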
Ahmet Gedemenli 1aa32d5dbc
Merge pull request #5440 from citusdata/default-add-to-metadata-experiment
Introduce GUC citus.use_citus_managed_tables
2021-11-11 14:39:03 +03:00
Ahmet Gedemenli 14a33d4e8e Introduce GUC citus.use_citus_managed_tables 2021-11-11 14:09:06 +03:00
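A hedged usage sketch for the GUC introduced above, assuming it causes newly created tables to be added to Citus metadata automatically:

```sql
-- With the GUC enabled, plain CREATE TABLE statements yield tables that are
-- automatically added to Citus metadata as managed tables.
SET citus.use_citus_managed_tables TO on;
CREATE TABLE app_settings (key text PRIMARY KEY, value text);
```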
Hanefi Onaldi 3d9cec70fd
Update migration paths from 10.2 to 11.0 (#5459)
We recently introduced a set of patches to 10.2, and introduced the 10.2-4
migration version. This migration version resides only on the `release-10.2`
branch and is missing from our default branch. This creates a problem
because we do not have a valid migration path from 10.2 to the latest 11.0.

To remedy this issue, I copied the relevant migration files from the
`release-10.2` branch, and renamed some of our migration files on the
default branch to make sure we have a linear upgrade path.
2021-11-11 13:55:28 +03:00
Önder Kalacı 6f5a343ff4
Make sure that enterprise tests pass (#5451) 2021-11-08 18:11:19 +03:00
Önder Kalacı 98ca6ba6ca
Allow lock_shard_resources to be called by the users with privileges (#5441)
Before this commit, we required the user to be the owner of the shard/table
in order to call lock_shard_resources.

However, that is too restrictive. We can have users with GRANTs
on the table who are not owners of the tables/shards.

With this commit, we allow such patterns.
2021-11-08 15:36:51 +01:00
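A hedged sketch of the relaxed permission model described above; the user, table, and shard ID are illustrative, and the `lock_shard_resources(lock_mode, shard_ids)` signature is an assumption:

```sql
-- A user who holds DML grants on the distributed table but does not own it.
GRANT INSERT, UPDATE, DELETE ON orders TO app_user;

-- After this change, such a user can take shard-level locks directly.
SET ROLE app_user;
SELECT lock_shard_resources(3, ARRAY[102008]);  -- 3 ~ ROW EXCLUSIVE lock mode
RESET ROLE;
```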
Hanefi Onaldi db613b2f5c
Merge pull request #5448 from citusdata/changelog-9.5.10 2021-11-08 16:50:12 +03:00
Hanefi Onaldi 7b63edfc83
Add changelog entries for 9.5.10 2021-11-08 16:41:47 +03:00
Önder Kalacı 3bce4d76d3
Merge pull request #5405 from citusdata/simplify_executor_locks
Simplify/Unify executor locks
2021-11-08 13:58:11 +01:00
Onder Kalaci d5e89b1132 Unify distributed execution logic for single replicated tables
Citus does not acquire any executor locks for shard replication == 1.
With this commit, we unify this decision and exit early.
2021-11-08 13:52:20 +01:00
Hanefi Onaldi 20f3248b6e
Merge pull request #5445 from citusdata/changelog-9.5.9 2021-11-08 14:09:53 +03:00
Hanefi Onaldi 3d49cbf9ab
Add changelog entries for 9.5.9 2021-11-08 13:19:10 +03:00
Önder Kalacı 65911ce162
Merge pull request #5397 from citusdata/naisila/fix-partitioned-index
Run fix_partition_shard_index_names after each wrong naming command
2021-11-08 11:09:08 +01:00
Önder Kalacı d5b371b2e0
Merge branch 'master' into naisila/fix-partitioned-index 2021-11-08 10:53:16 +01:00
Marco Slot 7f162ba834
Merge pull request #5444 from citusdata/marcocitus/remove-master_append_table_to_shard 2021-11-08 10:49:17 +01:00
naisila 385ba94d15 Run fix_partition_shard_index_names after each wrong naming command 2021-11-08 10:43:34 +01:00
Marco Slot 78866df13c Remove master_append_table_to_shard UDF 2021-11-08 10:43:24 +01:00