Commit Graph

6071 Commits (28dceecfffbe84937163ca0343838926a3403084)

Author SHA1 Message Date
Nitish Upreti 28dceecfff Handling failure with subtransaction 2022-08-29 18:42:14 -07:00
Nitish Upreti 789ff7b162 Validate relation name before logging it 2022-08-29 18:24:38 -07:00
Nitish Upreti 2c50101074 Update sql script 2022-08-28 17:58:20 -07:00
Nitish Upreti 6348faf7d3 Sort GUC 2022-08-28 17:48:30 -07:00
Nitish Upreti 0353ca3258 Upgrade test tweak 2022-08-28 17:44:45 -07:00
Nitish Upreti 3d162e1623 Update split tests output 2022-08-28 17:08:12 -07:00
Nitish Upreti d3442e2e04 Update isolation tests 2022-08-28 16:45:47 -07:00
Nitish Upreti e9e64eb3e7 failure split cleanup 2022-08-28 01:04:16 -07:00
Nitish Upreti 2b83be1f1a failure split cleanup 2022-08-28 00:56:59 -07:00
Nitish Upreti 2ce437776c test message 2022-08-28 00:20:23 -07:00
Nitish Upreti daa38468c8 test message 2022-08-28 00:16:44 -07:00
Nitish Upreti 7280b80ef4 Update tests 2022-08-28 00:08:58 -07:00
Nitish Upreti 0655a03ee0 Upgrade Basic After 2022-08-27 23:54:55 -07:00
Nitish Upreti bd5cd55b7a Upgrade Basic After 2022-08-27 23:42:33 -07:00
Nitish Upreti a9557725fd Multi Tenant Isolation 2022-08-27 23:27:12 -07:00
Nitish Upreti 44480c3586 Multi Tenant Isolation 2022-08-27 23:25:35 -07:00
Nitish Upreti f1c0e0456a Multi Tenant Isolation 2022-08-27 23:21:39 -07:00
Nitish Upreti 0180214826 Multi Tenant Isolation 2022-08-27 23:16:41 -07:00
Nitish Upreti 5aedb82268 multi_tenant_isolation edit 2022-08-27 23:12:04 -07:00
Nitish Upreti 3b231b70d8 multi_tenant_isolation test changes 2022-08-27 23:04:11 -07:00
Nitish Upreti 5e5a2147cd Fix more tests 2022-08-27 22:19:14 -07:00
Nitish Upreti ef2361f091 Run reindent 2022-08-27 21:24:56 -07:00
Nitish Upreti 59aaed3e5c Fix failing tests 2022-08-27 21:23:17 -07:00
Nitish Upreti 21028434ce Add operation name for drop 2022-08-27 20:12:29 -07:00
Nitish Upreti ce3ae8ff81 Downgrade steps 2022-08-27 18:06:24 -07:00
Nitish Upreti f3a14460e8 Permission check causes tenant isolation failure 2022-08-26 16:08:37 -07:00
Nitish Upreti a7ec398f7a Use recordid sequence always 2022-08-26 16:03:14 -07:00
Nitish Upreti cc54697580 Remove null.d 2022-08-26 15:37:01 -07:00
Nitish Upreti 6e02b84394 Deferred drop test 2022-08-26 15:13:33 -07:00
Nitish Upreti fa1456d14f Fix dummy shard logging bug and update test 2022-08-26 13:50:32 -07:00
Nitish Upreti 206690925e Failure testing 2022-08-25 22:45:22 -07:00
Nitish Upreti 3d46860fbb Reindent 2022-08-25 18:45:41 -07:00
Nitish Upreti 919e44eab6 Improvements and comments 2022-08-25 18:42:46 -07:00
Nitish Upreti 92b1cdf6c0 Deferred drop Hello World 2022-08-25 13:18:04 -07:00
Nitish Upreti bf61fe3565 Cleaner Improvement 2022-08-25 09:29:19 -07:00
Nitish Upreti 895fe14040 Initial Commit 2022-08-24 23:55:44 -07:00
Jelte Fennema 31faa88a4e
Track rebalance progress at the shard move level (#6187)
We're in the process of completely changing the shard rebalancer
experience and infrastructure. Soon the shard rebalancer will include
retries, crash recovery and support for running in the background.

These improvements come at a cost, though: the way the
get_rebalance_progress UDF currently works is very hard to replicate
with this new structure. This is mostly because the old behaviour
doesn't really make sense anymore with this new infrastructure. A new
and better way to track the progress will be included as part of the new
infrastructure.

This PR is in preparation for the new shard rebalancer experience.
It changes the get_rebalance_progress UDF to only display the moves that
are in progress at the moment, not the ones that happened in the past or
that are planned for the future. Another option would have been to
completely remove the current get_rebalance_progress functionality and
point people to the new way of tracking progress. But old blog posts
still reference the old UDF and users might have some automation on top
of it. Showing the progress of the current moves is fairly simple to
achieve, even with the new infrastructure.

So this PR is a kind of compromise: It doesn't have complete feature
parity with the old get_rebalance_progress, but the most common use
cases will still work.

There's also an advantage to the change: you can now see the progress of
shard moves that were triggered by calling citus_move_shard_placement
manually, instead of only being able to see the progress of moves that
were initiated using get_rebalance_table_shards.
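
As a hedged illustration (not part of this PR's diff), inspecting the
currently running moves might look roughly like the following; the exact
column list of get_rebalance_progress varies between Citus versions, so
the selected columns here are an assumption:

```sql
-- Minimal sketch: show only the moves that are in progress right now.
-- Column names are an assumption and may differ per Citus version.
SELECT sessionid, table_name, shardid, sourcename, targetname, progress
FROM get_rebalance_progress();
```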
2022-08-18 18:57:04 +02:00
Önder Kalacı 961fcff5db
Properly add / remove coordinator for isolation tests (#6181)
We used to rely on a separate session to add the coordinator.
However, that might prevent the existing sessions from getting
assigned proper gpids, which causes flaky tests.
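
For context, a minimal sketch of adding and removing the coordinator as
a node from within the same session; the host and port below are
placeholders commonly used in Citus regression tests, not necessarily
what this test uses:

```sql
-- Register the coordinator itself as a node (placeholder host/port) ...
SELECT citus_set_coordinator_host('localhost', 57636);
-- ... and remove it again when the test is done.
SELECT citus_remove_node('localhost', 57636);
```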
2022-08-18 17:32:12 +03:00
Jelte Fennema 7dca028391
Fix flakyness in isolation_reference_table (#6193)
The newly introduced isolation_reference_table test had some flakiness,
because the assumption about how the arbitrary reference table gets chosen
was incorrect. This introduces a VACUUM FULL at the start of the test to
ensure the assumption actually holds.
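
A rough sketch of the idea, assuming the arbitrary reference table is
picked based on physical row order in a Citus catalog table (the table
name below is a placeholder, not necessarily what the test vacuums):

```sql
-- Minimal sketch: rewrite the table so its physical row order is
-- deterministic before the isolation test starts.
VACUUM FULL pg_dist_partition;
```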

Example of failed test: https://app.circleci.com/pipelines/github/citusdata/citus/26108/workflows/0a5cd526-006b-423e-8b67-7411b9c6be36/jobs/736802
2022-08-18 15:47:28 +03:00
Jelte Fennema 0a045afd3a
Fix flakyness in columnar_first_row_number test (#6192)
When running the columnar_first_row_number test in parallel with the
columnar_query test, it would sometimes fail. This bug is tracked
in #6191. For now, to make CI less flaky, we simply don't run these tests
in parallel.

Example of failed test: https://app.circleci.com/pipelines/github/citusdata/citus/26106/workflows/75d00ea9-23f8-4bff-a927-bced19e1f81b/jobs/736713

Fixes #6184
2022-08-18 15:32:57 +03:00
Jelte Fennema d16b458e2a
Remove the flaky rollback_to_savepoint test (#6190)
This removes a flaky test that I introduced in #3868 after I fixed the
issue described in #3622. This test sometimes fails randomly in CI.
The way it fails indicates that there might be some bug: a connection
breaks after rolling back to a savepoint.
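
For reference, a minimal sketch of the kind of rollback-to-savepoint
sequence involved; the table name `t` and the failing statement are
assumptions, only the INSERT and ROLLBACK also appear in the diff
further down:

```sql
BEGIN;
SAVEPOINT s1;
-- ... a statement that errors out on a worker would go here ...
ROLLBACK TO SAVEPOINT s1;
-- After the rollback the connection sometimes turned out to be broken:
INSERT INTO t SELECT i FROM generate_series(1, 100) i;
ROLLBACK;
```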

I tried reproducing this issue locally, but I wasn't able to. I don't
understand what causes the failure.

Things that I tried were:

1. Running the test with:
   ```sql
   SET citus.force_max_query_parallelization = true;
   ```
2. Running the test with:
   ```sql
   SET citus.max_adaptive_executor_pool_size = 1;
   ```
3. Running the test in parallel with the same tests that it is run in
   parallel with in multi_schedule.

None of these allowed me to reproduce the issue locally.

So I think it's time to give up on fixing this test and simply remove
it. The regression that this test protects against seems very unlikely
to reappear, since in #3868 I also added a big comment about the need
for the newly added `UnclaimConnection` call. So I think the need for
the test is quite small, and removing it will make our CI less flaky.

In case the cause of the bug ever gets found, I tracked the bug in #6189.

Example of a failing CI run:
https://app.circleci.com/pipelines/github/citusdata/citus/26098/workflows/f84741d9-13b1-4ae7-9155-c21ed3466951/jobs/736424

For reference, the unexpected diff is this (both warnings and an error):
```diff
 INSERT INTO t SELECT i FROM generate_series(1, 100) i;
+WARNING:  connection to the remote node localhost:57638 failed with the following error: 
+WARNING:  
+CONTEXT:  while executing command on localhost:57638
+ERROR:  connection to the remote node localhost:57638 failed with the following error: 
 ROLLBACK;
```

This test is also mentioned as the most failing regression test in #5975
2022-08-18 15:14:16 +03:00
Önder Kalacı 418b4f96d6
Merge pull request #6166 from citusdata/fix_seq_ownership
Support Sequences owned by columns that are added before distributing tables
2022-08-18 11:16:14 +02:00
Onder Kalaci 9ec8e627c1 Support Sequences owned by columns before distributing tables
There are three different ways that a sequence can interact with
tables. (1) and (2) are already supported; this commit adds
support for (3).

    (1) column DEFAULT nextval('seq'):

        The dependency is roughly like below,
        and ExpandCitusSupportedTypes() is responsible
        for finding the depending sequences.

        schema <--- table <--- column <---- default value
         ^                                     |
         |------------------ sequence <--------|

    (2) serial columns (bigserial/smallserial etc.):

        The dependency is roughly like below,
        and ExpandCitusSupportedTypes() is responsible
        for finding the depending sequences.

        schema <--- table <--- column <---- default value
                                 ^             |
                                 |             |
                             sequence <--------|

    (3) Sequence OWNED BY table.column: Added support for
        this type of resolution in this commit.

        The dependency is almost like the following, and
        ExpandCitusSupportedTypes() is NOT responsible for finding
        the dependency.

        schema <--- table <--- column
                                 ^
                                 |
                             sequence
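
To make the three cases concrete, a small SQL sketch (table, column and
sequence names are made up for illustration):

```sql
-- (1) a column default that calls nextval() on an existing sequence
CREATE SEQUENCE user_id_seq;
CREATE TABLE users_default (id bigint DEFAULT nextval('user_id_seq'));

-- (2) a serial column; PostgreSQL creates and owns the sequence implicitly
CREATE TABLE users_serial (id bigserial);

-- (3) a sequence explicitly marked as OWNED BY a table column
CREATE TABLE users_owned (id bigint);
CREATE SEQUENCE users_owned_id_seq OWNED BY users_owned.id;
```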
2022-08-18 10:29:40 +02:00
Naisila Puka 69ffdbf0e3
Uses object name in cannot distribute object error (#6186)
Object type IDs have changed in PG15 because of at least two objects
added to the list: OBJECT_PARAMETER_ACL and OBJECT_PUBLICATION_NAMESPACE.

To avoid different output between PG versions, let's use the object
name in the error and put the object ID in the error detail.

Relevant PG commits:
a0ffa885e478f5eeacc4e250e35ce25a4740c487
5a2832465fd8984d089e8c44c094e6900d987fcd
2022-08-18 11:05:17 +03:00
Ying Xu 91473635db
[Columnar] Check for existence of Citus before creating Citus_Columnar (#6178)
* Added a check to see if Citus has already been loaded before creating citus_columnar

* added tests
2022-08-17 15:12:42 -07:00
Nils Dijk a9d47a96f6
Fix reference table lock contention (#6173)
DESCRIPTION: Fix reference table lock contention

Dropping and creating reference tables unintentionally blocked on each other due to the use of an ExclusiveLock both for the drop and for conditionally copying existing reference tables to (new) nodes.

The patch does the following:
 - Lower the lock level for dropping (reference) tables to `ShareLock` so they don't self-conflict
 - Treat reference tables and distributed tables equally and acquire the colocation lock when dropping any table that is in a colocation group
 - Perform the precondition check for copying reference tables twice: the first time with a lower lock that doesn't conflict with anything. It could have been a NoLock; however, in preparation for dropping a colocation group, it is an `AccessShareLock`

During normal operation the first check will always pass and we don't have to escalate that lock, which means that adding and removing reference tables won't block each other. Only after a node addition will the first `create_reference_table` still need to acquire an `ExclusiveLock` on the colocation group to perform the copy.
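
As a hedged illustration of the contention being fixed (table names are
placeholders), the following pair of concurrent statements used to block
each other and, per the description above, no longer does during normal
operation:

```sql
-- Session 1: drop an existing reference table
DROP TABLE ref_table_a;

-- Session 2: create another reference table at the same time
SELECT create_reference_table('ref_table_b');
```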
2022-08-17 18:19:28 +02:00
Ahmet Gedemenli 0631e1998b
Fix upgrade paths for #6100 (#6176)
* Fix upgrade paths for #6100

Co-authored-by: Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>
2022-08-17 18:56:53 +03:00
Naisila Puka 20a0e0ed39
Grant create on public to some users where necessary (for PG15) (#6180) 2022-08-17 17:35:10 +03:00
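
For context on the change above: PG15 no longer grants CREATE on the
public schema to all users by default, so roles that relied on it need an
explicit grant. A minimal sketch with a placeholder role name:

```sql
-- Placeholder role; grants back the CREATE privilege that PG15 removed
-- from PUBLIC by default.
GRANT CREATE ON SCHEMA public TO regular_test_user;
```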
Jelte Fennema 3f6ce889eb
Use CreateSimpleHash (and variants) whenever possible (#6177)
This is a refactoring PR that starts using our new hash table creation
helper function. It adds a few more macros for ease of use, because C
doesn't have default arguments. It also adds a macro to check if a
struct contains automatic padding bytes. No struct that is hashed using
tag_hash should have automatic padding bytes, because those bytes are
undefined and thus using them to create a hash will result in undefined
behaviour (usually a random hash).
2022-08-17 13:01:59 +03:00
aykut-bozkurt 52efe08642
default mode for shard splitting is set to auto. (#6179) 2022-08-17 12:18:47 +03:00