Onur Tirtir
10a762aa88
Implement columnar index support functions
2021-06-16 19:59:32 +03:00
Halil Ozan Akgül
27c7d28f7f
Merge pull request #5045 from citusdata/master-update-version-9e0729f4-4406-43ed-9942-d27aa5c398ec
...
Bump Citus to 10.2devel
2021-06-16 19:26:41 +03:00
Halil Ozan Akgul
db03afe91e
Bump citus version to 10.2devel
2021-06-16 17:44:05 +03:00
Ahmet Gedemenli
5115100db0
Set table size to zero if no size is read ( #5049 )
...
* Set table size to zero if no size is read
* Add comment to relation size bug fix
2021-06-16 17:23:19 +03:00
SaitTalhaNisanci
2511c4c045
Merge pull request #5025 from citusdata/split_multi
...
Split multi schedule
2021-06-16 15:30:15 +03:00
SaitTalhaNisanci
1784c7ef85
Merge branch 'master' into split_multi
2021-06-16 15:26:09 +03:00
Marco Slot
9797857967
Merge pull request #5048 from citusdata/marcocitus/fix-wcoar-null-input
2021-06-16 13:40:51 +02:00
Sait Talha Nisanci
c7d04e7f40
swap multi_schedule and multi_schedule_1
2021-06-16 14:40:14 +03:00
Sait Talha Nisanci
c55e44a4af
Drop table if exists
2021-06-16 14:19:59 +03:00
Sait Talha Nisanci
fc89487e93
Split check multi
2021-06-16 14:19:59 +03:00
Naisila Puka
e26b29d3bb
Fix nextval('seq_name'::text) bug, and schema for seq tests ( #5046 )
2021-06-16 13:58:49 +03:00
Marco Slot
a7e4d6c94a
Fix a bug that causes worker_create_or_alter_role to crash with NULL input
2021-06-15 20:07:08 +02:00
Halil Ozan Akgül
72eb37095b
Merge pull request #5043 from citusdata/citus-10.1.0-changelog-1623733267
...
Update Changelog for 10.1.0
2021-06-15 17:21:19 +03:00
Halil Ozan Akgul
91db015051
Add changelog entry for 10.1.0
2021-06-15 14:28:15 +03:00
Jelte Fennema
4c3934272f
Improve performance of citus_shards ( #5036 )
...
We were effectively joining on a calculated column because of our calls
to `shard_name`. This caused a really bad plan to be generated. In my
specific case it was taking ~18 seconds to show the output of
citus_shards. It had this explain plan:
```
QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Subquery Scan on citus_shards (cost=18369.74..18437.34 rows=5408 width=124) (actual time=18277.461..18278.509 rows=5408 loops=1)
-> Sort (cost=18369.74..18383.26 rows=5408 width=156) (actual time=18277.457..18277.726 rows=5408 loops=1)
Sort Key: ((pg_dist_shard.logicalrelid)::text), pg_dist_shard.shardid
Sort Method: quicksort Memory: 1629kB
CTE shard_sizes
-> Function Scan on citus_shard_sizes (cost=0.00..10.00 rows=1000 width=40) (actual time=71.137..71.934 rows=5413 loops=1)
-> Hash Join (cost=177.62..18024.42 rows=5408 width=156) (actual time=77.985..18257.237 rows=5408 loops=1)
Hash Cond: ((pg_dist_shard.logicalrelid)::oid = (pg_dist_partition.logicalrelid)::oid)
-> Hash Join (cost=169.81..371.98 rows=5408 width=48) (actual time=1.415..13.166 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.groupid = pg_dist_node.groupid)
-> Hash Join (cost=168.68..296.49 rows=5408 width=16) (actual time=1.403..10.011 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.shardid = pg_dist_shard.shardid)
-> Seq Scan on pg_dist_placement (cost=0.00..113.60 rows=5408 width=12) (actual time=0.004..3.684 rows=5408 loops=1)
Filter: (shardstate = 1)
-> Hash (cost=101.08..101.08 rows=5408 width=12) (actual time=1.385..1.386 rows=5408 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 318kB
-> Seq Scan on pg_dist_shard (cost=0.00..101.08 rows=5408 width=12) (actual time=0.003..0.688 rows=5408 loops=1)
-> Hash (cost=1.06..1.06 rows=6 width=40) (actual time=0.007..0.007 rows=6 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on pg_dist_node (cost=0.00..1.06 rows=6 width=40) (actual time=0.004..0.005 rows=6 loops=1)
-> Hash (cost=5.69..5.69 rows=169 width=130) (actual time=0.070..0.071 rows=169 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 36kB
-> Seq Scan on pg_dist_partition (cost=0.00..5.69 rows=169 width=130) (actual time=0.009..0.041 rows=169 loops=1)
SubPlan 2
-> Limit (cost=0.00..3.25 rows=1 width=8) (actual time=3.370..3.370 rows=1 loops=5408)
-> CTE Scan on shard_sizes (cost=0.00..32.50 rows=10 width=8) (actual time=3.369..3.369 rows=1 loops=5408)
Filter: ((shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid) = table_name) OR (('public.'::text || shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid)) = table_name))
Rows Removed by Filter: 2707
Planning Time: 0.705 ms
Execution Time: 18278.877 ms
```
With the changes it only takes 180ms to show the same output:
```
QUERY PLAN
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Sort (cost=904.59..918.11 rows=5408 width=156) (actual time=182.508..182.960 rows=5408 loops=1)
Sort Key: ((pg_dist_shard.logicalrelid)::text), pg_dist_shard.shardid
Sort Method: quicksort Memory: 1629kB
-> Hash Join (cost=418.03..569.27 rows=5408 width=156) (actual time=136.333..146.591 rows=5408 loops=1)
Hash Cond: ((pg_dist_shard.logicalrelid)::oid = (pg_dist_partition.logicalrelid)::oid)
-> Hash Join (cost=410.22..492.83 rows=5408 width=56) (actual time=136.231..140.132 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.groupid = pg_dist_node.groupid)
-> Hash Right Join (cost=409.09..417.34 rows=5408 width=24) (actual time=136.218..138.890 rows=5408 loops=1)
Hash Cond: ((((regexp_matches(citus_shard_sizes.table_name, '_(\d+)$'::text))[1])::integer) = pg_dist_shard.shardid)
-> HashAggregate (cost=45.00..48.50 rows=200 width=12) (actual time=131.609..132.481 rows=5408 loops=1)
Group Key: ((regexp_matches(citus_shard_sizes.table_name, '_(\d+)$'::text))[1])::integer
Batches: 1 Memory Usage: 737kB
-> Result (cost=0.00..40.00 rows=1000 width=12) (actual time=107.786..129.831 rows=5408 loops=1)
-> ProjectSet (cost=0.00..22.50 rows=1000 width=40) (actual time=107.780..128.492 rows=5408 loops=1)
-> Function Scan on citus_shard_sizes (cost=0.00..10.00 rows=1000 width=40) (actual time=107.746..108.107 rows=5414 loops=1)
-> Hash (cost=296.49..296.49 rows=5408 width=16) (actual time=4.595..4.598 rows=5408 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 339kB
-> Hash Join (cost=168.68..296.49 rows=5408 width=16) (actual time=1.702..3.783 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.shardid = pg_dist_shard.shardid)
-> Seq Scan on pg_dist_placement (cost=0.00..113.60 rows=5408 width=12) (actual time=0.004..0.837 rows=5408 loops=1)
Filter: (shardstate = 1)
-> Hash (cost=101.08..101.08 rows=5408 width=12) (actual time=1.683..1.685 rows=5408 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 318kB
-> Seq Scan on pg_dist_shard (cost=0.00..101.08 rows=5408 width=12) (actual time=0.004..0.824 rows=5408 loops=1)
-> Hash (cost=1.06..1.06 rows=6 width=40) (actual time=0.007..0.008 rows=6 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on pg_dist_node (cost=0.00..1.06 rows=6 width=40) (actual time=0.004..0.006 rows=6 loops=1)
-> Hash (cost=5.69..5.69 rows=169 width=130) (actual time=0.079..0.079 rows=169 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 36kB
-> Seq Scan on pg_dist_partition (cost=0.00..5.69 rows=169 width=130) (actual time=0.011..0.046 rows=169 loops=1)
Planning Time: 0.789 ms
Execution Time: 184.095 ms
```
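The gist of the fix, distilled from the new plan above: parse the shard id out of the `citus_shard_sizes` output once, then join on the plain integer `shardid` instead of comparing each row against a computed `shard_name(...)`. A simplified sketch of the idea (not the exact `citus_shards` view definition):
```sql
-- Before (simplified): the join condition computed shard_name() per row,
-- which left the planner no efficient join strategy:
--   ... ON shard_name(s.logicalrelid, s.shardid) = c.table_name

-- After (simplified): extract the shard id from the size output once,
-- then join on the integer shardid directly
SELECT s.shardid, c.size
FROM pg_dist_shard s
LEFT JOIN (
    SELECT shardid, max(size) AS size
    FROM (
        SELECT ((regexp_matches(table_name, '_(\d+)$'))[1])::int AS shardid,
               size
        FROM citus_shard_sizes()
    ) parsed
    GROUP BY shardid
) c USING (shardid);
```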
2021-06-14 13:32:30 +02:00
Onur Tirtir
a209999618
Enforce table opt constraints when using alter_columnar_table_set ( #5029 )
2021-06-08 17:39:16 +03:00
Hanefi Onaldi
5c6069a74a
Do not rely on fk cache when truncating local data ( #5018 )
2021-06-07 11:56:48 +03:00
Marco Slot
9770a1bf00
Merge pull request #5020 from citusdata/disable-dropping-shards
2021-06-04 14:48:11 +02:00
Jelte Fennema
c113cb3198
Merge pull request #5024 from citusdata/cleanup-old-shards-before-rebalance
2021-06-04 14:37:22 +02:00
Jelte Fennema
1a83628195
Use "orphaned shards" naming in more places
...
We were not very consistent in how we named these shards.
2021-06-04 11:39:19 +02:00
Jelte Fennema
3f60e4f394
Add ExecuteCriticalCommandInDifferentTransaction function
...
We use this pattern multiple times throughout the codebase now. Seems
like a good moment to abstract it away.
2021-06-04 11:30:27 +02:00
Jelte Fennema
503c70b619
Cleanup orphaned shards before moving when necessary
...
A shard move would fail if there was an orphaned version of the shard on
the target node. With this change, before actually failing, we first try
to clean up orphaned shards to see if that fixes the issue.
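In user-facing terms (a hedged illustration with made-up shard id and node names, not taken from the commit): a move like the following could previously fail if an orphaned copy of the shard was still present on the target node; now that leftover is cleaned up first when possible.
```sql
-- Hypothetical shard id and node names, for illustration only
SELECT citus_move_shard_placement(
    102008,            -- shard to move
    'worker-1', 5432,  -- source node
    'worker-2', 5432   -- target node
);
```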
2021-06-04 11:23:07 +02:00
Jelte Fennema
280b9ae018
Cleanup orphaned shards at the start of a rebalance
...
In case the background daemon hasn't cleaned up shards yet, we do this
manually at the start of a rebalance.
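For reference, the entry point where this cleanup now runs (an illustrative call, assuming default arguments):
```sql
-- Kick off a rebalance; orphaned shards left over from earlier moves
-- are now removed before any new shard moves are planned
SELECT rebalance_table_shards();
```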
2021-06-04 11:23:07 +02:00
Jelte Fennema
7015049ea5
Add citus_cleanup_orphaned_shards UDF
...
Sometimes the background daemon doesn't clean up orphaned shards quickly
enough. It's useful to have a UDF to trigger this removal when needed.
We already had a UDF like this, but it was only used during testing. This
exposes that UDF to users. As a safety measure it cannot be run in a
transaction, because that would cause the background daemon to stop
cleaning up shards while this transaction is running.
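Usage is a single call; per the note above, it must run outside an explicit transaction block:
```sql
-- Trigger removal of orphaned shard placements immediately, instead of
-- waiting for the background daemon; errors inside a transaction block
SELECT citus_cleanup_orphaned_shards();
```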
2021-06-04 11:23:07 +02:00
Naisila Puka
0f37ab5f85
Fixes column default coming from a sequence ( #4914 )
...
* Add user-defined sequence support for MX
* Remove default part when propagating to workers
* Fix ALTER TABLE with sequences for mx tables
* Clean up and add tests
* Propagate DROP SEQUENCE
* Removing function parts
* Propagate ALTER SEQUENCE
* Change sequence type before propagation & cleanup
* Revert "Propagate ALTER SEQUENCE"
This reverts commit 2bef64c5a29f4e7224a7f43b43b88e0133c65159.
* Ensure sequence is not used in a different column with different type
* Insert select tests
* Propagate rename sequence stmt
* Fix issue with group ID cache invalidation
* Add ALTER TABLE ALTER COLUMN TYPE .. precaution
* Fix attnum inconsistency and add various tests
* Add ALTER SEQUENCE precaution
* Remove Citus hook
* More tests
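A minimal sketch of the shape this change supports, assuming hypothetical table and sequence names:
```sql
-- A user-defined sequence as a column default on a distributed table
CREATE SEQUENCE events_id_seq;
CREATE TABLE events (
    id bigint DEFAULT nextval('events_id_seq'),
    payload text
);
SELECT create_distributed_table('events', 'id');
```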
Co-authored-by: Marco Slot <marco.slot@gmail.com>
2021-06-03 23:02:09 +03:00
Marco Slot
ec9664c5a4
Merge pull request #5021 from citusdata/marcocitus/fix-remove-node
2021-06-03 11:27:57 +02:00
Hanefi Onaldi
056005db4d
Improve tests for truncating local data ( #5012 )
...
We have slightly different behavior when using the truncate_local_data_after_distributing_table UDF on metadata-synced clusters. This PR adds tests to cover such cases.
Distributing tables that contain data and have foreign keys to reference tables is only allowed on metadata-synced clusters. This is why some of my earlier tests failed when run on a single-node Citus cluster.
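For context, the UDF under test, shown with a hypothetical table name:
```sql
-- Drop the now-redundant local copy of the data that remains on the
-- coordinator after the table has been distributed
SELECT truncate_local_data_after_distributing_table('events');
```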
2021-06-03 08:51:32 +03:00
Nils Dijk
5f76b93eac
fix link to codecov report from badge ( #5022 )
...
Links the codecov badge to the codecov report rather than to the badge image itself.
2021-06-02 16:48:33 +02:00
Marco Slot
e81d25a7be
Refactor RelationIsAKnownShard to remove onlySearchPath argument
2021-06-02 14:30:27 +02:00
Ahmet Gedemenli
089ef35940
Disable dropping and truncating known shards
...
Add test for disabling dropping and truncating known shards
2021-06-02 14:30:27 +02:00
Hanefi Onaldi
fa29d6667a
Accept invalidation before fk graph validity check ( #5017 )
...
InvalidateForeignKeyGraph sends an invalidation via shared memory to all
backends, including the current one.
However, we might not call AcceptInvalidationMessages before reading
from the cached graph. It is therefore better to also add a call to
AcceptInvalidationMessages in IsForeignConstraintRelationshipGraphValid.
2021-06-02 14:45:35 +03:00
Ahmet Gedemenli
f9c7d74623
Merge pull request #5019 from citusdata/sort-gucs-in-alphabetical-order
...
Sort GUCs in alphabetical order
2021-06-02 13:01:25 +03:00
Ahmet Gedemenli
103cf34418
Sort GUCs in alphabetical order
2021-06-02 12:52:18 +03:00
Jelte Fennema
abbcf4099a
Merge pull request #5013 from citusdata/better-citus-version-check
...
Move CheckCitusVersion to the top of each function
2021-06-02 10:03:42 +02:00
Jelte Fennema
b1cad26ebc
Move CheckCitusVersion to the top of each function
...
Previously this was usually done after argument parsing. That can cause
SEGFAULTs if the number or type of arguments changes in a new version.
By checking that the Citus version is correct before doing any argument
parsing, we protect against these types of issues. Issues like this have
occurred in pg_auto_failover, so this is not just a theoretical concern.
The main reason why these calls were not at the top of functions is
really just historical. It was because in the past we didn't allow
statements before declarations. Thus having this check before the
argument parsing would have only been possible if we first declared all
variables.
In addition to moving existing CheckCitusVersion calls, this change also
adds them to rebalancer-related functions (they were missing there).
2021-06-01 17:43:46 +02:00
Ahmet Gedemenli
98081557fb
Merge pull request #5016 from citusdata/fix-test-shard-id-issue
...
Fix shard id difference for enterprise
2021-06-01 17:44:24 +03:00
Ahmet Gedemenli
0fbddc740d
Fix shard id difference for enterprise
2021-06-01 17:17:46 +03:00
Jelte Fennema
4c20bf7a36
Remove pg_dist_rebalence_strategy_enterprise_check ( #5014 )
...
This is not necessary anymore now that the rebalancer is open source.
2021-06-01 06:16:46 -07:00
Ahmet Gedemenli
e2704d9ad9
Merge pull request #5015 from citusdata/fix-relname-null-bug-when-parallel-execution
...
Fix relname null bug when parallel execution
2021-06-01 15:05:32 +03:00
Ahmet Gedemenli
69d39c0e8b
Fix relname null bug when parallel execution
2021-06-01 14:14:35 +03:00
Ahmet Gedemenli
28b97c6c53
Merge pull request #5010 from citusdata/remove-func-generate-new-target-entries-for-sort-clauses
...
Remove function GenerateNewTargetEntriesForSortClauses
2021-06-01 12:46:47 +03:00
Ahmet Gedemenli
9638933d9d
Remove function GenerateNewTargetEntriesForSortClauses
2021-06-01 12:35:36 +03:00
Jelte Fennema
d3feee37ea
Add a simple python script to generate a new test ( #3972 )
...
The current default citus settings for tests are not really best
practice anymore. However, we keep them because lots of tests depend on
them.
I noticed that I was creating the same test harness every time I added a
new test. This is a simple script that generates that harness, given a
name for the test.
To run:
src/test/regress/bin/create_test.py my_awesome_test
2021-06-01 11:22:11 +02:00
Marco Slot
c03729ad03
Only warn about reference tables when removing last node
2021-06-01 10:53:12 +02:00
Onur Tirtir
94f30a0428
Refactor index check in ColumnarProcessUtility
2021-06-01 11:12:28 +03:00
SaitTalhaNisanci
c72d2b479b
Add tests for union pushdown workaround ( #5005 )
2021-05-31 20:02:20 +02:00
Jelte Fennema
3271f1bd13
Fix data race in get_rebalance_progress ( #5008 )
...
To be able to report progress of the rebalancer, the rebalancer updates
the state of a shard move in a shared memory segment. To then fetch the
progress, `get_rebalance_progress` can be called which reads this shared
memory.
Without this change it did so without using any synchronization
primitives, allowing for data races. This fixes that by using atomic
operations to update and read from the parts of the shared memory that
can be changed after initialization.
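The reader side that this change synchronizes (a usage illustration, not part of the commit):
```sql
-- Poll shard move progress from another session while a rebalance runs;
-- this reads the shared memory segment that the rebalancer updates
SELECT * FROM get_rebalance_progress();
```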
2021-05-31 15:27:32 +02:00
SaitTalhaNisanci
8c3f85692d
Not consider old placements when disabling or removing a node ( #4960 )
...
* Not consider old placements when disabling or removing a node
* update cluster test
2021-05-28 22:38:20 +02:00
SaitTalhaNisanci
40a229976f
Fix flaky test because of parallel metadata syncing ( #5004 )
2021-05-28 13:19:15 +03:00
SaitTalhaNisanci
a20cc3b36a
Only consider shard state 1 in citus shards ( #4970 )
2021-05-28 11:33:48 +03:00