Jelte Fennema
f4a2d99ce9
Harden ReplicateShardToNode to unexpected placements ( #5071 )
...
Originally ReplicateShardToNode was meant for
`upgrade_to_reference_table`, which required handling of existing inactive
placements. These days `upgrade_to_reference_table` is deprecated and
cannot be used anymore. Now that we also have SHARD_STATE_TO_DELETE, this
leftover code seemed error-prone. So this removes support for
activating inactive reference table placements, since these should not
be possible anymore. If a non-active reference table placement is found anyway,
the function now errors out.
This also removes a few outdated comments related to `upgrade_to_reference_table`.
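A minimal sketch of the shape of the new check described above; the placement variable and the error wording are illustrative, not the exact Citus source:
```c
/* Illustrative sketch only, not the actual Citus code: ReplicateShardToNode
 * no longer tries to "repair" an existing non-active reference table
 * placement, it simply refuses to continue. */
if (targetPlacement != NULL &&
	targetPlacement->shardState != SHARD_STATE_ACTIVE)
{
	ereport(ERROR, (errmsg("target placement " UINT64_FORMAT " of shard "
						   UINT64_FORMAT " is in an unexpected state",
						   targetPlacement->placementId, shardId)));
}
```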
2021-06-24 13:11:02 +03:00
Jelte Fennema
d1d386a904
Only allow moves of shards of distributed tables ( #5072 )
...
Moving shards of reference tables was possible in at least one case:
```sql
select citus_disable_node('localhost', 9702);
create table r(x int);
select create_reference_table('r');
set citus.replicate_reference_tables_on_activate = off;
select citus_activate_node('localhost', 9702);
select citus_move_shard_placement(102008, 'localhost', 9701, 'localhost', 9702);
```
This would then remove the reference table shard on the source, causing
all kinds of issues. This fixes that by disallowing all shard moves
except for shards of distributed tables.
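Conceptually the fix is a guard at the start of the shard move; the sketch below is illustrative, and the helper/enum names are assumptions rather than the exact Citus symbols:
```c
/* Illustrative sketch: refuse to move anything that is not a shard of a
 * distributed table. ErrorIfMoveUnsupportedTableType, IsCitusTableType and
 * DISTRIBUTED_TABLE stand in for the real table-type check. */
static void
ErrorIfMoveUnsupportedTableType(Oid relationId)
{
	if (!IsCitusTableType(relationId, DISTRIBUTED_TABLE))
	{
		ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
						errmsg("table %s is not a distributed table, "
							   "moving its shards is not supported",
							   get_rel_name(relationId))));
	}
}
```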
Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
2021-06-23 16:25:46 +02:00
Marco Slot
b2d51e6691
Add link to paper to README
2021-06-23 11:38:08 +02:00
Önder Kalacı
2939f10fd0
Merge pull request #5067 from citusdata/add_tests_mx
...
Add regression tests for changing column type with fkey
2021-06-23 09:33:41 +03:00
Onder Kalaci
75847d10b5
Add regression tests for changing column type with fkey
...
closes https://github.com/citusdata/citus/issues/2337 as it doesn't
apply anymore.
2021-06-23 09:03:55 +03:00
Önder Kalacı
0a1a3e0dc0
Merge pull request #5066 from citusdata/fix_test
...
fix regression tests to avoid any conflicts in enterprise
2021-06-22 08:49:13 +03:00
Onder Kalaci
55ed93bf0d
fix regression tests to avoid any conflicts in enterprise
2021-06-22 08:45:17 +03:00
Jelte Fennema
ca00b63272
Avoid two race conditions in the rebalance progress monitor ( #5050 )
...
The first and main issue was that we were putting absolute pointers into
shared memory for the `steps` field of the `ProgressMonitorData`. This
pointer was being overwritten every time a process requested the monitor
steps, which is the only reason why this even worked in the first place.
To quote a part of a relevant stack overflow answer:
> First of all, putting absolute pointers in shared memory segments is
> terrible, terrible idea - those pointers would only be valid in the
> process that filled in their values. Shared memory segments are not
> guaranteed to attach at the same virtual address in every process.
> On the contrary - they attach where the system deems it possible when
> `shmaddr == NULL` is specified on call to `shmat()`
Source: https://stackoverflow.com/a/10781921/2570866
In this case a race condition occurred when a second process overwrote
the pointer in between the first process's write and its subsequent read of
the steps field.
This issue is fixed by not storing the pointer in shared memory anymore.
Instead we now calculate its position every time we need it.
The second race condition I have not been able to trigger, but I found it
while investigating this one. The issue was that we published the handle of
the shared memory segment before we had initialized the steps data. This
means that, during initialization of the data, a call to
`get_rebalance_progress()` could read partial data in an unsynchronized
manner.
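The underlying problem and fix can be demonstrated outside of Citus. The standalone sketch below (plain System V shared memory, nothing Citus-specific, struct names are made up) shows why an absolute pointer stored inside a shared segment is only meaningful to the process that wrote it, and how deriving the address from the segment base instead side-steps the issue:
```c
/* Standalone demonstration, not Citus code: a header plus a trailing array
 * live in one shared memory segment. Storing header->steps as an absolute
 * pointer only works in the process that attached at that address; other
 * processes may attach the segment elsewhere (shmaddr == NULL), so they
 * must recompute the array's location from their own mapping instead.
 * Error handling is omitted for brevity. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

typedef struct ProgressStep { long sourceShardId; long progress; } ProgressStep;
typedef struct ProgressHeader
{
	int stepCount;
	ProgressStep *steps;	/* BAD: absolute pointer, process-local and racy */
} ProgressHeader;

/* GOOD: derive the steps array from wherever *this* process mapped the segment. */
static ProgressStep *
StepsFromHeader(ProgressHeader *header)
{
	return (ProgressStep *) ((char *) header + sizeof(ProgressHeader));
}

int
main(void)
{
	size_t size = sizeof(ProgressHeader) + 4 * sizeof(ProgressStep);
	int segmentId = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
	ProgressHeader *header = (ProgressHeader *) shmat(segmentId, NULL, 0);

	memset(header, 0, size);
	header->stepCount = 4;
	header->steps = StepsFromHeader(header);	/* valid here, useless elsewhere */

	/* Another process attaching this segment gets its own base address, so it
	 * must call StepsFromHeader() rather than trust header->steps. */
	ProgressStep *steps = StepsFromHeader(header);
	steps[0].progress = 42;
	printf("step 0 progress: %ld\n", steps[0].progress);

	shmdt(header);
	shmctl(segmentId, IPC_RMID, NULL);
	return 0;
}
```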
2021-06-21 14:03:42 +00:00
Önder Kalacı
206401b708
Merge pull request #5064 from citusdata/solidfy_prepared_statements
...
Improve regression tests for prepared statements for local cached plans
2021-06-21 14:07:32 +03:00
Onder Kalaci
76ae5dd0db
Improve regression tests for prepared statements
...
With a recent commit (644b266dee), the behaviour of prepared statements
with local cached plans changed slightly.
Now, Citus caches a plan only when it is re-used. That means local plan
caching kicks in on the 7th execution, and the 8th execution is the first
one that actually uses the cached plan.
So, the tests are improved to also cover the 8th execution.
2021-06-21 13:34:44 +03:00
Önder Kalacı
4e632d9da3
Merge pull request #5053 from citusdata/fix_dropped_cached_plans
...
Deparse/parse the local cached queries
2021-06-21 13:33:36 +03:00
Onder Kalaci
69ca943e58
Deparse/parse the local cached queries
...
With local query caching, we try to avoid the deparse/parse stages, as the
operation is too costly.
However, we can afford a deparse/parse pass once per cached query, right
before we put the plan into the cache. With that, we avoid edge
cases like (4239) or (5038).
In a sense, we make local plan caching behave similarly to non-cached
local/remote queries, by forcing the query to be deparsed once.
2021-06-21 12:24:29 +03:00
Onur Tirtir
82e58c91f3
Use correct test schedule name in columnar vg test target ( #5027 )
2021-06-18 11:31:16 +03:00
Onur Tirtir
b0ca823b4d
Merge pull request #5052 from citusdata/columnar-index
...
Merge columnar metapage changes and basic index support
2021-06-17 14:55:40 +03:00
Onur Tirtir
6215a3aa93
Merge remote-tracking branch 'origin/master' into columnar-index
2021-06-17 14:31:12 +03:00
Hanefi Onaldi
c4f50185e0
Ignore pl/pgsql line numbers in regression outputs ( #4411 )
2021-06-17 14:11:17 +03:00
SaitTalhaNisanci
3edef11a9f
Fix a test in hyperscale schedule ( #5042 )
2021-06-17 13:40:05 +03:00
Önder Kalacı
e56f5909c9
Merge pull request #5054 from citusdata/base_for_enterprise_16_june
...
Get ready for Improve index backed constraint creation for online rebalancer
2021-06-17 13:09:28 +03:00
Onder Kalaci
bc09288651
Get ready for Improve index backed constraint creation for online rebalancer
...
See:
https://github.com/citusdata/citus-enterprise/issues/616
2021-06-17 13:05:56 +03:00
Onur Tirtir
681f700321
Fix first_row_number test for stripe_row_limit enforcement
2021-06-17 10:51:43 +03:00
Onur Tirtir
18fe0311c0
Move rest of the schema changes to 10.2-1
2021-06-16 20:43:41 +03:00
Onur Tirtir
07117b0454
Move sql files for upgrade/downgrade_columnar_storage to 10.2-1
2021-06-16 20:40:26 +03:00
Onur Tirtir
3d11c0f9ef
Merge remote-tracking branch 'origin/master' into columnar-index
...
Conflicts:
src/test/regress/expected/columnar_empty.out
src/test/regress/expected/multi_extension.out
2021-06-16 20:23:50 +03:00
Onur Tirtir
a2efe59e2f
Merge pull request #4950 from citusdata/col/index-support
...
Add basic index support for columnar tables.
This PR brings support for the following index/constraint types:
* btree indexes
* primary keys
* unique constraints / indexes
* exclusion constraints
* hash indexes
* partial indexes
* indexes including additional columns (INCLUDE syntax), even if we don't properly support index-only scans
2021-06-16 20:11:54 +03:00
Onur Tirtir
b6b969971a
Error out for CLUSTER commands on columnar tables
2021-06-16 20:06:33 +03:00
Onur Tirtir
5adab2a3ac
Report progress when building index on columnar tables
2021-06-16 20:06:33 +03:00
Onur Tirtir
9b4dc2f804
Prevent using parallel scan for columnar index builds
2021-06-16 19:59:32 +03:00
Onur Tirtir
82ea1b5daf
Not remove all paths, keep IndexPath's
2021-06-16 19:59:32 +03:00
Onur Tirtir
1af50e98b3
Fix a comment in ColumnarMetapageRead
2021-06-16 19:59:32 +03:00
Onur Tirtir
10a762aa88
Implement columnar index support functions
2021-06-16 19:59:32 +03:00
Halil Ozan Akgül
27c7d28f7f
Merge pull request #5045 from citusdata/master-update-version-9e0729f4-4406-43ed-9942-d27aa5c398ec
...
Bump Citus to 10.2devel
2021-06-16 19:26:41 +03:00
Halil Ozan Akgul
db03afe91e
Bump citus version to 10.2devel
2021-06-16 17:44:05 +03:00
Ahmet Gedemenli
5115100db0
Set table size to zero if no size is read ( #5049 )
...
* Set table size to zero if no size is read
* Add comment to relation size bug fix
2021-06-16 17:23:19 +03:00
SaitTalhaNisanci
2511c4c045
Merge pull request #5025 from citusdata/split_multi
...
Split multi schedule
2021-06-16 15:30:15 +03:00
SaitTalhaNisanci
1784c7ef85
Merge branch 'master' into split_multi
2021-06-16 15:26:09 +03:00
Marco Slot
9797857967
Merge pull request #5048 from citusdata/marcocitus/fix-wcoar-null-input
2021-06-16 13:40:51 +02:00
Sait Talha Nisanci
c7d04e7f40
swap multi_schedule and multi_schedule_1
2021-06-16 14:40:14 +03:00
Sait Talha Nisanci
c55e44a4af
Drop table if exists
2021-06-16 14:19:59 +03:00
Sait Talha Nisanci
fc89487e93
Split check multi
2021-06-16 14:19:59 +03:00
Naisila Puka
e26b29d3bb
Fix nextval('seq_name'::text) bug, and schema for seq tests ( #5046 )
2021-06-16 13:58:49 +03:00
Marco Slot
a7e4d6c94a
Fix a bug that causes worker_create_or_alter_role to crash with NULL input
2021-06-15 20:07:08 +02:00
Halil Ozan Akgül
72eb37095b
Merge pull request #5043 from citusdata/citus-10.1.0-changelog-1623733267
...
Update Changelog for 10.1.0
2021-06-15 17:21:19 +03:00
Halil Ozan Akgul
91db015051
Add changelog entry for 10.1.0
2021-06-15 14:28:15 +03:00
Jelte Fennema
4c3934272f
Improve performance of citus_shards ( #5036 )
...
We were effectively joining on a calculated column because of our calls
to `shard_name`. This caused a really bad plan to be generated. In my
specific case it was taking ~18 seconds to show the output of
citus_shards. It had this explain plan:
```
QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Subquery Scan on citus_shards (cost=18369.74..18437.34 rows=5408 width=124) (actual time=18277.461..18278.509 rows=5408 loops=1)
-> Sort (cost=18369.74..18383.26 rows=5408 width=156) (actual time=18277.457..18277.726 rows=5408 loops=1)
Sort Key: ((pg_dist_shard.logicalrelid)::text), pg_dist_shard.shardid
Sort Method: quicksort Memory: 1629kB
CTE shard_sizes
-> Function Scan on citus_shard_sizes (cost=0.00..10.00 rows=1000 width=40) (actual time=71.137..71.934 rows=5413 loops=1)
-> Hash Join (cost=177.62..18024.42 rows=5408 width=156) (actual time=77.985..18257.237 rows=5408 loops=1)
Hash Cond: ((pg_dist_shard.logicalrelid)::oid = (pg_dist_partition.logicalrelid)::oid)
-> Hash Join (cost=169.81..371.98 rows=5408 width=48) (actual time=1.415..13.166 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.groupid = pg_dist_node.groupid)
-> Hash Join (cost=168.68..296.49 rows=5408 width=16) (actual time=1.403..10.011 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.shardid = pg_dist_shard.shardid)
-> Seq Scan on pg_dist_placement (cost=0.00..113.60 rows=5408 width=12) (actual time=0.004..3.684 rows=5408 loops=1)
Filter: (shardstate = 1)
-> Hash (cost=101.08..101.08 rows=5408 width=12) (actual time=1.385..1.386 rows=5408 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 318kB
-> Seq Scan on pg_dist_shard (cost=0.00..101.08 rows=5408 width=12) (actual time=0.003..0.688 rows=5408 loops=1)
-> Hash (cost=1.06..1.06 rows=6 width=40) (actual time=0.007..0.007 rows=6 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on pg_dist_node (cost=0.00..1.06 rows=6 width=40) (actual time=0.004..0.005 rows=6 loops=1)
-> Hash (cost=5.69..5.69 rows=169 width=130) (actual time=0.070..0.071 rows=169 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 36kB
-> Seq Scan on pg_dist_partition (cost=0.00..5.69 rows=169 width=130) (actual time=0.009..0.041 rows=169 loops=1)
SubPlan 2
-> Limit (cost=0.00..3.25 rows=1 width=8) (actual time=3.370..3.370 rows=1 loops=5408)
-> CTE Scan on shard_sizes (cost=0.00..32.50 rows=10 width=8) (actual time=3.369..3.369 rows=1 loops=5408)
Filter: ((shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid) = table_name) OR (('public.'::text || shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid)) = table_name))
Rows Removed by Filter: 2707
Planning Time: 0.705 ms
Execution Time: 18278.877 ms
```
With the changes (joining on the shard id extracted from `citus_shard_sizes.table_name` instead of comparing against `shard_name(...)`), the same output takes only ~180ms:
```
QUERY PLAN
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Sort (cost=904.59..918.11 rows=5408 width=156) (actual time=182.508..182.960 rows=5408 loops=1)
Sort Key: ((pg_dist_shard.logicalrelid)::text), pg_dist_shard.shardid
Sort Method: quicksort Memory: 1629kB
-> Hash Join (cost=418.03..569.27 rows=5408 width=156) (actual time=136.333..146.591 rows=5408 loops=1)
Hash Cond: ((pg_dist_shard.logicalrelid)::oid = (pg_dist_partition.logicalrelid)::oid)
-> Hash Join (cost=410.22..492.83 rows=5408 width=56) (actual time=136.231..140.132 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.groupid = pg_dist_node.groupid)
-> Hash Right Join (cost=409.09..417.34 rows=5408 width=24) (actual time=136.218..138.890 rows=5408 loops=1)
Hash Cond: ((((regexp_matches(citus_shard_sizes.table_name, '_(\d+)$'::text))[1])::integer) = pg_dist_shard.shardid)
-> HashAggregate (cost=45.00..48.50 rows=200 width=12) (actual time=131.609..132.481 rows=5408 loops=1)
Group Key: ((regexp_matches(citus_shard_sizes.table_name, '_(\d+)$'::text))[1])::integer
Batches: 1 Memory Usage: 737kB
-> Result (cost=0.00..40.00 rows=1000 width=12) (actual time=107.786..129.831 rows=5408 loops=1)
-> ProjectSet (cost=0.00..22.50 rows=1000 width=40) (actual time=107.780..128.492 rows=5408 loops=1)
-> Function Scan on citus_shard_sizes (cost=0.00..10.00 rows=1000 width=40) (actual time=107.746..108.107 rows=5414 loops=1)
-> Hash (cost=296.49..296.49 rows=5408 width=16) (actual time=4.595..4.598 rows=5408 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 339kB
-> Hash Join (cost=168.68..296.49 rows=5408 width=16) (actual time=1.702..3.783 rows=5408 loops=1)
Hash Cond: (pg_dist_placement.shardid = pg_dist_shard.shardid)
-> Seq Scan on pg_dist_placement (cost=0.00..113.60 rows=5408 width=12) (actual time=0.004..0.837 rows=5408 loops=1)
Filter: (shardstate = 1)
-> Hash (cost=101.08..101.08 rows=5408 width=12) (actual time=1.683..1.685 rows=5408 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 318kB
-> Seq Scan on pg_dist_shard (cost=0.00..101.08 rows=5408 width=12) (actual time=0.004..0.824 rows=5408 loops=1)
-> Hash (cost=1.06..1.06 rows=6 width=40) (actual time=0.007..0.008 rows=6 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on pg_dist_node (cost=0.00..1.06 rows=6 width=40) (actual time=0.004..0.006 rows=6 loops=1)
-> Hash (cost=5.69..5.69 rows=169 width=130) (actual time=0.079..0.079 rows=169 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 36kB
-> Seq Scan on pg_dist_partition (cost=0.00..5.69 rows=169 width=130) (actual time=0.011..0.046 rows=169 loops=1)
Planning Time: 0.789 ms
Execution Time: 184.095 ms
```
2021-06-14 13:32:30 +02:00
Onur Tirtir
a209999618
Enforce table opt constraints when using alter_columnar_table_set ( #5029 )
2021-06-08 17:39:16 +03:00
Hanefi Onaldi
5c6069a74a
Do not rely on fk cache when truncating local data ( #5018 )
2021-06-07 11:56:48 +03:00
Marco Slot
9770a1bf00
Merge pull request #5020 from citusdata/disable-dropping-shards
2021-06-04 14:48:11 +02:00
Jelte Fennema
c113cb3198
Merge pull request #5024 from citusdata/cleanup-old-shards-before-rebalance
2021-06-04 14:37:22 +02:00
Jelte Fennema
1a83628195
Use "orphaned shards" naming in more places
...
We were not very consistent in how we named these shards.
2021-06-04 11:39:19 +02:00
Jelte Fennema
3f60e4f394
Add ExecuteCriticalCommandInDifferentTransaction function
...
We use this pattern multiple times throughout the codebase now. Seems
like a good moment to abstract it away.
2021-06-04 11:30:27 +02:00