Commit Graph

4895 Commits (76ae5dd0dbf35bef0d27a7296d11d1467a8e146c)

Author SHA1 Message Date
Onder Kalaci 76ae5dd0db Improve regression tests for prepared statements
With a recent commit (644b266dee), the behaviour of prepared statements
for local cached plans has slightly changed.

Now, Citus caches the plans when they are re-used. This means local plan
caching is triggered on the 7th execution, and the 8th execution is the
first time the plan is used from the cache.

So, the tests are improved to cover the 8th execution.
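A minimal sketch of the execution pattern the tests now cover, using a
hypothetical distributed table and prepared statement (the actual regression
tests use different names and more statements):

```
-- hypothetical example: local plan caching kicks in on repeated executions
PREPARE count_rows (int) AS SELECT count(*) FROM test_table WHERE key = $1;
EXECUTE count_rows(1);  -- executions 1 through 6 use freshly planned queries
EXECUTE count_rows(1);
EXECUTE count_rows(1);
EXECUTE count_rows(1);
EXECUTE count_rows(1);
EXECUTE count_rows(1);
EXECUTE count_rows(1);  -- 7th execution: the local plan is cached here
EXECUTE count_rows(1);  -- 8th execution: first use of the cached plan
```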
2021-06-21 13:34:44 +03:00
Önder Kalacı 4e632d9da3
Merge pull request #5053 from citusdata/fix_dropped_cached_plans
Deparse/parse the local cached queries
2021-06-21 13:33:36 +03:00
Onder Kalaci 69ca943e58 Deparse/parse the local cached queries
With local query caching, we try to avoid the deparse/parse stages, as the
operation is too costly.

However, we can do the deparse/parse operations once per cached query, right
before we put the plan into the cache. With that, we avoid edge cases
like (#4239) or (#5038).

In a sense, we are making local plan caching behave similarly to non-cached
local/remote queries, by forcing the query to be deparsed once.
2021-06-21 12:24:29 +03:00
Onur Tirtir 82e58c91f3
Use correct test schedule name in columnar vg test target (#5027) 2021-06-18 11:31:16 +03:00
Onur Tirtir b0ca823b4d
Merge pull request #5052 from citusdata/columnar-index
Merge columnar metapage changes and basic index support
2021-06-17 14:55:40 +03:00
Onur Tirtir 6215a3aa93 Merge remote-tracking branch 'origin/master' into columnar-index 2021-06-17 14:31:12 +03:00
Hanefi Onaldi c4f50185e0
Ignore pl/pgsql line numbers in regression outputs (#4411) 2021-06-17 14:11:17 +03:00
SaitTalhaNisanci 3edef11a9f
Fix a test in hyperscale schedule (#5042) 2021-06-17 13:40:05 +03:00
Önder Kalacı e56f5909c9
Merge pull request #5054 from citusdata/base_for_enterprise_16_june
Get ready for Improve index backed constraint creation for online rebalancer
2021-06-17 13:09:28 +03:00
Onder Kalaci bc09288651 Get ready for Improve index backed constraint creation for online rebalancer
See:
https://github.com/citusdata/citus-enterprise/issues/616
2021-06-17 13:05:56 +03:00
Onur Tirtir 681f700321 Fix first_row_number test for stripe_row_limit enforcement 2021-06-17 10:51:43 +03:00
Onur Tirtir 18fe0311c0 Move rest of the schema changes to 10.2-1 2021-06-16 20:43:41 +03:00
Onur Tirtir 07117b0454 Move sql files for upgrade/downgrade_columnar_storage to 10.2-1 2021-06-16 20:40:26 +03:00
Onur Tirtir 3d11c0f9ef Merge remote-tracking branch 'origin/master' into columnar-index
Conflicts:
	src/test/regress/expected/columnar_empty.out
	src/test/regress/expected/multi_extension.out
2021-06-16 20:23:50 +03:00
Onur Tirtir a2efe59e2f
Merge pull request #4950 from citusdata/col/index-support
Add basic index support for columnar tables.

This PR brings support for the following index/constraint types (see the example after this list):
* btree indexes
* primary keys
* unique constraints / indexes
* exclusion constraints
* hash indexes
* partial indexes
* indexes including additional columns (INCLUDE syntax), even if we don't properly support index-only scans
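A minimal sketch of the newly supported index types on a columnar table,
using a hypothetical `events` table (the regression tests cover more
variations, including exclusion constraints):

```
-- hypothetical columnar table with a few of the supported index types
CREATE TABLE events (event_id bigint NOT NULL, tenant_id int, payload text)
  USING columnar;

ALTER TABLE events ADD PRIMARY KEY (event_id);                             -- primary key
CREATE INDEX events_btree_idx ON events (tenant_id);                       -- btree index
CREATE UNIQUE INDEX events_uniq_idx ON events (tenant_id, event_id);       -- unique index
CREATE INDEX events_hash_idx ON events USING hash (tenant_id);             -- hash index
CREATE INDEX events_partial_idx ON events (event_id) WHERE tenant_id > 0;  -- partial index
CREATE INDEX events_incl_idx ON events (event_id) INCLUDE (payload);       -- INCLUDE syntax
```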
2021-06-16 20:11:54 +03:00
Onur Tirtir b6b969971a Error out for CLUSTER commands on columnar tables 2021-06-16 20:06:33 +03:00
Onur Tirtir 5adab2a3ac Report progress when building index on columnar tables 2021-06-16 20:06:33 +03:00
Onur Tirtir 9b4dc2f804 Prevent using parallel scan for columnar index builds 2021-06-16 19:59:32 +03:00
Onur Tirtir 82ea1b5daf Do not remove all paths, keep IndexPaths 2021-06-16 19:59:32 +03:00
Onur Tirtir 1af50e98b3 Fix a comment in ColumnarMetapageRead 2021-06-16 19:59:32 +03:00
Onur Tirtir 10a762aa88 Implement columnar index support functions 2021-06-16 19:59:32 +03:00
Halil Ozan Akgül 27c7d28f7f
Merge pull request #5045 from citusdata/master-update-version-9e0729f4-4406-43ed-9942-d27aa5c398ec
Bump Citus to 10.2devel
2021-06-16 19:26:41 +03:00
Halil Ozan Akgul db03afe91e Bump citus version to 10.2devel 2021-06-16 17:44:05 +03:00
Ahmet Gedemenli 5115100db0
Set table size to zero if no size is read (#5049)
* Set table size to zero if no size is read

* Add comment to relation size bug fix
2021-06-16 17:23:19 +03:00
SaitTalhaNisanci 2511c4c045
Merge pull request #5025 from citusdata/split_multi
Split multi schedule
2021-06-16 15:30:15 +03:00
SaitTalhaNisanci 1784c7ef85
Merge branch 'master' into split_multi 2021-06-16 15:26:09 +03:00
Marco Slot 9797857967
Merge pull request #5048 from citusdata/marcocitus/fix-wcoar-null-input 2021-06-16 13:40:51 +02:00
Sait Talha Nisanci c7d04e7f40 swap multi_schedule and multi_schedule_1 2021-06-16 14:40:14 +03:00
Sait Talha Nisanci c55e44a4af Drop table if exists 2021-06-16 14:19:59 +03:00
Sait Talha Nisanci fc89487e93 Split check multi 2021-06-16 14:19:59 +03:00
Naisila Puka e26b29d3bb
Fix nextval('seq_name'::text) bug, and schema for seq tests (#5046) 2021-06-16 13:58:49 +03:00
Marco Slot a7e4d6c94a Fix a bug that causes worker_create_or_alter_role to crash with NULL input 2021-06-15 20:07:08 +02:00
Halil Ozan Akgül 72eb37095b
Merge pull request #5043 from citusdata/citus-10.1.0-changelog-1623733267
Update Changelog for 10.1.0
2021-06-15 17:21:19 +03:00
Halil Ozan Akgul 91db015051 Add changelog entry for 10.1.0 2021-06-15 14:28:15 +03:00
Jelte Fennema 4c3934272f
Improve performance of citus_shards (#5036)
We were effectively joining on a calculated column because of our calls
to `shard_name`. This caused a really bad plan to be generated. In my
specific case it was taking ~18 seconds to show the output of
citus_shards. It had this explain plan:

```
                                                                                                       QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Subquery Scan on citus_shards  (cost=18369.74..18437.34 rows=5408 width=124) (actual time=18277.461..18278.509 rows=5408 loops=1)
   ->  Sort  (cost=18369.74..18383.26 rows=5408 width=156) (actual time=18277.457..18277.726 rows=5408 loops=1)
         Sort Key: ((pg_dist_shard.logicalrelid)::text), pg_dist_shard.shardid
         Sort Method: quicksort  Memory: 1629kB
         CTE shard_sizes
           ->  Function Scan on citus_shard_sizes  (cost=0.00..10.00 rows=1000 width=40) (actual time=71.137..71.934 rows=5413 loops=1)
         ->  Hash Join  (cost=177.62..18024.42 rows=5408 width=156) (actual time=77.985..18257.237 rows=5408 loops=1)
               Hash Cond: ((pg_dist_shard.logicalrelid)::oid = (pg_dist_partition.logicalrelid)::oid)
               ->  Hash Join  (cost=169.81..371.98 rows=5408 width=48) (actual time=1.415..13.166 rows=5408 loops=1)
                     Hash Cond: (pg_dist_placement.groupid = pg_dist_node.groupid)
                     ->  Hash Join  (cost=168.68..296.49 rows=5408 width=16) (actual time=1.403..10.011 rows=5408 loops=1)
                           Hash Cond: (pg_dist_placement.shardid = pg_dist_shard.shardid)
                           ->  Seq Scan on pg_dist_placement  (cost=0.00..113.60 rows=5408 width=12) (actual time=0.004..3.684 rows=5408 loops=1)
                                 Filter: (shardstate = 1)
                           ->  Hash  (cost=101.08..101.08 rows=5408 width=12) (actual time=1.385..1.386 rows=5408 loops=1)
                                 Buckets: 8192  Batches: 1  Memory Usage: 318kB
                                 ->  Seq Scan on pg_dist_shard  (cost=0.00..101.08 rows=5408 width=12) (actual time=0.003..0.688 rows=5408 loops=1)
                     ->  Hash  (cost=1.06..1.06 rows=6 width=40) (actual time=0.007..0.007 rows=6 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 9kB
                           ->  Seq Scan on pg_dist_node  (cost=0.00..1.06 rows=6 width=40) (actual time=0.004..0.005 rows=6 loops=1)
               ->  Hash  (cost=5.69..5.69 rows=169 width=130) (actual time=0.070..0.071 rows=169 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 36kB
                     ->  Seq Scan on pg_dist_partition  (cost=0.00..5.69 rows=169 width=130) (actual time=0.009..0.041 rows=169 loops=1)
               SubPlan 2
                 ->  Limit  (cost=0.00..3.25 rows=1 width=8) (actual time=3.370..3.370 rows=1 loops=5408)
                       ->  CTE Scan on shard_sizes  (cost=0.00..32.50 rows=10 width=8) (actual time=3.369..3.369 rows=1 loops=5408)
                             Filter: ((shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid) = table_name) OR (('public.'::text || shard_name(pg_dist_shard.logicalrelid, pg_dist_shard.shardid)) = table_name))
                             Rows Removed by Filter: 2707
 Planning Time: 0.705 ms
 Execution Time: 18278.877 ms
```

With the changes, it only takes 180ms to show the same output:
```
                                                                              QUERY PLAN
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Sort  (cost=904.59..918.11 rows=5408 width=156) (actual time=182.508..182.960 rows=5408 loops=1)
   Sort Key: ((pg_dist_shard.logicalrelid)::text), pg_dist_shard.shardid
   Sort Method: quicksort  Memory: 1629kB
   ->  Hash Join  (cost=418.03..569.27 rows=5408 width=156) (actual time=136.333..146.591 rows=5408 loops=1)
         Hash Cond: ((pg_dist_shard.logicalrelid)::oid = (pg_dist_partition.logicalrelid)::oid)
         ->  Hash Join  (cost=410.22..492.83 rows=5408 width=56) (actual time=136.231..140.132 rows=5408 loops=1)
               Hash Cond: (pg_dist_placement.groupid = pg_dist_node.groupid)
               ->  Hash Right Join  (cost=409.09..417.34 rows=5408 width=24) (actual time=136.218..138.890 rows=5408 loops=1)
                     Hash Cond: ((((regexp_matches(citus_shard_sizes.table_name, '_(\d+)$'::text))[1])::integer) = pg_dist_shard.shardid)
                     ->  HashAggregate  (cost=45.00..48.50 rows=200 width=12) (actual time=131.609..132.481 rows=5408 loops=1)
                           Group Key: ((regexp_matches(citus_shard_sizes.table_name, '_(\d+)$'::text))[1])::integer
                           Batches: 1  Memory Usage: 737kB
                           ->  Result  (cost=0.00..40.00 rows=1000 width=12) (actual time=107.786..129.831 rows=5408 loops=1)
                                 ->  ProjectSet  (cost=0.00..22.50 rows=1000 width=40) (actual time=107.780..128.492 rows=5408 loops=1)
                                       ->  Function Scan on citus_shard_sizes  (cost=0.00..10.00 rows=1000 width=40) (actual time=107.746..108.107 rows=5414 loops=1)
                     ->  Hash  (cost=296.49..296.49 rows=5408 width=16) (actual time=4.595..4.598 rows=5408 loops=1)
                           Buckets: 8192  Batches: 1  Memory Usage: 339kB
                           ->  Hash Join  (cost=168.68..296.49 rows=5408 width=16) (actual time=1.702..3.783 rows=5408 loops=1)
                                 Hash Cond: (pg_dist_placement.shardid = pg_dist_shard.shardid)
                                 ->  Seq Scan on pg_dist_placement  (cost=0.00..113.60 rows=5408 width=12) (actual time=0.004..0.837 rows=5408 loops=1)
                                       Filter: (shardstate = 1)
                                 ->  Hash  (cost=101.08..101.08 rows=5408 width=12) (actual time=1.683..1.685 rows=5408 loops=1)
                                       Buckets: 8192  Batches: 1  Memory Usage: 318kB
                                       ->  Seq Scan on pg_dist_shard  (cost=0.00..101.08 rows=5408 width=12) (actual time=0.004..0.824 rows=5408 loops=1)
               ->  Hash  (cost=1.06..1.06 rows=6 width=40) (actual time=0.007..0.008 rows=6 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 9kB
                     ->  Seq Scan on pg_dist_node  (cost=0.00..1.06 rows=6 width=40) (actual time=0.004..0.006 rows=6 loops=1)
         ->  Hash  (cost=5.69..5.69 rows=169 width=130) (actual time=0.079..0.079 rows=169 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 36kB
               ->  Seq Scan on pg_dist_partition  (cost=0.00..5.69 rows=169 width=130) (actual time=0.011..0.046 rows=169 loops=1)
 Planning Time: 0.789 ms
 Execution Time: 184.095 ms
```
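A simplified, hedged sketch of the rewrite idea (these are not the actual
citus_shards view definitions): instead of comparing `shard_name(logicalrelid, shardid)`
against the text output of `citus_shard_sizes()` inside the join condition,
the numeric shard id is extracted from that output once and joined on directly:

```
-- before (simplified): the join condition calls shard_name() for every row,
-- so the join effectively matches on a calculated text column
SELECT s.shardid, sizes.size
FROM pg_dist_shard s
     JOIN citus_shard_sizes() sizes
       ON shard_name(s.logicalrelid, s.shardid) = sizes.table_name;

-- after (simplified): extract the numeric shard id from the name once,
-- then join on the plain integer column
SELECT s.shardid, sizes.size
FROM pg_dist_shard s
     JOIN (SELECT ((regexp_matches(table_name, '_(\d+)$'))[1])::int AS shardid,
                  size
           FROM citus_shard_sizes()) sizes USING (shardid);
```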
2021-06-14 13:32:30 +02:00
Onur Tirtir a209999618
Enforce table opt constraints when using alter_columnar_table_set (#5029) 2021-06-08 17:39:16 +03:00
Hanefi Onaldi 5c6069a74a
Do not rely on fk cache when truncating local data (#5018) 2021-06-07 11:56:48 +03:00
Marco Slot 9770a1bf00
Merge pull request #5020 from citusdata/disable-dropping-shards 2021-06-04 14:48:11 +02:00
Jelte Fennema c113cb3198
Merge pull request #5024 from citusdata/cleanup-old-shards-before-rebalance 2021-06-04 14:37:22 +02:00
Jelte Fennema 1a83628195 Use "orphaned shards" naming in more places
We were not very consistent in how we named these shards.
2021-06-04 11:39:19 +02:00
Jelte Fennema 3f60e4f394 Add ExecuteCriticalCommandInDifferentTransaction function
We use this pattern multiple times throughout the codebase now. Seems
like a good moment to abstract it away.
2021-06-04 11:30:27 +02:00
Jelte Fennema 503c70b619 Cleanup orphaned shards before moving when necessary
A shard move would fail if there was an orphaned version of the shard on
the target node. With this change, before actually failing, we try to clean
up any orphaned shards to see if that fixes the issue.
2021-06-04 11:23:07 +02:00
Jelte Fennema 280b9ae018 Cleanup orphaned shards at the start of a rebalance
In case the background daemon hasn't cleaned up shards yet, we do this
manually at the start of a rebalance.
2021-06-04 11:23:07 +02:00
Jelte Fennema 7015049ea5 Add citus_cleanup_orphaned_shards UDF
Sometimes the background daemon doesn't clean up orphaned shards quickly
enough. It's useful to have a UDF to trigger this removal when needed.
We already had a UDF like this, but it was only used during testing. This
exposes that UDF to users. As a safety measure it cannot be run in a
transaction, because that would cause the background daemon to stop
cleaning up shards while this transaction is running.
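A minimal usage sketch, assuming the UDF is exposed as a procedure that is
invoked on its own, outside an explicit transaction block:

```
-- trigger removal of orphaned shards immediately instead of waiting
-- for the background daemon
CALL citus_cleanup_orphaned_shards();
```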
2021-06-04 11:23:07 +02:00
Naisila Puka 0f37ab5f85
Fixes column default coming from a sequence (#4914)
* Add user-defined sequence support for MX

* Remove default part when propagating to workers

* Fix ALTER TABLE with sequences for mx tables

* Clean up and add tests

* Propagate DROP SEQUENCE

* Removing function parts

* Propagate ALTER SEQUENCE

* Change sequence type before propagation & cleanup

* Revert "Propagate ALTER SEQUENCE"

This reverts commit 2bef64c5a29f4e7224a7f43b43b88e0133c65159.

* Ensure sequence is not used in a different column with different type

* Insert select tests

* Propagate rename sequence stmt

* Fix issue with group ID cache invalidation

* Add ALTER TABLE ALTER COLUMN TYPE .. precaution

* Fix attnum inconsistency and add various tests

* Add ALTER SEQUENCE precaution

* Remove Citus hook

* More tests

Co-authored-by: Marco Slot <marco.slot@gmail.com>
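A minimal sketch of the kind of scenario these changes target, using a
hypothetical table whose column default comes from a user-defined sequence
(names and details are illustrative):

```
-- hypothetical example: a distributed table with a sequence-backed default
CREATE SEQUENCE user_id_seq;
CREATE TABLE users (
    tenant_id int,
    user_id bigint DEFAULT nextval('user_id_seq'),
    name text
);
SELECT create_distributed_table('users', 'tenant_id');

-- the default keeps working after distribution
INSERT INTO users (tenant_id, name) VALUES (1, 'alice');
```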
2021-06-03 23:02:09 +03:00
Marco Slot ec9664c5a4
Merge pull request #5021 from citusdata/marcocitus/fix-remove-node 2021-06-03 11:27:57 +02:00
Hanefi Onaldi 056005db4d
Improve tests for truncating local data (#5012)
We have slightly different behavior when using the truncate_local_data_after_distributing_table UDF on metadata-synced clusters. This PR aims to add tests to cover such cases.

We allow distributing tables with data that have foreign keys to reference tables only on metadata-synced clusters. This is the reason why some of my earlier tests failed when run on a single-node Citus cluster.
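A hedged sketch of the scenario the new tests cover, using hypothetical
table names; the metadata-synced requirement is the behaviour difference
described above:

```
CREATE TABLE countries (id int PRIMARY KEY);
SELECT create_reference_table('countries');
INSERT INTO countries VALUES (1);

CREATE TABLE users (id int, country_id int REFERENCES countries (id));
INSERT INTO users VALUES (1, 1);

-- distributing a table that already has data and a foreign key to a
-- reference table is only allowed on metadata-synced clusters
SELECT create_distributed_table('users', 'id');

-- drop the now-redundant local copy of the rows on the coordinator
SELECT truncate_local_data_after_distributing_table('users');
```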
2021-06-03 08:51:32 +03:00
Nils Dijk 5f76b93eac
fix link to codecov report from badge (#5022)
links the codecov badge to the codecov report instead of the badge
2021-06-02 16:48:33 +02:00
Marco Slot e81d25a7be Refactor RelationIsAKnownShard to remove onlySearchPath argument 2021-06-02 14:30:27 +02:00
Ahmet Gedemenli 089ef35940 Disable dropping and truncating known shards
Add test for disabling dropping and truncating known shards
2021-06-02 14:30:27 +02:00