Commit Graph

4902 Commits (5e6eb2cd97885174c5de89765b97a19f8aa9a458)

Author SHA1 Message Date
Hanefi Onaldi 5e6eb2cd97
Bump Citus version to 10.1.4 2022-02-01 14:12:34 +03:00
Hanefi Onaldi 96b767c96b
Add changelog entries for 10.1.4
(cherry picked from commit 768643644b)
2022-02-01 14:07:59 +03:00
Onur Tirtir 29403f60bc
HAVE_LZ4 -> HAVE_CITUS_LZ4 (#5541)
(cherry picked from commit cc4c83b1e5)
2022-02-01 11:12:11 +03:00
Onder Kalaci c9e01e816d Use a fixed application_name while connecting to remote nodes
Citus heavily relies on application_name, see
`IsCitusInitiatedRemoteBackend()`.

But if the user sets the application name, e.g. via export PGAPPNAME=test_name,
Citus uses that name while connecting to the remote node.

With this commit, we ensure that Citus always connects to
the remote nodes with the "citus" application name.

(cherry picked from commit b26eeaecd3)
2022-01-27 13:00:02 +01:00
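
A minimal C sketch of the kind of check `IsCitusInitiatedRemoteBackend()` implies, assuming a fixed name such as "citus"; the constant and the exact comparison are illustrative assumptions, not the actual Citus implementation.

```c
#include "postgres.h"

/* PostgreSQL's application_name GUC (set from PGAPPNAME or conninfo) */
extern char *application_name;

/* assumed constant: the fixed name Citus uses for its own connections */
#define CITUS_APPLICATION_NAME "citus"

/*
 * Sketch of an IsCitusInitiatedRemoteBackend()-style check: a backend only
 * counts as Citus-initiated if it connected with the fixed name, which is
 * why a user-provided PGAPPNAME must not leak into internal connections.
 */
static bool
IsCitusInitiatedRemoteBackendSketch(void)
{
	return application_name != NULL &&
		   strcmp(application_name, CITUS_APPLICATION_NAME) == 0;
}
```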
Onur Tirtir 854eec7380 Skip deleting options if columnar.options is already dropped (#5458)
Drop extension might cascade to columnar.options before dropping a
columnar table. In that case, we were getting the error below when
opening columnar.options to delete records for the columnar table that
we are about to drop: "ERROR:  could not open relation with OID 0".

I somehow reproduced this bug easily when upgrading pg, which is why I
added the test to after_pg_upgrade_schedule.

(cherry picked from commit 25024b776e)

 Conflicts:
	src/test/regress/after_pg_upgrade_schedule
	src/test/regress/expected/upgrade_columnar_after.out
	src/test/regress/sql/upgrade_columnar_after.sql
2021-11-12 12:36:20 +03:00
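
A hedged C sketch of the guard described in the message above: skip the cleanup when `columnar.options` no longer exists. `ColumnarOptionsRelationId()` is a hypothetical helper, and the real fix may differ.

```c
#include "postgres.h"
#include "access/table.h"
#include "storage/lockdefs.h"

/* hypothetical helper: resolve columnar.options by name, InvalidOid if gone */
extern Oid ColumnarOptionsRelationId(void);

/*
 * Sketch: delete the columnar.options rows for a columnar table that is
 * being dropped, but skip the cleanup entirely if DROP EXTENSION already
 * cascaded to columnar.options. Opening the relation in that state would
 * fail with "could not open relation with OID 0".
 */
static void
DeleteColumnarTableOptionsSketch(Oid columnarRelationId)
{
	Oid optionsOid = ColumnarOptionsRelationId();

	if (!OidIsValid(optionsOid))
	{
		return;
	}

	Relation optionsRelation = table_open(optionsOid, RowExclusiveLock);
	/* ... scan and delete the rows that belong to columnarRelationId ... */
	table_close(optionsRelation, RowExclusiveLock);
}
```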
Sait Talha Nisanci dd53a1ad1f Fix missing from entry
(cherry picked from commit a0e0759f73)
2021-11-08 12:43:22 +03:00
Nils Dijk 4e7d5060f5
reinstate optimization that got unintentionally broken in 366461ccdb (#5418)
DESCRIPTION: Reinstate optimisation for uniform shard interval ranges

A refactor introduced in #4132 made the following change, which left the optimisation in `CalculateUniformHashRangeIndex` unreachable:
366461ccdb (diff-565a339ed3c78bc5a0d4ffeb4e91032150b1dffbeeff59cd3e65981d20b998c7L319-R319)

This PR reinstates the path to the optimisation!
2021-11-05 13:10:18 +01:00
Onur Tirtir c0f35e782f Add CheckCitusVersion() calls to columnarAM (#5308)
Considering all code paths through which we might interact with a columnar table,
add `CheckCitusVersion` calls to tableAM callbacks:
- initializing table scan (`columnar_beginscan` & `columnar_index_fetch_begin`)
- setting a new filenode for a relation (storage initialization or a table rewrite)
- truncating the storage
- inserting tuple (single and multi)

Also add `CheckCitusVersion` call to:
- drop hook (`ColumnarTableDropHook`)
- `alter_columnar_table_set` & `alter_columnar_table_reset` UDFs
(cherry picked from commit f8b1ff7214)

 Conflicts:
	src/backend/columnar/columnar_tableam.c
2021-09-20 17:30:37 +03:00
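
A short C sketch of the pattern: each columnar tableAM callback starts with a `CheckCitusVersion(ERROR)` call. The callback body is elided and the declaration of `CheckCitusVersion` is assumed from Citus's headers.

```c
#include "postgres.h"
#include "access/tableam.h"

/* assumed declaration, as exposed by Citus: errors out on a mismatch
 * between the installed extension version and the loaded binary */
extern bool CheckCitusVersion(int elevel);

static TableScanDesc
columnar_beginscan_sketch(Relation relation, Snapshot snapshot, int nkeys,
                          ScanKey key, ParallelTableScanDesc parallelScan,
                          uint32 flags)
{
	/* guard every entry point into the columnar access method */
	CheckCitusVersion(ERROR);

	/* ... normal scan initialization would follow here ... */
	return NULL; /* placeholder for the real scan descriptor */
}
```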
Gürkan İndibay aa20cc53ba
Missing v in changelogs 2021-09-17 20:13:10 +03:00
Hanefi Onaldi df1b410b02
Bump Citus version to 10.1.3 2021-09-17 14:40:19 +03:00
Hanefi Onaldi b74de6f7ba
Add changelog entries for 10.1.3
(cherry picked from commit c995a55641)
2021-09-17 14:38:01 +03:00
Marco Slot 3bcde91bcb Avoid switch to superuser in worker_merge_files_into_table 2021-09-09 14:11:48 +02:00
Marco Slot 5008a8f61a Add worker_append_table_to_shard permissions tests 2021-09-09 14:11:48 +02:00
Marco Slot 38cf1f291a Perform copy command as regular user in worker_append_table_to_shard 2021-09-09 14:11:48 +02:00
Onur Tirtir 3bd991d215 Not read heaptuple after closing pg_rewrite (#5255)
(cherry picked from commit cc49e63222)
2021-09-08 16:02:30 +03:00
Jelte Fennema 6460fc45e4 Fix crash in shard rebalancer when no distributed tables exist (#5205)
The logging of the number of ignored moves crashed when no distributed
tables existed in a cluster. In passing, this also fixes the logging of
ignored moves to report the correct number when there are multiple
colocation groups and all are rebalanced at the same time.

(cherry picked from commit 481f8be084)
2021-09-01 11:55:31 +02:00
Hanefi Onaldi 280bc704d0
Bump Citus version to 10.1.2 2021-08-16 17:26:07 +03:00
Hanefi Onaldi fb0bb40225
Add changelog entries for 10.1.2
(cherry picked from commit da29a57837)
2021-08-16 17:24:45 +03:00
Onder Kalaci 9f4b6a6cb9 Guard against hard WaitEventSet errors
In short, add wrappers around Postgres' AddWaitEventToSet() and
ModifyWaitEvent().

AddWaitEventToSet()/ModifyWaitEvent*() may throw hard errors, for
example when the underlying socket for a connection has been closed by
the remote server and the OS already reflects that, but Citus has not
yet had a chance to learn about it. In that case, if the replication
factor is >1, Citus can fail over to other nodes to execute the query.
Even if the replication factor is 1, Citus can give much nicer errors.

So CitusAddWaitEventSetToSet()/CitusModifyWaitEvent() simply wrap
AddWaitEventToSet()/ModifyWaitEvent() in a PG_TRY/PG_CATCH block in
order to catch any hard errors, and return this information to the
caller.
2021-08-10 09:36:11 +02:00
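
A minimal C sketch of the wrapper pattern described above, built on PostgreSQL's `PG_TRY`/`PG_CATCH` and `AddWaitEventToSet()`; it illustrates the idea rather than reproducing the actual `CitusAddWaitEventSetToSet()` implementation.

```c
#include "postgres.h"
#include "storage/latch.h"

/*
 * Soft-failing wrapper: a hard error raised by AddWaitEventToSet() is
 * caught and reported back as a boolean, so the caller can fail over to
 * another node or produce a nicer error message.
 */
static bool
AddWaitEventToSetSoft(WaitEventSet *set, uint32 events, pgsocket fd,
					  Latch *latch, void *userData)
{
	volatile bool success = true;
	MemoryContext savedContext = CurrentMemoryContext;

	PG_TRY();
	{
		AddWaitEventToSet(set, events, fd, latch, userData);
	}
	PG_CATCH();
	{
		/* clean up the error state instead of propagating the hard error */
		MemoryContextSwitchTo(savedContext);
		FlushErrorState();
		success = false;
	}
	PG_END_TRY();

	return success;
}
```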
Onder Kalaci 3c5ea1b1f2 Adjust the tests to earlier versions
- Drop PRIMARY KEY for Citus 10 compatibility
- Drop columnar for PG 12
- Do not start/stop metadata sync as stop is not implemented in 10.1
2021-08-06 15:56:17 +02:00
Onder Kalaci 0eb5c144ed Dropped columns do not diverge distribution column for partitioned tables
Before this commit, creating a partition after a DROP COLUMN
on the parent (at a position before the dist. key) was leading
the partition to have the wrong distribution column.
2021-08-06 13:42:40 +02:00
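
A hedged C sketch of the underlying idea: resolve the partition's distribution column by name (PostgreSQL's `get_attname`/`get_attnum`), since dropped parent columns shift attribute numbers. This is an illustration, not the actual fix.

```c
#include "postgres.h"
#include "access/attnum.h"
#include "utils/lsyscache.h"

/*
 * The parent's attribute number for the distribution column may not match
 * the partition's once earlier columns have been dropped on the parent, so
 * look the column up by name on the partition instead of copying the attno.
 */
static AttrNumber
PartitionDistributionAttrNumberSketch(Oid parentRelationId,
									  AttrNumber parentDistributionAttrNumber,
									  Oid partitionRelationId)
{
	char *distributionColumnName =
		get_attname(parentRelationId, parentDistributionAttrNumber, false);

	return get_attnum(partitionRelationId, distributionColumnName);
}
```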
Hanefi Onaldi b3947510b9
Bump Citus version to 10.1.1 2021-08-05 20:50:42 +03:00
Hanefi Onaldi e64e627e31
Add changelog entries for 10.1.1
(cherry picked from commit bc5553b5d1)
2021-08-05 20:49:43 +03:00
naisila c456a933f0 Fix master_update_table_statistics scripts for 9.5 2021-08-03 16:46:05 +03:00
naisila 86f1e181c4 Fix master_update_table_statistics scripts for 9.4 2021-08-03 16:46:05 +03:00
Jelte Fennema 84410da2ba Fix showing target shard size in the rebalance progress monitor (#5136)
The progress monitor wouldn't actually update the size of the shard on
the target node when using "block_writes" as the `shard_transfer_mode`.
The reason for this is that the CREATE TABLE part of the shard creation
would only be committed once all data was moved as well. This caused
our size calculation to always return 0, since the table did not exist
yet in the session that the progress monitor used.

This is fixed by first committing creation of the table, and only then
starting the actual data copy.

The test output changes slightly. Apparently, splitting this up into two
transactions instead of one increases the table size after the copy by
about 40kB. The additional size doesn't grow with the amount of data in
the table (it stays ~40kB per shard), so this small change in test
output is not considered an actual problem.

(cherry picked from commit 2aa67421a7)
2021-07-23 16:49:39 +02:00
Önder Kalacı 106d68fd61
CLUSTER ON deparser should consider schemas (#5122)
(cherry picked from commit 87a51ae552)
2021-07-16 19:16:37 +03:00
Hanefi Onaldi f571abcca6
Add changelog entries for 10.1.0
This patch also moves the section to the top of the changelog

(cherry picked from commit 6b4996f47e)
2021-07-16 18:09:17 +03:00
Sait Talha Nisanci 6fee3068e3
Not include to-be-deleted shards while finding shard placements
Ignore orphaned shards in more places

Only use active shard placements in RouterInsertTaskList

Use IncludingOrphanedPlacements in some more places

Fix comment

Add tests

(cherry picked from commit e7ed16c296)

Conflicts:
	src/backend/distributed/planner/multi_router_planner.c

Quite trivial conflict that was easy to resolve
2021-07-14 19:28:32 +03:00
Jelte Fennema 6f400dab58
Fix check to always allow foreign keys to reference tables (#5073)
With the previous version of this check, we would disallow distributed
tables that did not have a colocationid from having a foreign key to a
reference table. This fixes that, since there's no reason to disallow
it.

(cherry picked from commit e9bfb8eddd)
2021-07-14 19:09:58 +03:00
Jelte Fennema 90da684f56
Only allow moves of shards of distributed tables (#5072)
Moving shards of reference tables was possible in at least one case:
```sql
select citus_disable_node('localhost', 9702);
create table r(x int);
select create_reference_table('r');
set citus.replicate_reference_tables_on_activate = off;
select citus_activate_node('localhost', 9702);
select citus_move_shard_placement(102008, 'localhost', 9701, 'localhost', 9702);
```

This would then remove the reference table shard on the source, causing
all kinds of issues. This fixes that by disallowing all shard moves
except for shards of distributed tables.

Co-authored-by: Onur Tirtir <onurcantirtir@gmail.com>
(cherry picked from commit d1d386a904)
2021-07-14 19:07:45 +03:00
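
A short C sketch of the kind of guard this implies; `IsCitusTableType` and `DISTRIBUTED_TABLE` are assumed from Citus's metadata API and declared locally here for illustration only.

```c
#include "postgres.h"

/* assumed Citus metadata helpers; the real declarations live in Citus headers */
typedef enum { DISTRIBUTED_TABLE /* ... other table types ... */ } CitusTableType;
extern bool IsCitusTableType(Oid relationId, CitusTableType tableType);

/* reject shard moves unless the shard belongs to a distributed table */
static void
ErrorIfMoveNotSupportedSketch(Oid relationId)
{
	if (!IsCitusTableType(relationId, DISTRIBUTED_TABLE))
	{
		ereport(ERROR, (errmsg("only shards of distributed tables can be moved")));
	}
}
```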
Jelte Fennema 6986ac2f17
Avoid two race conditions in the rebalance progress monitor (#5050)
The first and main issue was that we were putting absolute pointers into
shared memory for the `steps` field of the `ProgressMonitorData`. This
pointer was being overwritten every time a process requested the monitor
steps, which is the only reason why this even worked in the first place.

To quote a part of a relevant stack overflow answer:

> First of all, putting absolute pointers in shared memory segments is
> terrible terrible idea - those pointers would only be valid in the
> process that filled in their values. Shared memory segments are not
> guaranteed to attach at the same virtual address in every process.
> On the contrary - they attach where the system deems it possible when
> `shmaddr == NULL` is specified on call to `shmat()`

Source: https://stackoverflow.com/a/10781921/2570866

In this case, a race condition occurred when a second process overwrote
the pointer in between the first process's write and its subsequent read
of the steps field.

This issue is fixed by not storing the pointer in shared memory anymore.
Instead, we now calculate its position every time we need it.

The second race condition I have not been able to trigger, but I found
it while investigating this one. The issue was that we published the
handle of the shared memory segment before we initialized the data in
the steps. This means that during initialization of the data, a call to
`get_rebalance_progress()` could read partial data in an unsynchronized
manner.

(cherry picked from commit ca00b63272)
2021-07-14 19:06:32 +03:00
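
A small, self-contained C sketch of the fix's core idea: compute the steps array's address from the shared-memory header on every access instead of storing an absolute pointer inside the segment. The struct names are hypothetical stand-ins for the monitor data.

```c
/* hypothetical stand-ins for the monitor layout in the shared segment */
typedef struct ProgressMonitorHeader
{
	int stepCount;
	/* the steps array is laid out immediately after the header */
} ProgressMonitorHeader;

typedef struct ProgressStep
{
	long sourceShardSize;
	long targetShardSize;
} ProgressStep;

/*
 * Compute the steps pointer from the mapped header on every access.
 * An absolute pointer stored inside the segment would only be valid in
 * the process that wrote it, because each process may map the segment
 * at a different virtual address.
 */
static ProgressStep *
MonitorSteps(ProgressMonitorHeader *header)
{
	return (ProgressStep *) ((char *) header + sizeof(ProgressMonitorHeader));
}
```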
Marco Slot 998b044fdc
Fix a bug that causes worker_create_or_alter_role to crash with NULL input
(cherry picked from commit a7e4d6c94a)
2021-07-14 13:56:43 +03:00
Naisila Puka 1507f32282
Fix nextval('seq_name'::text) bug, and schema for seq tests (#5046)
(cherry picked from commit e26b29d3bb)
2021-07-14 13:55:48 +03:00
Hanefi Onaldi 20e500f96b
Remove public schema dependency for 10.1 upgrades
This commit contains a subset of the changes that should be cherry
picked to 10.1 releases.

(cherry picked from commit efc5776451)
2021-07-09 12:12:19 +03:00
Hanefi Onaldi 60424534ef
Remove public schema dependency for 10.0 upgrades
This commit contains a subset of the changes that should be cherry
picked to 10.0 releases.

(cherry picked from commit 8e9cc229ff)
2021-07-09 12:12:01 +03:00
Nils Dijk fefaed37e7 fix 10.1-1 upgrade script to adhere to idempotency 2021-07-08 12:25:32 +02:00
Nils Dijk 5adc151e7c fix 9.5-2 upgrade script to adhere to idempotency 2021-07-08 12:25:32 +02:00
Nils Dijk 6192dc2bff Add test for idempotency of citus_prepare_pg_upgrade 2021-07-08 12:25:32 +02:00
Onur Tirtir 3f6e903722 Fix lower boundary calculation when pruning range dist table shards (#5082)
This happens only when we have a "<" or "<=" filter on the distribution
column of a range-distributed table and that filter falls in between
two shards.

When the filter falls in between two shards:

  If the filter is ">" or ">=", then UpperShardBoundary was
  returning "upperBoundIndex - 1", where upperBoundIndex is the
  exclusive shard index used during the binary search.
  This is expected since upperBoundIndex is an exclusive
  index.

  If the filter is "<" or "<=", then LowerShardBoundary was
  returning "lowerBoundIndex + 1", where lowerBoundIndex is the
  inclusive shard index used during the binary search.
  However, since lowerBoundIndex is an inclusive index, we
  should just return lowerBoundIndex instead of doing "+ 1".
  Before this commit, we were missing the leftmost shard in
  such queries.

* Remove useless conditional branches

The branch that we delete from UpperShardBoundary was obviously useless.

The other one in LowerShardBoundary became useless after we removed the
"+ 1" there.

This is indeed further proof of what we are fixing with this PR, and how.

* Improve comments and add more

* Add some tests for upper bound calculation too

(cherry picked from commit b118d4188e)
2021-07-07 13:14:27 +03:00
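
A self-contained C sketch of the inclusive-versus-exclusive boundary distinction the message above hinges on; this is a generic binary search over sorted shard minimum values, not the actual `LowerShardBoundary`/`UpperShardBoundary` code.

```c
#include <stdio.h>

/*
 * Return the inclusive index of the last shard whose minimum value is
 * <= filterValue. The loop naturally ends with `low` as an exclusive
 * bound (first shard that does not qualify); converting it to the
 * inclusive answer means subtracting one exactly once. Applying an
 * extra "+ 1"/"- 1" to an index that is already inclusive (or already
 * exclusive) is the off-by-one class of bug the commit above fixes.
 */
static int
LastShardAtOrBelow(const int *shardMinValues, int shardCount, int filterValue)
{
	int low = 0;
	int high = shardCount;   /* exclusive upper bound */

	while (low < high)
	{
		int mid = low + (high - low) / 2;

		if (shardMinValues[mid] <= filterValue)
		{
			low = mid + 1;   /* shard mid still qualifies, look right */
		}
		else
		{
			high = mid;      /* shard mid starts past the filter value */
		}
	}

	return low - 1;          /* inclusive index; -1 if no shard qualifies */
}

int
main(void)
{
	/* shard minimum values of a range-distributed table */
	int shardMinValues[] = { 0, 10, 20, 30 };

	/* filter value 15 falls in the gap between the shards at 10 and 20 */
	printf("%d\n", LastShardAtOrBelow(shardMinValues, 4, 15)); /* prints 1 */

	return 0;
}
```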
Marco Slot 9efd8e05d6
Fix PG upgrade scripts for 10.1 2021-07-06 16:08:47 +02:00
Marco Slot 210bcdcc08
Fix PG upgrade scripts for 10.0 2021-07-06 16:08:46 +02:00
Marco Slot d3417a5e34
Fix PG upgrade scripts for 9.5 2021-07-06 16:08:46 +02:00
Marco Slot e2330e8f87
Fix PG upgrade scripts for 9.4 2021-07-06 16:08:46 +02:00
Onder Kalaci 690dab316a fix regression tests to avoid any conflicts in enterprise 2021-06-22 08:49:48 +03:00
Onder Kalaci be6e372b27 Deparse/parse the local cached queries
With local query caching, we try to avoid the deparse/parse stages, as
the operation is too costly.

However, we can do the deparse/parse operations once per cached query,
right before we put the plan into the cache. With that, we avoid edge
cases like (4239) or (5038).

In a sense, we make local plan caching behave similarly to non-cached
local/remote queries, by forcing the query to be deparsed once.

(cherry picked from commit 69ca943e58)
2021-06-22 08:24:15 +03:00
Onder Kalaci 3d6bc315ab Get ready for "Improve index backed constraint creation for online rebalancer"
See:
https://github.com/citusdata/citus-enterprise/issues/616
(cherry picked from commit bc09288651)
2021-06-22 08:23:57 +03:00
Ahmet Gedemenli 4a904e070d
Set table size to zero if no size is read (#5049) (#5056)
* Set table size to zero if no size is read

* Add comment to relation size bug fix

(cherry picked from commit 5115100db0)
2021-06-21 14:36:13 +03:00
Halil Ozan Akgul f8e06fb1ed Bump citus version to 10.1.0 2021-06-15 18:50:09 +03:00
Halil Ozan Akgül 72eb37095b
Merge pull request #5043 from citusdata/citus-10.1.0-changelog-1623733267
Update Changelog for 10.1.0
2021-06-15 17:21:19 +03:00